% https://arxiv.org/abs/2001.01304
\title{Approximation of PDE eigenvalue problems involving parameter dependent matrices}
\begin{abstract}
We discuss the solution of eigenvalue problems associated with partial differential equations that can be written in the generalized form $\m{A}x=\lambda\m{B}x$, where the matrices $\m{A}$ and/or $\m{B}$ may depend on a scalar parameter. Parameter dependent matrices occur frequently when stabilized formulations are used for the numerical approximation of partial differential equations. With the help of classical numerical examples we show that the presence of one (or both) parameters can produce unexpected results.
\end{abstract}
\section{Introduction}
\label{se:intro}
Several schemes for the approximation of eigenvalue problems arising from
partial differential equations lead to the algebraic form: find
$\lambda\in\mathbb{R}$ and $x\in\mathbb{R}^n$ with $x\ne 0$ such that
\begin{equation}
\label{eq:eig}
\m{A}x=\lambda\m{B}x,
\end{equation}
where $\m{A}$ and $\m{B}$ are matrices in $\mathbb{R}^{n\times n}$.
We consider the case when the matrices $\m{A}$ and $\m{B}$ are symmetric and
positive semidefinite and may depend on a parameter. This is a typical
situation found in applications where elliptic partial differential equations
are approximated by schemes that require suitable parameters to be tuned (for
consistency and/or stability reasons). In this paper we discuss in particular
applications arising from the use of the Virtual Element Method (VEM),
see~\cite{MRR,BMRR,GV,MRV,MV,GMV,CGMMV}, where suitable parameters have to
be chosen for the correct approximation.
Similar situations are present, for instance, when a parameter-dependent
stabilization is used for the approximation of discontinuous Galerkin
formulations and when a penalty term is added to the discretization of
the eigenvalue problem associated with Maxwell's
equations~\cite{CoDaMax,CoDaDurham,CoDareg,bfg,2006,WarburtonEmbree,2010,BoGue,BaCo}.
In general, it may not be immediate to describe how the matrices $\m{A}$ and
$\m{B}$ depend on the given parameters. For simplicity, we
consider the case when the dependence is linear: under suitable assumptions
it is easy to discuss how the computed spectrum varies with respect to the
parameters.
The description of the spectrum in the linear case is not surprising and is
well known to a broad scientific
community~\cite{ElsnerSun,StewartSun,MR1066108,LiStewart,greenbaum2019firstorder}.
Nevertheless, the main focus of perturbation theory for eigenvalues and
eigenvectors is usually centered on the asymptotic behavior when the
parameters tend to zero. In our case, the asymptotic parameter is usually the
mesh size $h$ and we are interested in the convergence when $h$ goes to zero,
that is when the size of the involved matrices tends to infinity.
The presence of additional parameters makes the convergence more difficult to
describe and can produce unexpected results in the pre-asymptotic regime. For
this reason, we start by recalling how the spectrum of problem~\eqref{eq:eig}
is influenced by the parameter, without considering $h$, and we translate
those results to an example of interest in Section~\ref{se:subVEM} where the
discretization parameter $h$ is considered as well.
We assume that the matrices $\m{A}$ and $\m{B}$
satisfy the following condition for $\m{C}=\m{A},\m{B}$.
\begin{ass}
\label{ass:C}
The matrix $\m{C}$ can be split into the sum
\begin{equation}
\label{eq:C}
\m{C}=\m{C}_1+\gamma\m{C}_2,
\end{equation}
where $\gamma$ is a nonnegative real number and $\m{C}_1$ and $\m{C}_2$ are
symmetric.
The matrices $\m{C}_1$ and $\m{C}_2$ satisfy the following properties:
\begin{enumerate}[a)]
\item $\m{C}_1$ is positive semidefinite with kernel $K_{\m{C}_1}$;
\item $\m{C}_2$ is positive semidefinite and positive definite on $K_{\m{C}_1}$;
\item $\m{C}_2$ vanishes on $K_{\m{C}_1}^\perp$, the orthogonal complement of
$K_{\m{C}_1}$ in $\mathbb{R}^n$.
\end{enumerate}
\end{ass}
In~\cref{se:param} we describe the spectrum of~\eqref{eq:eig} as a
function of the parameters, in various situations that mimic the behavior of
matrices $\m{A}$ and $\m{B}$ originating from several discretization schemes.
\Cref{se:VEM}, which is the core of this paper, discusses the influence
of the parameters on the VEM approximation of eigenvalue problems. Several
numerical examples complete the paper, showing that the parameters have to be
carefully tuned and that wrong choices can produce useless results.
\section{Parametric algebraic eigenvalue problem}
\label{se:param}
Given two symmetric and positive semidefinite matrices $\m{A}$ and $\m{B}$
that can be written as
\begin{equation}
\label{eq:A}
\m{A}=\Au+\alpha\Ad
\end{equation}
and
\begin{equation}
\label{eq:B}
\m{B}=\Bu+\beta\Bd,
\end{equation}
with nonnegative parameters $\alpha$ and $\beta$, we consider the
eigensolutions to the generalized problem~\eqref{eq:eig}.
We assume that the splitting of the matrices $\m{A}$ and $\m{B}$ is obtained
with symmetric matrices and satisfies~\cref{ass:C} for
$\m{C}_1=\Au,\Bu$ and $\m{C}_2=\Ad,\Bd$. Moreover we denote by $\NA$ and $\NB$
the dimension of $\KA$ and $\KB$, respectively.
\begin{remark}
\label{re:geneig}
Problem~\eqref{eq:eig} has $n$ eigenvalues if and only if
$\mathrm{rank}\m{B}=n$, see~\cite{GVL}. If $\m{B}$ is singular the spectrum can
be finite, empty, or infinite (if $\m{A}$ is singular too). If $\m{A}$ is
nonsingular, usually one can
circumvent this difficulty by computing the eigenvalues of $\m{B} x=\mu\m{A}x$
and setting $\lambda=1/\mu$. The kernel of $\m{B}$ is the eigenspace associated
with the vanishing eigenvalue with multiplicity $m$, and
the original problem has exactly $m$ eigenvalues conventionally set to
$\infty$.
\end{remark}
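The workaround described in the remark can be sketched in a few lines; here we use Python with SciPy instead of the \textsc{Matlab} routine \texttt{eig} used later in the paper, and a small diagonal example chosen for illustration only:

```python
import numpy as np
from scipy.linalg import eigh

# A nonsingular, B singular: solve B x = mu A x and set lambda = 1/mu.
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([4.0, 5.0, 0.0])   # rank 2: one eigenvalue is conventionally infinite

mu = eigh(B, A, eigvals_only=True)   # symmetric problem B x = mu A x (A is SPD)
with np.errstate(divide="ignore"):
    lam = 1.0 / mu
lam[np.abs(mu) < 1e-12] = np.inf     # kernel of B gives the infinite eigenvalues
lam = np.sort(lam)

print(lam)   # finite eigenvalues 1/4 and 2/5, plus one infinite eigenvalue
```

The dimension of the kernel of $\m{B}$ (here $m=1$) matches the number of eigenvalues set to $\infty$.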
We want to study the behavior of the eigenvalues as the parameters $\alpha$ and
$\beta$ vary. We consider three cases.
\subsection{Case 1}
\label{se:caso1}
We fix $\beta>0$ so that $\m{B}$ is positive definite. This implies
that the eigenvalues of~\eqref{eq:eig} are all nonnegative.
Let us consider first $\alpha=0$ so that~\eqref{eq:eig} reduces to
\begin{equation}
\label{eq:eigA1}
\Au x=\lambda \m{B}x.
\end{equation}
Since $\Au$ is positive semidefinite, $\lambda=0$ is an eigenvalue
of~\eqref{eq:eigA1} with multiplicity equal to $\NA=\dim(\KA)$ and $\KA$ is the
associated eigenspace.
In addition, we
have $m_A=n-\NA$ positive eigenvalues $\{\mu_1\le\dots\le\mu_{m_A}\}$ counted
with their multiplicity (since we are dealing with a symmetric problem, we do
not distinguish between geometric and algebraic multiplicity). We denote by
$v_j\in\KA^\perp$ the eigenvector associated with $\mu_j$, that is
\[
\Au v_j=\mu_j \m{B}v_j.
\]
Thanks to property c) of~\cref{ass:C} when $\m{C}=\m{A}$, we observe
that
\[
\m{A}v_j=\Au v_j+\alpha\Ad v_j=\Au v_j=\mu_j\m{B} v_j.
\]
Therefore $(\mu_j,v_j)$, for $j=1,\dots,m_A$, are eigensolutions of the
original system~\eqref{eq:eig}.
On the other hand, the eigensolutions of
\[
\Ad w=\nu\m{B}w
\]
are characterized by the fact that $\NA$ eigenvalues $\nu_i$ ($i=1,\dots,\NA$)
are strictly positive with corresponding eigenvectors $w_i$ belonging to
$\KA$, while the remaining $m_A$ eigenvalues vanish and have $\KA^\perp$ as
eigenspace.
Thus, property a) of~\cref{ass:C}, for $\m{C}=\m{A}$, yields
\[
\m{A}w_i=\Au w_i+\alpha\Ad w_i=\alpha\Ad w_i=\alpha\nu_i \m{B}w_i,
\]
which means that $(\alpha\nu_i,w_i)$, for $i=1,\dots,\NA$, are eigensolutions
of~\eqref{eq:eig}.
Summarizing, the eigenvalues of~\eqref{eq:eig} are:
\begin{equation}
\label{eq:eigA}
\lambda_k=
\left\{
\begin{array}{ll}
\alpha\nu_k & \text{if }1\le k\le\NA\\
\mu_{k-\NA} & \text{if } \NA+1\le k\le n.
\end{array}
\right.
\end{equation}
The left panel in~\cref{fig:case1} shows the eigenvalues
of a simple example where $\m{A}\in\mathbb{R}^{6\times6}$ is
obtained by the combination of diagonal matrices with entries
\begin{equation}
\label{eq:matrici}
\diag(\Au)=[3,4,5,6,0,0],\quad\diag(\Ad)=[0,0,0,0,1,2],
\end{equation}
and $\m{B}=\mathbb{I}_6$ is the identity matrix.
Along the vertical lines we see the eigenvalues corresponding to a fixed value
of $\alpha$. The eigenvalues $3,4,5,6$ are associated with eigenvectors in
$\KA^\perp$ and do not depend on $\alpha$. The solid lines starting at the origin
display the eigenvalues $1,2$ multiplied by $\alpha$.
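The two families of eigenvalues in~\eqref{eq:eigA} can be verified numerically on this example; the following is a sketch in Python/SciPy (rather than the \textsc{Matlab} \texttt{eig} used for the figures), with an arbitrary sample value of $\alpha$:

```python
import numpy as np
from scipy.linalg import eigh

# the diagonal matrices of the example; B is the 6x6 identity
A1 = np.diag([3.0, 4.0, 5.0, 6.0, 0.0, 0.0])
A2 = np.diag([0.0, 0.0, 0.0, 0.0, 1.0, 2.0])

alpha = 0.5
lam = np.sort(eigh(A1 + alpha * A2, np.eye(6), eigvals_only=True))
# expected: {alpha*1, alpha*2} from the kernel of A1, and {3,4,5,6}
# independent of alpha
expected = np.sort(np.r_[alpha * np.array([1.0, 2.0]), [3.0, 4.0, 5.0, 6.0]])
print(np.allclose(lam, expected))
```

Varying `alpha` moves only the first two eigenvalues, linearly, as the solid lines in the figure show.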
\begin{figure}
\begin{center}
\includegraphics[width=.48\textwidth]{matlab/caso1-eps-converted-to.pdf}
\includegraphics[width=.48\textwidth]{matlab/caso2-eps-converted-to.pdf}
\caption{Dependence of the eigenvalues on the parameters $\alpha$ (Case~1) and
$\beta$ (Case~2), respectively}
\label{fig:case1}
\end{center}
\end{figure}
\begin{remark}
\label{re:intersection}
We observe that if $\Ad$ is not positive definite on $\KA$, its kernel has a
nontrivial intersection with $\KA$. Let $n_{12}$ be the dimension of this
intersection, then problem~\eqref{eq:eig} admits $n_{12}$ vanishing eigenvalues
which appear in the first case of~\eqref{eq:eigA}.
\end{remark}
\subsection{Case 2}
\label{se:caso2}
Let us now fix $\alpha>0$, so that $\m{A}$ is positive definite. We have
that all the eigenvalues are positive. We observe that when $\beta=0$, the
matrix $\m{B}=\Bu$ may be singular, therefore it is convenient to consider the
following problem:
\begin{equation}
\label{eq:eig2}
\m{B}x=\chi\m{A}x,
\end{equation}
where $\chi=\frac 1\lambda$. If $\chi=0$, we conventionally set $\lambda=\infty$.
Problem~\eqref{eq:eig2} reproduces the same situation we had in Case~1, with the
matrices $\m{A}$ and $\m{B}$ switched.
Repeating the same arguments as before, we obtain that problem~\eqref{eq:eig2}
has two families of eigenvalues
\[
\chi_k=
\left\{
\begin{array}{ll}
\beta\xi_k & \text{if }1\le k\le\NB\\
\zeta_{k-\NB} & \text{if } \NB+1\le k\le n,
\end{array}
\right.
\]
where
\[
\aligned
&\Bu r_j=\zeta_j \m{A} r_j,\ j=1,\dots,n-\NB &&\text{ with }r_j\in\KB^\perp\\
&\Bd s_i=\xi_i\m{A}s_i,\ i=1,\dots,\NB &&\text{ with }s_i\in\KB.
\endaligned
\]
Going back to the original problem~\eqref{eq:eig}, we can conclude
that the eigensolutions of~\eqref{eq:eig} are the following ones:
\begin{equation}
\label{eq:eigB}
\aligned
&\left(\frac1{\beta\xi_k},s_k\right) &&\text{ for } k=1,\dots,\NB\\
&\left(\frac1{\zeta_{k-\NB}},r_{k-\NB}\right)&&\text{ for }k=\NB+1,\dots,n.
\endaligned
\end{equation}
In the right panel of~\cref{fig:case1}, we report the eigenvalues
of a simple example where $\m{A}=\mathbb{I}_6$ and $\m{B}$ is obtained by
combining $\Bu=\Au$ and $\Bd=\Ad$ defined in~\eqref{eq:matrici}. We see that
the eigenvalues $\frac13,\frac14,\frac15,\frac16$ are independent of $\beta$
and that the remaining two eigenvalues lie along the hyperbolas $\frac1\beta$
and $\frac1{2\beta}$, plotted with solid lines.
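Formula~\eqref{eq:eigB} can be checked on this example as well; again a sketch in Python/SciPy with a sample value of $\beta$ chosen so that all six eigenvalues are distinct:

```python
import numpy as np
from scipy.linalg import eigh

# A is the identity; B = B1 + beta*B2 with the diagonal matrices of the example
B1 = np.diag([3.0, 4.0, 5.0, 6.0, 0.0, 0.0])
B2 = np.diag([0.0, 0.0, 0.0, 0.0, 1.0, 2.0])

beta = 0.5
lam = np.sort(eigh(np.eye(6), B1 + beta * B2, eigvals_only=True))
# expected: 1/3,...,1/6 independent of beta, plus the hyperbolas
# 1/beta and 1/(2*beta)
expected = np.sort([1/3, 1/4, 1/5, 1/6, 1/beta, 1/(2*beta)])
print(np.allclose(lam, expected))
```

Note that $\m{B}$ is positive definite for every $\beta>0$, so the generalized symmetric solver applies directly.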
\subsection{Case 3}
\label{se:caso3}
We consider now the case when $\alpha$ and $\beta$ can vary independently from
each other. We have different situations corresponding to the relation between
$\KA$ and $\KB$.
To ease the reading, let us introduce the following notation:
\begin{subequations}
\begin{alignat}{1}
&\Au v=\mu\Bu v
\label{eq:notation11}\\
&\Au w=\nu \Bd w
\label{eq:notation12}\\
&\Ad y=\chi\Bu y
\label{eq:notation21}\\
&\Ad z=\eta\Bd z.
\label{eq:notation22}
\end{alignat}
\end{subequations}
In this case the space
$\mathbb{R}^n$ can be decomposed into four mutually orthogonal subspaces
\[
\mathbb{R}^n=(\KA\cap\KB)\oplus(\KA\cap\KB^\perp)\oplus(\KA^\perp\cap\KB)\oplus
(\KA^\perp\cap\KB^\perp).
\]
Let us denote by $n_{\Au\cap\Bu}$ the dimension of $\KA\cap\KB$.
If $\KA\cap\KB\ne\{0\}$,
for $x\in\KA\cap\KB$ the eigenproblem to be solved is
$\alpha\Ad x=\lambda\beta\Bd x$, hence the eigenvalues are given by
$\frac{\alpha}{\beta}\eta_i$, $i=1,\dots,n_{\Au\cap\Bu}$,
see~\eqref{eq:notation22}. Next, if $x\in\KA\cap\KB^\perp$ we have to solve
$\alpha\Ad x=\lambda\Bu x$, which admits $(\alpha\chi_i,y_i)$,
$i=1,\dots,\NA-n_{\Au\cap\Bu}$, as eigensolutions, where $(\chi_i,y_i)$ are
defined in~\eqref{eq:notation21}. Similarly, if $x\in\KA^\perp\cap\KB$, we find
that the eigensolutions are $\left(\frac1{\beta}\nu_i,w_i\right)$,
$i=1,\dots,\NB-n_{\Au\cap\Bu}$, with $(\nu_i,w_i)$ given
by~\eqref{eq:notation12}. In the last case, $x\in\KA^\perp\cap\KB^\perp$, the
matrices $\m{A}$ and $\m{B}$ are non singular and thanks to property c) in
~\cref{ass:C}, for $\m{C}=\m{A}$ and $\m{C}=\m{B}$, we obtain that
the eigenvalues are positive and independent of $\alpha$ and $\beta$ and
correspond to those of~\eqref{eq:notation11}. In conclusion, we have
\[
\lambda_k=\left\{
\begin{array}{ll}
\displaystyle \frac{\alpha}{\beta}\eta_k
&\quad\text{ if }1\le k\le n_{\Au\cap\Bu}\\
\alpha\chi_{k-n_{\Au\cap\Bu}} &\quad\text{ if }n_{\Au\cap\Bu}+1\le k\le\NA\\
\displaystyle\frac1{\beta}\nu_{k-\NA}
&\quad\text{ if }\NA+1\le k\le\NA+\NB-n_{\Au\cap\Bu}\\
\mu_{k-(\NA+\NB-n_{\Au\cap\Bu})}
&\quad\text{ if }\NA+\NB-n_{\Au\cap\Bu}+1\le k\le n.
\end{array}
\right.
\]
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.98\textwidth]{matlab/caso7-eps-converted-to.pdf}
\caption{Eigenvalues when $\KA\cap\KB\ne\{0\}$ as a function of $\alpha$
and $\beta$}
\label{fig:ab7}
\end{center}
\end{figure}
We report in~\cref{fig:ab7} the eigenvalues illustrating this last case when
we have diagonal matrices given by
\[
\aligned
&\diag(\Au)=[3,0,0,4,5,6] &&\quad\diag(\Ad)=[0,1,2,0,0,0]\\
&\diag(\Bu)=[7,8,0,0,9,10] &&\quad\diag(\Bd)=[0,0,0.8,1,0,0].
\endaligned
\]
The surface contains the
eigenvalues depending on both $\alpha$ and $\beta$, the hyperbolas those
depending only on $\beta$ and the straight lines those depending only on
$\alpha$. If we cut the three-dimensional picture with a plane at fixed
$\beta>0$, we recognize the behavior analyzed in~\cref{se:caso1} and shown
in~\cref{fig:case1} (left). Analogously, taking a plane with $\alpha>0$
fixed, we recover Case 2 (see~\cref{se:caso2}).
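The four families of eigenvalues can be verified on this example directly from the diagonal entries; a sketch in Python/SciPy with sample values of $\alpha$ and $\beta$:

```python
import numpy as np
from scipy.linalg import eigh

alpha, beta = 2.0, 0.5
A = np.diag([3.0, 0, 0, 4, 5, 6]) + alpha * np.diag([0.0, 1, 2, 0, 0, 0])
B = np.diag([7.0, 8, 0, 0, 9, 10]) + beta * np.diag([0.0, 0, 0.8, 1, 0, 0])

lam = eigh(A, B, eigvals_only=True)
# the four families of the formula above:
predicted = np.sort([alpha / beta * 2.5,     # K_A cap K_B (eta = 2/0.8)
                     alpha * 1 / 8,          # K_A cap K_B^perp (chi = 1/8)
                     4 / beta,               # K_A^perp cap K_B (nu = 4)
                     3 / 7, 5 / 9, 6 / 10])  # K_A^perp cap K_B^perp (mu)
print(np.allclose(np.sort(lam), predicted))
```

Rescaling `alpha` and `beta` moves the first three families along the surface, the straight lines, and the hyperbolas of the figure, respectively, while the last three eigenvalues stay fixed.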
If $\KA\cap\KB=\{0\}$, we set $n_{\Au\cap\Bu}=0$, hence the eigenvalues
are
\[
\lambda_k=\left\{
\begin{array}{ll}
\alpha\chi_k & \text{ if }1\le k\le \NA\\
\displaystyle\frac{\nu_{k-\NA}}{\beta} & \text{ if } \NA+1\le k\le\NA+\NB\\
\mu_{k-\NA-\NB} & \text{ if } \NA+\NB+1\le k\le n.
\end{array}
\right.
\]
\begin{figure}
\begin{center}
\includegraphics[width=.98\textwidth]{matlab/caso4-eps-converted-to.pdf}
\caption{Eigenvalues when $\KA\cap\KB=\{0\}$ as a function of $\alpha$ and
$\beta$}
\label{fig:ab4}
\end{center}
\end{figure}
In order to illustrate the case $\KA\cap\KB=\{0\}$, we report in~\cref{fig:ab4}
the eigenvalues computed using the following diagonal matrices with entries
\[
\aligned
&\diag(\Au)=[0,0,3,4,5,6],&&\quad\diag(\Ad)=[1,2,0,0,0,0],\\
&\diag(\Bu)=[7,8,9,10,0,0],&&\quad\diag(\Bd)=[0,0,0,0,0.8,1].
\endaligned
\]
For a fixed $\alpha$, the solid lines display the hyperbolas
$\frac{\nu_j}\beta$, $j=1,2$, while, when $\beta$ is fixed, we see the
straight lines $\alpha\chi_j$, $j=1,2$. The remaining two eigenvalues are
independent of $\alpha$ and $\beta$.
\section{Virtual element method for eigenvalue problems}
\label{se:VEM}
In this section we recall how algebraic eigenvalue problems similar to the
ones discussed in the previous section can be obtained within the framework
of the Virtual Element Method (VEM) for the discretization of elliptic
eigenvalue problems, see~\cite{GV,GMV}.
We consider the model problem of the Laplacian operator.
Given a connected open domain with Lipschitz continuous boundary
$\Omega\subseteq\mathbb{R}^d$, with $d=2,3$, we look for eigenvalues
$\lambda\in\mathbb{R}$ and eigenfunctions $u\ne0$ such that
\[
\left\{
\begin{array}{ll}
-\Delta u=\lambda u &\quad\text{ in }\Omega\\
u=0&\quad\text{ on }\partial\Omega.
\end{array}
\right.
\]
In view of the application of VEM, we consider the weak form: find
$\lambda\in\mathbb{R}$ and $u\in\Huo$ with $u\ne0$ such that
\begin{equation}
\label{eq:Laplace}
a(u,v)=\lambda b(u,v)\quad\forall v\in\Huo,
\end{equation}
where
\[
a(u,v)=(\nabla u,\nabla v),\quad b(u,v)=(u,v),
\]
and $(\cdot,\cdot)$ is the scalar product in $L^2(\Omega)$.
It is well-known that problem~\eqref{eq:Laplace} admits an infinite sequence
of positive eigenvalues
\[
0<\lambda_1\le \dots\le\lambda_i\le\cdots
\]
repeated according to their multiplicity, each one associated with an
eigenfunction $u_i$ with the following properties:
\begin{equation}
\label{eq:eigf}
\aligned
& a(u_i,u_j)=b(u_i,u_j)=0\quad \text{if }i\ne j\\
& b(u_i,u_i)=1,\quad a(u_i,u_i)=\lambda_i.
\endaligned
\end{equation}
Let us briefly recall the definition of the virtual element spaces and of the
discrete bilinear forms which we are going to use in this section,
see~\cite{BBCMMR,AABMR}.
We present only the two-dimensional spaces; the three-dimensional ones are
obtained using the 2D virtual elements on the faces.
We decompose $\Omega$ into polygons $P$, with diameter $h_P$ and area $|P|$.
Similarly, if $e$ is an edge of an element $P$, we denote by $h_e=|e|$ its
length. Depending on the context $\partial P$ refers to either the boundary of
$P$ or the set of the edges of $P$.
The notation $\T_h$ and $\E_h$ stands for the set of the elements
and the edges, respectively. As usual, $h=\max_{P\in\T_h} h_P$.
We assume the following mesh regularity condition (see~\cite{BBCMMR}):
there exists a positive constant $\gamma$, independent of $h$, such that each
element $P\in\T_h$ is star-shaped with respect to a ball of radius greater
than $\gamma h_P$; moreover, for every element $P$ and for every edge
$e\subset\partial P$, it holds $h_e \ge \gamma h_P$.
For $k\ge1$ and $P\in\T_h$ we define
\[
\tilde{V}_h^k(P)=\{v\in H^1(P): v|_{\partial P}\in C^0(\partial P),
v|_e\in\P_k(e)\ \forall e\subset\partial P, \Delta v\in\P_k(P)\}.
\]
We consider the following linear forms on the space $\tilde{V}_h^k(P)$
\begin{enumerate}
\item[D1]: the values $v(V_i)$ at the vertices $V_i$ of $P$,
\item[D2]: the scaled edge moments up to order $k-2$
\[
\dfrac{1}{|e|}\int_e v\,m\,\text{d}s\quad\forall m\in\mathcal{M}_{k-2}(e),\
\forall e\subset\partial P,
\]
\item[D3]: the scaled element moments up to order $k-2$
\[
\dfrac{1}{|P|}\int_P v\,m\,\text{d}x\quad\forall m\in\mathcal{M}_{k-2}(P),
\]
\end{enumerate}
where $\mathcal{M}_{k-2}(\omega)$ is the set of scaled monomials on $\omega$,
namely
\[
\mathcal{M}_{k-2}(\omega)=\Big\{\Big(\dfrac{\mathbf{x}-\mathbf{x}_\omega}
{h_\omega}\Big)^s, |s|\le k-2\Big\},
\]
with $\mathbf{x}_\omega$ the barycenter of $\omega$, and with the convention
that $\mathcal{M}_{-1}(\omega)=\emptyset$.
From the values of the linear operators D1--D3, on each element $P$ we can
compute a projection operator $\Pinabla: \tilde{V}_h^k(P)\rightarrow
\P_k(P)$ defined as the unique solution of the following problem:
\begin{equation}
\label{eq:pinabla}
\aligned
& a^P(\Pinabla v-v,p)=0\quad\forall p\in\P_k(P)\\
& \int_{\partial P}(\Pinabla v-v)\text{d}s=0,
\endaligned
\end{equation}
where $a^P(u,v)=(\nabla u,\nabla v)_P$ and $(\cdot,\cdot)_P$ denotes the
$L^2(P)$-scalar product.
The local virtual space is defined as
\begin{equation}
\label{eq:VemP}
\VemP=\left\{v\in\tilde{V}_h^k(P):\int_P (v-\Pinabla v) p\text{d}x=0\
\forall p\in(\P_k\setminus\P_{k-2})(P)\right\},
\end{equation}
where $(\P_k\setminus\P_{k-2})(P)$ contains the polynomials in $\P_k(P)$
$L^2$-orthogonal to $\P_{k-2}(P)$.
We recall that by construction $\P_k(P)\subset\VemP$, so that the optimal rate
of convergence is ensured. Moreover, the linear operators D1--D3 provide a
unisolvent set of degrees of freedom (DoFs) for $\VemP$, which allows us to
define and compute $\Pinabla$ on $\VemP$. In addition, the
$L^2$-projection operator $\Pio:\VemP\to\P_k(P)$ is also computable using the
DoFs.
The global virtual space is
\begin{equation}
\label{eq:Vem}
\V=\{v\in\Huo: v|_P\in\VemP\ \forall P\in\T_h\}.
\end{equation}
In order to discretize problem~\eqref{eq:Laplace}, we introduce the
discrete counterparts $a_h$ and $b_h$ of the bilinear forms $a$ and $b$,
respectively. Both discrete forms are obtained as sum of the following
local contributions: for all $u_h,v_h\in\V$
\begin{equation}
\label{eq:abP}
\aligned
&a_h^P(u_h,v_h)=a^P(\Pinabla u_h,\Pinabla v_h)
+S_a^P((I-\Pinabla)u_h,(I-\Pinabla)v_h)\\
&b_h^P(u_h,v_h)=b^P(\Pio u_h,\Pio v_h)+S_b^P((I-\Pio)u_h,(I-\Pio)v_h),
\endaligned
\end{equation}
where $b^P(u,v)=(u,v)_P$, and $S_a^P$ and $S_b^P$ are symmetric
positive definite bilinear forms on $\VemP\times\VemP$ such that
\begin{equation}
\label{eq:stab}
\aligned
&c_0 a^P(v,v)\le S_a^P(v,v)\le c_1a^P(v,v) &&\quad\forall v\in\VemP
\text{ with }\Pinabla v=0\\
&c_2 b^P(v,v)\le S_b^P(v,v)\le c_3b^P(v,v) &&\quad\forall v\in\VemP
\text{ with }\Pio v=0,
\endaligned
\end{equation}
for some positive constants $c_i$ ($i=0,\dots,3$) independent of $h$.
We define
$a_h(u_h,v_h)=\sum_{P\in\T_h}a_h^P(u_h,v_h)$ and
$b_h(u_h,v_h)=\sum_{P\in\T_h}b_h^P(u_h,v_h)$.
The virtual element counterpart of~\eqref{eq:Laplace} reads: find $\lambda_h$
and $u_h\in\V$ with $u_h\ne0$ such that
\begin{equation}
\label{eq:LaplV}
a_h(u_h,v_h)=\lambda_h b_h(u_h,v_h)\quad\forall v_h\in\V.
\end{equation}
Thanks to~\eqref{eq:stab}, the discrete problem~\eqref{eq:LaplV} admits
$N_h=\dim{\V}$ positive eigenvalues
\[
0<\lambda_{1h}\le\dots\le\lambda_{N_h h}
\]
and the corresponding eigenfunctions $u_{ih}$, for $i=1,\dots,N_h$, enjoy the
discrete counterpart of properties in~\eqref{eq:eigf}.
The following convergence result has been proved in~\cite{GV}.
\begin{thm}
\label{th:conv}
Let $\lambda$ be an eigenvalue of~\eqref{eq:Laplace} of multiplicity $m$ and
$\mathcal{E}_\lambda$ the corresponding eigenspace. Then there are exactly $m$
discrete eigenvalues of~\eqref{eq:LaplV} $\lambda_{j(i)h}$ ($i=1,\dots,m$)
tending to $\lambda$.
Moreover, assuming that $u\in H^{1+r}(\Omega)$, for all
$u\in\mathcal{E}_\lambda$, the following inequalities hold true:
\[
\aligned
&|\lambda-\lambda_{j(i)h}|\le Ch^{2t}\\
&\hat\delta(\mathcal{E}_\lambda,\oplus_i\mathcal{E}_{j(i)h})\le Ch^t,
\endaligned
\]
where $t=\min(k,r)$, $\hat\delta(\mathcal{E},\mathcal{F})$ represents
the gap between the spaces $\mathcal{E}$ and $\mathcal{F}$, and
$\mathcal{E}_{\ell h}$ is the eigenspace spanned by $u_{\ell h}$.
\end{thm}
\begin{remark}
\label{re:nonstab}
It is also possible to consider on the right-hand side of~\eqref{eq:LaplV}
the bilinear form $\tb(u_h,v_h)=\sum_{P\in\T_h}b^P(\Pio u_h,\Pio v_h)$.
This leads to the following discrete eigenvalue problem: find
$(\tl,\tu)\in\mathbb{R}\times\V$ with $\tu\ne0$ such that
\begin{equation}
\label{eq:nonstab}
a_h(\tu,v_h)=\tl\tb(\tu,v_h)\quad\forall v_h\in\V.
\end{equation}
The analogue of~\cref{th:conv} holds true for this partially non-stabilized
discretization as well.
\end{remark}
\subsection{Computational aspects and numerical results}
\label{se:subVEM}
In order to compute the solution of problems~\eqref{eq:LaplV}
and~\eqref{eq:nonstab}, we need to describe how to obtain the matrices
associated with our bilinear forms. By construction the matrix $\Au$
(respectively, $\Bu$) associated with $\sum_P a^P(\Pinabla\cdot,\Pinabla\cdot)$
(respectively, $\sum_P b^P(\Pio\cdot,\Pio\cdot)$) has kernel corresponding to
the elements $v_h\in\V$ such that
$\Pinabla v_h$ is constant (respectively, $\Pio v_h=0$) for all $P\in\T_h$.
We observe that the local contributions of the bilinear forms displayed
in~\eqref{eq:abP} mimic the following exact relations
\begin{equation}
\label{eq:exactab}
\aligned
&a^P(u_h,v_h)=
a^P(\Pinabla u_h,\Pinabla v_h)+a^P((I-\Pinabla)u_h,(I-\Pinabla)v_h)\\
&b^P(u_h,v_h)=
b^P(\Pio u_h,\Pio v_h)+b^P((I-\Pio)u_h,(I-\Pio)v_h).
\endaligned
\end{equation}
Let us denote by $\Aul$, $\Adl$, $\Bul$ and $\Bdl$ the matrices whose entries are
given by
\begin{equation}
\label{eq:matr}
\aligned
&(\Aul)_{ij}=a^P(\Pinabla \phi_i,\Pinabla \phi_j),&\quad
&(\Adl)_{ij}=a^P((I-\Pinabla)\phi_i,(I-\Pinabla)\phi_j)\\
&(\Bul)_{ij}=b^P(\Pio \phi_i,\Pio \phi_j),&\quad
&(\Bdl)_{ij}=b^P((I-\Pio)\phi_i,(I-\Pio)\phi_j)
\endaligned
\end{equation}
with $\phi_i$ basis functions for $\VemP$.
Even if the global matrices $\m{A}$ and $\m{B}$ do not satisfy the properties
stated in~\cref{ass:C}, it turns out that~\cref{ass:C} is
fulfilled by $\m{C}=\Bul+\beta\Bdl$; moreover, $\m{C}=\Aul+\alpha\Adl$ is
characterized by the situation described in~\cref{re:intersection}.
We start with the pair $\Aul$ and $\Adl$.
The kernel $K_{\Aul}$, with abuse of notation, is characterized by
\[
K_{\Aul}=\{v\in\VemP: a^P(\Pinabla v,\Pinabla w)=0\ \forall w\in\VemP\},
\]
that is, $K_{\Aul}$ is made of $v$ with constant $\Pinabla v$ on $P$.
Moreover, the orthogonal complement of $K_{\Aul}$, denoted by
$K_{\Aul}^\perp$ contains the elements $v\in\VemP$ such that $a^P(v,w)=0$ for
all $w\in K_{\Aul}$.
We now show that $\Adl( K_{\Aul}^\perp)=0$, that is, for all $v\in K_{\Aul}^\perp$,
$a^P((I-\Pinabla)v,(I-\Pinabla)w)=0$ for all $w\in\VemP$.
We recall that, if $v\in K_{\Aul}^\perp$, then $a^P(v,w)=0$ for
all $w\in K_{\Aul}$. This implies that for $v\in K_{\Aul}^\perp$ and
$w\in K_{\Aul}$, it holds true that
$a^P(v,w)=a^P((I-\Pinabla)v,(I-\Pinabla)w)=0$.
Now we can write for all $w\in\VemP$
\[
\aligned
&a^P((I-\Pinabla)v,(I-\Pinabla)w)\\
&\quad=a^P((I-\Pinabla)v,(I-\Pinabla)(I-\Pinabla)w)+
a^P((I-\Pinabla)v,(I-\Pinabla)\Pinabla w)=0.
\endaligned
\]
Indeed, $\Pinabla(I-\Pinabla)w=0$ implies that $(I-\Pinabla)w\in K_{\Aul}$, and
thus the first term vanishes,
while for the second term it is enough to observe that
$\Pinabla(\Pinabla w)=\Pinabla w$.
Thus property c) of~\cref{ass:C} is verified for $\m{C}=\m{A}$.
Concerning property b) of~\cref{ass:C}, we have
by construction, that $a^P((I-\Pinabla)v,(I-\Pinabla)v)\ge0$
for all $v\in\VemP$, see~\eqref{eq:exactab}. On the other hand, if $v$ is
constant on $P$, then $\Pinabla v=v$ is constant, therefore
$v\in K_{\Aul}$ and $(I-\Pinabla)v=0$ so that $v$ belongs also to the
kernel of $\Adl$. Hence the pair $\Aul$ and $\Adl$ does not satisfy
property b), but it is in the situation described in~\cref{re:intersection}.
Let us now consider the pair $\Bul$ and $\Bdl$. We observe that the kernel of
$\Bul$ is characterized by $\Pio v=0$. The analysis performed for the pair
$\Aul$ and $\Adl$ can be repeated and gives that in this case~\cref{ass:C}
is verified for $\m{C}=\m{B}$.
As a consequence of the assembling of the local matrices, the global
matrices $\Au$ and $\Ad$ (respectively, $\Bu$ and $\Bd$) no longer satisfy
the properties listed in~\cref{ass:C}. In particular, for $k=1$ we
shall see that the matrices $\Au$ and $\Bu$ are not singular. Nevertheless, we
are going to show that the numerical results are very similar to the
ones reported in~\cref{se:param}.
Moreover, in practice the matrices $\Adl$ and $\Bdl$ are not available and
they are replaced by the local bilinear forms $S_a^P$ and $S_b^P$ given
in~\eqref{eq:abP} as follows.
Let us denote by $\mathbf{u}_h,\mathbf{v}_h\in\mathbb{R}^{N_P}$ the vectors containing
the values of the $N_P$ local DoFs associated with $u_h,v_h\in\VemP$. Then, we
define the local stabilized forms as
\[
S_a^P(u_h,v_h)=\sigma_P \mathbf{u}_h^\top\mathbf{v}_h,\quad
S_b^P(u_h,v_h)=\tau_P h_P^2\mathbf{u}_h^\top\mathbf{v}_h,
\]
where the stability parameters $\sigma_P$ and $\tau_P$ are positive constants
which might depend on $P$ but are independent of $h$. We point out that this
choice implies the stability requirements in~\eqref{eq:stab}.
In the applications, the parameter $\sigma_P$ is usually
chosen depending on the mean value of the eigenvalues of the matrix
stemming from the term $a^P(\Pinabla\cdot,\Pinabla\cdot)$, and
$\tau_P$ as the mean value of the eigenvalues of the matrix resulting from
$\frac1{h^2_P}(\Pio\cdot,\Pio\cdot)_P$.
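Since the mean of the eigenvalues of a square matrix equals its trace divided by its dimension, this choice requires no eigensolve. A minimal sketch, where \texttt{Ac\_P} is a hypothetical placeholder for the local matrix of $a^P(\Pinabla\cdot,\Pinabla\cdot)$:

```python
import numpy as np

def mean_eigenvalue(M):
    # the mean of the eigenvalues of a square matrix equals trace(M)/n
    return np.trace(M) / M.shape[0]

# hypothetical local matrix standing in for a^P(Pi u, Pi v): here just a
# random symmetric positive semidefinite placeholder of size N_P = 8
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8))
Ac_P = X @ X.T

sigma_P = mean_eigenvalue(Ac_P)
print(np.isclose(sigma_P, np.mean(np.linalg.eigvalsh(Ac_P))))
```

The parameter $\tau_P$ would be obtained in the same way from the local matrix of $\frac1{h_P^2}(\Pio\cdot,\Pio\cdot)_P$.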
The choice of the stabilized form $S_a^P$
is discussed in some papers concerning the source problem, see, e.g.,
\cite{BLR} and the references therein. One can find an analysis of the
stabilization parameters $\sigma_P$ in~\cite{CMRS}.
If $\sigma_P$ and $\tau_P$ vary in a small range, it is reasonable to take
$\sigma_P=\alpha$ and $\tau_P=\beta$
for all $P$ and this is the situation which we discuss further. Therefore, the
structure of the matrices is $\m{A}=\Au+\alpha\Ad$ and $\m{B}=\Bu+\beta\Bd$
where $\Ad$ and $\Bd$ are the matrices with local contribution given by
$\mathbf{u}_h^\top\mathbf{v}_h$ and $h_P^2\mathbf{u}_h^\top\mathbf{v}_h$,
respectively.
We study the behavior of the eigenvalues as $\alpha$ and $\beta$ vary in given
ranges.
In the following tests $\Omega$ is the unit square partitioned using a sequence
of Voronoi meshes with a given number of elements. In~\cref{fig:Voronoi} we
report the coarsest mesh with 50 elements ($h=0.2350$, 151 edges, 102
vertices). We recall that the exact
eigenvalues are given by $(i^2+j^2)\pi^2$ for $i,j\in\mathbb{N}\setminus\{0\}$
with eigenfunctions $\sin(i\pi x)\sin(j\pi y)$.
The following numerical results have been obtained using \textsc{Matlab} and, in
particular, the routine \texttt{eig} for the computation of the eigenvalues.
In the following figures, we shall always report the computed eigenvalues
divided by $\pi^2$.
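The reference values $\lambda/\pi^2$ used in the figures can be generated directly (a few lines of plain Python):

```python
# exact eigenvalues of the Dirichlet Laplacian on the unit square, divided
# by pi^2: the values i^2 + j^2 for i, j >= 1, sorted with multiplicity
vals = sorted(i * i + j * j for i in range(1, 20) for j in range(1, 20))
print(vals[:8])  # [2, 5, 5, 8, 10, 10, 13, 13]
```

The repeated values reflect the double eigenvalues with $i\ne j$, whose eigenspaces are spanned by $\sin(i\pi x)\sin(j\pi y)$ and $\sin(j\pi x)\sin(i\pi y)$.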
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/mesh50square-eps-converted-to.pdf}
\caption{Voronoi mesh with 50 polygons.}
\label{fig:Voronoi}
\end{center}
\end{figure}
\Cref{tb:kerAu} and~\cref{tb:kerBu} display the dimension of the kernel of the
matrices $\Au$
and $\Bu$ for $k=1,2,3$, and for different numbers $N$ of the elements in
the mesh.
\begin{table}
\caption{Dimension of $\KA$ with respect to $k$ and the number of elements}
\centering
\begin{tabular}{cc cc cc cc cc cc }\toprule
$k$ && $N=50$ && $N=100$ &&$N=200$ &&$N=400$ &&$N=800$\\
\midrule
1 && 0 && 0 && 0 && 0 && 0\\
2 && 3 && 30 && 99 && 258 && 565\\
3 && 27 && 94 && 246 && 588 && 1312\\
\bottomrule
\end{tabular}
\label{tb:kerAu}
\end{table}
\begin{table}
\caption{Dimension of $\KB$ with respect to $k$ and the number of elements}
\centering
\begin{tabular}{cc cc cc cc cc cc }\toprule
$k$ && $N=50$ && $N=100$ &&$N=200$ &&$N=400$ &&$N=800$\\
\midrule
1 && 0 && 0 && 0 && 0 && 0\\
2 && 0 && 0 && 0 && 0 && 0\\
3 && 0 && 1 && 43 && 182 && 504\\
\bottomrule
\end{tabular}
\label{tb:kerBu}
\end{table}
In particular we see that for $k=1$ the matrix $\Au$ is nonsingular.
We have computed the lowest eigenvalue of $\Au x=\lambda \Bu x$,
which gives an estimate of the \emph{inf-sup} constant of the discrete problem~\eqref{eq:LaplV}.
The results, presented in~\cref{tb:infsup}, show that the first eigenvalue is
decreasing, and this behavior corresponds to the fact that the bilinear form
$\sum_P a^P(\Pinabla\cdot,\Pinabla\cdot)$ is not stable.
\begin{table}
\caption{First eigenvalues of $\Au x=\lambda \Bu x$ for different meshes}
\centering
\begin{tabular}{c c c c c}\toprule
$N=50$ & $N=100$ &$N=200$ &$N=400$ &$N=800$\\
\midrule
1.92654e+00 & 1.74193e+00 & 1.06691e+00 & 6.81927e-01 & 5.54346e-01\\
\bottomrule
\end{tabular}
\label{tb:infsup}
\end{table}
We now discuss some tests, where we present the behavior of the eigenvalues as
the parameters $\alpha$ and $\beta$ vary, for the mesh with $N=200$ and
different degree $k$ of the polynomials in the space $\V$.
The rows of~\cref{fig:girato} contain the results for fixed $k$ and the
values $\beta=0,1,5$, while, in the columns, $\beta$ is fixed and $k$ varies.
In each picture, we plot in red the exact eigenvalues and
with different colors those corresponding to $\alpha=10^r$ with $r=-3,\dots,1$.
These plots clearly confirm that choosing the parameters for optimal
performance is not straightforward. Consider, in particular, that we are solving
the Laplace eigenvalue problem (isotropic diffusion) on a domain as simple as a
square. For an arbitrary elliptic problem and more general domains the
situation could be much more complicated.
For $\beta=0$, the first 30 eigenvalues are well approximated with
higher polynomial degrees whenever $\alpha\ge0.1$. The value $\alpha=0.1$
seems to be the best choice in the case $k=1$. Increasing $\beta$ does not
produce much improvement. All the pictures seem to indicate that higher values
of $\alpha$ might give better results. In particular, for $k=2,3$ the first 30
eigenvalues are approximated with reasonable accuracy for $\alpha=10$ and
$\beta=1$. Increasing $\beta$ while keeping $\alpha=10$, we see that a smaller
number of eigenvalues is captured.
\begin{figure}[h]
\begin{center}
\includegraphics[width=\textwidth]{figures/figure_F/legend.png}
\subfigure[\tiny{$k=1$, $\beta=0$}]
{\includegraphics[width=.32\textwidth]{figures/figure_F/giratok1_beta0_autoval30-eps-converted-to.pdf}}
\subfigure[\tiny{$k=1$, $\beta=1$}]
{\includegraphics[width=.32\textwidth]{figures/figure_F/giratok1_beta1_autoval30-eps-converted-to.pdf}}
\subfigure[\tiny{$k=1$, $\beta=5$}]
{\includegraphics[width=.32\textwidth]{figures/figure_F/giratok1_beta5_autoval30-eps-converted-to.pdf}}
\subfigure[\tiny{$k=2$, $\beta=0$}]
{\includegraphics[width=.32\textwidth]{figures/figure_F/giratok2_beta0_autoval30-eps-converted-to.pdf}}
\subfigure[\tiny{$k=2$, $\beta=1$}]
{\includegraphics[width=.32\textwidth]{figures/figure_F/giratok2_beta1_autoval30-eps-converted-to.pdf}}
\subfigure[\tiny{$k=2$, $\beta=5$}]
{\includegraphics[width=.32\textwidth]{figures/figure_F/giratok2_beta5_autoval30-eps-converted-to.pdf}}
\subfigure[\tiny{$k=3$, $\beta=0$}]
{\includegraphics[width=.32\textwidth]{figures/figure_F/giratok3_beta0_autoval30-eps-converted-to.pdf}}
\subfigure[\tiny{$k=3$, $\beta=1$}]
{\includegraphics[width=.32\textwidth]{figures/figure_F/giratok3_beta1_autoval30-eps-converted-to.pdf}}
\subfigure[\tiny{$k=3$, $\beta=5$}]
{\includegraphics[width=.32\textwidth]{figures/figure_F/giratok3_beta5_autoval30-eps-converted-to.pdf}}
\end{center}
\caption{First 30 eigenvalues for different values of $k$, $\alpha$ and
$\beta$}
\label{fig:girato}
\end{figure}
\Cref{fig:alfa} shows the behavior of the eigenvalues as $\alpha$ varies
from $0$ to $10$. At first glance the pictures are reminiscent of~\cref{fig:case1} (left),
even though, as explained before, the situation does not exactly match what we
discussed in~\cref{se:param}.
Each subplot reports all computed eigenvalues between $0$ and $40$; the dotted
horizontal lines represent the exact solutions. The first $30$ computed
eigenvalues are connected by lines of different colors in an
automated way. An ideal approximation would correspond to the
colored lines matching the dotted lines of the exact eigenvalues.
It is interesting to look at the differences between various degrees ($k$ from
$1$ to $3$ moving from the top to the bottom) and values of $\beta$ (equal to
$0$, $1$, and $5$ from left to right).
\begin{figure}[h]
\begin{center}
\subfigure[\tiny{$k=1$, $\beta=0$, $\alpha\in[0,10]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/rette_k1_b0_a10-eps-converted-to.pdf}}
\subfigure[\tiny{$k=1$, $\beta=1$, $\alpha\in[0,10]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/rette_k1_b1_a10-eps-converted-to.pdf}}
\subfigure[\tiny{$k=1$, $\beta=5$, $\alpha\in[0,10]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/rette_k1_b5_a10-eps-converted-to.pdf}}
\subfigure[\tiny{$k=2$, $\beta=0$, $\alpha\in[0,10]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/rette_k2_b0_a10-eps-converted-to.pdf}}
\subfigure[\tiny{$k=2$, $\beta=1$, $\alpha\in[0,10]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/rette_k2_b1_a10-eps-converted-to.pdf}}
\subfigure[\tiny{$k=2$, $\beta=5$, $\alpha\in[0,10]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/rette_k2_b5_a10-eps-converted-to.pdf}}
\subfigure[\tiny{$k=3$, $\beta=0$, $\alpha\in[0,10]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/rette_k3_b0_a10-eps-converted-to.pdf}}
\subfigure[\tiny{$k=3$, $\beta=1$, $\alpha\in[0,10]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/rette_k3_b1_a10-eps-converted-to.pdf}}
\subfigure[\tiny{$k=3$, $\beta=5$, $\alpha\in[0,10]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/rette_k3_b5_a10-eps-converted-to.pdf}}
\end{center}
\caption{Eigenvalues versus $\alpha$ for different values of $k$ and $\beta$}
\label{fig:alfa}
\end{figure}
\begin{figure}
\includegraphics[width=.70\textwidth]{figures/autofun/rette_k3_b1_a10_spuri-eps-converted-to.pdf}
\caption{Same plot as in~\cref{fig:alfa}(h) with four marked (spurious)
eigenvalues}
\label{fig:marked}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.45\textwidth]{figures/autofun/alfa3_sp2-eps-converted-to.pdf}
\includegraphics[width=.45\textwidth]{figures/autofun/alfa11_sp7-eps-converted-to.pdf}
\includegraphics[width=.45\textwidth]{figures/autofun/alfa21_sp14-eps-converted-to.pdf}
\includegraphics[width=.45\textwidth]{figures/autofun/alfa31_sp20-eps-converted-to.pdf}
\end{center}
\caption{Eigenfunctions corresponding to the eigenvalues marked
in~\cref{fig:marked}}
\label{fig:autof}
\end{figure}
More reliable results seem to be obtained for large $k$ and small $\beta$.
Actually, the limit case of $\beta=0$ appears to be the safest choice. This is
in agreement with the claim of~\cite{BMRR} where the authors remark that
``even the value $\sigma_E=0$ yields very accurate results, in spite of the
fact that for such a value of the parameter the stability estimate and hence
most of the proofs of the theoretical results do not hold'' (note that
$\sigma_E$ in~\cite{BMRR} has the same meaning as $\beta$ in our paper). It
is interesting to observe that the analysis of~\cite{GV}, summarized in~\cref{th:conv},
covers the case $\beta=0$ as well.
On the other hand, $\beta=0$ may produce a singular matrix $\m{B}$, which
may not be convenient from the computational point of view.
In order to better understand the behavior of the eigenvalues reported in~\cref{fig:alfa}(h),
we highlight in~\cref{fig:marked} four eigenvalues that are apparently aligned along an oblique line.
The corresponding eigenfunctions are reported in~\cref{fig:autof}. The four
eigenfunctions look similar, so that the analogy with~\cref{fig:case1}
(left) is even more evident.
We conclude this discussion with an example where, for a given value of
$\alpha$, a good eigenvalue (i.e., an eigenvalue corresponding to a correct
approximation) crosses a spurious one (i.e., an eigenvalue belonging to an
oblique line). In this case it may happen that the two eigenfunctions mix
together, yielding an even more complicated situation. This behavior
is reported in~\cref{fig:autof-marked}, where a region of the plot shown
in~\cref{fig:alfa}(h) is blown up close to an intersection point:
actually three eigenvalues (a spurious one and two good ones)
are clustered at the marked intersection points.
\begin{figure}
\begin{center}
\includegraphics[width=.45\textwidth]{figures/autofun/spuriomischiato1-eps-converted-to.pdf}
\includegraphics[width=.45\textwidth]{figures/autofun/alfa13_sp7-eps-converted-to.pdf}
\medskip
\includegraphics[width=.45\textwidth]{figures/autofun/spuriomischiato2-eps-converted-to.pdf}
\includegraphics[width=.45\textwidth]{figures/autofun/alfa41_sp25-eps-converted-to.pdf}
\end{center}
\caption{Intersections of good and spurious eigenvalues}
\label{fig:autof-marked}
\end{figure}
\Cref{fig:beta} shows the computed eigenvalues smaller than $40$ when
$\beta$ varies from $0$ to $5$ for a fixed value of $\alpha$.
As in~\cref{fig:girato} and in analogy with~\cref{fig:alfa}, the
rows correspond to the degree $k$ of
polynomials, while the columns refer to different values of $\alpha$. The
dotted horizontal lines represent the exact eigenvalues.
The lines with different colors in each picture follow the $n$-th
eigenvalue for $n=1,\dots,30$.
It turns out that all lines originate from curves that look like
hyperbolas when $\beta$ is large. Following each of these hyperbolas backwards
from $\beta=+\infty$, when the hyperbola meets a correct
approximation of an eigenvalue of the continuous problem, it deviates from its
trajectory and becomes an almost horizontal straight line.
In the case $k=1$, we see that the higher eigenvalues are computed
with decreasing accuracy as $\beta$ approaches $0$.
\begin{figure}[h]
\begin{center}
\subfigure[\tiny{$k=1$, $\alpha=0.1$, $\beta\in[0,5]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/iperb_k1_a01_b5-eps-converted-to.pdf}}
\subfigure[\tiny{$k=1$, $\alpha=1$, $\beta\in[0,5]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/iperb_k1_a1_b5-eps-converted-to.pdf}}
\subfigure[\tiny{$k=1$, $\alpha=10$, $\beta\in[0,5]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/iperb_k1_a10_b5-eps-converted-to.pdf}}
\subfigure[\tiny{$k=2$, $\alpha=0.1$, $\beta\in[0,5]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/iperb_k2_a01_b5-eps-converted-to.pdf}}
\subfigure[\tiny{$k=2$, $\alpha=1$, $\beta\in[0,5]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/iperb_k2_a1_b5-eps-converted-to.pdf}}
\subfigure[\tiny{$k=2$, $\alpha=10$, $\beta\in[0,5]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/iperb_k2_a10_b5-eps-converted-to.pdf}}
\subfigure[\tiny{$k=3$, $\alpha=0.1$, $\beta\in[0,5]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/iperb_k3_a01_b5-eps-converted-to.pdf}}
\subfigure[\tiny{$k=3$, $\alpha=1$, $\beta\in[0,5]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/iperb_k3_a1_b5-eps-converted-to.pdf}}
\subfigure[\tiny{$k=3$, $\alpha=10$, $\beta\in[0,5]$}]
{\includegraphics[width=.32\textwidth]{figures/figure_Lnew/iperb_k3_a10_b5-eps-converted-to.pdf}}
\end{center}
\caption{Eigenvalues versus $\beta$ for different values of $k$ and $\alpha$}
\label{fig:beta}
\end{figure}
We recognize in these pictures the situation presented in~\cref{se:caso2},
corresponding to the behavior of the eigenvalues when
the parameter $\beta$ in matrix $\m{B}$ varies.
In this test, the kernel of the matrix $\Bu$ is nontrivial only for $k=3$.
Nevertheless, we can see that when $\beta$ approaches $0$, several
eigenvalues go to $\infty$. On the other hand, for larger values of
$\beta$ we obtain several spurious eigenvalues.
The range of $\beta$ that gives eigenvalues close to
the exact ones clearly depends on $k$ and $\alpha$.
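The divergence of eigenvalues as $\beta\to0$ can be illustrated on a toy problem (these are not the VEM matrices, just diagonal stand-ins of our own choosing): write $\m{B}(\beta)=\m{B}_0+\beta\,\m{B}_1$ with $\m{B}_0$ singular and $\m{B}_1$ acting on its kernel. The generalized eigenvalue supported on $\ker\m{B}_0$ then behaves like $c/\beta$, blowing up as $\beta\to0$ and becoming a small spurious value for large $\beta$.

```python
import numpy as np

# Toy illustration (not the VEM matrices): A x = lambda B(beta) x with
# B(beta) = B0 + beta*B1, where B0 is singular and B1 acts on ker(B0).
A = np.diag([1.0, 2.0, 5.0])
B0 = np.diag([1.0, 1.0, 0.0])   # singular: third basis vector in its kernel
B1 = np.diag([0.0, 0.0, 1.0])   # "stabilization" acting on that kernel

def eigs(beta):
    B = B0 + beta * B1
    # generalized eigenvalues of (A, B) for invertible B
    return np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)

# The eigenvalue supported on ker(B0) equals 5/beta here: it diverges
# as beta -> 0 and shrinks (spuriously) as beta grows.
for beta in [1.0, 0.1, 0.01]:
    print(beta, eigs(beta))
```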
\Cref{fig:6400} displays, in separate pictures, the first four eigenvalues,
with $k=1$, $\alpha=10$, different values of $h$, and $0\le\beta\le400$. Taking
into account that the routine \texttt{eig} sorts the eigenvalues in ascending
order, the four pictures display, in lexicographical order, the first, second,
third and fourth computed eigenvalues. In each subplot, each line refers to a
particular mesh. We can see that the eigenvalues computed with the finest mesh
seem to be insensitive to the value of $\beta$.
By contrast, the coarsest mesh gives approximations
of the correct values only when $\beta$ is very small and, furthermore,
the accuracy is rather low.
For each eigenvalue and each fixed mesh we recognize a
critical value of the parameter such that greater values of $\beta$ produce
spurious eigenvalues.
The behavior of these eigenvalues clearly reproduces that of the eigenvalues
in~\cref{fig:case1} (right), referring to Case 2. The results are plotted from
a different perspective, since they now also depend on the computational mesh.
The bottom-right plot of~\cref{fig:6400} highlights a phenomenon that
already appears in~\cref{fig:beta}(i). Indeed, we see that the red line
corresponding to the fourth computed eigenvalue for $N=400$ lies along a
hyperbola until $\beta=65$, where it reaches the value $5$ associated with the
second and third exact eigenvalues. Between $\beta=65$ and $\beta=55$ the red
line remains close to $5$; then, as $\beta$ decreases further, it follows a
different hyperbola until it reaches the expected value at $\beta=35$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=\textwidth]{figures/6400_4-eps-converted-to.pdf}
\end{center}
\caption{First four eigenvalues versus $\beta$ for different meshes ($k=1$, $\alpha=10$)}
\label{fig:6400}
\end{figure}
\section*{Conclusions}
In this paper we have discussed how numerically computed eigenvalues can
depend on discretization parameters. \Cref{se:param}
shows the dependence on $\alpha$ and $\beta$ of the eigenvalues
of~\eqref{eq:eig} when $\m{A}$ and $\m{B}$ have the forms~\eqref{eq:A}
and~\eqref{eq:B}, respectively. In~\cref{se:VEM} we have studied the
behavior of the eigenvalues of the Laplace operator computed with the Virtual
Element Method. The presence of two parameters resembles the abstract setting
of~\cref{se:param}; even if the assumptions satisfied by the VEM matrices are
more complicated than the ones previously discussed, the numerical results are
largely in agreement. The present work opens the question of a viable
choice of the parameters for eigenvalue computations when the discretization
scheme depends on a suitable tuning of them (as in the case of VEM).
\section*{Acknowledgments}
%
The authors are members of INdAM Research group GNCS and their
research is supported by PRIN/MIUR. The research of the first and third
authors is partially supported by IMATI/CNR.
% arXiv:0908.2456
\title{Descent polynomials for permutations with bounded drop size}
\begin{abstract}
Motivated by juggling sequences and bubble sort, we examine permutations on the
set $\{1,2,\dots,n\}$ with $d$ descents and maximum drop size $k$. We give
explicit formulas for enumerating such permutations for given integers $k$ and
$d$. We also derive the related generating functions and prove unimodality and
symmetry of the coefficients.
\end{abstract}
\section{Introduction}
There have been extensive studies of various statistics on $\mathcal{S}_n$,
the set of all permutations of $\{ 1,2, \dots, n\}$. For a
permutation $\pi$ in $\mathcal{S}_n$, we say that $\pi$ has a \emph{drop} at
$i$ if $\pi_i < i$ and that the \emph{drop size} is $i-\pi_i$. We say
that $\pi$ has a \emph{descent} at $i$ if $\pi_i > \pi_{i+1}$. One of
the earliest results \cite{mac} in permutation statistics states that
the number of permutations in $\mathcal{S}_n$ with $k$ drops equals the
number of permutations with $k$ descents. A concept closely related to
drops is that of \emph{excedances}, which is just a drop of the
inverse permutation. In this paper we focus on drops instead of
excedances because of their connection with our motivating
applications concerning bubble sort and juggling sequences.
Other statistics on a permutation $\pi$ include such things as the
number of \emph{inversions}, that is, $|\{(i,j) : i < j,\; \pi_i >
\pi_j\}|$, and the \emph{major index} of $\pi$ (i.e., the sum of $i$
for which a descent occurs). The enumeration of and generating
functions for these statistics can be traced back to the work of
Rodrigues in 1839 \cite{rodrigues} but was mainly influenced by
MacMahon's treatise in 1915 \cite{mac}. There is an extensive
literature studying the distribution of the above statistics and their
$q$-analogs (see for example Foata and Han \cite{foata_han} or the
papers of Shareshian and Wachs \cite{wachs,wachs2} for more recent
developments).
As noted above, the drop statistic that we study is closely related to the
excedances statistic. The distribution of the bivariate
statistics $(\mbox{descents},\mbox{excedances})$ can be found in
Foata and Han~\cite[equations (1.15) and (1.16)]{foata_han_fix}.
This joint work originated from its connection with a paper \cite{jug}
on sequences that can be translated into juggling patterns. The set of
juggling sequences of period $n$ containing a specific state, called
the ground state, corresponds to the set $\mathcal{B}_{n,k}$ of permutations in
$\mathcal{S}_n$ with drops of size at most $k$. As it turns out, $\mathcal{B}_{n,k}$
can also be associated with the set of permutations that can be sorted
by $k$ operations of bubble sort. These connections will be further
described in the next section. We note that the {\it maxdrop}
statistic has not been treated in the literature as extensively as
many other statistics in permutations. As far as we know, this is the
first time that the distribution of descents with respect to maxdrop
has been determined.
First we give some definitions concerning the statistics and
polynomials that we examine. Given a permutation $\pi$ in $\mathcal{S}_n$,
let $\Des(\pi)$ denote the descent set, $\{1\leq i<n: \pi_i >
\pi_{i+1}\}$, of $\pi$ and let $\des(\pi)=|\Des(\pi)|$ be the number
of descents. We use $\maxdrop(\pi)$ to denote the value of the maximum
drop (or maxdrop) of $\pi$,
\begin{equation*}
\maxdrop (\pi) = \max\{\,i - \pi(i) : 1\leq i\leq n\,\}.
\end{equation*}
Let $\mathcal{B}_{n,k}=\{\pi \in \mathcal{S}_n : \maxdrop(\pi) \leq k\}$. It is
known, and also easy to show, that $|\mathcal{B}_{n,k}| = k!(k+1)^{n-k}$; e.g., see
\cite[Thm. 1]{jug} or \cite[p. 108]{knuth}. Let
\begin{equation*}
b_{n,k}(r) = |\{\pi \in \mathcal{B}_{n,k} : \des(\pi) = r \}|,
\end{equation*}
and define the ($k$-maxdrop-restricted) descent polynomial
\begin{equation*}
B_{n,k} (x) = \sum_{r \geq 0} b_{n,k}(r) x^r = \sum_{\pi \in \mathcal{B}_{n,k}}x^{\des(\pi)}.
\end{equation*}
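For small $n$ these polynomials can be computed by brute force; the following sketch (plain Python, helper names are ours) enumerates $\mathcal{S}_n$ and tallies descents of the permutations with $\maxdrop\leq k$:

```python
from itertools import permutations
from math import factorial

def maxdrop(pi):
    # pi is a tuple/list with pi[i-1] playing the role of pi_i
    return max(i + 1 - v for i, v in enumerate(pi))

def des(pi):
    return sum(1 for i in range(len(pi) - 1) if pi[i] > pi[i + 1])

def b_poly(n, k):
    """Coefficient list [b_{n,k}(0), ..., b_{n,k}(n-1)] of B_{n,k}(x)."""
    coeffs = [0] * n
    for pi in permutations(range(1, n + 1)):
        if maxdrop(pi) <= k:
            coeffs[des(pi)] += 1
    return coeffs

# |B_{n,k}| = k! (k+1)^{n-k}; e.g. n = 5, k = 2 gives 2 * 27 = 54
print(sum(b_poly(5, 2)))
```

The total count matches $k!(k+1)^{n-k}$, and for $k\geq n-1$ the coefficients are the Eulerian numbers.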
Examining the case of $k=2$, we discovered that the coefficients $b_{n,2}(r)$
of $B_{n,2}(x)$ appear to be given by every \emph{third} coefficient of
the simple polynomial
\begin{equation*}
(1+x^2)(1+x+x^2)^{n-1}.
\end{equation*}
Looking at the next two cases, $k=3$ and $k=4$, yielded more
mysterious polynomials: $b_{n,3}(r)$ appeared to be every fourth
coefficient of
\begin{equation*}
(1+x^2+2x^3+x^4+x^6)(1+x+x^2+x^3)^{n-2}
\end{equation*}
and $b_{n,4}(r)$ every fifth coefficient of
\begin{equation*}
(1+x^2+2x^3+4x^4+4x^5+4x^7+4x^8+2x^9+x^{10}+x^{12})(1+x+x^2+x^3+x^4)^{n-3}.
\end{equation*}
After a fierce battle with these polynomials, we were able to show that
$b_{n,k}(r)$ is the coefficient of $ u^{r(k+1)}$ in the polynomial
\begin{equation}\label{closed_1}
P_k(u) \left(1+u+\dots + u^k \right)^{n-k}
\end{equation}
where
\begin{equation}\label{closed_2}
P_k(u)
= \sum_{j=0}^k A_{k-j}(u^{k+1})(u^{k+1}-1)^{j}\sum_{i=j}^k\binom{i}{j}u^{-i},
\end{equation}
and $A_k$ denotes the $k$th Eulerian polynomial (defined in the
next section). Further to this, we give an expression for the
generating function $\mathbf B_k(z,y) = \sum_{n \geq 0}B_{n,k}(y)z^n$,
namely
\begin{equation*}
\mathbf B_k(z,y) = \dfrac{\displaystyle{ 1+\sum_{t=1}^k \left(
A_t(y) - \sum_{i=1}^t \binom{k+1}{i} (y-1)^{i-1} A_{t-i}(y)
\right)z^t }}{\displaystyle{ 1 - \sum_{i=1}^{k+1}\binom{k+1}{i}z^i (y-1)^{i-1} }}.
\end{equation*}
We also give some alternative formulations for $P_k$ which lead to
some identities involving Eulerian numbers as well as proving the
symmetry and unimodality of the polynomials $B_{n,k}(x)$.
Many questions remain. For example, is there a more natural bijective
proof for the formulas that we have derived for $B_{n,k}$ and $\mathbf B_k$?
Why do permutations that are $k$-bubble sortable define the
aforementioned juggling sequences?
\section{Descent polynomials, bubble sort and juggling sequences}
We first state some standard notation. The polynomial
\begin{equation*}
A_n(x) = \sum_{\pi\in\mathcal{S}_n} x^{\des(\pi)}
\end{equation*}
is called the $n$th \emph{Eulerian polynomial}. For instance,
$A_0(x)=A_1(x)=1$ and $A_2(x)=1+x$. Note that
$B_{n,k}(x) = A_n(x)$ for $k \geq n-1$, since $\maxdrop(\pi) \leq
n-1$ for all $\pi \in \mathcal{S}_n$. The coefficient of $x^k$ in
$A_n(x)$ is denoted $\euler{n}{k}$ and is called an \emph{Eulerian
number}. It is well known that (\cite{concrete})
\begin{equation}\label{euler_gf}
\frac{1-w}{e^{(w-1)z }- w }
= \sum_{k,n \geq 0} \euler{n}{k} w^k \frac{z^n}{n!}.
\end{equation}
The Eulerian numbers are also known to be given explicitly as
(\cite{euler, concrete})
\begin{equation*}
\euler{n}{k} = \sum_{i=0}^{k} \binom{n+1}{i}(k+1-i)^n (-1)^i .
\end{equation*}
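This explicit formula (the alternating sum taken over $0\le i\le k$) can be checked against direct descent counting; a small Python sketch (helper names are ours):

```python
from itertools import permutations
from math import comb

def eulerian_formula(n, k):
    # Alternating-sum formula for the Eulerian number <n over k>
    return sum((-1) ** i * comb(n + 1, i) * (k + 1 - i) ** n
               for i in range(k + 1))

def eulerian_count(n, k):
    # Number of permutations of S_n with exactly k descents
    return sum(1 for p in permutations(range(n))
               if sum(p[i] > p[i + 1] for i in range(n - 1)) == k)

assert all(eulerian_formula(n, k) == eulerian_count(n, k)
           for n in range(1, 7) for k in range(n))
print(eulerian_formula(4, 1))
```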
We define the operator $\bsort$ which acts recursively on permutations
via
\begin{equation*}
\bsort(LnR)=\bsort(L)Rn.
\end{equation*}
In other words, to apply $\bsort$ to a permutation $\pi$ in $\mathcal{S}_n$,
we split $\pi$ into (possibly empty) blocks $L$ and $R$ to the left
and right, respectively, of the largest element of $\pi$ (which
initially is $n$), interchange $n$ and $R$, and then recursively apply
this procedure to $L$. We will use the convention that
$\bsort(\emptyset) = \emptyset$; here $\emptyset$ denotes the
empty permutation. This operator corresponds to one \emph{pass} of the
classical bubble sort operation. Several interesting results on the
analysis of bubble sort can be found in
Knuth~\cite[pp. 106--110]{knuth}. We define the \emph{bubble sort
complexity} of $\pi$ as
\begin{equation*}
\bsc(\pi) = \min \{ k: \bsort^k(\pi)=\mbox{id}\},
\end{equation*}
the number of times $\bsort$ must be applied to $\pi$ to give the
identity permutation. The following lemma is easy to prove using
induction.
\begin{lemma}\label{bubblesort}
$\mathrm{(i)}$ For all permutations $\pi$ we have $\maxdrop(\pi) =
\bsc(\pi)$.\\
$\mathrm{(ii)}$ The bubble sort
operator maps $\mathcal{B}_{n,k}$ to $\mathcal{B}_{n,k-1}$.
\end{lemma}
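Both parts of the lemma can be verified exhaustively for small $n$; a minimal Python sketch (our own names) implements one pass of $\bsort$ directly from the recursive definition:

```python
from itertools import permutations

def bsort(pi):
    """One pass of bubble sort: bsort(L n R) = bsort(L) R n."""
    pi = list(pi)
    if not pi:
        return []
    m = pi.index(max(pi))
    return bsort(pi[:m]) + pi[m + 1:] + [pi[m]]

def bsc(pi):
    """Number of passes of bsort needed to reach the identity."""
    pi, count = list(pi), 0
    while pi != sorted(pi):
        pi, count = bsort(pi), count + 1
    return count

def maxdrop(pi):
    return max(i + 1 - v for i, v in enumerate(pi))

# Part (i): bsc(pi) == maxdrop(pi) for every permutation
assert all(bsc(p) == maxdrop(p) for p in permutations(range(1, 6)))
# Part (ii): bsort maps B_{n,k} to B_{n,k-1}
assert all(maxdrop(bsort(p)) <= maxdrop(p) - 1
           for p in permutations(range(1, 6)) if maxdrop(p) >= 1)
```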
The analysis of algorithms similar to bubble sort has been
instrumental in generating interesting research. For example, the
analysis of stack sort in Knuth~\cite[pp. 242--243]{knu} gave rise to
the area of pattern avoiding permutations. The stack sort operator
$\ssort$ is defined by $\ssort(LnR)=\ssort(L)\ssort(R)n$. We see below
that stack sort is at least as efficient as bubble sort.
\begin{lemma}
\label{stack}
For all $\pi \in \mathcal{S}_n$, if $\bsort^k(\pi) = \mbox{id}$ then
$\ssort^k(\pi)=\mbox{id}$.
\end{lemma}
\begin{proof}
The proof of Lemma \ref{stack} follows from the following claim:
\smallskip
{\it If $A=a_1a_2 \dots a_n=LmR$ is a sequence of
distinct positive integers and $m=\max_i a_i$, then
either $\maxdrop(A) = 1-a_1$ or $\maxdrop(\ssort(A)) \leq
\maxdrop(A) -1$.}\smallskip
The claim is certainly true for $n=1$. Suppose the claim is true for
$n' < n$. If $\maxdrop(A)=1-a_1$, we are done. We may assume that
$\maxdrop(A) > 1-a_1$. This implies that the maxdrop of $A$ does
not occur at the entry where $m$ is located. For the $i$th entry in
$\ssort(L)$, the maxdrop of $\ssort(L)$ at $i$ is reduced by one by
induction. For the $j$th entry in $R$, the maxdrop of the
corresponding entry in $\ssort(A)$ is reduced by $1$. Thus, the
claim is proved by induction.
\end{proof}
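Lemma~\ref{stack} can likewise be checked by machine for small $n$; the sketch below (our names) counts sorting passes for both operators:

```python
from itertools import permutations

def bsort(pi):
    """One pass of bubble sort: bsort(L n R) = bsort(L) R n."""
    pi = list(pi)
    if not pi:
        return []
    m = pi.index(max(pi))
    return bsort(pi[:m]) + pi[m + 1:] + [pi[m]]

def ssort(pi):
    """One pass of stack sort: ssort(L n R) = ssort(L) ssort(R) n."""
    pi = list(pi)
    if not pi:
        return []
    m = pi.index(max(pi))
    return ssort(pi[:m]) + ssort(pi[m + 1:]) + [pi[m]]

def passes(sort_fn, pi):
    pi, count = list(pi), 0
    while pi != sorted(pi):
        pi, count = sort_fn(pi), count + 1
    return count

# Stack sort never needs more passes than bubble sort
assert all(passes(ssort, p) <= passes(bsort, p)
           for p in permutations(range(1, 6)))
```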
The class of permutations $\mathcal{B}_{n,k}$ appears in a recent paper
\cite{jug} on enumerating juggling patterns that are usually
called \emph{siteswaps} by (mathematically inclined) jugglers.
Suppose a juggler throws a ball at time $i$ so that the ball will be
in the air for a time $t_i$ before landing at time $t_i +i$. Instead
of an infinite sequence, we will consider periodic patterns, denoted
by $T=(t_1, t_2, \dots, t_n)$. A \emph{juggling sequence} is just one
in which two balls never land at the same time. It is not hard to show
\cite{jug0} that a necessary and sufficient condition for a sequence
to be a juggling sequence is that all the values $t_i+i \pmod n$ are
distinct. In particular, it follows that the average of the $t_i$ is
just the number of balls being juggled. Here is an example:
If $T=(3,5,0,2,0)$ then at time 1 a ball is thrown that will land at
time $1+3=4$. At time 2 a ball is thrown that will land at time
$2+5=7$. At time 3 a ball is thrown that will land at time
$3+0=3$. Alternatively one can say that no ball is thrown at time 3.
This is represented in the following diagram. \ \\[0.6em]
\centerline{\scalebox{0.75}{\includegraphics{example1}}}
Repeating this for all intervals of length 5 gives \ \\[0.9em]
\centerline{\scalebox{0.75}{\includegraphics{example2}}}
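The juggling condition for this example can be checked directly: the landing times $t_i+i$ must be distinct modulo $n$, and the average throw height gives the number of balls.

```python
T = (3, 5, 0, 2, 0)
n = len(T)

# Juggling condition: all landing times t_i + i are distinct mod n
landings = [(t + i) % n for i, t in enumerate(T, start=1)]
assert len(set(landings)) == n

# Average throw height = number of balls: here 10/5 = 2 balls
print(sum(T) / n)
```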
For a given juggling sequence, it is often possible to further
decompose into shorter juggling sequences, called \emph{primitive
juggling sequences}, which themselves cannot be further decomposed.
These primitive juggling sequences act as basic building blocks for
juggling sequences \cite{jug}. However, in the other direction, it is
not always possible to combine primitive juggling sequences into a
longer juggling sequence. Nevertheless, if primitive juggling
sequences share a common \emph{state} (which one can think of as a
\emph{landing schedule}), we then can combine them to form a longer
and more complicated juggling sequences. In \cite{jug}, primitive
juggling sequences associated with a specified state are enumerated.
Here we mention the related fact concerning $\mathcal{B}_{n,k}$:\smallskip
{\it There is a bijection mapping permutations in $\mathcal{B}_{n,k}$ to
primitive juggling sequences of period $n$ with $k$ balls that all
share a certain state, called the ground state.}\smallskip
The bijection maps $\pi$ to $\phi(\pi)= (t_1, \dots, t_n) $ with $t_i
= k-i+\pi_i$. As a consequence of the above fact and Lemma
\ref{bubblesort}, we can use bubble sort to transform a juggling
sequence using $k$ balls to a juggling sequence using $k-1$ balls.
To make this more precise, let $T=(t_1,\dots, t_n)$ be a juggling
sequence that corresponds to $\pi \in \B_{n,k}$, and suppose that
$T'=(s_1,\dots , s_n)$ is the juggling sequence that corresponds to
$\bsort(\pi)$. Assume that the ball $B$ thrown at time $j$ is the one
that lands latest out of all the $n$ throws. In other words, $t_j+j$
is the largest element in $\{t_i+i\}_{i=1}^n$. Now, write $T=L t_j R $
where $L=(t_1,\dots,t_{j-1})$ and $R=(t_{j+1},\dots,t_n)$. Then we
have
\begin{equation*}
T' = f_k(T) = f_k(L)R\hspace{0.7pt}s,
\end{equation*}
where $s = t_j + j - (n+1)$. In other words, we have removed the ball
$B$ thrown at time $j$ and thus throw all balls after time $j$ one
time unit sooner. Then at time $n$ we throw the ball $B$ so that it
lands one time unit sooner than it would originally have landed. Finally,
we repeat this procedure for all the balls thrown before time $j$.
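The map $\phi$ is easy to test for small $n$ and $k$; the sketch below (plain Python, our names) checks that $\phi(\pi)=(t_1,\dots,t_n)$ with $t_i=k-i+\pi_i$ is a nonnegative juggling sequence with exactly $k$ balls for every $\pi\in\mathcal{B}_{n,k}$:

```python
from itertools import permutations

def phi(pi, k):
    """Map pi in B_{n,k} to the throw sequence with t_i = k - i + pi_i."""
    return [k - (i + 1) + v for i, v in enumerate(pi)]

def is_juggling(T):
    n = len(T)
    return (all(t >= 0 for t in T)
            and len({(t + i) % n for i, t in enumerate(T, start=1)}) == n)

n, k = 5, 2
for pi in permutations(range(1, n + 1)):
    if max(i + 1 - v for i, v in enumerate(pi)) <= k:
        T = phi(pi, k)
        # sum(T) = n*k, i.e. the average throw height is k balls
        assert is_juggling(T) and sum(T) == k * n
print("phi maps B_{5,2} to 2-ball juggling sequences")
```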
\section{The polynomials $B_{n,k}(y)$}
In this section we will characterise the polynomials $B_{n,k}(y)$.
This is done by first finding a recurrence for the polynomials and
then solving the recurrence by exploiting some aspects of their
associated characteristic polynomials. The latter step is quite
involved and so we present the special case dealing with $B_{n,4}(y)$
first.
\subsection{Deriving the recurrence for $B_{n,k}$}
We will derive the following recurrence for $B_{n,k}(y)$.
\begin{theorem}\label{b_rec_thm}
For $n \geq 0$,
\begin{equation}\label{rec}
B_{n+k+1,k} (y)
= \sum_{i=1}^{k+1} \binom{k+1}{i} (y-1)^{i-1} B_{n+k+1-i,k} (y)
\end{equation}
with the initial conditions
\begin{equation*}
B_{i,k}(y) = A_i(y),\quad 0 \leq i \leq k.
\end{equation*}
\end{theorem}
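The recurrence can be checked numerically for small $n$ and $k$ by evaluating both sides at a few integer points $y$ (the brute-force evaluation of $B_{n,k}(y)$ below is our own sketch, not the proof):

```python
from itertools import permutations
from math import comb

def B(n, k, y):
    """B_{n,k}(y) evaluated at y, by enumeration of S_n (B_{0,k} = 1)."""
    if n == 0:
        return 1
    total = 0
    for pi in permutations(range(1, n + 1)):
        if max(i + 1 - v for i, v in enumerate(pi)) <= k:
            total += y ** sum(pi[i] > pi[i + 1] for i in range(n - 1))
    return total

def rhs(n, k, y):
    """Right-hand side of the recurrence for B_{n+k+1,k}(y)."""
    return sum(comb(k + 1, i) * (y - 1) ** (i - 1) * B(n + k + 1 - i, k, y)
               for i in range(1, k + 2))

for k in (1, 2):
    for n in (0, 1, 2):
        for y in (2, 3, 5):
            assert B(n + k + 1, k, y) == rhs(n, k, y)
print("recurrence verified for k = 1, 2")
```

For instance, with $k=1$ and $n=0$ the recurrence gives $B_{2,1}(y)=2B_{1,1}(y)+(y-1)B_{0,1}(y)=1+y$, matching the two permutations $12$ and $21$.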
We use the notation $[a,b] =\{i\in\mathbb{Z} :a\leq i\leq b\}$ and
$[b]=[1,b]$. Let $A=\{a_1,\dots,a_n\}$ with $a_1<\dots<a_n$ be any
finite subset of $\mathbb{N}$. The \emph{standardization} of a permutation
$\pi$ on $A$ is the permutation $\st(\pi)$ on $[n]$ obtained from
$\pi$ by replacing the integer $a_i$ with the integer $i$. Thus $\pi$
and $\st(\pi)$ are order isomorphic. For example, $\st(19452) =
15342$. If the set $A$ is fixed, the inverse of the standardization
map is well defined, and we denote it by $\st^{-1}_A(\sigma)$; for
instance, with $A=\{1,2,4,5,9\}$, we have $\st^{-1}_A(15342)=19452$.
Note that $\st$ and $\st^{-1}_A$ each preserve the descent set.
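The standardization map and its inverse can be sketched as follows (Python, names are ours), reproducing the examples above:

```python
def st(pi):
    """Standardization: replace the i-th smallest entry by i."""
    order = {v: i + 1 for i, v in enumerate(sorted(pi))}
    return [order[v] for v in pi]

def st_inv(A, sigma):
    """Inverse standardization onto the fixed set A."""
    a = sorted(A)
    return [a[s - 1] for s in sigma]

def des_set(p):
    return {i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1]}

# st(19452) = 15342 and st^{-1}_{{1,2,4,5,9}}(15342) = 19452
assert st([1, 9, 4, 5, 2]) == [1, 5, 3, 4, 2]
assert st_inv({1, 2, 4, 5, 9}, [1, 5, 3, 4, 2]) == [1, 9, 4, 5, 2]
# both maps preserve the descent set
assert des_set([1, 9, 4, 5, 2]) == des_set(st([1, 9, 4, 5, 2]))
```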
For any set $S \subseteq [n-1]$ we define
$\mathcal{A}_{n,k}(S) = \{ \pi\in\B_{n,k} : \Des(\pi) \supseteq S \}$
and
\begin{equation*}
\tl_n(S) = \max\{ i\in\mathbb{N} : [n-i,n-1] \subseteq S\}.
\end{equation*}
Note that $\tl_n(S)=0$ in the case that $n-1$ is not a member of
$S$. Now, for any permutation $\pi=\pi_1\dots\pi_n$ in $\mathcal{A}_{n,k}(S)$ define
\begin{equation*}
f(\pi) = (\sigma, X),\;\text{ where }
\sigma=\st(\pi_1\dots\pi_{n-i-1}),
X=\{\pi_{n-i},\dots ,\pi_n\} \text{ and } i=\tl_n(S).
\end{equation*}
\begin{example}
Let $S=\{3,7,8\}$, and choose the permutation $\pi = 138425976$ in
$\mathcal{A}_{9,3}(S)$. Notice that $\Des(\pi)=\{3,4,7,8\} \supset S$. Now
$\tl_9(S)=2$. This gives $f(\pi) = (\sigma, X)$ where $\sigma =
\st(138425) = 136425$ and $X=\{\pi_7, \pi_8,\pi_9\}=\{6,7,9\}$.
Hence $f(138425976)=(136425, \{6,7,9\})$.
\end{example}
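A small sketch of $f$ (Python, our names) reproduces the worked example:

```python
def tl(n, S):
    """Largest i such that [n-i, n-1] is contained in S (0 if n-1 not in S)."""
    i = 0
    while n - 1 - i in S:
        i += 1
    return i

def st(pi):
    order = {v: j + 1 for j, v in enumerate(sorted(pi))}
    return [order[v] for v in pi]

def f(pi, S):
    """f(pi) = (st(pi_1 ... pi_{n-i-1}), {pi_{n-i}, ..., pi_n}), i = tl_n(S)."""
    n, i = len(pi), tl(len(pi), S)
    return st(pi[:n - i - 1]), set(pi[n - i - 1:])

sigma, X = f([1, 3, 8, 4, 2, 5, 9, 7, 6], {3, 7, 8})
assert sigma == [1, 3, 6, 4, 2, 5] and X == {6, 7, 9}
```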
\begin{lemma}
For any $\pi$ in $\mathcal{A}_{n,k}(S)$, the image $f(\pi)$ is in the
Cartesian product
\begin{equation*}
\mathcal{A}_{n-i-1,k}(S\cap [n-\tl_n(S)-2])\times\binom{[n-k,n]}{\tl_n(S)+1},
\end{equation*}
where $\binom{X}{m}$ denotes the set of all $m$-element subsets
of the set $X$.
\end{lemma}
\begin{proof}
Given $\pi\in\mathcal{A}_{n,k}(S)$, let $f(\pi)=(\sigma,X)$. Suppose
$i=\tl_n(S)$. Then there are descents at positions $n-i, \dots ,
n-1$ (this is an empty sequence in case $i=0$). Thus
\begin{equation*}
n\geq\pi_{n-i}>\pi_{n-i+1}>\dots > \pi_{n-1}>\pi_n\geq n-k,
\end{equation*}
where the last inequality follows from the assumption that
$\maxdrop(\pi)\leq k$. Hence $X$ is an $(i+1)$-element subset of
$[n-k,n]$, as claimed. Clearly $\sigma\in\mathcal{S}_{n-i-1}$.
Next we shall show that $\sigma$ is in $\mathcal{A}_{n-i-1,k}$. Notice that
the entries of $(\pi_1,\dots, \pi_{n-i-1})$ that do not change
under standardization are those $\pi_{\ell}$ which are
$<\pi_n$. Since these values remain unchanged, the values
$\ell-\pi_{\ell}$ are also unchanged and are thus $\leq k$.
Let $(\pi_{a(1)},\dots, \pi_{a(m)})$ be the subsequence of values
which are $>\pi_n$. The smallest value that any of these may take
after standardization is $\pi_n \geq n-k$. So $\sigma_{a(j)} \geq
\pi_n \geq n-k$ for all $j \in [1,m]$. Thus $a(j)-\sigma_{a(j)}
\leq a(j) - (n-k) = k-(n-a(j)) \leq k$ for all $j \in [1,m]$.
Therefore $\ell - \sigma_{\ell} \leq k$ for all $\ell \in [1,n-i-1]$
and so $\sigma \in \mathcal{A}_{n-i-1,k}$.
The descent set is preserved under standardization, and consequently
$\sigma$ is in $\mathcal{A}_{n-i-1,k}(S\cap [n-i-2])$, as claimed.
\end{proof}
We now define a function $g$ which will be shown to be the inverse of
$f$. Let $\pi$ be a permutation in $\mathcal{A}_{m,k}(T)$, where $T$ is a
subset of $[m-1]$. We will add $i+1$ elements to $\pi$ to yield a new
permutation in $\mathcal{A}_{m+i+1,k}(T \cup [m+1,m+i])$. Choose any
$(i+1)$-element subset $X$ of the interval $[m+i+1-k,m+i+1]$, and let
us write $X=\{x_1,\dots , x_{i+1}\}$, where $x_1\leq\dots\leq
x_{i+1}$. Define
\begin{equation*}
g(\pi,X)=\st^{-1}_V (\pi_1\dots\pi_m) \, x_{i+1}x_i\dots x_1,\,
\text{ where }V=[m+i+1]\setminus X.
\end{equation*}
\begin{example}
Let $T=\{1\}$, and choose the permutation $\pi=3142$ in $\mathcal{A}_{4,3}
(T)$. Notice that $\Des(\pi)=\{1,3\}\supseteq T$. Choose $i=2$ and
select a subset $X$ from $[4+2+1-3,4+2+1]=\{4,5,6,7\}$ of size
$i+1=3$. Let us select $X=\{4,6,7\}$. Now we have
$g(\pi,X) =\st^{-1}_V(3142)\,764=3152764$,
where $V$ is the set $[4+2+1]\setminus \{4,6,7\} = \{1,2,3,5\}$.
\end{example}
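A sketch of $g$ (Python, our names) reproduces this example:

```python
def st_inv(V, sigma):
    """Inverse standardization onto the set V."""
    a = sorted(V)
    return [a[s - 1] for s in sigma]

def g(pi, X):
    """g(pi, X) = st^{-1}_V(pi) followed by the elements of X in decreasing
    order, where V = [m+i+1] \\ X."""
    m = len(pi)
    xs = sorted(X)                      # x_1 <= ... <= x_{i+1}
    V = set(range(1, m + len(xs) + 1)) - set(xs)
    return st_inv(V, pi) + xs[::-1]

# g(3142, {4,6,7}) = 3152764
assert g([3, 1, 4, 2], {4, 6, 7}) == [3, 1, 5, 2, 7, 6, 4]
```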
\begin{lemma}
If $(\pi,X)$ is in the Cartesian product
\begin{equation*}
\mathcal{A}_{m,k}(T)\times\binom{[m+i+1-k,m+i+1]}{i+1}
\end{equation*}
for some $i\geq0$ then $g(\pi,X)$ is in
\begin{equation*}
\mathcal{A}_{m+i+1,k}(T\cup [m+1, m+i]).
\end{equation*}
\end{lemma}
\begin{proof}
Let $\sigma=g(\pi,X)$. For the first $m$ elements of $\sigma$, since
$\sigma_j \geq \pi_j$ for all $1\leq j \leq m$, we have $j-\sigma_j
\leq j-\pi_j$
which gives
\begin{equation*}
\max\{j-\sigma_j: j\in[m]\}\leq\max\{j-\pi_j : j\in[m]\}\leq k.
\end{equation*}
The final $i+1$ elements of $\sigma$ are decreasing so the
$\maxdrop$ of these elements will be the $\maxdrop$ of the final
element,
\begin{equation*}
m+i+1-\sigma_{m+i+1} = m+i+1 - x_1 \leq m+i+1-(m+i+1-k) = k.
\end{equation*}
Thus $\maxdrop(\sigma) \leq k$ and so $\sigma\in\B_{m+i+1,k}$.
The descents of $\sigma$ will be in the set $T \cup
[m+1,m+i]$ since descents are preserved under standardization
and the final $i+1$
elements of $\sigma$ are listed in decreasing order. Hence
$\sigma\in \mathcal{A}_{m+i+1,k}(T\cup [m+1, m+i])$, as claimed.
\end{proof}
\begin{lemma}
The function $f$ is a bijection, and $g$ is its inverse.
\end{lemma}
\begin{proof}
Given any $(\sigma,X) \in \mathcal{A}_{m,k}(T)
\times\binom{[m+j+1-k,m+j+1]}{j+1}$ where $T \subseteq [m-1]$, let
$\pi=g(\sigma,X)$. We have
\begin{equation*}
\pi=\st^{-1}_{[m+j+1]\setminus X}(\sigma_1\dots\sigma_m)
\,x_{j+1}x_j\dots x_1 \in \mathcal{A}_{m+j+1,k}(T\cup[m+1,m+j]),
\end{equation*}
where $X=\{x_1,\dots, x_{j+1}\}$ and $x_1\leq\dots\leq x_{j+1}$.
Let $S=T\cup [m+1,m+j]$. Clearly $\Des(g(\sigma,X)) \supseteq S$
and $i=\tl_{m+j+1}(S)=j$. So $f(g(\sigma,X)) = (\tau,Y)$ where
\begin{equation*}
Y=\{\pi_{m+1},\dots , \pi_{m+j+1}\} = \{x_1,\dots, x_{j+1}\}= X
\end{equation*}
and
\begin{equation*}
\tau =
\st(\st^{-1}_{[1,m+j+1]\setminus X}(\sigma_1\dots\sigma_m)) = \sigma.
\end{equation*}
Hence
$f(g(\sigma,X))=(\sigma,X)$.
Given $\pi\in\mathcal{A}_{n,k}(S)$, let $f(\pi)=(\sigma,X)$ with
$X=\{x_1,\dots , x_{i+1}\}$ and
$\sigma = \st(\pi_1\dots\pi_{n-(i+1)})$.
We have
\begin{align*}
g(\sigma,X)
&=\st^{-1}_{[1,n]\setminus X} (\sigma_1\dots\sigma_{n-i-1})
x_{i+1}\dots x_1\\
&= \st^{-1}_{[1,n]\setminus X}(\st(\pi_1\dots\pi_{n-i-1}))
\pi_{n-i}\dots \pi_n \\
&= \pi_1\dots \pi_{n-i-1}\pi_{n-i}\dots \pi_n \\
&= \pi.
\end{align*}
Hence $g(f(\pi))=\pi$.
\end{proof}
\begin{corollary}\label{rec_corol}
Let $a_{n,k}(S) = |\mathcal{A}_{n,k}(S)|$ and $i=\tl_n(S)$. Then, for $n \geq k+1$,
\begin{equation*}
a_{n,k}(S) = \binom{k+1}{i+1} a_{n-(i+1),k} (S\cap [1,n-(i+1)]).
\end{equation*}
\end{corollary}
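The corollary is easy to confirm by exhaustive search in small cases. The Python sketch below (helper names are ours; it uses the convention $a_{0,k}(\emptyset)=1$ for the empty permutation) computes both sides by brute force for every descent set when $n \geq k+1$.

```python
from itertools import permutations
from math import comb

def maxdrop(p):
    return max(i + 1 - v for i, v in enumerate(p))  # positions are 1-indexed

def descents(p):
    return {i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1]}

def a(n, k, S):
    """a_{n,k}(S): permutations of [n] with maxdrop <= k whose descent set contains S."""
    if n == 0:
        return 1
    return sum(1 for p in permutations(range(1, n + 1))
               if maxdrop(p) <= k and descents(p) >= S)

def tl(n, S):
    """Tail length tl_n(S): largest i with {n-i, ..., n-1} contained in S."""
    i = 0
    while n - 1 - i >= 1 and n - 1 - i in S:
        i += 1
    return i

# Check the corollary for every S subset of [n-1] when n >= k+1.
for n in (4, 5):
    for k in (1, 2, 3):
        for bits in range(2 ** (n - 1)):
            S = {i + 1 for i in range(n - 1) if bits >> i & 1}
            i = tl(n, S)
            assert a(n, k, S) == comb(k + 1, i + 1) * a(n - i - 1, k,
                                                        {s for s in S if s <= n - i - 1})
print("corollary verified for n = 4, 5 and k = 1, 2, 3")
```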
\begin{proposition}\label{prop_rec}
For all $n\geq k+1$,
$$\B_{n,k}(y+1) =
\sum_{i=1}^{k+1} {k+1 \choose i} y^{i-1} \B_{n-i,k} (y+1).
$$
\end{proposition}
\begin{proof}
Notice that
\begin{align*}
\B_{n,k}(y+1)
&= \sum_{\pi \in \B_{n,k}} (y+1)^{\des(\pi)} \\
&= \sum_{\pi \in \B_{n,k}} \sum_{i=0}^{\des(\pi)} \binom{\des(\pi)}{i} y^i \\
&= \sum_{\pi \in \B_{n,k}} \sum_{S\subseteq \Des(\pi)} y^{|S|}\\
&= \sum_{S\subseteq [n-1]} y^{|S|} \sum_{\pi \in \mathcal{A}_{n,k}(S)} 1
= \sum_{S\subseteq [n-1]} y^{|S|} a_{n,k}(S).
\end{align*}
Applying Corollary \ref{rec_corol} to each summand $a_{n,k}(S)$, we have
\begin{align*}
\B_{n,k}(y+1)
&= \sum_{S \subseteq [n-1]} y^{|S|} \binom{k+1}{\tl_n(S)+1} a_{n-(\tl_n(S)+1),k} (S \cap [n-(\tl_n(S)+2)]) \\
&= \sum_{i \geq 0 } \sum_{S \subseteq [n-1]\atop \tl_n(S)=i} y^i y^{|S|-i} \binom{k+1}{i+1} a_{n-(i+1),k} (S \cap [n-(i+2)]) \\
&= \sum_{i \geq 0 } \binom{k+1}{i+1} y^i \sum_{S \subseteq [n-1]\atop \tl_n(S)=i} a_{n-(i+1),k} (S \cap [n-(i+2)]) y^{|S|-i}\\
&= \sum_{i \geq 0 } \binom{k+1}{i+1} y^i \sum_{S \subseteq [n-(i+1)]} a_{n-(i+1),k} (S) y^{|S|}\\
&= \sum_{i \geq 0 } \binom{k+1}{i+1} y^i \B_{n-(i+1),k} (y+1) \\
&= \sum_{i \geq 1 } \binom{k+1}{i} y^{i-1} \B_{n-i,k} (y+1).
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{b_rec_thm}]
Replacing $n$ and $y$ by $n+k+1$ and $y-1$, respectively, in
Proposition \ref{prop_rec} yields the recurrence (\ref{rec}):
\begin{equation*}
B_{n+k+1,k}(y) = \sum_{i=1}^{k+1} \binom{k+1}{i} (y-1)^{i-1} B_{n+k+1-i,k} (y)
\end{equation*}
for $n \geq 0$, with the initial conditions
$B_{i,k}(y) = A_i(y),\,0 \leq i \leq k$.
\end{proof}
Consequently, by multiplying the above recurrence by $z^n$ and summing
over all $n\geq 0$, we have the generating function $\mathbf B_k(z,y)$:
\begin{equation}\label{BB_gen}
\mathbf B_k(z,y) = \dfrac{\displaystyle{ 1+\sum_{t=1}^k \left(
A_t(y) - \sum_{i=1}^t \binom{k+1}{i} (y-1)^{i-1} A_{t-i}(y)
\right)z^t }}{\displaystyle{ 1 - \sum_{i=1}^{k+1}\binom{k+1}{i}z^i (y-1)^{i-1} }}.
\end{equation}
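The recurrence (\ref{rec}) can be sanity-checked numerically by evaluating both sides at a few integer values of $y$. The Python sketch below (our own brute-force helpers, standard library only) does this for small $k$ and $n$.

```python
from itertools import permutations
from math import comb

def B_poly(n, k):
    """Coefficient list of B_{n,k}(y): entry d counts the maxdrop<=k
    permutations of [n] with exactly d descents."""
    if n == 0:
        return [1]
    coeffs = [0] * n
    for p in permutations(range(1, n + 1)):
        if max(i + 1 - v for i, v in enumerate(p)) <= k:
            coeffs[sum(1 for i in range(n - 1) if p[i] > p[i + 1])] += 1
    return coeffs

def evalp(c, y):
    return sum(a * y ** i for i, a in enumerate(c))

# Check B_{n+k+1,k}(y) = sum_{i=1}^{k+1} C(k+1,i) (y-1)^{i-1} B_{n+k+1-i,k}(y).
for k in (1, 2):
    for n in range(4):
        m = n + k + 1
        for y in (-1, 2, 3):
            rhs = sum(comb(k + 1, i) * (y - 1) ** (i - 1) * evalp(B_poly(m - i, k), y)
                      for i in range(1, k + 2))
            assert evalp(B_poly(m, k), y) == rhs
print("recurrence verified for k = 1, 2")
```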
\subsection{Solving the recurrence for $B_{n,4}$}
Before we proceed to solve the recurrence for $B_{n,k}$, we first
examine the special case of $k=4$ which is quite illuminating. We note
that the characteristic polynomial for the recurrence for $B_{n,4}$ is
\begin{align*}
h(z)
&= z^5 - 5z^4+10(1-y)z^3 - 10(1-y)^2z^2 + 5(1-y)^3z -(1-y)^4\\
&= \frac{(z-1+y)^5 -yz^5}{1-y}.
\end{align*}
Substituting $y = t^5$ in the expression above, we see that the
roots of $h(z)$ are just
\begin{equation*}
\rho_j(t) = \frac{1-t^5}{1-\omega^j t},\quad 0\leq j \leq 4,
\end{equation*}
where $\omega = \exp(\frac{2 \pi \mathfrak{i}}{5})$ is
a primitive $5$th root of unity. Hence, the general term for
$B_{n,4}$ can be written as
\begin{equation*}
B_{n,4}(t^5) = \sum_{i=0}^4 \alpha_i(t) \rho_i^n(t)
\end{equation*}
where the $\alpha_i(t)$ are appropriately chosen coefficients
(polynomials in $t$). To determine the $\alpha_i(t)$ we need to solve
the following system of linear equations:
\begin{equation*}
\sum_{i=0}^4 \alpha_i(t) \rho_i^j(t)
= B_{j,4}(t^5) = A_j(t^5), \, 0 \leq j \leq 4.
\end{equation*}
Thus, $\alpha_i(t)$ can be expressed as the ratio
$N_{4,i+1}(t)/D_4(t)$ of two determinants. The denominator
$D_4(t)$ is just a standard Vandermonde determinant whose $(i+1,j+1)$
entry is $\rho_i^j(t)$. The numerator $N_{4,i+1}(t)$ is formed from
$D_4(t)$ by replacing the elements $\rho_i^j(t)$ in the $(i+1)$st row
by $A_j(t^5)$. A quick computation (using the symbolic
computation package Maple) gives:
\begin{align*}
D_4(t) &= 25 \sqrt 5 \,(1-t^5)^6 t^{10}; \\
N_{4,1}(t) &=5 \sqrt 5 \,(t^{12}+t^{10}+2t^9+4t^8+4t^7+4t^5+4t^4+2t^3+t^2+1)
(1-t^5)^3 (1-t)^3t^{10}
\end{align*}
and, in general, $N_{4,i+1}(t) = N_{4,1}(\omega^i t).$
Substituting the value $\alpha_0 (t) = N_{4,1}(t)/D_4(t)$ into the
first term in the expansion of $B_{n,4}$, we get
\begin{multline*}
\alpha_0 (t)(1+t+t^2+t^3+t^4)^n \\
=\tfrac{1}{5}(t^{12}+t^{10}+2t^9+4t^8+4t^7+4t^5+4t^4+2t^3+t^2+1)
(1+t+t^2+t^3+t^4)^{n-3}.
\end{multline*}
Now, since the other four terms $\alpha_i(t)(1+t+t^2+t^3+t^4)^n$ arise
by replacing $t$ by $\omega^i t$, in the sum of all five terms
the only powers of $t$ that survive are those which
are multiples of $5$. Thus, we can conclude that if we write
\begin{equation*}
(t^{12}+t^{10}+2t^9+4t^8+4t^7+4t^5+4t^4+2t^3+t^2+1)(1+t+t^2+t^3+t^4)^{n-3}
= \sum_r \beta(r) t^r
\end{equation*}
then $b_{n,4}(d) = \beta(5d)$. In other words, the number of
permutations $\pi \in \mathcal{B}_{n,4}$ with $d$ descents is given by the
coefficient of $t^{5d}$ in the expansion of the above polynomial.
Incidentally, the corresponding results for $k=1,2,3$ are
as follows: $b_{n,1}(d) = \beta(2d)$ in the expansion of
\begin{equation*}
(1+t)^n = \sum_r \beta(r) t^r ,
\end{equation*}
so $b_{n,1}(d)=\binom{n}{2d}$; $b_{n,2}(d) = \beta(3d)$ in
the expansion of
\begin{equation*}
(1+t^2)(1+t+t^2)^{n-1} = \sum_r \beta(r) t^r;
\end{equation*}
and $b_{n,3}(d) = \beta(4d)$ in the expansion of
\begin{equation*}
(1+t^2+2t^3+t^4+t^6)(1+t+t^2+t^3)^{n-2} = \sum_r \beta(r) t^r.
\end{equation*}
The preceding arguments have now set the stage for dealing with the
general case of $B_{n,k}$. Of course, the arguments will be somewhat
more involved but it is hoped that treating the above special case
will be a useful guide for the reader.
\subsection{Solving the recurrence for $B_{n,k}$}
\begin{theorem}
We have
$B_{n,k}(y) = \sum_d \beta_k\big((k+1)d\big) y^{d}$, where
\begin{equation*}
\sum_j \beta_k(j) u^j
= P_k(u) \left( \frac{1-u^{k+1}}{1-u} \right)^{n-k}
\end{equation*}
and
\begin{equation*}
P_k(u) =
\sum_{j=0}^k
A_{k-j}(u^{k+1}) (u^{k+1}-1)^{j} \sum_{i=j}^k \binom{i}{j} u^{-i}.
\end{equation*}
\end{theorem}
\begin{proof}
To solve (\ref{rec}) for $B_{n,k}$, we first need to compute the
roots of the corresponding characteristic polynomial
\begin{equation*}
z^{k+1} - \sum_{i=1}^{k+1} \binom{k+1}{i} (y-1)^{i-1} z^{k+1-i},
\end{equation*}
which, up to the constant factor $\frac{1}{1-y}$, can be rewritten as
\begin{equation*}
\big(z - (1-y)\big)^{k+1} - yz^{k+1}.
\end{equation*}
Substituting $y = u^{k+1}$, this becomes
\begin{equation*}
\big(z-(1-u^{k+1})\big)^{k+1} - u^{k+1}z^{k+1}.
\end{equation*}
The $k+1$ roots of this polynomial are easily seen to be the
expressions
\begin{equation*}
\rho_i = \rho_i(u)
= \frac{1-u^{k+1}}{1-\omega^i u},\quad 0 \leq i \leq k,
\end{equation*}
where $\omega = \exp(\frac{2 \pi \mathfrak{i}}{k+1})$ is a primitive
$(k+1)$st root of unity. Thus, we can express $B_{n,k}$ in the form
\begin{equation*}
B_{n,k}(u^{k+1}) = \sum_{i=0}^k \alpha_{k,i}(u) \rho_i^n
\end{equation*}
for an appropriate choice of coefficients $\alpha_{k,i}$ (which depend
on the initial conditions). In fact, writing down the expressions for
the first $k+1$ $B_{n,k}$'s, we have:
\begin{equation*}
B_{j,k}(u^{k+1})
= \sum_{i=0}^k \alpha_{k,i}(u) \rho_i^j
= A_j(u^{k+1}),\quad \mbox{for } 0 \leq j \leq k.
\end{equation*}
We can solve this as a system of $k+1$ linear equations in the $k+1$
unknown coefficients $\alpha_{k,i}(u)$ by representing the solution
in the usual way as a ratio of two determinants. In particular, the
expression for $\alpha_{k,0}$ is given by
\begin{equation}\label{alpha_0}
\alpha_{k,0}(u) = \frac{\det R_k(u)}{\det S_k(u)}
\end{equation}
where $S_k(u)$ and $R_k(u)$ are ($k+1$) by ($k+1$) matrices defined by
\begin{equation*}
S_k(u) = (S_k(i+1,j+1))\;\text{ with }\,S_k(i+1,j+1) = \rho_i^j
\end{equation*}
and
\begin{equation*}
R_k(u) =\big(R_k(i+1,j+1)\big) \;\text{ with }\, R_k(i+1,j+1)
=
\begin{cases}
A_{j}(u^{k+1})& \text{ if } i = 0, \\
\rho_i^j & \text{ if } i > 0.
\end{cases}
\end{equation*}
Now the bottom determinant is a standard Vandermonde determinant which
has the value
\begin{equation*}
\det S_k(u) = \prod_{0 \leq i < j \leq k} (\rho_j - \rho_i).
\end{equation*}
The top determinant is almost a Vandermonde determinant (except for
the first row). Its value has the form
\begin{equation*}
\det R_k(u) = \mathsf{Top}_k(u) \cdot \prod_{0 < i < j \leq k} (\rho_j - \rho_i)
\end{equation*}
where $\mathsf{Top}_k(u)$ is a polynomial in $u$ which we will soon determine.
Hence, in the ratio (\ref{alpha_0}), the terms which do not involve
$\rho_0$ cancel, leaving the reduced form
\begin{equation*}
\alpha_{k,0}(u) = \frac{\mathsf{Top}_k(u)}{\prod_{j > 0} (\rho_j - \rho_0)}.
\end{equation*}
However, we have
\begin{align}
\prod_{j > 0} (\rho_0 - \rho_j)
&= \prod_{j>0} \left( \frac{1-u^{k+1}}{1-u} -
\frac{1-u^{k+1}}{1-\omega^j u} \right)\nonumber \\
&= \left( \prod_{j>0} \frac{(1-\omega^j) u}{(1-u)(1-\omega^j u)}\right)
(1-u^{k+1})^k \nonumber \\
&= \frac{(1-u^{k+1})^k u^k}{(1-u)^k}\cdot
\frac{\prod_{j>0} (1-\omega^j) }{\prod_{j>0} (1-\omega^j u)} \nonumber\\
&= \frac{(1-u^{k+1})^k u^k}{(1-u)^k}\cdot
\frac{k+1}{\frac{1-u^{k+1}}{1-u}}
= \frac{(1-u^{k+1})^{k-1} u^k}{(1-u)^{k-1}} \cdot (k+1).
\label{bot2}
\end{align}
On the other hand, for the top we have by standard properties of
Vandermonde determinants:
\begin{equation}\label{top}
\mathsf{Top}_k(u) = \sum_{j \geq 0} (-1)^{k-j} A_{k-j}(u^{k+1})
\mathsf{S}_{k,j} (\rho_1,\dots, \rho_k),
\end{equation}
where $\mathsf{S}_{k,j}(x_1, \dots, x_k)$ is the elementary symmetric
function of degree $j$ in the $k$ variables $x_1, \dots, x_k$ (and
we recall that $A_t(y) = \sum_{j \geq 0} \euler{t}{j} y^j$).
Now consider the generating function
\begin{align*}
X_k(z)
&= (z-\rho_0) (z-\rho_1) \dots (z-\rho_k) \\
&= \sum_{t \geq 0}(-1)^t\mathsf{S}_{k+1,t}(\rho_0,\dots,\rho_k)z^{k+1-t} \\
&= \prod_{i=0}^{k}\left(z - \frac{1-u^{k+1}}{1-\omega^i u} \right) \\
&= \frac{\prod_{i=0}^{k} (z-(1-u^{k+1}) - \omega^i uz)}
{\prod_{i=0}^{k} (1-\omega^i u)} \\
&= \frac{\left(z-(1-u^{k+1})\right)^{k+1}-u^{k+1} z^{k+1}}{1 - u^{k+1}}.
\end{align*}
What we are interested in is
\begin{align*}
Y_k(z) = \frac{X_k(z)}{z-\rho_0}
&= \sum_{i \geq 0} (-1)^i\mathsf{S}_{k,i}(\rho_1,\dots,\rho_k) z^{k-i}\\
&= \frac{\left( z-(1-u^{k+1})\right)^{k+1}
- u^{k+1}z^{k+1}}{(1-u^{k+1})\big(z - \frac{1-u^{k+1}}{1-u}\big)}\\
&= \frac{1-u}{1-u^{k+1}} \sum_{i=0}^k (z-1+u^{k+1})^i (uz)^{k-i} \\
&= \frac{1-u}{1-u^{k+1}} \sum_{i=0}^k
\sum_{j=0}^i(-1)^j\binom{i}{j}(1 - u^{k+1})^j u^{k-i}z^{k-j}\\
&= \frac{1-u}{1-u^{k+1}} \sum_{j=0}^k(1 - u^{k+1})^j z^{k-j}
\sum_{i = j}^k (-1)^j \binom{i}{j} u^{k-i}.
\end{align*}
Thus, by identifying the coefficient of $z^{k-j}$, we have
\begin{equation*}
\mathsf{S}_{k,j}(\rho_1, \dots, \rho_k)
= \frac{(1-u)(1 - u^{k+1})^{j}}{1-u^{k+1}}\sum_{i=j}^k\binom{i}{j}u^{k-i}.
\end{equation*}
Therefore, we have
\begin{align*}
\mathsf{Top}_k(u)
&= \sum_{j \geq 0}(-1)^{k-j}A_{k-j}(u^{k+1})
\mathsf{S}_{k,j}(\rho_1,\dots, \rho_k) \\
&= \sum_{j \geq 0} (-1)^{k-j}A_{k-j}(u^{k+1})
\frac{(1-u)(1 - u^{k+1})^{j}}{1-u^{k+1}}\sum_{i=j}^k\binom{i}{j}u^{k-i}.
\end{align*}
As a consequence, we find by (\ref{top}) and (\ref{bot2}) that
\begin{align}\label{ex}
\lefteqn{\alpha_{k,0}(u) \rho_0^n} \nonumber\\
&= \frac{\mathsf{Top}_k(u)}{(-1)^k \prod_{j > 0} (\rho_0 - \rho_j)}
\left(\frac{1-u^{k+1}}{1-u}\right)^{\!n}\\
&= \frac{1}{(k+1)u^k}\left( \frac{1-u^{k+1}}{1-u} \right)^{\!n-k}
\sum_{j=0}^k(-1)^{j}A_{k-j}(u^{k+1}) (1-u^{k+1})^{j}
\sum_{i=j}^k \binom{i}{j} u^{k-i}\nonumber\\
&= \frac{1}{(k+1)} \left( \frac{1-u^{k+1}}{1-u} \right)^{\!n-k}
\sum_{j=0}^kA_{k-j}(u^{k+1}) (u^{k+1}-1)^{j}
\sum_{i=j}^k \binom{i}{j} u^{-i}.\nonumber
\end{align}
(We should keep in mind that this expression actually is a polynomial
in $u$.) To determine the other coefficients $\alpha_{k,t}(u)$, we
make the following observations. First, for $1 \leq t \leq k$, we can
cyclically permute the rows of the (Vandermonde) matrix $S_k(u)$ to
form the new matrix $S_k^{(t)}(u) = (S_k^{(t)}(i+1,j+1)) =
(\rho_i^{j+t})$. We can then form the corresponding matrix
$R_k^{(t)}(u)$ by replacing the top row of $S_k^{(t)}(u)$ by
$A_0(u^{k+1}), \dots, A_k(u^{k+1})$. In this way, we can
express the coefficient $\alpha_{k,t}(u)$ as:
\begin{equation}\label{alpha_t}
\alpha_{k,t}(u) = \frac{\det R_k^{(t)}(u)}{\det S_k^{(t)}(u)}.
\end{equation}
However, observe that the resulting computations for determining
$\alpha_{k,t}(u)$ are exactly the same as those for
$\alpha_{k,0}(u)$ where we replace $u$ by $\omega^t u$. This is
because $A_j(u^{k+1}) = A_j\big((\omega^t u)^{k+1}\big)$.
Consequently,
\begin{equation*}
\alpha_{k,t}(u) = \alpha_{k,0}(\omega^t u).
\end{equation*}
Therefore, for each $n$, the expression $\alpha_{k,t}(u) \rho_t^n$
as a polynomial in $u$ can be obtained from $ \alpha_{k,0}(u) \rho_0^n
$ by replacing $u$ by $\omega^t u$. However, this implies that in the
sum $ \sum_{t=0}^k \alpha_{k,t}(u) \rho_t^n$, the only terms that
survive are those powers $u^m$ of $u$ which are multiples of $k+1$,
since for $m \not\equiv 0 \pmod {k+1}$, we have $\sum_{i=0}^k
\omega^{mi} = 0$. On the other hand, for $u^m$ with
$m \equiv 0 \pmod {k+1}$, the coefficient gets multiplied by $k+1$
since in this case all the powers $\omega^{mt}$ equal $1$. Consequently,
if we write
\begin{equation}\label{final}
(k+1) \alpha_{k,0}(u) \rho_0^n = \sum_j \beta_k(j) u^j
\end{equation}
then we have
\begin{equation*}
B_{n,k}(y) = \sum_d \beta_k\big(\,(k+1)d\,\big)\, y^{d}.
\end{equation*}
In other words, the number $b_{n,k} (r)$ of permutations in $\mathcal{B}_{n,k}$
having exactly $r$ descents is equal to $k+1$ times the
coefficient of $u^{(k+1)r}$ in (\ref{ex}).
\end{proof}
If we express (\ref{ex}) (times $k+1$) in the form
\begin{equation*}
P_k(u) \left(1+u+\dots+u^k\right)^{n-k}
\end{equation*}
then the first few values of $P_k(u)$ are shown below.
\begin{equation*}
\begin{array}{c|l}
k & P_k(u) \\ \hline
0 & 1\\
1 & 1+u\\
2 & 1+u+2u^2+u^3+u^4\\
3 & 1+u+2u^2+4u^3+4u^4+4u^5+4u^6+2u^7+u^8+u^{9}\\
4 & 1+u+2u^2+4u^3+8u^4+11u^5+11u^6+14u^7+16u^{8}+\\
& +14u^9+11u^{10}+11u^{11}+8u^{12}+4u^{13}+2u^{14}+u^{15}+u^{16}
\end{array}
\end{equation*}
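The closed formula for $P_k(u)$ can be evaluated with exact Laurent-polynomial arithmetic. The Python sketch below (our own implementation, representing polynomials as dictionaries from exponents to coefficients) reproduces the polynomials $P_0(u),\dots,P_4(u)$.

```python
from math import comb

def lmul(p, q):
    """Multiply two Laurent polynomials given as {exponent: coefficient}."""
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return {e: c for e, c in r.items() if c}

def lpow(p, e):
    r = {0: 1}
    for _ in range(e):
        r = lmul(r, p)
    return r

def ladd(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c}

def eulerian(t):
    """Coefficient list of the Eulerian polynomial A_t(y)."""
    row = [1]
    for n in range(1, t + 1):
        row = [(m + 1) * (row[m] if m < len(row) else 0) +
               (n - m) * (row[m - 1] if m >= 1 else 0) for m in range(n)]
    return row

def P(k):
    """P_k(u) from the closed formula, as {exponent: coefficient}."""
    total = {}
    for j in range(k + 1):
        A = {(k + 1) * m: c for m, c in enumerate(eulerian(k - j))}
        term = lmul(A, lpow({0: 1, k + 1: -1}, j))   # A_{k-j}(u^{k+1}) (1-u^{k+1})^j
        term = lmul(term, {-i: comb(i, j) for i in range(j, k + 1)})
        if j % 2:
            term = {e: -c for e, c in term.items()}
        total = ladd(total, term)
    return total

for k in range(5):
    poly = P(k)
    print(k, [poly.get(e, 0) for e in range(k * k + 1)])
```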
There is clearly a lot of structure in the polynomials $P_k(u)$
which will be discussed in the next section.
\section{The structure of $P_k(u)$}
Let us first write down the expression for $P_k(u)$ which came from
(\ref{ex}):
\begin{equation}\label{P}
P_k(u) = \sum_{j=0}^k (-1)^{j}A_{k-j}(u^{k+1}) (1-u^{k+1})^{j}
\sum_{i=j}^k \binom{i}{j} u^{-i}.
\end{equation}
If we write $P_k(u) = \sum_{i=0}^{k^2} \alpha_i u^i$, we will define
the \emph{stretch} of $P_k(u)$ to be
\begin{equation*}
PP_k(u)
= \alpha_0 + \sum_{i=0}^k \sum_{j=0}^{k-1}
\alpha_{1+i+(k+1)j}\, u^{2+i+(k+1)j+j},
\end{equation*}
where we set $\alpha_m = 0$ for $m > k^2$.
What this does to $P_k(u)$ is to insert $0$ coefficients at every
$(k+1)$st term, starting after $\alpha_0$. Thus, the stretched
polynomials corresponding to the values of $P_k(u)$ given in the array
above are:
\begin{equation*}
\begin{array}{c|l}
k & PP_k(u) \\ \hline
0 & 1\\
1 & 1+u^2\\
2 & 1+u^2+2u^3+u^4+u^6\\
3 & 1+u^2+2u^3+4u^4+4u^5+4u^7+4u^8+2u^9+u^{10}+u^{12}\\
4 & 1+u^2+2u^3+4u^4+8u^5+11u^6+11u^8+14u^9+16u^{10}+\\
& +14u^{11}+11u^{12}+11u^{14}+8u^{15}+4u^{16}+2u^{17}+u^{18}+u^{20}
\end{array}
\end{equation*}
Note that if $P_k(u)$ has degree $k^2$ then $PP_k(u)$ has degree
$k^2 + k$.
\begin{theorem} \label{th2}
For all $k \geq 1$,
\begin{equation*}
P_{k+1}(u) = PP_k(u)\cdot (1 + u + u^2 + \dots + u^{k+1}).
\end{equation*}
\end{theorem}
\begin{proof}
From (\ref{P}) and the definition of $PP_k(u)$, we can write
\begin{equation*}
PP_k(u) = \sum_{t=0}^k (-1)^t (1-u^{k+2})^t
A_{k-t}(u^{k+2}) \sum_{s=t}^k \binom{s}{t} u^{-s}.
\end{equation*}
We want to show that
\begin{equation*}
P_{k+1}(u) = PP_k(u) \cdot \frac{1-u^{k+2}}{1-u}.
\end{equation*}
Thus, it is enough to prove that $A = B$ where
\begin{align*}
A &= PP_{k}(u) (1-u^{k+2}) \\
&= \sum_{t=0}^k (-1)^t (1-u^{k+2})^{t+1}
A_{k-t}(u^{k+2})\sum_{s=t}^k\binom{s}{t} u^{-s}
\shortintertext{and}
B &= P_{k+1}(u) (1-u) \\
&= \sum_{t=0}^{k+1} (-1)^t (1-u^{k+2})^{t}
A_{k+1-t}(u^{k+2}) \sum_{s=t}^{k+1} \binom{s}{t} u^{-s} (1-u).
\end{align*}
Observe that
\begin{align*}
B
&= \sum_{t=0}^{k+1} (-1)^t (1-u^{k+2})^{t} A_{k+1-t}(u^{k+2})
\left( \sum_{s=t}^{k+1} \binom{s}{t} u^{-s} (1-u)\right)\\
&= \sum_{t=1}^{k+1} (-1)^t (1-u^{k+2})^{t} A_{k+1-t}(u^{k+2})
\left( \sum_{s=t}^{k+1} \binom{s}{t} u^{-s} (1-u)\right)\\
& \quad + A_{k+1}(u^{k+2}) (1-u^{-k-2}) (-u)\\
&= \sum_{t=0}^{k} (-1)^t (1-u^{k+2})^{t+1} A_{k-t}(u^{k+2})
\left( \sum_{s=t+1}^{k+1}\binom{s}{t+1} u^{-s} (u-1)\right) \\
& \quad - u (1-u^{-k-2}) A_{k+1}(u^{k+2})\\
&= \sum_{t=0}^{k} (-1)^t (1-u^{k+2})^{t+1} A_{k-t}(u^{k+2})
\left( \sum_{s=t+1}^{k}
\binom{s}{t} u^{-s} +u^{-t} - \binom{k+1}{t+1} u^{-k-1} \right) \\
&\quad - u (1-u^{-k-2}) A_{k+1}(u^{k+2})\\
&= \sum_{t=0}^{k} (-1)^t (1-u^{k+2})^{t+1} A_{k-t}(u^{k+2})
\left(
\sum_{s=t}^{k} \binom{s}{t} u^{-s}-\binom{k+1}{t+1} u^{-k-1}
\right) \\
& \quad - u (1-u^{-k-2}) A_{k+1}(u^{k+2}).
\end{align*}
Hence, to prove that $A = B$, we only need to establish
\begin{multline*}
\sum_{t=0}^{k} (-1)^t (1-u^{k+2})^{t+1} A_{k-t}(u^{k+2})
\left( -\binom{k+1}{t+1} u^{-k-1} \right)\\
= u(1-u^{-k-2}) A_{k+1}(u^{k+2}).
\end{multline*}
However, this would follow from
\begin{equation}\label{euler-id}
\sum_{t=0}^{k'} (x-1)^t A_{k'-t}(x) \binom{k'}{t} = x A_{k'}(x)
\end{equation}
by taking $k' = k+1$ and $x = u^{k+2}$. So it remains to prove
(\ref{euler-id}).
To do this we will use the standard generating function for
$A_n(w)$ from (\ref{euler_gf}):
\begin{equation*}
\sum_{n,m \geq 0} \euler{n}{m} w^m \frac{z^n}{n!}
= \sum_{n \geq 0} A_n(w) \frac{z^n}{n!}
= \frac{1-w}{e^{(w-1)z} - w}.
\end{equation*}
Consider
\begin{align*}
F(x,z) &= \sum_{k > 0} x A_k(x) \frac{z^k}{k!}
\shortintertext{and}
G(x,z) &= \sum_{k > 0} \sum_{t=0}^k (x-1)^t A_{k-t} (x) \binom{k}{t}
\frac{z^k}{k!}.
\end{align*}
It will suffice to show that $F = G$. Now
\begin{align*}
G(x,z) &= \sum_{k \geq 0} \sum_{t \geq 0} (x-1)^t A_{k-t}(x)
\binom{k}{t} \frac{z^k}{k!} - 1\\
&= \sum_{t \geq 0} \sum_{k \geq t} (x-1)^t A_{k-t}(x)
\binom{k}{t} \frac{z^k}{k!} - 1\\
&= \sum_{t \geq 0} (x-1)^t \sum_{k' \geq 0} A_{k'}(x)
\binom{k' + t}{t} \frac{z^{k' + t}}{(k' + t)!} - 1\\
&= \sum_{t \geq 0} (x-1)^t \frac{z^t}{t!}
\sum_{k' \geq 0}A_{k'}(x) \frac{z^{k'}}{k'!} - 1\\
&= e^{(x-1)z} \cdot \frac{1-x}{e^{(x-1)z} - x} - 1
= \frac{x (1 - e^{(x-1)z})}{e^{(x-1)z} -x}.
\end{align*}
On the other hand,
\begin{align*}
F(x,z)
&= \sum_{k > 0} x A_k(x) \frac{z^k}{k!}\\
&= x \left( \frac{1-x}{e^{(x-1)z}-x} - 1 \right)
= \frac{x (1 - e^{(x-1)z})}{e^{(x-1)z} -x}
\end{align*}
as desired. This completes the proof of Theorem \ref{th2}.
\end{proof}
\begin{theorem}
The coefficients of $P_k(u)$ are symmetric and unimodal.
\end{theorem}
\begin{proof}
It follows from Theorem \ref{th2} that we can construct the
coefficient sequence for $P_{k+1}(u)$ from that of $P_k(u)$ by the
following rule (where we assume that all coefficients of $u^t$ in
$P_k(u)$ are $0$ if $t < 0$ or $t > k^2$). Namely, suppose we write
$P_k(u) = \sum_{i=0}^{k^2} \alpha_i u^i$ so that we have the
coefficient sequence $A_k = (\alpha_0, \alpha_1, \dots,
\alpha_{k^2})$. Now form the new sequence $B_k = (\beta_0, \beta_1,
\dots, \beta_{k^2 + k})$ by the rule
\begin{equation*}
\beta_i = \sum_{j=i-k}^i \alpha_j ,\quad 0 \leq i \leq k^2 + k.
\end{equation*}
Finally, starting with $\beta_0$, insert \emph{duplicate} values
for the coefficients
\begin{equation*}
\beta_0,\beta_{k+1},\beta_{2(k+1)},\dots,\beta_{t(k+1)},
\dots,\beta_{(k-1)(k+1)}\text{ and }\beta_{k(k+1)}.
\end{equation*}
Thus, this will generate the sequence
\begin{equation*}
(\beta_0, \beta_0, \beta_1, \beta_2, \dots, \beta_k, \beta_{k+1},
\beta_{k+1}, \beta_{k+2}, \dots, \beta_{k^2+k-1}, \beta_{k^2+k},
\beta_{k^2+k}).
\end{equation*}
This new sequence will in fact just be the coefficient sequence
$A_{k+1}$ for $P_{k+1}(u)$. For example, starting with $P_1(u) = 1 +
u$, we have $A_1 = (1,1)$ and so $B_1 = (1, 2, 1)$. Now, inserting
the duplicate values for $\beta_0 = 1$ and $\beta_2 = 1$, we get the
coefficient sequence $A_2 = (1, {\bf 1}, 2, 1, {\bf 1})$ for $P_2(u)
= 1 + u +2 u^2 + u^3 + u^4$. Repeating this process for $A_2$, we
sum blocks of length $3$ to get $B_2 = (1, 2, 4, 4, 4, 2,
1)$. Inserting duplicates for entries at positions $0, 3$ and $6$
gives us the new coefficient sequence $A_3 = (1, {\bf 1}, 2, 4, 4,
{\bf 4}, 4, 2, 1, {\bf 1})$ of $P_3 = 1 + u + 2u^2 + 4u^3 +4u^4
+4u^5 +4u^6 +2u^7 +u^8 +u^9$, etc. It is also clear from this
procedure that if $A_k$ is symmetric and unimodal, then so is $B_k$,
and consequently, so is $A_{k+1}$. This is what we claimed.
\end{proof}
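The window-sum and duplication rule from the proof is immediate to implement. The Python sketch below (function name is ours) starts from $P_1(u) = 1+u$ and generates the coefficient sequences of $P_2$, $P_3$ and $P_4$ exactly as in the worked example.

```python
def next_coeffs(A, k):
    """Build the coefficient sequence of P_{k+1}(u) from that of P_k(u):
    window sums of length k+1, then duplicate the entries at positions
    0, k+1, 2(k+1), ..., k(k+1)."""
    B = [sum(A[j] for j in range(max(0, i - k), min(i, len(A) - 1) + 1))
         for i in range(len(A) + k)]
    out = []
    for i, b in enumerate(B):
        out.append(b)
        if i % (k + 1) == 0:
            out.append(b)
    return out

A = [1, 1]  # P_1(u) = 1 + u
for k in range(1, 4):
    A = next_coeffs(A, k)
print(A)  # coefficient sequence of P_4(u)
```

Each application of `next_coeffs` takes the sequence for $P_k$ to the (symmetric, unimodal) sequence for $P_{k+1}$.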
\subsection{An Eulerian identity}
Note that since $P_k(u)$ is symmetric and has degree $k^2$, we
have $P_k(u) = u^{k^2} P_k({\textstyle \frac{1}{u}})$. Replacing
$P_k(u)$ by its expression in (\ref{P}), we obtain (with some
calculation) the interesting identity
\begin{equation*}
\sum_{j = 0}^{a+b} (-1)^j \binom{a}{j} (1-x)^j A_{a+b-j}(x)
= x \sum_{j = 0}^{a+b} \binom{b}{j}(1-x)^j A_{a+b-j}(x)
+\binom{b}{a+b}(1-x)^{a+b+1}
\end{equation*}
for \emph{all} integers $a$ and $b$ provided that $a + b \geq 0$.
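Since the identity is polynomial in $x$, it can be sanity-checked exactly at integer points, with $\binom{a}{j}$ interpreted as the generalized binomial coefficient when $a$ is negative. The following Python sketch (helper names are ours; exact rational arithmetic via `fractions`) does this.

```python
from fractions import Fraction

def gbinom(a, j):
    """Generalized binomial coefficient a(a-1)...(a-j+1)/j!, valid for negative a."""
    r = Fraction(1)
    for t in range(j):
        r *= Fraction(a - t, t + 1)
    return r

def A(t, x):
    """Eulerian polynomial A_t evaluated at x, via the Eulerian-number recurrence."""
    row = [1]
    for n in range(1, t + 1):
        row = [(m + 1) * (row[m] if m < len(row) else 0) +
               (n - m) * (row[m - 1] if m >= 1 else 0) for m in range(n)]
    return sum(c * x ** m for m, c in enumerate(row))

def identity_holds(a, b, x):
    n = a + b
    lhs = sum((-1) ** j * gbinom(a, j) * (1 - x) ** j * A(n - j, x) for j in range(n + 1))
    rhs = (x * sum(gbinom(b, j) * (1 - x) ** j * A(n - j, x) for j in range(n + 1))
           + gbinom(b, n) * (1 - x) ** (n + 1))
    return lhs == rhs

assert all(identity_holds(a, b, x)
           for a in range(-3, 5) for b in range(-3, 5) if a + b >= 0
           for x in (-1, 2, 3))
print("identity verified at sample integer points")
```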
% Source: arXiv:0908.2456 -- "Descent polynomials for permutations with bounded drop size" (math.CO).
% Source: arXiv:1011.3027 -- "Introduction to the non-asymptotic analysis of random matrices".
\section{Introduction} \label{s: introduction}
\paragraph{Asymptotic and non-asymptotic regimes}
Random matrix theory studies properties of $N \times n$ matrices $A$ chosen from some distribution
on the set of all matrices. As dimensions $N$ and $n$ grow to infinity, one observes that the spectrum of $A$ tends to
stabilize. This is manifested in several {\em limit laws}, which may be regarded as random matrix versions of the
central limit theorem. Among them are Wigner's semicircle law for the eigenvalues of symmetric Gaussian
matrices, the circular law for Gaussian matrices, the Marchenko-Pastur law for Wishart matrices $W = A^*A$ where
$A$ is a Gaussian matrix, the Bai-Yin and Tracy-Widom laws for the extreme eigenvalues of Wishart matrices $W$.
The books \cite{Mehta, AGZ, Deift-Gioev, Bai-Silverstein} offer a thorough introduction to the classical
problems of random matrix theory and its fascinating connections.
The asymptotic regime where the dimensions $N,n \to \infty$ is well suited for the purposes of
statistical physics, e.g. when random matrices serve as finite-dimensional models of
infinite-dimensional operators. But in some other areas including statistics,
geometric functional analysis, and compressed sensing, the limiting regime may not be very useful \cite{RV ICM}.
Suppose, for example, that we ask about the largest singular value $s_{\max}(A)$
(i.e. the largest eigenvalue of $(A^*A)^{1/2}$); to be specific assume that $A$ is an $n \times n$ matrix
whose entries are independent standard normal random variables.
The asymptotic random matrix theory answers this question as follows:
the Bai-Yin law (see Theorem~\ref{Bai-Yin}) states that
$$
s_{\max}(A) / 2\sqrt{n} \to 1 \quad \text{almost surely}
$$
as the dimension $n \to \infty$.
Moreover, the limiting distribution of $s_{\max}(A)$ is known to be the Tracy-Widom law
(see \cite{Soshnikov, FeSo}).
In contrast to this, a non-asymptotic answer to the same question is the following:
in {\em every} dimension $n$, one has
$$
s_{\max}(A) \le C\sqrt{n} \quad \text{with probability at least } 1 - e^{-n},
$$
where $C$
is an absolute constant (see Theorems~\ref{Gaussian} and \ref{sub-gaussian rows}).
The latter answer is less precise (because of an absolute constant $C$) but more quantitative
because for fixed dimensions $n$ it gives an exponential probability of success.\footnote{For
this specific model (Gaussian matrices), Theorems~\ref{Gaussian} and \ref{Gaussian deviation}
even give a sharp absolute constant $C\approx 2$ here. But the
result mentioned here is much more general as we will see later; it only requires
independence of rows or columns of $A$.}
This is the kind of answer we will seek in this text --
guarantees up to absolute constants in all dimensions, and with large probability.
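This behavior is easy to observe in simulation. The Python sketch below (dimensions, seed and iteration count are our choices; standard library only) estimates $s_{\max}(A)$ for a Gaussian matrix by power iteration on $A^*A$, and the resulting ratio $s_{\max}(A)/\sqrt{n}$ is bounded by a small constant, consistent with both answers above.

```python
import math
import random

random.seed(7)
n = 100
A = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
At = list(map(list, zip(*A)))  # transpose of A

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

# Power iteration on A^T A: the square root of its top eigenvalue is s_max(A).
v = [random.gauss(0.0, 1.0) for _ in range(n)]
for _ in range(200):
    w = matvec(At, matvec(A, v))
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
Av = matvec(A, v)
s_max = math.sqrt(sum(x * x for x in Av))
print("s_max / sqrt(n) =", round(s_max / math.sqrt(n), 3))  # Bai-Yin predicts a value near 2
```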
\paragraph{Tall matrices are approximate isometries}
The following heuristic will be our guideline:
{\em tall random matrices should act as approximate isometries}.
So, an $N \times n$ random matrix $A$ with $N \gg n$ should act
almost like an isometric embedding of $\ell_2^n$ into $\ell_2^N$:
$$
(1-\delta) K \|x\|_2 \le \|Ax\|_2 \le (1+\delta) K \|x\|_2 \quad \text{for all } x \in \mathbb{R}^n
$$
where $K$ is an appropriate normalization factor and $\delta \ll 1$.
Equivalently, this says that all the singular values of $A$ are close to each other:
$$
(1-\delta)K \le s_{\min}(A) \le s_{\max}(A) \le (1+\delta)K,
$$
where $s_{\min}(A)$ and $s_{\max}(A)$ denote the smallest and the largest singular values of $A$.
Yet equivalently, this means that tall matrices are well conditioned: the {\em condition number}
\index{Condition number}
of $A$ is $\kappa(A) = s_{\max}(A)/s_{\min}(A) \le (1+\delta)/(1-\delta) \approx 1$.
In the asymptotic regime and for random matrices with independent entries, our heuristic
is justified by Bai-Yin's law, which is Theorem~\ref{Bai-Yin} below.
Loosely speaking, it states that as the dimensions $N,n$ increase to infinity while the aspect ratio $N/n$ is fixed,
we have
\begin{equation} \label{Bai-Yin heuristic}
\sqrt{N} - \sqrt{n} \approx s_{\min}(A) \le s_{\max}(A) \approx \sqrt{N} + \sqrt{n}.
\end{equation}
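The heuristic is easy to witness numerically: for a tall Gaussian matrix, $\|Ax\|_2/\sqrt{N}$ stays close to $\|x\|_2$ for random test vectors. The Python sketch below (sizes and seed are our choices) is an illustration of the heuristic, not a proof.

```python
import math
import random

random.seed(1)
N, n = 400, 5  # tall matrix: N >> n
A = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(N)]

def ratio(x):
    """||Ax||_2 / (sqrt(N) ||x||_2); close to 1 when A acts as a near-isometry."""
    norm_x = math.sqrt(sum(c * c for c in x))
    Ax = [sum(a * b for a, b in zip(row, x)) for row in A]
    return math.sqrt(sum(c * c for c in Ax)) / (math.sqrt(N) * norm_x)

ratios = [ratio([random.gauss(0.0, 1.0) for _ in range(n)]) for _ in range(20)]
print(min(ratios), max(ratios))  # both close to 1
```

With $N/n = 80$, the observed ratios cluster within a few percent of $1$, matching $\delta \approx \sqrt{n/N}$.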
In these notes, we study $N \times n$ random matrices $A$ with independent rows or independent columns,
but not necessarily independent entries.
We develop non-asymptotic versions of \eqref{Bai-Yin heuristic} for such matrices,
which should hold for all dimensions $N$ and $n$. The desired results should have the form
\begin{equation} \label{heuristic}
\sqrt{N} - C \sqrt{n} \le s_{\min}(A) \le s_{\max}(A) \le \sqrt{N} + C \sqrt{n}
\end{equation}
with large probability, e.g. $1-e^{-N}$, where $C$ is an absolute constant.\footnote{More accurately,
we should expect $C=O(1)$ to depend on easily computable quantities of the distribution,
such as its moments. This will be clear from the context.}
For tall matrices, where $N \gg n$, both sides of this inequality
would be close to each other, which would guarantee that $A$ is an approximate isometry.
\paragraph{Models and methods}
We shall study quite general models of random matrices -- those with independent rows
or independent columns that are sampled from high-dimensional distributions. We will place either
strong moment assumptions on the distribution (sub-gaussian growth of moments),
or no moment assumptions at all (except finite variance). This leads us
to four types of main results:
\begin{enumerate} \setlength{\itemsep}{-3pt}
\item Matrices with independent sub-gaussian rows: Theorem~\ref{sub-gaussian rows}
\item Matrices with independent heavy-tailed rows: Theorem~\ref{heavy-tailed rows}
\item Matrices with independent sub-gaussian columns: Theorem~\ref{sub-gaussian columns}
\item Matrices with independent heavy-tailed columns: Theorem~\ref{heavy-tailed columns}
\end{enumerate}
These four models cover many natural classes of random matrices that occur in applications,
including random matrices with independent entries (Gaussian and Bernoulli in particular)
and random sub-matrices of orthogonal matrices (random Fourier matrices in particular).
The analysis of these four models is based on a variety of tools of probability theory
and geometric functional analysis, most of which have not been covered in the texts on the
``classical'' random matrix theory. The reader will learn basics on sub-gaussian
and sub-exponential random variables,
isotropic random vectors, large deviation inequalities for sums of independent random variables,
extensions of these inequalities to random matrices, and several basic methods of high dimensional probability
such as symmetrization, decoupling, and covering ($\varepsilon$-net) arguments.
\paragraph{Applications}
In these notes we shall emphasize two applications, one in statistics and one in compressed sensing.
Our analysis of random matrices with independent rows immediately applies to
a basic problem in statistics -- {\em estimating covariance matrices} of high-dimensional
distributions. If a random matrix $A$ has i.i.d. rows $A_i$, then
$A^*A = \sum_i A_i \otimes A_i$ is the {\em sample covariance matrix}. If $A$ has
independent columns $A_j$, then $A^*A = (\< A_j, A_k\> )_{j,k}$ is the {\em Gram matrix}.
Thus our analysis of the row-independent and column-independent models can be interpreted as a
study of sample covariance matrices and Gram matrices of high dimensional distributions.
We will see in Section~\ref{s: covariance} that for a general distribution in $\mathbb{R}^n$, its covariance matrix
can be estimated from a sample of size $N = O(n \log n)$ drawn from the distribution.
Moreover, for sub-gaussian distributions we have an even better bound $N = O(n)$.
For low-dimensional distributions, far fewer samples are needed -- if a distribution
lies close to a subspace of dimension $r$ in $\mathbb{R}^n$, then a sample of size $N = O(r \log n)$
is sufficient for covariance estimation.
In compressed sensing, the best known measurement matrices are random. A sufficient
condition for a matrix to succeed for the purposes of compressed sensing
is given by the {\em restricted isometry property}. Loosely speaking, this property
demands that all sub-matrices of a given size be well-conditioned. This fits well into the circle of
problems of non-asymptotic random matrix theory. Indeed, we will see in Section~\ref{s: restricted isometries}
that all basic models
of random matrices are nice restricted isometries. These include Gaussian and Bernoulli matrices,
more generally all matrices with sub-gaussian independent entries, and even more generally
all matrices with sub-gaussian independent rows or columns. Also, the class of restricted
isometries includes random Fourier matrices, more generally random sub-matrices of
bounded orthogonal matrices, and even more generally matrices whose rows are independent
samples from an isotropic distribution with uniformly bounded coordinates.
\paragraph{Related sources}
This text is a tutorial rather than a survey, so we focus on explaining methods rather than results.
This forces us to make some concessions in our choice of the subjects.
{\em Concentration of measure} and its applications to random matrix theory are only briefly mentioned.
For an introduction into concentration of measure suitable for a beginner,
see \cite{Ball} and \cite[Chapter~14]{Matousek};
for a thorough exposition see \cite{MS, Ledoux};
for connections with random matrices see \cite{DS, Ledoux extremal}. The monograph \cite{Ledoux-Talagrand}
also offers an introduction into concentration of measure and related probabilistic methods in
analysis and geometry, some of which we shall use in these notes.
We completely avoid the important (but more difficult) model of {\em symmetric random matrices}
with independent entries on and above the diagonal.
Starting from the work of F\"uredi and Koml\'os \cite{FuKo},
the largest singular value (the spectral norm) of symmetric random matrices has been a subject
of study in many works; see e.g. \cite{Meckes, Vu, PeSo} and the references therein.
We also did not even attempt to discuss sharp
small {\em deviation inequalities} (of Tracy-Widom type) for the extreme eigenvalues.
Both these topics and much more are discussed in the surveys \cite{DS, Ledoux extremal, RV ICM},
which serve as bridges between asymptotic and non-asymptotic problems in random matrix theory.
Because of the absolute constant $C$ in \eqref{heuristic}, our analysis of
the smallest singular value (the {\em ``hard edge''}) will only be useful for sufficiently tall matrices, where $N \ge C^2 n$.
For square and almost square matrices, the hard edge problem will be only briefly mentioned in Section~\ref{s: entries}.
The surveys \cite{Tao-Vu survey, RV ICM} discuss this problem at length, and they offer a glimpse
of connections to other problems of random matrix theory
and additive combinatorics.
Many of the results and methods presented in these notes are known in one form or another.
Some of them are published while some others belong to the folklore of probability in Banach spaces,
geometric functional analysis, and related areas.
When available, historic references are given in Section~\ref{s: notes}.
\paragraph{Acknowledgements}
The author is grateful to the colleagues who made a number of improving suggestions for the
earlier versions of the manuscript, in particular to Richard Chen, Subhroshekhar Ghosh, Alexander Litvak, Deanna Needell, Holger Rauhut,
S V N Vishwanathan and the anonymous referees. Special thanks are due to Ulas Ayaz and Felix Krahmer
who thoroughly read the entire text, and whose numerous comments led to significant improvements of
this tutorial.
\section{Preliminaries} \label{s: preliminaries}
\subsection{Matrices and their singular values}
The main object of our study will be an $N \times n$ matrix $A$ with real or complex entries.
We shall state all results in the real case; the reader will be able
to adjust them to the complex case as well. Usually but not always one should think of tall matrices $A$,
those for which $N \ge n > 1$. By passing to the adjoint matrix $A^*$, many results can be carried over
to ``flat'' matrices, those for which $N \le n$.
It is often convenient to study $A$ through the $n \times n$ symmetric positive-semidefinite matrix $A^*A$.
The eigenvalues of $|A| := \sqrt{A^*A}$ are therefore non-negative real numbers. Arranged in non-increasing
order, they are called the {\em singular values}\footnote{In the literature, singular values are also called {\em s-numbers}.}
\index{Singular values}
of $A$ and denoted $s_1(A) \ge \cdots \ge s_n(A) \ge 0$.
Many applications require estimates on the extreme singular values
$$
s_{\max}(A) := s_1(A), \quad s_{\min}(A) := s_n(A).
$$
The smallest singular value is only of interest for tall matrices, since
for $N < n$ one automatically has $s_{\min}(A) = 0$.
Equivalently, $s_{\max}(A)$ and $s_{\min}(A)$
are respectively the smallest number $M$ and the largest number $m$ such that
\begin{equation} \label{mM}
m \|x\|_2 \le \|Ax\|_2 \le M \|x\|_2
\quad \text{for all } x \in \mathbb{R}^n.
\end{equation}
In order to interpret this definition geometrically, we look at $A$ as a linear operator
from $\mathbb{R}^n$ into $\mathbb{R}^N$.
The Euclidean distance between any two points in $\mathbb{R}^n$
can increase by at most the factor $s_{\max}(A)$ and decrease
by at most the factor $s_{\min}(A)$ under the action of $A$.
Therefore, the extreme singular values control the distortion of the Euclidean geometry
under the action of $A$. If $s_{\max}(A) \approx s_{\min}(A) \approx 1$ then $A$
acts as an {\em approximate isometry},\index{Approximate isometries} or more accurately an approximate
isometric embedding of $\ell_2^n$ into $\ell_2^N$.
The extreme singular values can also be described
in terms of the {\em spectral norm of $A$},\index{Spectral norm} which is by definition
\begin{equation} \label{spectral norm}
\|A\| = \|A\|_{\ell_2^n \to \ell_2^N} = \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\|Ax\|_2}{\|x\|_2}
= \sup_{x \in S^{n-1}} \|Ax\|_2.
\end{equation}
The relation \eqref{mM} provides a link between the extreme singular values and
the spectral norm:
$$
s_{\max}(A) = \|A\|, \quad s_{\min}(A) = 1/\|A^\dagger\|
$$
where $A^\dagger$ denotes the pseudoinverse of $A$; if $A$ is invertible then $A^\dagger = A^{-1}$.
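As a small numerical sketch (not part of the notes; it assumes Python with numpy), one can verify these identities for a random tall matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 8, 3
A = rng.standard_normal((N, n))       # a tall N x n matrix

# Singular values, arranged in non-increasing order
s = np.linalg.svd(A, compute_uv=False)
s_max, s_min = s[0], s[-1]

# s_max(A) = ||A|| (spectral norm), s_min(A) = 1 / ||pinv(A)||
assert np.isclose(s_max, np.linalg.norm(A, 2))
assert np.isclose(s_min, 1.0 / np.linalg.norm(np.linalg.pinv(A), 2))

# Singular values are the square roots of the eigenvalues of A^T A
eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(s, np.sqrt(eigs))

# The two-sided bound m ||x|| <= ||Ax|| <= M ||x|| on a random test vector
x = rng.standard_normal(n)
assert s_min * np.linalg.norm(x) <= np.linalg.norm(A @ x) <= s_max * np.linalg.norm(x) + 1e-12
```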
\subsection{Nets}
Nets are convenient means to discretize compact sets. In our study we will mostly need
to discretize the unit Euclidean sphere $S^{n-1}$ in the definition of the spectral norm \eqref{spectral norm}.
Let us first recall a general definition of an $\varepsilon$-net.
\begin{definition}[Nets, covering numbers] \index{Net} \index{Covering numbers}
Let $(X,d)$ be a metric space and let $\varepsilon>0$.
A subset $\mathcal{N}_\varepsilon$ of $X$ is called an {\em $\varepsilon$-net} of $X$
if every point $x \in X$ can be approximated to within $\varepsilon$
by some point $y \in \mathcal{N}_\varepsilon$, i.e. so that
$d(x,y) \le \varepsilon$.
The minimal cardinality of an $\varepsilon$-net of $X$, if finite, is denoted $\mathcal{N}(X,\varepsilon)$
and is called the {\em covering number}\footnote{Equivalently, $\mathcal{N}(X,\varepsilon)$ is the minimal
number of balls with radii $\varepsilon$ and with centers in $X$ needed to cover $X$.}
of $X$ (at scale $\varepsilon$).
\end{definition}
From a characterization of compactness we remember that $X$ is compact
if and only if $\mathcal{N}(X,\varepsilon) < \infty$ for each $\varepsilon > 0$.
A quantitative estimate on $\mathcal{N}(X, \varepsilon)$
would give us a {\em quantitative version of compactness} of $X$.\footnote{In
statistical learning theory and geometric functional analysis,
$\log \mathcal{N}(X,\varepsilon)$ is called {\em the metric entropy of $X$}.
In some sense it measures the ``complexity'' of metric space $X$.}
Let us therefore take a simple example of a metric space, the unit Euclidean sphere $S^{n-1}$
equipped with the Euclidean metric\footnote{A similar result holds for the
geodesic metric on the sphere, since for small $\varepsilon$ these two distances are equivalent.}
$d(x,y) = \|x-y\|_2$, and estimate its covering numbers.
\begin{lemma}[Covering numbers of the sphere] \label{net cardinality}
The unit Euclidean sphere $S^{n-1}$ equipped with the Euclidean metric satisfies
for every $\varepsilon>0$ that
$$
\mathcal{N}(S^{n-1},\varepsilon) \le \Big( 1 + \frac{2}{\varepsilon} \Big)^n.
$$
\end{lemma}
\begin{proof}
This is a simple {\em volume argument}.
Let us fix $\varepsilon>0$ and choose $\mathcal{N}_\varepsilon$ to be a maximal $\varepsilon$-separated subset of $S^{n-1}$.
In other words, $\mathcal{N}_\varepsilon$ is such that
$d(x,y) \ge \varepsilon$ for all $x, y \in \mathcal{N}_\varepsilon$, $x \ne y$,
and no subset of $S^{n-1}$ containing $\mathcal{N}_\varepsilon$ has this property.\footnote{One
can in fact construct $\mathcal{N}_\varepsilon$ inductively by first selecting an arbitrary
point on the sphere, and at each next step selecting a point that is at distance at least $\varepsilon$
from those already selected. By compactness, this algorithm will terminate after finitely
many steps and it will yield a set $\mathcal{N}_\varepsilon$ as we required.}
The maximality property implies that $\mathcal{N}_\varepsilon$ is an $\varepsilon$-net of $S^{n-1}$. Indeed,
otherwise there would exist $x \in S^{n-1}$ whose distance to every point of $\mathcal{N}_\varepsilon$ exceeds $\varepsilon$.
Then $\mathcal{N}_\varepsilon \cup \{x\}$ would still be an $\varepsilon$-separated subset of $S^{n-1}$, contradicting the maximality of $\mathcal{N}_\varepsilon$.
Moreover, the separation property implies via the triangle inequality that the balls of radii $\varepsilon/2$
centered at the points in $\mathcal{N}_\varepsilon$ are disjoint. On the other hand, all such balls lie in
$(1+\varepsilon/2) B_2^n$ where $B_2^n$ denotes the unit Euclidean ball centered at the origin.
Comparing the volume gives
$\vol \big( \frac{\varepsilon}{2} B_2^n \big) \cdot |\mathcal{N}_\varepsilon|
\le \vol \big( (1 + \frac{\varepsilon}{2}) B_2^n \big)$.
Since $\vol \big( r B_2^n \big) = r^n \vol(B_2^n)$ for all $r \ge 0$, we conclude that
$|\mathcal{N}_\varepsilon| \le (1+\frac{\varepsilon}{2})^n / (\frac{\varepsilon}{2})^n = (1+\frac{2}{\varepsilon})^n$
as required.
\end{proof}
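The inductive construction described in the footnote is easy to run numerically. The following sketch (an illustration assuming numpy, not part of the notes) builds a maximal $\varepsilon$-separated subset of sampled points on the circle $S^1$ and checks both the net property and the cardinality bound of the lemma:

```python
import numpy as np

rng = np.random.default_rng(1)
eps, n = 0.5, 2

# Densely sample S^{n-1} (here the circle S^1)
pts = rng.standard_normal((5000, n))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Greedy construction: keep a point iff it is eps-far from all kept points
net = []
for p in pts:
    if all(np.linalg.norm(p - q) >= eps for q in net):
        net.append(p)
net = np.array(net)

# Maximality makes the kept points an eps-net of the sampled points
dists = np.linalg.norm(pts[:, None, :] - net[None, :, :], axis=2).min(axis=1)
assert dists.max() < eps

# Cardinality bound from the volume argument: |N_eps| <= (1 + 2/eps)^n
assert len(net) <= (1 + 2 / eps) ** n
```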
Nets allow us to reduce the complexity of computations with linear operators.
One such example is the computation of the spectral norm. To evaluate the
spectral norm by definition \eqref{spectral norm} one needs to take the supremum
over the whole sphere $S^{n-1}$. However, one can essentially replace
the sphere by its $\varepsilon$-net:
\begin{lemma}[Computing the spectral norm on a net] \index{Spectral norm!computing on a net} \label{norm on net general}
Let $A$ be an $N \times n$ matrix,
and let $\mathcal{N}_\varepsilon$ be an $\varepsilon$-net of $S^{n-1}$
for some $\varepsilon \in [0,1)$. Then
$$
\max_{x \in \mathcal{N}_\varepsilon} \|Ax\|_2 \le \|A\| \le (1 - \varepsilon)^{-1} \max_{x \in \mathcal{N}_\varepsilon} \|Ax\|_2.
$$
\end{lemma}
\begin{proof}
The lower bound in the conclusion follows from the definition. To prove the upper bound
let us fix $x \in S^{n-1}$ for which $\|A\| = \|Ax\|_2$, and choose $y \in \mathcal{N}_\varepsilon$
which approximates $x$ as $\|x-y\|_2 \le \varepsilon$. By the triangle inequality we have
$\|Ax-Ay\|_2 \le \|A\| \|x-y\|_2 \le \varepsilon \|A\|$.
It follows that
$$
\|Ay\|_2 \ge \|Ax\|_2 - \|Ax-Ay\|_2
\ge \|A\| - \varepsilon\|A\| = (1-\varepsilon)\|A\|.
$$
Taking the maximum over all $y \in \mathcal{N}_\varepsilon$ in this inequality, we complete the proof.
\end{proof}
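For $n = 2$ an explicit $\varepsilon$-net of $S^1$ is given by equispaced points on the circle, so the lemma can be checked directly (an illustrative sketch assuming numpy, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2))
norm_A = np.linalg.norm(A, 2)

# Equispaced angles with gap <= eps give an eps-net of S^1
# (the chord length is at most the arc length)
eps = 0.1
m = int(np.ceil(2 * np.pi / eps))
theta = 2 * np.pi * np.arange(m) / m
net = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# The max over the net lower-bounds ||A|| and, after the (1 - eps)^{-1}
# correction, also upper-bounds it
net_max = np.linalg.norm(A @ net.T, axis=0).max()
assert net_max <= norm_A + 1e-12
assert norm_A <= net_max / (1 - eps)
```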
A similar result holds for symmetric $n \times n$ matrices $A$,
whose spectral norm can be computed via the associated quadratic form:
$\|A\| = \sup_{x \in S^{n-1}} |\< Ax,x\> |$.
Again, one can essentially replace the sphere by its $\varepsilon$-net:
\begin{lemma}[Computing the spectral norm on a net] \index{Spectral norm!computing on a net} \label{norm on net}
Let $A$ be a symmetric $n \times n$ matrix,
and let $\mathcal{N}_\varepsilon$ be an $\varepsilon$-net of $S^{n-1}$
for some $\varepsilon \in [0,1/2)$. Then
$$
\|A\| = \sup_{x \in S^{n-1}} |\< Ax, x\> |
\le (1 - 2\varepsilon)^{-1} \sup_{x \in \mathcal{N}_\varepsilon} |\< Ax, x\> |.
$$
\end{lemma}
\begin{proof}
Let us choose $x \in S^{n-1}$ for which $\|A\| = |\< Ax, x\> |$,
and choose $y \in \mathcal{N}_\varepsilon$ which approximates $x$ as $\|x - y\|_2 \le \varepsilon$.
By the triangle inequality we have
\begin{align*}
|\< Ax, x\> - \< Ay, y\> |
&= |\< Ax, x-y\> + \< A(x-y), y\> |\\
&\le \|A\| \|x\|_2 \|x-y\|_2 + \|A\| \|x-y\|_2 \|y\|_2
\le 2 \varepsilon \|A\|.
\end{align*}
It follows that
$|\< Ay, y \> | \ge |\< Ax, x\> | - 2 \varepsilon \|A\| = (1-2\varepsilon) \|A\|$.
Taking the maximum over all $y \in \mathcal{N}_\varepsilon$ in this inequality completes the proof.
\end{proof}
\subsection{Sub-gaussian random variables} \index{Sub-gaussian!random variables} \label{s: sub-gaussian}
In this section we introduce the class of sub-gaussian random variables,\footnote{It would be more
rigorous to say that we study {\em sub-gaussian probability distributions}.
The same concerns some other properties of random variables and random vectors we study later in this text.
However, it is convenient for us to focus on random variables and vectors because we will form random matrices out of them.}
those whose distributions are dominated by the distribution of a centered gaussian
random variable. This is a convenient and quite wide class, which contains in particular
the standard normal and all bounded random variables.
Let us briefly recall some of the well known properties of the {\em standard normal random variable} $X$.
The distribution of $X$ has density $\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}$ and is denoted $N(0,1)$.
Estimating the integral of this density between $t$ and $\infty$ one checks that
the tail of a standard normal random variable $X$ decays super-exponentially:
\begin{equation} \label{normal tail}
\mathbb{P} \{ |X| > t \} = \frac{2}{\sqrt{2 \pi}} \int_t^\infty e^{-x^2/2} \, dx
\le 2 e^{-t^2/2}, \quad t \ge 1,
\end{equation}
see e.g. \cite[Theorem 1.4]{Durrett} for a more precise two-sided inequality.
The absolute moments of $X$ can be computed as
\begin{equation} \label{normal moments}
(\mathbb{E} |X|^p)^{1/p} = \sqrt{2} \Big[ \frac{\Gamma((1+p)/2)}{\Gamma(1/2)} \Big]^{1/p}
= O(\sqrt{p}), \quad p \ge 1.
\end{equation}
The moment generating function of $X$ equals
\begin{equation} \label{normal mgf}
\mathbb{E} \exp(tX) = e^{t^2/2}, \quad t \in \mathbb{R}.
\end{equation}
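These facts are easy to confirm numerically. A small sketch (illustration only, assuming Python's standard library) evaluates the moment formula \eqref{normal moments} and its $O(\sqrt{p})$ growth:

```python
import math

# (E|X|^p)^{1/p} = sqrt(2) * [Gamma((1+p)/2) / Gamma(1/2)]^{1/p} for X ~ N(0,1)
def normal_abs_moment(p):
    return math.sqrt(2) * (math.gamma((1 + p) / 2) / math.gamma(0.5)) ** (1 / p)

# Known values: E|X| = sqrt(2/pi), E X^2 = 1, E X^4 = 3
assert math.isclose(normal_abs_moment(1), math.sqrt(2 / math.pi))
assert math.isclose(normal_abs_moment(2), 1.0)
assert math.isclose(normal_abs_moment(4), 3 ** 0.25)

# O(sqrt(p)) growth: the ratio to sqrt(p) stays bounded (it tends to 1/sqrt(e))
ratios = [normal_abs_moment(p) / math.sqrt(p) for p in range(1, 200)]
assert max(ratios) < 1.0
```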
Now let $X$ be a general random variable. We observe that these three properties are equivalent --
a super-exponential tail decay as in \eqref{normal tail}, the moment growth \eqref{normal moments},
and the growth of the moment generating function as in \eqref{normal mgf}.
We will then focus on the class of random variables that satisfy these properties, which we shall call
sub-gaussian random variables.
\begin{lemma}[Equivalence of sub-gaussian properties] \label{sub-gaussian properties}
Let $X$ be a random variable. Then the following properties are equivalent with parameters
$K_i > 0$ differing from each other by at most an absolute constant factor.\footnote{The precise meaning
of this equivalence is the following. There exists an absolute constant $C$ such that property $i$
implies property $j$ with parameter $K_j \le C K_i$ for any two properties $i,j=1,2,3$.}
\begin{enumerate}
\item Tails:
$\mathbb{P} \{ |X| > t \} \le \exp(1-t^2/K_1^2)$ for all $t \ge 0$;
\item Moments:
$(\mathbb{E} |X|^p)^{1/p} \le K_2 \sqrt{p}$ for all $p \ge 1$;
\item Super-exponential moment:
$\mathbb{E} \exp(X^2/K_3^2) \le e$.
\end{enumerate}
Moreover, if $\mathbb{E} X = 0$ then properties 1--3 are also equivalent
to the following one:
\begin{enumerate} \setcounter{enumi}{3}
\item Moment generating function:
$\mathbb{E} \exp(tX) \le \exp(t^2 K_4^2)$ for all $t \in \mathbb{R}$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf 1. $\Rightarrow$ 2.}
Assume property 1 holds.
By homogeneity, rescaling $X$ to $X/K_1$ we can assume that $K_1=1$.
Recall that for every non-negative random variable $Z$, integration by parts
yields the identity
$\mathbb{E} Z = \int_0^\infty \mathbb{P} \{ Z \ge u \} \, du$.
We apply this identity to $Z = |X|^p$.
After the change of variables $u = t^p$, we obtain using property 1 that
$$
\mathbb{E} |X|^p
= \int_0^\infty \mathbb{P} \{ |X| \ge t \} \, p t^{p-1} \, dt
\le \int_0^\infty e^{1-t^2} p t^{p-1} \, dt
= \big( \frac{ep}{2} \big) \Gamma \big( \frac{p}{2} \big)
\le \big( \frac{ep}{2} \big) \big( \frac{p}{2} \big)^{p/2}.
$$
Taking the $p$-th root yields property 2 with a suitable absolute constant $K_2$.
{\bf 2. $\Rightarrow$ 3.}
Assume property 2 holds.
As before, by homogeneity we may assume that $K_2 = 1$. Let $c>0$ be a sufficiently small absolute constant.
Writing the Taylor series of the exponential function, we obtain
$$
\mathbb{E} \exp(c X^2)
= 1 + \sum_{p=1}^\infty \frac{c^p \mathbb{E}(X^{2p})}{p!}
\le 1 + \sum_{p=1}^\infty \frac{c^p (2p)^p}{p!}
\le 1 + \sum_{p=1}^\infty (2ce)^p.
$$
The first inequality follows from property 2; in the second one we use $p! \ge (p/e)^p$.
For small $c$ this gives $\mathbb{E} \exp(c X^2) \le e$, which
is property 3 with $K_3 = c^{-1/2}$.
{\bf 3. $\Rightarrow$ 1.}
Assume property 3 holds. As before we may assume that $K_3=1$.
Exponentiating and using Markov's inequality\footnote{This simple argument is sometimes called
exponential Markov's inequality.}
and then property 3, we have
$$
\mathbb{P} \{ |X| > t \}
= \mathbb{P} \{ e^{X^2} \ge e^{t^2} \}
\le e^{-t^2} \mathbb{E} e^{X^2}
\le e^{1-t^2}.
$$
This proves property 1 with $K_1=1$.
{\bf 2. $\Rightarrow$ 4.} Let us now assume that $\mathbb{E} X = 0$ and property 2 holds;
as usual we can assume that $K_2 = 1$.
We will prove that property 4 holds with an appropriately large absolute constant $C = K_4$.
This will follow by estimating Taylor series for the exponential function
\begin{equation} \label{mgf above}
\mathbb{E} \exp(tX)
= 1 + t \mathbb{E} X + \sum_{p=2}^\infty \frac{t^p \mathbb{E} X^p}{p!}
\le 1 + \sum_{p=2}^\infty \frac{|t|^p p^{p/2}}{p!}
\le 1 + \sum_{p=2}^\infty \Big( \frac{e|t|}{\sqrt{p}} \Big)^p.
\end{equation}
The first inequality here follows from $\mathbb{E} X = 0$ and property 2; the second one holds since $p! \ge (p/e)^p$.
We compare this with Taylor's series for
\begin{equation} \label{exp t squared}
\exp(C^2 t^2)
= 1 + \sum_{k=1}^\infty \frac{(C|t|)^{2k}}{k!}
\ge 1 + \sum_{k=1}^\infty \Big( \frac{C|t|}{\sqrt{k}} \Big)^{2k}
= 1 + \sum_{p \in 2\mathbb{N}} \Big( \frac{C|t|}{\sqrt{p/2}} \Big)^p.
\end{equation}
The first inequality here holds because $k! \le k^k$; the second one is obtained by the substitution $p=2k$.
One can show that the series in \eqref{mgf above} is bounded by the series in \eqref{exp t squared} for a sufficiently large absolute constant $C$.
We conclude that $\mathbb{E} \exp(tX) \le \exp(C^2 t^2)$, which proves property~4.
{\bf 4. $\Rightarrow$ 1.}
Assume property 4 holds; we can also assume that $K_4=1$.
Let $\lambda > 0$ be a parameter to be chosen later. By the exponential Markov inequality,
and using the bound on the moment generating function given in property 4, we obtain
$$
\mathbb{P} \{ X \ge t \} = \mathbb{P} \{ e^{\lambda X} \ge e^{\lambda t} \}
\le e^{-\lambda t} \mathbb{E} e^{\lambda X}
\le e^{-\lambda t + \lambda^2}.
$$
Optimizing in $\lambda$ and thus choosing $\lambda = t/2$ we conclude that
$\mathbb{P} \{ X \ge t \} \le e^{-t^2/4}$.
Repeating this argument for $-X$, we also obtain $\mathbb{P} \{ X \le -t \} \le e^{-t^2/4}$.
Combining these two bounds we conclude that
$\mathbb{P} \{ |X| \ge t \} \le 2 e^{-t^2/4} \le e^{1-t^2/4}$.
Thus property 1 holds with $K_1=2$.
The lemma is proved.
\end{proof}
\begin{remark}
\begin{enumerate}
\item The constants $1$ and $e$ in properties 1 and 3 respectively are chosen for convenience;
the value $1$ can be replaced by any positive number and the value $e$
by any number greater than $1$.
\item The assumption $\mathbb{E} X = 0$ is needed only to show that properties 1--3 imply property 4;
the reverse implication, from property 4 to properties 1--3, holds without this assumption.
\end{enumerate}
\end{remark}
\begin{definition}[Sub-gaussian random variables] \index{Sub-gaussian!random variables}
A random variable $X$ that satisfies one of the equivalent properties 1 -- 3 in Lemma~\ref{sub-gaussian properties}
is called a {\em sub-gaussian random variable}.
The {\em sub-gaussian norm} \index{Sub-gaussian!norm} of $X$, denoted $\|X\|_{\psi_2}$,
is defined to be the smallest $K_2$ in property 2.
In other words,\footnote{The sub-gaussian norm is also called ${\psi_2}$ norm in the literature.}
$$
\|X\|_{\psi_2} = \sup_{p \ge 1} p^{-1/2} (\mathbb{E} |X|^p)^{1/p}.
$$
\end{definition}
The class of sub-gaussian random variables on a given probability space
is thus a normed space. By Lemma~\ref{sub-gaussian properties},
every sub-gaussian random variable $X$ satisfies:
\begin{gather}
\mathbb{P} \{ |X| > t \} \le \exp(1-ct^2/\|X\|_{\psi_2}^2) \quad \text{for all } t \ge 0; \label{sub-gaussian tail}\\
(\mathbb{E} |X|^p)^{1/p} \le \|X\|_{\psi_2} \sqrt{p} \quad \text{for all } p \ge 1; \label{sub-gaussian moments}\\
\mathbb{E} \exp(cX^2/\|X\|_{\psi_2}^2) \le e; \nonumber\\
\text{if $\mathbb{E} X =0$ then } \mathbb{E} \exp(tX) \le \exp(C t^2 \|X\|_{\psi_2}^2) \quad \text{for all } t \in \mathbb{R}, \label{sub-gaussian mgf}
\end{gather}
where $C, c > 0$ are absolute constants. Moreover, up to absolute constant factors,
$\|X\|_{\psi_2}$ is the smallest possible number in each of these inequalities.
\begin{example}
Classical examples of sub-gaussian random variables are Gaussian, Bernoulli
and all bounded random variables.
\begin{enumerate}
\item {\bf (Gaussian):} A standard normal random variable $X$ is sub-gaussian with
$\|X\|_{\psi_2} \le C$ where $C$ is an absolute constant. This follows from \eqref{normal moments}.
More generally, if $X$ is
a centered normal random variable with variance $\sigma^2$, then $X$ is sub-gaussian
with $\|X\|_{\psi_2} \le C \sigma$.
\item {\bf (Bernoulli):} \index{Bernoulli!random variables} Consider a random variable $X$ with distribution
$\mathbb{P}\{X=-1\} = \mathbb{P}\{X=1\} = 1/2$. We call $X$ a {\em symmetric Bernoulli random variable}.
Since $|X|=1$, it follows that $X$ is a sub-gaussian random variable with $\|X\|_{\psi_2} = 1$.
\item {\bf (Bounded):} More generally, consider any bounded random variable $X$,
thus $|X| \le M$ almost surely for some $M$. Then $X$ is a sub-gaussian random variable with $\|X\|_{\psi_2} \le M$.
We can write this more compactly as
$\|X\|_{\psi_2} \le \|X\|_\infty$.
\end{enumerate}
\end{example}
A remarkable property of the normal distribution is {\em rotation invariance}.
Given a finite number of independent centered normal random variables $X_i$,
their sum $\sum_i X_i$ is also a centered normal random variable,
obviously with $\Var(\sum_i X_i) = \sum_i \Var(X_i)$.
Rotation invariance passes onto sub-gaussian random variables, although approximately:
\begin{lemma}[Rotation invariance] \index{Rotation invariance} \label{rotation invariance}
Consider a finite number of independent centered sub-gaussian random variables $X_i$.
Then $\sum_i X_i$ is also a centered sub-gaussian random variable. Moreover,
$$
\big\| \sum_i X_i \big\|_{\psi_2}^2
\le C \sum_i \|X_i\|_{\psi_2}^2
$$
where $C$ is an absolute constant.
\end{lemma}
\begin{proof}
The argument is based on estimating the moment generating function.
Using independence and \eqref{sub-gaussian mgf} we have for every $t \in \mathbb{R}$:
\begin{align*}
\mathbb{E} \exp \big( t \sum_i X_i \big)
&= \mathbb{E} \prod_i \exp(t X_i)
= \prod_i \mathbb{E} \exp(t X_i)
\le \prod_i \exp(C t^2 \|X_i\|_{\psi_2}^2) \\
&= \exp(t^2 K^2) \quad \text{where } K^2 = C \sum_i \|X_i\|_{\psi_2}^2.
\end{align*}
Using the equivalence of properties 2 and 4 in Lemma~\ref{sub-gaussian properties}
we conclude that $\|\sum_i X_i\|_{\psi_2} \le C_1 K$ where $C_1$ is an absolute constant.
The proof is complete.
\end{proof}
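For symmetric Bernoulli signs the moment generating function computation in this proof is fully explicit: $\mathbb{E}\exp(tX_i) = \cosh t \le e^{t^2/2}$, so the product is $\cosh^N t \le e^{Nt^2/2}$. A quick numerical check of this explicit case (illustration only, not part of the notes):

```python
import math

# E exp(t * eps) = cosh(t) for a symmetric Bernoulli sign eps,
# and cosh(t) <= exp(t^2 / 2) for all t, so the product over N
# independent signs satisfies a sub-gaussian mgf bound.
N = 10
for t in [0.1, 0.5, 1.0, 2.0, 5.0]:
    assert math.cosh(t) <= math.exp(t * t / 2)
    assert math.cosh(t) ** N <= math.exp(N * t * t / 2)
```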
The rotation invariance immediately yields a {\em large deviation inequality}
for sums of independent sub-gaussian random variables:
\begin{proposition}[Hoeffding-type inequality] \index{Hoeffding-type inequality} \label{sub-gaussian large deviations}
Let $X_1,\ldots,X_N$ be independent centered sub-gaussian random variables,
and let $K = \max_i \|X_i\|_{\psi_2}$.
Then for every $a = (a_1,\ldots,a_N) \in \mathbb{R}^N$ and every $t \ge 0$, we have
$$
\mathbb{P} \Big\{ \Big| \sum_{i=1}^N a_i X_i \Big| \ge t \Big\}
\le e \cdot \exp \Big( -\frac{ct^2}{K^2\|a\|_2^2} \Big)
$$
where $c>0$ is an absolute constant.
\end{proposition}
\begin{proof}
The rotation invariance (Lemma~\ref{rotation invariance}) implies the bound
$\|\sum_i a_i X_i\|_{\psi_2}^2 \le C \sum_i a_i^2 \|X_i\|_{\psi_2}^2 \le C K^2 \|a\|_2^2$.
Property \eqref{sub-gaussian tail} yields the required tail decay.
\end{proof}
\begin{remark}
One can interpret these results (Lemma~\ref{rotation invariance} and Proposition~\ref{sub-gaussian large deviations})
as one-sided {\em non-asymptotic manifestations of the central limit theorem}. For example,
consider the normalized sum of independent symmetric Bernoulli random variables
$S_N = \frac{1}{\sqrt{N}} \sum_{i=1}^N \varepsilon_i$. Proposition~\ref{sub-gaussian large deviations}
yields the tail bounds $\mathbb{P} \{ |S_N| > t \} \le e \cdot e^{-ct^2}$ for any number of terms $N$.
Up to the absolute constants $e$ and $c$, these tails coincide with those of the standard normal random variable
\eqref{normal tail}.
\end{remark}
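This behavior can be observed in simulation. The sketch below (illustration only, assuming numpy; not part of the notes) estimates the tails of $S_N$ for Bernoulli signs and compares them with the classical Hoeffding bound $\mathbb{P}\{|S_N| > t\} \le 2e^{-t^2/2}$, an explicit instance of the proposition:

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 64, 20000

# S_N = N^{-1/2} * (sum of N independent symmetric Bernoulli signs)
eps = rng.choice([-1.0, 1.0], size=(trials, N))
S = eps.sum(axis=1) / np.sqrt(N)

# Empirical tails stay below the Hoeffding bound 2 exp(-t^2 / 2)
for t in [1.0, 2.0, 3.0]:
    empirical = np.mean(np.abs(S) > t)
    assert empirical <= 2 * np.exp(-t**2 / 2) + 1e-3
```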
Using moment growth \eqref{sub-gaussian moments}
instead of the tail decay \eqref{sub-gaussian tail}, we immediately
obtain from Lemma~\ref{rotation invariance} a general form of the well known Khintchine inequality:
\begin{corollary}[Khintchine inequality] \index{Khinchine inequality} \label{Khintchine}
Let $X_i$ be a finite number of independent sub-gaussian random variables
with zero mean, unit variance, and $\|X_i\|_{\psi_2} \le K$.
Then, for every sequence of coefficients $a_i$ and every exponent $p \ge 2$ we have
$$
\big( \sum_i a_i^2 \big)^{1/2}
\le \big( \mathbb{E} \big| \sum_i a_i X_i \big|^p \big)^{1/p}
\le C K \sqrt{p} \, \big( \sum_i a_i^2 \big)^{1/2}
$$
where $C$ is an absolute constant.
\end{corollary}
\begin{proof}
The lower bound follows by independence and H\"older's inequality: indeed,
$\big( \mathbb{E} \big| \sum_i a_i X_i \big|^p \big)^{1/p} \ge \big( \mathbb{E} \big| \sum_i a_i X_i \big|^2 \big)^{1/2}
= \big( \sum_i a_i^2 \big)^{1/2}$. For the upper bound, we argue as in Proposition~\ref{sub-gaussian large deviations},
but use property \eqref{sub-gaussian moments}.
\end{proof}
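For symmetric Bernoulli signs and $p = 4$ the moment in Khintchine's inequality can be computed exactly by enumerating all sign patterns, since $\mathbb{E}\big(\sum_i a_i \varepsilon_i\big)^4 = 3\big(\sum_i a_i^2\big)^2 - 2\sum_i a_i^4$. A small check (illustration only, assuming numpy; not part of the notes):

```python
import numpy as np
from itertools import product

a = np.array([1.0, 2.0, 3.0, 4.0])
l2 = np.sqrt((a**2).sum())

# Exact 4th moment of sum a_i * eps_i by enumerating all 2^4 sign patterns
fourth = [np.dot(a, s) ** 4 for s in product([-1, 1], repeat=len(a))]
m4 = np.mean(fourth) ** 0.25

assert l2 <= m4                                              # lower Khintchine bound
assert np.isclose(m4, (3 * (a**2).sum()**2 - 2 * (a**4).sum()) ** 0.25)
```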
\subsection{Sub-exponential random variables} \index{Sub-exponential!random variables} \label{s: sub-exponential}
Although the class of sub-gaussian random variables is natural and quite wide,
it leaves out some useful random variables which have tails heavier than gaussian.
One such example is a standard exponential random variable
-- a non-negative random variable with exponential tail decay
\begin{equation} \label{exponential}
\mathbb{P}\{X \ge t\} = e^{-t}, \quad t \ge 0.
\end{equation}
To cover such examples, we consider the class of {\em sub-exponential random variables} --
those whose tails decay at least exponentially fast. With appropriate
modifications, the basic properties of sub-gaussian random variables carry over
to sub-exponential ones.
In particular, a version of Lemma~\ref{sub-gaussian properties} holds with a similar proof
for sub-exponential properties, except for property 4 of the moment generating function.
Thus for a random variable $X$ the following properties are equivalent with parameters
$K_i > 0$ differing from each other by at most an absolute constant factor:
\begin{gather}
\mathbb{P} \{ |X| > t \} \le \exp(1-t/K_1) \quad \text{for all } t \ge 0; \label{sub-exponential tail} \\
(\mathbb{E} |X|^p)^{1/p} \le K_2 p \quad \text{for all } p \ge 1; \label{sub-exponential moments} \\
\mathbb{E} \exp(X/K_3) \le e. \label{sub-exponential integrability}
\end{gather}
\begin{definition}[Sub-exponential random variables] \index{Sub-exponential!random variables}
A random variable $X$ that satisfies one of the equivalent properties
\eqref{sub-exponential tail} -- \eqref{sub-exponential integrability}
is called a {\em sub-exponential random variable}.
The {\em sub-exponential norm} \index{Sub-exponential!norm} of $X$, denoted $\|X\|_{\psi_1}$,
is defined to be the smallest parameter $K_2$.
In other words,
$$
\|X\|_{\psi_1} = \sup_{p \ge 1} p^{-1} (\mathbb{E} |X|^p)^{1/p}.
$$
\end{definition}
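The standard exponential variable \eqref{exponential} shows why the scaling $K_2 p$ is the right one here: its moments are $\mathbb{E} X^p = p!$, so $(\mathbb{E} X^p)^{1/p} = (p!)^{1/p} \le p$, while $(p!)^{1/p}/\sqrt{p}$ grows like $\sqrt{p/e}$, so $X$ is sub-exponential but not sub-gaussian. A quick check (illustration only, not part of the notes):

```python
import math

# For X ~ Exp(1): E X^p = p!, hence (E X^p)^{1/p} = (p!)^{1/p}
for p in range(1, 60):
    assert math.factorial(p) ** (1 / p) <= p      # sub-exponential moment growth

# ...but (p!)^{1/p} / sqrt(p) is unbounded, so X is not sub-gaussian
assert math.factorial(40) ** (1 / 40) / math.sqrt(40) > 2.0
```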
\begin{lemma}[Sub-exponential is sub-gaussian squared] \label{sub-exponential squared}
A random variable $X$ is sub-gaussian if and only if $X^2$ is sub-exponential.
Moreover,
$$
\|X\|_{\psi_2}^2 \le \|X^2\|_{\psi_1} \le 2 \|X\|_{\psi_2}^2.
$$
\end{lemma}
\begin{proof}
This follows easily from the definition.
\end{proof}
The moment generating function of a sub-exponential random variable has a similar
upper bound as in the sub-gaussian case (property 4 in Lemma~\ref{sub-gaussian properties}).
The only real difference is that the bound only holds in a neighborhood of zero rather than on the
whole real line. This is inevitable, as the moment generating function
of an exponential random variable \eqref{exponential} does not exist for $t \ge 1$.
\begin{lemma}[Mgf of sub-exponential random variables] \label{sub-exponential mgf}
Let $X$ be a centered sub-exponential random variable. Then, for $t$ such that
$|t| \le c/\|X\|_{\psi_1}$, one has
$$
\mathbb{E} \exp(t X) \le \exp(C t^2 \|X\|_{\psi_1}^2)
$$
where $C, c > 0$ are absolute constants.
\end{lemma}
\begin{proof}
The argument is similar to the sub-gaussian case.
We can assume that $\|X\|_{\psi_1}=1$ by replacing $X$ with $X/\|X\|_{\psi_1}$
and $t$ with $t \|X\|_{\psi_1}$. Repeating the proof of the
implication 2 $\Rightarrow$ 4 of Lemma~\ref{sub-gaussian properties}
and using $\mathbb{E}|X|^p \le p^p$ this time, we obtain that
$\mathbb{E} \exp(tX) \le 1 + \sum_{p=2}^\infty (e|t|)^p$.
If $|t| \le 1/(2e)$ then the right hand side is bounded by $1 + 2e^2 t^2 \le \exp(2e^2 t^2)$.
This completes the proof.
\end{proof}
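The final step of the proof uses nothing more than a geometric series: for $|t| \le 1/(2e)$ one has $1 + \sum_{p \ge 2}(e|t|)^p = 1 + (et)^2/(1 - e|t|) \le 1 + 2e^2t^2 \le \exp(2e^2t^2)$. A numerical check of this chain (illustration only, not part of the notes):

```python
import math

# For |t| <= 1/(2e) the geometric tail sums to (e t)^2 / (1 - e|t|),
# which is at most 2 e^2 t^2, which in turn is at most exp(2 e^2 t^2) - 1.
for t in [0.0, 0.05, 0.10, 0.15]:          # all below 1/(2e) ~ 0.184
    x = math.e * t
    series = 1 + x * x / (1 - x)
    assert series <= 1 + 2 * math.e**2 * t * t <= math.exp(2 * math.e**2 * t * t)
```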
Sub-exponential random variables satisfy a {\em large deviation inequality}
similar to the one for sub-gaussians (Proposition~\ref{sub-gaussian large deviations}).
The only significant difference is that {\em two tails} have to appear here --
a gaussian tail responsible for the central limit theorem,
and an exponential tail coming from the tails of each term.
\begin{proposition}[Bernstein-type inequality] \index{Bernstein-type inequality} \label{sub-exponential large deviations}
Let~$X_1,\ldots,X_N$ be independent centered sub-exponential random variables,
and $K = \max_i \|X_i\|_{\psi_1}$.
Then for every $a = (a_1,\ldots,a_N) \in \mathbb{R}^N$
and every $t \ge 0$, we have
$$
\mathbb{P} \Big\{ \Big| \sum_{i=1}^N a_i X_i \Big| \ge t \Big\}
\le 2 \exp \Big[ -c \min \Big( \frac{t^2}{K^2\|a\|_2^2}, \; \frac{t}{K\|a\|_\infty} \Big) \Big]
$$
where $c>0$ is an absolute constant.
\end{proposition}
\begin{proof}
Without loss of generality, we assume that $K=1$ by replacing $X_i$ with $X_i/K$
and $t$ with $t/K$. We use the exponential Markov inequality for
the sum $S = \sum_i a_i X_i$ and with a parameter $\lambda>0$:
$$
\mathbb{P} \{ S \ge t \}
= \mathbb{P} \{ e^{\lambda S} \ge e^{\lambda t} \}
\le e^{-\lambda t} \mathbb{E} e^{\lambda S}
= e^{-\lambda t} \prod_i \mathbb{E} \exp (\lambda a_i X_i).
$$
If $|\lambda| \le c/\|a\|_\infty$ then $|\lambda a_i| \le c$ for all $i$, so Lemma~\ref{sub-exponential mgf} yields
$$
\mathbb{P} \{ S \ge t \}
\le e^{-\lambda t} \; \prod_i \exp (C \lambda^2 a_i^2)
= \exp(-\lambda t + C \lambda^2 \|a\|_2^2).
$$
Choosing $\lambda = \min\big( t/(2C\|a\|_2^2), \; c/\|a\|_\infty \big)$, we obtain that
$$
\mathbb{P} \{ S \ge t \} \le \exp \Big[ - \min \Big( \frac{t^2}{4C\|a\|_2^2}, \; \frac{ct}{2\|a\|_\infty} \Big) \Big].
$$
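Indeed, the two cases of the minimum can be checked separately:
$$
\lambda = \frac{t}{2C\|a\|_2^2}: \quad -\lambda t + C \lambda^2 \|a\|_2^2 = -\frac{t^2}{4C\|a\|_2^2};
\qquad
\lambda = \frac{c}{\|a\|_\infty} \le \frac{t}{2C\|a\|_2^2}: \quad
C \lambda^2 \|a\|_2^2 \le \frac{\lambda t}{2},
$$
so in the second case the exponent is at most $-\lambda t/2 = -ct/(2\|a\|_\infty)$.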
Repeating this argument for $-X_i$ instead of $X_i$, we obtain the same bound for
$\mathbb{P} \{ -S \ge t \}$. A combination of these two bounds completes the proof.
\end{proof}
\begin{corollary} \label{average sub-exponentials}
Let $X_1,\ldots,X_N$ be independent centered sub-exponential random variables,
and let $K = \max_i \|X_i\|_{\psi_1}$.
Then, for every $\varepsilon \ge 0$, we have
$$
\mathbb{P} \Big\{ \Big| \sum_{i=1}^N X_i \Big| \ge \varepsilon N \Big\}
\le 2 \exp \Big[ -c \min \Big( \frac{\varepsilon^2}{K^2}, \; \frac{\varepsilon}{K} \Big) N \Big]
$$
where $c>0$ is an absolute constant.
\end{corollary}
\begin{proof}
This follows from Proposition~\ref{sub-exponential large deviations} for $a_i=1$ and $t=\varepsilon N$.
\end{proof}
\begin{remark}[Centering] \label{centering}
The definitions of sub-gaussian and sub-exponential random variables $X$ do not require
them to be centered. In any case, one can always center $X$ using the simple fact that
if $X$ is sub-gaussian (or sub-exponential), then so is $X - \mathbb{E} X$. Moreover,
$$
\|X - \mathbb{E} X\|_{\psi_2} \le 2 \|X\|_{\psi_2}, \quad
\|X - \mathbb{E} X\|_{\psi_1} \le 2 \|X\|_{\psi_1}.
$$
This follows from the triangle inequality
$\|X - \mathbb{E} X\|_{\psi_2} \le \|X\|_{\psi_2} + \|\mathbb{E} X\|_{\psi_2}$ together with
$\|\mathbb{E} X\|_{\psi_2} = |\mathbb{E} X| \le \mathbb{E}|X| \le \|X\|_{\psi_2}$,
and similarly for the sub-exponential norm.
\end{remark}
\subsection{Isotropic random vectors} \index{Isotropic random vectors} \label{s: isotropic}
Now we carry our work over to higher dimensions. We will thus
be working with random vectors $X$ in $\mathbb{R}^n$,
or equivalently probability distributions in $\mathbb{R}^n$.
While the concept of the mean $\mu = \mathbb{E} Z$ of a random variable $Z$ remains the same
in higher dimensions, the second moment $\mathbb{E} Z^2$
is replaced by the $n \times n$ {\em second moment matrix} \index{Second moment matrix}
of a random vector $X$, defined as
$$
\Sigma = \Sigma(X) = \mathbb{E} X \otimes X = \mathbb{E} X X^T
$$
where $\otimes$ denotes the outer product of vectors in $\mathbb{R}^n$.
Similarly, the concept of variance $\Var(Z) = \mathbb{E}(Z - \mu)^2 = \mathbb{E} Z^2 - \mu^2$ of a random variable
is replaced in higher dimensions with the {\em covariance matrix} \index{Covariance matrix} of a random vector
$X$, defined as
$$
\Cov(X) = \mathbb{E} (X-\mu) \otimes (X-\mu) = \mathbb{E} X \otimes X - \mu \otimes \mu
$$
where $\mu = \mathbb{E} X$.
By translation, many questions can be reduced to the case of centered random vectors,
for which $\mu = 0$ and $\Cov(X) = \Sigma(X)$. We will also need a higher-dimensional
version of unit variance:
\begin{definition}[Isotropic random vectors] \index{Isotropic random vectors}
A random vector $X$ in $\mathbb{R}^n$ is called {\em isotropic} if $\Sigma(X) = I$.
Equivalently, $X$ is isotropic if
\begin{equation} \label{isotropy}
\mathbb{E} \< X, x\> ^2 = \|x\|_2^2 \quad \text{for all } x \in \mathbb{R}^n.
\end{equation}
\end{definition}
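The equivalence in \eqref{isotropy} follows by expanding the quadratic form: for every $x \in \mathbb{R}^n$,
$$
\mathbb{E} \< X, x\> ^2 = \mathbb{E} \, x^T (X \otimes X) x = \< \Sigma(X) x, x\> ,
$$
and a symmetric matrix $\Sigma(X)$ equals $I$ if and only if $\< \Sigma(X) x, x\> = \|x\|_2^2$ for all $x \in \mathbb{R}^n$.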
Suppose $\Sigma(X)$ is an invertible matrix, which means that the distribution of $X$ is
not essentially supported on any proper subspace of $\mathbb{R}^n$.
Then $\Sigma(X)^{-1/2} X$ is an isotropic random vector in $\mathbb{R}^n$.
Thus every non-degenerate random vector can be made isotropic by an appropriate
linear transformation.\footnote{This transformation (usually preceded by centering)
is a higher-dimensional version of {\em standardizing} of random variables, which enforces zero mean and unit variance.}
This allows us to mostly focus on studying isotropic random vectors in the future.
\begin{lemma} \label{norm isotropic}
Let $X,Y$ be independent isotropic random vectors in $\mathbb{R}^n$. Then
$\mathbb{E} \|X\|_2^2 = n$ and $\mathbb{E} \< X,Y\> ^2 = n$.
\end{lemma}
\begin{proof}
The first part follows from
$\mathbb{E} \|X\|_2^2 = \mathbb{E} \tr(X \otimes X) = \tr(\mathbb{E} X \otimes X) = \tr(I) = n$.
The second part follows by conditioning on $Y$, using isotropy of $X$ and using the first part for $Y$:
this way we obtain $\mathbb{E} \< X,Y\> ^2 = \mathbb{E} \|Y\|_2^2 = n$.
\end{proof}
\begin{example} \label{random vectors}
\begin{enumerate}
\item {\bf (Gaussian):}
The (standard) {\em Gaussian random vector} \index{Gaussian!random vectors} $X$ in $\mathbb{R}^n$ chosen according to the
standard normal distribution $N(0, I)$ is isotropic. The coordinates of $X$
are independent standard normal random variables.
\item {\bf (Bernoulli):} \index{Bernoulli!random vectors} A similar example of a discrete isotropic distribution is
given by a {\em Bernoulli random vector} $X$ in $\mathbb{R}^n$ whose
coordinates are independent symmetric Bernoulli random variables.
\item {\bf (Product distributions):} More generally, consider a random vector $X$
in $\mathbb{R}^n$ whose coordinates are independent random variables with zero mean and unit variance.
Then clearly $X$ is an isotropic vector in $\mathbb{R}^n$.
\item {\bf (Coordinate):} \index{Coordinate random vectors}
Consider a {\em coordinate random vector} $X$, which is
uniformly distributed in the set $\{ \sqrt{n} \, e_i \}_{i=1}^n$
where $\{e_i \}_{i=1}^n$ is the canonical basis of $\mathbb{R}^n$.
Clearly $X$ is an isotropic random vector in $\mathbb{R}^n$.\footnote{The examples of Gaussian
and coordinate random vectors are somewhat opposite --
one is very continuous and the other is very discrete. They may be used as test
cases in our study of random matrices.}
\item {\bf (Frame):} \index{Frames} This is a more general version of the coordinate random vector.
A {\em frame} is a set of vectors $\{u_i\}_{i=1}^M$ in $\mathbb{R}^n$
which obeys an approximate Parseval's identity, i.e. there exist numbers $A,B>0$ called {\em frame bounds}
such that
$$
A\|x\|_2^2 \le \sum_{i=1}^M \< u_i, x \> ^2 \le B\|x\|_2^2 \quad \text{for all } x \in \mathbb{R}^n.
$$
If $A=B$ the set is called a {\em tight frame}.
Thus, tight frames are generalizations of orthogonal bases without linear independence.
Given a tight frame $\{u_i\}_{i=1}^M$ with bounds $A=B=M$,
the random vector $X$ uniformly distributed in the set
$\{u_i \}_{i=1}^M$ is clearly isotropic in $\mathbb{R}^n$.\footnote{There is clearly a reverse implication, too,
which shows that the class of tight frames can be identified with the class of discrete isotropic random vectors.}
\item{\bf (Spherical):} \index{Spherical random vector}
Consider a random vector $X$ uniformly distributed on the Euclidean
sphere in $\mathbb{R}^n$ centered at the origin and with radius $\sqrt{n}$. Then $X$ is isotropic.
Indeed, by rotation invariance $\mathbb{E} \< X,x\> ^2$ is proportional to $\|x\|_2^2$; the correct normalization
$\sqrt{n}$ is derived from Lemma~\ref{norm isotropic}.
\item {\bf (Uniform on a convex set):} In convex geometry, a convex set $K$ in $\mathbb{R}^n$
is called isotropic if a random vector $X$ chosen uniformly from $K$ according
to the volume is isotropic. As we noted, every full-dimensional convex set can be made into an isotropic one by
an affine transformation. Isotropic convex sets look ``well conditioned'',
which is advantageous in geometric algorithms (e.g. volume computations).
\end{enumerate}
\end{example}
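The normalization in the spherical example above can be checked directly: if $X$ is uniform on the sphere of radius $r$, then by symmetry $\mathbb{E} X_i X_j = 0$ for $i \ne j$ and $\mathbb{E} X_i^2 = r^2/n$ for all $i$, since $\sum_{i=1}^n \mathbb{E} X_i^2 = \mathbb{E} \|X\|_2^2 = r^2$. Hence
$$
\mathbb{E} \< X, x\> ^2 = \sum_{i=1}^n x_i^2 \, \mathbb{E} X_i^2 = \frac{r^2}{n} \|x\|_2^2,
$$
which equals $\|x\|_2^2$ precisely when $r = \sqrt{n}$.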
We generalize the concepts of sub-gaussian random variables to higher dimensions
using one-dimensional marginals.
\begin{definition}[Sub-gaussian random vectors] \index{Sub-gaussian!random vectors} \label{d: sub-gaussian}
We say that a random vector $X$ in $\mathbb{R}^n$ is {\em sub-gaussian}
if the one-dimensional marginals $\< X, x\> $ are sub-gaussian random variables for all $x \in \mathbb{R}^n$.
The {\em sub-gaussian norm} \index{Sub-gaussian!norm} of $X$ is defined as
$$
\|X\|_{\psi_2} = \sup_{x \in S^{n-1}} \|\< X, x\> \|_{\psi_2}.
$$
\end{definition}
\begin{remark}[Properties of high-dimensional distributions]
The definitions of isotropic and sub-gaussian distributions suggest that more generally,
natural properties of high-dimensional distributions may be defined via one-dimensional marginals.
This is a natural way to generalize properties of random variables to random vectors.
For example, we shall call a random vector sub-exponential if all of its one-dimensional
marginals are sub-exponential random variables, etc.
\end{remark}
One simple way to create sub-gaussian distributions in $\mathbb{R}^n$ is by taking a product
of $n$ sub-gaussian distributions on the line:
\begin{lemma}[Product of sub-gaussian distributions] \label{sub-gaussian products}
Let $X_1,\ldots,X_n$ be independent centered sub-gaussian random variables.
Then $X = (X_1,\ldots,X_n)$ is a centered sub-gaussian random vector in $\mathbb{R}^n$, and
$$
\|X\|_{\psi_2} \le C \max_{i \le n} \|X_i\|_{\psi_2}
$$
where $C$ is an absolute constant.
\end{lemma}
\begin{proof}
This is a direct consequence of the rotation invariance principle, Lemma~\ref{rotation invariance}.
Indeed, for every $x = (x_1,\ldots,x_n) \in S^{n-1}$ we have
$$
\|\< X, x\> \|_{\psi_2}^2
= \Big\| \sum_{i=1}^n x_i X_i \Big\|_{\psi_2}^2
\le C \sum_{i=1}^n x_i^2 \|X_i\|_{\psi_2}^2
\le C \max_{i \le n} \|X_i\|_{\psi_2}^2
$$
where we used that $\sum_{i=1}^n x_i^2 = 1$.
This completes the proof.
\end{proof}
\begin{example} \label{random vectors sub-gaussian}
Let us analyze the basic examples of random vectors introduced earlier
in Example~\ref{random vectors}.
\begin{enumerate}
\item {\bf (Gaussian, Bernoulli):} \index{Gaussian!random vectors} \index{Bernoulli!random vectors}
Gaussian and Bernoulli random vectors are sub-gaussian;
their sub-gaussian norms are bounded by an absolute constant.
These are particular cases of Lemma~\ref{sub-gaussian products}.
\item {\bf (Spherical):} \index{Spherical random vector} A spherical random vector is also sub-gaussian;
its sub-gaussian norm is bounded by an absolute constant.
Unfortunately, this does not follow from Lemma~\ref{sub-gaussian products} because the coordinates
of the spherical vector are not independent. Instead, by rotation invariance, the claim
clearly follows from the following geometric fact.
For every $\varepsilon \ge 0$, the spherical cap $\{ x \in S^{n-1}:\; x_1 > \varepsilon\}$ makes up
at most $\exp(-\varepsilon^2 n/2)$ proportion of the total area on the sphere.\footnote{This
fact about spherical caps may seem counter-intuitive. For example, for $\varepsilon = 0.1$ the
cap looks similar to a hemisphere, but the proportion of its area goes to zero
very fast as dimension $n$ increases.
This is a starting point of the study of the {\em concentration of measure phenomenon},
see \cite{Ledoux}.}
This can be proved directly by integration, and also by elementary geometric considerations \cite[Lemma~2.2]{Ball}.
\item {\bf (Coordinate):} \index{Coordinate random vectors}
Although the coordinate random vector $X$ is formally sub-gaussian
as its support is finite, its sub-gaussian norm is too big: $\|X\|_{\psi_2} \asymp \sqrt{n/\log n} \gg 1$.
So we would not think of $X$ as a sub-gaussian random vector.
\item {\bf (Uniform on a convex set):} For many isotropic convex sets $K$ (called $\psi_2$ bodies),
a random vector $X$ uniformly distributed in $K$ is sub-gaussian with $\|X\|_{\psi_2} = O(1)$.
For example, the cube $[-1,1]^n$ is a $\psi_2$
body by Lemma~\ref{sub-gaussian products}, while the appropriately
normalized cross-polytope $\{ x \in \mathbb{R}^n:\; \|x\|_1 \le M \}$ is not.
Nevertheless, Borell's lemma (which is a consequence of Brunn-Minkowski inequality)
implies a weaker property, that $X$ is always {\em sub-exponential},
and $\|X\|_{\psi_1} = \sup_{x \in S^{n-1}} \|\< X, x\> \|_{\psi_1}$ is bounded by an absolute constant.
See \cite[Section~2.2.b$_3$]{GiMi} for a proof and discussion of these ideas.
\end{enumerate}
\end{example}
\subsection{Sums of independent random matrices} \label{s: sums matrices}
In this section, we mention without proof some results of classical probability theory
in which scalars can be replaced by matrices.
Such results are useful in particular for problems on random matrices,
since we can view a random matrix as a generalization of a random variable.
One such remarkable generalization is valid for Khintchine inequality,
Corollary~\ref{Khintchine}. The scalars $a_i$ can be replaced by matrices, and the absolute value
by the {\em Schatten norm}. \index{Schatten norm}
Recall that for $1 \le p \le \infty$, the $p$-Schatten norm of an $n \times n$ matrix $A$ is defined as
the $\ell_p$ norm of the sequence of its singular values:
$$
\|A\|_{C_p^n} = \| (s_i(A))_{i=1}^n\|_p = \big( \sum_{i=1}^n s_i(A)^p \big)^{1/p}.
$$
For $p=\infty$, the Schatten norm equals the spectral norm $\|A\| = \max_{i \le n} s_i(A)$.
Using this, one can quickly check that already for $p = \log n$ the Schatten and spectral
norms are equivalent: $\|A\| \le \|A\|_{C_p^n} \le e \|A\|$.
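The quick check behind this equivalence is the chain
$$
\|A\| = s_{\max}(A) \le \Big( \sum_{i=1}^n s_i(A)^p \Big)^{1/p} \le \big( n \, s_{\max}(A)^p \big)^{1/p} = n^{1/p} \, \|A\|,
$$
and $n^{1/p} = e$ when $p = \log n$.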
\begin{theorem}[Non-commutative Khintchine inequality, see \cite{Pisier operator} Section 9.8]
\index{Khinchine inequality!non-commutative} \label{non-commutative Khintchine}
\hfill Let $A_1, \ldots, A_N$ be self-adjoint $n \times n$ matrices and
$\varepsilon_1, \ldots, \varepsilon_N$ be independent symmetric Bernoulli random variables.
Then, for every $2 \le p < \infty$, we have
$$
\Big\| \Big( \sum_{i=1}^N A_i^2 \Big)^{1/2} \Big\|_{C_p^n}
\le \Big( \mathbb{E} \Big\| \sum_{i=1}^N \varepsilon_i A_i \Big\|_{C_p^n}^p \Big)^{1/p}
\le C \sqrt{p} \, \Big\| \Big( \sum_{i=1}^N A_i^2 \Big)^{1/2} \Big\|_{C_p^n}
$$
where $C$ is an absolute constant.
\end{theorem}
\begin{remark}
\begin{enumerate}
\item The scalar case of this result, for $n=1$, recovers the classical Khintchine inequality,
Corollary~\ref{Khintchine}, for $X_i = \varepsilon_i$.
\item By the equivalence of Schatten and spectral norms for $p=\log n$,
a version of non-commutative Khintchine inequality holds for the spectral norm:
\begin{equation} \label{Khintchine operator norm}
\mathbb{E} \Big\| \sum_{i=1}^N \varepsilon_i A_i \Big\|
\le C_1 \sqrt{\log n} \, \Big\| \Big( \sum_{i=1}^N A_i^2 \Big)^{1/2} \Big\|
\end{equation}
where $C_1$ is an absolute constant. The logarithmic factor is unfortunately essential;
its role will become clear when we discuss applications of this result to random matrices in the next sections.
\end{enumerate}
\end{remark}
\begin{corollary}[Rudelson's inequality \cite{Rudelson isotropic}] \index{Rudelson's inequality} \label{Rudelson}
Let $x_1, \ldots, x_N$ be vectors in $\mathbb{R}^n$ and
$\varepsilon_1, \ldots, \varepsilon_N$ be independent symmetric Bernoulli random variables.
Then
$$
\mathbb{E} \Big\| \sum_{i=1}^N \varepsilon_i x_i \otimes x_i \Big\|
\le C \sqrt{\log \min(N,n)} \cdot \max_{i \le N} \|x_i\|_2 \cdot \Big\| \sum_{i=1}^N x_i \otimes x_i \Big\|^{1/2}
$$
where $C$ is an absolute constant.
\end{corollary}
\begin{proof}
One can assume that $n \le N$ by replacing $\mathbb{R}^n$ with the linear span of $\{x_1,\ldots,x_N\}$
if necessary. The claim then follows from \eqref{Khintchine operator norm}, since
$$
\Big\| \Big( \sum_{i=1}^N (x_i \otimes x_i)^2 \Big)^{1/2} \Big\|
= \Big\| \sum_{i=1}^N \|x_i\|_2^2 \; x_i \otimes x_i \Big\|^{1/2}
\le \max_{i \le N} \|x_i\|_2 \Big\| \sum_{i=1}^N x_i \otimes x_i \Big\|^{1/2}. \qedhere
$$
\end{proof}
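The first equality in the last display uses the identity
$$
(x \otimes x)^2 = (x x^T)(x x^T) = x \, (x^T x) \, x^T = \|x\|_2^2 \; x \otimes x,
$$
applied to each vector $x_i$.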
Ahlswede and Winter \cite{AW} pioneered a different approach to matrix-valued
inequalities in probability theory, which was based on trace inequalities such as the Golden-Thompson
inequality. A development of this idea leads to remarkably sharp results. We quote one such inequality
from \cite{Tropp}:
\begin{theorem}[Non-commutative Bernstein-type inequality \cite{Tropp}]
\index{Bernstein-type inequality!non-commutative} \label{matrix Bernstein}
Consider a finite sequence $X_i$ of independent centered self-adjoint random $n \times n$ matrices.
Assume we have for some numbers $K$ and $\sigma$ that
$$
\|X_i\| \le K \text{ almost surely}, \quad \big\| \sum_i \mathbb{E} X_i^2 \big\| \le \sigma^2.
$$
Then, for every $t \ge 0$ we have
\begin{equation} \label{eq matrix Bernstein}
\mathbb{P} \Big\{ \big\| \sum_i X_i \big\| \ge t \Big\} \le 2 n \cdot \exp \Big( \frac{-t^2/2}{\sigma^2 + Kt/3} \Big).
\end{equation}
\end{theorem}
\begin{remark} \label{mixed tail}
This is a direct matrix generalization of a classical Bernstein's inequality for bounded random variables.
To compare it with our version of Bernstein's inequality for sub-exponentials,
Proposition~\ref{sub-exponential large deviations},
note that the probability bound in \eqref{eq matrix Bernstein} is equivalent to
$2n \cdot \exp \big[ -c \min \big( \frac{t^2}{\sigma^2}, \frac{t}{K} \big) \big]$ where $c>0$ is an absolute constant.
In both results we see a mixture of gaussian and exponential tails.
\end{remark}
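One way to see this equivalence is to split into two cases: if $Kt \le \sigma^2$ then $\sigma^2 + Kt/3 \le \frac{4}{3} \sigma^2$, while if $Kt > \sigma^2$ then $\sigma^2 + Kt/3 < \frac{4}{3} Kt$. In both cases
$$
\frac{t^2/2}{\sigma^2 + Kt/3} \ge \frac{3}{8} \min \Big( \frac{t^2}{\sigma^2}, \; \frac{t}{K} \Big),
$$
and a similar two-case computation gives the reverse comparison with another absolute constant.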
\section{Random matrices with independent entries} \label{s: entries}
We are ready to study the extreme singular values of random matrices.
In this section, we consider the classical model of random matrices whose entries are independent and centered
random variables. Later we will study the more difficult models where
only the rows or the columns are independent.
The reader may keep in mind some classical examples of $N \times n$ random matrices with independent entries.
The most classical example is the {\em Gaussian random matrix} $A$ \index{Gaussian!random matrices}
whose entries are independent standard normal random variables. In this case,
the $n \times n$ symmetric matrix $A^*A$ is called a Wishart matrix; it is a higher-dimensional analogue of
a chi-square distributed random variable.
The simplest example of discrete random matrices is the {\em Bernoulli random matrix} $A$
\index{Bernoulli!random matrices} whose entries
are independent symmetric Bernoulli random variables. In other words, Bernoulli random matrices are distributed
uniformly in the set of all $N \times n$ matrices with $\pm 1$ entries.
\subsection{Limit laws and Gaussian matrices}
Consider an $N \times n$ random matrix $A$ whose entries are independent centered identically distributed
random variables. By now, the {\em limiting behavior} of the extreme singular values of $A$,
as the dimensions $N, n \to \infty$, is well understood:
\begin{theorem}[Bai-Yin's law, see \cite{Bai-Yin}] \index{Bai-Yin's law} \label{Bai-Yin}
Let $A = A_{N,n}$ be an $N \times n$ random matrix whose entries
are independent copies of a random variable with zero mean, unit variance,
and finite fourth moment. Suppose that the dimensions $N$ and $n$ grow to infinity
while the aspect ratio $n/N$ converges to a constant in $[0,1]$.
Then
$$
s_{\min}(A) = \sqrt{N} - \sqrt{n} + o(\sqrt{n}), \quad
s_{\max}(A) = \sqrt{N} + \sqrt{n} + o(\sqrt{n}) \quad
\text{almost surely}.
$$
\end{theorem}
As we pointed out in the introduction, our program is to find non-asymptotic
versions of Bai-Yin's law. There is precisely one model of random matrices, namely Gaussian,
where an {\em exact} non-asymptotic result is known:
\begin{theorem}[Gordon's theorem for Gaussian matrices] \index{Gordon's theorem} \label{Gaussian}
Let $A$ be an $N \times n$ matrix whose entries
are independent standard normal random variables. Then
$$
\sqrt{N} - \sqrt{n} \le \mathbb{E} s_{\min}(A) \le \mathbb{E} s_{\max}(A) \le \sqrt{N} + \sqrt{n}.
$$
\end{theorem}
The proof of the upper bound, which we borrowed from \cite{DS}, is based
on Slepian's comparison inequality for Gaussian processes.\footnote{Recall that a Gaussian process $(X_t)_{t \in T}$
is a collection of centered normal random variables $X_t$ on the same probability space, indexed by
points $t$ in an abstract set $T$.}
\begin{lemma}[Slepian's inequality, see \cite{Ledoux-Talagrand} Section 3.3] \index{Slepian's inequality} \label{Slepian}
Consider two Gaussian processes $(X_t)_{t \in T}$ and $(Y_t)_{t \in T}$
whose increments satisfy the inequality
$\mathbb{E} |X_s - X_t|^2 \le \mathbb{E} |Y_s - Y_t|^2$ for all $s,t \in T$.
Then
$\mathbb{E} \sup_{t \in T} X_t \le \mathbb{E} \sup_{t \in T} Y_t$.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{Gaussian}]
We recognize
$s_{\max}(A) = \max_{u \in S^{n-1}, \; v \in S^{N-1}} \< Au, v\> $
to be the supremum of the Gaussian process $X_{u,v} = \< Au, v\> $ indexed by the pairs
of vectors $(u,v) \in S^{n-1} \times S^{N-1}$. We shall compare this process to the
following one whose supremum is easier to estimate:
$Y_{u,v} = \< g, u\> + \< h, v\> $ where $g \in \mathbb{R}^n$ and $h \in \mathbb{R}^N$
are independent standard Gaussian random vectors.
The rotation invariance of Gaussian measures makes it easy to compare
the increments of these processes. For every $(u,v), (u',v') \in S^{n-1} \times S^{N-1}$,
one can check that
$$
\mathbb{E} |X_{u,v} - X_{u',v'}|^2
= \sum_{i=1}^n \sum_{j=1}^N |u_i v_j - u'_i v'_j|^2
\le \|u - u'\|_2^2 + \|v - v'\|_2^2
= \mathbb{E} |Y_{u,v} - Y_{u',v'}|^2.
$$
Therefore Lemma~\ref{Slepian} applies, and it yields
the required bound
$$
\mathbb{E} s_{\max}(A) = \mathbb{E} \max_{(u,v)} X_{u,v}
\le \mathbb{E} \max_{(u,v)}Y_{u,v}
= \mathbb{E} \|g\|_2 + \mathbb{E} \|h\|_2
\le \sqrt{N} + \sqrt{n}.
$$
Similar ideas are used to estimate
$\mathbb{E} s_{\min}(A) = \mathbb{E} \max_{v \in S^{N-1}} \min_{u \in S^{n-1}} \< Au, v\> $,
see \cite{DS}.
One uses in this case Gordon's generalization of Slepian's
inequality for minimax of
Gaussian processes \cite{Gordon 84, Gordon 85, Gordon 92}, see \cite[Section 3.3]{Ledoux-Talagrand}.
\end{proof}
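The inequality between the increments can be verified explicitly: writing $a = \< u, u'\> $ and $b = \< v, v'\> $, for unit vectors one computes
$$
\sum_{i=1}^n \sum_{j=1}^N |u_i v_j - u'_i v'_j|^2 = 2 - 2ab, \qquad
\|u - u'\|_2^2 + \|v - v'\|_2^2 = 4 - 2a - 2b,
$$
and $2 - 2ab \le 4 - 2a - 2b$ is equivalent to $(1-a)(1-b) \ge 0$, which holds since $a, b \le 1$.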
While Theorem~\ref{Gaussian} is about the expectation of singular values, it also yields
a large deviation inequality for them.
It can be deduced formally by using the {\em concentration of measure}
in the Gauss space.
\begin{proposition}[Concentration in Gauss space, see \cite{Ledoux}]
\index{Concentration of measure} \label{Gaussian concentration}
Let $f$ be a real valued Lipschitz function on $\mathbb{R}^n$ with Lipschitz constant $K$, i.e.
$|f(x)-f(y)| \le K \|x-y\|_2$ for all $x,y \in \mathbb{R}^n$ (such functions are also called $K$-Lipschitz).
Let $X$ be the standard normal random
vector in $\mathbb{R}^n$. Then for every $t \ge 0$ one has
$$
\mathbb{P} \{ f(X) - \mathbb{E} f(X) > t \} \le \exp(-t^2/2K^2).
$$
\end{proposition}
\begin{corollary}[Gaussian matrices, deviation; see \cite{DS}] \index{Gaussian!random matrices} \label{Gaussian deviation}
Let $A$ be an $N \times n$ matrix whose entries
are independent standard normal random variables.
Then for every $t \ge 0$, with probability at least $1 - 2 \exp(-t^2/2)$ one has
$$
\sqrt{N} - \sqrt{n} - t \le s_{\min}(A) \le s_{\max}(A) \le
\sqrt{N} + \sqrt{n} + t.
$$
\end{corollary}
\begin{proof}
Note that $s_{\min}(A)$, $s_{\max}(A)$ are $1$-Lipschitz functions of matrices $A$ considered
as vectors in $\mathbb{R}^{Nn}$. The conclusion now follows from the estimates on the expectation
(Theorem~\ref{Gaussian}) and Gaussian concentration (Proposition~\ref{Gaussian concentration}).
\end{proof}
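The $1$-Lipschitz property used in this proof is standard: since $s_{\max}(A) = \sup_{x \in S^{n-1}} \|Ax\|_2$ and $s_{\min}(A) = \inf_{x \in S^{n-1}} \|Ax\|_2$, one has
$$
|s_{\max}(A) - s_{\max}(B)| \le \|A - B\|
\quad \text{and} \quad
|s_{\min}(A) - s_{\min}(B)| \le \|A - B\|,
$$
and the operator norm is dominated by the Euclidean norm of $A - B$ viewed as a vector in $\mathbb{R}^{Nn}$.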
Later in these notes, we find it more convenient to work with the $n \times n$
positive-semidefinite symmetric matrix $A^*A$ rather than with the original $N \times n$ matrix $A$.
Observe that the normalized matrix $\bar{A} = \frac{1}{\sqrt{N}} A$ is an approximate isometry
(which is our goal) if and only if $\bar{A}^*\bar{A}$ is an approximate identity:
\begin{lemma}[Approximate isometries] \index{Approximate isometries} \label{approximate isometries}
Consider a matrix $B$ that satisfies
\begin{equation} \label{B*B}
\|B^*B - I\| \le \max(\delta,\delta^2)
\end{equation}
for some $\delta > 0$. Then
\begin{equation} \label{smin smax B}
1-\delta \le s_{\min}(B) \le s_{\max}(B) \le 1+\delta.
\end{equation}
Conversely, if $B$ satisfies \eqref{smin smax B} for some $\delta > 0$ then
$\|B^*B - I\| \le 3 \max(\delta,\delta^2)$.
\end{lemma}
\begin{proof}
Inequality \eqref{B*B} holds if and only if
$\big| \|Bx\|_2^2 - 1 \big| \le \max(\delta,\delta^2)$ for all $x \in S^{n-1}$.
Similarly, \eqref{smin smax B} holds if and only if
$\big| \|Bx\|_2 - 1 \big| \le \delta$ for all $x \in S^{n-1}$.
The conclusion then follows from the elementary inequality
$$
\max(|z-1|, |z-1|^2) \le |z^2-1| \le 3 \max(|z-1|, |z-1|^2) \quad \text{for all } z \ge 0. \qedhere
$$
\end{proof}
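The elementary inequality itself can be checked from the factorization
$$
|z^2 - 1| = |z - 1| \, (z + 1), \quad z \ge 0:
$$
if $0 \le z \le 2$ then $\max(|z-1|, |z-1|^2) = |z-1|$ and $1 \le z+1 \le 3$, while if $z > 2$ then the maximum is $|z-1|^2$ and $z - 1 \le z + 1 \le 3(z-1)$.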
Lemma~\ref{approximate isometries} reduces our task of proving
inequalities \eqref{heuristic} to showing an equivalent (but often more convenient)
bound
$$
\big\| \frac{1}{N} A^*A-I \big\| \le \max(\delta, \delta^2)
\quad \text{where } \delta = O(\sqrt{n/N}).
$$
\subsection{General random matrices with independent entries}
Now we pass to a more general model of random matrices whose entries
are independent centered random variables with some general distribution
(not necessarily normal). The largest singular value (the spectral norm)
can be estimated by Latala's theorem for general random matrices with non-identically
distributed entries:
\begin{theorem}[Latala's theorem \cite{Latala}] \index{Latala's theorem} \label{Latala}
Let $A$ be a random matrix whose entries $a_{ij}$ are independent centered
random variables with finite fourth moment. Then
$$
\mathbb{E} s_{\max}(A) \le C \Big[ \max_i \big( \sum_j \mathbb{E} a_{ij}^2 \big)^{1/2}
+ \max_j \big( \sum_i \mathbb{E} a_{ij}^2 \big)^{1/2}
+ \big( \sum_{i,j} \mathbb{E} a_{ij}^4 \big)^{1/4} \Big].
$$
\end{theorem}
If the variance and the fourth moments of the entries are uniformly bounded,
then Latala's result yields
$s_{\max}(A) = O(\sqrt{N} + \sqrt{n})$. This is slightly weaker than our goal \eqref{heuristic},
which is $s_{\max}(A) = \sqrt{N} + O(\sqrt{n})$, but it is still satisfactory for most applications.
Results of the latter type will appear later in the more general model of random matrices
with independent rows or columns.
Similarly, our goal \eqref{heuristic} for the smallest singular value is $s_{\min}(A) \ge \sqrt{N} - O(\sqrt{n})$.
Since the singular values are non-negative anyway, such inequality would only be useful
for sufficiently tall matrices, $N \gg n$. For almost square and square matrices, estimating
the smallest singular value (known also as the {\em hard edge} of spectrum) is considerably more difficult.
The progress on estimating the hard edge is summarized in \cite{RV ICM}.
If $A$ has independent entries, then indeed $s_{\min}(A) \ge c (\sqrt{N} - \sqrt{n})$,
and the following is an optimal probability bound:
\begin{theorem}[Independent entries, hard edge \cite{RV rectangular}] \index{Hard edge of spectrum}
\label{RV rectangular}
Let $A$ be an $N \times n$ random matrix, $N \ge n$, whose entries are independent
identically distributed sub-gaussian random variables
with zero mean and unit variance.
Then for $\varepsilon \ge 0$,
$$
\mathbb{P} \big( s_{\min}(A) \le \varepsilon (\sqrt{N} - \sqrt{n-1}) \big) \le (C\varepsilon)^{N-n+1} + c^N
$$
where $C > 0$ and $c \in (0,1)$ depend only on the sub-gaussian
norm of the entries.
\end{theorem}
This result gives an optimal bound for square matrices as well ($N=n$).
\section{Random matrices with independent rows} \label{s: rows}
In this section, we focus on a more general model of random matrices, where we only
assume independence of the rows rather than all entries.
Such matrices are naturally
{\em generated by high-dimensional distributions}.
Indeed, given an arbitrary probability distribution in $\mathbb{R}^n$, one takes
a sample of $N$ independent points and arranges them as the rows of an $N \times n$ matrix $A$.
By studying spectral properties of $A$ one should be able to learn something useful about the
underlying distribution. For example, as we will see in Section~\ref{s: covariance},
the extreme singular values of $A$ would tell us whether the covariance matrix
of the distribution can be estimated from a sample of size $N$.
The picture will vary slightly depending on whether the rows of $A$
are sub-gaussian or have arbitrary distribution. For heavy-tailed distributions, an extra
logarithmic factor has to appear in our desired inequality \eqref{heuristic}.
The analysis of sub-gaussian and heavy-tailed matrices will be completely different.
There is an abundance of examples where the results of
this section may be useful. They include all matrices with independent entries,
whether sub-gaussian such as Gaussian and Bernoulli, or completely general
distributions with mean zero and unit variance. In the latter case
one is able to dispense with the fourth moment assumption, which is
necessary in Bai-Yin's law, Theorem~\ref{Bai-Yin}.
Other examples of interest come from non-product distributions, some of which we saw
in Example~\ref{random vectors}. Sampling from discrete objects (matrices and frames)
fits well in this framework, too. Given a deterministic matrix $B$, one puts a uniform distribution on
the set of the rows of $B$ and creates
a random matrix $A$ as before -- by sampling some $N$ random rows from $B$.
Applications to sampling will be discussed in Section~\ref{s: sub-matrices}.
\subsection{Sub-gaussian rows}
\index{Sub-gaussian!random matrices with independent rows} \label{s: sub-gaussian rows}
The following result goes in the direction of our goal \eqref{heuristic} for
random matrices with independent sub-gaussian rows.
\begin{theorem}[Sub-gaussian rows] \label{sub-gaussian rows}
Let $A$ be an $N \times n$ matrix whose rows $A_i$ are independent
sub-gaussian isotropic random vectors in $\mathbb{R}^n$.
Then for every $t \ge 0$, with probability at least $1 - 2\exp(-ct^2)$ one has
\begin{equation} \label{smin smax rectangular}
\sqrt{N} - C \sqrt{n} - t \le s_{\min}(A) \le s_{\max}(A) \le \sqrt{N} + C \sqrt{n} + t.
\end{equation}
Here $C = C_K$, $c = c_K > 0$ depend only on the sub-gaussian norm
$K = \max_i \|A_i\|_{\psi_2}$ of the rows.
\end{theorem}
This result is a general version of Corollary~\ref{Gaussian deviation} (up to absolute constants);
instead of independent Gaussian entries we allow independent sub-gaussian rows.
This of course covers all matrices with independent sub-gaussian entries
such as Gaussian and Bernoulli. It also applies to some natural matrices whose entries
are not independent. One such example is a matrix whose rows are independent spherical
random vectors (Example~\ref{random vectors sub-gaussian}).
\begin{proof}
The proof is a basic version of a {\em covering argument}, \index{Covering argument} and it has three steps.
We need to control $\|Ax\|_2$ for all
vectors $x$ on the unit sphere $S^{n-1}$. To this end, we discretize the sphere using a net $\mathcal{N}$
(the approximation step), establish a tight control of $\|Ax\|_2$ for every fixed vector $x \in \mathcal{N}$
with high probability (the concentration step), and finish off by taking a union bound over all
$x$ in the net. The concentration step will be based on the deviation inequality
for sub-exponential random variables, Corollary~\ref{average sub-exponentials}.
{\bf Step 1: Approximation.}
Recalling Lemma~\ref{approximate isometries} for the matrix $B=A/\sqrt{N}$ we see
that the conclusion of the theorem is equivalent to
\begin{equation} \label{A*A rows}
\big\| \frac{1}{N}A^*A-I \big\| \le \max(\delta, \delta^2) =:\varepsilon
\quad \text{where} \quad
\delta = C \sqrt{\frac{n}{N}} + \frac{t}{\sqrt{N}}.
\end{equation}
Using Lemma~\ref{norm on net}, we can evaluate the operator norm in \eqref{A*A rows}
on a $\frac{1}{4}$-net $\mathcal{N}$ of the unit sphere $S^{n-1}$:
$$
\big\| \frac{1}{N}A^*A-I \big\|
\le 2 \max_{x \in \mathcal{N}} \big| \big\langle (\frac{1}{N}A^*A-I)x, x \big\rangle \big|
= 2 \max_{x \in \mathcal{N}} \big| \frac{1}{N} \|Ax\|_2^2 - 1 \big|.
$$
So to complete the proof it suffices to show that, with the required probability,
$$
\max_{x \in \mathcal{N}} \big| \frac{1}{N} \|Ax\|_2^2 - 1 \big| \le \frac{\varepsilon}{2}.
$$
By Lemma~\ref{net cardinality}, we can choose the net $\mathcal{N}$ so that it has cardinality
$|\mathcal{N}| \le 9^n$.
{\bf Step 2: Concentration.}
Let us fix any vector $x \in S^{n-1}$. We can express $\|Ax\|_2^2$ as a sum of independent
random variables
\begin{equation} \label{Ax as sum}
\|Ax\|_2^2 = \sum_{i=1}^N \< A_i, x\> ^2 =: \sum_{i=1}^N Z_i^2
\end{equation}
where $A_i$ denote the rows of the matrix $A$.
By assumption, $Z_i = \< A_i, x\> $ are independent sub-gaussian random variables
with $\mathbb{E} Z_i^2 = 1$ and $\|Z_i\|_{\psi_2} \le K$.
Therefore, by Remark~\ref{centering} and Lemma~\ref{sub-exponential squared},
$Z_i^2 - 1$ are independent centered sub-exponential random variables with
$\|Z_i^2-1\|_{\psi_1} \le 2\|Z_i^2\|_{\psi_1} \le 4 \|Z_i\|_{\psi_2}^2 \le 4 K^2$.
We can therefore use an exponential deviation inequality, Corollary~\ref{average sub-exponentials},
to control the sum \eqref{Ax as sum}. Since
$K \ge \|Z_i\|_{\psi_2} \ge \frac{1}{\sqrt{2}} (\mathbb{E}|Z_i|^2)^{1/2} = \frac{1}{\sqrt{2}}$, this gives
\begin{align*}
\mathbb{P} \Big\{ \big| \frac{1}{N} \|Ax\|_2^2 - 1 \big| \ge \frac{\varepsilon}{2} \Big\}
&= \mathbb{P} \Big\{ \big| \frac{1}{N}\sum_{i=1}^N Z_i^2 - 1 \big| \ge \frac{\varepsilon}{2} \Big\}
\le 2 \exp \Big[ - \frac{c_1}{K^4} \min(\varepsilon^2, \varepsilon) N \Big] \\
&= 2 \exp \Big[ - \frac{c_1}{K^4} \delta^2 N \Big]
\le 2 \exp \Big[ - \frac{c_1}{K^4} (C^2 n + t^2) \Big]
\end{align*}
where the last inequality follows by the definition of $\delta$
and using the inequality $(a+b)^2 \ge a^2 + b^2$ for $a,b \ge 0$.
{\bf Step 3: Union bound.}
Taking the union bound over all vectors $x$ in the net $\mathcal{N}$ of cardinality $|\mathcal{N}| \le 9^n$,
we obtain
$$
\mathbb{P} \Big\{ \max_{x \in \mathcal{N}} \big| \frac{1}{N} \|Ax\|_2^2 - 1 \big| \ge \frac{\varepsilon}{2} \Big\}
\le 9^n \cdot 2 \exp \Big[ - \frac{c_1}{K^4} (C^2 n + t^2) \Big]
\le 2 \exp \Big( - \frac{c_1 t^2}{K^4} \Big)
$$
where the second inequality follows for $C = C_K$ sufficiently large,
e.g. $C = K^2 \sqrt{\ln 9/c_1}$.
As we noted in Step~1, this completes the proof of the theorem.
\end{proof}
\begin{remark}[Non-isotropic distributions] \label{r: non-isotropic}
\begin{enumerate}
\item A version of Theorem~\ref{sub-gaussian rows} holds for general,
non-isotropic sub-gaussian distributions.
Assume that $A$ is an $N \times n$ matrix whose rows $A_i$ are independent
sub-gaussian random vectors in $\mathbb{R}^n$ with second moment matrix $\Sigma$.
Then for every $t \ge 0$, the following inequality holds with probability at least $1 - 2\exp(-ct^2)$:
\begin{equation} \label{A*A rows non-isotropic}
\big\| \frac{1}{N}A^*A-\Sigma \big\| \le \max(\delta, \delta^2)
\quad \text{where} \quad
\delta = C \sqrt{\frac{n}{N}} + \frac{t}{\sqrt{N}}.
\end{equation}
Here, as before, $C = C_K$, $c = c_K > 0$ depend only on the sub-gaussian norm
$K = \max_i \|A_i\|_{\psi_2}$ of the rows. This result is a general version of \eqref{A*A rows}.
It follows by a straightforward modification of the argument of Theorem~\ref{sub-gaussian rows}.
\item A more natural, multiplicative form of \eqref{A*A rows non-isotropic} is the following.
Assume that $\Sigma^{-1/2} A_i$ are isotropic sub-gaussian random vectors, and let $K$
be the maximum of their sub-gaussian norms. Then
for every $t \ge 0$, the following inequality holds with probability at least $1 - 2\exp(-ct^2)$:
\begin{equation} \label{A*A rows non-isotropic multiplicative}
\big\| \frac{1}{N}A^*A-\Sigma \big\| \le \max(\delta, \delta^2) \, \|\Sigma\|
\quad \text{where} \quad
\delta = C \sqrt{\frac{n}{N}} + \frac{t}{\sqrt{N}}.
\end{equation}
Here again $C = C_K$, $c = c_K > 0$. This result follows from Theorem~\ref{sub-gaussian rows}
applied to the isotropic random vectors $\Sigma^{-1/2} A_i$.
\end{enumerate}
\end{remark}
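The non-isotropic bound \eqref{A*A rows non-isotropic} can also be checked numerically. The following Python sketch (an illustration only, with a hypothetical diagonal covariance) verifies that the operator-norm error of $\frac{1}{N}A^*A$ is of order $\sqrt{n/N}$ for Gaussian rows.

```python
import numpy as np

# Illustrative sketch of the non-isotropic bound: for Gaussian rows with a
# (hypothetical, diagonal) second moment matrix Sigma, the operator-norm
# error of (1/N) A*A should be of order sqrt(n/N).
rng = np.random.default_rng(1)
n, N = 20, 20000
Sigma = np.diag(1.0 / np.arange(1, n + 1))        # assumed covariance
A = rng.standard_normal((N, n)) @ np.sqrt(Sigma)  # rows ~ N(0, Sigma)

err = np.linalg.norm(A.T @ A / N - Sigma, ord=2)  # ||(1/N)A*A - Sigma||
print(err, np.sqrt(n / N))
```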
\subsection{Heavy-tailed rows}
\index{Heavy-tailed!random matrices with independent rows} \label{s: heavy-tailed rows}
The class of sub-gaussian random variables in Theorem~\ref{sub-gaussian rows} may sometimes
be too restrictive in applications. For example, if the rows of $A$
are independent coordinate or frame random vectors
(Examples~\ref{random vectors} and \ref{random vectors sub-gaussian}),
they are poorly sub-gaussian and Theorem~\ref{sub-gaussian rows} is too weak.
In such cases, one would use the following result instead, which operates in remarkable generality.
\begin{theorem}[Heavy-tailed rows] \label{heavy-tailed rows}
Let $A$ be an $N \times n$ matrix whose rows $A_i$ are independent
isotropic random vectors in $\mathbb{R}^n$. Let $m$ be a number such that
$\|A_i\|_2 \le \sqrt{m}$ almost surely for all $i$.
Then for every $t \ge 0$, one has
\begin{equation} \label{eq heavy-tailed rows}
\sqrt{N} - t \sqrt{m} \le s_{\min}(A) \le s_{\max}(A) \le \sqrt{N} + t \sqrt{m}
\end{equation}
with probability at least $1 - 2 n \cdot \exp(-ct^2)$,
where $c>0$ is an absolute constant.
\end{theorem}
Recall that $(\mathbb{E} \|A_i\|_2^2)^{1/2} = \sqrt{n}$ by Lemma~\ref{norm isotropic}.
This indicates that one would typically use Theorem~\ref{heavy-tailed rows} with $m = O(n)$.
In this case the result takes the form
\begin{equation} \label{heavy-tailed m=n}
\sqrt{N} - t \sqrt{n} \le s_{\min}(A) \le s_{\max}(A) \le \sqrt{N} + t \sqrt{n}
\end{equation}
with probability at least $1 - 2n \cdot \exp(-c't^2)$. This is a form of our desired inequality
\eqref{heuristic} for heavy-tailed matrices. We shall discuss this more after the proof.
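As a numerical illustration (not part of the argument), the following Python sketch checks \eqref{heavy-tailed m=n} for the coordinate distribution, the prototypical heavy-tailed example; the factor $40$ in the choice of $N$ is illustrative.

```python
import numpy as np

# Illustrative sketch with the coordinate distribution: each row is
# sqrt(n) * e_j for a uniform index j, an isotropic vector with
# ||A_i||_2 = sqrt(n) exactly, so we may take m = n in the theorem.
rng = np.random.default_rng(2)
n = 30
N = 40 * n * int(np.log(n) + 1)       # N >> n log n
cols = rng.integers(0, n, size=N)
A = np.zeros((N, n))
A[np.arange(N), cols] = np.sqrt(n)

s = np.linalg.svd(A, compute_uv=False)
spread = (s[0] - s[-1]) / np.sqrt(N)  # relative spread of singular values
print(s[-1] > 0, spread)
```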
\begin{proof}
We shall use the non-commutative Bernstein's inequality, Theorem~\ref{matrix Bernstein}.
{\bf Step 1: Reduction to a sum of independent random matrices.}
We first note that $m \ge n \ge 1$ since by Lemma~\ref{norm isotropic} we have $\mathbb{E} \|A_i\|_2^2 = n$.
Now we start an argument parallel to Step~1 of Theorem~\ref{sub-gaussian rows}.
Recalling Lemma~\ref{approximate isometries} for the matrix $B=A/\sqrt{N}$ we see
that the desired inequalities \eqref{eq heavy-tailed rows} are equivalent to
\begin{equation} \label{A*A heavy-tailed}
\big\| \frac{1}{N}A^*A-I \big\| \le \max(\delta, \delta^2) =:\varepsilon
\quad \text{where} \quad
\delta = t \sqrt{\frac{m}{N}}.
\end{equation}
We express this random matrix as a sum of independent random matrices:
$$
\frac{1}{N} A^*A - I = \frac{1}{N} \sum_{i=1}^N A_i \otimes A_i - I
= \sum_{i=1}^N X_i,
\quad \text{where } X_i := \frac{1}{N} (A_i \otimes A_i - I);
$$
note that $X_i$ are independent centered $n \times n$ random matrices.
{\bf Step 2: Estimating the mean, range and variance.}
We are going to apply the non-commutative Bernstein inequality, Theorem~\ref{matrix Bernstein}, for the sum $\sum_i X_i$.
Since $A_i$ are isotropic random vectors, we have $\mathbb{E} A_i \otimes A_i = I$
which implies that $\mathbb{E} X_i = 0$ as required in the non-commutative Bernstein inequality.
We estimate the range of $X_i$ using that $\|A_i\|_2 \le \sqrt{m}$ and $m \ge 1$:
$$
\|X_i\|
\le \frac{1}{N} ( \|A_i \otimes A_i\| + 1)
= \frac{1}{N} (\|A_i\|_2^2 + 1)
\le \frac{1}{N} (m + 1)
\le \frac{2 m}{N}
=: K.
$$
To estimate the total variance $\|\sum_i \mathbb{E} X_i^2\|$, we first compute
$$
X_i^2 = \frac{1}{N^2} \big[ (A_i \otimes A_i)^2 - 2(A_i \otimes A_i) + I \big],
$$
so, using the isotropy assumption $\mathbb{E} A_i \otimes A_i = I$, we obtain
\begin{equation} \label{Xi squared}
\mathbb{E} X_i^2 = \frac{1}{N^2} \big[ \mathbb{E} (A_i \otimes A_i)^2 - I \big].
\end{equation}
Since $(A_i \otimes A_i)^2 = \|A_i\|_2^2 \, A_i \otimes A_i$ is a positive semi-definite matrix
and $\|A_i\|_2^2 \le m$ by assumption, we have
$\big\| \mathbb{E} (A_i \otimes A_i)^2 \big\| \le m \cdot \| \mathbb{E} A_i \otimes A_i \| = m$.
Putting this into \eqref{Xi squared} we obtain
$$
\| \mathbb{E} X_i^2 \| \le \frac{1}{N^2} (m + 1) \le \frac{2 m}{N^2}
$$
where we again used that $m \ge 1$.
This yields\footnote{Here the seemingly crude application of the triangle inequality is actually not
so loose. If the rows $A_i$ are identically distributed, then so are $X_i^2$,
which makes the triangle inequality above an equality.}
$$
\Big\| \sum_{i=1}^N \mathbb{E} X_i^2 \Big\|
\le N \cdot \max_i \| \mathbb{E} X_i^2 \|
\le \frac{2m}{N}
=: \sigma^2.
$$
{\bf Step 3: Application of the non-commutative Bernstein's inequality.}
\index{Bernstein-type inequality!non-commutative}
Applying Theorem~\ref{matrix Bernstein} (see Remark~\ref{mixed tail})
and recalling the definitions of $\varepsilon$ and $\delta$ in \eqref{A*A heavy-tailed},
we bound the probability in question as
\begin{align*}
\mathbb{P} &\Big\{ \Big\| \frac{1}{N} A^*A - I \Big\| \ge \varepsilon \Big\}
= \mathbb{P} \Big\{ \Big\| \sum_{i=1}^N X_i \Big\| \ge \varepsilon \Big\}
\le 2n \cdot \exp \Big[ -c \min \Big( \frac{\varepsilon^2}{\sigma^2}, \frac{\varepsilon}{K} \Big) \Big] \\
&\le 2n \cdot \exp \Big[ -c \min(\varepsilon^2,\varepsilon) \cdot \frac{N}{2m} \Big]
= 2n \cdot \exp \Big( - \frac{c \delta^2 N}{2m} \Big)
= 2n \cdot \exp(-ct^2/2).
\end{align*}
This completes the proof.
\end{proof}
Theorem~\ref{heavy-tailed rows} for heavy-tailed rows is different from
Theorem~\ref{sub-gaussian rows} for sub-gaussian rows in two ways:
the boundedness assumption\footnote{Going a little
ahead, we would like to point out that the almost sure boundedness can be relaxed to
the bound in expectation $\mathbb{E} \max_i \|A_i\|_2^2 \le m$,
see Theorem~\ref{heavy-tailed rows exp si}.}
$\|A_i\|_2^2 \le m$ appears, and the probability bound is weaker.
We will now comment on both differences.
\begin{remark}[Boundedness assumption] \label{r: boundedness}
Observe that some boundedness assumption on the distribution
is needed in Theorem~\ref{heavy-tailed rows}.
Let us see this on the following example. Choose $\delta \in (0,1)$ arbitrarily small, and
consider a random vector $X = \delta^{-1/2} \xi Y$ in $\mathbb{R}^n$,
where $\xi$ is a $\{0,1\}$-valued random variable with $\mathbb{E} \xi = \delta$ (a ``selector'')
and $Y$ is an independent isotropic random vector in $\mathbb{R}^n$ with an arbitrary distribution.
Then $X$ is also an isotropic random vector.
Consider an $N \times n$ random matrix $A$ whose rows $A_i$ are independent copies of $X$.
If $\delta$ is suitably small (depending on $N$), then $A = 0$ with high probability,
hence no nontrivial lower bound on $s_{\min}(A)$ is possible.
\end{remark}
Inequality \eqref{heavy-tailed m=n} almost fits our goal \eqref{heuristic}, but not quite: the probability
bound is non-trivial only when $t \ge C \sqrt{\log n}$. Therefore, in reality Theorem~\ref{heavy-tailed rows}
asserts that
\begin{equation} \label{goal log}
\sqrt{N} - C\sqrt{n \log n} \le s_{\min}(A) \le s_{\max}(A) \le \sqrt{N} + C\sqrt{n \log n}
\end{equation}
with probability, say $0.9$. This achieves our goal \eqref{heuristic} up to a logarithmic factor.
\begin{remark}[Logarithmic factor]
The logarithmic factor cannot be removed from \eqref{goal log} for some heavy-tailed distributions.
Consider for instance the coordinate distribution introduced in Example~\ref{random vectors}.
In order that $s_{\min}(A) > 0$ there must be no zero columns in $A$. Equivalently, each coordinate vector
$e_1, \ldots,e_n$ \index{Coordinate random vectors} must be picked at least once in $N$ independent trials
(each row of $A$ picks an independent coordinate vector).
Recalling the classical coupon collector's problem, one must make at least $N \ge C n \log n$ trials to make this occur
with high probability. Thus the logarithm is necessary in the left hand side of \eqref{goal log}.\footnote{This argument
moreover shows the optimality of the probability bound in Theorem~\ref{heavy-tailed rows}.
For example, for $t = \sqrt{N}/2\sqrt{n}$ the conclusion \eqref{heavy-tailed m=n} implies
that $A$ is well conditioned (i.e. $\sqrt{N}/2 \le s_{\min}(A) \le s_{\max}(A) \le 2 \sqrt{N}$)
with probability $1 - n \cdot \exp(-cN/n)$.
On the other hand, by the coupon collector's problem we estimate the probability that $s_{\min}(A) > 0$ as
$1 - n \cdot (1- \frac{1}{n})^N \approx 1 - n \cdot \exp(-N/n)$.}
\end{remark}
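The coupon-collector obstruction in the remark above is easy to observe empirically. The following Python sketch (illustrative only; the factor $4$ is a crude choice) estimates the probability that $N$ uniform coordinate draws cover all $n$ coordinates, i.e. that $s_{\min}(A) > 0$ for the coordinate distribution.

```python
import numpy as np

# Illustrative coupon-collector simulation: with coordinate rows,
# s_min(A) > 0 iff all n coordinates are hit at least once, which
# requires roughly n log n draws.
rng = np.random.default_rng(3)
n, trials = 50, 200

def cover_fraction(N):
    # fraction of trials in which N uniform draws hit all n coordinates
    hits = sum(
        len(set(rng.integers(0, n, size=N).tolist())) == n
        for _ in range(trials)
    )
    return hits / trials

few = cover_fraction(n)                        # N = n: essentially never
many = cover_fraction(int(4 * n * np.log(n)))  # N ~ 4 n log n: almost always
print(few, many)
```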
A version of Theorem~\ref{heavy-tailed rows} holds for general, non-isotropic distributions.
It is convenient to state it in terms of the equivalent estimate \eqref{A*A heavy-tailed}:
\begin{theorem}[Heavy-tailed rows, non-isotropic] \label{heavy-tailed rows non-isotropic}
Let $A$ be an $N \times n$ matrix whose rows $A_i$ are independent
random vectors in $\mathbb{R}^n$ with the common second moment matrix $\Sigma = \mathbb{E} A_i \otimes A_i$.
Let $m$ be a number such that $\|A_i\|_2 \le \sqrt{m}$ almost surely for all $i$.
Then for every $t \ge 0$, the following inequality holds with probability at least $1 - n \cdot \exp(-ct^2)$:
\begin{equation} \label{A*A heavy-tailed rows non-isotropic}
\big\| \frac{1}{N}A^*A-\Sigma \big\| \le \max(\|\Sigma\|^{1/2}\delta, \delta^2)
\quad \text{where} \quad
\delta = t \sqrt{\frac{m}{N}}.
\end{equation}
Here $c>0$ is an absolute constant.
In particular, this inequality yields
\begin{equation} \label{A heavy-tailed rows non-isotropic}
\|A\| \le \|\Sigma\|^{1/2} \sqrt{N} + t \sqrt{m}.
\end{equation}
\end{theorem}
\begin{proof}
We note that $m \ge \|\Sigma\|$ because
$\|\Sigma\| = \|\mathbb{E} A_i \otimes A_i\| \le \mathbb{E} \|A_i \otimes A_i\| = \mathbb{E} \|A_i\|_2^2 \le m$.
Then \eqref{A*A heavy-tailed rows non-isotropic} follows by a straightforward modification of the argument
of Theorem~\ref{heavy-tailed rows}. Furthermore, if \eqref{A*A heavy-tailed rows non-isotropic} holds, then
by the triangle inequality
\begin{align*}
\frac{1}{N} \|A\|^2
&= \big\| \frac{1}{N} A^*A \big\|
\le \|\Sigma\| + \big\| \frac{1}{N}A^*A-\Sigma \big\| \\
&\le \|\Sigma\| + \|\Sigma\|^{1/2}\delta + \delta^2
\le (\|\Sigma\|^{1/2} + \delta)^2.
\end{align*}
Taking square roots and multiplying both sides by $\sqrt{N}$, we obtain \eqref{A heavy-tailed rows non-isotropic}.
\end{proof}
\bigskip
The {\em almost sure} boundedness requirement in Theorem~\ref{heavy-tailed rows} may sometimes be too
restrictive in applications, and it can be relaxed to a bound {\em in expectation}:
\begin{theorem}[Heavy-tailed rows; expected singular values] \label{heavy-tailed rows exp si}
Let $A$ be an $N \times n$ matrix whose rows $A_i$ are independent
isotropic random vectors in $\mathbb{R}^n$. Let
$m := \mathbb{E} \max_{i \le N} \|A_i\|_2^2$. Then
$$
\mathbb{E} \max_{j \le n} |s_j(A) - \sqrt{N}|
\le C \sqrt{m \log \min(N,n)}
$$
where $C$ is an absolute constant.
\end{theorem}
The proof of this result is similar to that of Theorem~\ref{heavy-tailed rows}, except that this time
we will use Rudelson's Corollary~\ref{Rudelson} instead of matrix Bernstein's inequality.
To this end, we need a link to symmetric Bernoulli random variables. This is provided by
a general {\em symmetrization argument}:
\begin{lemma}[Symmetrization] \index{Symmetrization} \label{symmetrization}
Let $(X_i)$ be a finite sequence of independent random vectors valued in some Banach space,
and $(\varepsilon_i)$ be independent symmetric Bernoulli random variables.
Then
\begin{equation} \label{eq symmetrization}
\mathbb{E} \Big\| \sum_i (X_i - \mathbb{E} X_i) \Big\|
\le 2 \mathbb{E} \Big\| \sum_i \varepsilon_i X_i \Big\|.
\end{equation}
\end{lemma}
\begin{proof}
We define random variables $\tilde{X}_i = X_i - X_i'$
where $(X_i')$ is an independent copy of the sequence
$(X_i)$.
Then $\tilde{X}_i$ are independent symmetric random variables, i.e. the sequence
$(\tilde{X}_i)$ is distributed
identically with $(-\tilde{X}_i)$ and thus also with $(\varepsilon_i \tilde{X}_i)$.
Replacing $\mathbb{E} X_i$ by $\mathbb{E} X_i'$ in \eqref{eq symmetrization} and using
Jensen's inequality, symmetry, and triangle inequality, we obtain the required inequality
\begin{align*}
\mathbb{E} \Big\| \sum_i (X_i - \mathbb{E} X_i) \Big\|
&\le \mathbb{E} \Big\| \sum_i \tilde{X}_i \Big\|
= \mathbb{E} \Big\| \sum_i \varepsilon_i \tilde{X}_i \Big\| \\
&\le \mathbb{E} \Big\| \sum_i \varepsilon_i X_i \Big\| + \mathbb{E} \Big\| \sum_i \varepsilon_i X_i' \Big\|
= 2 \mathbb{E} \Big\| \sum_i \varepsilon_i X_i \Big\|. \qedhere
\end{align*}
\end{proof}
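A Monte Carlo check of the symmetrization inequality \eqref{eq symmetrization} is straightforward in the simplest Banach space, $\mathbb{R}$ with the absolute value as the norm; the following Python sketch (illustrative only, with an arbitrary mean $0.7$) compares the two sides empirically.

```python
import numpy as np

# Monte Carlo check of the symmetrization lemma in the Banach space R
# with the absolute value as norm:
#   E|sum_i (X_i - E X_i)|  <=  2 E|sum_i eps_i X_i|.
rng = np.random.default_rng(4)
k, trials = 20, 20000
mu = 0.7                                   # an arbitrary common mean

X = mu + rng.standard_normal((trials, k))  # X_i ~ N(mu, 1), so E X_i = mu
eps = rng.choice([-1.0, 1.0], size=(trials, k))  # symmetric Bernoulli signs

lhs = np.abs((X - mu).sum(axis=1)).mean()        # E|sum (X_i - E X_i)|
rhs = 2 * np.abs((eps * X).sum(axis=1)).mean()   # 2 E|sum eps_i X_i|
print(lhs, rhs)
```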
We will also need a probabilistic version of Lemma~\ref{approximate isometries} on approximate isometries.
The proof of that lemma was based on the elementary inequality
$|z^2-1| \ge \max( |z-1|, |z-1|^2 )$ for $z \ge 0$. Here is a probabilistic version:
\begin{lemma} \label{deviation from 1}
Let $Z$ be a non-negative random variable.
Then $\mathbb{E}|Z^2-1| \ge \max( \mathbb{E}|Z-1|, (\mathbb{E}|Z-1|)^2 )$.
\end{lemma}
\begin{proof}
Since $|Z-1| \le |Z^2-1|$ pointwise, we have $\mathbb{E} |Z-1| \le \mathbb{E} |Z^2-1|$.
Next, since $|Z-1|^2 \le |Z^2-1|$ pointwise,
taking square roots and expectations we obtain
$\mathbb{E}|Z-1| \le \mathbb{E}|Z^2-1|^{1/2} \le (\mathbb{E}|Z^2-1|)^{1/2}$, where the last bound follows by Jensen's inequality.
Squaring both sides
completes the proof.
\end{proof}
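The lemma can likewise be verified empirically. In the Python sketch below (illustrative only, using $Z = |g|$ for a standard normal $g$ as a test distribution), the first comparison even holds sample by sample, since $|z-1| \le |z^2-1|$ pointwise for $z \ge 0$.

```python
import numpy as np

# Empirical check of the lemma: for a non-negative random variable Z,
#   E|Z^2 - 1| >= max( E|Z - 1|, (E|Z - 1|)^2 ).
rng = np.random.default_rng(5)
Z = np.abs(rng.standard_normal(100000))  # a non-negative test distribution

a = np.abs(Z**2 - 1).mean()              # E|Z^2 - 1|
b = np.abs(Z - 1).mean()                 # E|Z - 1|
print(a, b)
```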
\begin{proof}[Proof of Theorem~\ref{heavy-tailed rows exp si}]
{\bf Step 1: Application of Rudelson's inequality.} \index{Rudelson's inequality}
As in the proof of Theorem~\ref{heavy-tailed rows}, we are going to control
$$
E := \mathbb{E} \big\| \frac{1}{N} A^*A - I \big\|
= \mathbb{E} \Big\| \frac{1}{N} \sum_{i=1}^N A_i \otimes A_i - I \Big\|
\le \frac{2}{N} \, \mathbb{E} \Big\| \sum_{i=1}^N \varepsilon_i A_i \otimes A_i \Big\|
$$
where we used Symmetrization Lemma~\ref{symmetrization}
with independent symmetric Bernoulli random variables $\varepsilon_i$
(which are independent of $A$ as well). The expectation in the right hand side is taken both with respect
to the random matrix $A$ and the signs $(\varepsilon_i)$. Taking first the expectation with respect to $(\varepsilon_i)$
(conditionally on $A$) and afterwards the expectation with respect to $A$,
we obtain by Rudelson's inequality (Corollary~\ref{Rudelson}) that
$$
E \le \frac{C \sqrt{l}}{N} \,
\mathbb{E} \Big( \max_{i \le N} \|A_i\|_2 \cdot \Big\| \sum_{i=1}^N A_i \otimes A_i \Big\|^{1/2} \Big)
$$
where $l = \log \min(N,n)$.
We now apply the Cauchy-Schwarz inequality. Since by the triangle inequality
$\mathbb{E} \big\| \frac{1}{N} \sum_{i=1}^N A_i \otimes A_i \big\| = \mathbb{E} \big\| \frac{1}{N} A^*A \big\| \le E + 1$,
it follows that
$$
E \le C \sqrt{\frac{ml}{N}} (E+1)^{1/2}.
$$
This inequality is easy to solve for $E$. Indeed,
considering the cases $E \le 1$ and $E > 1$ separately, we conclude that
$$
E = \mathbb{E} \big\| \frac{1}{N} A^*A - I \big\| \le \max(\delta, \delta^2)
\quad \text{where } \delta := C \sqrt{\frac{2ml}{N}}.
$$
{\bf Step 2: Diagonalization.}
Diagonalizing the matrix $A^*A$ one checks that
$$
\big\| \frac{1}{N} A^*A - I \big\|
= \max_{j \le n} \big| \frac{s_j(A)^2}{N} - 1 \big|
= \max \Big( \big| \frac{s_{\min}(A)^2}{N} - 1 \big|,
\big| \frac{s_{\max}(A)^2}{N} - 1 \big| \Big).
$$
It follows that
$$
\max \Big( \mathbb{E} \big| \frac{s_{\min}(A)^2}{N} - 1 \big|,
\mathbb{E} \big| \frac{s_{\max}(A)^2}{N} - 1 \big| \Big)
\le \max(\delta,\delta^2).
$$
(Here we replaced the expectation of the maximum by the maximum of the expectations, which is smaller.)
Using Lemma~\ref{deviation from 1} separately for the two terms on the left hand side, we obtain
$$
\max \Big( \mathbb{E} \big| \frac{s_{\min}(A)}{\sqrt{N}} - 1 \big|,
\mathbb{E} \big| \frac{s_{\max}(A)}{\sqrt{N}} - 1 \big| \Big)
\le \delta.
$$
Therefore
\begin{align*}
\mathbb{E} \max_{j \le n} \big| \frac{s_j(A)}{\sqrt{N}}-1 \big|
&= \mathbb{E} \max \Big( \big| \frac{s_{\min}(A)}{\sqrt{N}} - 1 \big|,
\Big| \frac{s_{\max}(A)}{\sqrt{N}} - 1 \Big| \Big) \\
&\le \mathbb{E} \Big( \big| \frac{s_{\min}(A)}{\sqrt{N}} - 1 \big|
+ \big| \frac{s_{\max}(A)}{\sqrt{N}} - 1 \big| \Big)
\le 2\delta.
\end{align*}
Multiplying both sides by $\sqrt{N}$ completes the proof.
\end{proof}
In a way similar to Theorem~\ref{heavy-tailed rows non-isotropic} we note that a version
of Theorem~\ref{heavy-tailed rows exp si} holds for general, non-isotropic distributions.
\begin{theorem}[Heavy-tailed rows, non-isotropic, expectation] \label{heavy-tailed rows exp si non-isotropic}
Let $A$ be an $N \times n$ matrix whose rows $A_i$ are independent
random vectors in $\mathbb{R}^n$ with the common second moment matrix $\Sigma = \mathbb{E} A_i \otimes A_i$.
Let $m := \mathbb{E} \max_{i \le N} \|A_i\|_2^2$.
Then
$$
\mathbb{E} \big\| \frac{1}{N}A^*A-\Sigma \big\| \le \max(\|\Sigma\|^{1/2}\delta, \delta^2)
\quad \text{where} \quad
\delta = C \sqrt{\frac{m \log \min(N,n)}{N}}.
$$
Here $C$ is an absolute constant.
In particular, this inequality yields
$$
\big( \mathbb{E} \|A\|^2 \big)^{1/2} \le \|\Sigma\|^{1/2} \sqrt{N} + C \sqrt{m \log \min(N,n)}.
$$
\end{theorem}
\begin{proof}
The first part follows by a simple modification of the proof of Theorem~\ref{heavy-tailed rows exp si}.
The second part follows from the first like in Theorem~\ref{heavy-tailed rows non-isotropic}.
\end{proof}
\begin{remark}[Non-identical second moments] \label{r: different second moments}
The assumption that the rows $A_i$ have a common
second moment matrix $\Sigma$ is not essential in Theorems~\ref{heavy-tailed rows non-isotropic}
and \ref{heavy-tailed rows exp si non-isotropic}.
The reader will be able to formulate more general versions of these results.
For example, if $A_i$ have arbitrary second moment matrices $\Sigma_i = \mathbb{E} A_i \otimes A_i$
then the conclusion of Theorem~\ref{heavy-tailed rows exp si non-isotropic}
holds with $\Sigma = \frac{1}{N} \sum_{i=1}^N \Sigma_i$.
\end{remark}
\subsection{Applications to estimating covariance matrices}
\index{Covariance matrix!estimation}\label{s: covariance}
One immediate application of our analysis of random matrices is in statistics,
for the fundamental problem of {\em estimating covariance matrices}.
Let $X$ be a random vector in $\mathbb{R}^n$; for simplicity
we assume that $X$ is centered,\footnote{More generally, in this section we estimate
the {\em second moment matrix} $\mathbb{E} X \otimes X$ of an arbitrary random vector $X$
(not necessarily centered).}
$\mathbb{E} X = 0$. Recall that the covariance matrix of
$X$ is the $n \times n$ matrix $\Sigma = \mathbb{E} X \otimes X$, see Section~\ref{s: isotropic}.
The simplest way to estimate $\Sigma$ is to take some $N$ independent
samples $X_i$ from the distribution and form the {\em sample covariance matrix} \index{Sample covariance matrix}
$\Sigma_N = \frac{1}{N} \sum_{i=1}^N X_i \otimes X_i$.
By the law of large numbers, $\Sigma_N \to \Sigma$ almost surely as $N \to \infty$.
So, taking sufficiently many samples we are guaranteed to estimate the covariance matrix
as well as we want. This, however, does not address the quantitative aspect:
what is the minimal {\em sample size} $N$ that guarantees approximation with a given accuracy?
The relation of this question to random matrix theory becomes clear when we
arrange the samples $X_i =: A_i$ as rows of the $N \times n$ random matrix $A$.
Then the sample covariance matrix is expressed as $\Sigma_N = \frac{1}{N}A^*A$.
Note that $A$ is a matrix with independent rows but usually not independent entries (unless
we sample from a product distribution). We worked out the analysis of such matrices
in Section~\ref{s: rows}, separately for sub-gaussian and general distributions.
As an immediate consequence of Theorem~\ref{sub-gaussian rows}, we obtain:
\begin{corollary}[Covariance estimation for sub-gaussian distributions] \label{covariance sub-gaussian} \hfill
Consider a sub-gaussian distribution in $\mathbb{R}^n$ with covariance matrix $\Sigma$,
and let $\varepsilon \in (0,1)$, $t \ge 1$. Then with probability at least $1 - 2 \exp(- t^2 n)$ one has
$$
\text{If } N \ge C(t/\varepsilon)^2 n \quad \text{then } \|\Sigma_N - \Sigma\| \le \varepsilon.
$$
Here $C = C_K$ depends only on the sub-gaussian norm $K = \|X\|_{\psi_2}$ of a random vector
taken from this distribution.
\end{corollary}
\begin{proof}
It follows from \eqref{A*A rows non-isotropic} that for every $s \ge 0$,
with probability at least $1 - 2\exp(-cs^2)$ we have
$\|\Sigma_N-\Sigma\| \le \max(\delta, \delta^2)$ where
$\delta = C \sqrt{n/N} + s/\sqrt{N}$.
The conclusion follows for $s = C' t \sqrt{n}$ where
$C' = C'_K$ is sufficiently large.
\end{proof}
Summarizing, Corollary~\ref{covariance sub-gaussian} shows that the sample size
$$
N = O(n)
$$ suffices to approximate the covariance
matrix of a sub-gaussian distribution in $\mathbb{R}^n$ by the sample covariance matrix.
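This sample-size heuristic is easy to observe numerically. The Python sketch below (illustrative only; the constant $10$ is a guess, not the $C_K$ of the corollary) estimates an identity covariance from $N = O(n/\varepsilon^2)$ Gaussian samples.

```python
import numpy as np

# Illustrative sketch of the corollary: for a Gaussian distribution in R^n,
# a sample of size N ~ C (1/eps)^2 n should bring ||Sigma_N - Sigma||
# below eps.
rng = np.random.default_rng(6)
n, eps = 25, 0.2
N = int(10 * n / eps**2)          # N = O(n / eps^2), constant 10 is a guess

Sigma = np.eye(n)                 # isotropic case, for simplicity
X = rng.standard_normal((N, n))   # N independent samples from N(0, Sigma)
Sigma_N = X.T @ X / N             # sample covariance matrix

err = np.linalg.norm(Sigma_N - Sigma, ord=2)
print(err, eps)
```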
\begin{remark}[Multiplicative estimates, Gaussian distributions]
A weak point of Corollary~\ref{covariance sub-gaussian}
is that the sub-gaussian norm $K$ may in turn depend on $\|\Sigma\|$.
To overcome this drawback, instead of using \eqref{A*A rows non-isotropic}
in the proof of this result one can use the multiplicative version \eqref{A*A rows non-isotropic multiplicative}.
The reader is encouraged to state a general result that follows from this argument.
We just give one special example for arbitrary {\em centered Gaussian distributions} in $\mathbb{R}^n$.
For every $\varepsilon \in (0,1)$, $t \ge 1$, the following holds with probability at least $1 - 2 \exp(- t^2 n)$:
$$
\text{If } N \ge C(t/\varepsilon)^2 n \quad \text{then } \|\Sigma_N - \Sigma\| \le \varepsilon \|\Sigma\|.
$$
Here $C$ is an absolute constant.
\end{remark}
Finally, Theorem~\ref{heavy-tailed rows non-isotropic} yields a similar estimation result for arbitrary distributions,
possibly heavy-tailed:
\begin{corollary}[Covariance estimation for arbitrary distributions] \label{covariance heavy-tailed} \hfill
Consider a distribution in $\mathbb{R}^n$ with covariance matrix $\Sigma$ and supported in
some centered Euclidean ball whose radius we denote $\sqrt{m}$.
Let $\varepsilon \in (0,1)$ and $t \ge 1$.
Then the following holds with probability at least $1 - n^{-t^2}$:
$$
\text{If } N \ge C(t/\varepsilon)^2 \|\Sigma\|^{-1} m \log n
\quad \text{then } \|\Sigma_N - \Sigma\| \le \varepsilon \|\Sigma\|.
$$
Here $C$ is an absolute constant.
\end{corollary}
\begin{proof}
It follows from Theorem~\ref{heavy-tailed rows non-isotropic} that for every $s \ge 0$,
with probability at least $1 - n \cdot \exp(-cs^2)$ we have
$\|\Sigma_N-\Sigma\| \le \max(\|\Sigma\|^{1/2}\delta, \delta^2)$
where $\delta = s \sqrt{m/N}$.
Therefore, if $N \ge (s/\varepsilon)^2 \|\Sigma\|^{-1} m$ then $\|\Sigma_N - \Sigma\| \le \varepsilon \|\Sigma\|$.
The conclusion follows with $s = C' t \sqrt{\log n}$ where $C'$ is a sufficiently large
absolute constant.
\end{proof}
Corollary~\ref{covariance heavy-tailed} is typically used with $m = O(\|\Sigma\| n)$.
Indeed, if $X$ is a random vector chosen from the distribution in question,
then its expected norm is easy to estimate:
$\mathbb{E} \|X\|_2^2 = \tr(\Sigma) \le n \|\Sigma\|$.
So, by Markov's inequality, most of the distribution is supported
in a centered ball of radius $\sqrt{m}$ where $m = O(n \|\Sigma\|)$.
If the whole distribution is supported there, i.e. if $\|X\|_2 = O(\sqrt{n \|\Sigma\|})$ almost surely,
then the conclusion of Corollary~\ref{covariance heavy-tailed} holds
with sample size $N \ge C(t/\varepsilon)^2 n \log n$.
\begin{remark}[Low-rank estimation]
In certain applications, the distribution in $\mathbb{R}^n$ lies close to a low dimensional subspace.
In this case, a smaller sample suffices for covariance estimation. The intrinsic dimension
of the distribution can be measured with the {\em effective rank} \index{Effective rank}
of the matrix $\Sigma$, defined as
$$
r(\Sigma) = \frac{\tr(\Sigma)}{\|\Sigma\|}.
$$
One always has $r(\Sigma) \le \rank(\Sigma) \le n$, and this bound is sharp.
For example, if $X$ is an isotropic random vector in $\mathbb{R}^n$ then $\Sigma = I$ and $r(\Sigma) = n$.
A more interesting example is where $X$ takes values in some $r$-dimensional subspace $E$,
and the restriction of the distribution of $X$ onto $E$ is isotropic. The latter means that
$\Sigma = P_E$, where $P_E$ denotes the orthogonal projection in $\mathbb{R}^n$ onto $E$.
Therefore in this case $r(\Sigma) = r$. The effective rank is a stable quantity compared with the
usual rank. For distributions that are approximately low-dimensional, the effective rank is
still small.
The effective rank $r = r(\Sigma)$ always controls the typical norm of $X$,
as $\mathbb{E} \|X\|_2^2 = \tr(\Sigma) = r \|\Sigma\|$.
It follows by Markov's inequality that most of the distribution is supported in a ball of radius $\sqrt{m}$ where
$m = O(r \|\Sigma\|)$. Assume that the whole distribution is supported there, i.e. that $\|X\|_2 = O(\sqrt{r \|\Sigma\|})$
almost surely. Then the conclusion of Corollary~\ref{covariance heavy-tailed} holds
with sample size $N \ge C(t/\varepsilon)^2 r \log n$.
\end{remark}
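The effective rank is trivial to compute. The Python sketch below (with a hypothetical diagonal covariance having $r$ dominant eigenvalues) shows how $r(\Sigma)$ stays near $r$ even though the usual rank is $n$.

```python
import numpy as np

# Effective rank r(Sigma) = tr(Sigma) / ||Sigma|| for a covariance whose
# (hypothetical) spectrum has r dominant eigenvalues and n - r tiny ones.
n, r = 100, 5
eigs = np.array([1.0] * r + [1e-4] * (n - r))
Sigma = np.diag(eigs)

eff_rank = np.trace(Sigma) / np.linalg.norm(Sigma, ord=2)
print(eff_rank)  # close to r = 5, far below rank(Sigma) = n = 100
```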
We can summarize this discussion in the following way: the sample size
$$
N = O(n \log n)
$$
suffices to approximate the covariance matrix of a general distribution in $\mathbb{R}^n$ by the sample covariance matrix.
Furthermore, for distributions that are approximately low-dimensional, a smaller sample size is sufficient.
Namely, if the effective rank of $\Sigma$ equals $r$ then a sufficient sample size is
$$
N = O(r \log n).
$$
\begin{remark}[Boundedness assumption] \label{r: covariance boundedness}
Without the boundedness assumption on the distribution, Corollary~\ref{covariance heavy-tailed}
may fail. The reasoning is the same as in Remark~\ref{r: boundedness}:
for an isotropic distribution which is highly concentrated at the origin,
the sample covariance matrix will likely equal $0$.
Still, one can weaken the boundedness assumption using Theorem~\ref{heavy-tailed rows exp si non-isotropic}
instead of Theorem~\ref{heavy-tailed rows non-isotropic} in the proof of Corollary~\ref{covariance heavy-tailed}.
The weaker requirement is that $\mathbb{E} \max_{i \le N} \|X_i\|_2^2 \le m$ where $X_i$ denote
the sample points. In this case, the covariance estimation will be guaranteed in expectation rather than
with high probability; we leave the details for the interested reader.
A different way to enforce the boundedness assumption is
to reject any sample points $X_i$ that fall outside the centered ball of radius $\sqrt{m}$.
This is equivalent to sampling from the conditional distribution inside the ball.
The conditional distribution satisfies the boundedness requirement,
so the results discussed above provide a good covariance estimation for it. In many cases, this estimate
works even for the original distribution -- namely, if only a small part of the
distribution lies outside the ball of radius $\sqrt{m}$. We leave the details for the interested reader;
see e.g. \cite{V marginals}.
\end{remark}
\subsection{Applications to random sub-matrices and sub-frames}
\index{Sampling from matrices and frames} \label{s: sub-matrices}
The absence of any moment hypotheses on the distribution in Section~\ref{s: heavy-tailed rows}
(except finite variance) makes these results especially relevant for discrete distributions.
One such situation arises when one wishes to sample entries or rows
from a given matrix $B$, thereby creating a {\em random sub-matrix} $A$.
It is a big program to understand what we can learn about
$B$ by seeing $A$, see \cite{GMDL, DKM, RV sampling}.
In other words, we ask -- what properties of $B$ pass onto $A$?
Here we shall only scratch the surface of this problem:
we notice that random sub-matrices of certain size preserve the property of being an {\em approximate isometry}.
\begin{corollary}[Random sub-matrices] \index{Sub-matrices} \label{random sub-matrices}
Consider an $M \times n$ matrix $B$ such that\footnote{The first hypothesis says
$B^*B = MI$. Equivalently, $\bar{B} := \frac{1}{\sqrt{M}}B$
is an isometry, i.e. $\|\bar{B}x\|_2 = \|x\|_2$ for all $x$. Equivalently, the columns of $\bar{B}$ are orthonormal.}
$s_{\min}(B) = s_{\max}(B) = \sqrt{M}$.
Let $m$ be such that all rows $B_i$ of $B$ satisfy $\|B_i\|_2 \le \sqrt{m}$.
Let $A$ be an $N \times n$ matrix obtained by sampling $N$ random rows from $B$
uniformly and independently.
Then for every $t \ge 0$, with probability at least $1 - 2n \cdot \exp(-ct^2)$ one has
$$
\sqrt{N} - t \sqrt{m} \le s_{\min}(A) \le s_{\max}(A) \le \sqrt{N} + t \sqrt{m}.
$$
Here $c>0$ is an absolute constant.
\end{corollary}
\begin{proof}
By assumption, $I = \frac{1}{M} B^*B = \frac{1}{M} \sum_{i=1}^M B_i \otimes B_i$.
Therefore, the uniform distribution on the set of the rows $\{B_1,\ldots,B_M\}$
is an isotropic distribution in $\mathbb{R}^n$. The conclusion then follows from Theorem~\ref{heavy-tailed rows}.
\end{proof}
Note that the conclusion of Corollary~\ref{random sub-matrices}
does not depend on the dimension $M$ of the ambient matrix $B$.
This happens because this result is a specific version of sampling from a discrete
isotropic distribution (uniform on the rows of $B$), where size $M$
of the support of the distribution is irrelevant.
The hypothesis of Corollary~\ref{random sub-matrices} implies\footnote{To recall why this is true,
take trace of both sides in the identity $I = \frac{1}{M} \sum_{i=1}^M B_i \otimes B_i$.}
that $\frac{1}{M} \sum_{i=1}^M \|B_i\|_2^2 = n$.
Hence by Markov's inequality, most of the rows $B_i$ satisfy $\|B_i\|_2 = O(\sqrt{n})$.
This indicates that Corollary~\ref{random sub-matrices} would be often used with $m = O(n)$.
Also, to ensure a positive probability of success, the useful magnitude of $t$ would be
$t \sim \sqrt{\log n}$. With this in mind, the extremal singular values of $A$ will be close
to each other (and to $\sqrt{N}$) if $N \gg t^2 m \sim n \log n$.
Summarizing, Corollary~\ref{random sub-matrices} states that
a random $O(n \log n) \times n$ sub-matrix of an $M \times n$ isometry
is an approximate isometry.\footnote{For the purposes of compressed sensing,
we shall study the more difficult {\em uniform}
problem for random sub-matrices in Section~\ref{s: restricted isometries}.
There $B$ itself will be chosen as a column sub-matrix of a given $M \times M$ matrix (such as DFT),
and one will need to control all such $B$ simultaneously, see Example~\ref{random measurements}.}
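This can be illustrated numerically. The following Python sketch (ours, not part of the text; it assumes \texttt{numpy}, and the sizes $M$, $n$, $N$ are arbitrary illustrative choices) builds an exact $M \times n$ isometry, samples $N \sim 10\, n \log n$ of its rows uniformly and independently, and inspects the extreme singular values of the resulting sub-matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 2000, 20

# Build an M x n matrix B with B^T B = M*I: orthonormal columns via QR,
# scaled by sqrt(M), so that s_min(B) = s_max(B) = sqrt(M).
Q, _ = np.linalg.qr(rng.standard_normal((M, n)))
B = np.sqrt(M) * Q

# Sample N = O(n log n) rows uniformly and independently.
N = int(10 * n * np.log(n))
A = B[rng.integers(0, M, size=N)]

s = np.linalg.svd(A, compute_uv=False)
print(s.min() / np.sqrt(N), s.max() / np.sqrt(N))  # both ratios close to 1
```

Both printed ratios should come out close to $1$, in agreement with Corollary~\ref{random sub-matrices}.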
\medskip
Another application of random matrices with heavy-tailed isotropic rows is
for {\em sampling from frames}. Recall that frames are generalizations of bases without
linear independence, see Example~\ref{random vectors}.
Consider a tight frame $\{u_i\}_{i=1}^M$ in $\mathbb{R}^n$, and for the sake of convenient
normalization, assume that it has bounds $A=B=M$.
We are interested in whether a small random subset of $\{u_i\}_{i=1}^M$ is still a nice frame in $\mathbb{R}^n$.
Such a question arises naturally because frames are used in signal processing to
create {\em redundant representations} of signals. Indeed, every signal $x \in \mathbb{R}^n$ admits frame expansion
$x = \frac{1}{M} \sum_{i=1}^M \< u_i, x\> u_i$. Redundancy makes
frame representations more robust to errors and losses than basis representations.
Indeed, we will show that if one loses all except $N = O(n \log n)$ random coefficients
$\< u_i, x\> $ one is still able to reconstruct $x$ from the received coefficients $\< u_{i_k}, x\> $
as $x \approx \frac{1}{N} \sum_{k=1}^N \< u_{i_k}, x\> u_{i_k}$.
This boils down to showing that a random subset of size $N = O(n \log n)$ of a tight frame in $\mathbb{R}^n$
is an approximate tight frame.
\begin{corollary}[Random sub-frames, see \cite{V frames}] \index{Frames} \label{random sub-frames}
Consider a tight frame $\{u_i\}_{i=1}^M$ in $\mathbb{R}^n$ with frame bounds $A=B=M$.
Let $m$ be a number such that all frame elements satisfy $\|u_i\|_2 \le \sqrt{m}$.
Let $\{v_i\}_{i=1}^N$ be a set of vectors obtained by sampling $N$ random elements
from the frame $\{u_i\}_{i=1}^M$ uniformly and independently.
Let $\varepsilon \in (0,1)$ and $t \ge 1$.
Then the following holds with probability at least $1 - 2n^{-t^2}$:
$$
\text{If } N \ge C(t/\varepsilon)^2 m \log n
\quad \text{then $\{v_i\}_{i=1}^N$ is a frame in $\mathbb{R}^n$}
$$
with bounds $A = (1-\varepsilon)N$, $B = (1+\varepsilon)N$.
Here $C$ is an absolute constant.
In particular, if this event holds, then every $x \in \mathbb{R}^n$ admits an approximate
representation using only the sampled frame elements:
$$
\Big\| \frac{1}{N} \sum_{i=1}^N \< v_i, x\> v_i - x \Big\| \le \varepsilon \|x\|.
$$
\end{corollary}
\begin{proof}
The assumption implies that $I = \frac{1}{M} \sum_{i=1}^M u_i \otimes u_i$.
Therefore, the uniform distribution on the set $\{u_i\}_{i=1}^M$
is an isotropic distribution in $\mathbb{R}^n$.
Applying Corollary~\ref{covariance heavy-tailed} with $\Sigma = I$ and
$\Sigma_N = \frac{1}{N} \sum_{i=1}^N v_i \otimes v_i$ we conclude that
$\|\Sigma_N - I\| \le \varepsilon$ with the required probability. This clearly completes the proof.
\end{proof}
As before, we note that $\frac{1}{M} \sum_{i=1}^M \|u_i\|_2^2 = n$, so
Corollary~\ref{random sub-frames} would be often used with $m = O(n)$.
This shows, liberally speaking, that a random subset of a frame in $\mathbb{R}^n$
of size $N = O(n \log n)$ is again a frame.
\begin{remark}[Non-uniform sampling]
The boundedness assumption $\|u_i\|_2 \le \sqrt{m}$,
although needed in Corollary~\ref{random sub-frames},
can be removed by non-uniform sampling.
To this end, one would sample from the set of normalized vectors $\bar{u}_i := \sqrt{n} \frac{u_i}{\|u_i\|_2}$
with probabilities proportional to $\|u_i\|_2^2$.
This defines an isotropic distribution in $\mathbb{R}^n$, and clearly $\|\bar{u}_i\|_2 = \sqrt{n}$.
Therefore, by Corollary~\ref{random sub-frames}, a random sample of $N = O(n \log n)$ vectors
obtained this way forms an almost tight frame in $\mathbb{R}^n$.
This result does not require any bound on $\|u_i\|_2$.
\end{remark}
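The reconstruction guarantee of Corollary~\ref{random sub-frames} is easy to check numerically. The sketch below (ours; assumes \texttt{numpy}, with illustrative sizes) builds a tight frame with bounds $A = B = M$, keeps only $N = O(n \log n)$ randomly sampled coefficients $\< v_i, x\>$, and reconstructs the signal as $\frac{1}{N} \sum_i \< v_i, x\> v_i$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 2000, 16

# A tight frame {u_i} with bounds A = B = M: the rows of sqrt(M) * Q,
# where Q has orthonormal columns, satisfy sum_i u_i u_i^T = M * I.
Q, _ = np.linalg.qr(rng.standard_normal((M, n)))
U = np.sqrt(M) * Q

# Keep only N = O(n log n) random coefficients <v_i, x>.
N = int(10 * n * np.log(n))
V = U[rng.integers(0, M, size=N)]

x = rng.standard_normal(n)
coeffs = V @ x                      # received frame coefficients <v_i, x>
x_hat = (V.T @ coeffs) / N          # x ~ (1/N) sum_i <v_i, x> v_i

print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # small relative error
```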
\section{Random matrices with independent columns} \label{s: columns}
In this section we study the extreme singular values of $N \times n$ random matrices $A$
with independent columns $A_j$. We are guided by our ideal bounds \eqref{heuristic} as before.
The same phenomenon occurs in the column independent model as in the row independent
model -- sufficiently tall random matrices $A$ are approximate isometries.
As before, being tall will mean $N \gg n$ for sub-gaussian distributions
and $N \gg n \log n$ for arbitrary distributions.
The problem
is equivalent to studying {\em Gram matrices} $G = A^*A = (\< A_j, A_k\> )_{j,k=1}^n$ \index{Gram matrix}
of independent isotropic random vectors $A_1,\ldots, A_n$ in $\mathbb{R}^N$.
Our results can be interpreted using Lemma~\ref{approximate isometries} as showing that
the normalized Gram matrix $\frac{1}{N} G$ is an {\em approximate identity}
for $N, n$ as above.
Let us first try to prove this with a heuristic argument.
By Lemma~\ref{norm isotropic} we know that the diagonal entries of $\frac{1}{N} G$
have mean $\frac{1}{N} \mathbb{E} \|A_j\|_2^2 = 1$ and off-diagonal ones have zero mean and
standard deviation $\frac{1}{N} (\mathbb{E} \< A_j, A_k\> ^2)^{1/2} = \frac{1}{\sqrt{N}}$.
If, hypothetically, the off-diagonal entries were independent, then we could use the
results of matrices with independent entries (or even rows) developed in Section~\ref{s: rows}.
The off-diagonal part of $\frac{1}{N}G$ would have norm $O(\sqrt{\frac{n}{N}})$
while the diagonal part would approximately equal $I$. Hence we would have
\begin{equation} \label{Gram ideal}
\big\| \frac{1}{N} G - I \big\| = O \Big( \sqrt{\frac{n}{N}} \Big),
\end{equation}
i.e. $\frac{1}{N} G$ is an approximate identity
for $N \gg n$. Equivalently, by Lemma~\ref{approximate isometries}, \eqref{Gram ideal}
would yield the ideal bounds \eqref{heuristic} on the extreme singular values of $A$.
Unfortunately, the entries of the Gram matrix $G$ are obviously not independent.
To overcome this obstacle we shall use the {\em decoupling} technique
of probability theory \cite{dG}.
We observe that there is still enough independence encoded in $G$. Consider a
principal sub-matrix $(A_S)^*(A_T)$ of $G = A^*A$ with disjoint index sets $S$ and $T$.
If we condition on $(A_k)_{k \in T}$ then this sub-matrix has independent rows.
Using an elementary decoupling technique, we will indeed seek to replace the full Gram matrix
$G$ by one such decoupled $S \times T$ matrix with independent rows,
and finish off by applying results of Section~\ref{s: rows}.
\medskip
By transposition one can try to reduce our problem to studying the $n \times N$
matrix $A^*$. It has independent rows and the same singular values as $A$,
so one can apply results of Section~\ref{s: rows}.
The conclusion would be that, with high probability,
$$
\sqrt{n} - C \sqrt{N} \le s_{\min}(A) \le s_{\max}(A) \le \sqrt{n} + C \sqrt{N}.
$$
Such an estimate is only good for {\em flat} matrices ($N \le n$).
For {\em tall} matrices ($N \ge n$) the lower bound would be trivial because of the (possibly large)
constant $C$.
So, from now on we can focus on tall matrices ($N \ge n$) with independent columns.
\subsection{Sub-gaussian columns}
\index{Sub-gaussian!random matrices with independent columns} \label{s: sub-gaussian columns}
Here we prove a version of Theorem~\ref{sub-gaussian rows} for matrices with independent columns.
\begin{theorem}[Sub-gaussian columns] \label{sub-gaussian columns}
Let $A$ be an $N \times n$ matrix ($N \ge n$) whose columns $A_j$ are independent
sub-gaussian isotropic random vectors in $\mathbb{R}^N$ with $\|A_j\|_2 = \sqrt{N}$ a. s.
Then for every $t \ge 0$, the inequality holds
\begin{equation} \label{smin smax columns}
\sqrt{N} - C \sqrt{n} - t \le s_{\min}(A) \le s_{\max}(A) \le \sqrt{N} + C \sqrt{n} + t
\end{equation}
with probability at least $1 - 2\exp(-ct^2)$,
where $C = C'_K$, $c = c'_K > 0$ depend only on the subgaussian norm
$K = \max_j \|A_j\|_{\psi_2}$ of the columns.
\end{theorem}
The only significant difference between Theorem~\ref{sub-gaussian rows} for independent rows
and Theorem~\ref{sub-gaussian columns} for independent columns is
that the latter requires {\em normalization of columns}, $\|A_j\|_2 = \sqrt{N}$ almost surely.
Recall that by isotropy of $A_j$ (see Lemma~\ref{norm isotropic})
one always has $(\mathbb{E}\|A_j\|_2^2)^{1/2} = \sqrt{N}$,
but the normalization is a bit stronger requirement.
We will discuss this more after the proof of Theorem~\ref{sub-gaussian columns}.
\begin{remark}[Gram matrices are an approximate identity]
By Lemma~\ref{approximate isometries}, the conclusion of Theorem~\ref{sub-gaussian columns}
is equivalent to
$$
\big\| \frac{1}{N} A^*A - I \big\| \le C \sqrt{\frac{n}{N}} + \frac{t}{\sqrt{N}}
$$
with the same probability $1 - 2\exp(-ct^2)$. This establishes our ideal inequality \eqref{Gram ideal}.
In words, the normalized Gram matrix of $n$ independent sub-gaussian isotropic
random vectors in $\mathbb{R}^N$ is an approximate identity whenever $N \gg n$.
\end{remark}
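As a numerical illustration of this remark (our sketch, assuming \texttt{numpy}, with illustrative sizes): columns with i.i.d.\ $\pm 1$ entries are sub-gaussian, isotropic, and exactly normalized, $\|A_j\|_2 = \sqrt{N}$ a.s., and the spectral norm of $\frac{1}{N} A^*A - I$ is indeed of order $\sqrt{n/N}$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 4000, 50

# Columns with iid +-1 entries are sub-gaussian, isotropic, and exactly
# normalized: ||A_j||_2 = sqrt(N) almost surely.
A = rng.choice([-1.0, 1.0], size=(N, n))

G = (A.T @ A) / N                       # normalized Gram matrix
err = np.linalg.norm(G - np.eye(n), 2)  # spectral norm of G - I
print(err, np.sqrt(n / N))              # err is O(sqrt(n/N))
```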
The proof of Theorem~\ref{sub-gaussian columns} is based on the decoupling technique \cite{dG}.
What we will need here is an elementary decoupling lemma for double arrays.
Its statement involves the notion of a {\em random subset} \index{Random subset} of a given finite set.
To be specific, we define a random subset $T$ of $[n]$ with a given average size $m \in [0,n]$
as follows. Consider independent $\{0,1\}$ valued random variables $\delta_1,\ldots,\delta_n$
with $\mathbb{E}\delta_i = m/n$; these are sometimes called {\em independent selectors}. \index{Selectors}
Then we define the random subset
$T = \{ i \in [n]:\; \delta_i=1 \}$. Its average size equals $\mathbb{E}|T| = \mathbb{E} \sum_{i=1}^n \delta_i = m$.
\begin{lemma}[Decoupling] \index{Decoupling} \label{decoupling}
Consider a double array of real numbers $(a_{ij})_{i,j=1}^n$
such that $a_{ii} = 0$ for all $i$. Then
$$
\sum_{i,j \in [n]} a_{ij} = 4 \mathbb{E} \sum_{i \in T,\, j \in T^c} a_{ij}
$$
where $T$ is a random subset of $[n]$ with average size $n/2$.
In particular,
$$
4 \min_{T \subseteq [n]} \sum_{i \in T,\, j \in T^c} a_{ij}
\le \sum_{i,j \in [n]} a_{ij}
\le 4 \max_{T \subseteq [n]} \sum_{i \in T,\, j \in T^c} a_{ij}
$$
where the minimum and maximum are over all subsets $T$ of $[n]$.
\end{lemma}
\begin{proof}
Expressing the random subset as $T = \{ i \in [n]:\; \delta_i=1 \}$
where $\delta_i$ are independent selectors with $\mathbb{E}\delta_i=1/2$, we see that
$$
\mathbb{E} \sum_{i \in T,\, j \in T^c} a_{ij}
= \mathbb{E} \sum_{i,j \in [n]} \delta_i (1-\delta_j) a_{ij}
= \frac{1}{4} \sum_{i,j \in [n]} a_{ij},
$$
where we used that $\mathbb{E} \delta_i (1-\delta_j) = \frac{1}{4}$ for $i \ne j$ and the assumption $a_{ii}=0$.
This proves the first part of the lemma. The second part follows trivially by estimating
expectation by maximum and minimum.
\end{proof}
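Since the identity in Lemma~\ref{decoupling} is exact, it can be verified for small $n$ by enumerating all $2^n$ realizations of the random subset $T$. A quick Python check (ours; assumes \texttt{numpy}):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 6
a = rng.standard_normal((n, n))
np.fill_diagonal(a, 0.0)               # the lemma requires a_ii = 0

lhs = a.sum()

# Expectation over T = {i : delta_i = 1} with iid selectors, E delta_i = 1/2:
# enumerate all 2^n outcomes, each of probability 2^{-n}.
rhs = 0.0
for deltas in product([0, 1], repeat=n):
    T = [i for i in range(n) if deltas[i] == 1]
    Tc = [i for i in range(n) if deltas[i] == 0]
    rhs += a[np.ix_(T, Tc)].sum() / 2 ** n

print(lhs, 4 * rhs)                    # the identity: lhs = 4 * rhs
```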
\begin{proof}[Proof of Theorem~\ref{sub-gaussian columns}]
{\bf Step 1: Reductions.}
Without loss of generality we can assume that the columns $A_i$ have zero mean.
Indeed, multiplying each column $A_i$ by $\pm 1$ arbitrarily
preserves the extreme singular values of $A$, the isotropy of $A_i$ and
the sub-gaussian norms of $A_i$. Therefore, by multiplying
$A_i$ by independent symmetric Bernoulli random variables we achieve that $A_i$
have zero mean.
For large $t$, the conclusion of Theorem~\ref{sub-gaussian columns}
follows from Theorem~\ref{sub-gaussian rows} by transposition.
Indeed, the $n \times N$ random matrix $A^*$ has independent rows, so for $t \ge 0$ we have
\begin{equation} \label{smax smax star}
s_{\max}(A) = s_{\max}(A^*) \le \sqrt{n} + C_K \sqrt{N} + t
\end{equation}
with probability at least $1 - 2 \exp(-c_K t^2)$.
Here $c_K > 0$ and we can obviously assume that $C_K \ge 1$.
For $t \ge C_K\sqrt{N}$ it follows that
$s_{\max}(A) \le \sqrt{N} + \sqrt{n} + 2t$, which
yields the conclusion of Theorem~\ref{sub-gaussian columns}
(the left hand side of \eqref{smin smax columns} being trivial).
So, it suffices to prove the conclusion for
$t \le C_K \sqrt{N}$.
Let us fix such $t$.
It would be useful to have some a priori control of $s_{\max}(A) = \|A\|$.
We thus consider the desired event
$$
\mathcal{E} := \big\{ s_{\max}(A) \le 3C_K \sqrt{N} \big\}.
$$
Since $3C_K \sqrt{N} \ge \sqrt{n} + C_K \sqrt{N} + t$, by \eqref{smax smax star}
we see that $\mathcal{E}$ is likely to occur:
\begin{equation} \label{EEc}
\mathbb{P}(\mathcal{E}^c) \le 2 \exp(-c_K t^2).
\end{equation}
{\bf Step 2: Approximation.}
This step is parallel to Step~1 in the proof of Theorem~\ref{sub-gaussian rows},
except now we shall choose $\varepsilon := \delta$.
This way we reduce our task to the following.
Let $\mathcal{N}$ be a $\frac{1}{4}$-net of the unit sphere $S^{n-1}$ such that
$|\mathcal{N}| \le 9^n$.
It suffices to show that with probability at least $1 - 2\exp(-c'_K t^2)$ one has
$$
\max_{x \in \mathcal{N}} \Big| \frac{1}{N} \|Ax\|_2^2 - 1 \Big| \le \frac{\delta}{2},
\quad \text{where } \delta = C \sqrt{\frac{n}{N}} + \frac{t}{\sqrt{N}}.
$$
By \eqref{EEc}, it is enough to show that the probability
\begin{equation} \label{desired p}
p:= \mathbb{P} \Big\{ \max_{x \in \mathcal{N}} \Big| \frac{1}{N} \|Ax\|_2^2 - 1 \Big| > \frac{\delta}{2}
\text{ and } \mathcal{E} \Big\}
\end{equation}
satisfies $p \le 2\exp(-c_K'' t^2)$,
where $c_K''>0$ may depend only on $K$.
{\bf Step 3: Decoupling.}
As in the proof of Theorem~\ref{sub-gaussian rows}, we will obtain the required bound for a fixed
$x \in \mathcal{N}$ with high probability, and then take a union bound over $x$.
So let us fix any $x = (x_1,\ldots,x_n) \in S^{n-1}$.
We expand
\begin{equation} \label{norm expansion}
\|Ax\|_2^2
= \Big\| \sum_{j=1}^n x_j A_j \Big\|_2^2
= \sum_{j=1}^n x_j^2 \|A_j\|_2^2 + \sum_{j,k \in [n], \, j \ne k} x_j x_k \< A_j, A_k \> .
\end{equation}
Since $\|A_j\|_2^2 = N$ by assumption and $\|x\|_2 =1$, the first sum equals $N$.
Therefore, subtracting $N$ from both sides and dividing by $N$, we obtain the bound
$$
\Big| \frac{1}{N} \|Ax\|_2^2 - 1 \Big|
\le \Big| \frac{1}{N} \sum_{j,k \in [n], \, j \ne k} x_j x_k \< A_j, A_k \> \Big| .
$$
The sum on the right hand side is $\< G_0 x, x\> $
where $G_0$ is the off-diagonal part of the Gram matrix $G = A^*A$.
As we indicated in the beginning of Section~\ref{s: columns}, we are going to replace
$G_0$ by its decoupled version whose rows and columns are indexed by disjoint sets.
This is achieved by Decoupling Lemma~\ref{decoupling}: we obtain
$$
\Big| \frac{1}{N} \|Ax\|_2^2 - 1 \Big|
\le \frac{4}{N} \max_{T \subseteq [n]} |R_T(x)|,
\quad \text{where }
R_T(x) = \sum_{j \in T, \, k \in T^c} x_j x_k \< A_j, A_k \> .
$$
We substitute this into \eqref{desired p} and
take union bound over all choices of $x \in \mathcal{N}$ and $T \subseteq [n]$.
As we know, $|\mathcal{N}| \le 9^n$, and there are $2^n$ subsets $T$ in $[n]$. This gives
\begin{align} \label{p}
p
&\le \mathbb{P} \Big\{ \max_{x \in \mathcal{N}, \, T \subseteq [n]} |R_T(x)| > \frac{\delta N}{8} \text{ and } \mathcal{E} \Big\} \notag\\
&\le 9^n \cdot 2^n \cdot \max_{x \in \mathcal{N}, \, T \subseteq [n]}
\mathbb{P} \Big\{ |R_T(x)| > \frac{\delta N}{8} \text{ and } \mathcal{E} \Big\}.
\end{align}
{\bf Step 4: Conditioning and concentration.}
To estimate the probability in \eqref{p}, we fix a vector $x \in \mathcal{N}$
and a subset $T \subseteq [n]$ and we condition on a realization of
random vectors $(A_k)_{k \in T^c}$. We express
\begin{equation} \label{RJ equivalent}
R_T(x) = \sum_{j \in T} x_j \langle A_j, z \rangle
\quad \text{where } z = \sum_{k \in T^c} x_k A_k.
\end{equation}
Under our conditioning $z$ is a fixed vector, so $R_T(x)$ is a sum of independent random variables.
Moreover, if event $\mathcal{E}$ holds then $z$ is nicely bounded:
\begin{equation} \label{norm z}
\|z\|_2 \le \|A\| \|x\|_2 \le 3 C_K \sqrt{N}.
\end{equation}
If in turn \eqref{norm z} holds then the terms
$\< A_j, z\> $ in \eqref{RJ equivalent} are independent centered sub-gaussian
random variables with $\|\< A_j, z\> \|_{\psi_2} \le 3 K C_K \sqrt{N}$.
By Lemma~\ref{rotation invariance}, their linear combination $R_T(x)$
is also a sub-gaussian random variable with
\begin{equation} \label{RT sub-gaussian}
\|R_T(x)\|_{\psi_2} \le C_1 \Big( \sum_{j \in T} x_j^2 \|\< A_j, z\> \|_{\psi_2}^2 \Big)^{1/2}
\le \widehat{C}_K \sqrt{N}
\end{equation}
where $\widehat{C}_K$ depends only on $K$.
We can summarize these observations as follows. Denoting the conditional probability
by $\mathbb{P}_T = \mathbb{P} \{ \; \cdot \; | (A_k)_{k \in T^c} \}$
and the expectation with respect to $(A_k)_{k \in T^c}$ by $\mathbb{E}_{T^c}$,
we obtain by \eqref{norm z} and \eqref{RT sub-gaussian} that
\begin{align*}
\mathbb{P} &\Big\{ |R_T(x)| > \frac{\delta N}{8} \text{ and } \mathcal{E} \Big\}
\le \mathbb{E}_{T^c}\mathbb{P}_T \Big\{ |R_T(x)| > \frac{\delta N}{8} \text{ and } \|z\|_2 \le 3 C_K \sqrt{N} \Big\} \\
&\le 2 \exp \Big[ -c_1 \Big( \frac{\delta N/8}{\widehat{C}_K \sqrt{N}} \Big)^2 \Big]
= 2 \exp \Big( -\frac{c_2 \delta^2 N}{\widehat{C}_K^2} \Big)
\le 2 \exp \Big( -\frac{c_2 C^2 n}{\widehat{C}_K^2} - \frac{c_2 t^2}{\widehat{C}_K^2} \Big).
\end{align*}
The second inequality follows because $R_T(x)$ is a sub-gaussian random variable \eqref{RT sub-gaussian}
whose tail decay is given by \eqref{sub-gaussian tail}. Here $c_1,c_2>0$ are absolute constants.
The last inequality follows from the definition of $\delta$.
Substituting this into \eqref{p} and choosing $C$ sufficiently large (so that $\ln 36 \le c_2 C^2/\widehat{C}_K^2$),
we conclude that
$$
p \le 2 \exp \big( - c_2 t^2/\widehat{C}_K^2 \big).
$$
This proves the estimate desired in Step 2. The proof is complete.
\end{proof}
\begin{remark}[Normalization assumption]
Some a priori control of the norms of the columns $\|A_j\|_2$ is necessary
for estimating the extreme singular values, since
$$
s_{\min}(A) \le \min_{j \le n} \|A_j\|_2 \le \max_{j \le n} \|A_j\|_2 \le s_{\max}(A).
$$
With this in mind, it is easy to construct an example showing that the normalization assumption $\|A_j\|_2 = \sqrt{N}$
is essential in Theorem~\ref{sub-gaussian columns}; it cannot even be replaced by the boundedness
assumption $\|A_j\|_2 = O(\sqrt{N})$.
Indeed, consider a random vector $X = \sqrt{2} \xi Y$ in $\mathbb{R}^N$
where $\xi$ is a $\{0,1\}$-valued random variable with $\mathbb{E} \xi = 1/2$ (a ``selector'')
and $Y$ is an independent spherical random
vector in $\mathbb{R}^N$ (see Example~\ref{random vectors sub-gaussian}).
Let $A$ be a random matrix whose columns $A_j$ are independent copies of $X$.
Then $A_j$ are independent centered sub-gaussian isotropic random vectors in $\mathbb{R}^N$ with $\|A_j\|_{\psi_2} = O(1)$
and $\|A_j\|_2 \le \sqrt{2N}$ a.s.
So all assumptions of Theorem~\ref{sub-gaussian columns} except normalization are satisfied.
On the other hand $\mathbb{P}\{X=0\}=1/2$, so matrix $A$ has a zero column with overwhelming probability $1 - 2^{-n}$.
This implies that $s_{\min}(A)=0$ with this probability, so the lower estimate
in \eqref{smin smax columns} is false for all nontrivial $N,n,t$.
\end{remark}
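The counterexample of this remark is easy to simulate (our sketch, with illustrative sizes, assuming \texttt{numpy}): roughly half of the columns $A_j = \sqrt{2}\,\xi_j Y_j$ vanish, so $A$ is rank-deficient and $s_{\min}(A) = 0$, even though the columns are isotropic and bounded by $\sqrt{2N}$.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 50, 40

# Columns X = sqrt(2) * xi * Y: xi a fair {0,1} selector, Y spherical.
# Isotropic, with ||X||_2 <= sqrt(2N), but not exactly normalized.
Y = rng.standard_normal((N, n))
Y *= np.sqrt(N) / np.linalg.norm(Y, axis=0)   # spherical: ||Y_j||_2 = sqrt(N)
xi = rng.integers(0, 2, size=n)               # independent selectors
A = np.sqrt(2) * xi * Y

s = np.linalg.svd(A, compute_uv=False)
print(s.min())   # a zero column appears with probability 1 - 2^{-n}, killing s_min
```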
\subsection{Heavy-tailed columns}
\index{Heavy-tailed!random matrices with independent columns}
Here we prove a version of Theorem~\ref{heavy-tailed rows exp si} for independent heavy-tailed
columns.
We thus consider $N \times n$ random matrices $A$ with independent columns $A_j$.
In addition to the normalization assumption $\|A_j\|_2 = \sqrt{N}$ already present in
Theorem~\ref{sub-gaussian columns} for sub-gaussian columns, our new result must also
require an a priori control of the off-diagonal part of the Gram matrix
$G = A^*A = (\< A_j, A_k \> )_{j,k=1}^n$.
\begin{theorem}[Heavy-tailed columns] \label{heavy-tailed columns}
Let $A$ be an $N \times n$ matrix ($N \ge n$) whose columns $A_j$ are independent
isotropic random vectors in $\mathbb{R}^N$ with $\|A_j\|_2 = \sqrt{N}$ a. s.
Consider the incoherence parameter
$$
m := \frac{1}{N} \mathbb{E} \max_{j \le n} \sum_{k \in [n], \, k \ne j} \< A_j, A_k \> ^2.
$$
Then $\mathbb{E} \big\| \frac{1}{N} A^*A - I \big\| \le C_0 \sqrt{ \frac{m \log n}{N}}$. In particular,
\begin{equation} \label{eq heavy-tailed columns}
\mathbb{E} \max_{j \le n} |s_j(A) - \sqrt{N}|
\le C \sqrt{m \log n}.
\end{equation}
\end{theorem}
Let us briefly clarify the role of the incoherence parameter $m$, which
controls the lengths of the rows of the off-diagonal part of $G$.
After the proof we will see that a control of $m$ is essential in Theorem~\ref{heavy-tailed columns}.
But for now, let us get a feel of the typical size of $m$.
We have $\mathbb{E} \< A_j, A_k\> ^2 = N$ for $j \ne k$ by Lemma~\ref{norm isotropic}, so for every row $j$ we see that
$\frac{1}{N} \sum_{k \in [n], \, k \ne j} \mathbb{E} \< A_j, A_k \> ^2 = n-1$.
This indicates that Theorem~\ref{heavy-tailed columns} would be often used
with $m = O(n)$.
In this case, Theorem~\ref{heavy-tailed columns} establishes our ideal inequality \eqref{Gram ideal}
up to a logarithmic factor.
In words, the normalized Gram matrix \index{Gram matrix} of $n$ independent isotropic
random vectors in $\mathbb{R}^N$ is an approximate identity whenever $N \gg n \log n$.
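To get a feel for the incoherence parameter in a concrete case (our sketch, assuming \texttt{numpy}, with illustrative sizes): for columns with i.i.d.\ $\pm 1$ entries, the empirical statistic behind $m$ is indeed of order $n$, as the computation above predicts.

```python
import numpy as np

rng = np.random.default_rng(5)
N, n = 2000, 40

# Isotropic columns with ||A_j||_2 = sqrt(N) exactly (+-1 entries).
A = rng.choice([-1.0, 1.0], size=(N, n))

G = A.T @ A
off = G - np.diag(np.diag(G))
# Empirical incoherence statistic: (1/N) max_j sum_{k != j} <A_j, A_k>^2.
m_hat = (off ** 2).sum(axis=1).max() / N
print(m_hat, n - 1)     # m_hat is of order n
```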
\medskip
Our proof of Theorem~\ref{heavy-tailed columns} will be based on decoupling, symmetrization
and an application of Theorem~\ref{heavy-tailed rows exp si non-isotropic} for a decoupled
Gram matrix with independent rows.
The decoupling is done similarly to Theorem~\ref{sub-gaussian columns}.
However, this time we will benefit from formalizing the decoupling inequality for Gram matrices:
\begin{lemma}[Matrix decoupling] \index{Decoupling} \label{matrix decoupling}
Let $B$ be an $N \times n$ random matrix whose columns $B_j$ satisfy $\|B_j\|_2 = 1$.
Then
$$
\mathbb{E} \|B^*B-I\| \le 4 \max_{T \subseteq [n]} \mathbb{E} \|(B_T)^* B_{T^c} \|.
$$
\end{lemma}
\begin{proof}
We first note that $\|B^*B-I\| = \sup_{x \in S^{n-1}} \big| \|Bx\|_2^2 - 1 \big|$.
We fix $x = (x_1,\ldots,x_n) \in S^{n-1}$ and, expanding as in \eqref{norm expansion}, observe that
$$
\|Bx\|_2^2 = \sum_{j=1}^n x_j^2 \|B_j\|_2^2 + \sum_{j,k \in [n], \, j \ne k} x_j x_k \< B_j, B_k \> .
$$
The first sum equals $1$ since $\|B_j\|_2 = \|x\|_2 = 1$.
So by Decoupling Lemma~\ref{decoupling}, a random subset $T$ of $[n]$ with average
cardinality $n/2$ satisfies
$$
\|Bx\|_2^2 - 1 = 4 \mathbb{E}_T \sum_{j \in T, k \in T^c} x_j x_k \< B_j, B_k \> .
$$
Let us denote by $\mathbb{E}_T$ and $\mathbb{E}_B$ the expectations with respect to the random
set $T$ and the random matrix $B$ respectively.
Using Jensen's inequality we obtain
\begin{align*}
\mathbb{E}_B \|B^*B-I\|
&= \mathbb{E}_B \sup_{x \in S^{n-1}} \big| \|Bx\|_2^2 - 1 \big| \\
&\le 4 \mathbb{E}_B \mathbb{E}_T \sup_{x \in S^{n-1}} \Big| \sum_{j \in T, k \in T^c} x_j x_k \< B_j, B_k \> \Big|
= 4 \mathbb{E}_T \mathbb{E}_B \|(B_T)^* B_{T^c} \|.
\end{align*}
The conclusion follows by replacing the expectation by the maximum over $T$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{heavy-tailed columns}]
{\bf Step 1: Reductions and decoupling.}
It would be useful to have an a priori bound on $s_{\max}(A) = \|A\|$.
We can obtain this by transposing $A$ and applying one of the results of Section~\ref{s: rows}. Indeed, the random
$n \times N$ matrix $A^*$ has independent rows $A_j^*$, which by our assumption
are normalized as $\|A_j^*\|_2 = \|A_j\|_2 = \sqrt{N}$.
Applying Theorem~\ref{heavy-tailed rows exp si} with the roles of $n$ and $N$
switched, we obtain by the triangle inequality that
\begin{equation} \label{norm A}
\mathbb{E} \|A\| = \mathbb{E} \|A^*\| = \mathbb{E} s_{\max}(A^*)
\le \sqrt{n} + C \sqrt{N \log n}
\le C \sqrt{N \log n}.
\end{equation}
Observe that $n \le m$ since by Lemma~\ref{norm isotropic} we have
$\frac{1}{N} \mathbb{E} \< A_j, A_k \> ^2 =1$ for $j \ne k$.
We use Matrix Decoupling Lemma~\ref{matrix decoupling} for $B = \frac{1}{\sqrt{N}} A$
and obtain, writing $E := \mathbb{E} \big\| \frac{1}{N} A^*A - I \big\|$ for the quantity we need to bound, that
\begin{equation} \label{E via Sigma}
E \le \frac{4}{N} \max_{T \subseteq [n]} \mathbb{E} \|(A_T)^* A_{T^c} \|
= \frac{4}{N} \max_{T \subseteq [n]} \mathbb{E} \|\Gamma\|
\end{equation}
where $\Gamma = \Gamma(T)$ denotes the decoupled Gram matrix
$$
\Gamma = (A_T)^* A_{T^c}
= \big( \< A_j, A_k\> \big)_{j \in T, k \in T^c}.
$$
Let us fix $T$; our problem then reduces to bounding the expected norm of $\Gamma$.
{\bf Step 2: The rows of the decoupled Gram matrix.}
For a subset $S \subseteq [n]$, we denote by $\mathbb{E}_{A_S}$
the conditional expectation given $A_{S^c}$,
i.e. with respect to $A_S = (A_j)_{j \in S}$.
Hence $\mathbb{E} = \mathbb{E}_{A_{T^c}} \mathbb{E}_{A_T}$.
Let us condition on $A_{T^c}$.
Treating $(A_k)_{k \in T^c}$ as fixed vectors we see that,
conditionally, the random matrix $\Gamma$ has independent
rows
$$
\Gamma_j = \big( \< A_j, A_k\> \big)_{k \in T^c}, \quad j \in T.
$$
So we are going to use Theorem~\ref{heavy-tailed rows exp si non-isotropic} to
bound the norm of $\Gamma$. To do this we need estimates on (a) the second moment matrices
and (b) the norms of the rows $\Gamma_j$.
(a) Since for $j \in T$, $\Gamma_j$ is a random vector valued in $\mathbb{R}^{T^c}$, we estimate
its second moment matrix by choosing $x \in \mathbb{R}^{T^c}$ and evaluating the scalar second moment
\begin{align*}
\mathbb{E}_{A_T} \< \Gamma_j, x\> ^2
&= \mathbb{E}_{A_T} \Big( \sum_{k \in T^c} \< A_j, A_k\> x_k \Big)^2
= \mathbb{E}_{A_T} \Big\langle A_j, \sum_{k \in T^c} x_k A_k \Big\rangle ^2 \\
&= \Big\| \sum_{k \in T^c} x_k A_k \Big\|_2^2
= \|A_{T^c}x\|_2^2
\le \|A_{T^c}\|^2 \|x\|_2^2.
\end{align*}
In the third equality we used isotropy of $A_j$.
Taking maximum over all $j \in T$ and $x \in \mathbb{R}^{T^c}$,
we see that the second moment matrix
$\Sigma(\Gamma_j) = \mathbb{E}_{A_T} \Gamma_j \otimes \Gamma_j$ satisfies
\begin{equation} \label{Sigma Gj}
\max_{j \in T} \|\Sigma(\Gamma_j)\| \le \|A_{T^c}\|^2.
\end{equation}
(b) To evaluate the norms of $\Gamma_j$, $j \in T$, note that
$\|\Gamma_j\|_2^2 = \sum_{k \in T^c} \< A_j, A_k\> ^2$.
This is easy to bound, because the assumption says that the random variable
$$
M := \frac{1}{N} \max_{j \in [n]} \sum_{k \in [n], \, k \ne j} \< A_j, A_k \> ^2
\quad \text{satisfies } \mathbb{E} M = m.
$$
This produces the bound $\mathbb{E} \max_{j \in T} \|\Gamma_j\|_2^2 \le N \cdot \mathbb{E} M = Nm$. But at this moment
we need to work conditionally on $A_{T^c}$, so for now we will be satisfied with
\begin{equation} \label{rows of G}
\mathbb{E}_{A_T} \max_{j \in T} \|\Gamma_j\|_2^2 \le N \cdot \mathbb{E}_{A_T} M.
\end{equation}
{\bf Step 3: The norm of the decoupled Gram matrix.}
We bound the norm of the random $T \times T^c$ Gram matrix $\Gamma$
with (conditionally) independent rows using Theorem~\ref{heavy-tailed rows exp si non-isotropic}
and Remark~\ref{r: different second moments}.
Since by \eqref{Sigma Gj} we have
$\big\| \frac{1}{|T|} \sum_{j \in T} \Sigma(\Gamma_j) \big\|
\le \frac{1}{|T|} \sum_{j \in T} \|\Sigma(\Gamma_j)\|
\le \|A_{T^c}\|^2$,
we obtain using \eqref{rows of G} that
\begin{align} \label{EAT Sigma}
\mathbb{E}_{A_T} \|\Gamma\|
&\le (\mathbb{E}_{A_T} \|\Gamma\|^2)^{1/2}
\le \|A_{T^c}\| \sqrt{|T|} + C \sqrt{N \cdot \mathbb{E}_{A_T} (M) \log |T^c|} \nonumber\\
&\le \|A_{T^c}\| \sqrt{n} + C \sqrt{N \cdot \mathbb{E}_{A_T} (M) \log n}.
\end{align}
Let us take expectation of both sides with respect to $A_{T^c}$.
The left side becomes the quantity we seek to bound, $\mathbb{E} \|\Gamma\|$.
The right side will contain the term which we can estimate by \eqref{norm A}:
$$
\mathbb{E}_{A_{T^c}} \|A_{T^c}\| = \mathbb{E} \|A_{T^c}\| \le \mathbb{E} \|A\| \le C \sqrt{N \log n}.
$$
The other term that will appear in the expectation of \eqref{EAT Sigma}
is
$$
\mathbb{E}_{A_{T^c}} \sqrt{\mathbb{E}_{A_T} (M)}
\le \sqrt{\mathbb{E}_{A_{T^c}} \mathbb{E}_{A_T} (M)}
\le \sqrt{\mathbb{E} M}
= \sqrt{m}.
$$
So, taking the expectation in \eqref{EAT Sigma} and using these bounds, we obtain
$$
\mathbb{E} \|\Gamma\|
= \mathbb{E}_{A_{T^c}} \mathbb{E}_{A_T} \|\Gamma\|
\le C \sqrt{N \log n} \sqrt{n} + C \sqrt{N m \log n}
\le 2C \sqrt{N m \log n}
$$
where we used that $n \le m$.
Finally, using this estimate in \eqref{E via Sigma} we conclude
$$
E \le 8C \sqrt{\frac{m \log n}{N}}.
$$
This establishes the first part of Theorem~\ref{heavy-tailed columns}.
The second part follows by the diagonalization argument
as in Step 2 of the proof of Theorem~\ref{heavy-tailed rows exp si}.
\end{proof}
\begin{remark}[Incoherence]
A priori control on the {\em incoherence} \index{Incoherence} is essential in Theorem~\ref{heavy-tailed columns}.
Consider for instance an $N \times n$ random matrix $A$ whose columns are independent coordinate random
vectors in $\mathbb{R}^N$. Clearly $s_{\max}(A) \ge \max_j \|A_j\|_2 = \sqrt{N}$.
On the other hand, if the matrix is not too tall, $n \gg \sqrt{N}$, then $A$ has two identical columns
with high probability, which yields $s_{\min}(A)=0$.
\end{remark}
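This coordinate-vector example is easy to simulate (our sketch, assuming \texttt{numpy}, with illustrative sizes): by the birthday paradox, $n \gg \sqrt{N}$ independent coordinate columns contain a repeat, forcing $s_{\min}(A) = 0$.

```python
import numpy as np

rng = np.random.default_rng(6)
N, n = 100, 60          # n >> sqrt(N): duplicate columns likely (birthday paradox)

# Columns are random coordinate vectors sqrt(N) * e_i: isotropic, norm sqrt(N).
idx = rng.integers(0, N, size=n)
A = np.sqrt(N) * np.eye(N)[:, idx]

s = np.linalg.svd(A, compute_uv=False)
print(len(set(idx)) < n, s.min())   # duplicate columns force s_min(A) = 0
```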
\section{Restricted isometries} \index{Restricted isometries} \label{s: restricted isometries}
In this section we consider an application of the non-asymptotic random matrix theory
in compressed sensing.
For a thorough introduction to compressed sensing, see the introductory chapter of
this book and \cite{FR, CS website}.
In this area, $m \times n$ matrices $A$
are considered as measurement devices, taking as input a signal $x \in \mathbb{R}^n$ and returning
its measurement $y = Ax \in \mathbb{R}^m$. One would like to take measurements economically,
thus keeping $m$ as small as possible, and still to be able to recover the signal $x$ from its
measurement $y$.
The interesting regime for compressed sensing is where we take
very few measurements, $m \ll n$. Such matrices $A$ are not one-to-one,
so recovery of $x$ from $y$ is not possible for all signals $x$.
But in practical applications, the amount of ``information'' contained in the signal is often small.
Mathematically this is expressed as {\em sparsity} of $x$.
In the simplest case, one assumes that $x$ has few non-zero coordinates, say
$|\supp(x)| \le k \ll n$. In this case, using any non-degenerate matrix $A$ one can check
that $x$ can be recovered whenever $m > 2k$ using the optimization problem
$\min \{ |\supp(x)|: \; Ax=y \}$.
This optimization problem is highly non-convex and generally NP-complete.
So instead one considers a convex relaxation of this problem, $\min \{ \|x\|_1: \; Ax=y \}$.
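For intuition, the convex relaxation can be solved as an ordinary linear program through the standard split $x = u - v$ with $u, v \ge 0$. The following is a minimal numerical sketch, not part of the text: the dimensions, the Gaussian measurement matrix, and the use of SciPy's `linprog` are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 3

# Gaussian measurement matrix and a k-sparse signal x.
A = rng.standard_normal((m, n))
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
y = A @ x

# min ||x||_1  s.t.  Ax = y, rewritten as a linear program:
# x = u - v with u, v >= 0; minimize sum(u) + sum(v) s.t. A(u - v) = y.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_hat - x))  # near zero: exact recovery up to solver tolerance
```

With $m$ comfortably above the sparsity threshold, the program typically returns $x$ exactly (up to solver tolerance), as the Cand\`es--Tao theory recalled below predicts.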
A basic result in compressed sensing, due to Cand\`es and Tao \cite{Candes-Tao, Candes},
is that for sparse signals $|\supp(x)| \le k$,
the convex problem recovers the signal $x$ from its measurement $y$ exactly, provided
that the measurement matrix $A$ is quantitatively non-degenerate. Precisely, the non-degeneracy
of $A$ means that it satisfies the following {\em restricted isometry property} with $\delta_{2k}(A) \le 0.1$.
\begin{definition-notag}[Restricted isometries]
An $m \times n$ matrix $A$ satisfies the {\em restricted isometry property} of order $k \ge 1$ if
there exists $\delta_k \ge 0$ such that the inequality
\begin{equation} \label{eq RIP}
(1-\delta_k) \|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta_k) \|x\|_2^2
\end{equation}
holds for all $x \in \mathbb{R}^n$ with $|\supp(x)| \le k$.
The smallest number $\delta_k = \delta_k(A)$ is called the {\em restricted isometry constant} of $A$.
\end{definition-notag}
In words, $A$ has a restricted isometry property if $A$ acts as an approximate isometry
on all sparse vectors. Clearly,
\begin{equation} \label{equiv RIP}
\delta_k(A) = \max_{|T| \le k} \|A_T^* A_T - I_{\mathbb{R}^T}\|
= \max_{|T| = \lfloor k \rfloor} \|A_T^* A_T - I_{\mathbb{R}^T}\|
\end{equation}
where the maximum is over all subsets $T \subseteq [n]$ with $|T| \le k$ or $|T| = \lfloor k \rfloor$.
The concept of restricted isometry can also be expressed via extreme singular values,
which brings us to the topic we studied in the previous sections. $A$ is a restricted isometry
if and only if all $m \times k$ sub-matrices $A_T$ of $A$
(obtained by selecting arbitrary $k$ columns from $A$) are approximate isometries.
Indeed, for every $\delta \ge 0$, Lemma~\ref{approximate isometries} shows that
the following two inequalities are equivalent up to an absolute constant:
\begin{gather}
\delta_k(A) \le \max(\delta,\delta^2); \label{dk dd} \\
1-\delta \le s_{\min}(A_T) \le s_{\max}(A_T) \le 1+\delta \label{s restricted}
\quad \text{for all } |T| \le k.
\end{gather}
More precisely, \eqref{dk dd} implies \eqref{s restricted} and
\eqref{s restricted} implies $\delta_k(A) \le 3\max(\delta,\delta^2)$.
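The identity \eqref{equiv RIP} and the equivalence \eqref{dk dd}--\eqref{s restricted} can be checked numerically for small sizes. A hypothetical sketch (all dimensions below are arbitrary illustrative choices): compute $\max_{|T|=k} \|A_T^* A_T - I\|$ by brute force over supports, and compare it with the same quantity expressed through the extreme singular values of the sub-matrices $A_T$.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 30, 12, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)  # normalized Gaussian matrix

d_gram = 0.0      # max over |T| = k of || A_T^* A_T - I ||
d_singular = 0.0  # the same quantity via extreme singular values of A_T
for T in itertools.combinations(range(n), k):
    AT = A[:, list(T)]
    d_gram = max(d_gram, np.linalg.norm(AT.T @ AT - np.eye(k), ord=2))
    s = np.linalg.svd(AT, compute_uv=False)  # singular values, descending
    d_singular = max(d_singular, max(abs(s[0]**2 - 1), abs(s[-1]**2 - 1)))

print(d_gram, d_singular)  # equal: both are the restricted isometry constant delta_k
```

The two maxima coincide because the eigenvalues of $A_T^* A_T - I$ are exactly $s_j(A_T)^2 - 1$.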
\medskip
Our goal is thus to find matrices that are good restricted isometries. What good means is clear
from the goals of compressed sensing described above. First, we need to keep
the restricted isometry constant $\delta_k(A)$ below some small absolute constant, say $0.1$.
Most importantly, we would like the number of measurements $m$ to be small, ideally
proportional to the sparsity $k \ll n$.
This is where non-asymptotic random matrix theory enters.
We shall indeed show that, with high probability,
$m \times n$ random matrices $A$ are good restricted isometries of order $k$
with $m = O^*(k)$. Here the $O^*$ notation hides some logarithmic factors of $n$.
Specifically, in Theorem~\ref{sub-gaussian RIP} we will show that
$$
m = O(k \log(n/k))
$$
for sub-gaussian random matrices $A$ (with independent rows or columns).
This is due to the strong concentration properties of such matrices.
A general observation of this kind is Proposition~\ref{concentration RIP}.
It says that if, for each fixed $x$, a random matrix $A$ (taken from any distribution) satisfies inequality \eqref{eq RIP}
with high probability, then $A$ is a good restricted isometry.
In Theorem~\ref{heavy-tailed RIP} we will extend these results to random matrices without concentration properties.
Using a uniform extension of Rudelson's inequality, Corollary~\ref{Rudelson}, we shall show that
\begin{equation} \label{m heavy-tailed}
m = O(k \log^4 n)
\end{equation}
for heavy-tailed random matrices $A$ (with independent rows). This includes the important
example of random Fourier matrices.
\subsection{Sub-gaussian restricted isometries} \index{Sub-gaussian!restricted isometries}
In this section we show that $m \times n$ sub-gaussian random matrices $A$ are good restricted isometries.
We have in mind either of the following two models, which we analyzed in Sections~\ref{s: sub-gaussian rows}
and \ref{s: sub-gaussian columns} respectively:
\begin{description}
\item[Row-independent model:] the rows of $A$ are independent
sub-gaussian isotropic random vectors in $\mathbb{R}^n$;
\item[Column-independent model:] the columns $A_i$ of $A$ are independent
sub-gaussian isotropic random vectors in $\mathbb{R}^m$ with $\|A_i\|_2 = \sqrt{m}$ a.s.
\end{description}
Recall that these models cover many natural examples, including Gaussian and Bernoulli matrices
(whose entries are independent standard normal or symmetric Bernoulli random variables),
general sub-gaussian random matrices (whose entries are independent sub-gaussian random variables
with mean zero and unit variance),
``column spherical'' matrices whose columns are independent vectors uniformly distributed on the centered
Euclidean sphere in $\mathbb{R}^m$ with radius $\sqrt{m}$,
``row spherical'' matrices whose rows are independent vectors uniformly distributed on the centered
Euclidean sphere in $\mathbb{R}^n$ with radius $\sqrt{n}$, etc.
\begin{theorem}[Sub-gaussian restricted isometries] \label{sub-gaussian RIP}
Let $A$ be an $m \times n$ sub-gaussian random matrix with independent rows or columns,
which follows either of the two models above.
Then the normalized matrix $\bar{A} = \frac{1}{\sqrt{m}} A$ satisfies the following for every
sparsity level $1 \le k \le n$ and every number $\delta \in (0,1)$:
$$
\text{if } m \ge C \delta^{-2} k \log (en/k)
\quad \text{then } \delta_k(\bar{A}) \le \delta
$$
with probability at least $1 - 2\exp(-c \delta^2 m)$.
Here $C = C_K$, $c = c_K > 0$ depend only on the sub-gaussian norm
$K = \max_i \|A_i\|_{\psi_2}$ of the rows or columns of $A$.
\end{theorem}
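A quick empirical illustration of the scaling in Theorem~\ref{sub-gaussian RIP} (a sketch with arbitrary small dimensions; the brute-force computation of $\delta_k$ is only feasible for tiny $n$ and $k$): for a Gaussian matrix, the restricted isometry constant shrinks as the number of measurements $m$ grows, roughly like $\sqrt{k \log(en/k)/m}$.

```python
import itertools
import numpy as np

def delta_k(A, k):
    # Brute-force restricted isometry constant: max over all supports |T| = k.
    n = A.shape[1]
    return max(
        np.linalg.norm(A[:, list(T)].T @ A[:, list(T)] - np.eye(k), ord=2)
        for T in itertools.combinations(range(n), k)
    )

rng = np.random.default_rng(2)
n, k = 16, 2
deltas = {}
for m in (20, 200):
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # normalized as in the theorem
    deltas[m] = delta_k(A, k)

print(deltas)  # the constant shrinks as m grows
```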
\begin{proof}
Let us check that the conclusion follows from Theorem~\ref{sub-gaussian rows}
for the row-independent model, and from Theorem~\ref{sub-gaussian columns}
for the column-independent model.
We shall control the restricted isometry constant using its equivalent description \eqref{equiv RIP}.
We can clearly assume that $k$ is a positive integer.
Let us fix a subset $T \subseteq [n]$, $|T| = k$
and consider the $m \times k$ random matrix $A_T$. If $A$ follows
the row-independent model, then the rows of $A_T$ are orthogonal projections of the rows of
$A$ onto $\mathbb{R}^T$, so they are still independent sub-gaussian
isotropic random vectors in $\mathbb{R}^T$. If, alternatively, $A$ follows the column-independent
model, then trivially the columns of $A_T$ satisfy the same assumptions as the columns of $A$.
In either case, Theorem~\ref{sub-gaussian rows} or Theorem~\ref{sub-gaussian columns} applies
to $A_T$. Hence for every $s \ge 0$, with probability at least $1-2\exp(-cs^2)$
one has
\begin{equation} \label{AT smin smax}
\sqrt{m} - C_0\sqrt{k} - s \le s_{\min}(A_T) \le s_{\max}(A_T) \le \sqrt{m} + C_0\sqrt{k} + s.
\end{equation}
Using Lemma~\ref{approximate isometries} for
$\bar{A}_T = \frac{1}{\sqrt{m}}{A_T}$, we see that \eqref{AT smin smax} implies that
$$
\| \bar{A}_T^* \bar{A}_T - I_{\mathbb{R}^T} \| \le 3 \max(\delta_0,\delta_0^2)
\quad \text{where } \delta_0 = C_0 \sqrt{\frac{k}{m}} + \frac{s}{\sqrt{m}}.
$$
Now we take a union bound over all subsets $T \subset [n]$, $|T| = k$.
Since there are $\binom{n}{k} \le (en/k)^k$ ways to choose $T$, we conclude that
$$
\max_{|T| = k} \| \bar{A}_T^* \bar{A}_T - I_{\mathbb{R}^T} \| \le 3 \max(\delta_0,\delta_0^2)
$$
with probability at least
$1 - \binom{n}{k} \cdot 2\exp(-cs^2)
\ge 1 - 2 \exp \big( k \log (en/k) - cs^2 \big)$.
Then, for an arbitrary $\varepsilon > 0$, choosing
$s = C_1 \sqrt{k \log(en/k)} + \varepsilon \sqrt{m}$
we conclude with probability at least $1 - 2\exp(-c \varepsilon^2 m)$ that
$$
\delta_k(\bar{A}) \le 3 \max(\delta_0,\delta_0^2) \quad \text{where } \delta_0 = C_0 \sqrt{\frac{k}{m}} + C_1 \sqrt{\frac{k \log(en/k)}{m}} + \varepsilon.
$$
Finally, we apply this statement for $\varepsilon := \delta/6$. By choosing the constant $C$ in the statement of the theorem sufficiently large,
we make $m$ large enough so that $\delta_0 \le \delta/3$, which yields $3 \max(\delta_0,\delta_0^2) \le \delta$. The proof is complete.
\end{proof}
The main reason Theorem~\ref{sub-gaussian RIP} holds is that the random matrix $A$
has a strong concentration property, i.e. that $\|\bar{A}x\|_2 \approx \|x\|_2$ with high
probability for every fixed sparse vector $x$. This concentration property alone implies
the restricted isometry property, regardless of the specific random matrix model:
\begin{proposition}[Concentration implies restricted isometry, see \cite{BDDW}] \label{concentration RIP}
Let $A$ be an $m \times n$ random matrix, and let $k \ge 1$, $\delta \ge 0$, $\varepsilon > 0$.
Assume that for every fixed $x \in \mathbb{R}^n$, $|\supp(x)| \le k$, the inequality
$$
(1-\delta) \|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta) \|x\|_2^2
$$
holds with probability at least $1 - \exp(-\varepsilon m)$.
Then we have the following:
$$
\text{if } m \ge C \varepsilon^{-1} k \log (en/k)
\quad \text{then } \delta_k(A) \le 2\delta
$$
with probability at least $1 - \exp(-\varepsilon m / 2)$. Here $C$ is an absolute constant.
\end{proposition}
In words, the restricted isometry property can be checked on each individual
vector $x$ with high probability.
\begin{proof}
We shall use the expression \eqref{equiv RIP} to estimate the restricted isometry constant.
We can clearly assume that $k$ is an integer, and focus on the sets
$T \subseteq [n]$, $|T| = k$.
By Lemma~\ref{net cardinality}, we can find a net $\mathcal{N}_T$ of the unit sphere $S^{n-1} \cap \mathbb{R}^T$
with cardinality $|\mathcal{N}_T| \le 9^k$.
By Lemma~\ref{norm on net}, we estimate the operator norm as
$$
\big\| A_T^* A_T - I_{\mathbb{R}^T} \big\|
\le 2 \max_{x \in \mathcal{N}_T} \big| \big\langle (A_T^* A_T - I_{\mathbb{R}^T})x, x \big\rangle \big|
= 2 \max_{x \in \mathcal{N}_T} \big| \|Ax\|_2^2 - 1 \big|.
$$
Taking maximum over all subsets $T \subseteq [n]$, $|T| = k$, we conclude that
$$
\delta_k(A) \le 2 \max_{|T| = k} \max_{x \in \mathcal{N}_T} \big| \|Ax\|_2^2 - 1 \big|.
$$
On the other hand, by assumption we have for every $x \in \mathcal{N}_T$ that
$$
\mathbb{P} \big\{ \big| \|Ax\|_2^2 - 1 \big| > \delta \big\}
\le \exp(-\varepsilon m).
$$
Therefore, taking a union bound over $\binom{n}{k} \le (en/k)^k$ choices of the set $T$
and over $9^k$ elements $x \in \mathcal{N}_T$, we obtain that
\begin{align*}
\mathbb{P} \{ \delta_k(A) > 2\delta \}
&\le \binom{n}{k} 9^k \exp(-\varepsilon m)
\le \exp \big( k \ln(en/k) + k \ln 9 - \varepsilon m \big) \\
&\le \exp(- \varepsilon m / 2)
\end{align*}
where the last line follows by the assumption on $m$.
The proof is complete.
\end{proof}
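The hypothesis of Proposition~\ref{concentration RIP} can be observed numerically for Gaussian matrices: for a fixed unit-norm sparse vector $x$, the event $\big| \|\bar{A}x\|_2^2 - 1 \big| > \delta$ is exponentially rare in $m$. A Monte Carlo sketch (the sample sizes below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k, delta, trials = 200, 50, 4, 0.5, 500

# A fixed unit-norm k-sparse vector x.
x = np.zeros(n)
x[:k] = 1.0 / np.sqrt(k)

# Count draws of Abar = A / sqrt(m), A Gaussian, that violate
# (1 - delta) <= ||Abar x||_2^2 <= (1 + delta).
violations = 0
for _ in range(trials):
    Abar = rng.standard_normal((m, n)) / np.sqrt(m)
    if abs(np.linalg.norm(Abar @ x) ** 2 - 1.0) > delta:
        violations += 1

print(violations / trials)  # close to zero: the tail decays exponentially in m
```

Here $\|\bar{A}x\|_2^2$ is distributed as $\chi^2_m / m$, so the violation probability is of order $\exp(-c\delta^2 m)$.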
\subsection{Heavy-tailed restricted isometries} \index{Heavy-tailed!restricted isometries} \label{s: heavy-tailed RIP}
In this section we show that $m \times n$ random matrices $A$ with independent
heavy-tailed rows (and uniformly bounded coefficients) are good restricted isometries.
This result will be established in Theorem~\ref{heavy-tailed RIP}.
As before, we will prove this by controlling the extreme singular values of
all $m \times k$ sub-matrices $A_T$. For each individual subset $T$, this can be achieved
using Theorem~\ref{heavy-tailed rows}:
one has
\begin{equation} \label{AT individual control}
\sqrt{m} - t \sqrt{k} \le s_{\min}(A_T) \le s_{\max}(A_T) \le \sqrt{m} + t \sqrt{k}
\end{equation}
with probability at least $1 - 2 k \cdot \exp(-ct^2)$.
Although this probability estimate is of optimal order, it is too weak to allow for
a union bound over all $\binom{n}{k} = (O(1) n/k)^k$ choices of the subset $T$.
Indeed, in order that $1 - \binom{n}{k} 2 k \cdot \exp(-ct^2) > 0$
one would need to take $t > \sqrt{k \log(n/k)}$. So in order to achieve a nontrivial
lower bound in \eqref{AT individual control}, one would be forced to take $m \ge k^2$.
This is too many measurements; recall that our hope is $m = O^*(k)$.
This observation suggests that instead of controlling each sub-matrix $A_T$ separately,
we should learn how to control all $A_T$ at once.
This is indeed possible with the following uniform version
of Theorem~\ref{heavy-tailed rows exp si}:
\begin{theorem}[Heavy-tailed rows; uniform] \label{heavy-tailed rows uniform}
Let $A=(a_{ij})$ be an $N \times d$ matrix ($1 < N \le d$) whose rows $A_i$ are independent
isotropic random vectors in $\mathbb{R}^d$. Let $K$ be a number such that
all entries $|a_{ij}| \le K$ almost surely.
Then for every $1 < n \le d$, we have
$$
\mathbb{E} \max_{|T| \le n} \max_{j \le |T|} |s_j(A_T) - \sqrt{N}|
\le C l \sqrt{n}
$$
where $l = \log(n) \sqrt{\log d} \sqrt{\log N}$
and where $C=C_K$ may depend on $K$ only.
The maximum is, as usual, over all subsets $T \subseteq [d]$, $|T| \le n$.
\end{theorem}
The non-uniform prototype of this result,
Theorem~\ref{heavy-tailed rows exp si},
was based on Rudelson's inequality, Corollary~\ref{Rudelson}.
In a very similar way, Theorem~\ref{heavy-tailed rows uniform}
is based on the following uniform version of Rudelson's inequality.
\begin{proposition}[Uniform Rudelson's inequality \cite{RV Fourier}] \index{Rudelson's inequality} \label{RV Fourier}
Let $x_1, \ldots, x_N$ be vectors in $\mathbb{R}^d$, $1 < N \le d$,
and let $K$ be a number such that
all $\|x_i\|_\infty \le K$.
Let $\varepsilon_1, \ldots, \varepsilon_N$ be independent symmetric Bernoulli random variables.
Then for every $1 < n \le d$ one has
$$
\mathbb{E} \max_{|T| \le n} \Big\| \sum_{i=1}^N \varepsilon_i (x_i)_T \otimes (x_i)_T \Big\|
\le C l \sqrt{n} \cdot \max_{|T| \le n} \Big\| \sum_{i=1}^N (x_i)_T \otimes (x_i)_T \Big\|^{1/2}
$$
where $l = \log(n) \sqrt{\log d} \sqrt{\log N}$
and where $C = C_K$ may depend on $K$ only.
\end{proposition}
The non-uniform Rudelson's inequality (Corollary~\ref{Rudelson}) was a consequence of
a non-commutative Khintchine inequality.
Unfortunately, there does not seem to exist a way to deduce Proposition~\ref{RV Fourier}
from any known result.
Instead, this proposition is proved using Dudley's integral inequality for Gaussian processes and estimates
of covering numbers going back to Carl, see \cite{RV Fourier}.
It is known however that such usage of Dudley's inequality is not optimal
(see e.g. \cite{Ta book}).
As a result, the logarithmic factors in Proposition~\ref{RV Fourier} are probably not optimal.
In contrast to these difficulties with Rudelson's inequality, proving uniform versions of
the other two ingredients of Theorem~\ref{heavy-tailed rows exp si} --
the deviation Lemma~\ref{deviation from 1} and Symmetrization Lemma~\ref{symmetrization} --
is straightforward.
\begin{lemma} \label{deviation from 1 uniform}
Let $(Z_t)_{t \in \mathcal{T}}$ be a stochastic process\footnote{A stochastic process $(Z_t)$
is simply a collection of random variables
on a common probability space indexed by elements $t$ of some abstract set $\mathcal{T}$.
In our particular application, $\mathcal{T}$ will consist of all subsets
$T \subseteq [d]$, $|T| \le n$.}
such that all $Z_t \ge 0$. Then
$
\mathbb{E} \sup_{t \in \mathcal{T}} |Z_t^2-1| \ge \max( \mathbb{E} \sup_{t \in \mathcal{T}} |Z_t-1|, (\mathbb{E} \sup_{t \in \mathcal{T}} |Z_t-1|)^2 ).
$
\end{lemma}
\begin{proof}
The argument is entirely parallel to that of Lemma~\ref{deviation from 1}.
\end{proof}
\begin{lemma}[Symmetrization for stochastic processes] \index{Symmetrization} \label{symmetrization uniform}
Let $X_{it}$, $1 \le i \le N$, $t \in \mathcal{T}$, be random vectors valued in some Banach space $B$,
where $\mathcal{T}$ is a finite index set. Assume that the random vectors $X_i = (X_{it})_{t \in \mathcal{T}}$
(valued in the product space $B^\mathcal{T}$) are independent.
Let $\varepsilon_1,\ldots, \varepsilon_N$ be independent symmetric Bernoulli random variables.
Then
$$
\mathbb{E} \sup_{t \in \mathcal{T}} \Big\| \sum_{i=1}^N (X_{it} - \mathbb{E} X_{it}) \Big\|
\le 2 \mathbb{E} \sup_{t \in \mathcal{T}} \Big\| \sum_{i=1}^N \varepsilon_i X_{it} \Big\|.
$$
\end{lemma}
\begin{proof}
The conclusion follows from Lemma~\ref{symmetrization} applied to random vectors
$X_i$ valued in the product Banach space $B^\mathcal{T}$ equipped with the norm
$||| (Z_t)_{t \in \mathcal{T}} ||| = \sup_{t \in \mathcal{T}} \|Z_t\|$.
The reader should also be able to prove the result directly, following the proof
of Lemma~\ref{symmetrization}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{heavy-tailed rows uniform}]
Since the random vectors $A_i$ are isotropic in $\mathbb{R}^d$,
for every fixed subset $T \subseteq [d]$ the random vectors $(A_i)_T$ are
also isotropic in $\mathbb{R}^T$, so $\mathbb{E} (A_i)_T \otimes (A_i)_T = I_{\mathbb{R}^T}$.
As in the proof of Theorem~\ref{heavy-tailed rows exp si}, we are going to control
\begin{align*}
E
:= \mathbb{E} \max_{|T| \le n} \big\| \frac{1}{N} A_T^* A_T - I_{\mathbb{R}^T} \big\|
&= \mathbb{E} \max_{|T| \le n} \Big\| \frac{1}{N} \sum_{i=1}^N (A_i)_T \otimes (A_i)_T - I_{\mathbb{R}^T} \Big\| \\
&\le \frac{2}{N} \, \mathbb{E} \max_{|T| \le n} \Big\| \sum_{i=1}^N \varepsilon_i (A_i)_T \otimes (A_i)_T \Big\|
\end{align*}
where we used Symmetrization Lemma~\ref{symmetrization uniform}
with independent symmetric Bernoulli random variables $\varepsilon_1,\ldots, \varepsilon_N$.
The expectation in the right hand side is taken both with respect
to the random matrix $A$ and the signs $(\varepsilon_i)$.
First taking the expectation with respect to $(\varepsilon_i)$
(conditionally on $A$) and afterwards the expectation with respect to $A$,
we obtain by Proposition~\ref{RV Fourier} that
$$
E \le \frac{C_K l \sqrt{n}}{N} \,
\mathbb{E} \max_{|T| \le n} \Big\| \sum_{i=1}^N (A_i)_T \otimes (A_i)_T \Big\|^{1/2}
= \frac{C_K l \sqrt{n}}{\sqrt{N}} \,
\mathbb{E} \max_{|T| \le n} \big\| \frac{1}{N} A_T^* A_T \big\|^{1/2}.
$$
By the triangle inequality,
$\mathbb{E} \max_{|T| \le n} \big\| \frac{1}{N} A_T^* A_T \big\| \le E + 1$.
Hence, using H\"older's inequality $\mathbb{E} Z^{1/2} \le (\mathbb{E} Z)^{1/2}$, we obtain
$$
E \le C_K l \sqrt{\frac{n}{N}} (E+1)^{1/2}.
$$
Solving this inequality in $E$ we conclude that
\begin{equation} \label{A*A heavy-tailed exp}
E = \mathbb{E} \max_{|T| \le n} \big\| \frac{1}{N} A_T^* A_T - I_{\mathbb{R}^T} \big\|
\le \max(\delta, \delta^2)
\quad \text{where } \delta = C_K l \sqrt{\frac{2n}{N}}.
\end{equation}
The proof is completed by a diagonalization argument similar to Step 2 in the
proof of Theorem~\ref{heavy-tailed rows exp si}. There one uses the uniform version of the
deviation inequality given in Lemma~\ref{deviation from 1 uniform}, for stochastic
processes indexed by the sets $|T| \le n$. We leave the details to the reader.
\end{proof}
\begin{theorem}[Heavy-tailed restricted isometries] \label{heavy-tailed RIP}
Let $A=(a_{ij})$ be an $m \times n$ matrix whose rows $A_i$ are independent
isotropic random vectors in $\mathbb{R}^n$. Let $K$ be a number such that
all entries $|a_{ij}| \le K$ almost surely.
Then the normalized matrix $\bar{A} = \frac{1}{\sqrt{m}} A$ satisfies the following for $m \le n$, for every
sparsity level $1 < k \le n$ and every number $\delta \in (0,1)$:
\begin{equation} \label{eq heavy-tailed RIP}
\text{if } m \ge C \delta^{-2} k \log n \log^2(k) \log(\delta^{-2} k \log n \log^2 k)
\quad \text{then } \mathbb{E} \delta_k(\bar{A}) \le \delta.
\end{equation}
Here $C = C_K > 0$ may depend only on $K$.
\end{theorem}
\begin{proof}
The result follows from Theorem~\ref{heavy-tailed rows uniform}, more precisely
from the estimate \eqref{A*A heavy-tailed exp} established in its proof. In our notation, it says that
$$
\mathbb{E} \delta_k(\bar{A}) \le \max(\delta,\delta^2)
\quad \text{where }
\delta = C_K l \sqrt{\frac{k}{m}} = C_K \sqrt{\frac{k \log m}{m}} \log(k) \sqrt{\log n}.
$$
The conclusion of the theorem easily follows.
\end{proof}
In the interesting sparsity range $k \ge \log n$ and $k \ge \delta^{-2}$, the condition in
Theorem~\ref{heavy-tailed RIP} clearly reduces to
$$
m \ge C \delta^{-2} k \log (n) \log^{3} k.
$$
\begin{remark}[Boundedness requirement]
The {\em boundedness assumption} on the entries of $A$ is essential in
Theorem~\ref{heavy-tailed RIP}. Indeed, if the rows of $A$ are independent
coordinate vectors in $\mathbb{R}^n$, then $A$ necessarily has a zero column (in fact $n-m$ of them).
This clearly contradicts the restricted isometry property.
\end{remark}
\begin{example} \label{random measurements}
\begin{enumerate}
\item {\bf (Random Fourier measurements):} \index{Fourier measurements}
An important example for Theorem~\ref{heavy-tailed RIP}
is where $A$ realizes random Fourier measurements.
Consider the $n \times n$ Discrete Fourier Transform (DFT) matrix $W$ with entries
$$
W_{\omega,t} = \exp \Big( -\frac{2 \pi i \omega t}{n} \Big),
\quad \omega, t \in \{0,\ldots,n-1\}.
$$
Consider a random vector $X$ in $\mathbb{C}^n$ which picks a random row of $W$ (with uniform distribution).
It follows from Parseval's inequality that $X$ is isotropic.\footnote{For convenience we have developed the theory over $\mathbb{R}$,
while this example is over $\mathbb{C}$. As we noted earlier, all our definitions and results can be carried
over to the complex numbers. So in this example we use the obvious complex versions of the notion of isotropy and
of Theorem~\ref{heavy-tailed RIP}.}
Therefore the $m \times n$ random matrix $A$ whose rows are independent copies of $X$
satisfies the assumptions of Theorem~\ref{heavy-tailed RIP} with $K=1$.
Algebraically, we can view $A$ as a {\em random row sub-matrix of the DFT matrix}.
In compressed sensing, such a matrix $A$ has a remarkable meaning -- it realizes
$m$ {\em random Fourier measurements} of a signal $x \in \mathbb{R}^n$. Indeed, $y=Ax$ is the DFT
of $x$ evaluated at $m$ random points; in words, $y$ consists of $m$ random frequencies of $x$.
Recall that in compressed sensing, we would like to guarantee that with high probability
every sparse signal $x \in \mathbb{R}^n$ (say, $|\supp(x)| \le k$)
can be effectively recovered from its $m$ random frequencies $y=Ax$.
Theorem~\ref{heavy-tailed RIP} together with Cand\`es-Tao's result (recalled in the beginning of
Section~\ref{s: restricted isometries}) imply that an exact recovery is given by the convex
optimization problem $\min\{ \|x\|_1 : Ax=y\}$ provided that we observe {\em slightly more frequencies
than the sparsity of a signal}: $m \ge C \delta^{-2} k \log (n) \log^{3} k$.
\item {\bf (Random sub-matrices of orthogonal matrices):} \index{Sub-matrices}
In a similar way, Theorem~\ref{heavy-tailed RIP} applies to a random row sub-matrix $A$
of an {\em arbitrary bounded orthogonal matrix} $W$. Precisely, $A$ may consist of
$m$ randomly chosen rows, uniformly
and without replacement,\footnote{Since in the interesting regime
very few rows are selected, $m \ll n$, sampling with and without replacement are essentially equivalent.
For example, see \cite{RV Fourier} which deals with the model of sampling without replacement.}
from an arbitrary $n \times n$ matrix $W = (w_{ij})$ such that $W^*W = n I$
and with uniformly bounded coefficients, $\max_{ij} |w_{ij}| = O(1)$.
The examples of such $W$ include the class of {\em Hadamard matrices} \index{Hadamard matrices}
-- orthogonal matrices in which all entries equal $\pm 1$.
\end{enumerate}
\end{example}
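The isotropy claim in the Fourier example reduces to the identity $W^* W = nI$, which is immediate to verify numerically for a small DFT matrix (the size $n = 8$ below is an arbitrary choice):

```python
import numpy as np

n = 8
omega, t = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
W = np.exp(-2j * np.pi * omega * t / n)  # DFT matrix W_{omega, t}

# The rows of W are pairwise orthogonal with squared norm n (Parseval),
# so W W^* = W^* W = n I.  A uniformly random row X of W therefore
# satisfies E X X^* = I, i.e. X is isotropic; and |X_t| = 1 gives K = 1.
print(np.allclose(W @ W.conj().T, n * np.eye(n)))  # True
```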
\section{Notes} \label{s: notes}
\paragraph{For Section~\ref{s: introduction}}
We work with two kinds of moment assumptions for random matrices: sub-gaussian and heavy-tailed.
These are the two extremes. By the central limit theorem, the sub-gaussian tail decay
is the strongest condition one can demand from an isotropic distribution. In contrast, our heavy-tailed model
is completely general -- no moment assumptions (except the variance) are required.
It would be interesting to analyze random matrices with independent rows or columns in the
intermediate regime, {\em between sub-gaussian and heavy-tailed} moment assumptions.
We hope that for distributions with an appropriate finite moment (say, $(2+\varepsilon)$th or $4$th),
the results should be the same as for sub-gaussian distributions, i.e. no $\log n$ factors should occur.
In particular, tall random matrices ($N \gg n$) should still be approximate isometries.
This indeed holds for sub-exponential distributions \cite{ALPT};
see \cite{V covariance} for an attempt to go down to finite moment assumptions.
\paragraph{For Section~\ref{s: preliminaries}}
The material presented here is well known.
The volume argument presented in Lemma~\ref{net cardinality} is quite flexible.
It easily generalizes to covering numbers of more general metric spaces, including convex bodies
in Banach spaces. See \cite[Lemma 4.16]{Pisier volume} and other parts of \cite{Pisier volume} for various
methods to control covering numbers.
\paragraph{For Section~\ref{s: sub-gaussian}}
The concept of sub-gaussian random variables is due to Kahane~\cite{Kahane}.
His definition was based on the moment generating function
(Property 4 in Lemma~\ref{sub-gaussian properties}),
which automatically required sub-gaussian random variables to be centered.
We found it more convenient to use the equivalent Property 3 instead.
The characterization of sub-gaussian random variables in terms of tail decay and moment growth
in Lemma~\ref{sub-gaussian properties} also goes back to \cite{Kahane}.
The rotation invariance of sub-gaussian random variables (Lemma~\ref{rotation invariance})
is an old observation \cite{BuKo}. Its consequence, Proposition~\ref{sub-gaussian large deviations},
is a general form of {\em Hoeffding's inequality}, which is usually stated for bounded random variables.
For more on large deviation inequalities, see also notes for Section~\ref{s: sub-exponential}.
The Khintchine inequality is usually stated for the particular case of symmetric Bernoulli random variables.
It can be extended to $0<p<2$ using a simple extrapolation argument based on H\"older's inequality,
see \cite[Lemma~4.1]{Ledoux-Talagrand}.
\paragraph{For Section~\ref{s: sub-exponential}}
Sub-gaussian and sub-exponential random variables can be studied together in a general framework.
For a given exponent $0 < \alpha < \infty$,
one defines general $\psi_\alpha$ random variables,
those with moment growth $(\mathbb{E} |X|^p)^{1/p} = O(p^{1/\alpha})$.
Sub-gaussian random variables correspond to $\alpha = 2$ and sub-exponentials
to $\alpha = 1$. The reader is encouraged to extend
the results of Sections~\ref{s: sub-gaussian} and \ref{s: sub-exponential} to this general class.
Proposition~\ref{sub-exponential large deviations} is a form of {\em Bernstein's inequality},
which is usually stated for bounded random variables in the literature.
These forms of Hoeffding's and Bernstein's inequalities
(Propositions~\ref{sub-gaussian large deviations} and \ref{sub-exponential large deviations})
are partial cases of a large deviation inequality for general $\psi_\alpha$ norms,
which can be found in \cite[Corollary~2.10]{Talagrand canonical} with a similar proof.
For a thorough introduction to large deviation inequalities for sums of independent random variables (and more),
see the books \cite{Petrov, Ledoux-Talagrand, Dembo-Zeitouni} and the tutorial \cite{BBL}.
\paragraph{For Section~\ref{s: isotropic}}
Sub-gaussian distributions in $\mathbb{R}^n$ are well studied in geometric functional analysis;
see \cite{MePaTJ reconstruction} for a link with compressed sensing.
General $\psi_\alpha$ distributions in $\mathbb{R}^n$ are discussed e.g. in \cite{GiMi concentration}.
Isotropic distributions on convex bodies, and more generally isotropic log-concave distributions,
are central to asymptotic convex geometry (see \cite{Giannopoulos isotropic, Paouris})
and computational geometry \cite{Vempala}.
A completely different way in which isotropic distributions appear in convex geometry is from
{\em John's decompositions} for contact points of convex bodies, see \cite{Ball, Rudelson contact, Vershynin John}.
Such distributions are finitely supported and therefore are usually heavy-tailed.
For an introduction to the concept of {\em frames} (Example~\ref{random vectors}),
see \cite{KC, Christensen}.
\paragraph{For Section~\ref{s: sums matrices}}
The non-commutative Khintchine inequality, Theorem~\ref{non-commutative Khintchine},
was first proved by Lust-Piquard \cite{Lust-Piquard} with an unspecified constant $B_p$
in place of $C \sqrt{p}$.
The optimal value of $B_p$ was computed by Buchholz \cite{Buc01, Buc05};
see \cite[Section~6.5]{Rauhut structured} for a thorough introduction to Buchholz's argument.
For the complementary range $1 \le p \le 2$, a corresponding version of non-commutative Khintchine
inequality was obtained by Lust-Piquard and Pisier \cite{Lu-Pi}.
By a duality argument implicitly contained in \cite{Lu-Pi} and independently observed by Marius Junge,
this latter inequality also implies the optimal order $B_p = O(\sqrt{p})$,
see \cite{Rudelson isotropic} and \cite[Section~9.8]{Pisier operator}.
Rudelson's Corollary~\ref{Rudelson} was initially proved using a majorizing measure
technique; our proof follows Pisier's argument from \cite{Rudelson isotropic}
based on the non-commutative Khintchine inequality.
\paragraph{For Section~\ref{s: entries}}
The ``Bai-Yin law'' (Theorem~\ref{Bai-Yin}) was established for $s_{\max}(A)$ by Geman \cite{Geman}
and Yin, Bai and Krishnaiah \cite{YBK}.
The part for $s_{\min}(A)$ is due to Silverstein \cite{Silverstein} for Gaussian random matrices.
Bai and Yin \cite{Bai-Yin} gave a unified treatment of both extreme singular values for general distributions.
The fourth moment assumption in Bai-Yin's law is known to be necessary \cite{Bai-Silverstein-Yin}.
Theorem~\ref{Gaussian} and its argument is due to Gordon \cite{Gordon 84, Gordon 85, Gordon 92}.
Our exposition of this result and of Corollary~\ref{Gaussian deviation} follows \cite{DS}.
Proposition~\ref{Gaussian concentration} is just a tip of an iceberg called {\em concentration of measure
phenomenon}. \index{Concentration of measure}
We do not discuss it here because there are many excellent sources, some of which
were mentioned in Section~\ref{s: introduction}. Instead we give just one example related
to Corollary~\ref{Gaussian deviation}.
For a general random matrix $A$ with independent centered entries bounded by $1$,
one can use Talagrand's concentration inequality for convex
Lipschitz functions on the cube \cite{Tal1, Tal2}.
Since $s_{\max}(A)= \|A\|$ is a convex function of
$A$, Talagrand's concentration inequality implies
$\mathbb{P} \big\{ |s_{\max}(A) - \Median(s_{\max}(A))| \ge t \big\} \le 2 e^{-ct^2}$.
Although the precise value of the median may be unknown,
integration of this inequality shows
that $|\mathbb{E} s_{\max}(A)-\Median(s_{\max}(A))| \le C$.
For the recent developments related to the {\em hard edge} problem
for almost square and square matrices (including Theorem~\ref{RV rectangular})
see the survey \cite{RV ICM}.
\paragraph{For Section~\ref{s: rows}}
Theorem~\ref{sub-gaussian rows} on random matrices with sub-gaussian rows,
as well as its proof by a covering argument, is folklore in geometric functional analysis.
The use of covering arguments in a similar context goes back to Milman's proof of Dvoretzky's theorem
\cite{Milman Dvoretzky}; see e.g. \cite{Ball} and \cite[Chapter 4]{Pisier volume} for an introduction.
In the more narrow context of extreme singular values of random matrices,
this type of argument appears recently e.g. in \cite{ALPT}.
The breakthrough work on heavy-tailed isotropic distributions is due to Rudelson \cite{Rudelson isotropic}.
He used Corollary~\ref{Rudelson} in the way we described in the proof of
Theorem~\ref{heavy-tailed rows exp si} to show that $\frac{1}{N} A^*A$ is an approximate isometry.
Probably Theorem~\ref{heavy-tailed rows} can also be deduced by a modification of this argument;
however it is simpler to use the non-commutative Bernstein's inequality.
The symmetrization technique is well known.
For a slightly more general two-sided inequality than Lemma~\ref{symmetrization},
see \cite[Lemma~6.3]{Ledoux-Talagrand}.
The problem of estimating covariance matrices described in Section~\ref{s: covariance} is
a basic problem in statistics, see e.g. \cite{Johnstone}.
However, most work in the statistical literature is focused on the normal distribution
or general product distributions (up to linear transformations),
which corresponds to studying random matrices with independent entries.
For non-product distributions, an interesting example
is given by the uniform distributions on convex sets \cite{KLS}.
As we mentioned in Example~\ref{random vectors sub-gaussian}, such
distributions are sub-exponential but not necessarily sub-gaussian,
so Corollary~\ref{covariance sub-gaussian} does not apply.
Still, the sample size $N = O(n)$ suffices to estimate the covariance matrix in this case
\cite{ALPT}. It is conjectured that the same should hold for general distributions
under a finite moment assumption (e.g., a finite fourth moment) \cite{V covariance}.
Corollary~\ref{random sub-matrices} on random sub-matrices is a variant of
Rudelson's result from \cite{Rudelson sub-matrices}. The study of random sub-matrices
was continued in \cite{RV sampling}. Random sub-frames were studied in \cite{V frames}
where a variant of Corollary~\ref{random sub-frames} was proved.
\paragraph{For Section~\ref{s: columns}}
Theorem~\ref{sub-gaussian columns} for sub-gaussian columns seems to be new.
However, historically the efforts of geometric functional analysts were immediately focused on the more difficult case
of sub-exponential tail decay (given by uniform distributions on convex bodies).
An indication to prove results like Theorem~\ref{sub-gaussian columns}
by decoupling and covering is present in \cite{Bourgain}
and is followed in \cite{GiMi concentration, ALPT}.
The normalization condition $\|A_j\|_2 = \sqrt{N}$ in Theorem~\ref{sub-gaussian columns}
cannot be dropped but can be relaxed. Namely, consider the random variable
$\delta := \max_{j \le n} \big| \frac{\|A_j\|_2^2}{N} - 1 \big|$.
Then the conclusion of Theorem~\ref{sub-gaussian columns} holds with \eqref{smin smax columns} replaced by
$$
(1-\delta)\sqrt{N} - C \sqrt{n} - t \le s_{\min}(A) \le s_{\max}(A) \le (1+\delta)\sqrt{N} + C \sqrt{n} + t.
$$
Theorem~\ref{heavy-tailed columns} for heavy-tailed columns also seems to be new.
The incoherence parameter $m$
is meant to prevent collisions of the columns of $A$ in a quantitative way.
It is not clear whether the {\em logarithmic factor} is needed in the conclusion
of Theorem~\ref{heavy-tailed columns}, or whether the incoherence parameter alone takes care of the
logarithmic factors whenever they appear. The same question can be raised for all other
results for heavy-tailed matrices in Section~\ref{s: heavy-tailed rows} and their applications --
can we replace the logarithmic factors by more sensitive quantities (e.g. the logarithm
of the incoherence parameter)?
\paragraph{For Section~\ref{s: restricted isometries}}
For a mathematical introduction to compressed sensing, see the introductory chapter of this book
and \cite{FR, CS website}.
A version of Theorem~\ref{sub-gaussian RIP} was proved in \cite{MePaTJ} for the row-independent model;
an extension from sub-gaussian to sub-exponential distributions is given in \cite{ALPT RIP}.
A general framework of stochastic processes with sub-exponential tails is discussed
in \cite{Mendelson}.
For the column-independent model, Theorem~\ref{sub-gaussian RIP} seems to be new.
Proposition~\ref{concentration RIP} that formalizes a simple approach to restricted
isometry property based on concentration is taken from \cite{BDDW}.
Like Theorem~\ref{sub-gaussian RIP}, it can also be used to show that
Gaussian and Bernoulli random matrices are restricted isometries.
Indeed, it is not difficult to check that these matrices satisfy a concentration inequality
as required in Proposition~\ref{concentration RIP} \cite{Achlioptas}.
Section~\ref{s: heavy-tailed RIP} on heavy-tailed restricted isometries is
an exposition of the results from \cite{RV Fourier}.
Using concentration of measure techniques, one can prove a version of
Theorem~\ref{heavy-tailed RIP} with high probability $1 - n^{-c \log^3 k}$
rather than in expectation \cite{Rauhut structured}.
Earlier, Cand\`es and Tao \cite{Candes-Tao Fourier} proved a similar result
for random Fourier matrices, although with a slightly higher exponent
in the logarithm for the number of measurements in \eqref{m heavy-tailed},
$m = O(k \log^6 n)$.
The survey \cite{Rauhut structured} offers a thorough exposition of the material
presented in Section~\ref{s: heavy-tailed RIP} and more.
| {
"timestamp": "2011-11-28T02:00:17",
"yymm": "1011",
"arxiv_id": "1011.3027",
"language": "en",
"url": "https://arxiv.org/abs/1011.3027",
"abstract": "This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory. The reader will learn several tools for the analysis of the extreme singular values of random matrices with independent rows or columns. Many of these methods sprung off from the development of geometric functional analysis since the 1970's. They have applications in several fields, most notably in theoretical computer science, statistics and signal processing. A few basic applications are covered in this text, particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing. These notes are written particularly for graduate students and beginning researchers in different areas, including functional analysts, probabilists, theoretical statisticians, electrical engineers, and theoretical computer scientists.",
"subjects": "Probability (math.PR); Functional Analysis (math.FA); Numerical Analysis (math.NA)",
"title": "Introduction to the non-asymptotic analysis of random matrices"
} |
https://arxiv.org/abs/2110.06244 | Congruence properties of combinatorial sequences via Walnut and the Rowland-Yassawi-Zeilberger automaton | Certain famous combinatorial sequences, such as the Catalan numbers and the Motzkin numbers, when taken modulo a prime power, can be computed by finite automata. Many theorems about such sequences can therefore be proved using Walnut, which is an implementation of a decision procedure for proving various properties of automatic sequences. In this paper we explore some results (old and new) that can be proved using this method. | \section{Introduction}
We study the properties of two famous combinatorial sequences.
For $n \geq 0$, let $$C_n = \frac{1}{n+1}\binom{2n}{n}$$ denote the
\emph{$n$-th Catalan number} and let $$M_n = \sum_{k=0}^{\lfloor n/2
\rfloor}\binom{n}{2k}C_k$$ denote the \emph{$n$-th Motzkin number}.
For more about the Catalan numbers, see
\cite{Stanley}.
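Both definitions are immediate to evaluate; the following Python sketch computes the first few values of each sequence directly from the formulas above.

```python
from math import comb

def catalan(n):
    """C_n = binom(2n, n) / (n + 1); the division is always exact."""
    return comb(2 * n, n) // (n + 1)

def motzkin(n):
    """M_n = sum over k of binom(n, 2k) * C_k."""
    return sum(comb(n, 2 * k) * catalan(k) for k in range(n // 2 + 1))

print([catalan(n) for n in range(8)])  # [1, 1, 2, 5, 14, 42, 132, 429]
print([motzkin(n) for n in range(8)])  # [1, 1, 2, 4, 9, 21, 51, 127]
```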
Many authors have studied congruence properties of these and other
sequences modulo primes $p$ or prime powers $p^\alpha$.
Notably, Alter and Kubota \cite{AK73} studied the Catalan numbers modulo $p$,
and Deutsch and Sagan \cite{DS06} studied many sequences,
including the Catalan numbers, Motzkin numbers, Central Delannoy numbers,
Ap\'ery numbers, etc., modulo certain prime powers.
Eu, Liu, and Yeh \cite{ELY08} studied the Catalan and Motzkin numbers
modulo $4$ and $8$, and Krattenthaler and M\"uller \cite{KM18} studied
the Motzkin numbers and related sequences modulo powers of $2$.
Rowland and Yassawi \cite{RY15} and Rowland and Zeilberger \cite{RZ14}
gave different methods to compute finite automata that compute the
sequences $(C_n \bmod p^\alpha)_{n \geq 0}$ and $(M_n \bmod
p^\alpha)_{n \geq 0}$ (and many other similar sequences),
where $p^\alpha$ is a prime power. Rowland and
Zeilberger provide a number of these automata for different $p^\alpha$
at the website
\begin{center}
\url{https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/meta.html}
\end{center}
along with the Maple code used to compute them.
We use some of these automata, along with the program Walnut
\cite{Walnut}, available at the website
\begin{center}
\url{https://cs.uwaterloo.ca/~shallit/walnut.html}
\end{center}
to study properties of these sequences.
We use the Rowland--Zeilberger algorithm (and Walnut) as a black-box,
so we do not discuss the theory behind it. We just mention that
this algorithm applies to any sequence of numbers that can be defined
as the \emph{constant term} of $[P(x)]^nQ(x)$, where $P$ and $Q$ are
\emph{Laurent polynomials}. In particular, the $n$-th Catalan number
is the constant term of $(1/x+2+x)^n(1-x)$ and the $n$-th Motzkin number
is the constant term of $(1/x+1+x)^n(1-x^2)$.
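These constant-term identities can be checked directly by multiplying out the Laurent polynomials; the following Python sketch (Laurent polynomials stored as exponent-to-coefficient dictionaries) is illustrative.

```python
from math import comb

def lmul(p, q):
    """Multiply two Laurent polynomials stored as {exponent: coefficient} dicts."""
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return r

def const_term(P, Q, n):
    """Constant term of P(x)^n * Q(x)."""
    acc = {0: 1}
    for _ in range(n):
        acc = lmul(acc, P)
    return lmul(acc, Q).get(0, 0)

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def motzkin(n):
    return sum(comb(n, 2 * k) * catalan(k) for k in range(n // 2 + 1))

# C_n is the constant term of (1/x + 2 + x)^n (1 - x);
# M_n is the constant term of (1/x + 1 + x)^n (1 - x^2).
for n in range(12):
    assert const_term({-1: 1, 0: 2, 1: 1}, {0: 1, 1: -1}, n) == catalan(n)
    assert const_term({-1: 1, 0: 1, 1: 1}, {0: 1, 2: -1}, n) == motzkin(n)
print("ok")
```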
Note that the automata produced by this method read their input in
least-significant-digit-first format. All of the automata in this
paper therefore also follow this convention. We use the notation
$(n)_k$ to denote the base-$k$ representation of
$n$ in the lsd-first format.
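For reference, the lsd-first representation $(n)_k$ is easy to compute; a small Python helper (the name `lsd` is ours, for illustration):

```python
def lsd(n, k):
    """Digits of n in base k, least-significant-digit first: (n)_k."""
    digits = [n % k]
    n //= k
    while n:
        digits.append(n % k)
        n //= k
    return digits

print(lsd(11, 2))  # [1, 1, 0, 1]  (11 = 1 + 2 + 8)
print(lsd(0, 3))   # [0]
```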
Burns has posted several manuscripts to the arXiv
\cite{Bur_A,Bur_B,Bur_C,Bur_D,Bur_E,Bur_F} in which he investigates
various properties of the Catalan and Motzkin numbers modulo primes $p$
by analyzing structural properties of automata computed using the
Rowland--Yassawi algorithm. This paper takes a similar approach, but
we use Walnut to simplify/automate much of the analysis.
\section{Motzkin numbers}
Deutsch and Sagan \cite{DS06} gave a characterization of ${\bf m}_2 = (M_n \bmod 2)_{n \geq
0}$ that involves the Thue--Morse sequence $${\bf t} = (t_n)_{n \geq
0} = (0,1,1,0,1,0,0,1,\ldots).$$ Let $${\bf c} = (c_n)_{n \geq 0} =
(1,3,4,5,7,\ldots)$$ denote the starting positions of the ``runs'' in
${\bf t}$, excluding the first run (which, of course, starts at position
$0$).
\begin{theorem}[Deutsch and Sagan] The Motzkin number $M_n$ is even if
and only if either $n \in
4{\bf c}-2$ or $n \in 4{\bf c}-1$.
\end{theorem}
\begin{proof}
We can prove this result using Walnut. The Rowland--Zeilberger algorithm
produces the automaton in Figure~\ref{MOT2}, which, when fed $(n)_2$,
outputs $M_n \bmod 2$.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.75]{MOT2.pdf}
\caption{Automaton for $M_n \bmod 2$}\label{MOT2}
\end{figure}
Next we use Walnut to construct an
automaton for the sequence ${\bf c}$. The commands
\begin{verbatim}
def tm_blocks "?lsd_2 n>=1 & (At t<n => T_lsd[i+t]=T_lsd[i]) &
T_lsd[i+n]!=T_lsd[i] & (i=0|T_lsd[i-1]!=T_lsd[i])":
def tm_block_start "?lsd_2 i>=1 & ($tm_blocks(i,1)|$tm_blocks(i,2))":
\end{verbatim}
produce the automaton \texttt{tm\_block\_start} given in
Figure~\ref{tm_block_start}, which computes ${\bf c}$.
We see that the elements of ${\bf c}$ are
$$ \{ m4^k : m \text{ is odd and } k \geq 0 \}.$$
\begin{figure}[htb]
\centering
\includegraphics[scale=0.75]{tm_block_start.pdf}
\caption{Automaton for starting positions of ``runs'' in {\bf t}}\label{tm_block_start}
\end{figure}
To complete the proof of the theorem, it suffices to execute the
Walnut command
\begin{verbatim}
eval even_mot "?lsd_2 An (MOT2[n]=@0 <=> Ei $tm_block_start(i) &
(n+2=4*i | n+1=4*i))":
\end{verbatim}
which produces the output ``TRUE''.
\end{proof}
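Independently of the Walnut proof, combining the theorem with the explicit description $\{m\,4^k : m \text{ odd}, k \ge 0\}$ of ${\bf c}$ given above makes the characterization easy to test numerically; the following Python sketch checks it for $n < 300$.

```python
from math import comb

def motzkin(n):
    return sum(comb(n, 2 * k) * (comb(2 * k, k) // (k + 1)) for k in range(n // 2 + 1))

def in_c(n):
    """Membership in c = { m * 4^k : m odd, k >= 0 } (run starts of t, excluding 0)."""
    if n <= 0:
        return False
    while n % 4 == 0:
        n //= 4
    return n % 2 == 1

for n in range(300):
    # n is in 4c - 2 or 4c - 1, i.e. (n+2)/4 or (n+1)/4 is an element of c
    predicted = ((n + 2) % 4 == 0 and in_c((n + 2) // 4)) or \
                ((n + 1) % 4 == 0 and in_c((n + 1) // 4))
    assert (motzkin(n) % 2 == 0) == predicted
print("verified for n < 300")
```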
Deutsch and Sagan also characterized ${\bf m}_3 = (M_n \bmod 3)_{n \geq
0}$:
\begin{theorem}[Deutsch and Sagan]
The Motzkin number $M_n$ satisfies
\[
M_n \equiv_3
\begin{cases}
1, & \text{ if either } (n)_3 = 0w, w \in \{0,1\}^* \text{ or }
(n+2)_3 = 0w, w \in \{0,1\}^*, \\
2, & \text{ if } (n+1)_3 = 0w, w \in \{0,1\}^*, \\
0, & \text{ otherwise.}
\end{cases}
\]
\end{theorem}
This can also be obtained directly from the automaton for ${\bf m}_3$.
If we examine ${\bf m}_5 = (M_n \bmod 5)_{n \geq 0}$, however, we
discover that its behaviour is very different
from that of ${\bf m}_3$. Deutsch and Sagan determined the positions
of the $0$'s in ${\bf m}_5$.
\begin{theorem}[Deutsch and Sagan]
The Motzkin number $M_n$ is divisible by $5$ if and only if
$n$ is of one of the forms
\[ (5i+1)5^{2j}-2, \quad (5i+2)5^{2j-1}-1, \quad
(5i+3)5^{2j-1}-2, \quad (5i+4)5^{2j}-1, \]
where $i \geq 0$ and $j \geq 1$.
\end{theorem}
\begin{proof}
The Rowland--Zeilberger algorithm gives the automaton in Figure~\ref{MOT5}, which fully characterizes ${\bf m}_5$.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.60]{MOT5.pdf}
\caption{Automaton for $M_n \bmod 5$}\label{MOT5}
\end{figure}
The Walnut command
\begin{verbatim}
eval mot5mod0 "?lsd_5 MOT5[n]=@0":
\end{verbatim}
produces the automaton in Figure~\ref{mot5mod0}, from which
one easily derives the result.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.75]{mot5mod0.pdf}
\caption{Automaton for the positions of the $0$'s in ${\bf m}_5$}\label{mot5mod0}
\end{figure}
\end{proof}
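The theorem can also be checked numerically. The sketch below reads the four forms with $i \ge 0$ and $j \ge 1$ (so that every exponent of $5$ is positive; this reading is our interpretation, and it matches the small cases $M_9$, $M_{13}$, $M_{23}$).

```python
from math import comb

def motzkin(n):
    return sum(comb(n, 2 * k) * (comb(2 * k, k) // (k + 1)) for k in range(n // 2 + 1))

N = 120
zeros = {n for n in range(N) if motzkin(n) % 5 == 0}

forms = set()
for j in range(1, 4):          # j >= 1: exponents 5^{2j-1}, 5^{2j} are positive
    for i in range(N):
        forms.update(m for m in ((5*i+1) * 5**(2*j) - 2, (5*i+2) * 5**(2*j-1) - 1,
                                 (5*i+3) * 5**(2*j-1) - 2, (5*i+4) * 5**(2*j) - 1)
                     if 0 <= m < N)

assert zeros == forms
print(sorted(zeros))
```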
One notices that ${\bf m}_3$ contains
arbitrarily large runs of $0$'s, whereas ${\bf m}_5$ does
not have this property. We can use Walnut to determine the types of repetitions
that are present in ${\bf m}_5$, but we first need to introduce some
definitions.
Let $w = w_1w_2\cdots w_n$ be a word of length $n$ and
\emph{period} $p$; i.e., $w_i = w_{i+p}$ for $i = 1, \ldots, n-p$.
If $p$ is the smallest period of $w$, we say that the \emph{exponent}
of $w$ is $n/p$. We also say that $w$ is an \emph{$(n/p)$-power} of
\emph{order $p$}. Words of exponent $2$ (resp.~$3$) are called
\emph{squares} (resp.~\emph{cubes}). If ${\bf x}$ is an infinite
sequence, we define the \emph{critical exponent} of ${\bf x}$ as
$$ \sup \{ e \in \mathbb{Q}: \text{ there is a factor of } {\bf x}
\text{ with exponent } e \}. $$
\begin{theorem}
The sequence ${\bf m}_5$ has critical exponent $3$. Furthermore,
the only cubes in ${\bf m}_5$ are $111$, $222$, $333$, and $444$.
\end{theorem}
\begin{proof}
We execute the Walnut commands
\begin{verbatim}
eval tmp "?lsd_5 Ei,n (n>=1) & At (t<=2*n) => MOT5[i+t]=MOT5[i+t+n]":
eval tmp "?lsd_5 Ei (n>=1) & At (t<2*n) => MOT5[i+t]=MOT5[i+t+n]":
\end{verbatim}
and note that the first outputs ``FALSE'', indicating that ${\bf m}_5$
has no factors of exponent larger than $3$, and the second produces
an automaton that only accepts $n=1$, indicating that the only cubes
in ${\bf m}_5$ have order $1$. By inspecting a prefix of ${\bf m}_5$,
one sees that $111$, $222$, $333$, and $444$ all occur.
\end{proof}
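These repetition properties can be spot-checked on a prefix of ${\bf m}_5$ computed with the standard Motzkin recurrence $M_n = M_{n-1} + \sum_{k=0}^{n-2} M_k M_{n-2-k}$; a prefix check of course only supports, and cannot replace, the Walnut proof.

```python
# Motzkin numbers mod 5 via the standard recurrence
# M_n = M_{n-1} + sum_{k=0}^{n-2} M_k M_{n-2-k}
m = [1, 1]
for n in range(2, 1500):
    m.append((m[n - 1] + sum(m[k] * m[n - 2 - k] for k in range(n - 1))) % 5)
m5 = "".join(map(str, m))

assert "444" in m5                 # a cube of order 1 (the first starts at n = 19)
for p in range(2, 31):             # no cube of any order 2..30 occurs in this prefix
    for i in range(len(m5) - 3 * p + 1):
        assert not (m5[i:i + p] == m5[i + p:i + 2 * p] == m5[i + 2 * p:i + 3 * p])
print("no cubes of order > 1 among the first", len(m5), "terms")
```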
We can also prove that every pattern of residues that appears in
${\bf m}_5$ appears infinitely often, and furthermore, we can
give a bound on when the next occurrence of a pattern will appear
in ${\bf m}_5$. We say that a sequence ${\bf x}$ is
\emph{uniformly recurrent} if for every factor $w$ of ${\bf x}$,
there is a constant $c$ such that every occurrence of $w$ in ${\bf x}$
is followed by another occurrence of $w$ at distance at most $c$.
Note that ${\bf m}_3$ is \emph{not} uniformly recurrent. This is
due to the presence of arbitrarily large runs of $0$'s in ${\bf m}_3$.
On the other hand, the sequence ${\bf m}_5$ exhibits rather different
behaviour.
\begin{theorem}
The sequence ${\bf m}_5$ is uniformly recurrent. Furthermore,
if $w$ has length $n$ and occurs at position $i$ in ${\bf m}_5$,
then there is another occurrence of $w$ at some position $j$,
where $i < j \leq i+200n$. The bound $200n$ cannot be replaced by
$200n-1$.
\end{theorem}
\begin{proof}
This is proved with the Walnut commands
\begin{verbatim}
def mot5faceq "?lsd_5 At (t<n) => (MOT5[i+t]=MOT5[j+t])":
eval tmp "?lsd_5 An (n>=1) => Ai Ej (j>i) & (j<i+200*n+1) &
$mot5faceq(i,j,n)":
eval tmp "?lsd_5 An (n>=1) => Ai Ej (j>i) & (j<i+200*n) &
$mot5faceq(i,j,n)":
\end{verbatim}
noting that the first \texttt{eval} command returns ``TRUE'' and
the second returns ``FALSE''.
\end{proof}
Burns \cite{Bur_E} studied ${\bf m}_p$ for $p$ between $7$
and $29$ using automata computed using the Rowland--Yassawi algorithm.
Among other things, his work suggests that depending on the value of $p$, the sequence
${\bf m}_p$ either behaves like ${\bf m}_3$, where $0$ has density $1$ (i.e.,
$p = 7, 17, 19$), or ${\bf m}_p$ behaves like ${\bf m}_5$, where $0$ has
density $<1$ (i.e., $p = 11, 13, 23, 29$). Many of Burns' results could also be
obtained using Walnut.
\begin{problem}\label{mot_rec}Characterize the primes $p$ for which ${\bf m}_p$ is
uniformly recurrent.
\end{problem}
Indeed, based on Burns' results and the discussion in the next section,
we conjecture that the answer to this problem is given by the sequence
of primes that do not divide any central trinomial number. This is
sequence
\seqnum{A113305} of \cite{OEIS}.
\section{Central trinomial coefficients}
The Motzkin numbers are closely related to the \emph{central trinomial
coefficients} $T_n$. The usual definition of $T_n$ is as the coefficient of $x^n$
in $(1+x+x^2)^n$, but the definition
$$T_n = \sum_{k \geq 0}\binom{n}{2k}\binom{2k}{k}$$ better illustrates the connection between
these numbers and the Motzkin numbers. The number $T_n$ is also
the constant term of $(1/x+1+x)^n$, which is the form needed for the Rowland--Zeilberger
algorithm. Deutsch and Sagan
studied the divisibility of $T_n$ modulo primes and Noe \cite{Noe06} did the same
for generalized central trinomial numbers.
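The equivalence of the two definitions of $T_n$ is easy to test numerically; a Python sketch:

```python
from math import comb

def T_sum(n):
    """T_n = sum over k of binom(n, 2k) * binom(2k, k)."""
    return sum(comb(n, 2 * k) * comb(2 * k, k) for k in range(n // 2 + 1))

def T_coeff(n):
    """T_n as the coefficient of x^n in (1 + x + x^2)^n, by iterated convolution."""
    p = [1]
    for _ in range(n):
        q = [0] * (len(p) + 2)
        for i, a in enumerate(p):
            q[i] += a
            q[i + 1] += a
            q[i + 2] += a
        p = q
    return p[n]

vals = [T_sum(n) for n in range(8)]
assert vals == [T_coeff(n) for n in range(8)] == [1, 1, 3, 7, 19, 51, 141, 393]
print(vals)
```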
\begin{theorem}[Deutsch and Sagan]
The central trinomial coefficient $T_n$ satisfies
\[
T_n \equiv_3
\begin{cases}
1, & \text{ if } (n)_3 \text{ does not contain a } 2;\\
0, & \text{ otherwise.}
\end{cases}
\]
\end{theorem}
Deutsch and Sagan proved this by an application of Lucas' Theorem;
it is also immediate from the automaton produced by the Rowland--Zeilberger
algorithm. As with the Motzkin numbers, the behaviour of $T_n$ modulo $5$ is
rather different from that modulo $3$. We collect some properties below
(compare with those of ${\bf m}_5$ from the previous section).
\begin{theorem}
Let ${\bf t}_5 = (T_n \bmod 5)_{n \geq 0}$. Then
\begin{enumerate}
\item ${\bf t}_5$ does not contain $0$ (i.e., $T_n$ is never divisible by $5$);
\item ${\bf t}_5$ has critical exponent $3$; furthermore,
the only cubes in ${\bf t}_5$ are $111$, $222$, $333$, and $444$;
\item ${\bf t}_5$ is uniformly recurrent; furthermore,
if $w$ has length $n$ and occurs at position $i$ in ${\bf t}_5$,
then there is another occurrence of $w$ at some position $j$,
where $i < j \leq i+200n-192$. The constant $192$ cannot be replaced with $193$.
\item If $w$ has length $n$ and appears in ${\bf t}_5$, then
$w$ appears in the prefix of ${\bf t}_5$ of length $121n$. The
quantity $121n$ cannot be replaced with $121n-1$.
\end{enumerate}
\end{theorem}
\begin{proof}
Properties 1)--3) can all be obtained by similar Walnut commands to those used in the previous
section for the Motzkin numbers. We just need the automaton for ${\bf t}_5$.
The Rowland--Zeilberger algorithm gives the pleasantly symmetric automaton
in Figure~\ref{TRI5}.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.60]{TRI5.pdf}
\caption{Automaton for $T_n \bmod 5$}\label{TRI5}
\end{figure}
For Property 4), we use the Walnut commands
\begin{verbatim}
def pr_tri5 "?lsd_5 Aj Ei i+n<=s & At t<n => TRI5[i+t]=TRI5[j+t]":
eval tmp "?lsd_5 An $pr_tri5(n,121*n)":
eval tmp "?lsd_5 An $pr_tri5(n,121*n-1)":
\end{verbatim}
and note that the last two commands return ``TRUE'' and ``FALSE'',
respectively.
\end{proof}
We should note that in the special case of the central trinomial coefficients,
it is not necessary to resort to either the Rowland--Zeilberger or Rowland--Yassawi algorithms to compute the automaton for $T_n \bmod p$. Using the following result of Deutsch and Sagan,
one can directly define the automaton for $T_n \bmod p$.
\begin{theorem}[Deutsch and Sagan]\label{tri_lucas}
Let $(n)_p = n_0n_1\cdots n_r$. Then $$T_n \equiv_p \prod_{i=0}^r T_{n_i}.$$
\end{theorem}
An immediate consequence is that $T_n$ is divisible by $p$ if and only if one of the $T_{n_i}$
is divisible by $p$.
This criterion allows one to determine the primes that do not divide any central trinomial
coefficient; i.e., those in \seqnum{A113305} of \cite{OEIS}, which we conjectured in the
previous section to be the ones that answer the question of Problem~\ref{mot_rec}.
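This multiplicative (Lucas-like) property of Theorem~\ref{tri_lucas} is easy to verify numerically for small primes; a Python sketch:

```python
from math import comb

def T(n):
    return sum(comb(n, 2 * k) * comb(2 * k, k) for k in range(n // 2 + 1))

def digits(n, p):
    """Base-p digits of n (their order is irrelevant for the product below)."""
    d = [n % p]
    n //= p
    while n:
        d.append(n % p)
        n //= p
    return d

for p in (3, 5, 7):
    for n in range(200):
        prod = 1
        for dig in digits(n, p):
            prod *= T(dig)
        assert T(n) % p == prod % p   # T_n is congruent mod p to the digit product
print("ok")
```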
We can also give the following sufficient condition for ${\bf t}_p = (T_n \bmod p)_{n \geq 0}$
to be uniformly recurrent. For $i =0,\ldots,p-1$, let $\tau_i = T_i \bmod p$.
\begin{theorem}
Let $p$ be prime and let $\Sigma = \{\tau_i : i = 0,\ldots,p-1\}$. If $\Sigma$ does not
contain $0$ but does contain a primitive root modulo $p$, then ${\bf t}_p$ is uniformly
recurrent.
\end{theorem}
\begin{proof}
Clearly the order of the product in Theorem~\ref{tri_lucas} does not matter; it follows then
that Theorem~\ref{tri_lucas} holds for the most-significant-digit-first representation
of $n$, as well as the least-significant-digit-first representation. If we consider
Theorem~\ref{tri_lucas} with $n$ written in msd-first notation, we see that ${\bf t}_p$
is generated by iterating the morphism $f : \Sigma^* \to \Sigma^*$ defined by
$$f(\tau_i) = (\tau_i\tau_0 \bmod p)(\tau_i\tau_1 \bmod p)\cdots (\tau_i\tau_{p-1} \bmod p)$$
for $i = 0,\ldots,p-1$; i.e., ${\bf t}_p = f^\omega(1)$.
Recall that if there exists $t$ such that for every $a,b \in \Sigma$ the word $f^t(a)$
contains $b$, we say that $f$ is a \emph{primitive morphism}. Now $\tau_0=1$, so for
$i=0,\ldots,p-1$, we can write $f(\tau_i)=\tau_i x_i$ for some word $x_i$. It follows
that $f^p(\tau_i) = \tau_i f(x_i) f^2(x_i)\cdots f^{p-1}(x_i)$. Furthermore,
if $0 \notin \Sigma$ and $x_0$ contains a primitive root modulo $p$, then for every $i$,
each non-zero residue modulo $p$ appears in one of $\tau_i, f(x_i), f^2(x_i), \ldots,
f^{p-1}(x_i)$. This proves that the morphism $f$ is primitive. A standard result
from the theory of morphic sequences states that any fixed point of a primitive morphism
is uniformly recurrent \cite[Theorem~10.9.5]{AS03}.
\end{proof}
\begin{example}
For $p=5$, we have $(T_0,T_1,T_2,T_3,T_4) = (1,1,3,7,19)$, so $(\tau_0,\tau_1,\tau_2,\tau_3,\tau_4) =
(1,1,3,2,4)$ contains the primitive root $2$. The word
$${\bf t}_5 = 113241132433412221434423111324\cdots$$ is
uniformly recurrent and is equal to
$f^\omega(1)$, where $f$ is the morphism defined by
\begin{align*}
1 &\to 11324\\
2 &\to 22143\\
3 &\to 33412\\
4 &\to 44231.
\end{align*}
\end{example}
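The example can be reproduced by iterating the morphism and comparing against $T_n \bmod 5$; a Python sketch:

```python
from math import comb

def T(n):
    return sum(comb(n, 2 * k) * comb(2 * k, k) for k in range(n // 2 + 1))

# the morphism f for p = 5, written on the alphabet {1, 2, 3, 4}
f = {"1": "11324", "2": "22143", "3": "33412", "4": "44231"}

w = "1"
for _ in range(3):      # w = f^3(1), a prefix of f^omega(1) of length 125
    w = "".join(f[c] for c in w)

assert w == "".join(str(T(n) % 5) for n in range(len(w)))
assert w.startswith("113241132433412221434423111324")
print(w[:30])           # 113241132433412221434423111324
```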
A computer calculation shows that for each prime $p$ appearing in the list of
initial values $2,5,11,13,\ldots,479$ of \seqnum{A113305}, the first $p$
terms of ${\bf t}_p$ contain a primitive root modulo $p$.
Hence, each of these ${\bf t}_p$'s is uniformly recurrent.
\section{Catalan numbers}
Alter and Kubota \cite{AK73} studied the sequences ${\bf c}_p = (C_n \bmod p)_{n
\geq 0}$, where $p$ is prime. They proved that the runs of $0$'s in
${\bf c}_p$ have lengths
\begin{equation}\label{cp_0runs}
\frac{p^{m+1+\delta_{3p}}-3}{2}, \qquad m = 0, 1, 2, \ldots,
\end{equation}
where $\delta_{3p}$ is $1$ when $p=3$ and $0$ otherwise. This implies, of
course, that for every prime $p$, the sequence ${\bf c}_p$ is not
uniformly recurrent. Alter and Kubota also
proved that the blocks of non-zero values in ${\bf c}_p$ have length
$$\frac{p+3(1+2\delta_{3p})}{2}.$$
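The run-length formula \eqref{cp_0runs} can be tested on initial segments of ${\bf c}_p$; the sketch below checks that every complete run of $0$'s among the first few thousand terms of ${\bf c}_3$, ${\bf c}_5$, and ${\bf c}_7$ has one of the predicted lengths (a subset check, since not every length need occur in a finite window).

```python
from math import comb

def run_lengths(seq):
    """Lengths of the maximal runs of 0's, dropping a run still open at the end."""
    runs, cur = [], 0
    for v in seq:
        if v == 0:
            cur += 1
        elif cur:
            runs.append(cur)
            cur = 0
    return runs

for p in (3, 5, 7):
    d = 1 if p == 3 else 0                               # delta_{3p}
    allowed = {(p ** (m + 1 + d) - 3) // 2 for m in range(6)}
    cat_mod_p = [(comb(2 * n, n) // (n + 1)) % p for n in range(3000)]
    lens = run_lengths(cat_mod_p)
    assert lens and set(lens) <= allowed
print("ok")
```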
For $p=3$, Deutsch and Sagan~\cite[Theorem~5.2]{DS06} gave a complete
characterization of ${\bf c}_3$. We can obtain a similar
characterization using Walnut.
\begin{theorem}[Deutsch and Sagan]\label{c3_0runs}
The runs of $0$'s in ${\bf c}_3$ begin at positions $n$, where either
$$(n)_3 \in 211^* \text{ or } (n)_3 \in 211^*0\{0,1\}^*,$$
and have length $(3^{i+2}-3)/2$, where $i$ is the length of the
leftmost block of $1$'s in $(n)_3$. The blocks of non-zero values
in ${\bf c}_3$ are given by the following:
\begin{itemize}
\item The block $11222$ occurs at position $0$.
\item The block $111222$ occurs at all positions $n$ where $(n)_3 \in 222^*0w$
for some $w \in \{0,1\}^*$ that contains an odd number of $1$'s.
\item The block $222111$ occurs at all positions $n$ where $(n)_3 \in 222^*0w$
for some $w \in \{0,1\}^*$ that contains an even number of $1$'s.
\end{itemize}
\end{theorem}
\begin{proof}
We use the automaton for ${\bf c}_3$ given in Figure~\ref{CAT3}.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.75]{CAT3.pdf}
\caption{Automaton for $C_n \bmod 3$}\label{CAT3}
\end{figure}
The Walnut command
\begin{verbatim}
eval cat3max0 "?lsd_3 n>=1 & (At t<n => CAT3[i+t]=@0) &
CAT3[i+n]!=@0 & (i=0|CAT3[i-1]!=@0)":
\end{verbatim}
produces the automaton in Figure~\ref{cat3max0}.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.75]{cat3max0.pdf}
\caption{Automaton for runs of $0$'s in ${\bf c}_3$}\label{cat3max0}
\end{figure}
Examining the transition
labels of the first component of the input gives the claimed representation for
the starting positions of the runs of $0$'s and examining the transition
labels of the second component gives the claimed length.
For the blocks of non-zero values, we execute the Walnut commands
\begin{verbatim}
eval cat3max12 "?lsd_3 n>=1 & (At t<n => CAT3[i+t]!=@0) &
CAT3[i+n]=@0 & (i=0|CAT3[i-1]=@0)":
eval cat3_111222 "?lsd_3 $cat3max12(i,6) & CAT3[i]=@1 &
CAT3[i+1]=@1 & CAT3[i+2]=@1 & CAT3[i+3]=@2 &
CAT3[i+4]=@2 & CAT3[i+5]=@2":
eval cat3_222111 "?lsd_3 $cat3max12(i,6) & CAT3[i]=@2 &
CAT3[i+1]=@2 & CAT3[i+2]=@2 & CAT3[i+3]=@1 &
CAT3[i+4]=@1 & CAT3[i+5]=@1":
eval cat3all12 "?lsd_3 Ai,n $cat3max12(i,n) =>
(i=0 | $cat3_111222(i) | $cat3_222111(i))":
\end{verbatim}
to obtain the automata in Figures~\ref{cat3_111222} and \ref{cat3_222111}.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.75]{cat3_111222.pdf}
\caption{Automaton for blocks $111222$ in ${\bf c}_3$}\label{cat3_111222}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.75]{cat3_222111.pdf}
\caption{Automaton for blocks $222111$ in ${\bf c}_3$}\label{cat3_222111}
\end{figure}
\end{proof}
Note that the length of the runs given in Theorem~\ref{c3_0runs} is
exactly what is given by the result of Alter and Kubota stated above
in Eq.~\eqref{cp_0runs}.
We can also perform the same calculation for $p=5$ to obtain
\begin{theorem}
The runs of $0$'s in ${\bf c}_5$ begin at positions $n$, where either
$$(n)_5 \in 32^* \text{ or } (n)_5 \in 32^*\{0,1\}\{0,1,2\}^*,$$
and have length $(5^{i+2}-3)/2$, where $i$ is the length of the leftmost
block of $2$'s in $(n)_5$.
\end{theorem}
\begin{proof}
We use the automaton for ${\bf c}_5$ given in Figure~\ref{CAT5}.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.65]{CAT5.pdf}
\caption{Automaton for $C_n \bmod 5$}\label{CAT5}
\end{figure}
The Walnut command
\begin{verbatim}
eval cat5max0 "?lsd_5 n>=1 & (At t<n => CAT5[i+t]=@0) &
CAT5[i+n]!=@0 & (i=0|CAT5[i-1]!=@0)":
\end{verbatim}
produces the automaton in Figure~\ref{cat5max0}.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.75]{cat5max0.pdf}
\caption{Automaton for runs of $0$'s in ${\bf c}_5$}\label{cat5max0}
\end{figure}
Examining the transition
labels, as in the proof of Theorem~\ref{c3_0runs}, gives the result.
\end{proof}
Again, note that the lengths of the runs match what is given by
Eq.~\eqref{cp_0runs}.
\begin{theorem}
The sequence ${\bf c}_5$ begins with the non-zero block $112$. The other
non-zero blocks in ${\bf c}_5$ are $1331$, $2112$, $3443$, and $4224$.
\end{theorem}
\begin{proof}
The Walnut command
\begin{verbatim}
eval cat5max1234 "?lsd_5 n>=1 & (At t<n => CAT5[i+t]!=@0) &
CAT5[i+n]=@0 & (i=0|CAT5[i-1]=@0)":
\end{verbatim}
produces the automaton in Figure~\ref{cat5max1234}.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.75]{cat5max1234.pdf}
\caption{Automaton for non-zero blocks in ${\bf c}_5$}\label{cat5max1234}
\end{figure}
We see that the initial non-zero block has length $3$ and all others
have length $4$. We omit the Walnut command to verify
the values of these length $4$ blocks, but it is easy to
formulate.
\end{proof}
\section{Conclusion}
We have shown how to use Walnut to obtain automated proofs of certain
results in the literature concerning the Catalan and Motzkin numbers modulo $p$,
as well as the central trinomial coefficients modulo $p$.
We were also able to use Walnut to examine other properties of these
sequences that have not previously been explored, such as the presence (or
absence) of certain repetitive patterns and the property of being
uniformly recurrent. We hope these results encourage other researchers to
continue to further explore these properties for other sequences.
| {
"timestamp": "2021-10-14T02:00:43",
"yymm": "2110",
"arxiv_id": "2110.06244",
"language": "en",
"url": "https://arxiv.org/abs/2110.06244",
"abstract": "Certain famous combinatorial sequences, such as the Catalan numbers and the Motzkin numbers, when taken modulo a prime power, can be computed by finite automata. Many theorems about such sequences can therefore be proved using Walnut, which is an implementation of a decision procedure for proving various properties of automatic sequences. In this paper we explore some results (old and new) that can be proved using this method.",
"subjects": "Combinatorics (math.CO); Formal Languages and Automata Theory (cs.FL); Number Theory (math.NT)",
"title": "Congruence properties of combinatorial sequences via Walnut and the Rowland-Yassawi-Zeilberger automaton"
} |
https://arxiv.org/abs/1111.0433 | A closed-form approximation for the median of the beta distribution | A simple closed-form approximation for the median of the beta distribution Beta(a, b) is introduced: (a-1/3)/(a+b-2/3) for (a,b) both larger than 1 has a relative error of less than 4%, rapidly decreasing to zero as both shape parameters increase. | \section{Introduction}
\label{s1}
Consider
the beta distribution
$\mathrm{Beta}(a,b)$, with density function
$$
\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}
\theta^{a-1}(1-\theta)^{b-1}.
$$
The mean of $\mathrm{Beta}(a, b)$ is readily obtained
by the formula $a/(a+b)$, but
there is no general closed formula for the median.
The median, here denoted by $m(a,b)$,
is the value that satisfies
$$
\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}
\int_0^{m(a,b)}\theta^{a-1}(1-\theta)^{b-1} \mathrm{d}\theta
=
\frac{1}{2}.
$$
The relationship $m(a,b)=1-m(b, a)$ holds.
Only in the special cases $a=1$ or $b=1$ do we obtain
an exact formula: $m(a, 1)=2^{-1/a}$
and $m(1, b)=1-2^{-1/b}$.
Moreover, when $a=b$, the median is exactly $1/2$.
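These special cases provide a convenient sanity check for any numerical median routine; the Python sketch below evaluates the beta CDF by simple midpoint-rule integration (an illustration only, not a recommended quadrature) and confirms that the exact medians give CDF value $1/2$.

```python
import math

def beta_cdf(a, b, x, steps=20000):
    """CDF of Beta(a, b) at x by midpoint-rule integration of the density."""
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    h = x / steps
    return c * h * sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
                       for i in range(steps))

for a in (1.5, 2.0, 4.0):
    assert abs(beta_cdf(a, 1.0, 2.0 ** (-1.0 / a)) - 0.5) < 1e-3       # m(a,1) = 2^{-1/a}
    assert abs(beta_cdf(1.0, a, 1.0 - 2.0 ** (-1.0 / a)) - 0.5) < 1e-3 # m(1,b) = 1 - 2^{-1/b}
assert abs(beta_cdf(3.0, 3.0, 0.5) - 0.5) < 1e-3                       # m(a,a) = 1/2
print("ok")
```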
There is a large literature on the
incomplete beta function and
its inverse
(see, e.g., \citet{Dutka:1981} for a review).
The focus in the literature has been
on accurate numerical computation;
a simple, practical closed-form
approximation
has been lacking.
\begin{figure}[t]
\begin{center}
\scalebox{0.80}{\includegraphics{fig-betaerr}}
\caption{\label{fig-betaerrors}
Relative errors of the approximation
$(a-1/3)/(a+b-2/3)$ of the
median of the $\mathrm{Beta}(a, b)$ distribution,
compared with the numerically computed value
for several fixed $p=a/(a+b)<1/2$.
The horizontal axis shows the shape parameter $a$
on logarithmic scale.
From left to right,
$p=0.499$, 0.49, 0.45, 0.35, 0.25, and 0.001.
}
\end{center}
\end{figure}
\section{A new closed-form approximation for the median}
Trivial bounds for the median can be derived
\citep{Payton:1989}, which are
a consequence of the more general
mode-median-mean inequality
\citep{Groeneveld:Meeden:1977}.
In the case of the beta distribution with
$1<a<b$,
the median is bounded by the
mode $(a-1)/(a+b-2)$ and the mean $a/(a+b)$:
$$
\frac{a-1}{a+b-2}
\le
m(a,b)
\le
\frac{a}{a+b}.
$$
For $a\le1$ the formula for the mode does not apply,
as the density has no interior mode.
If $1<b<a$, the order of the inequality is reversed.
Equality holds if and only if $a=b$;
in this case the mean, median, and mode are all equal to $1/2$.
This inequality shows that if the mean is kept fixed
at some $p$,
and one of the shape parameters is increased, say $a$,
then the median is sandwiched between
$p(a-1)/(a-2p)$ and $p$,
hence the median tends to $p$.
From the formulas for the mode and mean,
it can be conjectured that
the median $m(a,b)$ could be approximated by
$m(a,b;d)=(a-d)/(a+b-2d)$ for some $d\in(0,1)$,
as this form would satisfy the above inequality
while agreeing with the symmetry requirement,
that is, $m(a,b;d)=1-m(b,a;d)$.
\begin{figure}[t]
\begin{center}
\scalebox{0.80}{\includegraphics{fig-betaerrp}}
\caption{\label{fig-betaerrp}
Relative errors of the approximation
$(a-1/3)/(a+b-2/3)$ of the
median of the $\mathrm{Beta}(a, b)$ distribution
over the whole range of possible distribution means
$p=a/(a+b)$.
The smaller of the shape parameters is fixed,
i.e. for $p\le 0.5$,
the median is computed for $\mathrm{Beta}(a, a(1-p)/p)$
and for $p>0.5$,
the median is computed for $\mathrm{Beta}(bp/(1-p), b)$.
}
\end{center}
\end{figure}
Since a $\mathrm{Beta}(a,b)$ variate can be expressed as
the ratio $\gamma_1/(\gamma_1+\gamma_2)$ where
$\gamma_1\sim\mathrm{Gamma}(a)$ and
$\gamma_2\sim\mathrm{Gamma}(b)$ (both with unit scale),
it is useful to have a look at the median
of the gamma distribution.
\citet{Berg:Pedersen:2006} studied the median
function of the unit-scale
gamma distribution, denoted here by $M(a)$,
for any shape parameter $a>0$,
and obtained
$M(a) = a - 1/3 + o(1)$,
rapidly approaching $a-1/3$ as $a$ increases.
It can therefore be conjectured that
the distribution median may be approximated by,
\begin{equation}\label{eq-beta}
m(a, b) \approx
m(a, b; 1/3)
=
\frac{a-1/3}{(a-1/3)+(b-1/3)}
=
\frac{a-1/3}{a+b-2/3}.
\end{equation}
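The approximation is easy to test numerically. The sketch below computes the median by bisecting a Simpson-rule evaluation of the regularized incomplete beta function (a rough stand-in for a library routine such as SciPy's `betaincinv`), and checks that the relative error stays below the 4\% and 1\% figures quoted in the text:

```python
import math

def beta_cdf(x, a, b, n=4000):
    """Regularized incomplete beta function I_x(a, b) via composite
    Simpson's rule -- a rough sketch, adequate for moderate a, b >= 1."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    def f(t):
        if t <= 0.0 or t >= 1.0:
            return 0.0
        return math.exp(logc + (a - 1) * math.log(t) + (b - 1) * math.log1p(-t))
    h = x / n
    s = f(0.0) + f(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

def beta_median(a, b):
    """Bisection on the (monotone) CDF."""
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if beta_cdf(mid, a, b) < 0.5 else (lo, mid)
    return 0.5 * (lo + hi)

def median_approx(a, b, d=1/3):
    return (a - d) / (a + b - 2 * d)

for a, b in [(1, 2), (1.5, 4), (2, 5), (3, 9), (8, 24)]:
    m = beta_median(a, b)
    rel = abs(median_approx(a, b) - m) / m
    assert rel < 0.04            # < 4% whenever the smaller parameter is >= 1
    if a >= 2:
        assert rel < 0.01        # < 1% whenever it is >= 2
```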
Figure~\ref{fig-betaerrors}
shows that
this approximation indeed appears to approach
the numerically computed median asymptotically
for all distribution means $p=a/(a+b)$ as the
(smaller) shape parameter $a\to\infty$.
For $a\ge1$, the relative error is less than 4\%,
and for $a\ge2$ this is already less than 1\%.
\begin{figure}[t]
\begin{center}
\scalebox{0.80}{\includegraphics{fig-betadisterr}}
\caption{\label{fig-betadisterr}
Logarithm of the scaled absolute error (distance)
$\log(|m(a,b;d)-m(a,b)|/p)$,
computed for a fixed distribution mean $p=0.01$ and various
$d$. The approximate median of the
$\mathrm{Beta}(a,b)$ distribution
is defined as $m(a,b;d)=(a-d)/(a+b-2d)$.
Due to scaling of the error,
the graph and its scale will not essentially change even if
the error is computed for other values of $p<0.5$.
The approximation $m(a,b;1/3)$ performs the
most consistently,
attaining the lowest absolute error eventually as the
precision of the distribution increases.
}
\end{center}
\end{figure}
Figure~\ref{fig-betaerrp} shows the relative
error over all possible distribution means $p=a/(a+b)$,
as the smaller of the two shape parameters varies from
$1$ to $4$. This illustrates how the relative error
tends uniformly to zero over all $p$ as the shape parameters
increase.
The figure also shows that
the formula consistently either underestimates
or overestimates the median depending on whether
$p<0.5$ or $p>0.5$.
However, the function
$m(a,b;d)$
approximates the median fairly accurately
if some other $d$ close to $1/3$ (say $d=0.3$) is chosen.
Figure~\ref{fig-betadisterr} displays
curves of the logarithm of the absolute
difference from the numerically computed
median for a fixed $p=0.01$, as the shape parameter
$a$ increases.
The absolute difference
has been scaled by $p$ before taking the logarithm:
due to this scaling,
the error stays approximately constant as $p$ decreases
so the picture and its scale will not essentially change even if
the error is computed for other values of $p<0.5$.
The figure shows that although approximations such as
$d=0.3$ have a lower absolute error for some $a$,
the error of $m(a, b; 1/3)$ tends to be lower in the long run;
moreover, it behaves more consistently,
decreasing at a constant rate on the logarithmic scale.
In practical applications, $d=0.333$ should be a sufficiently
good approximation of $d=1/3$.
\begin{figure}[t]
\begin{center}
\scalebox{0.7}{\includegraphics{fig-betatail}}
\caption{\label{fig-betatail}
Tail probabilities $\Pr(\theta<m)$
of the $\mathrm{Beta}(a, b)$ distribution
when $m=(a-1/3)/(a+b-2/3)$.
As the smaller of the two shape
parameters increases, the tail probability
tends rapidly and uniformly to $0.5$.
}
\end{center}
\end{figure}
Another measure of the accuracy is the
tail probability
$\Pr(\theta \le m(a,b;1/3))$ of a $\mathrm{Beta}(a, b)$ variate $\theta$:
good approximators of the
median should yield probabilities close to $1/2$.
Figure~\ref{fig-betatail} shows that
as long as the smaller of the shape parameters
is at least 1,
the tail probability is bounded between $0.4865$ and $0.5135$.
As the shape parameters increase, the
probability tends
rapidly and uniformly to $0.5$.
Finally, let us have a look at a
well-known paper that provides further
support for the uniqueness of $m(a,b;1/3)$.
\citet{Peizer:Pratt:1968} and \citet{Pratt:1968}
provide approximations for
the probability function $\Pr(\theta\le x)$
of a $\mathrm{Beta}(a,b)$ variate $\theta$.
Although they do not provide a formula
for the inverse, we can examine their approximation of
the probability function at the approximate median.
According to \citet{Peizer:Pratt:1968},
$\Pr(\theta\le x)$
is well approximated by
$\Phi(z(a,b; x))$ where $\Phi$ is the
standard normal probability function,
and $z$ is a function of the shape parameters and
the quantile $x$.
Consider $m=m(a,b;d)$:
$z(a,b;m)$ should be close to zero and at least
tend to zero fast as $a$ and $b$ increase.
Now assume that $p$ is fixed, $a$ varies and $b=a(1-p)/p$.
The function $z(a, b; m)$ equals, rewritten with
the notation in this paper,
\begin{equation}\label{eq-peizer-beta}
\sqrt{p}\frac{1-2m}{(a-p)^{1/2}}\left(
1/3-d
-
\frac{0.02p}{a}\left[
\frac{1}{2} + \frac{1-dp/a}{p(1-p)}
\right]
\right)
\left(\frac{1+f(a,p;d)}{m(1-m)}\right)^{1/2},
\end{equation}
where the function $f(a,p;d)$ tends to zero
as $a$ increases,
being exactly zero only when $d=1/2$ or $m=1/2$.
It is evident that for the fastest convergence
rate to zero, one should choose $d=1/3$.
This is of the order $O(a^{-3/2})$;
if $d\ne 1/3$,
for example if we choose the mean $p$
as the approximation of the median ($d=0$),
the rate is at most $O(a^{-1/2})$.
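This convergence-rate comparison can be illustrated numerically (an illustration under the same Simpson-rule median routine as before, not part of the original analysis): fix the mean at $p=1/4$, so $b=3a$, and compare the error of $d=1/3$ with the error of using the mean itself ($d=0$).

```python
import math

def beta_cdf(x, a, b, n=8000):
    """Regularized incomplete beta I_x(a, b) by composite Simpson's rule
    (a sketch; fine for the smooth integrands with a, b >= 2 used here)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    def f(t):
        if t <= 0.0 or t >= 1.0:
            return 0.0
        return math.exp(logc + (a - 1) * math.log(t) + (b - 1) * math.log1p(-t))
    h = x / n
    s = f(0.0) + f(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

def beta_median(a, b):
    lo, hi = 0.0, 1.0
    for _ in range(50):                      # bisection on the CDF
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if beta_cdf(mid, a, b) < 0.5 else (lo, mid)
    return 0.5 * (lo + hi)

errors = {}
for a in (2, 4, 8):
    b = 3 * a                                # keeps the mean at p = 1/4
    med = beta_median(a, b)
    errors[a] = {d: abs((a - d) / (a + b - 2 * d) - med) for d in (0, 1/3)}
    assert errors[a][1/3] < errors[a][0]     # d = 1/3 beats the mean (d = 0)
assert errors[8][1/3] < errors[2][1/3]       # and its error decreases in a
```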
% arXiv:1111.0433 (2011-11-03), https://arxiv.org/abs/1111.0433
% Title: A closed-form approximation for the median of the beta distribution
% Subjects: Statistics Theory (math.ST); Computation (stat.CO)
% Abstract: A simple closed-form approximation for the median of the beta
% distribution Beta(a, b) is introduced: (a-1/3)/(a+b-2/3) for (a, b) both
% larger than 1 has a relative error of less than 4%, rapidly decreasing to
% zero as both shape parameters increase.
% arXiv:0708.2643, https://arxiv.org/abs/0708.2643
% Title: On fixed points of permutations
% Abstract: The number of fixed points of a random permutation of 1,2,...,n
% has a limiting Poisson distribution. We seek a generalization, looking at
% other actions of the symmetric group. Restricting attention to primitive
% actions, a complete classification of the limiting distributions is given.
% For most examples, they are trivial -- almost every permutation has no
% fixed points. For the usual action of the symmetric group on k-sets of
% 1,2,...,n, the limit is a polynomial in independent Poisson variables.
% This exhausts all cases. We obtain asymptotic estimates in some examples,
% and give a survey of related results.
\section{Introduction} \label{intro}
One of the oldest theorems in probability theory is the Montmort
(1708) limit
theorem for the number of fixed points of a random permutation of
$\{1, 2, \ldots, n\}$. Let $S_n$ be the symmetric group.
For an
element $w \in S_n$, let $A(w) = \{i : w(i) = i\}$. Montmort \cite{monmort}
proved that
\refstepcounter{thm}
\begin{equation} \label{eqn0}
{|\{w : |A(w)| = j\}| \over n!} \rightarrow {1 \over e} \ {1 \over
j!}
\end{equation}
for $j$ fixed as $n$ tends to infinity. The limit theorem (\ref{eqn0}) has
had many refinements and variations. See Tak\'{a}cs \cite{Ta} for its
history, Chapter 4 of Barbour, Holst, Janson \cite{BHJ} or Chatterjee,
Diaconis, Meckes \cite{CDM} for modern versions.
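For small $n$ the convergence in (\ref{eqn0}) is already visible by exhaustive enumeration; a quick Python check (an illustration, not from the paper):

```python
import math
from itertools import permutations

# Exact distribution of the number of fixed points of a uniform random
# permutation of {0, ..., n-1}, compared with the Poisson(1) limit.
n = 8
counts = {}
for w in permutations(range(n)):
    f = sum(1 for i in range(n) if w[i] == i)
    counts[f] = counts.get(f, 0) + 1

total = math.factorial(n)
assert counts[0] == 14833                    # derangements of 8 letters
for j in range(4):
    exact = counts.get(j, 0) / total
    poisson = math.exp(-1) / math.factorial(j)
    assert abs(exact - poisson) < 1e-3       # already close at n = 8
```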
The limiting distribution $P_{\lambda}(j) = e^{-\lambda} \lambda^j /
j!$ (in (\ref{eqn0}) $\lambda = 1$) is the Poisson distribution of
``the law of small numbers''. Its occurrence in many other parts of
probability (see e.g. Aldous \cite{Al}) suggests that we seek
generalizations of (\ref{eqn0}), searching for new limit laws.
In the present paper we look at other finite sets on which $S_n$ acts.
It seems natural to restrict to transitive action -- otherwise, things
break up into orbits in a transparent way. It is also natural to
restrict to primitive actions. Here $S_n$ acts {\it primitively} on
the finite set $\Omega$ if we cannot partition $\Omega$ into disjoint
blocks $\Delta_1, \Delta_2, \ldots, \Delta_h$ where $S_n$ permutes the
blocks (if $\Delta_i^w \cap \Delta_j \not = \emptyset$ then
$\Delta_i^w = \Delta_j)$. The familiar wreath products which permute
within blocks and between blocks are an example of an
imprimitive action.
The primitive actions of $S_n$ have been classified in the O'Nan-Scott
theorem. We describe this carefully in Section \ref{Onan}. For the
study of fixed points most of the cases can be handled by a marvelous
theorem of Luczak-Pyber \cite{LP}. This shows that, except for the
action of $S_n$ on $k$-sets of an $n$ set, almost all permutations
have no fixed points (we say $w$ is a derangement). This result is
explained in Section \ref{lucpyb}. For $S_n$ acting on $k$-sets, one
can assume that $k < n/2$, and there is a nontrivial limit if and
only if $k$ stays fixed as $n$ tends to infinity. In these cases, the
limit is shown to be an explicit polynomial in independent Poisson
random variables. This is the main content of Section
\ref{ksets}. Section \ref{matchings} works out precise asymptotics for
the distribution of fixed points in the action of $S_n$ on
matchings. Section \ref{impriv} considers more general imprimitive
subgroups. Section \ref{prim} proves that the proportion of elements
of $S_n$ which belong to a primitive subgroup not containing $A_n$ is
at most $O(n^{-2/3+\alpha})$ for any $\alpha>0$; this improves on the
bound of Luczak and Pyber \cite{LP}. Finally, Section \ref{survey}
surveys related results (including analogs of our main results for
finite classical groups) and applications of the distribution of fixed
points and derangements.
If a finite group $G$ acts on $\Omega$ with $F(w)$ the number of
fixed points of
$w$, the ``lemma that is not Burnside's'' implies that
$$
\begin{aligned}
E(F(w)) &= \# \ \mbox{orbits of} \ G \ \mbox{on} \ \Omega \\
E(F^2 (w)) &= \# \ \mbox{orbits of} \ G \ \mbox{on} \ \Omega
\times \Omega = \ \mbox{rank} := r.
\end{aligned}
$$
If $G$ is transitive on $\Omega$ with isotropy group $H$, then
the rank is also the number of orbits of $H$ on $\Omega$ and
so equal to the number of $H-H$ double cosets in $G$.
Thus for transitive actions
\refstepcounter{thm}
\begin{equation}
E(F(w)) = 1, \quad {\rm Var} (F(w)) = \ \mbox{rank}-1
\end{equation}
In most of our examples $P(F(w) = 0) \rightarrow 1$ but because of
(1.2), this cannot be seen by moment methods. The standard second
moment method (Durrett \cite{Du}, page 16) says that a non-negative
integer random variable satisfies $P(X > E(X)/2) \geq {1 \over 4}
(EX)^2 / E(X^2)$. Specializing to our case, $P(F(w) > 0) \geq 1/(4
r)$; thus $P(F(w) = 0) \leq 1 - 1/(4 r)$. This shows that the
convergence to $1$ cannot be too rapid.
There is also a quite easy lower bound
for $P(F(w)=0)$ \cite{GW}. Even the simplest instance of this lower
bound was only observed in 1992 in \cite{CaCo}.
We reproduce the simple proof from \cite{GW}. Let $n=|\Omega|$
and let $G_0$ be the set of elements of $G$ with no fixed points.
Note that $F(w) \le n$, whence
$$
\sum_G (F(w)-1)(F(w)-n) \le \sum_{G_0} (F(w)-1)(F(w)-n) = n|G_0|.
$$
On the other hand, the left hand side is equal
to $|G|(r -1)$. Thus,
$P(F(w)=0) \ge (r-1)/n$. We record these bounds.
\begin{theorem} \label{basic} Let $G$ be a finite
transitive permutation group
of degree $n$ and rank $r$. Then
$$
\frac{r-1}{n} \le P(F(w)=0) \le 1 - \frac{1}{4r}.
$$
\end{theorem}
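Theorem \ref{basic} can be checked exhaustively on small actions. The sketch below (illustrative, not from the paper) takes $S_4$ acting on $2$-sets, where the degree is $6$ and the rank is $3$:

```python
from fractions import Fraction
from itertools import combinations, permutations

n, k = 4, 2
omega = [frozenset(c) for c in combinations(range(n), k)]
N = len(omega)                               # degree of the action: 6
group = list(permutations(range(n)))

def act(w, s):
    return frozenset(w[i] for i in s)

fixed = [sum(1 for s in omega if act(w, s) == s) for w in group]
assert sum(fixed) == len(group)              # E(F) = 1: the action is transitive
# rank = E(F^2), by the Burnside-type identities in the text
assert sum(f * f for f in fixed) % len(group) == 0
r = sum(f * f for f in fixed) // len(group)
assert r == k + 1                            # rank of S_n on k-sets

p0 = Fraction(fixed.count(0), len(group))    # proportion of derangements
assert Fraction(r - 1, N) <= p0 <= 1 - Fraction(1, 4 * r)
```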
Frobenius groups of order $n(n-1)$ with $n$ a prime power
are the only possibilities when the lower bound is achieved.
The inequality above shows that if $P(F(w)=0)$ tends to $1$,
then the rank tends to infinity. Indeed, for primitive
actions of symmetric and alternating groups, this is also
a sufficient condition -- see Theorem \ref{theC}.
\section{O'Nan-Scott Theorem} \label{Onan}
Let $G$ act transitively on the finite set $\Omega$. By standard
theory we may represent $\Omega = G/ G_{\alpha}$, with any fixed
$\alpha \in \Omega$. Here $G_{\alpha} = \{w : \alpha^w = \alpha\}$
with the action being left multiplication on the cosets. Further
(Passman \cite[3.4]{P}) the action of $G$ on $\Omega$ is primitive if and only
if the isotropy group $G_{\alpha}$ is maximal.
Thus, classifying primitive actions of $G$ is the same problem as
classifying maximal subgroups $H$ of $G$.
The O'Nan-Scott theorem classifies maximal subgroups of $A_n$ and
$S_n$ up to determining the almost simple primitive groups of degree $n$.
\begin{theorem} {\rm [O'Nan-Scott]} Let $H$ be a maximal
subgroup of $G=A_n$ or $S_n$. Then, one of the following three cases holds:
\begin{description}
\item [I] $H$ acts primitively as a subgroup of $S_n$ (primitive case),
\item [II] $H = (S_a \wr S_b) \cap G$ (wreath product), $n = a \cdot b, |
\Omega | = \frac{n!}{(a!)^b \cdot b!}$ (imprimitive case), or
\item [III] $H = (S_k \times S_{n-k}) \cap G, | \Omega | = {n \choose k}$
with $1 \le k < n/2$ (intransitive case).
\end{description}
Further, in case I, one of the following holds:
\begin{description}
\item [Ia] $H$ is almost simple,
\item [Ib] $H$ is diagonal,
\item [Ic] $H$ preserves product structure, or
\item [Id] $H$ is affine.
\end{description}
\end{theorem}
{\it Remarks and examples:}
\begin{enumerate}
\item Note that in cases I, II, III, the modifiers
`primitive', `imprimitive', `intransitive' apply to $H$.
Since $H$ is maximal in $G$, $\Omega \cong G/H$ is a primitive
$G$-set.
We present an example and suitable additional definitions for each case.
\item In case III, $\Omega$ is the $k$-sets of $\{1, 2,
\ldots, n\}$ with the obvious action of $S_n$. This case is
discussed extensively in Section \ref{ksets} below.
\item In case II, take $n$ even with $a = 2, b = n/2$. We may identify
$\Omega$ with the set of perfect matchings on $n$ points --
partitions of $n$ into $n/2$ two-element subsets where order within
a subset or among subsets does not matter. For example if $n = 6,
\{1,2\} \{3,4\} \{5,6\}$ is a perfect matching. For this case,
$|\Omega| = \frac{n!}{2^{n/2} (n/2)!} = (n-1)(n-3) \cdots
(1)$. Careful asymptotics for this case are developed in Section
\ref{matchings}. More general imprimitive subgroups are considered in
Section \ref{impriv}.
\item While every maximal subgroup of $A_n$ or $S_n$ falls into one of the
categories of the O'Nan-Scott theorem,
not every group is maximal. A complete list of the exceptional examples
is in
Liebeck, Praeger and Saxl \cite{LPS1}.
\item In case Ia, $H$ is {\it almost simple} if for some non-abelian
simple group $G$, $G \leq H \leq \Aut(G)$. For example, fix $1 < k <
m$. Let $n = {m \choose k}$. Let $S_n$ be all $n!$ permutations of
the $k$ sets of $\{1, 2, \ldots, m\}$. Take $S_m \leq S_n$ acting on
the $k$-sets in the usual way. For $m \geq 5$, $S_m$ is almost
simple and primitive. Here $\Omega = S_n / S_m$ does not have a
simple combinatorial description, but this example is crucial and
the $k=2$ case will be analyzed in Section \ref{prim}.
Let $\tau \in S_m$ be a transposition. Then $\tau$ moves
precisely $ 2 \binom{m-2}{k-1}$ elements of $\Omega$.
Thus, $S_m$ embeds in $A_n$ if and only if $\binom{m-2}{k-1}$ is
even. Indeed for most primitive embeddings of $S_m$ into $S_n$, the
image is contained in $A_n$ \cite{wisc}.
It is not difficult to see that the image of $S_m$ is maximal in
either $A_n$ or $S_n$. This follows from the general result
in \cite{LPS1}. It also follows from the classification of primitive
groups containing a non-trivial
element fixing at least $n/2$ points \cite{GM}.
Similar examples can be constructed by
looking at the action of $P\Gamma L_d (q)$ on $k$-spaces
(recall the $P\Gamma L_d(q)$ is the projective group of all
semilinear transformations of a $d$ dimensional vector space
over $\F_q$). All of these are
covered by case Ia.
\item In case Ib, {\it $H$ is diagonal} if $H = G^k \cdot (\Out(G) \times S_k)$
for $G$ a non-abelian simple group, $k \geq 2$
(the dot denotes semidirect product).
Let $\Omega = G^k / D$ with $D = \{(g, g, \ldots g)\}_{g \in G}$ the
diagonal subgroup. Clearly $G^k$ acts on $\Omega$. Let $\Out(G)$ (the
outer automorphisms) act coordinate-wise and let $S_k$ act by
permuting coordinates. These transformations determine a permutation
group $H$ on the set $\Omega$. The group $H$ has normal subgroup $G^k$
with quotient isomorphic to
$\Out(G) \times S_k$. The extension usually splits, but not always.
Here is a specific example. Take $G = A_m$ for $m \geq 8$ and $k = 2$.
Then $\Out(A_m) = C_2$ and so $H = \langle A_m \times A_m, \tau, (s,s)
\rangle$ where $s$ is a transposition (or any element in $S_m$
outside of $A_m$) and $\tau$ is the involution changing coordinates.
More precisely, each coset of $D$ has a unique representative of the
form $(1, x)$. We have $(g_1, g_2) (1, x)D =
(g_1, g_2 x)D = (1,g_2 x g_1^{-1})D$. The action
of $\tau \in C_2$ takes $(1, x) \rightarrow (1, x^{-1})$ and
the action of $(s,s) \in \Out(A_m)$ takes $(1, x)$ to $(1, sxs^{-1})$.
The maximality of $H$ is somewhat subtle. We first show that if $m
\geq 8$, then $H$ is contained in $\Alt(\Omega)$. Clearly $A_m \times A_m$
is contained in $\Alt(\Omega)$. Observe that $(s,s)$ is contained in
$\Alt(\Omega)$. Indeed, taking $s$ to be a transposition, the number of
fixed points of $(s,s)$ is the size of its centralizer in $A_m$
which is $|S_{m-2}|$, and so $\frac{m!}{2}-(m-2)!$ points are
moved and this is divisible by $4$ since $m \geq 8$. To see that
$\tau$ is contained in $\Alt(\Omega)$ for $m \geq 8$, note that the number
of fixed points of $\tau$ is the number of involutions (including
the identity) in $A_m$, so it is sufficient to show that
$\frac{m!}{2}$ minus this number is a multiple of 4. This follows
from the next proposition, which is of independent combinatorial
interest.
\begin{prop} \label{countinv} Suppose that $m \geq 8$.
Then the number of involutions in $A_m$ and the number of
involutions in $S_m$ are multiples of $4$. \end{prop}
\begin{proof} Let $a(m)$ be the number of involutions in $A_m$
(including the identity). Let $b(m)$ be the number of involutions in
$S_m-A_m$. It suffices to show that $a(m) \equiv b(m) \equiv 0 \pmod 4$. For $m = 8,9$
we compute directly.
For $m > 9$, we observe that \[ a(m) = a(m-1) + (m-1)b(m-2) \] and \[
b(m) = b(m-1) + (m-1)a(m-2) \] (because an involution either fixes $1$,
giving the first term, or swaps $1$ with some $j > 1$, giving rise to the
second term). The result follows by induction.
\end{proof}
Having verified that $H$ is contained in $\Alt(\Omega)$ for $m \geq 8$,
maximality now follows from Liebeck-Praeger-Saxl \cite{LPS1}.
\item In case Ic, {\it $H$ preserves a product structure}.
Let $\Gamma=\{1,...,m\}$, $\Delta=\{1,...,t\}$, and let $\Omega$ be
the $t$-fold Cartesian product of $\Gamma$. If $C$ is a permutation
group on $\Gamma$ and $D$ is a permutation group on $\Delta$, we may
define a group $H = C \wr D$ by having $C$ act on the coordinates,
and having $D$ permute the coordinates. Primitivity of $H$ is
equivalent to $C$ acting
primitively on $\Gamma$ with some non identity element having a
fixed point and $D$ acting transitively on $\Delta$ (see, e.g.
Cameron \cite{Ca1}, Th. 4.5).
There are many examples of case Ic but $|\Omega| = m^t$
is rather restricted and
$H$ has a simple form. One specific example is as follows:
$G=S_{m^t}$, $H=S_m \wr S_t$ and $\Omega$
is the t-fold Cartesian product $\{1,\cdots,m\}^t$. The case
$t=2$ will be analyzed in detail in Section \ref{prim}.
It is easy to determine when $H$ embeds in $A_{m^t}$. We just note that
if $t=2$, then this is the case if and only if $4 \mid m$.
\item In case Id $H$ is affine. Thus $\Omega = V$, a vector
space of dimension $k$ over a field of $q$ elements (so $n= |\Omega | =
q^k$) and $H$ is the semidirect product $V \cdot GL(V)$. Since
we are interested only in maximal subgroups, $q$ must be prime.
Note that if $q$ is odd, then $H$ contains an $n-1$ cycle and so is
not contained in $A_n$. If $q=2$, then for $k > 2$, $H$ is perfect
and so is contained in $A_n$. The maximality of $H$ in $A_n$
or $S_n$ follows by
Mortimer \cite{mortimer} for $k >1$ and \cite{gurkim} if $k=1$.
\item The proof of the
O'Nan-Scott theorem is not extremely difficult.
O'Nan and
Scott each presented proofs at the Santa Cruz Conference in 1979.
There is a more delicate version which describes
all primitive permutation groups. This was proved in
Aschbacher-Scott \cite{AS} giving quite detailed information.
A short proof of the Aschbacher-O'Nan Scott Theorem is in \cite{msri}.
See also Liebeck, Praeger
and Saxl \cite{LPS2}). A textbook presentation is in Dixon and
Mortimer \cite{DxM}. We find the lively lecture notes of Cameron
(\cite{Ca1}, Chapter 4) very helpful. The theorem has a life of its
own, away from permutation groups, in the language of the generalized
Fitting subgroup $F^*$, a notion quite useful in both the statement
and the proof of the theorem. See Kurtzweil and Stellmacher \cite{KS}.
\item It turns out that many of the details above are not needed for
our main results. Only case III $(H = S_k \times S_{n-k})$ allows
non-trivial limit theorems. This is the subject of the next
section. The other cases are of interest when we try to get explicit
bounds (Sections \ref{matchings}, \ref{impriv}, \ref{prim}).
\end{enumerate}
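The recursion in the proof of Proposition \ref{countinv} is easy to check by machine. The following sketch (an illustration, not part of the paper) verifies it against direct enumeration for small $m$ and confirms the divisibility by $4$ up to $m=30$:

```python
from itertools import permutations

def brute(m):
    """Count involutions (including the identity) in A_m and in S_m - A_m."""
    a = b = 0
    for w in permutations(range(m)):
        if all(w[w[i]] == i for i in range(m)):
            two_cycles = sum(1 for i in range(m) if w[i] > i)
            if two_cycles % 2 == 0:           # even permutation
                a += 1
            else:
                b += 1
    return a, b

A = {1: 1, 2: 1}                              # a(m): involutions in A_m
B = {1: 0, 2: 1}                              # b(m): involutions in S_m - A_m
for m in range(3, 31):
    A[m] = A[m - 1] + (m - 1) * B[m - 2]
    B[m] = B[m - 1] + (m - 1) * A[m - 2]

for m in range(1, 8):
    assert (A[m], B[m]) == brute(m)           # recursion matches enumeration
for m in range(8, 31):
    assert A[m] % 4 == 0 and B[m] % 4 == 0    # divisibility for m >= 8
```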
\section{Two Theorems of Luczak-Pyber} \label{lucpyb}
The following two results are due to Luczak and Pyber.
\begin{theorem} \label{theA} (\cite{LP}) Let $S_n$ act on $\{1,
2, \ldots, n\}$ as usual and let $i(n, k)$ be the number of $w \in
S_n$ that leave some $k$-element set invariant. Then, ${i (n, k) \over
n!} \leq a k^{-.01}$ for an absolute constant $a$.
\end{theorem}
\begin{theorem} \label{theB} (\cite{LP}) Let $t_n$ denote the
number of elements of the symmetric group $S_n$ which belong to
transitive subgroups different from $S_n$ or $A_n$. Then
$$
\lim_{n \rightarrow \infty} t_n / n! = 0.
$$ \end{theorem}
Theorem \ref{theA} is at the heart of the proof of Theorem \ref{theB}.
We use them both
to show that a primitive action of $S_n$ is a derangement with
probability approaching one, unless $S_n$ acts on $k$-sets with fixed
$k$. Note that we assume that $k \leq n/2$ since the action on $k$-sets is
isomorphic to the action on $n-k$ sets.
\begin{theorem} \label{theC} Let $G_i$ be a finite symmetric
or alternating group of degree $n_i$ acting
primitively on a finite set $\Omega_i$ of cardinality at
least $3$. Assume
that $n_i \rightarrow \infty$. Let
$d_i$ be the proportion of $w \in G_i$ with no fixed points. Then
the following are equivalent:
\begin{enumerate}
\item
$
\lim_{i \rightarrow \infty} d_i = 1,
$
\item there is no fixed $k$ with $\Omega_i= \{k-\mbox{sets of} \
\{1, 2, \ldots, n_i\}\}$ for infinitely many $i$, and
\item the rank of $G_i$ acting on $\Omega_i$ tends to $\infty$.
\end{enumerate}
\end{theorem}
\begin{proof} Let $H_i$ be an isotropy group for $G_i$
acting on $\Omega_i$. If $H_i$
falls into category I or II of the O'Nan-Scott theorem,
$H_i$ is transitive. Writing out Theorem \ref{theB} above more fully,
Luczak-Pyber prove that
$$
{\left| \bigcup_H H \right| \over n!} \rightarrow 0
$$ where the union is over {\it all} transitive subgroups of $S_n$
not equal to $S_n$ or $A_n$. Thus a randomly chosen $w \in S_n$ is not
in $x H_n x^{-1}$ for any $x$ if $H_n$ falls into category I or II.
Having ruled out categories I and II, we turn to category III
($k$-sets of an $n$ set). Here, Theorem \ref{theA} shows that the chance of
a derangement is at least $1 - a k^{-.01}$ for an absolute constant $a$,
and hence tends to one as $k$ grows.
The previous paragraphs show that (2) implies (1). If the rank does not
go to $\infty$, then $d_i$ cannot approach 1 by Theorem
\ref{basic}. Thus (1) implies (3), and also (2) since the rank of the
action on k-sets is $k+1$. Clearly (3) implies (2), completing the proof.
\end{proof}
\section{$k$-Sets of an $n$-Set} \label{ksets}
In this section the limiting distribution of the number of fixed
points of a random permutation acting on $k$-sets of an $n$-set is
determined.
\begin{theorem} \label{klim} Fix $k$ and let $S_n$ act on $\Omega_{n,
k}$ -- the $k$ sets of $\{1, 2, \ldots, n\}$. Let $A_i (w)$ be the
number of $i$-cycles of $w \in S_n$ in its usual action on $\{1, 2,
\ldots, n\}$. Let $F_k(w)$ be the number of fixed points of $w$ acting
on $\Omega_{n, k}$. Then
\refstepcounter{thm}
\begin{equation}
F_k(w) = \sum_{|\lambda|=k} \prod_{i=1}^k {A_i (w) \choose
\alpha_i (\lambda)}.
\end{equation}
Here the sum is over partitions $\lambda$ of $k$ and $\alpha_i
(\lambda)$ is the number of parts of $\lambda$ equal to $i$.
(2) For all $n \geq 2$, $E(F_k) = 1$ and ${\rm Var} (F_k) = k$.
(3) As $n$ tends to infinity, the $A_i(w)$ converge to independent
Poisson $(1/i)$ random variables. \end{theorem}
\begin{proof} If $w \in S_n$ is to fix a $k$ set, the cycles
of $w$ must be grouped to partition $k$. The expression for $F_k$
just counts the distinct ways to do this. See the examples below.
This proves (1). The rank of $S_n$ acting on $k$ sets is $k+1$,
proving (2).
The joint limiting distribution of the $A_i$ is a classical result
due to Goncharov \cite{Go}. In fact, letting $X_1,X_2,\cdots,X_k$ be
independent Poisson with parameters $1,\frac{1}{2},\cdots,\frac{1}{k}$,
one has from \cite{DS} that for all $n \geq \sum_{i=1}^k ib_i$,
\[ E \left( \prod_{i=1}^k A_i(w)^{b_i} \right) =
\prod_{i=1}^k E(X_i^{b_i}).\] For total variation bounds see \cite{AT}.
This proves (3). \end{proof}
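For small $n$ the formula for $F_k$ and the moment identities can be verified by brute force; a sketch (illustrative only) for $n=6$, $k=2$, where the partitions of $2$ give $F_2 = \binom{A_1}{2} + A_2$:

```python
from itertools import combinations, permutations
from math import comb, factorial

n, k = 6, 2
s1 = s2 = 0
for w in permutations(range(n)):
    # cycle counts A_i(w) of the permutation on {0, ..., n-1}
    seen, A = [False] * n, [0] * (n + 1)
    for i in range(n):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = w[j], length + 1
            A[length] += 1
    f_formula = comb(A[1], 2) + A[2]          # the case k = 2 of the theorem
    f_direct = sum(1 for c in combinations(range(n), k)
                   if frozenset(w[i] for i in c) == frozenset(c))
    assert f_formula == f_direct              # formula = direct count of fixed 2-sets
    s1 += f_direct
    s2 += f_direct ** 2

assert s1 == factorial(n)                     # E(F_2) = 1
assert s2 == (k + 1) * factorial(n)           # E(F_2^2) = k + 1, so Var = k
```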
{\it Examples}. Throughout, let $X_1, X_2, \ldots, X_k$ be
independent Poisson random variables with parameters $1, 1/2, 1/3,
\ldots, 1/k$ respectively.
{$k=1$:} This is the usual action of $S_n$ on $\{1, 2, \ldots,
n\}$ and Theorem \ref{klim} yields (1) of the introduction: In
particular, for derangements
$$
P(F_1 (w) = 0) \rightarrow 1/e \ \dot = \ .36788 .
$$
{$k=2$:} Here $F_2(w) = {A_1 (w) \choose 2} +
A_2 (w)$ and Theorem \ref{klim} says that
$P(F_2(w) = j) \rightarrow P \left( {X_1 \choose 2} +
X_2 = j \right)$ with $X_1$ Poisson$(1)$, $X_2$ Poisson$({1 \over 2})$.
In particular
$$
P(F_2(w) = 0) \rightarrow {2 \over e^{3/2}} \ \dot = \ .44626 .
$$
{$k=3$:} Here $F_3(w) = { A_1 (w) \choose 3}
+ A_1 (w) A_2 (w) + A_3 (w)$ and
$$ P(F_3 (w) = j) \rightarrow P\left( {X_1 \choose 3} + X_1 X_2 + X_3
= j \right) .
$$
In particular
$$
P(F_3(w) = 0) \rightarrow {1 \over e^{4/3}} (1 + {3 \over 2}
e^{-1/2}) \ \dot = \ .50342.
$$
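The limiting constants above amount to a small amount of Poisson bookkeeping, which can be replicated numerically (a check, not part of the paper):

```python
import math

# k = 2: F_2 -> C(X_1, 2) + X_2 with X_1 ~ Poisson(1), X_2 ~ Poisson(1/2),
# so P(F_2 = 0) -> P(X_1 <= 1) P(X_2 = 0) = 2 e^{-1} * e^{-1/2} = 2 e^{-3/2}.
p2 = 2 * math.exp(-1) * math.exp(-0.5)
assert abs(p2 - 2 * math.exp(-1.5)) < 1e-15
assert abs(p2 - 0.44626) < 1e-5

# k = 3: we need C(X_1, 3) = 0 (i.e. X_1 <= 2), X_1 X_2 = 0, and X_3 = 0,
# so either X_1 = 0 (X_2 unconstrained) or X_1 in {1, 2} and X_2 = 0.
px1 = [math.exp(-1) / math.factorial(j) for j in range(3)]
p3 = math.exp(-1 / 3) * (px1[0] + (px1[1] + px1[2]) * math.exp(-0.5))
assert abs(p3 - math.exp(-4 / 3) * (1 + 1.5 * math.exp(-0.5))) < 1e-12
assert abs(p3 - 0.50342) < 2e-5
```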
We make the following conjecture, which has also been independently stated
as a problem by Cameron \cite{Ca2}. \\
\noindent
{\it Conjecture:} $\lim_{n \rightarrow \infty} P(F_k (w) = 0)$ is increasing
in
$k$. \\
Using Theorem \ref{klim}, one can prove the following result
which improves, in this context, the upper bound given in Theorem \ref{basic}.
\begin{prop} \[ \lim_{n \rightarrow \infty} P(F_k (w) = 0)
\leq 1 - \frac{\log(k)}{k} + O \left( \frac{1}{k} \right). \]
\end{prop}
\begin{proof} Clearly
\begin{eqnarray*} P(F_k(w) > 0) & \geq & P \left(\bigcup_{j=1}^{\lfloor
\frac{k-1}{2} \rfloor} (A_{k-j} > 0 \ \mbox{and} \ A_j > 0) \right)\\
& = & 1 - P \left( \bigcap_{j=1}^{\lfloor \frac{k-1}{2} \rfloor}
\overline{ (A_{k-j} > 0 \ \mbox{and} \ A_j > 0) }
\right). \end{eqnarray*} By Theorem \ref{klim}, this converges to \[ 1
- \prod_{j=1}^{\lfloor \frac{k-1}{2} \rfloor} \left[ 1-(1- e^{-{1/j}})
(1 - e^{-{1/(k-j)}}) \right]. \] Let $a_j=(1-e^{-1/j})$. Write the
general term in the product as $e^{\log(1-a_ja_{k-j})}$. Expand the
log to $-a_ja_{k-j} + O \left( (a_ja_{k-j})^2 \right)$. Writing $a_j=
\left( \frac{1}{j}+O(\frac{1}{j^2}) \right)$ and multiplying out, we
must sum \[ \sum_{j=1}^{\lfloor \frac{k-1}{2} \rfloor} \frac{1}{j}
\frac{1}{k-j} , \sum_{j=1}^{\lfloor \frac{k-1}{2} \rfloor}
\frac{1}{j^2} \frac{1}{k-j} , \sum_{j=1}^{\lfloor \frac{k-1}{2}
\rfloor} \frac{1}{(k-j)^2} \frac{1}{k} , \sum_{j=1}^{\lfloor
\frac{k-1}{2} \rfloor} \frac{1}{(k-j)^2} \frac{1}{j^2}.\] Writing
$\frac{1}{j} \frac{1}{k-j} = \frac{1}{k} \left( \frac{1}{j} +
\frac{1}{k-j} \right),$ the first sum is $\frac{\log(k)}{k}+ O \left(
\frac{1}{k} \right)$. The second sum is $O \left( \frac{1}{k}
\right)$, the third sum is $O \left( \frac{\log(k)}{k^2} \right)$ and
the fourth is $O \left( \frac{1}{k^2} \right)$. Thus $-a_j a_{k-j}$
summed over $1 \leq j \leq (k-1)/2$ is $- \frac{\log(k)}{k} + O \left(
\frac{1}{k} \right)$. The sum of $(a_ja_{k-j})^2$ is of lower order by
similar arguments. In all, the lower bound on $\lim_{n \rightarrow
\infty} P(F_k(w)>0)$ is \[ 1-e^{-\frac{\log(k)}{k} + O \left(
\frac{1}{k} \right)} = \frac{\log(k)}{k} + O \left( \frac{1}{k}
\right).\] \end{proof}
To close this section, we give a combinatorial interpretation for
the moments of the numbers $F_k(w)$ of Theorem \ref{klim} above.
This involves
the ``top k to random'' shuffle, which removes k cards from the
top of the deck, and randomly
interleaves them with the other n-k cards (choosing one of the ${n
\choose k}$ possible interleavings uniformly at random).
\begin{prop} \label{shufeig}
\begin{enumerate}
\item The eigenvalues of the top k to random shuffle are the
numbers $\left\{ \frac{F_k(w)}{{n \choose k}} \right \}$,
where $w$ ranges over $S_n$.
\item For all values of $n,k,r$, the rth moment of the distribution of
fixed k-sets is equal to ${n \choose k}^r$ multiplied by the chance
that the top k to random shuffle is at the identity after r steps.
\end{enumerate}
\end{prop}
\begin{proof} Note that the top k to random shuffle is the
inverse of the move k to front shuffle, which picks k cards at
random and moves them to the front of
the deck, preserving their relative order. Hence their transition
matrices are transposes, so have the same eigenvalues. The move k to
front shuffle is a special case of the theory of random walk on
chambers of hyperplane arrangements developed in \cite{BHR}. The
arrangement is the braid arrangement and one assigns weight
$\frac{1}{{n \choose k}}$ to each of the block ordered partitions
where the first block has size $k$ and the second block has size
$n-k$. The result now follows from Corollary 2.2 of \cite{BHR}, which
determined the eigenvalues of such hyperplane walks.
For the second assertion, let $M$ be the transition matrix for the top k to
random shuffle.
Clearly $Tr(M^r)$ (the trace of $M^r$) is equal to $n!$ multiplied
by the chance that the top k to
random shuffle is at the identity after r steps. The first part
gives that \[ Tr(M^r) = \sum_{\
w \in S_n} \left( \frac{F_k(w)}{{n \choose k}} \right)^r, \]
which implies the result. \end{proof}
As an example of part 2 of Proposition \ref{shufeig}, the chance of
being at the identity after 1 step is $\frac{1}{{n \choose k}}$ and
the chance of being at the identity after 2 steps is $\frac{k+1}{{n
\choose k}^2}$, giving another proof that $E(F_k(w))=1$ and
$E(F_k^2(w))=k+1$.
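These two-step probabilities can be checked by enumerating the support of the shuffle: the walk is driven by the uniform measure on the $\binom{n}{k}$ interleaving permutations, so the chance of returning to the identity after two steps is $\#\{w \in S : w^{-1} \in S\}/\binom{n}{k}^2$, which by the computation above should equal $(k+1)/\binom{n}{k}^2$. A small sketch (illustrative, not from the paper):

```python
from itertools import combinations
from math import comb

def support(n, k):
    """All permutations realized by one top-k-to-random shuffle: choose the
    positions of the k top cards; both groups keep their relative order.
    A permutation is stored as the tuple (card at position 0, 1, ..., n-1)."""
    perms = set()
    for pos in combinations(range(n), k):
        top, rest, arr = list(range(k)), list(range(k, n)), []
        for i in range(n):
            arr.append(top.pop(0) if i in pos else rest.pop(0))
        perms.add(tuple(arr))
    return perms

def inverse(w):
    inv = [0] * len(w)
    for i, x in enumerate(w):
        inv[x] = i
    return tuple(inv)

for n, k in [(4, 1), (5, 2), (6, 3)]:
    S = support(n, k)
    assert len(S) == comb(n, k)
    # two-step return probability = returns / comb(n, k)**2
    returns = sum(1 for w in S if inverse(w) in S)
    assert returns == k + 1
```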
{\it Remarks}
\begin{enumerate}
\item As in the proof of Theorem \ref{klim}, the moments of $F_k(w)$
can be expressed exactly in terms of the moments of Poisson random
variables, provided that $n$ is sufficiently large.
\item There is a random walk on the irreducible representations of $S_n$
which has the same eigenvalues as the top k to random walk, but with
different multiplicities. Unlike the top k to random walk, this walk is
reversible with respect to its stationary distribution, so that spectral
techniques (and hence information about the distribution of fixed points)
can be used to analyze its convergence rate. For details, applications,
and a generalization to other actions, see \cite{F1}, \cite{F2}.
\end{enumerate}
\section{Fixed Points on Matchings} \label{matchings}
Let $M_{2n}$ be the set of perfect matchings on $2n$ points. Thus, if
$2n = 4$, then $M_{2n} = \{(1,2)(3,4),\ (1,3)(2,4),\ (1,4)(2,3)\}$. It is well
known that
$$
|M_{2n}| = (2n - 1)!! = (2n - 1)(2n - 3) \cdots (3) (1).
$$
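The double factorial count is easy to confirm by direct enumeration; the sketch below (the helper names are ours, not from the paper) lists all perfect matchings recursively by pairing the first point with each possible partner.

```python
def perfect_matchings(points):
    """Yield all perfect matchings of a list of points (even length)."""
    if not points:
        yield ()
        return
    first, rest = points[0], points[1:]
    for i, partner in enumerate(rest):
        for sub in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield ((first, partner),) + sub

def double_factorial(m):
    """m!! = m (m-2) (m-4) ... down to 1 or 2."""
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

counts = [sum(1 for _ in perfect_matchings(list(range(2 * n))))
          for n in range(1, 6)]
print(counts)  # [1, 3, 15, 105, 945], i.e. (2n-1)!! for n = 1..5
assert counts == [double_factorial(2 * n - 1) for n in range(1, 6)]
```

The recursion mirrors the standard proof: point $1$ has $2n-1$ possible partners, leaving a matching problem on $2n-2$ points.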
The literature on perfect matchings is enormous. See Lov\'{a}sz and
Plummer \cite{LoPl} for a book length treatment. Connections with
phylogenetic trees and further references are in \cite{DH,DH2}. As
explained above, the symmetric group $S_{2n}$ acts primitively on
$M_{2n}$. Results of Luczak-Pyber \cite{LP} imply that, in this action,
almost every permutation is a derangement. In this section we give
sharp asymptotic rates for this last result. We show that the
proportion of derangements in $S_{2n}$ is
\refstepcounter{thm}
\begin{equation} \label{eqn5.1}
1 - {A (1) \over \sqrt{\pi n}} + o \Bigg( {1 \over \sqrt{n}} \Bigg) ,
\quad A(1) = \prod_{i=1}^\infty \cosh (1 / (2i-1)).
\end{equation}
Similar asymptotics are given for the proportion of permutations
with $j > 0$ fixed points. This is zero if $j$ is even. For odd $j$,
it is
\refstepcounter{thm}
\begin{equation} \label{eqn5.2}
{C(j) B(1) \over \sqrt{\pi n}} + o \Bigg( {1 \over \sqrt{n}} \Bigg), \quad
B(1) = \prod_{i=1}^\infty \Bigg( 1 + {1 \over 2i} \Bigg) e^{-1/(2i)}
\end{equation}
where the $C(j)$ are explicit rational numbers. In particular
\refstepcounter{thm}
\begin{equation} \label{eqn5.3}
C(1) = {3 \over 2}, \ C(3) = {1 \over 4}, \ C(5) = {27 \over 400}, \
C(7) = {127 \over 2352}.
\end{equation}
The argument proceeds by finding explicit closed forms for
generating functions followed by standard asymptotics. It is well
known that the rank of this action is $p(n)$, the number of
partitions of $n$. Thus (\ref{eqn5.1}) is a big improvement over
the upper bound given in Theorem \ref{basic}.
For $w \in S_{2n}$, let $a_i (w)$ be the number of $i$-cycles in the
cycle decomposition. Let $F(w)$ be the
number of fixed points of $w$ acting on $M_{2n}$. The following
proposition determines $F(w)$ in terms of $a_i(w), \ 1 \leq i \leq
2n$.
\begin{prop} \label{numfix} The number of fixed points, $F(w)$ of $w
\in S_{2n}$ on $M_{2n}$ is
$$
F(w) = \prod_{i=1}^{2n} F_i (a_i (w))
$$
with
$$
F_{2i-1} (a) = \begin{cases} 1 & \text{if } a = 0 \\ 0 & \text{if } a
\ \text{is odd} \\ (a-1)!!\,(2i-1)^{a/2} & \text{if } a > 0 \ \text{is
even} \end{cases}
$$
$$
F_{2i} (a) = 1+ \sum_{k=1}^{\lfloor a/2 \rfloor} (2k-1)!! {a \choose 2k}
(2i)^k.
$$
In particular,
\refstepcounter{thm}
\begin{equation}
F(w) \not = 0 \ \mbox{if and only if} \ a_{2i-1} (w) \ \mbox{is
even for all} \ i
\end{equation}
\refstepcounter{thm}
\begin{equation}
F(w) \ \mbox{does not take on non-zero even values}.
\end{equation}
\end{prop}
\begin{proof} Consider first the cycles of $w$ of length $2i-1$. If
$a_{2i-1}$ is even, the cycles may be matched in pairs, and each such
pair of $(2i-1)$-cycles can be broken into matched two
element subsets by first pairing the lowest element of the first cycle
with any of the $2i-1$ elements of the second cycle; the rest is then
determined by the cyclic action. For example, if the two three-cycles
$(123) (456)$ appear, the matched pairs $(14)(25)(36)$ are fixed, as
are $(15)(26)(34)$ and $(16)(24)(35)$. Thus $F_3 (2) = 3$. If
$a_{2i-1}$ is odd, some cycle cannot be matched and $F_{2i-1}(a_{2i-1})
= 0$.
Consider next the cycles of $w$ of length $2i$. Now, there are two
ways to create parts of a fixed perfect matching. First, some of these
cycles can be paired and, for each pair, the previous construction can
be used. Second, for each unpaired cycle, elements $i$ apart can be
paired. For example, from $(1234)$ the pairing $(13)(24)$ may be
formed. The sum in $F_{2i} (a)$ enumerates these configurations
according to the number $k$ of matched pairs of cycles: choose the
$2k$ cycles to be paired in ${a \choose 2k}$ ways, pair them up in
$(2k-1)!!$ ways, and match each pair of cycles in $2i$ ways.
To see that $F(w)$ cannot take on non-zero even values, observe that
$F_{2i-1} (a)$ and $F_{2i}(a)$ only take on odd values if they are
non-zero. \end{proof}
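Proposition \ref{numfix} can be verified exhaustively for small decks; the following sketch (our own helper names, not from the paper) compares the product formula against a brute-force count of fixed matchings for every $w \in S_6$.

```python
from itertools import permutations
from math import comb

def matchings(points):
    """Yield perfect matchings of points as frozensets of 2-element frozensets."""
    if not points:
        yield frozenset()
        return
    first, rest = points[0], points[1:]
    for i, partner in enumerate(rest):
        for sub in matchings(rest[:i] + rest[i + 1:]):
            yield sub | {frozenset((first, partner))}

def cycle_counts(w):
    """Map cycle length -> number of cycles of that length in w."""
    seen, counts = [False] * len(w), {}
    for start in range(len(w)):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = w[j]
                length += 1
            counts[length] = counts.get(length, 0) + 1
    return counts

def dfact(m):
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

def F_formula(w):
    """F(w) via the product formula of the proposition."""
    result = 1
    for length, a in cycle_counts(w).items():
        if length % 2 == 1:  # odd lengths: zero unless the count a is even
            result *= 0 if a % 2 else dfact(a - 1) * length ** (a // 2)
        else:                # even lengths: pair cycles or self-match each
            result *= 1 + sum(dfact(2 * k - 1) * comb(a, 2 * k) * length ** k
                              for k in range(1, a // 2 + 1))
    return result

all_matchings = list(matchings(list(range(6))))  # the 15 matchings on 6 points
ok = all(
    F_formula(w) == sum(1 for m in all_matchings
                        if all(frozenset(w[x] for x in pair) in m for pair in m))
    for w in permutations(range(6)))
print(ok)  # True
```

For instance, the permutation $(123)(456)$ (as `(1, 2, 0, 4, 5, 3)` in 0-indexed form) gives $3$ on both sides, matching the example in the proof.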
Let $P_{2n} (j) = \frac{|\{w \in S_{2n} : F(w) = j\}|}{(2n)!}$.
For $j \geq 1$, let $g_j (t) = \sum_{n=0}^\infty
t^{2n} P_{2n} (j)$ and let $\bar g_0(t) = \sum_{n=0}^\infty t^{2n} (1 -
P_{2n} (0))$.
\begin{prop} \label{cycindex}
$$
\bar g_0(t) = {\displaystyle \prod_{i=1}^\infty
\cosh (t^{2i-1} / (2i-1)) \over \sqrt {1 - t^2}} \qquad
\mbox{for} \ 0 < t < 1.
$$ \end{prop}
\begin{proof} From Proposition \ref{numfix}, $w \in S_{2n}$ has $F(w)
\not = 0$ if and only if $a_{2i-1} (w)$ is even for all $i$. From
Shepp-Lloyd \cite{SL}, if $N$ is chosen in $\{0, 1, 2, \ldots\}$ with
$$
P (N = n) = (1 - t) t^n
$$ and then $w$ is chosen uniformly in $S_N$, the $a_i(w)$ are
independent Poisson random variables with parameter $t^i / i$
respectively. If $X$ is a Poisson($\lambda$) random variable, then
$P(X \ \mbox{is even}) = {1 \over 2} + {e^{-2 \lambda} \over 2}$. It
follows that
\begin{eqnarray*}
\sum_{n=0}^\infty (1 - t) t^{2n} (1 - P_{2n} (0)) & = &
\prod_{i=1}^\infty \Bigg( {1 + e^{-2 t^{2i-1} / (2i-1)} \over 2}
\Bigg)\\ & = & \prod_{i=1}^\infty e^{-t^{2i-1} / (2i-1)}
\prod_{i=1}^\infty \cosh \Bigg( {t^{2i-1} \over (2i-1)} \Bigg) \\
& = & \sqrt{{1 - t \over 1 + t}} \prod_{i=1}^\infty \cosh
(t^{2i-1} / (2i-1)). \end{eqnarray*}
Dividing both sides by $(1-t)$ gives the result. \end{proof}
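The identity can be tested with exact rational arithmetic: expand $\bar g_0(t)$ as a truncated power series and compare its coefficients with brute-force counts over $S_{2n}$ for small $n$. The sketch below does this; all series helpers are ours.

```python
from fractions import Fraction
from itertools import permutations
from math import comb, factorial

N = 6  # truncate power series at degree N

def mul(f, g):
    """Product of two truncated power series (lists of coefficients)."""
    h = [Fraction(0)] * (N + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j <= N and a and b:
                h[i + j] += a * b
    return h

# (1 - t^2)^(-1/2) = sum_n binom(2n,n) 4^(-n) t^(2n)
inv_sqrt = [Fraction(0)] * (N + 1)
for n in range(N // 2 + 1):
    inv_sqrt[2 * n] = Fraction(comb(2 * n, n), 4 ** n)

# prod_{i >= 1} cosh(t^(2i-1) / (2i-1)), truncated at degree N
series = [Fraction(0)] * (N + 1)
series[0] = Fraction(1)
for i in range(1, N + 1):
    d = 2 * i - 1
    if d > N:
        break
    cosh = [Fraction(0)] * (N + 1)
    m = 0
    while 2 * m * d <= N:
        cosh[2 * m * d] += Fraction(1, d ** (2 * m) * factorial(2 * m))
        m += 1
    series = mul(series, cosh)

g0_bar = mul(series, inv_sqrt)

def odd_cycle_counts_all_even(w):
    """True iff every a_{2i-1}(w) is even, i.e. F(w) != 0 by the proposition."""
    seen, counts = [False] * len(w), {}
    for start in range(len(w)):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = w[j]
                length += 1
            counts[length] = counts.get(length, 0) + 1
    return all(c % 2 == 0 for ln, c in counts.items() if ln % 2 == 1)

# 1 - P_{2n}(0) is the proportion of w in S_{2n} with every a_{2i-1}(w) even
for n in (1, 2, 3):
    good = sum(odd_cycle_counts_all_even(w) for w in permutations(range(2 * n)))
    assert g0_bar[2 * n] == Fraction(good, factorial(2 * n))
print([str(g0_bar[2 * n]) for n in (1, 2, 3)])  # ['1', '2/3', '26/45']
```

The coefficient $2/3$ at $2n=4$ matches the hand count: exactly $16$ of the $24$ elements of $S_4$ have an even number of fixed points and of $3$-cycles.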
\begin{cor} \label{asymcor}
As $n$ tends to infinity,
$$
1 - P_{2n} (0) \sim {\displaystyle \prod_{i=1}^\infty \cosh
(1/(2i-1)) \over \sqrt {\pi n}}.
$$ \end{cor}
\begin{proof} By Proposition \ref{cycindex}, $1-P_{2n}(0)$ is the
coefficient of $t^n$ in \[ \frac{\prod_{i=1}^\infty
\cosh (t^{(2i-1)/2} / (2i-1))}{ \sqrt {1 - t}}.\] It is
straightforward to check that the numerator is analytic near $t=1$,
so the result follows from Darboux's theorem (\cite{O}, Theorem
11.7). \end{proof}
Proposition \ref{numfix} implies that the event $F(w) = j$ is
contained in the event $\{a_{2i-1} (w)$ is even for all $i$ and
$a_{2i}(w) \in \{0, 1\} \ \mbox{for} \ 2i > j\}$. This is evidently
complicated for large $j$.
We prove
\begin{prop} \label{genfunc} For positive odd $j$,
$$g_j (t) = {P_j(t) \displaystyle \prod_{i=1}^{\infty} \Big(1 +
{t^{2i} \over 2i} \Big)
e^{-t^{2i} / 2i} \over \sqrt {1 - t^2}}
$$
for $P_j(t)$ an explicit
rational function in $t$ with positive rational coefficients. In
particular,
$$
P_1 (t) = \Bigg(1 + {t^2 \over 2} \Bigg), P_3 (t) = \Bigg( \frac{t^4}{6} +
\frac{t^6}{18} + \frac{t^8}{36} \Bigg)
$$
$$P_5(t) =
{\frac{1}{2} \Big({1 + \frac{t^2}{2}} \Big) \Big( {t^4 \over 4} \Big)^2
\over 1 +
{t^{4} \over 4}} + \frac{1}{2} \left(1+ \frac{t^2}{2} \right)
\left( \frac{t^5}{5} \right)^2
$$
$$
P_7 (t) = { \frac{1}{2} \Big( 1 + \frac{t^2}{2} \Big) \over 1 + {t^6 \over 6}}
\Big({t^6 \over 6} \Big)^2 + \frac{1}{6} \Big( {t^2 \over 2} \Big)^3 +
\frac{1}{2} \left(1+\frac{t^2}{2} \right) \left( \frac{t^7}{7} \right)^2.
$$ \end{prop}
\begin{proof} Consider first the case of $j=1$. From
Proposition \ref{numfix}, $F(w) = 1$ if and only if $a_{2i-1} (w) = 0$ for $i
\geq 2$, $a_1 (w) \in \{0, 2\}$ and $a_{2i} (w) \in \{0, 1\}$ for all
$i \geq 1$. For example, if $2n = 10$ and $w = (1) (2) (3456789 10)$,
the unique fixed matching is $(1\ 2) (3 \ 7) (4\ 8)(5\ 9)(6\ 10)$. From the
cycle index argument used in Proposition \ref{cycindex},
\begin{eqnarray*}
& & \sum_{n=0}^{\infty} (1 - t) t^{2n} P_{2n} (1)\\
& = & e^{-t} \Big( 1 + {t^2 \over
2} \Big) \displaystyle \prod_{i=2}^\infty e^{-t^{2i-1} / (2i-1)}
\prod_{i=1}^{\infty} e^{-t^{2i}/2i} \Big( 1 + {t^{2i} \over 2i} \Big) \\
& = & (1 - t) \Big(1 + {t^2 \over 2} \Big) \displaystyle
\prod_{i=1}^\infty \Big( 1 + {t^{2i} \over 2i} \Big)\\ & = & {(1-t)\Big( 1 +
{t^2 \over 2} \Big) \displaystyle \prod_{i=1}^\infty \Big( 1 +
{t^{2i} \over 2i} \Big) e^{-t^{2i} / 2i} \over \sqrt {1 - t^2}}.
\end{eqnarray*}
Dividing both sides by $(1-t)$ yields the stated formula for $g_1(t)$.
The arguments for the other parts are similar. In particular, $F(w)=3$
iff one of the following holds: \begin{itemize}
\item $a_1(w) = 4$, $a_{2i-1}(w) = 0$ for $i \geq 2$, and
$a_{2i}(w) \in \{0, 1\}$ for all $i$
\item $a_1(w) \in \{0, 2\}$, $a_2(w) = 2$, $a_{2i-1}(w) = 0$ for
$i \geq 2$, and $a_{2i}(w) \in \{0, 1\}$ for $i \geq 2$
\item $a_1(w) \in \{0, 2\}$, $a_3(w) = 2$, $a_{2i-1}(w) = 0$ for
$i \geq 3$, and $a_{2i}(w) \in \{0, 1\}$ for all $i$
\end{itemize} Similarly, $F(w)=5$ iff one of the following holds:
\begin{itemize}
\item $a_4(w) = 2$, $a_1(w) \in \{0, 2\}$, and otherwise
$a_{2i-1}(w) = 0$ and $a_{2i}(w) \in \{0, 1\}$
\item $a_5(w)=2$, $a_1(w) \in \{0,2\}$, and otherwise $a_{2i-1}(w)=0$
and $a_{2i}(w) \in \{0,1\}$
\end{itemize} Finally, $F(w)=7$ iff one of the following holds:
\begin{itemize}
\item $a_1(w) \in \{0, 2\}$, either $a_6(w) = 2$ or $a_2(w) = 3$,
and otherwise $a_{2i-1}(w) = 0$ and $a_{2i}(w) \in \{0, 1\}$
\item $a_7(w)=2$, $a_1(w) \in \{0,2\}$, and otherwise $a_{2i-1}(w)=0$
and $a_{2i}(w) \in \{0,1\}$
\end{itemize} Further details are omitted. \end{proof}
The asymptotics in (\ref{eqn5.2}) follow from Proposition \ref{genfunc},
by the same method used to prove (\ref{eqn5.1}) in Corollary \ref{asymcor}.
\section{More imprimitive subgroups} \label{impriv}
Section \ref{matchings} studied fixed points on matchings, or
equivalently fixed points of $S_{2n}$ on the left cosets of
$S_2 \wr S_n$. This section uses a quite different approach to
study derangements of $S_{an}$ on the left cosets of $S_a \wr
S_n$, where $a \geq 2$ is constant. It is proved that the
proportion of elements of $S_{an}$ which fix at least one left
coset of $S_a \wr S_n$ (or equivalently are conjugate to an
element of $S_a \wr S_n$ or equivalently fix a system of $n$ blocks of
size $a$) is
at most the coefficient of $u^n$
in \[ \exp \left( \sum_{k \geq 1} \frac{u^k}{a!}
(\frac{1}{k})(\frac{1}{k}+1) \cdots (\frac{1}{k}+a-1) \right),
\] and that this coefficient is asymptotic to $C_a
n^{\frac{1}{a}-1}$ as $n \rightarrow \infty$, where $C_a$ is
an explicit constant depending on $a$ (defined in Theorem
\ref{genfunction} below). In the special case of matchings
($a=2$), this becomes $\frac{ e^{\frac{\pi^2}{12}} }
{\sqrt{\pi n}}$, which is extremely close to the true
asymptotics obtained in Section \ref{matchings}. Moreover,
this generating function will be crucially applied when we sharpen a
result of Luczak and Pyber in Section \ref{prim}.
The method of proof is straightforward. Clearly the number of
permutations in $S_{an}$ conjugate to an element of $S_a \wr
S_n$ is upper bounded by the sum over conjugacy classes $C$ of
$S_a \wr S_n$ of the size of the $S_{an}$-conjugacy class containing
$C$. Unfortunately this upper bound is hard to compute, but we
show it to be smaller than something which can be exactly
computed as a coefficient in a generating function. This will
prove the result.
From Section 4.2 of \cite{JK}, there is the following useful
description of conjugacy classes of $G \wr S_n$ where $G$ is a
finite group. The classes correspond to matrices $M$ with
natural number entries $M_{i,k}$, rows indexed by the
conjugacy classes of $G$, columns indexed by the numbers
$1,2,\cdots,n$, and satisfying the condition that $\sum_{i,k}
k M_{i,k} = n$. More precisely, given an element
$(g_1,\cdots,g_n; \pi)$ in $G \wr S_n$, for each k-cycle of
$\pi$ one multiplies the $g$'s whose subscripts are the
elements of the cycle in the order specified by the
cycle. Taking the conjugacy class in $G$ of the resulting
product contributes 1 to the matrix entry whose row
corresponds to this conjugacy class in $G$ and whose column is
$k$.
The remainder of this section specializes to $G=S_a$. Since
conjugacy classes of $S_a$ correspond to partitions $\lambda$
of $a$, the matrix entries are denoted by $M_{\lambda,k}$. We
write $|\lambda|= a$ if $\lambda$ is a partition of $a$. Given
a partition $\lambda$, let $n_i(\lambda)$ denote the number of
parts of size $i$ of $\lambda$.
\begin{prop} \label{param} Let
the conjugacy class $C$ of $S_a \wr S_n$ correspond to the matrix
$(M_{\lambda,k})$ where $\lambda$ is a partition of $a$. Then the
proportion of elements of $S_{an}$ conjugate to an element of $C$ is
at most \[ \frac{1}{\prod_{k} \prod_{|\lambda|= a} M_{\lambda,k}! [
\prod_i (ik)^{n_i(\lambda)} n_i(\lambda)!]^{M_{\lambda,k}}}.\]
\end{prop}
\begin{proof} Observe that the number of cycles of length $j$ of an
element of $C$ is equal
to \[ \sum_{k|j} \sum_{|\lambda|=a} M_{\lambda,k} n_{j/k}(\lambda).\]
To see this, note
that $S_a \wr S_n$ can be viewed concretely as a permutation of $an$
symbols by letting it
act on an array of $n$ rows of length $a$, with $S_a$ permuting within
each row and $S_n$ permuting among the rows.
Hence by a well known formula for conjugacy class sizes in a
symmetric group, the proportion of elements of $S_{an}$ conjugate to an
element of $C$ is equal to
\begin{eqnarray*}
& & \frac{1}{\prod_j j^{\sum_{k|j} \sum_{|\lambda|= a} M_{\lambda,k}
n_{j/k}(\lambda)} [\sum_{k|j} \sum_{|\lambda|= a} M_{\lambda,k}
n_{j/k}(\lambda)] !}\\ & \leq & \frac{1}{\prod_j j^{\sum_{k|j}
\sum_{|\lambda|= a} M_{\lambda,k} n_{j/k}(\lambda)} \prod_{k|j}
\prod_{|\lambda|= a} M_{\lambda,k} n_{j/k}(\lambda) !}\\ & \leq &
\frac{1}{\prod_j j^{\sum_{k|j} \sum_{|\lambda|= a} M_{\lambda,k}
n_{j/k}(\lambda)} \prod_{k|j} \prod_{|\lambda|= a} [ M_{\lambda,k}!
n_{j/k}(\lambda)!^{M_{\lambda,k}}]}\\ & = & \frac{1}{\prod_k
\prod_{|\lambda|= a} M_{\lambda,k}! [\prod_i (ik)^{n_i(\lambda)}
n_i(\lambda)!]^{M_{\lambda,k}}},
\end{eqnarray*} as desired. The first inequality uses the fact
that $(x_1+\cdots+x_n)! \geq x_1! \cdots x_n!$. The second
inequality uses that $(xy)! \geq x! y!^x$ for $x,y \geq 1$ integers,
which is true since \[ (xy)! = \prod_{i=1}^x \prod_{j=0}^{y-1} (i+jx)
\geq \prod_{i=1}^x \prod_{j=0}^{y-1} i(1+j) = (x!)^y (y!)^x \geq x! (y!)^x.\]
The final equality used the change of variables $i=j/k$. \end{proof}
To proceed further, the next lemma is useful.
\begin{lemma} \label{numcycles} \[ \sum_{|\lambda|= a}
\frac{1}{\prod_i (ik)^{n_i(\lambda)} n_i(\lambda)!} = \frac{(\frac{1}{k})
(\frac{1}{k}+1) \cdots (\frac{1}{k}+a-1)}{a!} .\] \end{lemma}
\begin{proof} Let $c(\pi)$ denote the number of cycles of a permutation $\pi$.
Since the number of permutations in $S_a$ with $n_i$ cycles of length $i$
is $\frac{a!}{\prod_i i^{n_i} n_i!}$, the left hand side is equal to
\[ \frac{1}{a!} \sum_{\pi \in S_a} k^{-c(\pi)} .\] It is well known and
easily proved by induction that \[ \sum_{\pi \in S_a} x^{c(\pi)} =
x(x+1) \cdots (x+a-1).\] \end{proof}
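The underlying identity $\sum_{\pi \in S_a} x^{c(\pi)} = x(x+1)\cdots(x+a-1)$, which yields the lemma after dividing by $a!$, is easy to check by brute force at $x = 1/k$; the helper name below is ours.

```python
from fractions import Fraction
from itertools import permutations

def num_cycles(w):
    """Number of cycles in the permutation w (including fixed points)."""
    seen, count = [False] * len(w), 0
    for start in range(len(w)):
        if not seen[start]:
            count += 1
            j = start
            while not seen[j]:
                seen[j] = True
                j = w[j]
    return count

ok = True
for a in range(1, 6):
    for k in range(1, 5):
        x = Fraction(1, k)
        lhs = sum(x ** num_cycles(w) for w in permutations(range(a)))
        rhs = Fraction(1)
        for j in range(a):
            rhs *= x + j
        ok = ok and lhs == rhs
print(ok)  # True
```

Exact rational arithmetic (`Fraction`) avoids any floating-point ambiguity in comparing the two sides.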
Theorem \ref{genfunction} applies the preceding results
to obtain a useful generating function.
\begin{theorem} \label{genfunction}
\begin{enumerate}
\item The proportion of elements in $S_{an}$ conjugate to an element
of $S_a \wr S_n$ is at most the coefficient of $u^n$ in \[ \exp \left(\sum_{k
\geq 1} \frac{u^k}{a!} (\frac{1}{k})(\frac{1}{k}+1) \cdots
(\frac{1}{k}+a-1) \right).\]
\item For $a$ fixed and $n \rightarrow \infty$, the coefficient of
$u^n$ in this generating function is asymptotic to \[
\frac{e^{\sum_{r=2}^a p(a,r) \zeta(r)}}{\Gamma(1/a)}
n^{\frac{1}{a}-1}\] where $p(a,r)$ is the proportion of permutations
in $S_a$ with exactly r cycles, $\zeta$ is the Riemann zeta function,
and $\Gamma$ is the gamma function.
\end{enumerate}
\end{theorem}
\begin{proof} Proposition \ref{param} implies that the sought
proportion is at most the coefficient of $u^n$ in \begin{eqnarray*}
& & \prod_{k} \prod_{|\lambda|=a} \sum_{M_{\lambda,k} \geq 0} \frac{u^{k
M_{\lambda,k}}}{ M_{\lambda,k}! [\prod_i (ik)^{n_i(\lambda)}
n_i(\lambda)!]^{M_{\lambda,k}}}\\ & = & \prod_k
\prod_{|\lambda|=a} \exp \left(\frac{u^k}{\prod_i (ik)^{n_i(\lambda)}
n_i(\lambda)!} \right)\\ & = & \prod_k \exp
\left(\sum_{|\lambda|=a} \frac{u^k}{\prod_i (ik)^{n_i(\lambda)} n_i(\lambda)!}
\right)\\ & = & \exp \left(\sum_{k \geq 1} \frac{u^k}{a!}
(\frac{1}{k})(\frac{1}{k}+1) \cdots (\frac{1}{k}+a-1) \right).
\end{eqnarray*} The last equality used Lemma \ref{numcycles}.
For the second assertion, one uses Darboux's lemma (see
\cite{O} for an exposition), which gives the asymptotics of
functions of the form $(1-u)^{\alpha} g(u)$ where $g(u)$ is
analytic near 1, $g(1) \neq 0$, and $\alpha \not \in \{0,1,2,
\cdots \}$. More precisely it gives that the coefficient of
$u^n$ in $(1-u)^{\alpha} g(u)$ is asymptotic to
$\frac{g(1)}{\Gamma(-\alpha)} n^{-\alpha-1}$. By Lemma
\ref{numcycles}, \begin{eqnarray*} & & \exp \left( \sum_{k
\geq 1} \frac{u^k}{a!} (\frac{1}{k})(\frac{1}{k}+1) \cdots
(\frac{1}{k}+a-1) \right)\\ & = & \exp \left( \sum_{k \geq 1}
\frac{u^k}{ak} + \sum_{k \geq 1} u^k \sum_{r=2}^a p(a,r)
k^{-r} \right)\\ & = & (1-u)^{-\frac{1}{a}} \cdot \exp
\left(\sum_{r=2}^a p(a,r) \sum_{k \geq 1} \frac{u^k}{k^r}
\right). \end{eqnarray*} Taking $g(u) = \exp
\left(\sum_{r=2}^a p(a,r) \sum_{k \geq 1} \frac{u^k}{k^r}
\right)$ proves the result. \end{proof}
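Part 1 of the theorem can be checked exactly in tiny cases. The sketch below (all helper names ours) computes the exact proportion of $S_{an}$ conjugate into $S_a \wr S_n$ by enumerating the wreath product's cycle types, and compares it with the series coefficient, obtained via the standard recurrence $m\,g_m = \sum_{k} k\,c_k\,g_{m-k}$ for exponentiating a power series $\sum_k c_k u^k$.

```python
from fractions import Fraction
from itertools import permutations, product
from math import factorial

def cycle_type(w):
    """Sorted tuple of cycle lengths of the permutation w."""
    seen, lengths = [False] * len(w), []
    for start in range(len(w)):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = w[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

def upper_bound(a, n):
    """Coefficient of u^n in exp(sum_k u^k/a! (1/k)(1/k+1)...(1/k+a-1))."""
    c = [Fraction(0)] * (n + 1)
    for k in range(1, n + 1):
        term = Fraction(1, factorial(a))
        for j in range(a):
            term *= Fraction(1, k) + j
        c[k] = term
    g = [Fraction(0)] * (n + 1)
    g[0] = Fraction(1)
    for m in range(1, n + 1):  # exp via g' = f' g
        g[m] = sum(k * c[k] * g[m - k] for k in range(1, m + 1)) / m
    return g[n]

def wreath_proportion(a, n):
    """Exact proportion of S_{an} conjugate to an element of S_a wr S_n."""
    types = set()
    for pi in permutations(range(n)):
        for gs in product(permutations(range(a)), repeat=n):
            # act on an n x a array: permute within rows, then permute rows
            w = [0] * (a * n)
            for r in range(n):
                for col in range(a):
                    w[r * a + col] = pi[r] * a + gs[r][col]
            types.add(cycle_type(w))
    hits = sum(1 for w in permutations(range(a * n)) if cycle_type(w) in types)
    return Fraction(hits, factorial(a * n))

for a, n in [(2, 2), (2, 3), (3, 2)]:
    assert wreath_proportion(a, n) <= upper_bound(a, n)
print(wreath_proportion(2, 2), upper_bound(2, 2))  # 2/3 7/8
```

For $a=n=2$ the exact proportion is $2/3$ (the $16$ elements of $S_4$ with cycle type occurring in a dihedral subgroup of order $8$) against the bound $7/8$, illustrating the slack in the class-size estimate.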
{\it Remark} The upper bound in Theorem \ref{genfunction} is not sharp.
In fact, when $n=2$ it does not approach 0 as $a \rightarrow \infty$,
whereas the true proportion must approach 0 by Theorem \ref{theC}.
However, by part 2
of Theorem \ref{genfunction}, the bound is useful for $a$ fixed and $n$
growing, and it will be crucially applied in Section \ref{prim}
in the case $a=n$, with both growing.
\section{Primitive subgroups} \label{prim}
A main goal of this section is to prove that the proportion of
elements of $S_n$ which belong to a primitive subgroup not containing
$A_n$ is at most $O(n^{-2/3+\alpha})$ for any $\alpha>0$. This
improves on the bound $O(n^{-1/2+\alpha})$ in \cite{LP}, which was
used in proving Theorem \ref{theB} in Section \ref{Onan}. We conjecture
that this can in fact be replaced by $O(n^{-1})$ (and the examples
with $n = (q^d-1)/(q-1)$ with the subgroup
containing $PGL(d,q)$ or $n=p^d$ with subgroup $AGL(d,p)$
show that in general one can do no better).
The minimal degree of a permutation group is defined as the least
number of points moved by a nontrivial element. The first step is to
classify the degree $n$ primitive permutation groups with minimal
degree at most $n^{2/3}$. We note that Babai \cite{babai} gave an
elegant proof (not requiring the classification of finite simple
groups) that there are no primitive permutation groups of degree $n$
other than $A_n$ or $S_n$ with minimal degree at most $n^{1/2}$.
\begin{theorem} \label{bobemailprim} Let $G$ be a primitive permutation
group of degree $n$.
Assume that there is a nontrivial $g \in G$ moving at most $n^{2/3}$ points.
Then one of the following holds:
\begin{enumerate}
\item $G=A_n$ or $S_n$;
\item $G=S_m$ or $A_m$ with $m \ge 5$ and
$n = \binom{m}{2}$ (acting on subsets of
size $2$) ; or
\item $A_m \times A_m < G \le S_m \wr S_2$ with $m \ge 4$
and $n=m^2$
(preserving a product structure).
\end{enumerate}
If there is a nontrivial $g \in G$ moving fewer than $n^{1/2}$
points, then $G=A_n$ or $S_n$.
\end{theorem}
\begin{proof} First note that the minimal degree in (2)
is $2(m-2)$ and in (3) is $2n^{1/2}$. In particular, aside from
(1), the minimal degree is always at least $n^{1/2}$. Thus,
the last statement follows from the first part.
It follows by the main result of \cite{GM} that if
there is a $g \in G$ moving fewer than $n/2$ points, then one of the
following holds:
(a) $G$ is almost simple with socle (the subgroup generated by the
minimal normal subgroups) $A_m$ and $n = \binom{m}{k}$ with the action
on subsets of size $k < m/2$;
(b) $n=m^t$, with $t > 1$ and $m \ge 5$, $G$ has a unique minimal normal
subgroup $N = L_1 \times \ldots \times L_t$, where the $L_i$ are
isomorphic copies of a simple group $L$, and $G$
preserves a product structure -- i.e.\ if $\Omega =\{1, \ldots, n\}$,
then as $G$-sets, $\Omega \cong X^t$ where $m=|X|$, and $G \le S_m \wr S_t$
acts on $X^t$ by acting on each coordinate and permuting the
coordinates.
Note that $n/2 \geq n^{2/3}$ as long as $n \geq 8$. If $n<8$, then $G$
contains an element moving at most $3$ points, i.e. either a
transposition or a $3$-cycle, and so contains $A_n$ (Theorem 3.3A in
\cite{DxM}).
Consider (a) above. If $k=1$, then $(1)$ holds. If $3 \le k < m/2$,
then it is an easy
exercise to see that the element of $S_m$ moving the fewest $k$-sets is
a transposition. The number of $k$-sets moved is
$2 \binom{m-2}{k-1}$. We claim that this is greater than $n^{2/3}$.
Indeed, the sought inequality is equivalent to checking that
$\frac{2k(m-k)}{m(m-1)} > {m \choose k}^{-1/3}$.
The worst case is clearly $k=3$, which is checked by
taking cubes. This settles the case $3 \le k < m/2$, and if $k=2$,
we are in case (2).
Now consider (b) above.
Suppose that $t \geq 3$. Then if $g \in S_m
\times \cdots \times S_m$ is nontrivial, it moves at least
$2m^{t-1}>n^{2/3}$ points. If $g \in S_m \wr S_t$ and is not in
$S_m \times \cdots \times S_m$, then up to conjugacy we may write
$g=(g_1,\cdots,g_t;\sigma)$ where say $\sigma$ has an orbit
$\{ 1, \ldots, s\}$ with $s > 1$. Viewing
our set as $A \times B$ with $A$ being the first $s$ coordinates, we
see that $g$ fixes at most $m$ points on $A$ (since there is at most one
$g$ fixed point with a given coordinate) and so on the whole space,
$g$ fixes at most $m^{t-s+1} \leq m^{t-1}$ points and so moves at
least $m^t - m^{t-1}$ points. Since $t \ge 3$, this is greater than
$n^{2/3}$. Summarizing, we have shown that in case (b), $t \geq 3$
leads to a contradiction.
So finally consider (b) with $t=2$. We claim that $L$ must be
$A_m$. Enlarging the group slightly, we may assume that $G = S \wr
S_2$ where $L \le S \le Aut(L)$ and $S$ is primitive of degree $m$.
If $g \notin S \times S$,
then arguing as in the $t \geq 3$ case shows that
$g$ moves at least $m^2-m$ points. This is greater than $m^{4/3}=n^{2/3}$
since $m \ge 5$, a contradiction. So write
$g = (g_1, g_2) \in S \times S$ with say $g_1 \ne 1$.
If $g_1$ moves at least $d$ points, then $g$ moves at least $dm$ points.
This is greater than $n^{2/3}$ unless $d \le m^{1/3}$.
By the last statement of the theorem, applied to the degree $m$
primitive group $S$, it follows that $S$ contains $A_m$, so $L=A_m$
and (3) holds.
\end{proof}
Next, we focus on Case 2 of Theorem \ref{bobemailprim}.
\begin{lemma} \label{cyc} Let $S_m$ be viewed as a subgroup of
$S_{{m \choose 2}}$ using its
action on 2-sets of $\{1,\cdots,m\}$. For $w \in S_m$, let $A_i(w)$
denote the number of cycles of $w$ of length $i$ in its usual action
on $\{1,\cdots,m\}$. The total number of orbits of $w$ on 2-sets
$\{j,k\}$ of symbols which are in a common cycle of $w$ is \[
\frac{m}{2} - \sum_{i \ odd} \frac{A_i(w)}{2}.\] \end{lemma}
\begin{proof} First suppose that $w$ is a single cycle of length $i \geq 2$.
If $i$
is odd, then all orbits of $w$ on pairs of symbols in the $i$-cycle
have length $i$, so the total number of orbits is $\frac{{i
\choose 2}}{i} = \frac{i-1}{2}$. If $i$ is even, there is 1 orbit of size
$\frac{i}{2}$ and all other orbits have size $i$, giving a total of
$\frac{i}{2}$ orbits. Hence for general $w$, the total number of
orbits on pairs of symbols in a common cycle of $w$ is \begin{eqnarray*}
\sum_{i \
odd \atop i \geq 3} A_i(w) \frac{i-1}{2} + \sum_{i \ even} A_i(w)
\frac{i}{2} & = & \sum_{i \ odd \atop i \geq 1} A_i(w) \frac{i-1}{2} +
\sum_{i \ even} A_i(w) \frac{i}{2}\\
& = & \frac{m}{2} - \sum_{i \ odd} \frac{A_i(w)}{2}. \end{eqnarray*}
\end{proof}
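Lemma \ref{cyc} is easy to confirm exhaustively for small $m$; the sketch below (helper names ours) computes the orbits on 2-sets directly for every $w \in S_6$ and compares with the stated formula.

```python
from itertools import combinations, permutations

def cycles(w):
    """Cycles of the permutation w as lists of points."""
    seen, result = [False] * len(w), []
    for start in range(len(w)):
        if not seen[start]:
            cyc, j = [], start
            while not seen[j]:
                seen[j] = True
                cyc.append(j)
                j = w[j]
            result.append(cyc)
    return result

def lemma_holds(w):
    """Check: orbits of w on 2-sets within a common cycle = m/2 - (#odd cycles)/2."""
    m = len(w)
    cycle_of, odd = {}, 0
    for idx, cyc in enumerate(cycles(w)):
        odd += len(cyc) % 2
        for x in cyc:
            cycle_of[x] = idx
    pairs = {frozenset(p) for p in combinations(range(m), 2)
             if cycle_of[p[0]] == cycle_of[p[1]]}
    orbits = 0
    while pairs:
        orbits += 1
        p = next(iter(pairs))
        while p in pairs:  # walk the orbit of p under w, removing as we go
            pairs.discard(p)
            p = frozenset(w[x] for x in p)
    return 2 * orbits == m - odd

ok = all(lemma_holds(w) for w in permutations(range(6)))
print(ok)  # True
```

For instance, a $6$-cycle gives $3$ orbits on its $15$ internal 2-sets (one of size $3$ and two of size $6$), matching $\frac{m}{2}=3$.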
\begin{theorem} \label{cas2} Let $S_m$ be viewed as a subgroup of
$S_n$ with $n={{m \choose 2}}$ using its action on 2-sets of $\{1,\cdots,m\}$.
Then the proportion of elements of $S_n$ contained in a conjugate
of $S_m$ is at most $O \left( \frac{\log(n)}{n} \right)$.
\end{theorem}
\begin{proof}
We claim that any element $w$ of $S_m$ has at least $\frac{m}{12}$
cycles when viewed as an element of $S_n$. Indeed, if $A_1(w)>
\frac{m}{2}$, then $w$ fixes at least $\frac{m(m-1)}{8} \geq
\frac{m}{12}$ two-sets. So we suppose that $A_1(w) \leq
\frac{m}{2}$. Clearly $\sum_{i \geq 3 \ odd} A_i(w) \leq
\frac{m}{3}$. Thus Lemma \ref{cyc} implies that $w$ has at least
$\frac{m}{2} - \frac{m}{4} - \frac{m}{6} = \frac{m}{12}$ cycles as an
element of $S_n$. The number of cycles of a random element of $S_n$
has mean and variance asymptotic to $\log(n) \sim 2 \log(m)$ (and is
in fact asymptotically normal) \cite{Go}. Thus by Chebyshev's
inequality, the proportion of elements of $S_n$ with at least
$\frac{m}{12}$ cycles is $O \left( \frac{\log(m)}{m^2} \right) = O
\left( \frac{\log(n)}{n} \right)$, as desired. \end{proof}
To analyze Case 3 of Theorem \ref{bobemailprim}, the following bound,
based on the generating function from
Section \ref{impriv}, will be needed.
\begin{prop} \label{blockbound} Let $n=m^2$. The proportion of elements
in $S_{m^2}$ which fix a system of $m$ blocks
of size $m$ is $O(n^{-3/4+\alpha})$ for any $\alpha>0$.
\end{prop}
\begin{proof} By Theorem \ref{genfunction}, the proportion in question is
at most the coefficient of
$u^m$ in \[ \exp \left( \sum_{k \geq 1} \frac{u^k}{mk}
(1+\frac{1}{k}) (1+\frac{1}{2k}) \cdots (1+\frac{1}{(m-1)k}) \right).\]
If $f(u)$ and $g(u)$
are power series in $u$, we write $f(u)<<g(u)$ if the coefficient of $u^n$
in $f(u)$ is less than or equal to
the corresponding coefficient in $g(u)$, for all $n$.
Since $\log(1+x) \leq x$ for $0<x<1$, one has that
\[ \log \left( \prod_{i=1}^{m-1} (1+\frac{1}{ik}) \right) \leq
\sum_{i=1}^{m-1}
\frac{1}{ki} \leq \frac{1}{k} (1+ \log(m-1)).\] Thus \begin{eqnarray*}
& & \exp \left( \sum_{k \geq 1} \frac{u^k}{mk} (1+\frac{1}{k})
(1+\frac{1}{2k}) \cdots (1+\frac{1}{(m-1)k}) \right)\\
& << & \exp
\left( \sum_{k \geq 1} \frac{u^k}{mk} e^{1/k} (m-1)^{1/k} \right)\\ &
<< & e^{ue} \exp \left( \sum_{k \geq 2} \frac{u^k}{k}
\sqrt{\frac{e}{m}} \right)\\ & << & e^{ue} \exp \left( \sum_{k \geq 1}
\frac{u^k}{k} \sqrt{\frac{e}{m}} \right) \\ & = & e^{ue}
(1-u)^{-\sqrt{\frac{e}{m}}}. \end{eqnarray*}
The coefficient of $u^i$ in $(1-u)^{-\sqrt{\frac{e}{m}}}$ is \[
\frac{1}{i!} \sqrt{\frac{e}{m}} \prod_{j=1}^{i-1}
(\sqrt{\frac{e}{m}}+j) = \frac{1}{i} \sqrt{\frac{e}{m}}
\prod_{j=1}^{i-1} (1+ \frac{1}{j} \sqrt{\frac{e}{m}}).\] Since \[ \log
\left( \prod_{j=1}^{i-1} (1+\frac{1}{j} \sqrt{\frac{e}{m}}) \right)
\leq \sum_{j=1}^{i-1} \frac{1}{j} \sqrt{\frac{e}{m}} \leq
\sqrt{\frac{e}{m}} (1+ \log(i-1)),\] it follows that \[
\prod_{j=1}^{i-1} (1+ \frac{1}{j} \sqrt{\frac{e}{m}}) \leq
[e(i-1)]^{\sqrt{\frac{e}{m}}}.\] This is at most a universal constant
$A$ if $0 \leq i \leq m$. Thus the coefficient of $u^m$ in $e^{ue}
(1-u)^{-\sqrt{\frac{e}{m}}}$ is at most \[ \frac{e^m}{m!} + A
\sqrt{\frac{e}{m}} \sum_{i=1}^m \frac{1}{i} \frac{e^{m-i}}{(m-i)!}.\]
By Stirling's formula (page 52 of \cite{Fe}), $m!> m^m
e^{-m+1/(12m+1)} \sqrt{2 \pi m}$, which implies that the first term is
very small for large $m$. To bound the sum, consider the terms for
$i \ge m^{1-\alpha}$, where $0<\alpha<1$. These contribute at most
$\frac{B m^{\alpha}}{m^{3/2}}$ for a universal constant $B$. The
contribution of the other terms is negligible in
comparison, by Stirling's formula. Summarizing, the contribution of
the sum is $O(m^{-3/2+\alpha}) = O(n^{-3/4+\alpha/2})$, as
desired. \end{proof}
The following theorem gives a bound for Case 3 of Theorem \ref{bobemailprim}.
\begin{theorem} \label{cas3} Let $S_m \wr S_2$ be viewed as a
subgroup of $S_n$ with $n=m^2$ using its action on
the Cartesian product $\{1,\cdots,m\}^2$. Then the proportion of elements
of $S_n$ conjugate to
an element of $S_m \wr S_2$ is $O(n^{-3/4+\alpha})$ for any $\alpha>0$.
\end{theorem}
\begin{proof} Consider elements of $S_m \wr S_2$ of the form $(w_1,w_2;id)$.
These all fix $m$ blocks of size $m$ in the action on $\{1,\cdots,m\}^2$;
the blocks consist of points with a given first coordinate. By
Proposition \ref{blockbound}, the proportion of elements of $S_n$
conjugate to some $(w_1,w_2;id)$ is $O(n^{-3/4+\alpha})$ for any
$\alpha>0$.
Next, consider an element of $S_m \wr S_2$ of the form
$\sigma=(w_1,w_2;(12))$. Then $\sigma^2=(w_1w_2,w_2w_1;id)$. Note that
$w_1w_2$ and $w_2w_1$ are conjugate in $S_m$, and let $A_i$ denote
their common number of $i$-cycles. Observe that if $x$ is in an
$i$-cycle of $w_1w_2$, and $y$ is in an $i$-cycle of $w_2w_1$, then
$(x,y) \in \Omega$ is in an orbit of $\sigma^2$ of size $i$. Hence the
total number of orbits of $\sigma^2$ of size $i$ is at least
$\frac{(iA_i)^2}{i} \geq iA_i$. Thus the total number of orbits of
$\sigma^2$ on $\Omega$ is at least $\sum_i iA_i=m$. Hence the total
number of orbits of $\sigma$ is at least $\frac{m}{2}$. Arguing as in
the proof of Theorem \ref{cas2}, it follows that the proportion of
elements of $S_n$ conjugate to an element of the form $\sigma$ is $O
\left( \frac{\log(n)}{n} \right)$, and so is $O(n^{-3/4+\alpha})$ for
any $\alpha>0$. \end{proof}
Now the main result of this section can be proved.
\begin{theorem} \label{mainres} The proportion of elements of
$S_n$ which belong to a
primitive subgroup not containing $A_n$ is at most $O(n^{-2/3+\alpha})$
for any $\alpha>0$.
\end{theorem}
\begin{proof} Fix $\alpha>0$. By Bovey \cite{Bo}, the proportion of
elements $w$ of $S_n$ such that $\langle w \rangle$
has minimum degree greater than
$n^{2/3}$ is $O(n^{-2/3+\alpha})$. Thus the proportion of $w \in S_n$
which lie in a primitive permutation group having minimal degree greater
than $n^{2/3}$ is $O(n^{-2/3+\alpha})$. The only primitive
permutation groups of degree $n$ with minimal degree $\leq n^{2/3}$, and
not containing $A_n$ are given by Cases 2 and 3 of Theorem
\ref{bobemailprim}. Theorems \ref{cas2} and \ref{cas3} imply that the
proportion of $w$ lying in the union of all such subgroups is
$O(n^{-2/3+\alpha})$, so the result follows. \end{proof}
A trivial corollary of the theorem is that the analogous statement holds for $A_n$ as well. \\
{\it Remark}\ The actions of the symmetric group studied in this
section embed the group as a subgroup of various larger symmetric
groups. Any such embedding can be thought of as a code in the larger
symmetric group. Such codes may be used for approximating sums of
various functions over the larger symmetric group via a sum over the
smaller symmetric group. Our results can be interpreted as giving
examples of functions where the approximation is not particularly
accurate.
For example, the proof of Theorem \ref{cas2} shows this to be the case
when $S_m$ is viewed as a subgroup of $S_n, n = \binom{m}{2}$
using the actions on
2-sets, and the function is the number of cycles.
\section{Related results and applications} \label{survey}
There are numerous applications of the distribution of fixed points
and derangements. Subsection \ref{motivnum} mentions some motivation
from number theory. Subsection \ref{shalev} discusses some literature
on the proportion of derangements and an analog of the main result of
our paper for finite classical groups. Subsection \ref{fpr} discusses
fixed point ratios, emphasizing the application to random
generation. Subsection \ref{miscell} collects some miscellany about
fixed points and derangements, including algorithmic issues and
appearances in algebraic combinatorics.
While this section does cover many topics, the survey is by no means
comprehensive. Some splendid presentations of several other topics
related to derangements are Serre \cite{Se}, Cameron's lecture notes
\cite{Ca2} and Section 6.6 of \cite{Ca1}. For the connections with
permutations
with restricted positions and rook polynomials see
\cite[2.3, 2.4]{Stanley}.
\subsection{Motivation from number theory} \label{motivnum}
We describe two number theoretic applications of derangements which
can be regarded as motivation for their study:\\
(1) {\it Zeros of polynomials} Let $h(T)$ be a polynomial with
integer coefficients which is irreducible over the integers.
Let $\pi(x)$ be the number of primes $\leq x$ and let $\pi_h(x)$ be
the number of primes $\leq x$ for which
$h$ has no zeros mod $p$. It follows from Chebotarev's density theorem
(see \cite{LS} for history and a proof sketch), that $\lim_{x
\rightarrow \infty} \frac{\pi_h(x)}{\pi(x)}$ is equal to the
proportion of derangements in the Galois group $G$ of $h(T)$ (viewed
as a permutation group on the roots of $h(T)$). Several detailed examples are
worked out in Serre's survey \cite{Se}.
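As a concrete illustration (our own choice of example, not one of those in \cite{Se}), take $h(T) = T^3 - 2$, whose Galois group over $\mathbb{Q}$ is $S_3$ acting on the three roots. The proportion of derangements in $S_3$ is $2/6 = 1/3$, so by the above roughly one third of all primes should admit no root of $h$ mod $p$. The following Python sketch checks both counts by brute force (the prime bound $10^4$ is an arbitrary choice):

```python
from itertools import permutations

# Proportion of derangements in S_3, the Galois group of T^3 - 2 over Q.
s3 = list(permutations(range(3)))
derangements = [p for p in s3 if all(p[i] != i for i in range(3))]
galois_proportion = len(derangements) / len(s3)  # 2/6 = 1/3

# Proportion of primes p <= 10^4 for which T^3 - 2 has no root mod p.
limit = 10000
sieve = [True] * (limit + 1)
sieve[0] = sieve[1] = False
for i in range(2, int(limit ** 0.5) + 1):
    if sieve[i]:
        for j in range(i * i, limit + 1, i):
            sieve[j] = False
primes = [p for p in range(2, limit + 1) if sieve[p]]

no_root = sum(1 for p in primes
              if all(pow(x, 3, p) != 2 % p for x in range(p)))
empirical_proportion = no_root / len(primes)
print(galois_proportion, empirical_proportion)
```

The empirical proportion over the 1229 primes below $10^4$ comes out close to $1/3$, as Chebotarev predicts.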
In addition, there are applications such as the number field sieve
for factoring integers (Section 9 of \cite{BLP}), where it is
important to understand the proportion of primes for which $h$ has no
zeros mod $p$. This motivated Lenstra (1990) to pose the question of
finding a good lower bound for the proportion of derangements of a
transitive permutation group acting on a set of $n$ letters with $n
\geq 2$. Results on this question are described in Subsection
\ref{shalev}. \\
(2) {\it The value problem.} Let $\mathbb{F}_q$ be a finite field of
size $q$ with characteristic $p$ and let $f(T)$ be a polynomial of
degree $n>1$ in $\mathbb{F}_q[T]$ which is not a polynomial in
$T^p$. The arithmetic question raised by Chowla \cite{Ch} is to
estimate the number $V_f$ of distinct values taken by $f(T)$ as $T$
runs over $\mathbb{F}_q$.
There is an asymptotic formula for $V_f$ in terms of certain Galois
groups and derangements. More precisely, let $G$ be the Galois group
of $f(T)-t=0$ over $\mathbb{F}_q(t)$ and let $N$ be the Galois group
of $f(T)-t=0$ over $\overline{\mathbb{F}}_q(t)$, where
$\overline{\mathbb{F}}_q$ is an algebraic closure of $\mathbb{F}_q$
(we are viewing $f(T)-t$ as a polynomial with variable $T$ with
coefficients in $\mathbb{F}_q(t)$). Both groups act transitively on the $n$
roots of $f(T)-t=0$. The geometric monodromy group $N$ is a normal
subgroup of the arithmetic monodromy group $G$. The quotient group
$G/N$ is a cyclic group (possibly trivial).
\begin{theorem} (\cite{Co}) Let $xN$ be the coset which is the Frobenius
generator of the cyclic group $G/N$. The Chebotarev density theorem
for function fields yields the following asymptotic formula:
\[ V_f = \left( 1 - \frac{|S_0|}{|N|} \right) q + O(\sqrt{q}) \]
where $S_0$ is the set of group elements in the coset $xN$ which
act as derangements on the set of roots of $f(T)-t=0$.
The constant in the above error term depends only on $n$, not on $q$.
\end{theorem}
As an example, let $f(T)=T^r$ with $r$ prime and different from $p$
(the characteristic of the base field $\mathbb{F}_q$). The Galois
closure of $\mathbb{F}_q(T)/\mathbb{F}_q(T^r)$ is
$\mathbb{F}_q(\mu,T)$ where $\mu$ is a nontrivial $r$th root of $1$.
Thus $N$ is cyclic of order $r$ and $G/N$ is isomorphic to the
Galois group of $\mathbb{F}_q(\mu)/\mathbb{F}_q$.
The permutation action is of degree $r$. If $G=N$, then
every non-trivial element is a derangement and so the image of $f$ has
size roughly $\frac{q}{r} + O(\sqrt{q})$. If $G \neq N$, then $G$ is a
Frobenius group and every fixed point free element is contained in $N$.
Indeed, since in this case $(r,q-1)=1$, we see that $T^r$ is bijective
on $\mathbb{F}_q$.
For further examples, see Guralnick-Wan \cite{GW} and references therein.
Using work on derangements, they prove that if the degree of $f$ is relatively
prime to the characteristic $p$, then either $f$ is bijective or
$V_f \leq \frac{5q}{6} + O(\sqrt{q})$.
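These value-set sizes are easy to check numerically over small prime fields. In the sketch below (our own illustration), when $r \mid q-1$ the image of $T^r$ has exactly $1 + (q-1)/r$ elements, matching the $\frac{q}{r} + O(\sqrt{q})$ count above, while for $(r, q-1) = 1$ the map is a bijection:

```python
def value_set_size(q, r):
    # |{t^r : t in F_q}| for a prime field F_q = Z/qZ
    return len({pow(t, r, q) for t in range(q)})

# r | q - 1: the nonzero r-th powers form a subgroup of index r in F_q^*,
# so the image has size 1 + (q - 1) / r, i.e. roughly q / r.
print(value_set_size(7, 3))    # 3  (= 1 + 6/3)
print(value_set_size(13, 3))   # 5  (= 1 + 12/3)

# gcd(r, q - 1) = 1: t -> t^r permutes F_q, so every value is attained.
print(value_set_size(11, 3))   # 11
```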
\subsection{Proportion of derangements and Shalev's conjecture} \label{shalev}
Let $G$ be a finite permutation group acting transitively on a set $X$
of size $n>1$. Subsection \ref{motivnum} motivated the study of
$\delta(G,X)$, the proportion of derangements of $G$ acting on $X$. We
describe some results on this question, focusing particularly on lower
bounds and analogs of our main results for classical groups.
Perhaps the earliest such result is due to Jordan \cite{Jo}, who
showed that $\delta(G,X)>0$. Cameron and Cohen \cite{CaCo} proved that
$\delta(G,X) \geq 1/n$ with equality if and only if $G$ is a Frobenius
group of order $n(n-1)$, where $n$ is a prime power. See also \cite{Se},
which notes a topological application of Jordan's theorem.
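The Cameron--Cohen bound can be checked directly for the natural action of $S_n$, where it is sharp at $n = 3$ ($S_3$ is a Frobenius group of order $3 \cdot 2$) and strict for larger $n$. A brute-force sketch of our own:

```python
from itertools import permutations
from math import factorial

def derangement_proportion(n):
    # proportion of elements of S_n (natural action) with no fixed point
    count = sum(1 for p in permutations(range(n))
                if all(p[i] != i for i in range(n)))
    return count / factorial(n)

# Equality case of Cameron-Cohen: S_3 is a Frobenius group of order 6 = 3*2.
assert derangement_proportion(3) == 1/3
# For larger n the inequality delta(G, X) >= 1/n is strict for S_n.
for n in range(4, 9):
    assert derangement_proportion(n) > 1/n
```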
Based on extensive computations, it was asked in \cite{Bet} whether
there is a universal constant $\delta > 0$ (which they speculate may
be optimally chosen as $\frac{2}{7}$) such that $\delta(G,X)> \delta$
for all finite simple groups $G$. The existence of such a $\delta > 0$
was also conjectured by Shalev.
Shalev's conjecture was proved by Fulman and Guralnick in the series
of papers \cite{FG1},\cite{FG2},\cite{FG3}. We do not attempt to
sketch a proof of Shalev's conjecture here, but make a few remarks:
\begin{enumerate}
\item One can assume that the action of $G$ on $X$ is primitive, for
if $f:Y \rightarrow X$ is a surjection of $G$-sets, then $\delta(G,Y) \geq
\delta(G,X)$.
\item By Jordan's theorem \cite{Jo} that $\delta(G,X)>0$, the proof of
Shalev's conjecture is an asymptotic result: we only need to show
that there exists a $\delta>0$ such that for any sequence $G_i,X_i$
with $|X_i| \rightarrow \infty$, one has that $\delta(G_i,X_i)>\delta$
for all sufficiently large $i$.
\item When $G$ is the alternating group, by
Theorem \ref{theC},
for all primitive actions of $A_n$ except the action on $k$-sets,
the proportion of derangements tends to $1$. For the case of $A_n$ on
$k$-sets, one can give arguments similar to those of Dixon \cite{Dx1},
who proved that the proportion of elements of $S_n$ which are
derangements on $k$-sets is at least $\frac{1}{3}$.
\item When $G$ is a finite Chevalley group, the key is to study the
set of regular semisimple elements of $G$. Typically (there are some
exceptions in the orthogonal cases) this is the set of elements of $G$
whose characteristic polynomial is square-free. Now a regular
semisimple element is contained in a unique maximal torus, and there
is a map from maximal tori to conjugacy classes of the Weyl
group. This allows one to relate derangements in $G$ to derangements
in the Weyl group. For example, one concludes that the proportion of
elements of
$GL(n,q)$ which are regular semisimple and fix some $k$-space is at most the
proportion of elements in
$S_n$ which fix a $k$-set. For large $q$, algebraic group arguments show that
nearly all elements of $GL(n,q)$
are regular semisimple, and for fixed $q$, one uses generating functions to
uniformly bound the proportion of
regular semisimple elements away from $0$.
\end{enumerate}
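For small $n$, the $\frac{1}{3}$ lower bound in the third remark can be confirmed directly: an element of $S_n$ fixes the 2-set $\{a,b\}$ exactly when it fixes both $a$ and $b$ or swaps them. A brute-force check of our own:

```python
from itertools import combinations, permutations
from math import factorial

def derangement_proportion_on_2sets(n):
    # proportion of elements of S_n fixing no 2-subset of {0, ..., n-1}
    pairs = list(combinations(range(n), 2))
    count = sum(1 for p in permutations(range(n))
                if all({p[a], p[b]} != {a, b} for a, b in pairs))
    return count / factorial(n)

for n in range(4, 8):
    prop = derangement_proportion_on_2sets(n)
    print(n, prop)
    assert prop >= 1/3   # consistent with Dixon's bound
```

Equivalently, such an element has at most one fixed point and no 2-cycle; for $n=5$, say, only the cycle types $[5]$ and $[1,4]$ qualify, giving $54/120 = 0.45$.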
To close this subsection, we note that the main result of this paper has an
analog for finite classical groups.
The following result was stated in \cite{FG1} and is proved in \cite{FG2}.
\begin{theorem} Let $G_i$ be a sequence of classical groups
with the natural module of dimension $d_i$. Let
$X_i$ be a $G_i$-orbit of either totally singular or nondegenerate
subspaces (of the natural module) of dimension $k_i \le d_i/2$.
If $k_i \rightarrow \infty$, then $\lim \delta(G_i,X_i)=1$.
If $k_i$ is a bounded sequence, then there exist
$0 < \delta_1 < \delta_2 < 1$
so that $\delta_1 < \delta(G_i,X_i) < \delta_2$.
\end{theorem}
This result applies to any subgroup between the classical
group and its socle. In the case that $G_i = PSL$, we
view all subspaces as being totally singular (the totally
singular subspaces have parabolic subgroups as stabilizers).
We also remark that in characteristic $2$, we consider the orthogonal group
inside the symplectic group as the stabilizer of a subspace (indeed, if
we view $Sp(2m,2^e)=O(2m+1,2^e)$, then the orthogonal groups
are stabilizers of nondegenerate hyperplanes).
In fact, Fulman and Guralnick prove an analog of the Luczak-Pyber result
for symmetric groups. This result was proved by Shalev \cite{Sh1} for
$PGL(d,q)$ with $q$ fixed.
\begin{theorem} Let $G_i$ be a sequence of simple classical groups
with the natural module $V_i$ of dimension $d_i$ with $d_i \rightarrow \infty$.
Let $H_i$ be the union of all proper irreducible subgroups
(excluding orthogonal subgroups of the symplectic
group in characteristic $2$). Then
$\lim_{i \rightarrow \infty} |H_i|/|G_i| = 0$.
\end{theorem}
If the $d_i$ are fixed, then this result is false. For example,
if $G = PSL(2,q)$ and $H$ is the normalizer of a maximal torus
of $G$, then $\lim_{q \rightarrow \infty} \delta(G, G/H) = 1/2$.
However, the analog of the previous
theorem is proved in \cite{FG1} if the rank of the Chevalley group is fixed.
In this case, we take $H_i$ to be the union of maximal subgroups
which do not contain a maximal torus.
The example given above shows that the rank of the permutation action
going to $\infty$ does not
imply that the proportion of derangements tends to $1$. The results of
Fulman and Guralnick do show this is true if one considers simple Chevalley
groups over fields of bounded size.
\subsection{Fixed point ratios} \label{fpr}
Previous sections of this paper have discussed $fix(x)$, the number of
fixed points of an element $x$ of $G$ on a set $\Omega$.
This subsection concerns the fixed point ratio
$rfix(x)=\frac{fix(x)}{|\Omega|}$. We describe applications to
random generation. For many other applications
(base size, Guralnick-Thompson conjecture, etc.), see the survey \cite{Sh2}.
It should also be mentioned that fixed point ratios are a special case of
character ratios, which have numerous applications to areas such as random
walk \cite{D} and number theory \cite{GlM}.
Let $P(G)$ denote the probability that two random elements of a finite
group $G$ generate $G$. One of the first results concerning $P(G)$ is
due to Dixon \cite{Dx2}, who proved that $\lim_{n \rightarrow \infty}
P(A_n) = 1$. The corresponding result for finite simple classical
groups is due to Kantor and Lubotzky \cite{KL}. The strategy adopted
by Kantor and Lubotzky was to first note that for any pair $g,h \in
G$, one has that $\langle g,h \rangle \neq G$ if and
only if $\langle g,h \rangle$ is contained in
a maximal subgroup $M$ of $G$. Since $P(g,h \in M) = (|M|/|G|)^2$, it
follows that \[ 1 - P(G) \leq \sum_M \left( \frac{|M|}{|G|} \right)^2
\leq \sum_i \left( \frac{|M_i|}{|G|} \right)^2 \left(
\frac{|G|}{|M_i|} \right) = \sum_i \frac{|M_i|}{|G|}.\] Here $M$
denotes a maximal subgroup and $\{ M_i \}$ are representatives
of conjugacy classes of maximal subgroups. Roughly, to show that this
sum is small, one can use Aschbacher's classification of maximal
subgroups \cite{As}, together with Liebeck's upper bounds on sizes of
maximal subgroups \cite{Li}.
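For the smallest case $G = A_5$, the generation probability can even be computed exhaustively: enumerate all pairs and take the subgroup closure via a breadth-first search in the Cayley graph (in a finite group, the set reachable from the identity under right multiplication by the generators is already a subgroup). This is our own sketch, not taken from the papers cited:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def parity(p):
    # number of inversions mod 2
    return sum(1 for i in range(len(p))
               for j in range(i + 1, len(p)) if p[i] > p[j]) % 2

A5 = [p for p in permutations(range(5)) if parity(p) == 0]
assert len(A5) == 60

def subgroup_order(a, b):
    # BFS closure of <a, b> starting from the identity
    e = tuple(range(5))
    seen, frontier = {e}, [e]
    while frontier:
        nxt = []
        for x in frontier:
            for g in (a, b):
                y = compose(x, g)
                if y not in seen:
                    seen.add(y)
                    nxt.append(y)
        frontier = nxt
    return len(seen)

generating = sum(1 for a in A5 for b in A5 if subgroup_order(a, b) == 60)
print(generating / 3600)   # roughly 0.63
```

The proper subgroups of $A_5$ have order at most $12$, so non-generating pairs close quickly and the full enumeration over all $60^2$ pairs runs in seconds.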
Now suppose that one wants to study $P_x(G)$, the chance that a fixed
element $x$ and a random element $g$ of $G$ generate $G$. Then \[ 1
- P_x(G) = P(\langle x,g \rangle \neq G)
\leq \sum_{M \text{ maximal} \atop x \in M} P(g
\in M) = \sum_{M \text{ maximal} \atop x \in M} \frac{|M|}{|G|}.\] Here the
sum is over maximal subgroups $M$ containing $x$. Let $\{M_i\}$ be a
set of representatives of maximal subgroups of $G$, and write $M \sim
M_i$ if $M$ is conjugate to $M_i$. Then the above sum becomes \[
\sum_i \frac{|M_i|}{|G|} \sum_{M \sim M_i \atop x \in M} 1.\] To
proceed further we assume that $G$ is simple. Then, letting $N_G(M_i)$
denote the normalizer of $M_i$ in $G$, one has that \[ g_1 M_i
g_1^{-1} = g_2 M_i g_2^{-1} \leftrightarrow g_1^{-1}g_2 \in N_G(M_i)
\leftrightarrow g_1^{-1}g_2 \in M_i \leftrightarrow g_1 M_i = g_2
M_i.\] In other words, there is a bijection between conjugates of
$M_i$ and left cosets of $M_i$. Moreover, $x \in
gM_ig^{-1}$ if and only if $xgM_i = gM_i$. Thus \[ \frac{|M_i|}{|G|}
\sum_{M \sim M_i \atop x \in M} 1 = rfix(x,M_i). \] Here $rfix(x,M_i)$
denotes the fixed point ratio of $x$ on left cosets of $M_i$, that is
the proportion of left cosets of $M_i$ fixed by $x$. Summarizing,
$P_x(G)$ can be upper bounded in terms of the quantities
$rfix(x,M_i)$. This fact has been usefully applied in quite a few
papers (see \cite{GK}, \cite{FG4} and the references therein, for
example).
\subsection{Miscellany} \label{miscell}
This subsection collects some miscellaneous facts about fixed points and
derangements.
(1) {\it Formulae for fixed points}
We next state a well-known elementary proposition which gives
different formulae for the number of fixed points of an element in a
group action.
\begin{prop} Let $G$ be a finite group acting transitively on $X$.
Let $C$ be a conjugacy class of $G$ and let $g \in C$. Let $H$ be the
stabilizer of a point in $X$.
\begin{enumerate}
\item The number of fixed points of $g$ on $X$ is $\frac{|C \cap H|}{|C|} |X|$.
\item The fixed point ratio of $g$ on $X$ is $\frac{|C \cap H|}{|C|}$.
\item The number of fixed points of $g$ on $X$ is $|C_G(g)| \sum_i
|C_H(g_i)|^{-1}$ where the $g_i$ are representatives for the
$H$ classes of $C \cap H$.
\end{enumerate}
\end{prop}
\begin{proof} Clearly (1) and (2) are equivalent. To prove (1),
we determine the cardinality of the set
$\{(u,x) \in C \times X | ux=x \}$. On the one hand,
this set has size $|C| f(g)$ where $f(g)$ is the
number of fixed points of $g$. On the other hand,
it is $|X| |C \cap H|$, whence (1) holds.
For (3), note that $|C|= \frac{|G|}{|C_G(g)|}$ and $|C \cap H| =
\sum_i \frac{|H|}{|C_H(g_i)|}$ where the $g_i$ are representatives for
the $H$-classes of $C \cap H$. Plugging this into (1) and using $|X|=
\frac{|G|}{|H|}$ completes the proof. \end{proof}
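The proposition is easy to check exhaustively in a small case, say $S_4$ in its natural action on $\{0,1,2,3\}$ with $H$ the stabilizer of $0$ (a sanity check of our own):

```python
from itertools import permutations
from collections import defaultdict

n = 4
G = list(permutations(range(n)))

def cycle_type(p):
    seen, ct = set(), []
    for i in range(n):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            ct.append(length)
    return tuple(sorted(ct))

# conjugacy classes of S_n correspond to cycle types
classes = defaultdict(list)
for p in G:
    classes[cycle_type(p)].append(p)

for C in classes.values():
    g = C[0]
    fixed = sum(1 for i in range(n) if g[i] == i)
    C_cap_H = sum(1 for p in C if p[0] == 0)   # H = stabilizer of the point 0
    # part (1): fix(g) = (|C cap H| / |C|) * |X|, with |X| = n here
    assert fixed == C_cap_H / len(C) * n
```

For the class of transpositions, for instance, $|C| = 6$, $|C \cap H| = 3$, and $(3/6)\cdot 4 = 2$, the number of points a transposition fixes.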
(2) {\it Algorithmic issues}
It is natural to ask for an algorithm to generate a random
derangement in $S_n$, for example for cryptographic purposes. Of
course, one method is to simply generate random permutations until a
derangements is reached. A more closed form algorithm has been
suggested by Sam Payne. This begins by generating a random
permutation and then, working left to right, each fixed point is
transposed with a randomly chosen place. Each such transposition
decreases the number of fixed points and a clever non-inductive
argument shows that after one pass, the resulting derangement is
uniformly distributed. We do not know if this works starting with
the identity permutation instead of a random permutation.
A very different, direct algorithm for generating a uniformly chosen
derangement appears in \cite{De}. There is also a literature on Gray codes
for running through all derangements in the symmetric group; see \cite{BV} and
\cite{KoL}. \\
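The rejection method is a few lines in Python; the empirical check below (our own, with an arbitrary seed) confirms near-uniformity over the nine derangements of $S_4$:

```python
import random
from collections import Counter

random.seed(0)   # arbitrary seed, for reproducibility

def random_derangement(n):
    # rejection sampling: resample until no fixed point
    # (the acceptance probability tends to 1/e as n grows)
    while True:
        p = list(range(n))
        random.shuffle(p)
        if all(p[i] != i for i in range(n)):
            return tuple(p)

counts = Counter(random_derangement(4) for _ in range(9000))
assert len(counts) == 9                              # S_4 has 9 derangements
assert all(850 < c < 1150 for c in counts.values())  # roughly uniform
```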
(3) {\it Algebraic combinatorics}
The set of derangements has itself been the subject of some
combinatorial study. For example, D\'{e}sarm\'{e}nien \cite{De} has shown that
there is a bijection between derangements in $S_n$ and the set of
permutations with first ascent occurring in an even position. This is
extended and refined by D\'{e}sarm\'{e}nien and Wachs
\cite{DeW}. Diaconis, McGrath, and Pitman \cite{DMP} study the set of
derangements with a single descent. They show that this set admits an
associative, commutative product and unique factorization into cyclic
elements. B\'{o}na \cite{Bn} studies the distribution of cycles in
derangements, using among other things a result of E. Canfield that
the associated generating function has all real zeros. \\
(4) {\it Statistics}
The fixed points of a permutation give rise to a useful metric on
the permutation group: the Hamming metric. Thus $d(\pi,\sigma)$ is equal
to the number of places where $\pi$ and $\sigma$ disagree. This is a
bi-invariant metric on the permutation group and
$$d(\pi,\sigma) = d(id,\pi^{-1} \sigma) = n - fix(\pi^{-1} \sigma),$$
that is, $n$ minus the number of fixed points of $\pi^{-1} \sigma$. Such metrics have many statistical applications
(Chapter 6 of \cite{D}).
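A short check of these properties (our own illustration), using the composition convention $(\pi\sigma)(i) = \pi(\sigma(i))$:

```python
import random
from itertools import permutations

random.seed(1)
n = 5

def d(p, q):
    # Hamming distance: number of positions where p and q disagree
    return sum(1 for i in range(n) if p[i] != q[i])

def compose(p, q):
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    inv = [0] * n
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

perms = list(permutations(range(n)))
for _ in range(200):
    p, q, t = (random.choice(perms) for _ in range(3))
    # bi-invariance: d(tp, tq) = d(pt, qt) = d(p, q)
    assert d(compose(t, p), compose(t, q)) == d(p, q)
    assert d(compose(p, t), compose(q, t)) == d(p, q)
    # d(p, q) equals n minus the number of fixed points of p^{-1} q
    r = compose(inverse(p), q)
    assert d(p, q) == n - sum(1 for i in range(n) if r[i] == i)
```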
| {
    "timestamp": "2007-08-21T01:58:22",
    "yymm": "0708",
    "arxiv_id": "0708.2643",
    "language": "en",
    "url": "https://arxiv.org/abs/0708.2643",
    "abstract": "The number of fixed points of a random permutation of 1,2,...,n has a limiting Poisson distribution. We seek a generalization, looking at other actions of the symmetric group. Restricting attention to primitive actions, a complete classification of the limiting distributions is given. For most examples, they are trivial -- almost every permutation has no fixed points. For the usual action of the symmetric group on k-sets of 1,2,...,n, the limit is a polynomial in independent Poisson variables. This exhausts all cases. We obtain asymptotic estimates in some examples, and give a survey of related results.",
    "subjects": "Combinatorics (math.CO); Group Theory (math.GR)",
    "title": "On fixed points of permutations"
} |
https://arxiv.org/abs/2212.00228 | Gated Recurrent Neural Networks with Weighted Time-Delay Feedback | We introduce a novel gated recurrent unit (GRU) with a weighted time-delay feedback mechanism in order to improve the modeling of long-term dependencies in sequential data. This model is a discretized version of a continuous-time formulation of a recurrent unit, where the dynamics are governed by delay differential equations (DDEs). By considering a suitable time-discretization scheme, we propose $\tau$-GRU, a discrete-time gated recurrent unit with delay. We prove the existence and uniqueness of solutions for the continuous-time model, and we demonstrate that the proposed feedback mechanism can help improve the modeling of long-term dependencies. Our empirical results show that $\tau$-GRU can converge faster and generalize better than state-of-the-art recurrent units and gated recurrent architectures on a range of tasks, including time-series classification, human activity recognition, and speech recognition. |
\section{Illustrations of Differences Between ODE and DDE Dynamics}
\label{sect:appA}
Of particular interest to us are the differences between ODEs and DDEs that are driven by an input.
To illustrate the differences in the context of RNNs in terms of how they map the input signal into an output signal, we consider the simple examples:
\begin{itemize}
\item[(a)] DDE based RNNs with the hidden states $h \in \RR$ satisfying $\dot{h} = -h(t-\tau) + u(t)$, with $\tau = 0.5$ and $\tau = 1$, and $h(t) = 0$ for $t \in [-\tau,0]$, and
\item[(b)] an ODE based RNN with the hidden states $h \in \RR$ satisfying $\dot{h} = -h(t) + u(t)$,
\end{itemize}
where $u(t) = \cos(t)$ is the driving input signal.
Figure \ref{fig:diff} shows the difference between the dynamics of the hidden state driven by the input signal for (a) and (b). We see that, compared to the ODE based RNN, the introduced delay causes a time lag in the responses of the DDE based RNNs to the input signal. The responses are also amplified. In particular, using $\tau = 0.5$ makes the response of the RNN closely match the input signal. In other words, simply fine-tuning the delay parameter $\tau$ in the scalar RNN model is sufficient to replicate the dynamics of the input signal.
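This behavior can be reproduced with a simple forward-Euler scheme that stores the whole trajectory so that the delayed value $h(t-\tau)$ can be read off (our own sketch; the step size and horizon are arbitrary choices). The resulting steady-state amplitudes are consistent with the gain $|i\omega + e^{-i\omega\tau}|^{-1}$ of the linear DDE at the forcing frequency $\omega = 1$: about $0.71$ for the ODE, about $0.98$ for $\tau = 0.5$, and an amplified value near $1.78$ for $\tau = 1$.

```python
import math

def simulate(tau, T=40.0, dt=0.005):
    """Forward Euler for h'(t) = -h(t - tau) + cos(t), with h = 0 on [-tau, 0].
    Setting tau = 0 recovers the ODE h' = -h + cos(t)."""
    steps = int(round(T / dt))
    m = int(round(tau / dt))          # delay measured in steps
    h = [0.0] * (steps + 1)
    for k in range(steps):
        delayed = h[k - m] if k >= m else 0.0
        h[k + 1] = h[k] + dt * (-delayed + math.cos(k * dt))
    return h

def steady_amplitude(tau, dt=0.005):
    h = simulate(tau, dt=dt)
    tail = h[-int(round(2 * math.pi / dt)):]   # last full forcing period
    return max(abs(v) for v in tail)

print(steady_amplitude(0.0))   # about 0.71, i.e. 1/sqrt(2)
print(steady_amplitude(0.5))   # close to the input amplitude 1
print(steady_amplitude(1.0))   # amplified response
```

Since $\tau < \pi/2$ in both delayed cases, the unforced dynamics are stable and the zero-history transient has decayed long before $t = 40$.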
To further illustrate the differences, we consider the following examples of RNN with a nonlinear activation:
\begin{itemize}
\item[(c)] DDE based RNNs with the hidden states $h \in \RR$ satisfying $\dot{h} = - h + \tanh(-h(t-\tau) + s(t))$, with $\tau > 0$, and $h(t) = 0$ for $t \in [-\tau,0]$, and
\item[(d)] an ODE based RNN with the hidden states $h \in \RR$ satisfying $\dot{h} = - h + \tanh(-h(t) + s(t))$,
\end{itemize}
where the driving input signal $s$ is taken to be the truncated Weierstrass function:
\begin{equation}
s(t) = \sum_{n=0}^3 a^{-n} \cos(b^n \cdot \omega t ),
\end{equation}
where $a=3$, $b=4$ and $\omega=2$.
Figure \ref{fig:diff2} shows the difference between the input-driven dynamics of the hidden states for (c) and (d). We see that, compared to the ODE based RNN, the introduced delay causes a time lag in the response of the RNN to the input signal. Even though the responses of both RNNs do not match the input signal precisely (since we consider RNNs with one-dimensional hidden states here, their expressivity is limited), we see that using $\tau = 0.5$ produces a response that tracks the seasonality of the input signal better than the ODE based RNN does.
\begin{figure}[!t]
\centering \includegraphics[width=0.85\textwidth]{figs/diff_plot.pdf} \caption{Hidden state dynamics of the DDE based RNNs with $\tau=0.5$ and $\tau=1$, and the ODE based RNN ($\tau=0$). All RNNs are driven by the same cosine input signal. } \label{fig:diff}
\end{figure}
\begin{figure}[!h]
\centering \includegraphics[width=0.85\textwidth]{figs/simple_plot2.pdf} \caption{Hidden state dynamics of the DDE based RNN with $\tau=0.5$ and the ODE based RNN ($\tau = 0$). All RNNs are driven by the same input signal $s(t)$. } \label{fig:diff2}
\end{figure}
\section{Proof of Theorem \ref{thm_exist2main}}
\label{sect:appB}
In this section, we provide a proof of Theorem \ref{thm_exist2main}.
To start, note that one can view the solution of the DDE as a mapping of functions on the interval $[t -\tau,t]$ into functions on the interval $[t,t + \tau]$, i.e., as a sequence of functions defined over a set of contiguous time intervals of length $\tau$.
This perspective makes it more straightforward to prove existence and uniqueness theorems analogous to those for ODEs than by directly viewing DDEs as an evolution over the state space $\RR^d$.
The following theorem, adapted from Theorem 3.7 in \cite{smith2011introduction}, provides sufficient conditions for existence and uniqueness of solution through a point $(t_0, \phi) \in \mathbb{R} \times C$ for the IVP \eqref{eq_genddemain}. Recall that $C := C([-\tau, 0], \mathbb{R}^d)$, the Banach space of continuous functions from $[-\tau, 0]$ into $\mathbb{R}^d$ with the topology of uniform convergence. It is equipped with the norm $\|\phi \| := \sup \{ |\phi(\theta)| : \theta \in [-\tau, 0] \}$.
\begin{theorem}[Adapted from Theorem 3.7 in \cite{smith2011introduction}]
\label{thm_existence}
Let $t_0 \in \RR$ and $\phi \in C$ be given. Assume that $f$ is continuous and satisfies the Lipschitz condition on each bounded subset of $\mathbb{R} \times C$, i.e., for all $a, b \in \mathbb{R}$ and $M > 0$, there exists a constant $K > 0$ such that
\begin{equation}
|f(t, \phi) - f(t, \psi)| \leq K \| \phi - \psi \|, \ t \in [a,b], \ \|\phi\|, \|\psi \| \leq M,
\end{equation}
with $K$ possibly dependent on $a, b, M$.
There exists $A > 0$, depending only on $M$, such that if $\phi \in C$ satisfies $\| \phi \| \leq M$, then there exists a unique solution $h(t) = h(t, \phi)$ of Eq. \eqref{eq_genddemain}, defined on $[t_0 - \tau, t_0 + A]$. Moreover, if $K$ is the Lipschitz constant for $f$ corresponding to $[t_0, t_0 + A]$ and $M$, then
\begin{equation}
\max_{\eta \in [t_0 - \tau, t_0 + A]} |h(\eta, \phi) - h(\eta, \psi)| \leq \|\phi - \psi \| e^{KA}, \ \|\phi\|, \| \psi \| \leq M.
\end{equation}
\end{theorem}
We now provide an existence and uniqueness result for the continuous-time $\tau$-GRU model, assuming that the input $x$ is continuous in $t$. As before, we define the state $h_t \in C$ as:
\begin{equation}
h_t(\theta) := h(t+\theta), \ -\tau \leq \theta \leq 0.
\end{equation}
Then the DDE defining the model can be formulated as the following IVP for the nonautonomous system:
\begin{equation} \label{eq_gendde}
\dot{h}(t) = -h(t) + u(t, h(t)) + a(t, h(t)) \odot z(t, h_t), \ t \geq t_0,
\end{equation}
and $h_{t_0} = \phi \in C$ for some initial time $t_0 \in \mathbb{R}$, with the dependence on $x(t)$ cast as dependence on $t$ for notational convenience.
Now we restate Theorem \ref{thm_exist2main} from the main text and provide the proof.
\begin{theorem}[Existence and uniqueness of solution for continuous-time $\tau$-GRU] \label{thm_exist2}
Let $t_0 \in \RR$ and $\phi \in C$ be given. There exists a unique solution $h(t) = h(t, \phi)$ of Eq. \eqref{eq_gendde}, defined on $[t_0 - \tau, t_0 + A]$ for any $A > 0$. In particular, the solution exists for all $t \geq t_0$, and
\begin{equation}
\| h_t(\phi) - h_t(\psi) \| \leq \| \phi - \psi \| e^{K(t-t_0)},
\end{equation}
for all $t \geq t_0$, where $K = 1 + \|W_1\| + \|W_2\| + \|W_4\|/4$.
\end{theorem}
\begin{proof}
We shall apply Theorem \ref{thm_existence}.
To verify the Lipschitz condition: for any $\phi, \psi \in C$,
\begin{align}
&
\hspace{-7mm}
|(u(t, \phi) + a(t, \phi) \odot z(t, \phi) - \phi) - (u(t, \psi) + a(t, \psi) \odot z(t, \psi) - \psi)| \nonumber \\
&\leq |u(t, \phi) - u(t, \psi)| + |\phi - \psi| + |(a(t,\phi) - a(t, \psi)) \odot z(t,\phi) | + | a(t, \psi) \odot (z(t, \phi) - z(t, \psi) )| \\
&\leq \|W_1\| \cdot |\phi - \psi| + |\phi - \psi| + \frac{1}{4} \|W_4\| \cdot |\phi - \psi| + \|W_2\| \cdot |\phi - \psi| \\
&=: K |\phi - \psi|,
\end{align}
where, in the last inequality above, we have used the fact that tanh and the sigmoid are Lipschitz continuous and bounded by one in absolute value, with positive derivatives of magnitude no larger than one and $1/4$, respectively.
Therefore, we see that the right hand side function satisfies a global Lipschitz condition and so the result follows from Theorem \ref{thm_existence}.
\end{proof}
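The two derivative bounds used above are tight at the origin and easy to confirm numerically (a quick check of our own, on an arbitrary grid):

```python
import math

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

xs = [i * 1e-3 for i in range(-8000, 8001)]
max_dsigmoid = max(sigmoid(x) * (1.0 - sigmoid(x)) for x in xs)
max_dtanh = max(1.0 - math.tanh(x) ** 2 for x in xs)

# sup sigma' = 1/4 and sup tanh' = 1, both attained at x = 0; these feed
# into the Lipschitz constant K = 1 + ||W_1|| + ||W_2|| + ||W_4||/4
print(max_dsigmoid, max_dtanh)
```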
\section{Proof of Proposition \ref{prop_delaymain}} \label{sect:appC}
In this section, we restate Proposition \ref{prop_delaymain}, and provide its proof and some remarks.
\begin{proposition} \label{prop_delay}
Consider the linear time-delayed RNN whose hidden states are described by the update equation:
\begin{equation} \label{eq_recursion}
h_{n+1} = Ah_n + Bh_{n-m} + Cu_n, \ n=0,1,\dots,
\end{equation}
and $h_n = 0$ for $n =-m, -m+1, \dots, 0$ with $m > 0$.
Then, assuming that $A$ and $B$ commute, we have:
\begin{equation}
\frac{\partial h_{n+1}}{\partial u_i} = A^{n-i} C,
\end{equation}
for $n=0,1, \dots, m$, $i=0,\dots, n$, and
\begin{align}
\frac{\partial h_{m+1+j}}{\partial u_i} &= A^{m+j-i} C + \delta_{i,j-1} BC + 2 \delta_{i,j-2} ABC \nonumber \\
&\ \ \ \ + 3 \delta_{i,j-3} A^2 B C + \cdots + j \delta_{i,0} A^{j-1} B C,
\end{align}
for $j = 1,2,\dots, m+1$, $i=0,1,\dots, m+j$, where $\delta_{i,j}$ denotes the Kronecker delta.
\end{proposition}
\begin{proof}
Note that by definition $h_i = 0$ for $i=-m, -m+1, \dots, 0$, and, upon iterating the recursion \eqref{eq_recursion}, one~obtains:
\begin{equation}
h_{n+1} = A^n C u_0 + A^{n-1} C u_1 + \cdots + AC u_{n-1} + C u_n,
\end{equation}
for $n = 0, 1, \dots, m$.
Now, applying the above formula for $h_1$, we obtain
\begin{align}
h_{m+2} &= A h_{m+1} + B h_1 + Cu_{m+1} \nonumber \\
&= (B + A^{m+1} ) C u_0 + A^{m} C u_1 + \cdots + A^2 C u_{m-1} + ACu_m + Cu_{m+1}.
\end{align}
Likewise, we obtain:
\begin{align}
h_{m+3} &= A h_{m+2} + B h_2 + Cu_{m+2} \nonumber \\
&= (BA + A^{m+2} + AB) C u_0 + (B+A^{m+1})C u_1 + A^m C u_2 + \cdots + AC u_{m+1} + C u_{m+2} \nonumber \\
&= (2AB + A^{m+2}) C u_0 + (B+A^{m+1})C u_1 + A^m C u_2 + \cdots + AC u_{m+1} + C u_{m+2},
\end{align}
where we have used commutativity of $A$ and $B$ in the last line above.
Applying the above procedure repeatedly and using commutativity of $A$ and $B$ give:
\begin{equation} \label{eq_linear}
h_{m+1+j} = (A^{m+j} + j A^{j-1} B) C u_0 + ( A^{m+j-1} + (j-1) A^{j-2} B) C u_1 + \cdots + ACu_{m+j-1} + C u_{m+j},
\end{equation}
for $j = 1,2, \dots, m+1$.
The formula in the proposition then follows upon taking the derivative with respect to the $u_i$ in the above formula for the hidden states $h_k$.
\end{proof}
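The closed-form coefficients can be verified numerically in the scalar case, where $A$ and $B$ trivially commute. The sketch below (our own; the weights are arbitrary) propagates $\partial h_n / \partial u_i$ through the recursion and compares against the proposition, using that the sum of Kronecker-delta terms collapses to $(j-i) A^{j-i-1} B C$ for $0 \leq i \leq j-1$:

```python
A, B, C = 0.7, 0.4, 1.0   # scalar weights, so A and B commute
m, N = 2, 12              # delay m and number of steps

# coef[n][i] = dh_n/du_i for the recursion h_{n+1} = A h_n + B h_{n-m} + C u_n
coef = {n: [0.0] * N for n in range(-m, N + 1)}
for n in range(N):
    for i in range(N):
        coef[n + 1][i] = A * coef[n][i] + B * coef[n - m][i]
    coef[n + 1][n] += C   # direct contribution of u_n

# dh_{n+1}/du_i = A^{n-i} C for n = 0, ..., m
for n in range(m + 1):
    for i in range(n + 1):
        assert abs(coef[n + 1][i] - A ** (n - i) * C) < 1e-12

# dh_{m+1+j}/du_i = A^{m+j-i} C, plus (j-i) A^{j-i-1} B C when i <= j-1
for j in range(1, m + 2):
    for i in range(m + j + 1):
        expected = A ** (m + j - i) * C
        if i <= j - 1:
            expected += (j - i) * A ** (j - i - 1) * B * C
        assert abs(coef[m + 1 + j][i] - expected) < 1e-12
```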
We remark that one could also derive formulas for the gradients $\frac{\partial h_{n+1+j}}{\partial u_i}$ for $n \geq 2m+1$, as well as those for our $\tau$-GRU architecture, analogously, albeit the resulting expressions are quite complicated.
In particular, in the former case the coefficients in front of the Kronecker deltas would involve higher powers of $B$ (with much more complicated expressions if commutativity of the matrices is not assumed). However, we emphasize that the qualitative conclusion derived from the analysis remains the same: the introduction of delays places more emphasis on gradient information due to input elements in the past, so the delays act as buffers that propagate gradient information more effectively than in the counterpart models without delays.
\section{Gradient Bounds for $\tau$-GRU}
\label{app:gradbound}
\noindent
{\bf On the exploding and vanishing gradient problem.}
For simplicity of our discussion here, we consider the loss~function:
\begin{equation}
\mathcal{E}_n = \frac{1}{2} \| y_n - \overline{y}_n \|^2,
\end{equation}
where $n = 1, \dots, N$ and $\overline{y}_n$ denotes the underlying ground truth. The training of $\tau$-GRU involves computing gradients of this loss function with respect to its underlying parameters $\theta \in \Theta = [W_{1,2,3,4}, U_{1,2,3,4}, V]$ at each iteration of the gradient descent algorithm. Using the chain rule, we obtain \cite{pascanu2013difficulty}:
\begin{equation}
\frac{\partial \mathcal{E}_n}{\partial \theta} = \sum_{k=1}^n \frac{\partial \mathcal{E}_n^{(k)}}{\partial \theta},
\end{equation}
where
\begin{equation}
\frac{\partial \mathcal{E}_n^{(k)}}{\partial \theta} = \frac{\partial \mathcal{E}_n}{\partial h_n} \frac{\partial h_n}{\partial h_k} \frac{\partial^+ h_k}{\partial \theta},
\end{equation}
with $\frac{\partial^+ h_k}{\partial \theta}$ denoting the ``immediate'' partial derivative of
the state $h_k$ with respect to $\theta$, i.e., where $h_{k-1}$ is taken as a constant with respect to $\theta$ \cite{pascanu2013difficulty}.
The partial gradient $\frac{\partial \mathcal{E}_n^{(k)}}{\partial \theta}$ measures the contribution to the hidden state gradient at step $n$ due to the information at step $k$. It can be shown that this gradient behaves as \begin{equation}
\frac{\partial \mathcal{E}_n^{(k)}}{\partial \theta} \sim \gamma^{n-k},
\end{equation}
for some $\gamma > 0$ \cite{pascanu2013difficulty}. If $\gamma > 1$, then the gradient grows exponentially in the sequence length for long-term dependencies where $k \ll n$, causing the exploding gradient problem. On the other hand, if $\gamma < 1$, then the gradient decays exponentially for $k \ll n$, causing the vanishing gradient problem. Therefore, we can investigate how $\tau$-GRU deals with these problems by deriving bounds on the gradients. In particular, we are interested in the behavior of the gradients for long-term dependencies, i.e., $k \ll n$, and shall show that the delay mechanism in $\tau$-GRU slows down the exponential decay rate, thereby reducing the sensitivity to the vanishing gradient problem.
Recall that the update equations defining $\tau$-GRU are given by $h_n = 0$ for $n=-m, -m+1, \dots, 0$,
\begin{equation}
h_n = (1-g(A_{n-1})) \odot h_{n-1} + g(A_{n-1}) \odot [u(B_{n-1}) + a(C_{n-1}) \odot z(D_{n-m-1})],
\end{equation}
for $n=1,2,\dots, N$, where $m := \lfloor \tau/\Delta t \rfloor \in \{1,2,\dots, N-1\}$, $A_{n-1} = W_3 h_{n-1} + U_3 x_{n-1}$, $B_{n-1} = W_1 h_{n-1} + U_1 x_{n-1}$,
$C_{n-1} = W_4 h_{n-1} + U_4 x_{n-1}$, and $D_{n-m-1} = W_2 h_{n-m-1} + U_2 x_{n-1}$.
In the sequel, we shall denote the $i$th component of a vector $v$ as $v^i$ and the $(i,j)$ entry of a matrix $A$ as $A^{ij}$.
We start with the following lemma.
\begin{lemma} \label{app_lem}
For every $i$, we have $ h_n^i = 0$, for $n = -m, -m+1, \dots, 0$, and $ |h_n^i| \leq 2$, for $n=1,2,\dots, N$.
\end{lemma}
\begin{proof}
The $i$th component of the hidden states of $\tau$-GRU are given by: $h_n^i = 0$ for $n=-m, -m+1, \dots, 0$, and
\begin{equation}
h_n^i = (1-g(A^i_{n-1})) h^i_{n-1} + g(A^i_{n-1}) [u(B^i_{n-1}) + a(C^i_{n-1}) z(D^i_{n-m-1})],
\end{equation}
for $n=1,2,\dots, N$.
Using the fact that $g(x), a(x) \in (0,1)$ and $u(x), z(x) \in (-1,1)$ for all $x$, we can bound the $h_n^i$ as:
\begin{align}
h_n^i &\leq (1-g(A^i_{n-1})) \max(h^i_{n-1},2) + g(A^i_{n-1}) \max(h^i_{n-1},2) \nonumber \\
&\leq \max(h^i_{n-1},2),
\end{align}
for all $i$ and $n = 1,2,\dots, N$.
Similarly, we have:
\begin{align}
h_n^i &\geq (1-g(A^i_{n-1})) \min(-2, h^i_{n-1}) + g(A^i_{n-1}) \min(-2, h^i_{n-1}) \nonumber \\
&\geq \min(-2, h^i_{n-1}),
\end{align}
for all $i$ and $n = 1,2,\dots, N$.
Thus,
\begin{equation}
\min(-2, h^i_{n-1}) \leq h_n^i \leq \max(h^i_{n-1},2),
\end{equation}
for all $i$ and $n = 1,2,\dots, N$.
Now, iterating over $n$ and using $h_0^i = 0$ for all $i$, we obtain $-2 \leq h_n^i \leq 2$ for all $i$ and $n = 1,2,\dots, N$.
\end{proof}
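The lemma can be stress-tested on a scalar $\tau$-GRU with aggressively sampled random weights (our own check; the weight and input ranges, seed, and trial counts are arbitrary):

```python
import math
import random

random.seed(0)
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
m = 3   # delay in steps

for trial in range(50):
    W = [random.uniform(-5, 5) for _ in range(4)]
    U = [random.uniform(-5, 5) for _ in range(4)]
    h = [0.0] * (m + 1)   # h_{-m}, ..., h_0 = 0
    for n in range(200):
        x = random.uniform(-3, 3)
        h_prev, h_delay = h[-1], h[-m - 1]
        gate = sigmoid(W[2] * h_prev + U[2] * x)       # g(A_{n-1})
        u_val = math.tanh(W[0] * h_prev + U[0] * x)    # u(B_{n-1})
        a_val = sigmoid(W[3] * h_prev + U[3] * x)      # a(C_{n-1})
        z_val = math.tanh(W[1] * h_delay + U[1] * x)   # z(D_{n-m-1})
        h_new = (1 - gate) * h_prev + gate * (u_val + a_val * z_val)
        assert abs(h_new) <= 2.0   # Lemma: |h_n^i| <= 2
        h.append(h_new)
```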
We now provide the gradient bound for $\tau$-GRU in the following proposition.
\begin{proposition} \label{app_prop}
Assume that there exists an $\epsilon > 0$ such that $\max_n g(A^i_{n-1}) \geq \epsilon$ and $\max_n a(C^i_{n-1}) \geq \epsilon$ for all $i$. Then
\begin{equation}
\left \| \frac{\partial h_n}{\partial h_k} \right\|_\infty \leq (1+C-\epsilon)^{n-k} + \| W_2\|_\infty \cdot \left( (1+C-\epsilon)^{n-k-2} \delta_{m,1} + \dots + (1+C-\epsilon) \delta_{m, n-k-2} + \delta_{m, n-k-1} \right),
\end{equation}
for $n=1, \dots, N$ and $k < n$, where $C = \|W_1\|_\infty + \|W_3\|_\infty + \frac{1}{4}\|W_4\|_\infty$.
\end{proposition}
\begin{proof}
Recall $h_n = 0$ for $n=-m, -m+1, \dots, 0$, and
\begin{equation}
h_n = (1-g(A_{n-1})) \odot h_{n-1} + g(A_{n-1}) \odot [u(B_{n-1}) + a(C_{n-1}) \odot z(D_{n-m-1})] =: F(h_{n-1}, h_{n-m-1}),
\end{equation}
for $n=1,2,\dots, N$.
Denote $q_{n,l} := \frac{\partial F}{\partial h_{n-l}}$, where $F := F(h_{n-1}, h_{n-l})$ for $l>1$.
The gradients $\frac{\partial h_n}{\partial h_k}$ can be computed recursively as~follows.
\begin{align}
p_n^{(1)} &:= \frac{\partial h_n}{\partial h_{n-1}}, \\
p_n^{(2)} &:= \frac{\partial h_n}{\partial h_{n-2}} = p_n^{(1)} p_{n-1}^{(1)} + q_{n,2} \delta_{m,1}, \\
p_n^{(3)} &:= \frac{\partial h_n}{\partial h_{n-3}} = p_n^{(1)} p_{n-1}^{(2)} + q_{n,3} \delta_{m,2}, \\
\vdots \\
p_n^{(n-k)} &:= \frac{\partial h_n}{\partial h_{k}} = p_n^{(1)} p_{n-1}^{(n-k-1)} + q_{n,n-k} \delta_{m,n-k-1}.
\end{align}
Since $\|p_n^{(n-k)}\| \leq \|p_n^{(1)} \| \cdot \| p_{n-1}^{(n-k-1)}\| + \|q_{n,n-k}\| \delta_{m,n-k-1}$, it remains to upper bound $p_n^{(1)}$ and $q_{n,n-k}$.
The $i$th component of the hidden states can be written as:
\begin{equation}
h_n^i = (1-g(A^i_{n-1})) h^i_{n-1} + g(A^i_{n-1}) [u(B^i_{n-1}) + a(C^i_{n-1}) z(D^i_{n-m-1})],
\end{equation}
where $A^i_{n-1} = W_3^{iq} h_{n-1}^q + U_3^{ir} x_{n-1}^{r}$, $B^i_{n-1} = W_1^{iq} h_{n-1}^q + U_1^{ir} x_{n-1}^{r}$, $C^i_{n-1} = W_4^{iq} h_{n-1}^q + U_4^{ir} x_{n-1}^{r}$, and $D^i_{n-m-1} = W_2^{iq} h_{n-m-1}^q + U_2^{ir} x_{n-1}^{r}$, using Einstein's summation notation for repeated indices.
Therefore, applying chain rule and using Einstein's summation for repeated indices in the following, we obtain:
\begin{align}
\frac{\partial h_n^i}{\partial h_{n-1}^j} &= (1-g(A_{n-1}^i)) \frac{\partial h_{n-1}^i}{\partial h_{n-1}^j} - \frac{\partial g(A_{n-1}^i)}{\partial A_{n-1}^l} \frac{\partial A_{n-1}^l}{\partial h_{n-1}^j} h_{n-1}^i + g(A_{n-1}^i) \frac{\partial u(B_{n-1}^i)}{\partial B_{n-1}^l} \frac{\partial B_{n-1}^l}{\partial h_{n-1}^j} \nonumber \\
&\ \ \ \ \ + \frac{\partial g(A_{n-1}^i)}{\partial A_{n-1}^l} \frac{\partial A_{n-1}^l}{\partial h_{n-1}^j} u(B_{n-1}^i) + g(A_{n-1}^i) \frac{\partial a(C_{n-1}^i)}{\partial C_{n-1}^l} \frac{\partial C_{n-1}^l}{\partial h_{n-1}^j} z(D_{n-m-1}^i) \nonumber \\
&\ \ \ \ \ + \frac{\partial g(A_{n-1}^i)}{\partial A_{n-1}^l} \frac{\partial A_{n-1}^l}{\partial h_{n-1}^j} a(C_{n-1}^i) z(D_{n-m-1}^i).
\end{align}
Noting that $\frac{\partial g(A_{n-1}^i)}{\partial A_{n-1}^l} = 0$, $\frac{\partial u(B_{n-1}^i)}{\partial B_{n-1}^l} = 0$ and $\frac{\partial a(C_{n-1}^i)}{\partial C_{n-1}^l} = 0$ for $i \neq l$, we have:
\begin{align}
\frac{\partial h_n^i}{\partial h_{n-1}^j} &= (1-g(A_{n-1}^i)) \frac{\partial h_{n-1}^i}{\partial h_{n-1}^j} - \frac{\partial g(A_{n-1}^i)}{\partial A_{n-1}^i} \frac{\partial A_{n-1}^i}{\partial h_{n-1}^j} h_{n-1}^i + g(A_{n-1}^i) \frac{\partial u(B_{n-1}^i)}{\partial B_{n-1}^i} \frac{\partial B_{n-1}^i}{\partial h_{n-1}^j} \nonumber \\
&\ \ \ \ \ + \frac{\partial g(A_{n-1}^i)}{\partial A_{n-1}^i} \frac{\partial A_{n-1}^i}{\partial h_{n-1}^j} u(B_{n-1}^i) + g(A_{n-1}^i) \frac{\partial a(C_{n-1}^i)}{\partial C_{n-1}^i} \frac{\partial C_{n-1}^i}{\partial h_{n-1}^j} z(D_{n-m-1}^i) \nonumber \\
&\ \ \ \ \ + \frac{\partial g(A_{n-1}^i)}{\partial A_{n-1}^i} \frac{\partial A_{n-1}^i}{\partial h_{n-1}^j} a(C_{n-1}^i) z(D_{n-m-1}^i) \\
&= (1-g(A_{n-1}^i)) \delta_{i,j} - \frac{\partial g(A_{n-1}^i)}{\partial A_{n-1}^i} W_3^{ij} h_{n-1}^i + g(A_{n-1}^i) \frac{\partial u(B_{n-1}^i)}{\partial B_{n-1}^i} W_1^{ij} \nonumber \\
&\ \ \ \ \ + \frac{\partial g(A_{n-1}^i)}{\partial A_{n-1}^i} W_3^{ij} u(B_{n-1}^i) + g(A_{n-1}^i) \frac{\partial a(C_{n-1}^i)}{\partial C_{n-1}^i} W_4^{ij} z(D_{n-m-1}^i) \nonumber \\
&\ \ \ \ \ + \frac{\partial g(A_{n-1}^i)}{\partial A_{n-1}^i} W_3^{ij} a(C_{n-1}^i) z(D_{n-m-1}^i).
\end{align}
Using the assumption that $\min_n g(A^i_{n-1}), \min_n a(C^i_{n-1}) \geq \epsilon$ for all $i$, the fact that $|z(x)|, |u(x)| \leq 1$, $g(x), a(x) \in (0,1)$, $g'(x), a'(x) \leq 1/4$, $u'(x) \leq 1$ for all $x \in \RR$, and Lemma \ref{app_lem}, we obtain:
\begin{align}
\left|\frac{\partial h_n^i}{\partial h_{n-1}^j} \right| &\leq (1-\epsilon) \delta_{i,j} + \frac{1}{4} |W_3^{ij}| \cdot |h_{n-1}^i| + |W_1^{ij}| + \frac{1}{4} |W_3^{ij}| + \frac{1}{4} |W_4^{ij}| + \frac{1}{4} |W_3^{ij}| \\
&\leq (1-\epsilon) \delta_{i,j} + |W_1^{ij}| + |W_3^{ij}| + \frac{1}{4} |W_4^{ij}|.
\end{align}
Therefore,
\begin{equation}
\left \| \frac{\partial h_n}{\partial h_{n-1}} \right\|_\infty := \max_{i=1,\dots, d} \sum_{j=1}^d \left| \frac{\partial h_n^i}{\partial h_{n-1}^j } \right| \leq (1-\epsilon ) + \|W_1\|_\infty + \|W_3 \|_\infty + \frac{1}{4} \|W_4\|_\infty =: 1-\epsilon + C.
\end{equation}
Likewise, we obtain, for $l>1$:
\begin{align}
\frac{\partial F^i}{\partial h_{n-l}^j} &= g(A_{n-1}^i) a(C_{n-1}^i) \frac{\partial z(D_{n-l}^i)}{\partial D_{n-l}^i} \frac{\partial D_{n-l}^i}{\partial h_{n-l}^j} \\
&= g(A_{n-1}^i) a(C_{n-1}^i) \frac{\partial z(D_{n-l}^i)}{\partial D_{n-l}^i} W_2^{ij}.
\end{align}
Using the fact that $|g(x)|, |a(x)| \leq 1$ and $|z'(x)| \leq 1$ for all $x \in \RR$, we obtain:
\begin{align}
\left|\frac{\partial F^i}{\partial h_{n-l}^j}\right| &\leq |W_2^{ij}|,
\end{align}
for $l > 1$,
and thus
$\|q_{n,n-k}\|_\infty = \| \frac{\partial F}{\partial h_k} \|_\infty \leq \|W_2\|_\infty$ for $n - k > 1$.
The upper bound in the proposition follows by using the above bounds for $\|p_{n}^{(1)}\|_\infty$ and $\|q_{n,n-k}\|_\infty$, and iterating the recursion $\|p_n^{(n-k)}\| \leq \|p_n^{(1)} \| \cdot \| p_{n-1}^{(n-k-1)}\| + \|q_{n,n-k}\| \delta_{m,n-k-1}$ over $k$.
\end{proof}
From Proposition \ref{app_prop}, we see that if $\epsilon > C$, then the gradient norm decays exponentially as $n-k$ becomes large. However, the delay in $\tau$-GRU introduces jump-ahead connections (buffers) that slow down this exponential decay. For instance, choosing $m=1$ for the delay, we have $\left \| \frac{\partial h_n}{\partial h_k} \right\| \sim (1+C-\epsilon)^{n-k-2}$ as $n-k \to \infty$, instead of $\left \| \frac{\partial h_n}{\partial h_k} \right\| \sim (1+C-\epsilon)^{n-k-1}$ in the case when no delay is introduced into the model. The larger $m$ is, the more effectively the delay slows down the exponential decay of the gradient norm. These qualitative conclusions can already be derived by studying the linear time-delayed RNN, which we consider in the main text for simplicity.
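The effect of the delay buffer on gradient decay can be illustrated on the scalar linear time-delayed recursion $h_n = \lambda h_{n-1} + \mu h_{n-m-1}$, whose gradient $\partial h_n/\partial h_0$ obeys the same recursion. The following sketch (with illustrative values $\lambda = 0.7$, $\mu = 0.2$, not taken from the paper) compares the gradient magnitude with and without the delayed term:

```python
import numpy as np

def grad_norm(lam, mu, m, n):
    """|dh_n/dh_0| for the scalar linear recursion h_n = lam*h_{n-1} + mu*h_{n-m-1}."""
    # p[j] = dh_j/dh_0, with dh_0/dh_0 = 1 and h_j = 0 (hence zero gradient) for j < 0.
    p = np.zeros(n + 1)
    p[0] = 1.0
    for j in range(1, n + 1):
        p[j] = lam * p[j - 1] + (mu * p[j - m - 1] if j - m - 1 >= 0 else 0.0)
    return abs(p[n])

lam, mu, n = 0.7, 0.2, 50
no_delay = grad_norm(lam, 0.0, 1, n)   # plain contraction: decays like lam**n
with_delay = grad_norm(lam, mu, 5, n)  # delay buffer feeds old states back in
print(no_delay, with_delay)            # the delayed gradient decays much more slowly
```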
\section{Approximation Capability of Time-Delayed RNNs} \label{app:uat}
RNNs (without delay) have been shown to be universal approximators of a large class of open dynamical systems \cite{schafer2006recurrent}. Analogously, RNNs with delay can be shown to be universal approximators of open dynamical systems with~delay.
Let $m > 0$ (time lag) and consider the state space models (which, in this section, we shall simply refer to as delayed RNNs) of the form:
\begin{align} \label{eq_gendRNN}
s_{n+1} &= f(As_n + B s_{n-m} + Cu_n + b), \nonumber \\
r_n &= Ds_n,
\end{align}
and dynamical systems of the form
\begin{align} \label{ds_approx}
x_{n+1} &= g(x_n, x_{n-m}, u_n), \nonumber \\
o_n &= o(x_n),
\end{align}
for $n = 0,1,\dots, N$.
Here $u_n \in \RR^{d_u}$ is the input, $o_n \in \RR^{d_o}$ is the target output of the dynamical system to be learned, $s_n \in \RR^d$ is the hidden state of the learning model, $r_n \in \RR^{d_o}$ is the model output, $f$ is the tanh function applied component-wise, the maps $g$ and $o$ are Lipschitz continuous, and the matrices $A$, $B$, $C$, $D$ and the vector $b$ are learnable parameters. For simplicity, we take the initial functions to be $s_n = x_n = 0$ for $n = -m, -m+1, \dots, 0$.
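For concreteness, the delayed RNN \eqref{eq_gendRNN} can be simulated directly; the following sketch (with hypothetical random parameters and inputs) rolls out the hidden states with the zero initial segment $s_{-m} = \dots = s_0 = 0$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, d_u, q, m, N = 6, 2, 1, 3, 40
# Hypothetical parameters; in practice A, B, C, b, D are learned.
A = 0.5 * rng.standard_normal((d, d))
B = 0.5 * rng.standard_normal((d, d))
C = rng.standard_normal((d, d_u))
b = rng.standard_normal(d)
D = rng.standard_normal((q, d))
u = rng.standard_normal((N, d_u))

s = np.zeros((N + 1 + m, d))  # rows 0..m hold the zero initial segment s_{-m},...,s_0
r = np.zeros((N + 1, q))
for n in range(N):
    # s_{n+1} = f(A s_n + B s_{n-m} + C u_n + b), with f = tanh applied component-wise
    s[n + 1 + m] = np.tanh(A @ s[n + m] + B @ s[n] + C @ u[n] + b)
    r[n + 1] = D @ s[n + 1 + m]
```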
The following theorem shows that the delayed RNNs \eqref{eq_gendRNN} are capable of approximating a large class of time-delay dynamical systems, of the form \eqref{ds_approx}, to arbitrary accuracy.
\begin{theorem} \label{thm_uat}
Assume that there exists a constant $R > 0$ such that $\max(\|x_{n+1}\|, \|u_n\|) < R$ for $n = 0,1,\dots, N$. Then, for a given $\epsilon > 0$, there exists a delayed RNN of the form \eqref{eq_gendRNN}, for some hidden dimension $d$, such that:
\begin{equation} \label{thm_bd}
\|r_n - o_n \| \leq \epsilon,
\end{equation}
for $n = 0,1,\dots, N$.
\end{theorem}
\begin{proof}
The proof proceeds along the lines of \cite{schafer2006recurrent, rusch2022long} (in particular, Section E.4 of \cite{rusch2022long}), using the universal approximation theorem (UAT) for feedforward neural networks, with straightforward modifications to handle the extra delay variables $s_{n-m}$ and $x_{n-m}$.
The goal is to construct hidden states, output state, weight matrices and bias vectors such that an output of the delayed RNN approximates the dynamical system \eqref{ds_approx}.
Let $\epsilon > \epsilon^* > 0$ and $R^* > R \gg 1$ be parameters to be fixed later. Then, using the UAT for continuous functions with neural networks with the tanh activation function \cite{barron1993universal}, we obtain the following statement. Given an $\epsilon^*$, there exist weight matrices $W_1$, $W_2$, $W_3$, $V_1$ and a bias vector $b_1$ of appropriate dimensions such that the neural network defined by $\mathcal{N}_1(h, \tilde{h}, u) := W_3 \tanh(W_1 h + W_2 \tilde{h} + V_1 u + b_1)$ approximates the underlying function $g$ as follows:
\begin{equation}
\max_{\max (\|h\|, \|\tilde{h} \|, \|u\|) < R^* } \| g(h, \tilde{h}, u) - \mathcal{N}_1(h, \tilde{h}, u) \| \leq \epsilon^*.
\end{equation}
Now, we define the dynamical system:
\begin{align}
p_{n} &= W_3 \tanh(W_1 p_{n-1} + W_2 p_{n-m-1} + V_1 u_{n-1} + b_1),
\end{align}
with $p_i = 0$ for $i = -m, -m+1, \dots, 0$.
Then using the above approximation bound, we obtain, for $n=1, \dots, N+1$,
\begin{align}
\|x_{n} - p_{n} \| &= \| g(x_{n-1}, x_{n-m-1}, u_{n-1}) - p_n \| \\
&= \|g(x_{n-1}, x_{n-m-1}, u_{n-1}) - W_3 \tanh(W_1 p_{n-1} + W_2 p_{n-m-1} + V_1 u_{n-1} + b_1)\| \\
&\leq \|g(x_{n-1}, x_{n-m-1}, u_{n-1}) - g(p_{n-1}, p_{n-m-1}, u_{n-1}) \| \nonumber \\
&\ \ \ \ \ + \| g(p_{n-1}, p_{n-m-1}, u_{n-1}) - W_3 \tanh(W_1 p_{n-1} + W_2 p_{n-m-1} + V_1 u_{n-1} + b_1)\| \\
&\leq Lip(g) (\| x_{n-1} - p_{n-1} \| + \|x_{n-m-1} - p_{n-m-1}\|) + \epsilon^*,
\end{align}
where $Lip(g)$ is the Lipschitz constant of $g$ on the compact set $\{ (h,\tilde{h}, u): \| h\|, \|\tilde{h}\|, \|u\| < R^* \}$.
Iterating the above inequality over $n$ leads to:
\begin{equation}
\|x_n - p_n \|\leq \epsilon^* C_1(n, m, Lip(g)),
\end{equation}
for some constant $C_1>0$ that is dependent on $n, m, Lip(g)$.
Using the Lipschitz continuity of the output function $o$, we obtain:
\begin{equation} \label{out_b}
\| o_n - o(p_n) \| \leq \epsilon^*C_2(n, m, Lip(g), Lip(o)),
\end{equation}
for some constant $C_2$ that is dependent on $n, m, Lip(g), Lip(o)$, where $Lip(o)$ is the Lipschitz constant of $o$ on the compact set $\{ h : \| h\| < R^* \}$.
Next we use the UAT for neural networks again to obtain the following approximation result. Given an $\overline{\epsilon}$, there exist weight matrices $W_4, W_5$ and a bias vector $b_2$ of appropriate dimensions such that the tanh neural network $\mathcal{N}_2(h) := W_5 \tanh(W_4 h + b_2)$ approximates the underlying output function $o$ as:
\begin{equation}
\max_{\|h\| < R^*} \| o(h) - \mathcal{N}_2(h) \| \leq \overline{\epsilon}.
\end{equation}
Defining $\overline{o}_n = W_5 \tanh(W_4 p_n + b_2)$, we obtain, using the above approximation bound and the inequality \eqref{out_b}:
\begin{equation} \label{o_b_2}
\|o_n - \overline{o}_n \| \leq \|o_n - o(p_n) \| + \| o(p_n) - \overline{o}_n \| \leq \epsilon^* C_2(n, m, Lip(g), Lip(o)) + \overline{\epsilon}.
\end{equation}
Now, let us denote:
\begin{align}
\tilde{p}_{n} &= \tanh(W_1 p_{n-1} + W_2 p_{n-m-1} + V_1 u_{n-1} + b_1),
\end{align}
so that $p_n = W_3 \tilde{p}_n$. With this notation, we have:
\begin{equation}
\overline{o}_n = W_5 \tanh(W_4 W_3 \tanh(W_1 W_3 \tilde{p}_{n-1} + W_2 W_3 \tilde{p}_{n-m-1} + V_1 u_{n-1} + b_1) + b_2).
\end{equation}
Since the function $R(y) = W_5 \tanh(W_4 W_3 \tanh(W_1 W_3 y + W_2 W_3 \tilde{p}_{n-m-1} + V_1 u_{n-1} + b_1) + b_2)$ is Lipschitz continuous in $y$, we can apply the UAT again to obtain: for any $\tilde{\epsilon}$, there exist weight matrices $W_6, W_7$ and a bias vector $b_3$ of appropriate dimensions such that
\begin{equation}
\max_{\|y\| < R^*} \| R(y) - W_7 \tanh(W_6 y + b_3) \| \leq \tilde{\epsilon}.
\end{equation}
Denoting $\tilde{o}_n := W_7 \tanh(W_6 p_{n-1} + b_3)$ and using the above approximation bound, we obtain $\|\overline{o}_n - \tilde{o}_n \| \leq \tilde{\epsilon}$.
Finally, we collect all the ingredients above to construct a delayed RNN that can approximate the dynamical system \eqref{ds_approx}.
To this end, we define the hidden states (in an enlarged state space): $s_n := (\tilde{p}_n, \hat{p}_n)$, with $\tilde{p}_n$, $\hat{p}_n$ sharing the same dimension.
These hidden states evolve according to the dynamical system:
\begin{align}
s_n &= \tanh\left(
\left[ {\begin{array}{cc}
W_1 W_3 & 0 \\
W_6 W_3 & 0 \\
\end{array} } \right] s_{n-1} + \left[ {\begin{array}{cc}
W_2 W_3 & 0 \\
0 & 0 \\
\end{array} } \right] s_{n-m-1} + \left[ {\begin{array}{c}
V_1 u_{n-1} \\
0 \\
\end{array} } \right] + \left[ {\begin{array}{c}
b_1 \\
b_3 \\
\end{array} } \right]
\right).
\end{align}
Defining the output state as $r_n := [0, W_7] s_n$, with the $s_n$ satisfying the above system, we arrive at a delayed RNN that approximates the dynamical system \eqref{ds_approx}. In fact, we can verify that $r_n = \tilde{o}_n$.
Since $\|r_n - o_n\| \leq \|o_n - \overline{o}_n \| + \| \overline{o}_n - \tilde{o}_n \|$, setting $\overline{\epsilon} < \epsilon/3$, $\tilde{\epsilon} < \epsilon/3$ and $\epsilon^* < \epsilon/(3 C_2(n, m, Lip(g), Lip(o)))$ gives us the bound \eqref{thm_bd} in the theorem.
\end{proof}
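The enlarged-state construction at the end of the proof can be sanity-checked numerically. The sketch below (with hypothetical random UAT weights, and with the bias $b_3$ carried in the second block of the bias vector so that $r_n = \tilde{o}_n$ holds) verifies that the delayed RNN output matches the direct computation of $\tilde{o}_n$:

```python
import numpy as np

rng = np.random.default_rng(2)
dp, du, do, m, N = 4, 2, 1, 2, 30
# Hypothetical UAT weights; the identity r_n = o~_n holds for any values.
W1, W2, W3 = (0.5 * rng.standard_normal((dp, dp)) for _ in range(3))
V1, b1 = rng.standard_normal((dp, du)), rng.standard_normal(dp)
W6, b3 = 0.5 * rng.standard_normal((dp, dp)), rng.standard_normal(dp)
W7 = rng.standard_normal((do, dp))
u = rng.standard_normal((N, du))

# Direct recursion p_n = W3 tanh(W1 p_{n-1} + W2 p_{n-m-1} + V1 u_{n-1} + b1),
# followed by o~_n = W7 tanh(W6 p_{n-1} + b3).
p = np.zeros((N + 1 + m, dp))  # rows 0..m hold p_{-m},...,p_0 = 0
for n in range(1, N + 1):
    p[n + m] = W3 @ np.tanh(W1 @ p[n - 1 + m] + W2 @ p[n - 1] + V1 @ u[n - 1] + b1)
o_tilde = np.array([W7 @ np.tanh(W6 @ p[n - 1 + m] + b3) for n in range(1, N + 1)])

# Enlarged-state delayed RNN with s_n = (p~_n, p^_n) and output r_n = [0, W7] s_n.
Z = np.zeros((dp, dp))
Amat = np.block([[W1 @ W3, Z], [W6 @ W3, Z]])
Bmat = np.block([[W2 @ W3, Z], [Z, Z]])
Cmat = np.vstack([V1, np.zeros((dp, du))])
bias = np.concatenate([b1, b3])  # b3 carried in the second block
s = np.zeros((N + 1 + m, 2 * dp))
r = np.zeros((N, do))
for n in range(1, N + 1):
    s[n + m] = np.tanh(Amat @ s[n - 1 + m] + Bmat @ s[n - 1] + Cmat @ u[n - 1] + bias)
    r[n - 1] = W7 @ s[n + m][dp:]

print(np.allclose(r, o_tilde))  # True: the delayed RNN reproduces o~_n
```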
\section{Additional Details and Experiments}
\label{sect:appD}
In this section, we provide additional empirical results and details to demonstrate the advantages of $\tau$-GRU when compared to other RNN architectures.
\begin{table}[!b]
\caption{Results for Google12. Results indicated by $^*$ are produced by us; results indicated by $^+$ are from~\cite{rusch2022long}. }
\label{tab:results_google12}
\centering
\scalebox{0.9}{
\begin{tabular}{l c c ccccccc}
\toprule
Model & Test Accuracy ($\%$) & \# units & \# param \\
\midrule
tanh RNN~\cite{rusch2022long}$^+$ & 73.4 & 128 & 27k \\
LSTM~\cite{rusch2022long}$^+$ & 94.9 & 128 & {107k}\\
GRU~\cite{rusch2022long}$^+$ & 95.2 & 128 & 80k \\
AsymRNN~\cite{chang2018antisymmetricrnn}$^+$ & 90.2 & 128 & 20k \\
expRNN~\cite{lezcano2019cheap}$^+$ & 92.3 & 128 & 19k \\
coRNN~\cite{rusch2021coupled}$^+$ & 94.7 & 128 & 44k \\
Fast GRNN~\cite{kusupati2018fastgrnn}$^+$ & 94.8 & 128 & 27k \\
LEM~\cite{rusch2022long} & 95.7 & 128 & {107k} \\
Lipschitz RNN~\cite{erichson2020lipschitz}$^*$ & 95.6 & 128 & 34k\\
Noisy RNN~\cite{lim2021noisy}$^*$ & 95.7 & 128 & 34k\\
iRNN~\cite{Kag2020RNNs}$^*$ & 95.1 & - & {8.5k} \\
TARNN~\cite{kag2021time}$^*$ & 95.9 & 128 & {107k} \\
\midrule
\textbf{ours} & \textbf{96.2} & 128 & {107k} \\
\bottomrule
\end{tabular}}
\end{table}
\subsection{Speech Recognition: Google 12}
Here, we consider the Google Speech Commands data set V2~\cite{warden2018speech} to demonstrate the performance of our model for speech recognition.
The aim of this task is to learn a model that can classify a short audio sequence, sampled at a rate of 16 kHz from one-second utterances of $2,618$ speakers.
We consider the Google 12-label task (Google12), which is composed of 10 keyword classes, one class corresponding to `silence', and one class corresponding to `unknown' keywords.
We adopt the standard train/validation/test set split for evaluating our model, and we use dropout, applied to the inputs, with rate 0.03 to reduce overfitting.
Table~\ref{tab:results_google12} presents the results for our $\tau$-GRU and a number of competitive RNN architectures.
We adopt the results for the competing RNNs from~\cite{rusch2022long}. Our proposed $\tau$-GRU shows the best performance, i.e., it outperforms gated and continuous-time RNNs on this task, which requires an expressive recurrent unit.
\subsection{Learning the Dynamics of Mackey-Glass System}
Here, we consider the task of learning the Mackey-Glass equation, originally introduced in \cite{mackey1977oscillation} to model the variation in the relative quantity of mature cells in the blood:
\begin{equation}
\dot{x} = a \frac{x(t-\delta)}{1 + x^n(t-\delta)} - b x(t), \ t \geq \delta,
\end{equation}
where $\delta \geq 17$, $a, b, n > 0$, and $x$ satisfies $\dot{x} = a x(0)/(1 + x(0)^n) - b x$ for $t \in [0, \delta]$. It is a scalar equation with an infinite-dimensional state space and chaotic dynamics. Increasing the value of $\delta$ increases the dimension of the~attractor.
For data generation, we choose $a = 0.2$, $b = 0.1$, $n=10$, $\delta = 17$, $x(0) \sim \rm{Unif}(0,1)$, and use the classical Runge-Kutta method (RK4) to integrate the system numerically from $t=0$ to $t = 1000$ with a step-size of 0.25. The training and testing samples are the time series (of length 2000) generated by the RK4 scheme on the interval $[500,1000]$ for different realizations of $x(0)$. Figure \ref{fig:data_plot} shows a realization of the trajectory produced by the Mackey-Glass system (and also the DDE based ENSO system considered in the main text).
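For reference, data generation along these lines can be sketched as follows. This is a simplified integrator, not necessarily the one used for the experiments: since the RK4 half-steps require the delayed state between grid points, the sketch linearly interpolates the stored history there, and it freezes the delayed argument at $x(0)$ on $[0, \delta]$ as in the equation above:

```python
import numpy as np

def mackey_glass(a=0.2, b=0.1, n_exp=10, delta=17.0, dt=0.25, t_max=1000.0, x0=0.5):
    """RK4 sketch for dx/dt = a*x(t-delta)/(1+x(t-delta)^n) - b*x(t).
    The delayed value at the half-step is linearly interpolated from the stored
    grid, and the delayed argument is frozen at x(0) on [0, delta]."""
    lag = int(round(delta / dt))      # number of grid points spanning the delay
    steps = int(round(t_max / dt))
    x = np.empty(steps + 1)
    x[0] = x0

    def f(xt, xd):
        return a * xd / (1.0 + xd ** n_exp) - b * xt

    for k in range(steps):
        if k < lag:                   # t - delta < 0: frozen history
            xd0 = xd_half = xd1 = x0
        else:
            xd0, xd1 = x[k - lag], x[k - lag + 1]
            xd_half = 0.5 * (xd0 + xd1)
        k1 = f(x[k], xd0)
        k2 = f(x[k] + 0.5 * dt * k1, xd_half)
        k3 = f(x[k] + 0.5 * dt * k2, xd_half)
        k4 = f(x[k] + dt * k3, xd1)
        x[k + 1] = x[k] + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

traj = mackey_glass(x0=np.random.default_rng(3).uniform(0, 1))
series = traj[int(500 / 0.25):]       # keep the portion on [500, 1000]
```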
\begin{figure}[!h]
\centering
\includegraphics[width=0.44\textwidth]{figs/mg_plot.png}
\includegraphics[width=0.44\textwidth]{figs/mz_plot.png}
\caption{A realization of the Mackey-Glass dynamics (left) and the DDE based ENSO dynamics (right). }
\label{fig:data_plot}
\end{figure}
Table \ref{tab:results_MG} shows that our $\tau$-GRU model (with $\alpha = \beta = 1$ and using $\tau = 10$) is more effective in learning the Mackey-Glass system than other RNN architectures. We also see that the predictive performance deteriorates without making full use of the combination of the standard recurrent unit and the delay recurrent unit (i.e., setting either $\alpha$ or $\beta$ to zero). Moreover, $\tau$-GRU demonstrates improved performance when compared to the simple delay GRU model (Eq. \eqref{eq_simpleDRNN}) and the counterpart model without gating. A similar observation also holds for the ENSO prediction task; see Table \ref{tab:add_results_ENSO}.
Figure \ref{fig:mg_traintest} shows that our model converges much faster than other RNN models during training. In particular, our model is able to achieve both lower training and testing error (as measured by the root mean square error (RMSE)) with fewer epochs, demonstrating the effectiveness of the delay mechanism in improving performance on problems with long-term dependencies. This is consistent with our analysis of how gradient
information is propagated through the delay buffers in the network (see Proposition \ref{prop_delaymain}), suggesting that the delay buffers can propagate gradients more efficiently. Similar behavior is also observed for the ENSO task; see Figure \ref{fig:mz_traintest}.
\begin{table}[!h]
\caption{Additional results for the ENSO model prediction.}
\label{tab:add_results_ENSO}
\centering
\scalebox{0.9}{
\begin{tabular}{l c c ccccccc}
\toprule
Model & MSE ($\times 10^{-2}$) & \# units & \# parameters \\
\midrule
simple delay GRU (Eq. \eqref{eq_simpleDRNN}) & 0.2317 & 16 & 0.897k \\
ablation (no gating) & 0.4289 & 16 & 0.929k \\
\midrule
\textbf{$\tau$-GRU (ours)} & \textbf{0.17} & 16 & 1.2k \\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[!t]
\caption{Results for the Mackey-Glass system prediction.}
\label{tab:results_MG}
\centering
\scalebox{0.9}{
\begin{tabular}{l c c ccccccc}
\toprule
Model & MSE ($\times 10^{-2}$) & \# units & \# parameters \\
\midrule
Vanilla RNN & 0.3903 & 16 & 0.321k \\
LSTM & 0.6679 & 16 & 1.233k \\
GRU & 0.4351 & 16 & 0.929k \\
Lipschitz RNN & 8.9718 & 16 & 0.561k \\
coRNN & 1.6835 & 16 & 0.561k \\
LEM & 0.1430 & 16 & 1.233k \\
\midrule
simple delay GRU (Eq. \eqref{eq_simpleDRNN}) & 0.2772 & 16 & 0.897k \\
ablation (no gating) & 0.2765 & 16 & 0.929k \\
ablation ($\alpha = 0$) & 0.1553 & 16 & 0.625k \\
ablation ($\beta = 0$) & 0.2976 & 16 & 0.929k \\
\midrule
\textbf{$\tau$-GRU (ours)} & \textbf{0.1358} & 16 & 1.233k \\
\bottomrule
\end{tabular}}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[width=0.44\textwidth]{figs/mg_trainrmse.png}
\includegraphics[width=0.44\textwidth]{figs/mg_testrmse.png}
\caption{Train RMSE (left) and test RMSE (right) vs. epoch for the Mackey-Glass learning task.}
\label{fig:mg_traintest}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.44\textwidth]{figs/mz_trainrmse.png}
\includegraphics[width=0.44\textwidth]{figs/mz_testrmse.png}
\caption{Train RMSE (left) and test RMSE (right) vs. epoch for the ENSO learning task.}
\label{fig:mz_traintest}
\end{figure}
\clearpage
\section{Tuning Parameters}
To tune our $\tau$-GRU, we use a non-exhaustive random search within plausible ranges, with $\tau \in \{5,\dots,200\}$. We used Adam as our optimization algorithm for all of the experiments. For the synthetic data sets generated by the ENSO and Mackey-Glass systems, we used a learning rate of 0.01. For the other experiments, we considered learning rates between 0.01 and 0.0005. We used dropout for the IMDB and Google12 tasks to avoid overfitting.
Table~\ref{tab:tuning} lists the tuning parameters for the different tasks that we considered in this work.
\begin{table}[!h]
\caption{Summary of tuning parameters.}
\label{tab:tuning}
\centering
\scalebox{0.85}{
\begin{tabular}{l c c c c c c c c }
\toprule
Name & d & lr & $\tau$ & dropout & epochs \\
\midrule
Adding Task $N=2000$ & 128 & 0.0026 & 900 & - & 200 \\
Adding Task $N=5000$ & 128 & 0.002 & 2000 & - & 200 \\
\midrule
IMDB & 128 & 0.0012 & 1 & 0.04 & 30 \\
\midrule
HAR-2 & 128 & 0.00153 & 10 & - & 100 \\
\midrule
sMNIST & 128 & 0.0018 & 50 & - & 60 \\
\midrule
psMNIST & 128 & 0.0055 & 65 & - & 80 \\
\midrule
sCIFAR & 128 & 0.0035 & 30 & - & 50 \\
\midrule
nCIFAR & 128 & 0.0022 & 965 & - & 50 \\
\midrule
Google12 & 128 & 0.00089 & 5 & 0.03 & 60 \\
\midrule
ENSO & 16 & 0.01 & 20 & - & 400 \\
\midrule
Mackey-Glass system & 16 & 0.01 & 10 & - & 400 \\
\bottomrule
\end{tabular}}
\end{table}
\paragraph{Sensitivity to Random Initialization.}
We evaluate our models for each task using 8 seeds. The maximum, minimum, and average values, as well as the standard deviations, obtained for each task are tabulated in Table~\ref{tab:minmax}.
\begin{table}[h]
\caption{Sensitivity to random initialization evaluated over 8 runs with different seeds.}
\label{tab:minmax}
\centering
\scalebox{0.91}{
\begin{tabular}{l c c c c c c c c}
\toprule
Task & Maximum & Minimum & Average & Standard dev. & d \\
\midrule
IMDB & 88.7 & 86.2 & 87.9 & 0.82 & 128 \\
HAR-2 & 97.4 & 96.6 & 96.9 & 0.41 & 128 \\
sMNIST & 99.4 & 99.1 & 99.3 & 0.08 & 128 \\
psMNIST & 97.2 & 96.0 & 96.8 & 0.39 & 128 \\
sCIFAR & 74.9 & 72.65 & 73.54 & 0.90 & 128 \\
nCIFAR & 62.7 & 61.7 & 62.3 & 0.32 & 128 \\
Google12 & 96.2 & 95.7 & 95.9 & 0.17 & 128 \\
\bottomrule
\end{tabular}}
\end{table}
\section{Introduction}
Recurrent neural networks (RNNs) and their variants are flexible gradient-based methods specially designed to model sequential data.
Models of this type can be viewed as dynamical systems whose temporal evolution is governed by a system of differential equations driven by an external input.
Indeed, there is a long-standing tradition to formulate continuous-time variants of RNNs~\cite{pineda1988dynamics}.
In this setting, the data are formulated in continuous-time, i.e., inputs are defined by the function ${\bf x} = {\bf x}(t) \in \mathbb{R}^p$ and targets are defined as ${\bf y} = {\bf y}(t) \in \mathbb{R}^q$.
In this way, one can, for instance, employ a nonautonomous ordinary differential equation (ODE) to model the dynamics of the hidden states ${\bf h}(t)\in\mathbb{R}^d$, where $t$ denotes continuous time, as
\begin{equation*}
\frac{d {\bf h}(t)}{d t}\,\,\, = \,\,\, f({\bf h}(t),\, {\bf x}(t);\, \boldsymbol{\theta}).
\end{equation*}
Here, $f: \mathbb{R}^d\times \mathbb{R}^p \rightarrow \mathbb{R}^d$ is a function which is parameterized by a neural network (NN) with the learnable weights $\boldsymbol{\theta}$.
A prototypical choice for $f$ is the $\tanh$ recurrent unit:
\begin{equation*}
f({\bf h}(t),\, {\bf x}(t); \boldsymbol{\theta}) := \tanh({\bf W} {\bf h}(t) + {\bf U} {\bf x}(t) + {\bf b}),
\end{equation*}
where ${\bf W} \in \mathbb{R}^{d\times d}$ denotes a hidden-to-hidden weight matrix, ${\bf U}\in \mathbb{R}^{d \times p}$ an input-to-hidden weight matrix, and ${\bf b}$ a bias term.
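As a concrete illustration (not part of the proposed model), this ODE with the $\tanh$ recurrent unit can be rolled out with an explicit Euler discretization; the weights below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(4)
d, p, dt, N = 8, 3, 0.1, 100
W = rng.standard_normal((d, d)) / np.sqrt(d)  # hidden-to-hidden weights (placeholder)
U = rng.standard_normal((d, p))               # input-to-hidden weights (placeholder)
b = np.zeros(d)                               # bias term
x = rng.standard_normal((N, p))               # sampled input signal x(t_n)

h = np.zeros((N + 1, d))
for n in range(N):
    # Explicit Euler step of dh/dt = tanh(W h + U x + b)
    h[n + 1] = h[n] + dt * np.tanh(W @ h[n] + U @ x[n] + b)
```

Since $\tanh$ is bounded by one, each Euler increment is bounded by $\Delta t$ component-wise.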
With this continuous-time formulation in hand, one can then use tools from dynamical systems theory to study the dynamical behavior of the model as well as to motivate mechanisms that can prevent rapidly diverging or converging dynamics.
For instance,~\cite{chang2018antisymmetricrnn} proposed a parametrization of the hidden-to-hidden matrix as an antisymmetric matrix to ensure stable hidden state dynamics, and~\cite{erichson2020lipschitz} relaxed this idea to improve model expressivity.
More recently,~\cite{rusch2022long} proposed an RNN architecture based on a suitable time-discretization of a set of coupled multiscale ODEs.
\begin{figure*}[!t]
\begin{CatchyBox}{$\tau$-GRU}\vspace{+0.3cm}
\textbf{Continuous-time formulation of $\tau$-GRU:}\\
\begin{minipage}[h]{0.95\linewidth}
\begin{align}
\label{eq:tdRNN_de}
\frac{d \, {\bf h}(t)}{dt}
= \underbrace{u({\bf h}(t),\, {\bf x}(t))}_\text{instantaneous dynamics} \, + \,\,\, \underbrace{\textcolor{wine}{a({\bf h}(t), {\bf x}(t))} \odot \textcolor{darkcyan}{z({\bf h}(t-\tau),\, {\bf x}(t))}}_\text{weighted time-delayed feedback} \,\,\, - \,\,\, {\bf h}(t)
\end{align}
\end{minipage}\vspace{+0.3cm}
\textbf{Discrete-time formulation of $\tau$-GRU:}\\
\begin{minipage}[h]{0.55\linewidth}
\begin{equation}
{\bf h}_{n+1} = (1-{\bf g}_n) \odot {\bf h}_n + {\bf g}_n \odot ({\bf u}_n + \textcolor{wine}{{\bf a}_n} \odot \textcolor{darkcyan}{{\bf z}_n}) \label{eq:ourdRNN}
\end{equation}
%
with
%
\vspace{-0.3cm}
\begin{align}
{\bf u}_n & = u({\bf h}_n, {\bf x}_n) := \text{tanh}({\bf W}_1 {\bf h}_n + {\bf U}_1 {\bf x}_n) \label{eq:3} \\
\textcolor{darkcyan}{{\bf z}_n} & = \textcolor{darkcyan}{z({\bf h}_{l}, {\bf x}_n)} := \text{tanh}({\bf W}_2 {\bf h}_{l} + {\bf U}_2 {\bf x}_{n}) \\
{\bf g}_n & = g({\bf h}_n, {\bf x}_n) := \text{sigmoid}({\bf W}_3 {\bf h}_n + {\bf U}_3 {\bf x}_n) \\
\textcolor{wine}{{\bf a}_n} & = \textcolor{wine}{a({\bf h}_n, {\bf x}_n)} :=\text{sigmoid}({\bf W}_4 {\bf h}_{n} + {\bf U}_4 {\bf x}_{n}) \label{eq:6}
\end{align}
%
\hfill
\end{minipage}
%
\hfill
\begin{minipage}[h]{.51\linewidth}\small
\centering
\begin{tabular}{l|c|c}
input & ${\bf x}$ & $\mathbb{R}^{p}$\\\hline
time index & $t$ & $\mathbb{R}$ \\\hline
time delay & $\tau$ & $\mathbb{R}$ \\\hline
hidden state & ${\bf h}$ & $\mathbb{R}^{d}$\\\hline
hidden-to-hidden matrix & ${\bf W}_i$ & $\mathbb{R}^{d \times d}$\\\hline
input-to-hidden matrix & ${\bf U}_i$ & $\mathbb{R}^{d\times p}$\\\hline
decoder matrix & ${\bf V}$ & $\mathbb{R}^{q\times d}$\\\hline
\end{tabular} \\
\vspace{0.2cm}
${\bf h}_n \approx {\bf h}(t_n)$, $t_n = n \Delta t$, $n=0,1,\dots$ \\
\vspace{0.1cm}
$l := n - \lfloor \tau/\Delta t \rfloor$
\end{minipage}
\end{CatchyBox}
\end{figure*}
In this work, we consider using input-driven delay differential equations (DDEs) to model the dynamics of the hidden states:
\begin{align*}
\frac{d {\bf h}(t)}{d t} \,\,\, = \,\,\, f({\bf h}(t),\, {\bf h}(t-\tau),\, {\bf x}(t);\, \boldsymbol{\theta}),
\end{align*}
where $\tau$ is a constant that indicates the delay (i.e., time-lag). Here, the time derivative is described by a function $f: \mathbb{R}^d\times \mathbb{R}^d\times \mathbb{R}^p \rightarrow \mathbb{R}^d$ that explicitly depends on states from the past.
In prior work~\cite{lin1996learning}, it has been shown that delay units can greatly improve performance on long-term dependency problems \cite{pascanu2013difficulty}, i.e., problems for which the desired model output depends on inputs presented at times far in the past.
In more detail, we propose a novel continuous-time recurrent unit, given in Eq.~\eqref{eq:tdRNN_de}, that is composed of two parts:
(i) a component $u({\bf h}(t),{\bf x}(t))$ that explicitly models instantaneous dynamics; and (ii) a component $z({\bf h}(t-\tau), {\bf x}(t))$ that provides time-delayed feedback to account for non-instantaneous dynamics.
The feedback also helps to propagate gradient information more efficiently, thus lessening the issue of vanishing gradients.
In addition, we introduce $a({\bf h}(t),{\bf x}(t))$ to weight the importance of the feedback component-wise, which helps to better model different time scales.
By considering a suitable time-discretization scheme of this continuous-time setup, we obtain a gated recurrent unit (GRU), given in Eq. \eqref{eq:ourdRNN}, which we call $\tau$-GRU. The individual parts are described by Eqs.~\eqref{eq:3}--\eqref{eq:6}, where ${\bf g}_n$ and ${\bf a}_n$ resemble commonly used gating functions.
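A minimal sketch of the resulting discrete update is given below; the weights are hypothetical random placeholders, and the delay is realized by a buffer holding the last $m+1 = \lfloor \tau/\Delta t\rfloor + 1$ hidden states:

```python
import numpy as np
from collections import deque

def tau_gru_step(h, h_delayed, x, params):
    """One step of the update h_{n+1} = (1-g_n)*h_n + g_n*(u_n + a_n*z_n)."""
    W1, U1, W2, U2, W3, U3, W4, U4 = params
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    u = np.tanh(W1 @ h + U1 @ x)           # instantaneous unit
    z = np.tanh(W2 @ h_delayed + U2 @ x)   # delayed feedback, uses h_{n-m}
    g = sigmoid(W3 @ h + U3 @ x)           # gate
    a = sigmoid(W4 @ h + U4 @ x)           # component-wise feedback weight
    return (1 - g) * h + g * (u + a * z)

rng = np.random.default_rng(5)
d, p, m = 8, 3, 4                          # m = floor(tau / dt)
params = tuple(rng.standard_normal((d, d)) if i % 2 == 0 else rng.standard_normal((d, p))
               for i in range(8))          # hypothetical random weights
buffer = deque([np.zeros(d)] * (m + 1), maxlen=m + 1)  # holds h_{n-m}, ..., h_n
h = np.zeros(d)
for x in rng.standard_normal((50, p)):
    h = tau_gru_step(h, buffer[0], x, params)  # buffer[0] is h_{n-m}
    buffer.append(h)                           # oldest entry drops off automatically
```

Because the update is a convex combination of $h_n$ and a term bounded by two, the hidden state stays in $[-2,2]$ (cf. Lemma \ref{app_lem}).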
\paragraph{Main Contributions.}
Here are our main contributions.
\begin{itemize}
\item {\bf Design.}
We introduce a novel gated recurrent unit, which we call $\tau$-GRU, that incorporates a weighted time-delay feedback mechanism to lessen the vanishing gradients issue. This model is motivated by DDEs, and it is obtained by discretizing the continuous Eq. \eqref{eq:tdRNN_de}.
\item {\bf Theory.}
We show that the continuous-time $\tau$-GRU model has a unique well-defined solution (see Theorem~\ref{thm_exist2main}).
Moreover, we provide intuition and analysis to understand how the introduction of delays in $\tau$-GRU can act as a buffer to help lessen the vanishing gradients problem, thus improving the ability to retain information far in the past.
See Proposition \ref{prop_delaymain} for a simplified setting and Proposition \ref{app_prop} for $\tau$-GRU.
\item {\bf Experiments.}
We provide empirical results to demonstrate the superior performance of $\tau$-GRU, when compared to other RNN architectures, on a variety of benchmark tasks.
In particular, we show that $\tau$-GRU converges faster during training and can achieve improved generalization performance. Moreover, we demonstrate that it provides favorable trade-offs between effectiveness in dealing with long-term dependencies and expressivity in the considered tasks.
See Figure \ref{fig:intro_fig} for an illustration of this.
\end{itemize}
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{figs/e_vs_l_overview.pdf}
\caption{Test accuracy for nCIFAR~\cite{chang2018antisymmetricrnn} versus Google-12~\cite{warden2018speech}. nCIFAR requires a recurrent unit with long-term dependency capabilities, while Google-12 requires a highly expressive unit. Our proposed $\tau$-GRU is able to improve performance on both tasks, relative to existing state-of-the-art alternatives, including LEM~\cite{rusch2022long}.}
\label{fig:intro_fig}
\end{figure}
\section{Related Work}
Here, we discuss recent RNN advances that have been demonstrated to outperform classic architectures such as Long Short-Term Memory (LSTM) networks~\cite{hochreiter1997long} and Gated Recurrent Units (GRUs)~\cite{cho2014properties}.
We also briefly discuss prior works on incorporating delays into neural architectures. (We omit a detailed discussion of other recent deep learning models for sequential problems that leverage dynamical system theory:~\cite{erichson2019physics,azencot2020forecasting,gu2021combining,smith2022simplified,hasani2022liquid}.) \\
\noindent
\textbf{Unitary and orthogonal RNN.} \
The seminal work by~\cite{arjovsky2016unitary} introduced a recurrent unit where the hidden weight matrix is constructed as the product of unitary matrices.
This enforces that the eigenvalues lie on the unit circle.
This, in turn, prevents vanishing and exploding gradients, thereby enabling the learning of long-term dependencies.
However, such unitary RNNs suffer from limited expressivity, since the construction of the hidden matrix is restrictive~\cite{azencot2021differential}.
Work by~\cite{wisdom2016full} and~\cite{vorontsov2017orthogonality} partially addressed this issue by considering the Cayley transform on skew-symmetric matrices; and
work by~\cite{lezcano2019cheap,lezcano2019trivializations} leveraged skew-Hermitian matrices to parameterize the orthogonal group to improve expressiveness.
The expressiveness of RNNs has been further improved by considering nonnormal hidden matrices~\cite{kerg2019non}. \\
\noindent
\textbf{Continuous-time RNNs.} \
The recent work on Neural ODEs~\cite{chen2018neural} and variants~\cite{kidger2020neural,queiruga2021stateful,xia2021heavy,hasani2022closed} have motivated the formulation of several modern continuous-time RNNs, which are expressive and have good long-term memory.
The work by~\cite{chang2018antisymmetricrnn} used an antisymmetric matrix to parameterize the hidden-to-hidden matrix in order to obtain stable dynamics. This was later relaxed by~\cite{erichson2020lipschitz}. In~\cite{Kag2020RNNs}, a modified differential equation was considered, which allows one to update the hidden states based on the difference between predicted and previous states. Work by \cite{rusch2021coupled} demonstrated that long-term memory can be improved by modeling the hidden dynamics by a second-order system of ODEs, which models a coupled network of controlled forced and damped nonlinear oscillators. Another approach for improving long-term memory was motivated by a time-adaptive discretization of an ODE~\cite{kag2021time}.
The expressiveness of continuous-time RNNs has been further improved by introducing a suitable time-discretization of a set of multiscale ODEs~\cite{rusch2022long}.
It has also been shown that noise-injected RNNs can be viewed as discretizations of stochastic differential equations driven by input data~\cite{lim2021noisy}.
In this case, the noise can help to stabilize the hidden dynamics during training and improve robustness to input perturbations. \\
\noindent
\textbf{Using delays in NNs.}
The idea of introducing delays into NNs goes back to \cite{waibel1989phoneme, lang1990time}, where a time-delay NN was proposed to tackle the phoneme recognition problem.
Several works followed:
\cite{kim1998time} considered a time-delayed RNN model that is suitable for temporal correlations and prediction of chaotic and financial time series; and
delays were also incorporated into and studied within the nonlinear autoregressive with exogenous inputs (NARX) RNNs \cite{lin1996learning}.
More recently, \cite{zhu2021neural} introduced delay terms in Neural ODEs and demonstrated their outstanding approximation capacities.
In particular, the model of \cite{zhu2021neural} is able to learn delayed dynamics where the trajectories in the lower-dimensional phase space could be mutually intersected, while the Neural ODEs~\cite{chen2018neural} are not able to do so.
\section{Method}
In this section, we first provide an introduction to DDEs; then, we motivate the formulation of our DDE-based models, in continuous as well as discrete time; and, finally, we propose a weighted time-delay feedback architecture.
\paragraph{Notation.} $\odot$ denotes the Hadamard product; $| v |$ denotes the norm of a vector $v$; $\| A \|$ denotes the operator norm of a matrix $A$; $\sigma$ and $\hat{\sigma}$ denote the $\tanh$ and sigmoid functions, respectively; and $\lceil x \rceil$ and $\lfloor x \rfloor$ denote the ceiling and floor functions of $x$, respectively.
\subsection{Background on Delay Differential Equations}
DDEs are an important class of dynamical systems that arise in natural and engineering sciences~\cite{smith2011introduction, erneux2009applied, keane2017climate}. In these systems, a feedback term is introduced to adjust the system non-instantaneously, resulting in delays in time. In mathematical terms, the derivative of the system state depends explicitly on the past value of the state variable.
Here, we focus on DDEs with a single discrete delay
\begin{equation} \label{eq_gen_dde}
\dot{h} = F(t, h(t), h(t-\tau)),
\end{equation}
with $\tau > 0$, where $F$ is a continuous function. Due to the presence of the delay term, we need to {\it specify an initial
function} which describes the behavior of the system prior to the initial time 0. For the DDE, it would be a function $\phi$ defined on $[-\tau, 0]$. Hence, a DDE numerical solver must save all the information needed to approximate delayed~terms.
Instead of thinking of the solution of the DDE as a sequence of values of $h$ at increasing values of $t$, as one would do for ODEs, it is more fruitful to view it as a mapping of functions on the interval $[t -\tau,t]$ into functions on the interval $[t,t + \tau]$, i.e., as a sequence of functions defined over a set of contiguous time intervals of length $\tau$.
Since the state of the system at time $t \geq 0$ must contain all the information necessary to determine the solution for future times $s \geq t$, it should contain the initial condition $\phi$.
More precisely, the DDE is a functional differential equation with the state space $C := C([-\tau, 0], \mathbb{R}^d)$. This state space is the Banach space of continuous functions from $[-\tau, 0]$ into $\mathbb{R}^d$, with the topology of uniform convergence. It is equipped with the norm $\|\phi \| := \sup \{ |\phi(\theta)| : \theta \in [-\tau, 0] \}$.
In contrast to ODEs (with $\tau = 0$), whose state space is finite-dimensional, DDEs are generally infinite-dimensional dynamical systems.
Various aspects of DDEs have been studied, including their solution properties \cite{hale2013introduction, asl2003analysis}, dynamics \cite{lepri1994high, baldi1994delays} and stability \cite{marcus1989stability, belair1993stability, liao2002delay, yang2014exponential, park2019dynamic}.
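The need to carry the initial function and the delayed values can be sketched with a fixed-step Euler solver that keeps a history buffer in place of the infinite-dimensional state $h_t$. This is an illustrative sketch (the solver and the Hutchinson test equation below are not from the paper):

```python
import numpy as np

def solve_dde_euler(F, phi, tau, t_end, dt=0.01):
    """Forward-Euler solver for dh/dt = F(t, h(t), h(t - tau)).
    A buffer of past values, seeded by the initial function phi on
    [-tau, 0], stands in for the infinite-dimensional state h_t."""
    m = int(round(tau / dt))                         # delay measured in steps
    n_steps = int(round(t_end / dt))
    h = [phi(-tau + k * dt) for k in range(m + 1)]   # history h(-tau), ..., h(0)
    for n in range(n_steps):
        t = n * dt
        h.append(h[-1] + dt * F(t, h[-1], h[-1 - m]))
    return np.array(h[m:])                           # solution on [0, t_end]

# Example: the delayed logistic (Hutchinson) equation dh/dt = h(t)(1 - h(t - tau)),
# which converges to the equilibrium h = 1 for tau < pi/2.
sol = solve_dde_euler(lambda t, h, h_d: h * (1.0 - h_d),
                      phi=lambda t: 0.5, tau=1.0, t_end=20.0)
```

Note that the solver must retain roughly $\tau/\Delta t$ past values at all times, which is precisely the buffering a DDE numerical solver needs.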
\subsection{Formulation of Continuous-Time $\tau$-GRUs}
The basic form of a time-delayed RNN is
\begin{align}\label{eq:simple_model}
\begin{split}
\dot{{\bf h}} =\sigma({\bf W}_1 {\bf h}(t) + {\bf W}_2 {\bf h}(t-\tau)+ {\bf U} {\bf x}(t) ) - {\bf h}(t),
\end{split}
\end{align}
for $t \geq 0$, and ${\bf h}(t) = 0$ for $t \in [-\tau, 0]$, with the output ${\bf y}(t) = {\bf V} {\bf h}(t)$. In this expression, ${\bf h} \in \RR^d$ denotes the hidden states, and $\sigma: \RR \to (-1,1)$ denotes the tanh activation function applied component-wise; we write the right-hand side of Eq. \eqref{eq:simple_model} as $f: \RR^d \times \RR^d \times \RR^{p} \to \RR^d$. The matrices ${\bf W}_1, {\bf W}_2 \in \RR^{d \times d}$, ${\bf U} \in \RR^{d \times p}$ and ${\bf V} \in \RR^{q \times d}$ are learnable parameters, and $\tau \geq 0$ denotes the discrete time-lag.
For notational brevity, we omit the bias term here, assuming it is included in ${\bf W}_1$.
It is important to equip RNNs with mechanisms to better represent a large number of time scales, as discussed by~\cite{tallec2018can} and more recently by~\cite{rusch2022long}.
Therefore, we follow \cite{tallec2018can} and consider a parametric time warping function $c:\mathbb{R}\rightarrow \mathbb{R}^d$. Using the reasoning in~\cite{tallec2018can}, and denoting $t_\tau := t-\tau$, we can formulate the following continuous-time delay recurrent unit
\begin{align*}
\dot{{\bf h}} = \frac{d c(t)}{d t}\left[ \sigma({\bf W}_1 {\bf h}(t) + {\bf W}_2 {\bf h}(t_\tau) + {\bf U}_1 {\bf x}(t)) - {\bf h}(t)\right].
\end{align*}
Now, we need a learnable function to model $\frac{d c(t)}{d t}$. A natural choice is to consider a standard gating function, which is a universal approximator, taking the form
\begin{equation}
\frac{d c(t)}{d t} = \hat{\sigma}({\bf W}_3 {\bf h}(t) + {\bf U}_3 {\bf x}(t)) =: {\bf g}(t),
\end{equation}
where ${\bf W}_3 \in \RR^{d \times d}$ and ${\bf U}_3 \in \RR^{d \times p}$ are learnable parameters, and where $\hat{\sigma}: \RR \to (0,1)$ is the component-wise sigmoid function.
\subsection{From Continuous to Discrete-Time $\tau$-GRUs}
To learn the weights of the recurrent unit, a numerical integration scheme can be used to discretize the continuous model. Specifically, we discretize the time as $t_n = n \Delta t$ for $n = -\lfloor \tau/\Delta t \rfloor, \dots, -1, 0, 1, \dots$, and approximate the solution $({\bf h}(t))$ to Eq. \eqref{eq:simple_model} by the sequence $({\bf h}_n = {\bf h}(t_n))$, given by ${\bf h}_n = 0$ for $n = -\lfloor \tau/\Delta t \rfloor, \dots, 0$, and
\begin{align}
{\bf h}_{n+1} &= {\bf h}_n + \int_{t_n}^{t_n+\Delta t} f({\bf h}(s),\, {\bf h}(s-\tau), {\bf x}(s)) \,\mathrm{d}s \\
&\approx {\bf h}_n + \mathtt{scheme}[f,\,{\bf h}_n,\,{\bf h}_{l}, \Delta t],
\end{align}
for $n=0,1,\dots$, where the subscript $n$ denotes discrete time indices, $l := n - \lfloor \tau/\Delta t \rfloor$, and $\Delta t$ represents the time difference between a pair of consecutive elements in the input sequence.
In addition, $\mathtt{scheme}$ refers to a numerical integration scheme whose application yields an approximate solution for the integral.
Applying the explicit forward Euler scheme to the time-warped equation, with $\frac{d c(t)}{d t}$ replaced by the gate ${\bf g}$, and choosing $\Delta t = 1$ gives:
\begin{equation} \label{eq_simpleDRNN}
{\bf h}_{n+1} = (1-{\bf g}_n) \odot {\bf h}_n + {\bf g}_n \odot \sigma({\bf W}_1 {\bf h}_n + {\bf W}_2 {\bf h}_{l} + {\bf U} {\bf x}_n).
\end{equation}
Note that this discretization corresponds to the leaky-integrator described by~\cite{jaeger2007optimization}.
It can be shown that \eqref{eq_simpleDRNN} is a universal approximator of a large class of open dynamical systems with delay (see Theorem \ref{thm_uat}).
However, this architecture does not outperform state-of-the-art RNN architectures on a number of tasks. To improve the performance, we next propose a modified gated recurrent unit.
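The discretized update in Eq. \eqref{eq_simpleDRNN} can be sketched in NumPy as follows; the function and parameter names are our choices, and the zero initial history mirrors the continuous-time initial condition:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simple_delay_gru(xs, W1, W2, W3, U, U3, tau):
    """Forward pass of the simple delay unit
        h_{n+1} = (1 - g_n) * h_n + g_n * tanh(W1 h_n + W2 h_{n-tau} + U x_n),
    with gate g_n = sigmoid(W3 h_n + U3 x_n) and zero history for n <= 0."""
    d = W1.shape[0]
    hs = [np.zeros(d) for _ in range(tau + 1)]   # h_{-tau}, ..., h_0
    for x in xs:
        h, h_delay = hs[-1], hs[-1 - tau]        # current and delayed state
        g = sigmoid(W3 @ h + U3 @ x)
        hs.append((1.0 - g) * h
                  + g * np.tanh(W1 @ h + W2 @ h_delay + U @ x))
    return np.array(hs[tau:])                    # h_0, h_1, ..., h_N
```

Since each update is a convex combination of the previous state and a $\tanh$ output, the hidden states remain bounded in $[-1,1]$ component-wise.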
\subsection{Discrete-Time $\tau$-GRUs with a Weighted Time-Delay Feedback Architecture}
In this work, we propose to model the hidden dynamics using a mixture of a standard recurrent unit and a delay recurrent unit. To this end, we replace the $\sigma$ in Eq. \eqref{eq_simpleDRNN} by
\begin{equation} \label{eq_motivate}
{\bf u}_n + {\bf a}_n \odot {\bf z}_n,
\end{equation}
which yields a new GRU of the form
\begin{equation}
{\bf h}_{n+1} = (1-{\bf g}_n) \odot {\bf h}_n + {\bf g}_n \odot \left( {\bf u}_n + {\bf a}_n \odot {\bf z}_n \right).
\end{equation}
Here ${\bf u}_n$ describes the standard recurrent unit $${\bf u}_n=\text{tanh}({\bf W}_1 {\bf h}_n + {\bf U}_1 {\bf x}_n),$$ and ${\bf z}_n$ describes the delay recurrent unit $${\bf z}_n = \text{tanh}({\bf W}_2 {\bf h}_{l} + {\bf U}_2 {\bf x}_{n}).$$
Further, the gate ${\bf g}_n$ is a learnable vector-valued coefficient
$${\bf g}_n =\text{sigmoid}({\bf W}_3 {\bf h}_{n} + {\bf U}_3 {\bf x}_{n}).$$
The weighting term ${\bf a}_n$ is also a vector-valued coefficient
$${\bf a}_n =\text{sigmoid}({\bf W}_4 {\bf h}_{n} + {\bf U}_4 {\bf x}_{n}),$$
with the learnable parameters ${\bf W}_4 \in \RR^{d \times d}$ and ${\bf U}_4 \in \RR^{d \times p}$, which weights the importance of the time-delay feedback component-wise for the task at hand.
From the design point of view, Eq. \eqref{eq_motivate} can be motivated by the sigmoidal coupling used in Hodgkin-Huxley type neural models (see Eq. (1)-(2) and Eq. (4) in \cite{campbell2007time}).
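Putting the four gates together, the full $\tau$-GRU cell can be sketched as below (an illustrative NumPy sketch; the parameter packing and function names are our assumptions, not the paper's reference implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tau_gru_step(h, h_delay, x, params):
    """One tau-GRU update:
        u = tanh(W1 h + U1 x)            # standard recurrent unit
        z = tanh(W2 h_delay + U2 x)      # delay recurrent unit
        g = sigmoid(W3 h + U3 x)         # time-warp gate
        a = sigmoid(W4 h + U4 x)         # delay-feedback weighting
        h' = (1 - g) * h + g * (u + a * z)"""
    W1, W2, W3, W4, U1, U2, U3, U4 = params
    u = np.tanh(W1 @ h + U1 @ x)
    z = np.tanh(W2 @ h_delay + U2 @ x)
    g = sigmoid(W3 @ h + U3 @ x)
    a = sigmoid(W4 @ h + U4 @ x)
    return (1.0 - g) * h + g * (u + a * z)

def tau_gru_forward(xs, params, tau, d):
    """Run the recurrence over an input sequence with zero initial history."""
    hs = [np.zeros(d) for _ in range(tau + 1)]
    for x in xs:
        hs.append(tau_gru_step(hs[-1], hs[-1 - tau], x, params))
    return hs[-1]
```

Note that $|{\bf u}_n + {\bf a}_n \odot {\bf z}_n| \leq 2$ component-wise, so the hidden states stay bounded.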
\section{Theory}
In this section, we define the notion of solution for DDEs and show that the continuous-time $\tau$-GRU has a unique solution. Moreover, we provide intuition and analysis to understand how the delay mechanism can help improve the modeling of long-term dependencies.
\subsection{Existence and Uniqueness of Solution for Continuous-Time $\tau$-GRUs}
Since we must know $h(t + \theta)$, $\theta \in [-\tau, 0]$, in order to determine the solution of the DDE \eqref{eq_gen_dde} for $s > t$, we define the state of the dynamical system at time $t$ to be the element $h_t \in C$ given by
$h_t(\theta) := h(t+\theta)$ for $\theta \in [-\tau, 0]$.
The trajectory of the solution can thus be viewed as the curve $t \to h_t$ in the state space $C$. In general, DDEs can be formulated as the following initial value problem (IVP) for the nonautonomous system \cite{hale2013introduction}:
\begin{equation} \label{eq_genddemain}
\dot{h}(t) = f(t, h_t), \ t \geq t_0,
\end{equation}
where $h_{t_0} = \phi \in C$ for some initial time $t_0 \in \mathbb{R}$, and $f: \mathbb{R} \times C \to \mathbb{R}^d$ is a given continuous function. The above equation describes a general type of systems, including ODEs ($\tau = 0$) and DDEs of the form $\dot{h}(t) = g(t, h(t), h(t-\tau))$ for some continuous function $g$.
We say that a function $h$ is a solution of Eq. (\ref{eq_genddemain}) on $[t_0 - \tau, t_0 + A)$ if there exist $t_0 \in \mathbb{R}$ and $A > 0$ such that $h \in C([t_0 - \tau, t_0 + A), \mathbb{R}^d)$, $(t, h_t) \in \mathbb{R} \times C$, and $h(t)$
satisfies Eq. (\ref{eq_genddemain}) for $t \in [t_0, t_0 + A)$. It can be shown (see, for instance, Lemma 1.1 in \cite{hale2013introduction}) that if $f(t,\phi)$ is continuous, then finding a solution of Eq. (\ref{eq_genddemain}) is equivalent to solving the integral equation: $h_{t_0} = \phi$,
\begin{equation}
h(t) = \phi(0) + \int_{t_0}^t f(s, h_s) \,\mathrm{d}s, \quad t \geq t_0.
\end{equation}
We now provide an existence and uniqueness result for the continuous-time $\tau$-GRU model, assuming that the input $x$ is continuous in $t$. Defining the state $h_t \in C$ as $h_t(\theta) := h(t+\theta)$ for $\theta \in [-\tau, 0]$ as before, the DDE describing the $\tau$-GRU model can be formulated as the following IVP:
\begin{equation} \label{eq_gendde}
\dot{h} = - h(t) + u(t, h(t)) + a(t, h(t)) \odot z(t, h_t), \ t \geq t_0,
\end{equation}
and $h_{t_0} = \phi \in C$ for some initial time $t_0 \in \mathbb{R}$, with the dependence on $x(t)$ viewed as dependence on $t$.
By applying Theorem 3.7 in \cite{smith2011introduction}, we can obtain the following result.
See App.~\ref{sect:appB} for a proof of this theorem.
\begin{theorem}[Existence and uniqueness of solution for continuous-time $\tau$-GRU] \label{thm_exist2main}
Let $t_0 \in \RR$ and $\phi \in C$ be given. There exists a unique solution $h(t) = h(t, \phi)$ of Eq. \eqref{eq_gendde}, defined on $[t_0 - \tau, t_0 + A]$ for any $A > 0$. In particular, the solution exists for all $t \geq t_0$, and
\begin{equation}
\| h_t(\phi) - h_t(\psi) \| \leq \| \phi - \psi \| e^{K(t-t_0)},
\end{equation}
for all $t \geq t_0$, where $K = 1 + \|W_1\| + \|W_2\| + \|W_4\|/4$.
\end{theorem}
Theorem \ref{thm_exist2main} guarantees that the continuous-time $\tau$-GRU model, as a functional differential equation, has a well-defined unique solution that does not blow up in finite time.
\subsection{The Delay Mechanism in $\tau$-GRUs Can Help Improve Long-Term Dependencies}
RNNs suffer from vanishing and exploding gradients, which makes it difficult to learn long-term dependencies.
While gating mechanisms can mitigate the problem to some extent, the delays introduced in $\tau$-GRUs can further
help reduce the sensitivity to long-term dependencies.
To understand the reason for this, we consider how gradients are computed using the backpropagation through time (BPTT) algorithm~\cite{pascanu2013difficulty}.
BPTT involves the two stages of unfolding the network in time and backpropagating the training error through the unfolded network. When $\tau$-GRUs are unfolded in time, the delays in the hidden state appear as jump-ahead connections (buffers) in the unfolded network. These buffers provide a shorter path for propagating gradient information, thereby reducing the sensitivity of the network to long-term dependencies. A similar intuition is used to explain the behavior of the NARX RNNs in \cite{lin1996learning}.
We now make this intuition precise in the following simplified setting.
See App.~\ref{sect:appC} for a proof of this proposition.
We also provide analogous results (bounds for the gradient norm) and discussions for $\tau$-GRU (in App.~\ref{app:gradbound}, see Proposition \ref{app_prop}).
\begin{proposition} \label{prop_delaymain}
Consider the linear time-delayed RNN whose hidden states are described by the update equation:
\begin{equation}
h_{n+1} = Ah_n + Bh_{n-m} + Cu_n, \ n=0,1,\dots,
\end{equation}
and $h_n = 0$ for $n=-m, -m+1, \dots, 0$ with $m > 0$.
Then, assuming that $A$ and $B$ commute, we have:
\begin{equation}
\frac{\partial h_{n+1}}{\partial u_i} = A^{n-i} C,
\end{equation}
for $n=0,1, \dots, m$, $i=0,\dots, n$, and
\begin{align}
\frac{\partial h_{m+1+j}}{\partial u_i} &= A^{m+j-i} C + \delta_{i,j-1} BC + 2 \delta_{i,j-2} ABC \nonumber \\
&\ \ \ \ + 3 \delta_{i,j-3} A^2 B C + \cdots + j \delta_{i,0} A^{j-1} B C,
\end{align}
for $j = 1,2,\dots, m+1$, $i=0,1,\dots, m+j$, where $\delta_{i,j}$ denotes the Kronecker delta.
\end{proposition}
We remark that the commutativity assumption is not necessary; it is used here only to simplify the expression for the gradients. Analogous formulas for the gradients can be derived without this assumption, at the cost of more complicated expressions.
From Proposition \ref{prop_delaymain}, we see that the presence of the delay allows the model to place more emphasis on the gradients due to input information from the past (as can be seen in Eq. \eqref{eq_linear} in the proof of the proposition, where additional terms dependent on $B$ appear in the coefficients in front of the past inputs). In particular, if $\|A\| < 1$ and $B=0$, then the gradients decay exponentially as $i$ becomes large. Introducing the delay term ($B\neq 0$) slows this exponential decay by perturbing the gradients of the hidden states with respect to the past inputs (in a way dependent on the delay parameter $m$) with nonzero values.
A similar qualitative conclusion can also be drawn for our $\tau$-GRU (see Proposition \ref{app_prop} and the discussion in App. \ref{app:gradbound}). Therefore, one expects that these networks deal more effectively with long-term dependencies than counterpart models without delays.
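The scalar case of Proposition \ref{prop_delaymain} can be checked numerically. In the sketch below (parameter values are our choices), a finite-difference quotient recovers the gradient exactly up to rounding, since the model is linear, and it matches the closed-form expression $A^{m+j-i} C + (j-i) A^{j-i-1} B C$ for $0 \leq i \leq j-1$:

```python
import numpy as np

def run(A, B, C, u, m, n_final):
    """Roll out h_{n+1} = A h_n + B h_{n-m} + C u_n with zero history,
    and return h_{n_final}."""
    h = [0.0] * (m + 1)                       # h_{-m}, ..., h_0
    for n in range(n_final):
        h.append(A * h[-1] + B * h[-1 - m] + C * u[n])
    return h[-1]

A, B, C, m = 0.7, 0.4, 1.0, 3
j, i = 2, 0                                   # inspect dh_{m+1+j} / du_i
N = m + 1 + j
u = np.random.default_rng(0).normal(size=N)

# Finite-difference gradient; exact (up to rounding) since the model is linear.
eps = 1e-6
u_pert = u.copy()
u_pert[i] += eps
grad_fd = (run(A, B, C, u_pert, m, N) - run(A, B, C, u, m, N)) / eps

# Closed form from the proposition (scalar case, i <= j - 1):
#   A^{m+j-i} C + (j - i) A^{j-i-1} B C
grad_prop = A ** (m + j - i) * C + (j - i) * A ** (j - i - 1) * B * C
```

The extra term $(j-i) A^{j-i-1} B C$ is exactly the delay-induced contribution that keeps the gradient with respect to early inputs from decaying purely like $A^{m+j-i}$.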
\section{Experimental Results}
\label{sect:exp}
In this section, we consider several benchmark datasets to demonstrate the performance of our proposed $\tau$-GRU.
We use standard protocols for training and initialization, and validation sets for parameter tuning (see App.~\ref{sect:appD} for~details).
\begin{figure}[!b]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{figs/adding_task_2000.pdf}
\caption{Sequence length $N=2000$.}
\label{fig:adding_task_2000}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{figs/adding_task_5000.pdf}
\caption{Sequence length $N=5000$.}
\label{fig:adding_task_5000}
\end{subfigure}
\caption{Results for the adding task on very long sequences.}
\label{fig:adding_task}
\end{figure}
\subsection{The Adding Task}
The adding task is a classic problem for testing the ability of models to learn long-term dependencies, originally proposed by~\cite{hochreiter1997long}.
The inputs of this problem are composed of two stacked random vectors $u$ and $v$ of length $N$. The elements of $u$ are drawn from the uniform distribution $\mathcal{U}(0,1)$. The vector $v$ has two non-zero elements (both set to 1), one placed at a random location $i$ sampled from the index set $\{1,\dots,\lfloor\frac{N}{2}\rfloor\}$ and the other placed at a location $j$ sampled from the index set $\{\lceil\frac{N}{2}\rceil,\dots,N\}$. The target value for each sequence is the sum $\sum(u\odot v)$, i.e., the sum of the two elements in $u$ that correspond to the non-zero entries of $v$.
Here, we follow the work by~\cite{rusch2022long} and consider two challenging settings with very long input sequences, $N \in \{2000,5000\}$.
Figure~\ref{fig:adding_task} shows results for our $\tau$-GRU and several state-of-the-art RNN models which are designed to solve long-term dependency tasks, such as LEM~\cite{rusch2022long}, coRNN~\cite{rusch2021coupled}, DTRIV$_\infty$~\cite{lezcano2019trivializations}, fastGRNN~\cite{kusupati2018fastgrnn}, and LSTM with chrono initialization~\cite{tallec2018can}. It can be seen that DTRIV$_\infty$ and fastGRNN perform poorly in both cases. In contrast, our $\tau$-GRU shows favorable performance: it converges faster and achieves lower mean squared errors than the other methods.
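The data-generating process described above can be sketched as follows (a sketch of the standard setup; the batch layout and function name are our choices):

```python
import numpy as np

def adding_task_batch(batch_size, N, rng=None):
    """Generate one batch for the adding task. Inputs stack a uniform
    vector u and a marker vector v; the target is sum(u * v), i.e.,
    the two entries of u marked by v, one in each half of the sequence."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(0.0, 1.0, size=(batch_size, N))
    v = np.zeros((batch_size, N))
    rows = np.arange(batch_size)
    v[rows, rng.integers(0, N // 2, size=batch_size)] = 1.0   # first half
    v[rows, rng.integers(N // 2, N, size=batch_size)] = 1.0   # second half
    x = np.stack([u, v], axis=-1)   # shape (batch_size, N, 2)
    y = (u * v).sum(axis=1)         # target in [0, 2]
    return x, y
```

Because the two markers always fall in different halves, every $v$ contains exactly two ones, and a model must retain the first marked value across up to $N$ steps.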
\subsection{Sentiment Analysis: IMDB}
Next, we consider the IMDB dataset~\cite{maas2011learning} to study the expressiveness of our proposed model on a sentiment analysis task. The aim of this task is to predict whether a movie review is positive or negative. The dataset is composed of $50$k movie reviews, with an average length of 240 words per review. Each review is annotated by a label that indicates a positive or negative sentiment. The dataset is split evenly into a training and a test set, so that both sets contain $25$k reviews. We use 15\% of the training data for validation. Further, we use standard preprocessing schemes, following~\cite{pennington2014glove}, to restrict the dictionary to $25$k words and to embed the data with a pretrained GloVe model~\cite{pennington2014glove}.
Table~\ref{tab:results_imbd} shows that our $\tau$-GRU achieves a substantially higher test accuracy than LSTM and GRU models.
Further, $\tau$-GRU is also able to outperform the continuous-time coRNN~\cite{rusch2021coupled} and the highly expressive LEM~\cite{rusch2022long}.
\begin{table}[!h]
\caption{Results for IMDB sentiment analysis task.}
\label{tab:results_imbd}
\centering
\scalebox{0.9}{
\begin{tabular}{l c c ccccccc}
\toprule
Model & Test Accuracy ($\%$) & \# units & \# param \\
\midrule
LSTM~\cite{campos2018skip} & 86.8 & 128 & {220k} \\
Skip LSTM~\cite{campos2018skip} & 86.6 & 128 & {220k} \\
GRU~\cite{campos2018skip} & 86.2 & 128 & 164k \\
Skip GRU~\cite{campos2018skip} & 86.6 & 128 & 164k \\
ReLU GRU~\cite{dey2017gate} & 84.8 & 128 & 99k \\
coRNN~\cite{rusch2021coupled} & 87.4 & 128 & 46k \\
LEM & 88.1 & 128 & {220k} \\
\midrule
\textbf{$\tau$-GRU (ours)} & \textbf{88.6} & 128 & {220k} \\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[!b]
\caption{Results for HAR2 task.}
\label{tab:results_har2}
\centering
\scalebox{0.9}{
\begin{tabular}{l c c ccccccc}
\toprule
Model & Test Accuracy ($\%$) & \# units & \# param \\
\midrule
GRU~\cite{kusupati2018fastgrnn} & 93.6 & 75 & {19k}\\
LSTM~\cite{Kag2020RNNs} & 93.7 & 64 & {19k}\\
FastRNN~\cite{kusupati2018fastgrnn} & 94.5 & 80 & 7k\\
FastGRNN~\cite{kusupati2018fastgrnn} & 95.6 & 80 & 7k\\
AsymRNN~\cite{Kag2020RNNs} & 93.2 & 120 & 8k\\
iRNN~\cite{Kag2020RNNs} & 96.4 & 64 & 4k\\
DIRNN~\cite{zhang2021deep} & 96.5 & 64 & -\\
coRNN~\cite{rusch2021coupled} & 97.2 & 64 & 9k\\
LipschitzRNN & 95.4 & 64 & 9k\\
LEM & 97.1 & 64 & {19k}\\
\midrule
\textbf{$\tau$-GRU (ours)} & \textbf{97.4} & 64 & {19k} \\
\bottomrule
\end{tabular}}
\end{table}
\begin{table*}[!t]
\caption{Test accuracies on sMNIST, and psMNIST.}
\label{tab:image_mnist}
\centering
\scalebox{0.85}{
\begin{tabular}{lcccc|cccc}
\toprule
Model& sMNIST & psMNIST & \# units & \# parameters & sCIFAR & nCIFAR & \# units & \# parameters \\
\midrule
LSTM~\cite{kag2021time} & 97.8 & 92.6 & 128 & {68k} & 59.7 & 11.6 & 128 & 69k / 117k \\
r-LSTM~\cite{trinh2018learning} & 98.4 & 95.2 & - & 100K & 72.2 & - & - & 101k / -\\
chrono-LSTM~\cite{rusch2022long} & 98.9 & 94.6 & 128 & {68k} & - & 55.9 & 128 & - / 116k\\
Antisymmetric RNN~\cite{chang2018antisymmetricrnn} & 98.0 & 95.8 & 128 & 10k & 62.2 & 54.7 & 256 & 37k / 37k\\
Lipschitz RNN~\cite{erichson2020lipschitz} & 99.4 & 96.3 & 128 & 34k & 64.2 & 59.0 & 256 & 134k / 158k\\
expRNN~\cite{lezcano2019cheap} & 98.4 & 96.2& 360 & {68k} & - & 49.0 & 128 & - / 47k\\
iRNN~\cite{Kag2020RNNs} & 98.1 & 95.6 & 128 & 8k & - & 54.5 & 128 & - / 12k \\
TARNN~\cite{kag2021time} & 98.9 & 97.1 & 128 & {68k} & - & 59.1 & 128 & - / 100K \\
coRNN~\cite{rusch2021coupled} & 99.3 & {96.6} & 128 & 34k & - & 59.0 & 128 & - / 46k\\
{LEM}~\cite{rusch2022long} & \textbf{99.5} & {96.6} & 128 & {68k}& - & 60.5 & 128 & - / 117k\\
\midrule
Simple delay GRU (Eq.~\eqref{eq_simpleDRNN}) & 98.7 & 94.1 & 128 & 51k & 57.2 & 59.8 & 128 & 52k / 75k \\
\textbf{$\tau$-GRU (ours)} & {99.4} & \textbf{97.3} & 128 & {68k} & \textbf{74.9} & \textbf{62.2} & 128 & 69k / 117k \\
\bottomrule
\end{tabular}}
\end{table*}
\subsection{Human Activity Recognition: HAR-2}
Here, we study the performance of our model for human activity recognition using the HAR dataset provided by~\cite{anguita2012human}. This dataset
consists of accelerometer and gyroscope measurements from a Samsung Galaxy S3 smartphone, tracking six activities of
30 volunteers within an age bracket of 19--48 years. For learning, the sequences are divided into shorter sequences of length $N=128$, and the raw measurements are summarized by 9 features per time step. Kusupati et al.~\cite{kusupati2018fastgrnn} proposed the HAR-2 dataset, which groups the activities into two categories \{Sitting, Laying, Walking Upstairs\} and \{Standing, Walking, Walking Downstairs\}. We use $7,352$ sequences for training, $900$ for validation and $2,947$ for testing.
Table~\ref{tab:results_har2} shows that our $\tau$-GRU is able to outperform traditional gated architectures on this task. The most competitive model is coRNN~\cite{rusch2021coupled}, which achieves a test accuracy of $97.2\%$ with just $9$k parameters. LEM~\cite{rusch2022long} achieves $97.1\%$ with the same number of parameters as our $\tau$-GRU.
\subsection{Sequential Image Classification}
Next, we consider four sequential variations of the MNIST~\cite{lecun1998gradient} and CIFAR-10~\cite{CIFAR10} image classification datasets. These sequence classification tasks aim to evaluate the capability of RNNs to learn long-term dependencies.
The sequential MNIST (sMNIST) task, originally proposed by~\cite{le2015simple}, presents the $N=784$ pixels of each thumbnail to the recurrent unit sequentially.
The final hidden state is used to predict the class membership probability of the flattened image.
A variation of the sMNIST task is the permuted sMNIST task (psMNIST), which presents the model with a fixed random permutation of the pixel-by-pixel sequence. This task removes any natural patterns in the sequence and requires that models can learn long-term dependencies between pixels that are possibly far apart.
Since the standard sMNIST task has essentially been solved by state-of-the-art RNNs, \cite{chang2018antisymmetricrnn} proposed to consider the sequential CIFAR-10 (sCIFAR) task instead. This task is more challenging due to the increased sequence length, $N=1024$, of the flattened input images. Each element of the sequence is a 3-dimensional vector that contains the pixels for each color channel. To solve this task, models need both long-term memory and sufficient~expressivity.
\begin{figure}[!b]
\centering
\includegraphics[width=0.48\textwidth]{figs/psmnist_acc.pdf}
\caption{Test accuracy on permuted sequential MNIST as a function of the number of epochs. Our $\tau$-GRU requires substantially fewer number of epochs to reach peak performance.}
\label{fig:psmnist_acc}
\end{figure}
Furthermore, \cite{chang2018antisymmetricrnn} also proposed a noise-padded CIFAR-10 (nCIFAR) task, which requires that the RNN is able to memorize information from far in the past and to suppress noisy segments that contain no relevant information. Specifically, we construct a sequence of length $N=1000$ where each element is a $96$-dimensional vector. The first $32$ elements are the rows of an image, where the channels are stacked.
The remaining $968$ elements of the sequence are random vectors drawn from the standard normal distribution.
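A sketch of this input construction is given below; the exact channel-stacking order within each 96-dimensional step is our assumption:

```python
import numpy as np

def noise_padded_sequence(image, seq_len=1000, rng=None):
    """Build one nCIFAR input: the 32 rows of a 32x32x3 image form the
    first 32 steps (96 features each, channels stacked), followed by
    standard-normal noise up to seq_len steps."""
    rng = rng or np.random.default_rng()
    rows = image.transpose(0, 2, 1).reshape(32, 96)   # [R row | G row | B row]
    noise = rng.standard_normal((seq_len - 32, 96))
    return np.concatenate([rows, noise], axis=0)      # shape (seq_len, 96)
```

After step 32, the model sees only noise, so all class-relevant information must be carried through 968 further updates.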
Table~\ref{tab:image_mnist} shows results for the four described tasks. Our $\tau$-GRU is competitive and able to outperform other models on the psMNIST, sCIFAR and nCIFAR tasks. The proposed weighted time-delay feedback mechanism demonstrates a clear advantage, in particular for the CIFAR tasks.
Figure~\ref{fig:psmnist_acc} shows that our $\tau$-GRU converges faster than other recently introduced continuous-time models, such as LEM~\cite{rusch2022long}, coRNN~\cite{rusch2021coupled}, and LipschitzRNN~\cite{erichson2020lipschitz}.
This could help reduce training times when using these~methods.
\subsection{Ablation Study using psMNIST}
We use the psMNIST task to perform an ablation study. To do so, we consider the following model
$${\bf h}_{t+1} = (1-{\bf g}_t) \odot {\bf h}_t + {\bf g}_t \odot (\beta \cdot {\bf u}_t + \alpha \cdot {\bf a}_t \odot {\bf z}_t)$$
where $\alpha\in [0,1]$ and $\beta\in [0,1]$ are constants that can be used to control the effect of different components. We are interested in the cases where $\alpha$ and $\beta$ are either 0 or 1, i.e., a component is switched off or on.
Table~\ref{table:ablation} shows the results for different ablation configurations. First, setting $\alpha=0$ yields a simple gated RNN.
Second, setting $\beta=0$ yields a $\tau$-GRU without instantaneous dynamics. Third, we show how different values of $\tau$ affect the performance. Setting $\tau=0$ leads to a $\tau$-GRU without time-delay feedback.
We also show that a model without the weighting function ${\bf a}_t$ is not able to achieve peak performance.
\begin{table}[!h]
\caption{Ablation study.}
\label{table:ablation}
\centering
\scalebox{0.9}{
\begin{tabular}{l c c ccccccc}
\toprule
Model & $\alpha$ & $\beta$ & $\tau$ & ${\bf a}_t$ & Test Accuracy (\%) \\
\midrule
ablation & 0 & 1 & - & yes & 94.6 \\
ablation & 1 & 0 & 65 & yes &94.9 \\
\midrule
ablation & 1 & 1 & 0 & yes & 95.1 \\
ablation & 1 & 1 & 20 & yes & 96.4 \\
ablation & 1 & 1 & 65 & no & 96.8 \\
\midrule
\textbf{$\tau$-GRU (ours)} & 1 & 1 & 65 & yes &\textbf{97.3}\\
\bottomrule
\end{tabular}}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[width=0.48\textwidth]{figs/tau_ablation.pdf}
\caption{Sensitivity analysis of $\tau$-GRU on psMNIST. The light green envelopes represent $\pm 1$ standard
deviation around the mean.}
\label{fig:tau_ablation_main}
\end{figure}
Figure~\ref{fig:tau_ablation_main} further investigates the effect of $\tau$. The performance of $\tau$-GRU increases as a function of $\tau$ and peaks around $\tau=65$.
It can be seen that the performance is relatively constant in the range $50$--$150$ for this task. Thus, the model is relatively insensitive as long as $\tau$ is sufficiently large, but not too large. The performance begins to decrease for $\tau>150$. Since $\tau$ takes discrete values, tuning it is easier than tuning continuous parameters, such as those used by LEM~\cite{rusch2022long}, coRNN~\cite{rusch2021coupled}, or LipschitzRNN~\cite{erichson2020lipschitz}.
\subsection{Learning Climate Dynamics}
We consider the task of learning the dynamics of the DDE model for El Ni\~{n}o Southern Oscillation (ENSO) of \cite{falkena2019derivation} (see Eq. (46) there). It models the sea surface temperature $T$ in the eastern Pacific Ocean, and is described by:
\begin{equation}
\dot{T} = T-T^3 - c T(t-\delta) (1-\gamma T^2(t-\delta)), \ t > \delta,
\end{equation}
where $\gamma < 1$ and $c > 0$. For $t \in [0, \delta]$, $T(t)$ satisfies $\dot{T} = T-T^3 - c T(0) (1-\gamma T(0)^2)$ with $T(0) \sim \mathrm{Unif}(0,1)$. For data generation, we follow \cite{falkena2019derivation} and choose $c = 0.93$, $\gamma = 0.49$, and $\delta = 4.8$. We use the classical Runge-Kutta method (RK4) to numerically integrate the system from $t=0$ to $t=400$ with a step size of 0.1. The training and testing samples are the time series (of length 2000) generated by the RK4 scheme on the interval $[200,400]$ for different realizations of $T(0)$.
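The data-generating process can be sketched as follows. For brevity this sketch uses forward Euler, whereas the experiments integrate with RK4; the step size and delay handling on the initial phase follow the setup above:

```python
import numpy as np

def enso_series(T0, c=0.93, gamma=0.49, delta=4.8, dt=0.1, t_end=400.0):
    """Generate one trajectory of the delayed ENSO model
        dT/dt = T - T^3 - c T(t - delta) (1 - gamma T(t - delta)^2).
    On [0, delta] the delayed value is held at T(0), matching the
    initial-phase equation in the text."""
    m = int(round(delta / dt))                # delay in steps
    n = int(round(t_end / dt))
    T = np.empty(n + 1)
    T[0] = T0
    for k in range(n):
        T_d = T0 if k < m else T[k - m]       # delayed temperature
        T[k + 1] = T[k] + dt * (T[k] - T[k] ** 3
                                - c * T_d * (1.0 - gamma * T_d ** 2))
    return T

series = enso_series(T0=0.5)                  # one realization of T(0)
```

The cubic term keeps the trajectory bounded, while the delayed feedback produces the characteristic ENSO-like oscillations that the recurrent models are asked to forecast.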
Table \ref{tab:results_ENSO} shows that our model (with $\alpha = \beta = 1$ and $\tau = 20$) is more effective at learning the ENSO dynamics than the other RNN architectures. We also see that the predictive performance deteriorates without an appropriate combination of the standard and delay recurrent units (i.e., when either $\alpha$ or $\beta$ is set to zero).
\begin{table}[!h]
\caption{Results for the ENSO model prediction.}
\label{tab:results_ENSO}
\centering
\scalebox{0.9}{
\begin{tabular}{l c c c}
\toprule
Model & MSE ($\times 10^{-2}$) & \# units & \# parameters \\
\midrule
Vanilla RNN & 0.45 & 16 & 0.3k \\
LSTM & 0.92 & 16 & 1.2k \\
GRU & 0.53 & 16 & 0.9k \\
Lipschitz RNN & 10.6 & 16 & 0.6k \\
coRNN & 4.00 & 16 & 0.6k \\
LEM & 0.31 & 16 & 1.2k \\
\midrule
ablation ($\alpha = 0$) & 0.31 & 16 & 0.6k \\
ablation ($\beta = 0$) & 0.38 & 16 & 0.9k \\
\midrule
\textbf{$\tau$-GRU (ours)} & \textbf{0.17} & 16 & 1.2k \\
\bottomrule
\end{tabular}}
\end{table}
\section{Conclusion and Future Work}
Starting from a continuous-time formulation, we derive a discrete-time gated recurrent unit with delay, $\tau$-GRU. We also provide intuition and analysis to understand how the delay can act as a buffer to improve modeling of long-term dependencies. Importantly, we demonstrate the superior performance of $\tau$-GRU in several challenging sequential learning tasks.
We now discuss some future directions. On the one hand, using multiple delays could lead to improved models, and it is therefore of interest to study extensions of $\tau$-GRU that include a suitable distributed-delay mechanism. On the other hand, achieving model robustness with respect to adversarial perturbations and common corruptions is critical for sensitive applications, but remains largely unexplored in the sequential learning setting. It has been shown in \cite{lim2021noisy, lim2021nfm, erichson2022noisymix} that noise injection can be effective in improving robustness; it is therefore also of interest to study noise-injected versions of $\tau$-GRU to improve the trade-off between accuracy and robustness.
\section*{Acknowledgments}
N. B. Erichson would like to acknowledge IARPA (contract
W911NF20C0035).
S. H. Lim would like to acknowledge the WINQ Fellowship, the Swedish Research Council (VR/2021-03648), and the computational resources provided by the Swedish National Infrastructure
for Computing (SNIC) at Chalmers Centre for Computational Science and Engineering (C3SE) partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
M. W. Mahoney would also like to acknowledge NSF and ONR for providing partial support of this work. Our conclusions do not necessarily
reflect the position or the policy of our sponsors, and no official endorsement should be inferred.
{\normalsize
\bibliographystyle{ieee_fullname}
% https://arxiv.org/abs/2212.00228
% https://arxiv.org/abs/1911.03012
\title{Counting extensions revisited}
\maketitle
\begin{abstract}
We consider rooted subgraphs in random graphs, i.e., extension counts such as (i)~the number of triangles containing a given vertex or (ii)~the number of paths of length three connecting two given vertices. In 1989, Spencer gave sufficient conditions for the event that, with high probability, these extension counts are asymptotically equal for all choices of the root vertices. For the important strictly balanced case, Spencer also raised the fundamental question as to whether these conditions are necessary. We answer this question by a careful second moment argument, and discuss some intriguing problems that remain open.
\end{abstract}
\section{Introduction}
Subgraph counts and their many natural generalizations are central topics in random graph theory:
since the~1960's they are a constant source of beautiful problems and~conjectures,
which have repeatedly inspired the development of important new probabilistic~techniques and~insights (see~\cite{BB,AS,JLR,FK}).
In this paper we consider \mbox{rooted subgraph counts} in the binomial random graph~${\mathbb G}_{n,p}$,
i.e., so-called \linebreak[4] \mbox{extension counts}~\cite{SS1988,S90b,LS1991,Vu2001}
such as
(i)~the number of triangles containing a given~vertex
or (ii)~the number of paths of length three connecting two given~vertices.
In combinatorics and related areas, the need for studying such extension~counts
arises frequently in probabilistic proofs and applications,
including zero-one~laws in random graphs~\cite{SS1988,LS1991,S2001},
games on random graphs~\cite{LP2010,N2017},
random graph processes~\cite{bohman2010early,BFL2015, bohman2013dynamic, pontiveros2013triangle, BW2018},
sparse random analogues of classical extremal and Ramsey results~\cite{NS2017,SS2018,BK2019},
and many more, such as~\cite{S90a,R92,Vu2001,ST2002,YR2007,JR11,spohel2013general,M2015,TBD}.
Consequently the investigation of extension~counts is not only a natural problem in probabilistic combinatorics,
but also an important issue from the applications point of~view.
After initial groundwork of Shelah and Spencer~\cite{SS1988} as well as Spencer~\cite{S90a} on (rooted~subgraph) extension~counts,
in~1989 Spencer~\cite{S90b} proved sufficient conditions for the event that, with high probability\footnote{As usual, we say that an event holds~\emph{whp} (with~high~probability) if it holds with probability tending to~$1$ as~$n\to \infty$.},
these extension counts are asymptotically equal in~${\mathbb G}_{n,p}$ for all choices of the root~vertices.
For the important strictly balanced case, he also raised the fundamental question of whether these sufficient conditions (see~\eqref{eq_Spencer_SB} below) are qualitatively necessary.
In this paper we answer Spencer's 30-year-old question by a careful second moment argument, see~Theorem~\ref{thm_strictly_balanced} below,
rectifying a surprising gap in the random graph literature.
We also discuss some further partial results and intriguing open~problems (see~Sections~\ref{ss_partial}--\ref{sec:intro:general} below).
\subsection{Main result}\label{sec:intro:main}
To fix notation, by a \emph{rooted graph}~${(G,H)}$
we mean a graph~${H=(V(H),E(H))}$ and an induced subgraph~${G \subseteq H}$
with labeled `root' vertices~${V(G)=\{1, \ldots, v_G\}}$.
Given a tuple~${\xx =(x_1, \ldots, x_{v_G})}$ consisting of distinct vertices,
a~\emph{$(G,H)$-extension} of $\xx$ is a copy of the graph~$H_G:= (V(H), E(H)\setminus E(G))$ in which each vertex~$j \in V(G)$ is mapped onto~$x_j$.
Note that
if~$\xx$ spans a copy of~$G$ in which each vertex~$j \in V(G)$ is mapped onto~$x_j$,
then every~$(G,H)$-extension of~$\xx$ corresponds to a copy of~$H$.
Since the edges between root vertices do not affect the definition of a~$(G,H)$-extension, the reader may without loss of generality assume~$V(G)$ is an independent set~of~$H$ in the results below, cf.~\cite{JLR,JR11} (allowing for~$G$ that are not independent will be convenient in some proofs, though).
For brevity, we write~$\oset{v_G}$ for the set of all \emph{roots}, i.e., tuples~$\xx = (x_1, \dots, x_{v_G})$ of distinct vertices from~$[n] := \left\{ 1, \dots, n \right\}$.
Let~$X_{\xx} = X_{G,H}(\xx)$ denote the number of~$(G,H)$-extensions of~$\xx$ in the binomial random graph~${\mathbb G}_{n,p}$.
Note that the expected~value
\begin{equation}\label{def:muGH}
\mu=\mu_{G,H} := \E X_{\xx} \asymp n^{v_H-v_G}p^{e_H-e_G}
\end{equation}
does not depend on the particular choice of~$\xx$.
To avoid trivialities, we henceforth assume that~$H$ has more edges than~$G$, i.e., that~$e_H>e_G$.
Similarly as for (unrooted) subgraph counts, we define
\begin{equation}\label{def:mGH}
m(G,H) := \max_{G \subsetneq J \subseteq H} d(G,J) \quad \text{ with } \quad d(G,J):=\frac{e_J-e_G}{v_J-v_G},
\end{equation}
and say that~$(G,H)$ is \emph{strictly~balanced} if~$d(G,J) < d(G,H)$ for all~$G \subsetneq J \subsetneq H$.
We also call~$(G,H)$~\emph{grounded} if at least one root vertex~$j \in V(G)$ is connected to a non-root vertex~$w \in V(H) \setminus V(G)$.
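To illustrate these definitions, consider example~(a) from~\refF{fig_primal}: a single root vertex ($v_G=1$, $e_G=0$) together with a triangle~$H$ through it ($v_H=3$, $e_H=3$). Any intermediate subgraph~$G \subsetneq J \subsetneq H$ satisfies either~$v_J=2$ and~$e_J \le 1$ or~$v_J=3$ and~$e_J \le 2$, so that
\begin{equation*}
d(G,J) \le 1 < \tfrac{3}{2} = d(G,H) = m(G,H),
\end{equation*}
i.e., this rooted graph is strictly balanced; it is also grounded, since the root is adjacent to both non-root vertices. By~\eqref{def:muGH} we moreover have~$\mu \asymp n^2p^3$ in this case.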
Spencer derived in~1989 sufficient conditions for the event that, with high probability,
all extension counts satisfy~$X_{\xx} \sim \mu$, i.e., are asymptotically equal.
In the important case when~$(G,H)$ is strictly balanced,
\mbox{\cite[Theorem~2]{S90b}} states that for every fixed~$\eps \in (0,1]$ there is a constant~$K(\eps) > 0$ such that
\begin{equation}\label{eq_Spencer_SB}
\lim_{n \to \infty} \Pr\Bigpar{\max_{\xx \in \oset{v_G}}|X_\xx - \mu| < \eps\mu} = 1 \quad \text{ if } \mu \ge K(\eps) \log n.
\end{equation}
Spencer remarked that in~\eqref{eq_Spencer_SB} his constant satisfies~$K(\eps) \to \infty$ as~$\eps \to 0$,
and speculated that this is probably also necessary, see~\cite[Remark~on~p.249]{S90b}.
In other words, he raised the question whether his sufficient condition is qualitatively
best possible.
Our main result answers this fundamental question:
\eqref{eq:main:strbal}~shows that the `correct' dependence is~$K(\eps)=\Theta(\eps^{-2})$ in the grounded case, even when~$\eps=\eps(n) \to 0$ at some polynomial~rate.
For completeness, \eqref{eq:main:strbal:non}~also shows that the logarithm in the sufficient condition~\eqref{eq_Spencer_SB} is~unnecessary in the less interesting ungrounded~case (where extension counts are essentially unrooted subgraph counts, cf.~example~(b) in~\refF{fig_primal}).
\begin{theorem}[Main result: strictly balanced case]\label{thm_strictly_balanced}
Let~$(G,H)$ be a rooted graph that is strictly balanced.
There are constants~$c, C, \alpha > 0$ such that, for all~$p=p(n) \in [0,1]$ and~$\eps=\eps(n) \in [n^{-\alpha},1]$, the following~holds:
\begin{romenumerate2}
\item
If the rooted graph~$(G,H)$ is grounded, then
\begin{align}
\label{eq:main:strbal}
\vspace{-0.125em}
\lim_{n \to \infty} \Pr\Bigpar{\max_{\xx \in \oset{v_G}}|X_\xx - \mu| < \eps\mu} &=
\begin{cases}
0 & \text{if $\eps^2\mu \le c \log n$,} \\
1 & \text{if $\eps^2\mu \ge C \log n$.}
\end{cases}\hspace{2.5em}\vspace{-0.125em}
\intertext{\item
If the rooted graph~$(G,H)$ is not grounded, then}
\label{eq:main:strbal:non}
\vspace{-0.25em}
\lim_{n \to \infty} \Pr\Bigpar{\max_{\xx \in \oset{v_G}}|X_\xx - \mu| < \eps\mu} &=
\begin{cases}
0 & \text{if $\eps^2\mu \to 0$,} \\
1 & \text{if $\eps^2\mu \to \infty$.}
\end{cases}\hspace{2.5em}\vspace{-0.125em}%
\end{align}%
\end{romenumerate2}\vspace{-0.125em}%
\end{theorem}
\noindent
In concrete words, \eqref{eq:main:strbal}--\eqref{eq:main:strbal:non} of~\refT{thm_strictly_balanced} give
thresholds for the concentration of extension counts in terms of~$\eps^2\mu$,
similar to the thresholds in terms of the edge probability~$p$ that are well-known
for many properties of~${\mathbb G}_{n,p}$.
The role of the expression~$\eps^2 \mu$ in~\eqref{eq:main:strbal}--\eqref{eq:main:strbal:non}
can heuristically be explained via Chernoff-type bounds of the form~$\e^{-\Omega(\eps^2\mu)}$ on the tails~$\Pr(|X_\xx - \mu| \ge \eps \mu)$ of~$X_\xx$.
Indeed, considering the union bound over the~$\Theta(n^{v_G})$ roots~$\xx$, it then seems plausible that the $1$-statement follows when~$\eps^2\mu$ is at least a large enough multiple of~$\log n$.
An intuitive reason why the~$\log n$ factor is absent in the ungrounded threshold~\eqref{eq:main:strbal:non} is that here the~$X_{\xx}$ are strongly correlated and in fact almost equal (e.g., in example~(b) from~\refF{fig_primal} each~$X_{\xx}$ is well-approximated by the total number of triangles), so there should be no need to use a union~bound.
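In formulas: if a Chernoff-type bound~$\Pr(|X_\xx - \mu| \ge \eps\mu) \le \e^{-c_0\eps^2\mu}$ with a constant~$c_0 > 0$ were available, then the union bound would give
\begin{equation*}
\Pr\Bigpar{\max_{\xx \in \oset{v_G}}|X_\xx - \mu| \ge \eps\mu} \le \sum_{\xx \in \oset{v_G}} \Pr\bigl(|X_\xx - \mu| \ge \eps\mu\bigr) \le n^{v_G} \e^{-c_0\eps^2\mu},
\end{equation*}
which tends to zero whenever~$\eps^2\mu \ge C\log n$ with~$C > v_G/c_0$.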
The main contribution of \refT{thm_strictly_balanced}
is the $0$-statement in the grounded threshold~\eqref{eq:main:strbal},
which was missing in previous work:
our proof uses a careful second moment argument
(combining correlation inequalities and counting arguments with Janson's inequality)
in order to establish that, with high probability,
there exists a root~$\xx$ with~${X_\xx \ge (1+\eps)\mu}$, i.e., with too~many $(G,H)$-extensions.
This is closely related to the task of obtaining good lower bounds on ${\Pr(X_\xx \ge (1+\eps)\mu)}$, which are not as well understood as the corresponding upper bounds;
see~\cite{JR2002,JW,Ch19,SW18}.
To sidestep this conceptual obstacle, in \refS{s_strictly_balanced} we therefore work with (easier to estimate)
auxiliary events that enforce~${X_\xx \ge (1+\eps)\mu}$ via `disjoint' extensions,
and we believe that our approach might also be useful for establishing `lower bounds' in other~problems.
\pagebreak[3]
\subsection{Partial results: beyond the strictly balanced case}\label{ss_partial}
We also establish some threshold results for extension counts of rooted graphs~$(G,H)$ that are not necessarily strictly balanced.
Here things are more complicated, since we now need to take into account all subgraphs~$J \subseteq H$ containing the root~$G$,
in particular those that satisfy~$d(G,J) = m(G,H)$; cf.~\cite{S90a,S90b,R92,JLR}.
We call such subgraphs~$J$~\emph{primal},
and for brevity also say that~$J$~is~\emph{grounded} if~$(G,J)$~is~grounded.
The partial results Theorems~\ref{thm_unique}--\ref{thm_nogrounded} below cover all strictly balanced~$(G,H)$,
and they in particular imply that \refT{thm_strictly_balanced} also holds with~$\eps^2\Phi$ instead of~$\eps^2\mu$ (possibly after modifying the constants~$c,C,\alpha$), where
\begin{equation}\label{eq_PhiGH}
\Phi = \Phi_{G,H} := \min_{G \subseteq J \subseteq H : e_J > e_G} \mu_{G,J}.
\end{equation}
There is no contradiction here: the extra assumption~$\eps \ge n^{-\alpha}$ ensures that the conclusions of the {$0$-}~and {$1$-statements} of \refT{thm_strictly_balanced} coincide regardless of whether we use~$\eps^2\Phi$ or~$\eps^2\mu$ (cf.~\refS{sec:ext:non}).
It thus comes as no surprise that in our main result \refT{thm_strictly_balanced} the technical assumption~$\eps \ge n^{-\alpha}$ is indeed\footnote{For examples~(a) and~(b) from \refF{fig_primal} with~$\eps \asymp n^{-1/2}$ and~$\eps \asymp n^{-1}$, when~$p \asymp n^{-1/4}$ it is routine to check that~$\Phi \to \infty$, $\eps^2\Phi \to 0$ and~$\eps^2 \mu \gg \log n$ in both cases. Hence the~$0$-statement holds by~\eqref{eq:general} of \refT{thm_general}, showing that \refT{thm_strictly_balanced}~fails.}
necessary.
The following result covers the case where the unique primal subgraph of~$(G,H)$ is grounded, such as in examples~(a) and~(c) from~\refF{fig_primal};
this case includes the graphs in~\refT{thm_strictly_balanced}~(i).
\begin{theorem}[Unique and grounded primal case]\label{thm_unique}%
Let~$(G,H)$ be a rooted graph with a unique primal subgraph~$J$.
If~$(G,J)$ is grounded, then there are constants $c, C, \alpha > 0$ such that,
for all~$p=p(n) \in [0,1]$ and $\eps=\eps(n) \in [n^{-\alpha},1]$,
\begin{equation}\label{eq:thm:unique}
\lim_{n \to \infty} \Pr\Bigpar{\max_{\ii \in \oset{v_G}} |X_{\ii} - \mu| < \eps\mu} =
\begin{cases}
0 &\text{if } \eps^2 \Phi \le c \log n, \\
1 &\text{if } \eps^2 \Phi \ge C \log n.
\end{cases}\vspace{-0.125em}%
\end{equation}%
\end{theorem}
\noindent
The heuristic idea is that the main contribution to deviations of~$X_\xx=X_{G,H}(\xx)$ comes from those of~$X_{G,J}(\xx)$,
and, since~$(G,J)$ is strictly balanced and grounded, the problem thus intuitively reduces to \refT{thm_strictly_balanced}~(i).
\begin{figure}
\begin{center}
\hspace*{\fill}
\begin{tikzpicture}[thick,scale=1]
\foreach \x in {1, 2, ..., 3}
{
\draw[ultra thick] (-120*\x+30:0.5) node[vertex] (n\x) {} -- (-120*\x + 150:0.5);
}
\draw (n1) circle (4pt);
\node at (1,-0.5) () {(a)};
\end{tikzpicture}
\hspace*{\fill}
\begin{tikzpicture}[thick,scale=1]
\foreach \x in {1, 2, ..., 4}
{
\node[vertex] at (-90*\x:0.5) (n\x) {};
}
\foreach \x/\y in {2/3,3/4,4/2}
{
\draw[ultra thick] (n\x) -- (n\y);
}
\draw (n1) circle (4pt);
\node at (1,-0.5) () {(b)};
\end{tikzpicture}
\hspace*{\fill}
\begin{tikzpicture}[thick]
\foreach \x in {1, 2, ..., 3}
{
\draw[ultra thick] (-120*\x+30:0.5) node[vertex] (n\x) {} -- (-120*\x + 150:0.5);
}
\draw (n1) circle (4pt);
\draw (n3) -- +(0.7,0) node[vertex] (){};
\node at (1,-0.5) () {(c)};
\end{tikzpicture}
\hspace*{\fill}
\begin{tikzpicture}[thick]
\begin{scope}
\foreach \x in {1, 2, ..., 4}
{
\draw[ultra thick] (-90*\x:0.5) node[vertex] (n\x) {} -- (-90*\x + 90:0.5);
}
\end{scope}
\foreach \x/\y in {1/3,2/4}
{
\draw[ultra thick] (n\x) -- (n\y);
}
\foreach \x/\y in {4/5, 5/6, 6/7, 7/8, 8/9}
{
\draw (n\x) -- (0.5*\x - 1.5, 0) node[vertex] (n\y) {};
}
\draw (n9) circle (4pt);
\node at (1.5,-0.5) () {(d)};
\end{tikzpicture}\vspace{-1.125em}
\hspace*{\fill}
\end{center}
\caption{Examples of rooted graphs, with the root vertex circled and primal subgraphs marked in bold:
(a)~strictly balanced and grounded,
(b)~strictly balanced and not~grounded,
(c)~with a unique primal that is~grounded,
and
(d)~with a unique primal that is not~grounded.
Our main result \refT{thm_strictly_balanced} applies to~(a),(b),
\refT{thm_unique} applies to~(a),(c),
\refT{thm_nogrounded} applies to~(b),(d),
and \refT{thm_general} applies to all of~them.}
\label{fig_primal}
\end{figure}
The following result covers the case where no primal subgraph of~$(G,H)$ is grounded, such as in examples~(b) and~(d) from~\refF{fig_primal};
this case includes the graphs in~\refT{thm_strictly_balanced}~(ii).
\begin{theorem}[No grounded primals case]\label{thm_nogrounded}%
Let~$(G,H)$ be a rooted graph with no grounded primal subgraphs.
There is a constant~$\alpha > 0$ such that,
for all~$p=p(n) \in [0,1]$ and~$\eps=\eps(n) \in [n^{-\alpha},1]$,
\begin{equation}\label{eq:main:nogrounded}
\lim_{n \to \infty} \Pr\Bigpar{\max_{\ii \in \oset{v_G}} |X_{\ii} - \mu| < \eps\mu} =
\begin{cases}
0 &\text{if } \eps^2 \Phi \to 0, \\
1 &\text{if } \eps^2 \Phi \to \infty.
\end{cases}\vspace{-0.125em}%
\end{equation}%
\end{theorem}
\noindent
Similarly to~\refT{thm_strictly_balanced}~(ii),
the intuition is that all~$X_\xx$ are approximately equal once we know the number of unrooted copies of a certain subgraph of~$H$
(e.g., in example~(d) from \refF{fig_primal} this special subgraph~is~$K_4$).
Theorems~\ref{thm_unique}--\ref{thm_nogrounded} give thresholds for the concentration of extension counts in terms of~$\eps^2\Phi$.
For general~${(G,H)}$ we do not have such a threshold, but the following
result intuitively states that the transition from the \mbox{$0$-statement} to the \mbox{$1$-statement}
always happens at some point as~$\eps^2 \Phi$ changes from~$o(1)$ to~$n^{\Omega(1)}$.
\begin{theorem}[General case: approximate conditions]\label{thm_general}
Let~$(G,H)$ be a rooted graph.
For all~$p=p(n) \in [0,1]$ and~$\eps = \eps(n) \in (0,1]$ with~$1-p = \Omega(1)$ and~$\Phi \to \infty$,
\begin{equation}\label{eq:general}
\lim_{n \to \infty} \Pr\Bigpar{\max_{\xx \in \oset{v_G}}|X_\xx - \mu| < \eps\mu} =
\begin{cases}
0 &\text{if } \eps^2 \Phi \to 0, \\
1 &\text{if } \eps^2 \Phi = n^{\Omega(1)}.
\end{cases}\vspace{0.75em}%
\end{equation}%
\end{theorem}
\noindent
The $1$-statement in~\eqref{eq:general}
implies~\cite[Corollary~4]{S90b}, which in turn strengthens a result that
played a key role in the study of zero-one laws~\cite{SS1988} due to Shelah and Spencer
(since the `safe' assumptions from~\cite{S90b,SS1988} imply~$\Phi = n^{\Omega(1)}$
via~\refR{rem:Phibig}~\ref{eq:Phibig:iv} from \refS{s_prelim}).
\pagebreak[3]
\subsection{Discussion: open problems and cautionary examples}\label{sec:intro:general}
For rooted subgraph extension counts, the main open problem is to fully determine the thresholds for concentration,
i.e., to close the gap in~\eqref{eq:general} of \refT{thm_general}
(and to weaken the conditions of Theorems~\ref{thm_strictly_balanced}--\ref{thm_nogrounded}).
\begin{problem}\label{prb:open}%
Determine the `correct' conditions for the $0$-~and $1$-statements of any rooted graph~$(G,H)$.
\end{problem}
Our
understanding of~\refPr{prb:open} is still far from satisfactory.
Indeed, even for fixed~$\eps \in (0,1]$ the correct {$1$-statement} condition remains open,
which we now illustrate for the rooted graph~(e) from \refF{counterexample}.
In this case, any {$(G,H)$-extension} can be viewed as a combination of a {$(G,K_4)$-extension} and a {$(K_4,H)$-extension}.
The proof of Spencer's general $1$-statement \mbox{result~\cite[Theorem~3]{S90b}} combines this
decomposition with his strictly balanced result~\eqref{eq_Spencer_SB} for~$(G,K_4)$ and~$(K_4,H)$,
leading to a sufficient condition of the form~$\min\{\mu_{G,K_4}, \mu_{K_4,H}\} \ge K'(\eps) \log n$ (cf.~\cite[Section~2]{S90b}).
The following result shows that this sufficient condition can be weakened in some range, demonstrating that Spencer's general $1$-statement condition is not always optimal.
\begin{proposition}\label{prop:counterexample:2}%
Let~$(G,H)$ be the rooted graph~(e) depicted in \refF{counterexample}.
Set~$\omega := np^2$.
For all~$p=p(n) \in [0,1]$ and $\eps=\eps(n) \in (0,1]$
such that~$\omega \ll \log n$ and~$\eps^2 \omega^3 \gg \log n$,
we have~$\eps^2\mu_{G,K_4} \gg \log n \gg \eps^2 \mu_{K_4,H}$
but~$\Pr(\max_{\xx \in \oset{v_G}}|X_\xx - \mu| < \eps\mu) \to 1$ as~$n \to \infty$.
\end{proposition}
\begin{figure}
\begin{center}
\hspace*{\fill}
\begin{tikzpicture}[thick,scale=0.5]
\draw
(-1,0) node[vertex] (root) { } -- (1, 1) node[vertex] (top) { } -- (1, -1) node[vertex] (bot) { } -- (root)
(top) -- (0.2, 0) node[vertex] (mid) {} -- (bot)
(mid) -- (root)
(top) -- (2, 0) node[vertex] (add1) {} -- (bot);
\draw (root) circle (8pt);
\node at (3.5,-1) () {(e)};
\end{tikzpicture}
\hspace*{\fill}
\begin{tikzpicture}[thick,scale=0.5]
\draw
(-1,0) node[vertex] (root) { } -- (1, 1) node[vertex] (top) { } -- (1, -1) node[vertex] (bot) { } -- (root)
(top) -- (0.2, 0) node[vertex] (mid) {} -- (bot)
(mid) -- (root)
(top) -- (2, 0.5) node[vertex] (add1) {} -- (bot)
(top) -- (2, -0.5) node[vertex] (add2) {} -- (bot);
\draw (root) circle (8pt);
\node at (3.5,-1) () {(f)};
\end{tikzpicture}\vspace{-1.125em}
\hspace*{\fill}
\end{center}
\caption{The rooted graphs used in~Propositions~\ref{prop:counterexample:2}--\ref{prop:counterexample:1}, with the root vertex circled:
for~(e) Spencer's general $1$-statement is not~optimal,
and for~(f) the natural condition~$\eps^2\Phi \gg \log n$ does~not imply the~$1$-statement.}
\label{counterexample}
\end{figure}
\noindent
It is not hard to see that in the setting of \refP{prop:counterexample:2} we have $\eps^2\Phi \asymp \eps^2 \mu_{G, K_4} \gg \log n$, which together with Theorems~\ref{thm_unique}--\ref{thm_nogrounded} suggests that maybe~$\eps^2 \Phi \gg \log n$ is always a sufficient condition\footnote{Further support comes from the fact that~$X_\xx$ is asymptotically normal, see~\refCl{cl:mom}~\ref{cl:mom:asymp} in \refApp{apx:general} and the variance estimate~\eqref{eq:Variance} from \refS{s_prelim}, which makes it plausible that~$\Pr(|X_\xx - \mu| \ge \eps\mu) \le \e^{-\Omega((\eps \mu)^2/\Var X_\xx)} \le \e^{-\Omega(\eps^2 \Phi)} \ll n^{-v_G}$, which in turn would then establish the $1$-statement by taking the union bound over all~$\Theta(n^{v_G})$ roots~$\xx$.}
for the $1$-statement (which would sharpen~\refT{thm_general}).
However, the following cautionary result shows that this speculation is false for the rooted graph~(f) depicted in~\refF{counterexample},
indicating that Problem~\ref{prb:open} is more tricky than one might~think.
\begin{proposition}\label{prop:counterexample:1}
Let~$(G,H)$ be the rooted graph~(f) depicted in \refF{counterexample}.
Set~$\omega := np^2$.
For all~$p=p(n) \in [0,1]$ and~$\eps=\eps(n) \in (0,1]$
such that~$\omega \ll (\log n)^{0.39}$ and~$\eps^2 \omega^3 \gg \log n$,
we have~$\eps^2\Phi \asymp \eps^2\mu_{G,K_4} \gg \log n$
but~$\Pr(\max_{\xx \in \oset{v_G}}|X_\xx - \mu| < \eps\mu) \to 0$ as~$n \to \infty$.
\end{proposition}
Overall, we hope that the above intriguing examples and open problems
will stimulate more research into rooted subgraph counts.
When~$(G,H)$ is strictly balanced and grounded,
we conjecture that~\eqref{eq:thm:unique} holds for suitable~$c,C>0$
under the natural assumptions~$\mu \to \infty$ and~$1-p=\Omega(1)$,
i.e., without assuming~$\eps \ge n^{-\alpha}$.
We leave it as an open problem to formulate a conjecture for the general solution to~\refPr{prb:open},
which in many cases is closely related to determining the regime where $\Pr(|X_\xx - \mu| \ge \eps\mu)$ changes from~$n^{-o(1)}$ to~$n^{-\omega(1)}$, say.
In the concluding remarks we also discuss a potential connection to extreme value theory (see \refS{sec:conclusion}).
\subsection{Organization of the paper}
In \refS{s_prelim} we introduce some auxiliary results, which also imply~\refT{thm_general}.
In \refS{s_strictly_balanced} we prove our main result \refT{thm_strictly_balanced}~(i) for strictly balanced~$(G,H)$ that are grounded.
In Sections~\ref{s_nogrounded} and~\ref{s_unique} we prove Theorems~\ref{thm_unique}--\ref{thm_nogrounded},
covering the case with no grounded primal subgraph and the case of a unique grounded primal subgraph.
In \refS{sec:ext:non} we then prove \refT{thm_strictly_balanced}~(ii) for strictly balanced~$(G,H)$ that are not grounded.
In \refS{sec:counterexample} we prove the cautionary examples from Propositions~\ref{prop:counterexample:2}--\ref{prop:counterexample:1}.
Finally, \refS{sec:conclusion} contains some concluding remarks and~problems.
\pagebreak[3]
\section{Preliminaries}\label{s_prelim}
In this preliminary section we collect some useful basic observations,
and a partial result which implies Theorem~\ref{thm_general}.
First, by adapting the textbook argument~\cite[Lemma~3.5]{JLR} for (unrooted) subgraph counts,
for any rooted graph~$(G,H)$ it is standard to see that the variance of~$X_{G,H}(\xx)$ satisfies
\begin{equation}\label{eq:Variance}
\sigma^2 = \sigma_{G,H}^2 := \Var X_{G,H}(\xx)
\asymp (1 - p)\mu_{G,H}^2 /\Phi_{G,H}
\end{equation}
for any edge probability~$p=p(n) \in (0,1]$,
where~$\mu=\mu_{G,H}$ and~$\Phi=\Phi_{G,H}$ are defined as in~\eqref{def:muGH} and~\eqref{eq_PhiGH};~cf.~\cite{Matas2012phd}.
Next, inspired by similar statements for subgraph counts~\cite[Lemma~3.6]{JLR},
using that~$\mu_{G,J} \asymp \xpar{n^{1/d(G,J)}p}^{e_J-e_G}$ for all~$G \subseteq J \subseteq H$ with~$e_J > e_G$,
it is straightforward to establish the following useful~properties.
\begin{remark}\label{rem:Phibig}%
For any rooted graph~$(G,H)$, the following holds for all~$p=p(n)\in [0,1]$:
\begin{romenumerate}
\item\label{eq:Phibig:i}
$\Phi \to \infty$ is equivalent to~$p \gg n^{-1/m(G,H)}$.
\item\label{eq:Phibig:ii}
$\Phi = \Omega(1)$ is equivalent to~$p = \Omega(n^{-1/m(G,H)})$.
\item\label{eq:Phibig:iii}
If~$\Phi \asymp 1$, then~$\mu_{G,J} \asymp 1$ for any~$G \subseteq J \subseteq H$ that is primal for~$(G,H)$.
\item\label{eq:Phibig:iv}
If~$p = \Omega(n^{-1/m(G,H) + \eta})$ for some constant~$\eta \ge 0$, then~$\Phi = \Omega(n^{\eta})$.
\end{romenumerate}
\end{remark}
\noindent
Finally, the approximate result Theorem~\ref{thm_general} immediately follows
from the following slightly more general theorem,
whose technical statement will be convenient in several later proofs.
In particular, in some ranges of the parameters, we will be able to deduce the desired $1$-~or $0$-statements directly from~\eqref{eq:thm_generaltail:1}--\eqref{eq:thm_generaltail:0} below.
\begin{theorem}\label{thm_generaltail}%
For any rooted graph~$(G,H)$, the following holds for all~$p=p(n)\in [0,1]$:
\begin{romenumerate}
\item\label{thm_tail1}%
If~$\Phi = \Omega(1)$ and $(t/\mu)^2\Phi\ge n^{\Omega(1)}$, then
\begin{equation}\label{eq:thm_generaltail:1}
\lim_{n \to \infty} \Pr\Bigpar{\max_{\xx \in \oset{v_G}}|X_\xx - \mu| < t} = 1. \hspace{2.5em}
\end{equation}%
\item\label{thm_tail0}%
If $\eps = \eps(n) \in (0,1]$ and either (a)~$\Phi(1-p) \to \infty$ and $\eps^2 \Phi/(1-p) \to 0$, or~(b)~$\Phi \to 0$, then
\begin{equation}\label{eq:thm_generaltail:0}
\lim_{n \to \infty} \Pr\Bigpar{\max_{\xx \in \oset{v_G}}|X_\xx - \mu| \ge \eps \mu} = 1 . \hspace{2.5em}
\vspace{-0.125em}%
\end{equation}%
\end{romenumerate}
\end{theorem}
\begin{remark}\label{rem:thm_generaltail}%
In~\ref{thm_tail1}, the conclusion~\eqref{eq:thm_generaltail:1} holds with probability~$1 - o(n^{-\tau})$ for any constant~$\tau > 0$.
\end{remark}
\noindent
We defer the simple proof of Theorem~\ref{thm_generaltail} to Appendix~\ref{apx:general}, and only mention the main ideas here.
Claim~\ref{thm_tail0} exploits that~$X_\xx$ is asymptotically normal in a wide range.
Claim~\ref{thm_tail1} is based on Markov's inequality and a central moment estimate $\E (X_\xx - \mu)^{2m} \le C_m \sigma^{2m} \le D_m (\mu^2/\Phi)^m$ that is a by-product of the usual asymptotic normality proof via the method of moments (see Claim~\ref{cl:mom} in Appendix~\ref{apx:general}).
This approach for obtaining tail estimates `without much effort' does not seem to be as widely known in probabilistic combinatorics as it deserves,
and we believe that it will be useful in other applications
(e.g., it yields a simple direct proof of~\cite[Corollary~4]{S90b}). %
\section{Strictly balanced and grounded case (Theorem~\ref{thm_strictly_balanced})}\label{s_strictly_balanced}
In this section we prove the threshold~\eqref{eq:main:strbal} of Theorem~\ref{thm_strictly_balanced}~(i) for strictly balanced rooted graphs~$(G,H)$ that are grounded (see \refS{sec:ext:non} for the less interesting ungrounded case).
The~$0$-statement in~\eqref{eq:main:strbal} is the main difficulty, and here the plan is to
use a second moment argument to show the existence of a root~$\xx \in \oset{v_G}$ with too many $(G,H)$-extensions, i.e., with~$X_{\xx} \ge (1+\eps)\mu$.
Unfortunately, even an asymptotic estimate of the relevant first moment is challenging,
since the upper tail proba\-bi\-li\-ty~$\Pr(X_{\xx} \ge (1+\eps)\mu)$ is hard to estimate up to a~$1+o(1)$ factor
(this is an instance of the `infamous' upper tail problem~\cite{JR2002,SW18}).
To sidestep this technical difficulty, we instead show the existence of a root~${\xx \in \oset{v_G}}$
which attains~$X_{\xx}=\ceil{(1+\eps)\mu}$ due to exactly~$\ceil{(1+\eps)\mu}$ extensions that are vertex-disjoint outside of~$\xx$.
The crux is that these auxiliary events are more tractable: we can estimate the relevant first and second moments up to the required~$1+o(1)$ factors
via a careful mix of Harris' inequality~\cite{Harris}, Janson's inequality~\cite{J90,BS,RiordanWarnke2015}, and counting arguments.
It turns out that here the extra assumption~$\eps \ge n^{-\alpha}$ is helpful:
it will allow us to focus on fairly small edge probabilities~$p=p(n)$ close to~$n^{-1/d(G,H)}$,
which intuitively makes it easier to show that various events are approximately independent (as tacitly required by the second moment method);
see~\refS{sec:0statement} for the~details.
The~$1$-statement in~\eqref{eq:main:strbal} is simpler (and nowadays fairly routine).
For edge probabilities~$p=p(n)$ that are close to~$n^{-1/d(G,H)}$, we use a standard union bound argument, estimating the lower tail~$\Pr(X_{\xx} \le (1-\eps)\mu)$ via Janson's inequality~\cite{J90,JLR,RiordanWarnke2015} and the upper tail~$\Pr(X_{\xx} \ge (1+\eps)\mu)$ via an inequality of Warnke~\cite{WUT}.
For edge probabilities~$p=p(n)$ much larger than~$n^{-1/d(G,H)}$,
it turns out that we can simply use the partial result \refT{thm_generaltail}~\ref{thm_tail1} due to the extra assumption~$\eps \ge n^{-\alpha}$;
see~\refS{sec:1statement} for the~details.
\subsection{Technical preliminaries}\label{s_strictly_balanced:prelim}
Our upcoming arguments exploit two standard properties of strictly balanced rooted graphs:
(i)~that, for fairly small edge probabilities~$p=p(n)$, the expectation~$\mu=\mu_{G,H}$ is significantly smaller than any other expectation~$\mu_{G,J}$ with~$G \subsetneq J \subsetneq H$ (note that~$\mu_{G,H}/\mu_{G,J} \asymp n^{v_H - v_J}p^{e_H - e_J} \ll 1$ via~\eqref{eq:lem:density:subs} below),
and (ii)~that, after removing the root vertices~$G$ from~$H$, the remaining graph~${H - V(G)}$ is connected.
Both mimic well-known properties from the unrooted case,
so we defer the routine
proof of \refL{lem:StrBal} to~\refS{s_strictly_balanced:prelim:deferred}.
\begin{lemma}\label{lem:StrBal}%
For any strictly balanced rooted graph~$(G,H)$, the following holds:
\begin{romenumerate}
\item\label{eq:StrBal:density}
There is a constant $\beta > 0$ such that, for all~$p=p(n) \in [0,1]$ with~$p= O(n^{-1/d(G,H) + \beta})$,
\begin{equation}
\label{eq:lem:density:subs}
\max_{G \subsetneq J \subsetneq H} n^{v_H - v_J}p^{e_H - e_J} \ll n^{-\beta}. \hspace{2.5em}
\vspace{-0.125em}%
\end{equation}%
\item\label{eq:StrBal:connected}
The graph~${H - V(G)}$, obtained from~$H$ by deleting the vertices of~$G$, is connected.
\end{romenumerate}
\end{lemma}
\subsection{The $0$-statement}\label{sec:0statement}
Our second moment based proof of the $0$-statement{} in~\eqref{eq:main:strbal} of Theorem~\ref{thm_strictly_balanced} hinges on the following key lemma.
Given a root~$\xx \in \oset{v_G}$, let~$\cE_{\xx}$ denote the event that, in~${\mathbb G}_{n,p}$, the root~$\xx$ has
exactly~$\dex:= \ceil{(1 + \eps)\mu}$ pairwise \emph{vertex-disjoint} $(G,H)$-extensions (i.e., sharing no vertices outside~$\xx$), and no~other $(G,H)$-extensions.
We also say that two roots~$\xx_1, \xx_2 \in \oset{v_G}$ are \emph{disjoint} if they share no elements as (unordered) sets.
\begin{lemma}\label{lem:main}%
Let $(G,H)$ be a rooted graph that is strictly balanced and grounded.
There are constants~$c, \gamma > 0$ such that,
for all~$\eps=\eps(n) \in (0,1]$ and~$p=p(n) \in [0,1]$ with~$p \le n^{-1/d(G,H) + \gamma}$, $\mu \ge 1/2$ and~$\eps^2\mu \le c \log n$,
the following holds:
for all roots~$\xx \in \oset{v_G}$ we have
\begin{equation}\label{eq:pr:lb}
\Pr(\cE_{\xx}) \gg n^{-1/2},
\end{equation}
and for all disjoint roots~$\xx_1,\xx_2 \in \oset{v_G}$ we have
\begin{equation}\label{eq:pr:ub}
\Pr(\cE_{\xx_1}, \: \cE_{\xx_2}) \le (1 + o(1)) \Pr(\cE_{\xx_1})\Pr(\cE_{\xx_2}).
\end{equation}
\end{lemma}
\begin{proof}[Proof of the $0$-statement{} in~\eqref{eq:main:strbal} of Theorem~\ref{thm_strictly_balanced} (assuming Lemma~\ref{lem:main})]
Let~$c, \gamma>0$ be the constants given by Lemma~\ref{lem:main}.
Fix arbitrary $0 < \alpha < \gamma/2$.
First, when~$p > n^{-1/d(G,H) + \gamma}$, then~$\eps \ge n^{-\alpha}$ and \refR{rem:Phibig}~\ref{eq:Phibig:iv}
imply~$\eps^2\mu \ge n^{-2\alpha} \cdot \Phi_{G,H} = \Omega(n^{\gamma - 2\alpha}) \gg \log n$, so the condition of the $0$-statement{} cannot be satisfied and hence there is nothing to prove.
Next, when~$\mu < 1/2$, then~$(1+\eps) \mu \le 2 \mu < 1$ and~$\eps \le 1$ imply that the interval~$\left((1-\eps)\mu, (1 + \eps)\mu\right)$ contains no integers, and so the $0$-statement{} again holds trivially.
Henceforth we thus can assume~$\mu \ge 1/2$ and~$p \le n^{-1/d(G,H) + \gamma}$, as required by Lemma~\ref{lem:main}.
For convenience, we set~$s := \lfloor n / v_G \rfloor \asymp n$, and choose disjoint roots~$\xx_1, \dots, \xx_s \in \oset{v_G}$.
Writing~$Y := |\left\{ i \in [s] : \cE_{\xx_i} \text { holds} \right\}|$, to prove the $0$-statement{} of Theorem~\ref{thm_strictly_balanced} we shall now show that~$Y > 0$~\whp{}.
Namely, using~\eqref{eq:pr:lb} we obtain~$\E Y = \sum_{1 \le i \le s}\Pr(\cE_{\xx_i}) \gg s \cdot n^{-1/2} \asymp n^{1/2} \to \infty$,
and together with~\eqref{eq:pr:ub} it follows~that
\begin{equation*}\label{eq:mu2:ub}
\begin{split}
\E Y^2 & \le \sum_{1 \le i ,j \le s: \; i \neq j} \Pr(\cE_{\xx_i}, \: \cE_{\xx_j}) + \sum_{1 \le i \le s} \Pr(\cE_{\xx_i}) \; \le \; (1 + o(1)) \cdot (\E Y)^2 + \E Y \; \sim \; (\E Y)^2.
\end{split}
\end{equation*}
Now Chebyshev's inequality readily yields $\Pr(Y=0) \le \Var Y/(\E Y)^2 \to 0$, completing the proof.
\end{proof}
The remainder of Section~\ref{sec:0statement} is dedicated to the proof of Lemma~\ref{lem:main}.
For concreteness, for~$\beta>0$ as given by \refL{lem:StrBal}~\ref{eq:StrBal:density}, we choose the constants~$\gamma, c \in (0,1)$ such that
\begin{equation}\label{eq:gammadef}
\gamma e_H \; < \; \min\bigcpar{\beta/v_H, \: \beta/2, \: 1/2, \: 1-c}.
\end{equation}
Recalling~$\mu \asymp n^\vGH p^\eGH$ and~$\eps \le 1$, using the assumptions~$\mu \ge 1/2$ and~$p \le n^{-1/d(G,H) + \gamma}$ we infer
\begin{equation}\label{eq:mumupper}
1/2 \: \le \: \mu \: \le \: \dex = \ceil{(1 + \eps)\mu} \: \le \: O(n^{\gamma e_H}) \ll \min\bigcpar{n^{1/2},n^{\beta/2}} ,
\end{equation}
with room to spare.
With foresight, given~$\xx \in \oset{v_G}$, we denote by~$N = N_{G,H}(\xx)$ the number of $(G,H)$-extensions of~$\xx$ in~$K_n$.
Note that $N \asymp n^{\vGH}$ does not depend on the particular choice of~$\xx$.
\subsubsection{The first moment: inequality~\eqref{eq:pr:lb}}\label{sec:first}
We start with~\eqref{eq:pr:lb}, i.e., a lower bound for~$\Pr(\cE_{\xx})$.
Recall that every $\xx \in \oset{v_G}$ has~$N$ extensions in~$K_n$.
The plan is to show that~$\Pr(\cE_{\xx})$ is comparable to $\Pr(\Bin(N,p^\eGH) = \dex)$, more precisely~that
\begin{equation}\label{eq:pr:lb:bin}
\Pr(\cE_{\xx}) \; \ge \; (1+o(1)) \cdot \binom{N}{\dex} p^{(\eGH)\dex} (1-p^\eGH)^{N - \dex} .
\end{equation}
In view of $\dex \approx (1+\eps)\mu = (1 + \eps) Np^\eGH$,
using a standard local limit theorem
(or alternatively Stirling's formula)
it then will be routine to deduce that~\eqref{eq:pr:lb:bin} is~$\Theta(\mu^{-1/2}) \cdot \e^{-\Theta(\eps^2\mu)}$,
which together with~\eqref{eq:gammadef}--\eqref{eq:mumupper} and the assumption~$\eps^2\mu \le c\log n$
eventually establishes the desired inequality~\eqref{eq:pr:lb}; see \eqref{eq:MoivreLaplace}--\eqref{eq:MoivreLaplace2}~below.
Turning to the technical details, given $\xx \in \oset{v_G}$,
let~$\fH(\xx)$ denote the set of all (unordered) collections of~$\dex$ vertex-disjoint $(G,H)$-extensions of~$\xx$.
Given~$\cC \in \fH(\xx)$, let~$\cC^c$ denote the remaining~${N - \dex}$ extensions of~$\xx$.
Writing~$\cI_{\cS}$ for the event that all extensions in~$\cS$ are present in~${\mathbb G}_{n,p}$,
and~$\cD_{\cS}$ for the event that none of the extensions in~$\cS$ is present in~${\mathbb G}_{n,p}$,
note that
\begin{equation}\label{eq:er}
\Pr(\cE_{\xx}) = \sum_{\cC \in \fH(\xx)} \Pr(\cI_{\cC} , \: \cD_{\cC^c}) = \sum_{\cC \in \fH(\xx)} \Pr(\cI_{\cC}) \Pr(\cD_{\cC^c} \mid \cI_{\cC}) \ge |\fH(\xx)| \min_{\cC \in \fH(\xx)} \Pr(\cI_{\cC})\Pr(\cD_{\cC^c} \mid \cI_{\cC}).
\end{equation}
Since~$N \asymp n^{\vGH}$ and~$\dex \ll n^{1/2}$ (see~\eqref{eq:mumupper}),
routine calculations give
\begin{equation}\label{eq:Hr}
|\fH(\xx)| = \frac{\left(N- O(\dex n^{\vGH-1})\right)^\dex}{\dex!} = \frac{N^\dex}{\dex!} \cdot \e^{O(\dex^2/n)}
\sim \frac{N^\dex}{\dex!} \sim \binom{N}{\dex} .
\end{equation}
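For the reader's convenience, we spell out the routine calculation behind~\eqref{eq:Hr}: choosing the~$\dex$ vertex-disjoint extensions one at a time, each previously chosen extension blocks at most~$v_H$ vertices, and each blocked vertex rules out at most~$O(n^{\vGH-1})$ of the~$N$ extensions, so every step leaves~$N - O(\dex n^{\vGH-1})$ admissible choices (and we divide by~$\dex!$ since the collections are unordered). Since~$N \asymp n^{\vGH}$ and~$\dex \ll n^{1/2}$ by~\eqref{eq:mumupper}, the resulting error factor satisfies
\begin{equation*}
\Bigpar{1 - O\bigpar{\dex n^{\vGH-1}/N}}^{\dex} = \Bigpar{1 - O\bigpar{\dex/n}}^{\dex} = \e^{O(\dex^2/n)} = 1 + o(1) .
\end{equation*}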
Since the extensions in $\cC \in \fH(\xx)$ are disjoint, we have
\begin{equation}\label{eq:I}
\Pr(\cI_{\cC})= p^{(\eGH)\dex}.
\end{equation}
For the remaining lower bound on~$\Pr(\cD_{\cC^c} | \cI_{\cC})$,
the idea is to apply Harris' inequality~\cite{Harris}, and
then use \refL{lem:StrBal}~\ref{eq:StrBal:density} to show that the effect of `overlapping' pairs of extensions is negligible.
\begin{claim}\label{cl:D:lower}%
Let~$\xx \in \oset{v_G}$. Then, for all~$\cC \in \fH(\xx)$, we have
\begin{equation}\label{eq:D:lower}
\Pr(\cD_{\cC^c} | \cI_{\cC}) \; \ge \; (1+o(1)) \cdot (1-p^\eGH)^{N-\dex} .
\end{equation}
\end{claim}
\begin{proof}%
Defining the auxiliary graph~$F := \bigpar{[n], \; \bigcup\{E(H_1): H_1 \in \cC\}}$,
note that every extension $H_2 \in \cC^c$ contains at least one edge not in~$F$
(since by \refL{lem:StrBal}~\ref{eq:StrBal:connected},
after deleting the root vertices~$\xx$, all graphs in~$\{H_1 -\xx: H_1 \in \cC\}$ are vertex-disjoint and connected).
Since~$1-x \ge \e^{-2x}$ for~$x \le 1/2$, using Harris' inequality it routinely follows~that
\begin{equation} \label{eq:Harris}
\Pr(\cD_{\cC^c} | \cI_{\cC}) \ge \prod_{H_2 \in \cC^c} (1-p^{\eGH - e(H_2 \cap F)})
\ge (1-p^{\eGH})^{N - \dex} \cdot \exp \Big( -2 \hspace{-0.25em} \sum_{\substack{H_2 \in \cC^c: \\ e(H_2 \cap F) \ge 1}} \hspace{-0.5em} p^{\eGH - e(H_2 \cap F)}\Big) .
\end{equation}
To estimate the sum in \eqref{eq:Harris}, note that if~$H_2 \in \cC^c$ shares an edge with~$F$,
then~$E(H_2 \cap F)$ corresponds to a $(G,J)$-extension of~$\xx$ for some~$G \subsetneq J \subsetneq H$.
The number of such extensions is at most~$(v_H\dex)^{v_J-v_G} = O(\dex^{v_H})$, with room to spare.
Given a $(G,J)$-extension, it can be further extended to some~$H_2 \in \cC^c$ in at most~$n^{v_H-v_J}$ ways.
Using~$e_H-e_G-(e_J-e_G)=e_H-e_J$ together with~\eqref{eq:mumupper} and \eqref{eq:lem:density:subs},
it follows that
\begin{equation}\label{eq:Harris:overlap}
\sum_{\substack{H_2 \in \cC^c: \\ e(H_2 \cap F) \ge 1}} \hspace{-0.5em} p^{\eGH - e(H_2 \cap F)} \le
\sum_{G \subsetneq J \subsetneq H} \hspace{-0.375em} O\Bigpar{\dex^{v_H} n^{v_H-v_J} \cdot p^{e_H-e_J}}
\ll n^{\gamma e_H v_H - \beta} = o(1),
\end{equation}
which together with~\eqref{eq:Harris} establishes inequality~\eqref{eq:D:lower}.
\end{proof}
Combining estimates~\eqref{eq:er}--\eqref{eq:D:lower}, we readily obtain inequality~\eqref{eq:pr:lb:bin}.
To establish~\eqref{eq:pr:lb}, it remains to estimate the right-hand side of~\eqref{eq:pr:lb:bin}
via a standard local limit theorem for the binomial distribution, namely~\cite[Theorem~1 in~Section~VII.3]{Feller}.
The quantity~$k$ in~\cite{Feller} translates in our setting to~${k := {\dex - \lfloor(N + 1)p^\eGH\rfloor}={\eps\mu+O(1)}}$
(what is~$m$ in~\cite{Feller} is~$\lfloor(N+1)p^\eGH \rfloor$ in our case),
and thus~$k \le \dex \ll N^{2/3}$ by~\eqref{eq:mumupper}.
Hence the aforementioned local limit theorem from~\cite{Feller} applies,
which in view of~$\mu=Np^\eGH$ gives
\begin{equation}\label{eq:MoivreLaplace}
\binom{N}{\dex} p^{(\eGH)\dex} (1-p^\eGH)^{N-\dex} \; \sim\; \frac{1}{\sqrt{2\pi \mu(1-p^\eGH)}} \cdot \exp\biggpar{- \frac{k^2}{2\mu(1 - p^\eGH)}} .
\end{equation}
Using~\eqref{eq:mumupper} we readily infer~$k^2/\mu
= \eps^2\mu + O(1)$.
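Spelled out, since~$k = \eps\mu + O(1)$ we have
\begin{equation*}
\frac{k^2}{\mu} = \frac{(\eps\mu + O(1))^2}{\mu} = \eps^2\mu + O(\eps) + O(1/\mu) = \eps^2\mu + O(1) ,
\end{equation*}
using~$\eps \le 1$ and~$\mu \ge 1/2$ in the last step.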
Note that~\eqref{eq:gammadef} implies~$p^\eGH \le n^{-(v_H-v_G) + \gamma(\eGH)} \le n^{-1 + 1/2}= n^{-1/2} \to 0$.
Using the estimates~\eqref{eq:mumupper} and~$\eps^2\mu \le c \log n$ together with~$\gamma e_H + c < 1$ (see~\eqref{eq:gammadef}),
it now follows that~\eqref{eq:MoivreLaplace} is at least
\begin{equation}\label{eq:MoivreLaplace2}
\Omega\bigpar{\mu^{-1/2}} \cdot \exp \Bigpar{-\bigpar{1+O(p^\eGH)} \tfrac{1}{2}\eps^2\mu}
\; \ge \;
\Omega(1) \cdot \exp \Bigpar{-\tfrac{\gamma e_H + c}{2}\log n} \gg n^{-1/2},
\end{equation}
which together with~\eqref{eq:pr:lb:bin} completes the proof of inequality~\eqref{eq:pr:lb} from Lemma~\ref{lem:main}.
\subsubsection{The second moment: inequality~\eqref{eq:pr:ub}}
Now we turn to~\eqref{eq:pr:ub}, i.e., an upper bound for $\Pr\left( \cE_{\xx_1}, \cE_{\xx_2} \right)$ when~$\xx_1$, $\xx_2$ are disjoint.
Recalling~\eqref{eq:er}, note~that
\begin{equation}\label{eq:EE00}
\Pr(\cE_{\xx_1} , \: \cE_{\xx_2}) = \sum_{\cC_1 \in \fH(\xx_1)}\sum_{\cC_2 \in \fH(\xx_2, \cC_1)} \Pr(\cI_{\cC_1 \cup \cC_2} , \: \cD_{\cC_1^c \cup \cC_2^c}) ,
\end{equation}
where we (with foresight) define
\begin{equation}\label{eq:HRC}
\fH(\xx_2, \cC_1) := \bigcpar{ \cC_2 \in \fH(\xx_2) : \ \Pr(\cI_{\cC_1 \cup \cC_2} , \: \cD_{\cC_1^c \cup \cC_2^c})>0 }.
\end{equation}
Guided by the heuristic that the various events are approximately independent,
the plan is to show~that
\begin{equation}\label{eq:EE01}
\Pr(\cI_{\cC_1 \cup \cC_2} , \: \cD_{\cC_1^c \cup \cC_2^c})
\; \le \; (1+o(1)) \Pr(\cI_{\cC_1} , \: \cD_{\cC_1^c}) \cdot \Pr(\cI_{\cC_2}, \: \cD_{\cC_2^c}) ,
\end{equation}
though the actual details will be slightly more involved.
Ignoring these complications for now, note that~\eqref{eq:EE01} would together with~\eqref{eq:EE00}, \eqref{eq:er} and~$\fH(\xx_2, \cC_1) \subseteq \fH(\xx_2)$ indeed imply the desired inequality~\eqref{eq:pr:ub}.
Turning to the technical details, since $\cI_{\cC_1 \cup \cC_2}$ is an increasing event and $\cD_{\cC_1^c \cup \cC_2^c}$ is a decreasing event, using~\eqref{eq:EE00} and Harris' inequality~\cite{Harris} we obtain
\begin{equation}\label{eq:EE}
\begin{split}
\Pr(\cE_{\xx_1}, \: \cE_{\xx_2}) \le \sum_{\cC_1 \in \fH(\xx_1)}\sum_{\cC_2 \in \fH(\xx_2, \cC_1)} \Pr(\cI_{\cC_1 \cup \cC_2}) \Pr(\cD_{\cC_1^c \cup \cC_2^c}) .
\end{split}
\end{equation}
Recalling that every~$\xx \in \oset{v_G}$ has~$N$ extensions in~$K_n$,
Harris' inequality also readily gives~$\Pr(\cD_{\cC_1^c \cup \cC_2^c}) \ge (1 - p^\eGH)^{2(N-\dex)}$.
We will now prove an asymptotically matching upper bound that
does \emph{not} depend on the choice of $\cC_1$ and~$\cC_2$ (similarly as in Claim~\ref{cl:D:lower}).
Here the idea is to apply a form of Janson's inequality~\cite{BS,JLR,AS},
and then again use \refL{lem:StrBal}~\ref{eq:StrBal:density} to argue that `overlaps' have negligible contribution.
\begin{claim}\label{cl:D2:upper}%
Let~$\xx_1,\xx_2 \in \oset{v_G}$ be disjoint.
Then, for all~$\cC_1 \in \fH(\xx_1)$ and~$\cC_2 \in \fH(\xx_2)$, we~have
\begin{equation}\label{eq:D2:upper}
\Pr(\cD_{\cC_1^c \cup \cC_2^c}) \; \le \; (1 + o(1)) \cdot (1-p^\eGH)^{2(N - \dex)} .
\end{equation}
\end{claim}
\begin{proof}%
Let~$\cF$ be the family of~$(e_H - e_G)$-element edge-sets corresponding to extensions in~$\cC_1^c \cup \cC_2^c$ (each extension of~$\xx_i$ is uniquely determined by its edge-set, since~$H$ has no isolated vertices outside of~$V(G)$ by \refL{lem:StrBal}~\ref{eq:StrBal:connected}).
Note that if an extension in~$\cC_1^c$ is also an extension in~$\cC_2^c$, then it must contain some vertex from~$\xx_2$ (because~$(G,H)$ is grounded).
Since~$\xx_1, \xx_2$ are disjoint, the number of such duplicate extensions is~$O(n^{v_H - v_G - 1})$, so that~$|\cF| \ge 2(N - \dex) - O(n^{v_H - v_G - 1})$.
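For concreteness, this count can be seen as follows: each duplicate extension is an extension of~$\xx_1$ whose~$v_H - v_G$ non-root vertices include at least one of the~$v_G$ vertices of~$\xx_2$, so its vertex set can be chosen in at most
\begin{equation*}
(v_H - v_G) \cdot v_G \cdot n^{v_H - v_G - 1} = O\bigpar{n^{v_H - v_G - 1}}
\end{equation*}
ways (choosing the position of the shared vertex, the vertex of~$\xx_2$ it equals, and the remaining vertices).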
Note that the event~$\cD_{\cC_1^c \cup \cC_2^c}$ implies~$\sum_{E \in \cF} \indic{E \subseteq {\mathbb G}_{n,p}} = 0$.
Since~$(1-x)^{-1} \le \e^{2x}$ for~$x \le 1/2$,
by invoking the Boppana--Spencer~\cite{BS} variant of Janson's inequality (see, e.g.,~\cite[Remark~2.20]{JLR} or~\cite[Theorem~8.1.1]{AS})
it then follows~that
\begin{equation}
\label{eq:Janson}
\Pr(\cD_{\cC_1^c \cup \cC_2^c})
\le (1-p^\eGH)^{|\cF|} \cdot \e^{2\Delta}
\le (1-p^\eGH)^{2(N - \dex)} \cdot \e^{O(n^{v_H - v_G - 1}p^{\eGH}+\Delta)},
\end{equation}
where
\begin{equation}\label{eq:Janson:Delta}
\Delta := \sum_{\substack{(E_1, E_2) \in \cF\times\cF:\\ 1 \le |E_1 \cap E_2| < \eGH}} \hspace{-0.75em} p^{|E_1 \cup E_2|}.
\end{equation}
Using~$\mu=Np^\eGH \asymp n^{\vGH}p^\eGH$ together with~\eqref{eq:mumupper}, it follows that
\begin{equation}\label{eq:Janson:approx}
n^{v_H - v_G - 1}p^\eGH \asymp \mu \cdot n^{-1} \ll n^{1/2 - 1} = o(1).
\end{equation}
Turning to the~$\Delta$-term, note that~$|\cF|p^\eGH \le 2(N - \dex)p^\eGH \le 2 \mu$.
By proceeding analogously to the estimates in~\eqref{eq:Harris}--\eqref{eq:Harris:overlap}, using \eqref{eq:mumupper} and \eqref{eq:lem:density:subs} it routinely follows~that
\begin{equation}\label{eq:Janson:Delta:bound}
\Delta
\le \sum_{E_1 \in \cF} p^{\eGH}\hspace{-0.75em}\sum_{\substack{E_2 \in \cF:\\ 1 \le |E_1 \cap E_2| < \eGH}}\hspace{-1.0em} p^{\eGH-|E_1 \cap E_2|}
\le O\Bigpar{\mu \cdot \sum_{G \subsetneq J \subsetneq H} n^{v_H - v_J}p^{e_H - e_J}}
= o(1),
\end{equation}
which together with~\eqref{eq:Janson}--\eqref{eq:Janson:approx} establishes inequality~\eqref{eq:D2:upper}.
\end{proof}
To sum up, by inserting the estimates~\eqref{eq:I} and~\eqref{eq:D2:upper} into~\eqref{eq:EE}, we readily arrive at
\begin{equation}\label{eq:EE1}
\Pr(\cE_{\xx_1},\cE_{\xx_2}) \; \le \; (1+o(1)) \cdot p^{(\eGH)\dex}(1-p^\eGH)^{2(N - \dex)} \sum_{\cC_1 \in \fH(\xx_1)}\sum_{\cC_2 \in \fH(\xx_2, \cC_1)} \Pr(\cI_{\cC_2} \mid \cI_{\cC_1}) .
\end{equation}
Anticipating that the main contribution comes from pairs~$\cC_1, \cC_2$ of `disjoint' collections, we~partition
\begin{equation}\label{eq:part}
\fH(\xx_1) := \fH_0(\xx_1,\xx_2) \cup \fH_{\ge 1}(\xx_1,\xx_2) ,
\end{equation}
where $\fH_0(\xx_1,\xx_2)$ contains the collections~$\cC_1 \in \fH(\xx_1)$ for which the auxiliary~graph
\begin{equation}\label{eq:F}
F = F(\cC_1) := \Bigl([n], \; \bigcup\bigl\{E(H') : H' \in \cC_1\bigr\}\Bigr)
\end{equation}
contains no extensions of~$\xx_2$, and~$\fH_{\ge 1}(\xx_1,\xx_2)$ contains the remaining ones.
Since~$\xx_1, \xx_2$ are disjoint and~$(G,H)$ is grounded,
every $\cC_1 \in \fH_{\ge 1}(\xx_1,\xx_2)$ must contain at least one extension overlapping with~$\xx_2$ (in at least one vertex).
From \eqref{eq:Hr}, $N \asymp n^{\vGH}$ and~$\dex \ll n$ (see~\eqref{eq:mumupper})
it follows~that, for some constant~$A = A(G,H) > 0$,
\begin{equation}\label{eq:negligbleC1}
\left|\fH_{\ge 1}(\xx_1,\xx_2)\right| \le A n^{\vGH-1} \cdot \binom{N}{\dex-1}
\asymp n^{\vGH-1}\cdot \frac{\dex}{N} \cdot |\fH(\xx_1)|
\ll |\fH(\xx_1)| .
\end{equation}
Exploiting the groundedness assumption, we next show that pairs~$\cC_1, \cC_2$
can only overlap in at most~$v_G=O(1)$ extensions (see~Claim~\ref{cl:finiteOverlaps}),
and that overlapping pairs effectively have negligible contribution (see~Claim~\ref{cl:conditionalsum}).
\begin{claim}\label{cl:finiteOverlaps}%
Let~$\xx_1,\xx_2 \in \oset{v_G}$ be disjoint.
Then, for all~$\cC_1 \in \fH(\xx_1)$, the graph~$F = F(\cC_1)$ defined in~\eqref{eq:F} contains
at most~$v_G$ vertex-disjoint extensions of~$\xx_2$.
\end{claim}
\begin{proof}%
The graph~${F - \xx_1}$ (obtained by removing the vertices~$\xx_1$ from~$F$) consists of isolated vertices and vertex-disjoint copies of the graph~${H - V(G)}$, which, by \refL{lem:StrBal}~\ref{eq:StrBal:connected}, is connected.
Let~$H'$ be obtained from~${H - E(G)}$ by removing isolated root vertices (if any).
Since~$(G,H)$ is grounded, we have ${e_{{H - V(G)}} < e_{H'}}$.
Note that~$H'$ is connected (since it is obtained from the connected graph~${H - V(G)}$ by adding root vertices, each of which has an edge into~${H - V(G)}$) and therefore~${F - \xx_1}$ is~$H'$-free.
It follows that any extension of~$\xx_2$ that is present in~$F$ must intersect~$\xx_1$, so there are at most~$|\xx_1| = v_G$ such vertex-disjoint extensions of~$\xx_2$.
\end{proof}
\begin{claim}\label{cl:conditionalsum}%
Let~$\xx_1,\xx_2 \in \oset{v_G}$ be disjoint. Then
\begin{equation}\label{eq:sumP}
\sum_{\cC_1 \in \fH(\xx_1)}\sum_{\cC_2 \in \fH(\xx_2, \cC_1)}\Pr(\cI_{\cC_2} \mid \cI_{\cC_1})
\; \le \;
(1+o(1)) \sum_{\cC_1 \in \fH(\xx_1)}\sum_{\cC_2 \in \fH(\xx_2)}\Pr(\cI_{\cC_2}) .
\end{equation}
\end{claim}
\begin{proof}[Proof of Claim~\ref{cl:conditionalsum}]%
In the first step we estimate~$\sum_{\cC_2 \in \fH(\xx_2, \cC_1)}\Pr(\cI_{\cC_2} \mid \cI_{\cC_1})$ using a counting argument
that accounts for the different kinds of overlaps of~$\cC_2$ with the graph~$F = F(\cC_1)$ defined in~\eqref{eq:F}.
Turning to the details, as in the proof of Claim~\ref{cl:D2:upper} we will think of~$(G,H)$-extensions as~$(e_H - e_G)$-element edge-sets.
Suppose that~$F$ contains~$k$ extensions of~$\xx_2$.
If $\cC_2 \in \fH(\xx_2,\cC_1)$ then all these~$k$ extensions must be present in $\cC_2$, since otherwise $\Pr(\cI_{\cC_1 \cup \cC_2} , \cD_{\cC_1^c \cup \cC_2^c}) \le \Pr(\cI_{\cC_1}, \cD_{\cC_2^c})= 0$ contradicting $\cC_2 \in \fH(\xx_2, \cC_1)$.
Each of the remaining~${\dex - k}$ extensions~$E_i$ in~$\cC_2$ is not fully contained in~$F$, and thus intersects~$F$ in an edge-set that corresponds to a $(G,J_i)$-extension of~$\xx_2$ for some~$G \subseteq J_i \subsetneq H$ (the case~$J_i = G$ occurs when the extension~$E_i$ is edge-disjoint from~$F$).
When these intersections are given by~${J_1, \dots, J_{\dex-k}}$, then we clearly have
\begin{equation*}
\prob{ \cI_{\cC_2} \mid \cI_{\cC_1} } = \prod_{i = 1}^{\dex - k} p^{e_H - e_G - (e_{J_i}-e_G)} = \prod_{i = 1}^{\dex - k} p^{e_H - e_{J_i}}.
\end{equation*}
Furthermore, the number of collections~$\cC_2$ with such intersections~${J_1, \dots, J_{\dex-k}}$ is bounded by
\begin{equation*}
\frac{1}{(\dex-k)!} \cdot \prod_{i = 1}^{\dex - k} \bigpar{v_G + (\vGH)\dex}^{v_{J_i} - v_G}\extcount{J_i,H},
\end{equation*}
where we set~$\extcount{J,H} := N_{G,H}=N$ if~$J = G$, and~$\extcount{J,H} := n^{v_H - v_J}$ otherwise (to clarify: we divided by~$(\dex - k)!$ since we count unordered collections~$\cC_2$).
Hence, summing over all possible choices of~$J_1, \dots, J_{\dex-k}$, it follows that
\begin{align}
\sum_{\cC_2 \in \fH(\xx_2, \cC_1)}\Pr(\cI_{\cC_2} \mid \cI_{\cC_1}) & \le \notag \frac{1}{(\dex-k)!} \sum_{\substack{J_1, \dots, J_{\dex-k}:\\ G \subseteq J_i \subsetneq H}} \prod_{i = 1}^{\dex - k} \bigpar{v_G + (\vGH)\dex}^{v_{J_i} - v_G} \extcount{J_i,H} p^{e_H - e_{J_i}}
\\
&\le \frac{\dex^k}{\dex!} \cdot \biggpar{\sum_{G \subseteq J \subsetneq H}\bigpar{v_G + (\vGH)\dex}^{v_J - v_G} \extcount{J,H} p^{e_H - e_J}}^{\dex - k}
\label{eq:rhs}.
\end{align}
Noting that $\extcount{G,H}p^{e_H - e_G} = \mu$, using \eqref{eq:Harris:overlap} and~$\mu \asymp \dex$ we bound the sum in~\eqref{eq:rhs} from above by, say,
\begin{equation}\label{eq:muerror}
\mu + O\bigpar{\sum_{G \subsetneq J \subsetneq H} \dex^{v_H}n^{v_H - v_J}p^{e_H - e_J}}
\le \mu + o(1) = \mu \cdot \Bigpar{1 + o\bigpar{\dex^{-1}}}.
\end{equation}
From the assumptions~$\eps \le 1$ and~$\mu \ge 1/2$
it follows that~$\dex \le (1+\eps)\mu + 1 \le 4 \mu$, say.
Therefore, in view of \eqref{eq:rhs}--\eqref{eq:muerror}, using~$\mu=Np^\eGH$ and~\eqref{eq:Hr} it follows~that
\begin{equation}\label{eq:intermediate}
\sum_{\cC_2 \in \fH(\xx_2, \cC_1)}\Pr(\cI_{\cC_2} \mid \cI_{\cC_1}) \le \left( \frac{\dex}{\mu}\right)^k\frac{(Np^\eGH)^{\dex}}{\dex!} \Bigpar{1 + o\bigpar{\dex^{-1}}}^{\dex-k}
\: \le \:
(1+o(1)) \cdot 4^k|\fH(\xx_2)|p^{(\eGH)\dex} ,
\end{equation}
whenever the graph~$F$ defined in~\eqref{eq:F} contains exactly~$k$ extensions of~$\xx_2$.
In the second step we sum the above estimate~\eqref{eq:intermediate} over all~$\cC_1 \in \fH(\xx_1)$.
Recalling the partition~\eqref{eq:part}, note that~$k =0$ when~$\cC_1 \in \fH_0(\xx_1,\xx_2)$, and that~$k \le v_G$ otherwise (see Claim~\ref{cl:finiteOverlaps}).
From~\eqref{eq:intermediate} it follows that
\[
\sum_{\cC_1 \in \fH(\xx_1)}\sum_{\cC_2 \in \fH(\xx_2, \cC_1)}\Pr(\cI_{\cC_2} \mid \cI_{\cC_1}) \le (1+o(1)) \cdot \Bigpar{|\fH_0(\xx_1,\xx_2)| + 4^{v_G}|\fH_{\ge 1}(\xx_1,\xx_2)|} \cdot |\fH(\xx_2)|p^{(\eGH)\dex} .
\]
In view of~\eqref{eq:negligbleC1}, the factor in the above parentheses is at most~$(1+o(1)) \cdot |\fH(\xx_1)|$, say,
which together with~$p^{(\eGH)\dex}=\Pr(\cI_{\cC_2})$ from~\eqref{eq:I} then completes the proof of inequality~\eqref{eq:sumP}.
\end{proof}
Finally, inserting the estimates~\eqref{eq:sumP}, $p^{(\eGH)\dex}=\Pr(\cI_{\cC_1})$, and~\eqref{eq:D:lower} into~\eqref{eq:EE1},
it follows that
\begin{equation*}
\Pr(\cE_{\xx_1},\cE_{\xx_2}) \; \le \; (1+o(1)) \sum_{\cC_1 \in \fH(\xx_1)}\Pr(\cI_{\cC_1})\Pr(\cD_{\cC_1^c} | \cI_{\cC_1})\sum_{\cC_2 \in \fH(\xx_2)} \Pr(\cI_{\cC_2})\Pr(\cD_{\cC_2^c} | \cI_{\cC_2}) ,
\end{equation*}
which together with~\eqref{eq:er} completes the proof of inequality~\eqref{eq:pr:ub} and thus Lemma~\ref{lem:main}
(which in turn implies the $0$-statement{} in~\eqref{eq:main:strbal} of \refT{thm_strictly_balanced}, as discussed). \noproof
\subsection{The $1$-statement}\label{sec:1statement}
Our proof of the $1$-statement{} in~\eqref{eq:main:strbal} of Theorem~\ref{thm_strictly_balanced} is based on a fairly standard union bound argument.
\begin{proof}[Proof of the $1$-statement in~\eqref{eq:main:strbal} of Theorem~\ref{thm_strictly_balanced}]
Fix an arbitrary constant~$\tau > 0$.
For~$\beta > 0$ as given by \refL{lem:StrBal}~\ref{eq:StrBal:density}, fix constants~$0 < \gamma \le \beta$ and $0 < \alpha < \gamma/2$ as in the proof of the $0$-statement (see \refS{sec:0statement}).
If~$p > n^{-1/d(G,H) + \gamma}$, then \refR{rem:Phibig}~\ref{eq:Phibig:iv} implies~$\Phi_{G,H} = \Omega(n^\gamma)$, and using~$\eps^2 \Phi_{G,H} = \Omega(n^{\gamma - 2\alpha}) = n^{\Omega(1)}$
we see that the $1$-statement of Theorem~\ref{thm_strictly_balanced} follows from Theorem~\ref{thm_generaltail}~\ref{thm_tail1} with~$t = \eps \mu$.
In the remaining main case~$p \le n^{-1/d(G,H) + \gamma}$, we fix a root~$\xx \in \oset{v_G}$.
Since there are~$O(n^{v_G})$ many such roots,
for the $1$-statement of Theorem~\ref{thm_strictly_balanced} it suffices to show that, for~$C>0$ large enough,
\begin{equation}\label{eq:thm_ext_01:goal}
\prob{|X_\xx - \mu| \ge \eps \mu} = o\bigl( n^{- (v_G + \tau)} \bigr) \qquad \text{ if~$\eps^2 \mu \ge C \log n$.}
\end{equation}
To avoid clutter, we shall use the convention that all implicit constants~$c_i$ may depend on~$(G,H)$.
For the lower tail we shall apply Janson's inequality~\cite[Theorem~1]{RiordanWarnke2015}, which
in view of~\eqref{eq:lem:density:subs} from \refL{lem:StrBal}~\ref{eq:StrBal:density} routinely (similarly to the textbook argument~\cite{JLR,JW} for unrooted subgraph counts) gives
\begin{equation}\label{eq:1:LT:Janson}
\prob{X_\xx \le (1- \eps) \mu}
\; \le \; \exp\Bigl(-c_1 \eps^2 \mu\Bigr)
\le n^{-c_1 C} = o\bigl( n^{- (v_G + \tau)} \bigr)
\end{equation}
for~$C > (v_G+\tau)/c_1$
(analogous to~\eqref{eq:Janson:Delta:bound}, the relevant~$\Delta$-term from~\cite{RiordanWarnke2015}
is here again~$o(1)$ by~\eqref{eq:mumupper} and~\eqref{eq:lem:density:subs}).
For the upper tail we shall apply~\cite[Theorem~32]{WUT} in the setting described in~\cite[Example~20]{WUT}
(the conditions (H$\ell$), (P), (P$q$) are defined in~\cite[Section~4.1]{WUT}).
The underlying hypergraph $\cH = \fH(\xx)$ consists of the edge-sets of extensions of $\xx$, thus having vertex-set~$V(\cH) = E(K_n)$.
We set the parameters to~$N = n^2$, $\ell = 1$, $q = k = \eGH$, and~$K=v_G+2\tau$.
The quantity~$\mu_j$ from~\cite[Example~20]{WUT} satisfies~$\max_{1 \le j < q}\mu_j \le \max_{G \subsetneq J \subsetneq H} n^{v_H - v_J}p^{e_H - e_J} \ll n^{-\beta}$ by \refL{lem:StrBal}~\ref{eq:StrBal:density}.
Invoking~\cite[Theorem~32]{WUT}, it then follows~that
\begin{equation}\label{eq:1:UT:Wsmallp}
\prob{X_\xx \ge (1+\eps) \mu} \; \le \; \bigl(1 + o(1)\bigr) \cdot \exp\Bigl(-\min\bigl\{c_2\eps^2 \mu, \: (v_G + 2\tau)\log n\bigr\}\Bigr) = o\bigl( n^{- (v_G + \tau)} \bigr)
\end{equation}
for~$C > (v_G+\tau)/c_2$, completing the proof of~\eqref{eq:thm_ext_01:goal}
and thus the $1$-statement in~\eqref{eq:main:strbal} of Theorem~\ref{thm_strictly_balanced}.
\end{proof}
\begin{remark}[Theorem~\ref{thm_strictly_balanced}: stronger $1$-statement]\label{rem:thm_strictly_balanced}
The above proof yields, in view of~\refR{rem:thm_generaltail}, the following stronger conclusion:
for any fixed~$\tau > 0$ there is a constant~$C=C(\tau,G,H)>0$ such that
the~$1$-statement in~\eqref{eq:main:strbal} of Theorem~\ref{thm_strictly_balanced}
holds with probability~$1 - o(n^{-\tau})$.
\end{remark}
\subsection{Deferred proof of \refL{lem:StrBal}}\label{s_strictly_balanced:prelim:deferred}
For completeness, we now give the routine proof of \refL{lem:StrBal}
deferred from \refS{s_strictly_balanced:prelim}.
\begin{proof}[Proof of \refL{lem:StrBal}]
\ref{eq:StrBal:density}: Set~$\Psi_{J,H} := n^{v_H - v_J}p^{e_H - e_J}$.
In the case $v_J = v_H$, for any~$\beta > 0$ satisfying~$1/d(G,H) > 2\beta$
we have~$\Psi_{J,H} = p^{e_H-e_J} \ll n^{-(e_H - e_J)\beta} \le n^{-\beta}$.
Henceforth we can thus assume $v_J < v_H$.
Since~$G$ is an induced subgraph of~$H$ and thus of~$J$, we also have~$v_G < v_J$.
Since~$(G,H)$ is strictly balanced we have~$d(G,J)< d(G,H)$, which implies
\begin{equation}\label{eq:lem:density:subs:3}
d(J,H) = \frac{(e_H-e_G)-(e_J-e_G)}{(v_H-v_G)-(v_J-v_G)} = \frac{(v_H-v_G)d(G,H)-(v_J-v_G)d(G,J)}{(v_H-v_G)-(v_J-v_G)} > d(G,H) .
\end{equation}
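Here the middle equality uses~$e_H - e_G = (v_H - v_G)d(G,H)$ and~$e_J - e_G = (v_J - v_G)d(G,J)$, and the final inequality is the elementary mediant-type bound
\begin{equation*}
\frac{a d_1 - b d_2}{a - b} - d_1 = \frac{b(d_1 - d_2)}{a - b} > 0 \qquad \text{for } a > b > 0 \text{ and } d_2 < d_1 ,
\end{equation*}
applied with~$a = v_H - v_G$, $b = v_J - v_G$, $d_1 = d(G,H)$ and~$d_2 = d(G,J)$.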
Hence~$1/d(G,H) > 1/d(J,H) + 2\beta$ for~$\beta>0$ sufficiently small,
so that~$p = O(n^{-1/d(G,H)+\beta}) \ll n^{-1/d(J,H) - \beta}$.
Observe that~$e_H > e_J$, since otherwise~$e_H = e_J$ and~$v_H > v_J$ imply~$d(G,J) > d(G,H)$,
contradicting that~$(G,H)$ is strictly~balanced.
Hence $\Psi_{J,H} = (n^{1/d(J,H)} p)^{e_H-e_J} \ll n^{-\beta}$,
completing the proof of~\eqref{eq:lem:density:subs}.
\ref{eq:StrBal:connected}: Assume the contrary.
Then we can split $V(H) \setminus V(G)$ into two nonempty sets~$V_1$ and~$V_2$
such that there are no edges between $V_1$ and $V_2$.
Writing~$H_i := H[V(G) \cup V_i]$, we readily obtain
\begin{equation*}
d(G,H) = \frac{e_H-e_G}{v_H-v_G} = \frac{\sum_{i \in [2]}(e_{H_i}-e_G)}{\sum_{i \in [2]}(v_{H_i}-v_G)} = \frac{\sum_{i \in [2]}(v_{H_i}-v_G)d(G,H_i)}{\sum_{i \in [2]}(v_{H_i}-v_G)} \le \max_{i \in [2]}d(G,H_i) .
\end{equation*}
Since~$(G, H)$ is strictly balanced we have~$d(G,H_i)< d(G,H)$, yielding the desired contradiction.
\end{proof}
\section{No grounded primals case (\refT{thm_nogrounded})}\label{s_nogrounded}
In this section we prove Theorem~\ref{thm_nogrounded} by focusing on a maximal primal subgraph~$J_{\max}$ of~$(G, H)$;
we remark that~$J_{\max}$ is in fact unique (the union of all primal subgraphs), but we do not need this.
Our arguments hinge on the basic observation that, since~$J_{\max}$ is by assumption not grounded (i.e.,~there are no edges between~$V(G)$ and~$V(J_{\max}) \setminus V(G)$),
extension counts~$X_{G,J_{\max}}(\xx)$ are essentially the same as the number of \emph{unrooted} copies of the graph~$K := {J_{\max} - V(G)}$, where the vertices of~$G$ are~deleted from~$J_{\max}$.
For the $1$-statement this heuristically means that if~$X_{G,J_{\max}}(\xx)$ is concentrated for \emph{some}~$\xx$, then $X_{G,J_{\max}}(\xx)$ is concentrated for \emph{all}~$\xx$
(the reason being that not too many copies of~$K$ can overlap with any root~$\xx'$, see Lemma~\ref{lem:scattered} below).
Furthermore, using Theorem~\ref{thm_generaltail}~\ref{thm_tail1} it turns out that \whp{} each copy of~$J_{\max}$ extends to the `right' number of $H$-copies
(here the crux will be that~$\Phi_{J_{\max}, H} = n^{\Omega(1)}$ follows from \refR{rem:Phibig}~\ref{eq:Phibig:iv} and Lemma~\ref{lem:nogroundeddensity} below).
Combining these two estimates then allows us to deduce that~\whp{}~$X_{G,H}(\xx)$ is concentrated for all~$\xx$; see Section~\ref{sec:ext:1} for~the~details.
For the $0$-statement we shall proceed similarly, the main difference is that, for a \emph{fixed}~$\xx$, we start by arguing that~$X_{G,J_{\max}}(\xx)$ is not concentrated, i.e., \whp{} far away from its expected value.
This allows us to deduce that~$\xx$ has~\whp{} the wrong number of $(G,H)$-extensions (since by Theorem~\ref{thm_generaltail}~\ref{thm_tail1} \whp{} each copy of~$J_{\max}$ again extends to the right number of copies of~$H$); see Section~\ref{sec:ext:0} for~the~details.
\subsection{Setup and technical preliminaries}
In the upcoming arguments it will, as in~\cite{S90b}, often be convenient to treat extensions as sequences of vertices. Given a rooted graph~$(G,H)$ with labeled vertices~$V(G) = {\left\{ 1, \dots, v_G \right\}}$ and~${V(H) \setminus V(G)} = {\{v_G + 1, \dots, v_H\}}$,
\emph{an ordered $(G,H)$-extension of} $\xx = {(x_1, \dots, x_{v_G})} \in \oset{v_G}$ is a sequence $\yy = {(y_{v_G + 1}, \dots, y_{v_H})}$ of distinct vertices
from~$[n] \setminus \{x_1, \dots, x_{v_G}\}$
such that the injection which maps each vertex~${j \in V(G)}$ onto $x_j$ and each vertex~${i \in {V(H)\setminus V(G)}}$ onto~$y_i$, also maps every edge~${f \in {E(H) \setminus E(G)}}$ onto an edge.
Given a root~$\xx \in \oset{v_G}$, let~$Y_{G,H}(\xx)$ denote the number of ordered~$(G,H)$-extensions of~$\xx$ in~${\mathbb G}_{n,p}$.
Note that
\begin{equation}\label{def:nuGH}
\nu_{G,H} := \E Y_{G,H}(\xx) = (n-v_G) (n-v_G-1) \cdots (n-v_H+1) \cdot p^{\eGH}
\end{equation}
does not depend on the particular choice of~$\xx$.
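For concreteness, consider the simplest example where~$G$ consists of the single root vertex and~$H = K_3$: an ordered $(G,K_3)$-extension of~$\xx = (x_1)$ is an ordered pair~$(y_2,y_3)$ of distinct vertices such that~$x_1y_2$, $x_1y_3$ and~$y_2y_3$ are all edges, and~\eqref{def:nuGH} gives
\[
\nu_{G,K_3} = (n-1)(n-2)\,p^{3} .
\]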
Let~$\operatorname{aut}(G,H)$ denote the number of automorphisms of~$H$ that fix the set~$V(G)$ pointwise.
Since each extension corresponds to~$\operatorname{aut}(G,H)$ many ordered extensions, we~obtain
\begin{align}
\label{eq:YX}
Y_{G,H}(\xx) &= \operatorname{aut}(G,H) \cdot X_{G,H}(\xx) , \\
\label{eq:numu}
\nu_{G,H} &= \operatorname{aut}(G,H) \cdot \mu_{G,H}.
\end{align}
One further useful elementary observation is that, for any induced~$G \subseteq J \subseteq H$, we have
\begin{equation}\label{eq:muprod}
\nu_{G,J} \cdot \nu_{J,H} = \nu_{G,H}.
\end{equation}
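Indeed, writing out~\eqref{def:nuGH} for all three quantities, the falling factorials telescope and the exponents of~$p$ add up:
\[
\nu_{G,J} \cdot \nu_{J,H}
= (n-v_G)\cdots(n-v_J+1)\,p^{e_J - e_G} \cdot (n-v_J)\cdots(n-v_H+1)\,p^{e_H - e_J}
= \nu_{G,H} .
\]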
Our arguments will also exploit the following technical property of maximal primal~subgraphs.
\begin{lemma}\label{lem:nogroundeddensity}%
If~$J_{\max} \subsetneq H$ is a maximal primal of the rooted graph~$(G,H)$, \linebreak[3]
then~$m(J_{\max},H) < m(G,H)$.
\end{lemma}
\begin{proof}
Fix~$J_{\max} \subsetneq J \subseteq H$.
Using maximality of~$J_{\max} \supsetneq G$,
we infer~$d(G, J) < m(G,H)$ and~$d(G, J_{\max} ) = m(G,H)$.
Proceeding analogously to inequality~\eqref{eq:lem:density:subs:3}, it routinely follows that
\[
d(J_{\max},J)
= \frac{(\vr{G}{J})d(G,J)-(\vr{G}{J_{\max}})d(G,J_{\max})}{(\vr{G}{J})-(\vr{G}{J_{\max}})}
< m(G,H),
\]
which completes the proof by maximizing over all feasible~$J$.
\end{proof}
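\begin{remark}
Since rooted densities satisfy~$d(G,J) = \er{G}{J}/\vr{G}{J}$, the numerator and denominator of the fraction in the above proof equal~$\er{J_{\max}}{J}$ and~$\vr{J_{\max}}{J}$, respectively; the displayed identity is thus simply a rewriting of the definition of~$d(J_{\max},J)$.
\end{remark}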
\subsection{The $0$-statement}\label{sec:ext:0}
As discussed, for the $0$-statement of Theorem~\ref{thm_nogrounded} the core idea is
to show that~$X_{G,J_{\max}}(\xx)$ is not concentrated for some~$\xx \in \oset{v_G}$,
and that~$X_{J_{\max},H}(\y)$ is concentrated for all~$\y \in \oset{v_{J_{\max}}}$,
see~\eqref{eq:badJ}--\eqref{eq:sparser}~below.
\begin{proof}[Proof of the $0$-statement of Theorem~\ref{thm_nogrounded}]
Assuming~$\eps \ge n^{-\alpha}$ with~$\alpha < 1/2$ (as we may), we have $\eps^2\Phi_{G,H} = \Omega(n^{1-2\alpha}p^{\eGH}) \gg p^{\eGH}$, so the assumption~$\eps^2\Phi_{G,H} \to 0$ implies~$p \to 0$ and thus~$1-p=\Theta(1)$.
Since $(G,H)$ has no grounded primals, the desired $0$-statement now follows by combining
the conclusions of Theorem~\ref{thm_generaltail}~\ref{thm_tail0} for~$\Phi_{G,H} \to 0$ and~$\Phi_{G,H} \to \infty$
with the conclusion of Lemma~\ref{lem:generalzero} below for~$\Phi_{G,H} \asymp 1$
(formally using, as usual, the subsubsequence principle~\cite[Section~1.2]{JLR}).
\end{proof}
\begin{lemma}\label{lem:generalzero}%
Let~$(G,H)$ be a rooted graph with no grounded primal subgraphs.
Then, for all~$p=p(n) \in [0,1]$ and~$\eps=\eps(n) \in (0,1]$ with~$\Phi_{G,H} \asymp 1$ and~$\eps \to 0$,
\begin{equation}\label{eq:lem:generalzero}
\lim_{n \to \infty} \Pr\Bigpar{\max_{\xx \in \oset{v_G}}|X_\xx - \mu| \ge \eps \mu} = 1 .
\end{equation}
\end{lemma}
\begin{proof}%
Note that we may assume $\eps \ge n^{-\alpha}$ for any $\alpha > 0$ (since increasing~$\eps$ gives a stronger conclusion).
Let~$J_{\max}$ be a maximal primal subgraph of~$(G,H)$.
By \refR{rem:Phibig}~\ref{eq:Phibig:ii}--\ref{eq:Phibig:iii}, the assumption~$\Phi_{G,H} \asymp 1$~implies
\begin{gather}
\label{eq:muGJmax}
\mu_{G,J_{\max}} \asymp 1 ,\\
\label{eq:conditionPhip}
p = \Omega\bigl(n^{-1/m(G,H)}\bigr) .
\end{gather}
Turning to the details, we start with the claim that, \whp{},
\begin{align}
\label{eq:badJ}
\max_{\xx \in \oset{v_G}} |X_{G,J_{\max}}(\xx) - \mu_{G,J_{\max}}| &> 3\eps \mu_{G,J_{\max}}, \\
\label{eq:sparser}
\max_{\y \in \oset{v_{J_{\max}}}}|X_{J_{\max},H}(\y) - \mu_{J_{\max},H}| &< \tfrac{1}{2}\eps \mu_{J_{\max},H} .
\end{align}
To show that this claim implies the desired $0$-statement,
we consider ordered extensions and note that multiplying~\eqref{eq:badJ} and~\eqref{eq:sparser} by $\operatorname{aut}(G, J_{\max})$ and $\operatorname{aut}(J_{\max},H)$, respectively, we can replace~$X$~by~$Y$ and~$\mu$~by~$\nu$, cf.~\eqref{eq:YX} and \eqref{eq:numu}.
Observe that each ordered $(G,H)$-extension corresponds to a unique pair of extensions: one of~$\xx$ with respect to~$(G,J_{\max})$ and one of~$\y$ (which consists of~$\xx$ plus the vertices of the first extension) with respect to $(J_{\max},H)$.
Consequently, recalling the identity~\eqref{eq:muprod}, inequalities~\eqref{eq:badJ}--\eqref{eq:sparser} imply that there is~$\xx \in \oset{v_G}$ such that either
\begin{equation}\label{eq:YGHR:1}
Y_{G,H}(\xx) > (1 + 3\eps)\nu_{G,J_{\max}} \cdot (1 - \eps/2) \nu_{J_{\max},H} > (1 + \eps) \nu_{G,H}
\end{equation}
or
\begin{equation}\label{eq:YGHR:2}
Y_{G,H}(\xx) < (1 - 3\eps)\nu_{G,J_{\max}} \cdot (1 + \eps/2) \nu_{J_{\max},H} < (1 - \eps) \nu_{G,H} ,
\end{equation}
which in view of~\eqref{eq:YX} and \eqref{eq:numu} establishes the desired $0$-statement (after rescaling by~$\operatorname{aut}(G,H)$).
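Here the final inequalities in~\eqref{eq:YGHR:1} and~\eqref{eq:YGHR:2} use the identity~\eqref{eq:muprod} together with the elementary estimates
\[
(1+3\eps)(1-\eps/2) = 1 + \tfrac{5}{2}\eps - \tfrac{3}{2}\eps^2 > 1 + \eps
\qquad\text{and}\qquad
(1-3\eps)(1+\eps/2) = 1 - \tfrac{5}{2}\eps - \tfrac{3}{2}\eps^2 < 1 - \eps ,
\]
valid for all~$\eps \in (0,1)$.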
It remains to show that~\eqref{eq:badJ} and \eqref{eq:sparser} hold \whp{}, and we start with~\eqref{eq:badJ}.
Consider the unrooted graph~$K := {J_{\max} - V(G)}$, where the vertices of~$G$ are deleted from~$J_{\max}$.
By construction, we have~$v_K = \vr{G}{ J_{\max}}$.
Since~$J_{\max}$ is not grounded, we also have~$e_K = \er{G}{ J_{\max}}$.
Using~\eqref{eq:muGJmax} we infer
\begin{equation}\label{eq:muK:Jmax}
\mu_K \asymp n^{v_K}p^{e_K} = n^{\vr{G}{J_{\max}}}p^{\er{G}{J_{\max}}} \asymp \mu_{G,J_{\max}} \asymp 1 ,
\end{equation}
which by Markov's inequality implies that the number of $K$-copies is \whp{} at most~$n/(2v_K)$, say (with room to spare).
This means that either (i)~there are no $K$-copies, in which case $X_{G,J_{\max}}(\xx) = 0$ for all $\xx \in \oset{v_G}$, or (ii)~the $K$-copies span at most~$n/2$ vertices, in which case there is one tuple $\xx_1 \in \oset{v_G}$ that is disjoint from all $K$-copies and another tuple~$\xx_2 \in \oset{v_G}$ that intersects at least one $K$-copy, so that~$X_{G,J_{\max}}(\xx_1) = X_K > X_{G,J_{\max}}(\xx_2)$.
In both cases it follows that~\eqref{eq:badJ} holds \whp{}, since~\eqref{eq:muGJmax} and~$\eps \to 0$ imply that the interval~$(1 \pm 3\eps)\mu_{G,J_{\max}}$ does not contain zero, and moreover contains at most one~integer.
Turning to~\eqref{eq:sparser}, note that~\eqref{eq:sparser} holds trivially when~$J_{\max} = H$.
Otherwise~$m(J_{\max},H) < m(G,H)$ by Lemma~\ref{lem:nogroundeddensity},
so that~\eqref{eq:conditionPhip} implies $p = \Omega( n^{\gamma-1/m(J_{\max},H)} )$ for some constant $\gamma > 0$.
Using \refR{rem:Phibig}~\ref{eq:Phibig:iv}, it follows that~$\Phi_{J_{\max},H} = \Omega(n^\gamma)$.
Assuming~$\eps \ge n^{-\alpha}$ with~$\alpha < \gamma / 2$ (as we may), we infer $\eps^2\Phi_{J_{\max},H} = \Omega(n^{\gamma/ 2 -\alpha}) = n^{\Omega(1)}$.
Applying Theorem~\ref{thm_generaltail}~\ref{thm_tail1} with~$t = \tfrac{1}{2}\eps \mu_{J_{\max},H}$,
now~\eqref{eq:sparser} holds~\whp{}.
\end{proof}
\subsection{The $1$-statement}\label{sec:ext:1}
As discussed, for the $1$-statement of Theorem~\ref{thm_nogrounded} we rely on the fact that no vertex is contained in too many copies of the (unrooted) graph~${J_{\max} - V(G)}$, which is formalized by \refL{lem:scattered} below.
As usual, given a graph~$K$ with~$v_K \ge 1$,
subgraphs~$J \subseteq K$ with~$v_J \ge 1$ that maximize the density $d_J := d(\emptyset, J) = e_J/v_J$ are called~\emph{primal} (consistent with the terminology for rooted graphs),
and~$K$ is called~\emph{balanced} when~$K$ itself is~primal.
\begin{lemma}\label{lem:scattered}%
Let $K$ be a balanced graph with $e_K \ge 1$.
There are constants $\beta, C > 0$
such that, for all $p=p(n) \in [0,1]$ with~$n^{-1/d_K} \ll p = O(n^{\beta - 1/d_K})$,
in ${\mathbb G}_{n,p}$ \whp{} every vertex~$x \in [n]$ is contained in at most~$C\lambda^{\vr{G_{\min}}{K}}$ copies of~$K$, where~$\lambda := np^{d_K}$
and~$G_{\min} \subseteq K$ is a primal subgraph with the smallest number of vertices.
\end{lemma}
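To illustrate the statement in the simplest case: if~$K$ is strictly balanced, i.e., $d_J < d_K$ for every proper subgraph~$J \subsetneq K$ with~$v_J \ge 1$, then~$K$ itself is the unique (and hence smallest) primal subgraph, so~$G_{\min} = K$ and~$\vr{G_{\min}}{K} = 0$, in which case \refL{lem:scattered} asserts that \whp{} every vertex is contained in at most~$C$ copies of~$K$.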
\noindent
We defer the density-based proof to \refApp{apx:scattered}
(which is rather tangential to the main~argument here),
and now use \refL{lem:scattered} to prove the desired $1$-statement~of~Theorem~\ref{thm_nogrounded}.
\begin{proof}[Proof of the $1$-statement of~\refT{thm_nogrounded}]%
The assumptions~$\eps \le 1$ and~$\eps^2\Phi_{G,H} \to \infty$ imply~$\Phi_{G,H} \to \infty$, so \refR{rem:Phibig}~\ref{eq:Phibig:i} implies~$p\gg~n^{-1/m(G,H)}$.
If~$\eps^2\Phi_{G,H} = n^{\Omega(1)}$, then the desired $1$-statement follows from Theorem~\ref{thm_generaltail}~\ref{thm_tail1},
so we may further assume~$\eps^2 \Phi_{G,H} \le n^c$ for any constant~$c>0$ of our choice,
which together with the assumption~$\eps \ge n^{-\alpha}$ implies $\Phi_{G,H} \le n^{c + 2\alpha}$.
Using the contrapositive of \refR{rem:Phibig}~\ref{eq:Phibig:iv}, by choosing~$\alpha,c>0$ sufficiently small (as we may)
we thus henceforth can~assume
\begin{equation} \label{eq:conditionPhi}
n^{-1/m(G,H)} \ll p \ll n^{\beta - 1/m(G,H)},
\end{equation}
where the constant~$\beta > 0$ is as given by~Lemma~\ref{lem:scattered}.
Turning to the details, let~$J_{\max}$ be a maximal primal subgraph of~$(G,H)$.
For convenience we use ordered extensions, as before.
Note that~$Y_{G,J_{\max}}(\xx)$ is the number of (unrooted) copies of graph~$K := {J_{\max} - V(G)}$ that are disjoint from~$\xx$.
Let~$Z_K(x)$ be the number of copies of~$K$ containing vertex~$x \in [n]$.
We fix some~$\xx' \in \oset{v_G}$, and start with the claim that there exists a constant $D>0$ such that, \whp{},
\begin{align}
\label{eq:XRGprime}
|Y_{G,J_{\max}}(\xx') - \nu_{G,J_{\max}}| &< \tfrac 1 8\eps\nu_{G,J_{\max}}, \\
\label{eq:goodcount}
\max_{\y \in \oset{v_{J_{\max}}}} |Y_{J_{\max},H}(\y) - \nu_{J_{\max},H}| &< \tfrac{1}{2}\eps \nu_{J_{\max},H},\\
\label{eq:vertexbounded}
\max_{x \in [n]} Z_K(x) & \le D\frac{\eps\nu_{G, J_{\max}}}{\eps^2\Phi_{G,H}}.
\end{align}
We now show that this claim implies the desired $1$-statement.
In view of~\eqref{eq:XRGprime}, the first step is to use~\eqref{eq:vertexbounded} to show that $Y_{G,J_{\max}}(\xx)$ is also concentrated for the remaining roots $\xx \neq \xx'$.
Namely, using~\eqref{eq:vertexbounded} to bound the number of $(G, J_{\max})$-extensions of $\xx$ that overlap with $\xx'$ (and those of $\xx'$ overlapping with $\xx$),
in view of~$\eps^2\Phi_{G,H} \to \infty$ it follows that, for every~$\xx \in \oset{v_G}$,
\begin{equation*}
|Y_{G,J_{\max}}(\xx) - Y_{G,J_{\max}}(\xx')|
\le O\Bigpar{\sum_{x \in \xx \cup \xx'} Z_K(x)} \ll \tfrac 1 8 \eps\nu_{G,J_{\max}} ,
\end{equation*}
which together with \eqref{eq:XRGprime} implies that, say,
\begin{equation}\label{eq:XRG}
\max_{\xx \in \oset{v_G}} |Y_{G,J_{\max}}(\xx) - \nu_{G,J_{\max}}| < \tfrac 1 4 \eps\nu_{G,J_{\max}} .
\end{equation}
The second step exploits that by~\eqref{eq:goodcount} each copy of $J_{\max}$ extends to the `right' number of copies of~$H$.
Indeed, with analogous reasoning as for~\eqref{eq:YGHR:1}--\eqref{eq:YGHR:2} from \refS{sec:ext:0}, by combining~\eqref{eq:goodcount} with~\eqref{eq:XRG}
it now follows (in view of~\eqref{eq:muprod}) that
\[
\max_{\xx \in \oset{v_G}} Y_{G,H}(\xx) < (1 + \eps/4)\nu_{G,J_{\max}} \cdot (1 + \eps/2)\nu_{J_{\max},H} < (1 + \eps)\nu_{G,H} ,
\]
and similarly,
\[
\min_{\xx \in \oset{v_G}} Y_{G,H}(\xx) > (1 - \eps)\nu_{G,H} ,
\]
which in view of~\eqref{eq:YX}--\eqref{eq:numu} establishes the $1$-statement of Theorem~\ref{thm_nogrounded} (by rescaling by $\operatorname{aut}(G,H)$).
It remains to show that~\eqref{eq:XRGprime}--\eqref{eq:vertexbounded} hold \whp{}, and we start with~\eqref{eq:XRGprime}.
Since $\Phi_{G,J_{\max}} \ge \Phi_{G,H}$ by definition,
using Chebyshev's inequality together with the variance estimate~\eqref{eq:Variance} and $\eps^2 \Phi_{G,H} \to \infty$, it follows that
\begin{equation*}
\begin{split}
\prob{|X_{G,J_{\max}}(\xx') - \mu_{G,J_{\max}}| \ge \tfrac 1 8\eps\mu_{G,J_{\max}}}
& \le \frac{\Var X_{G,J_{\max}}(\xx')}{(\eps/8)^2\mu_{G,J_{\max}}^2}
\asymp \frac{1 - p}{\eps^2\Phi_{G,J_{\max} }} \le \frac{1}{\eps^2\Phi_{G,H}} \to 0 ,
\end{split}
\end{equation*}
which in view of~\eqref{eq:YX}--\eqref{eq:numu} then implies that~\eqref{eq:XRGprime} holds \whp{} (by rescaling by $\operatorname{aut}(G,J_{\max})$).
Next we establish~\eqref{eq:goodcount}. Note that the proof of~\eqref{eq:sparser} only relies on~\eqref{eq:conditionPhip} (which here holds by \eqref{eq:conditionPhi}), and that we may assume~$\eps \ge n^{-\alpha}$ for sufficiently small~$\alpha>0$.
Hence by the same argument as for~\eqref{eq:sparser},
in view of~\eqref{eq:YX}--\eqref{eq:numu} it here follows that~\eqref{eq:goodcount} holds~\whp{} (after rescaling by~$\operatorname{aut}(J_{\max},H)$).
Finally, we turn to the auxiliary estimate~\eqref{eq:vertexbounded}.
Note that every subgraph~$J \subseteq K$ with~$v_J \ge 1$ satisfies~$d_J = d(G,G \cup J)$.
Hence~$J$ is primal for~$K$ if and only if~$G \cup J$ is primal for~$(G,J_{\max})$.
Since~$J_{\max}= G \cup K$ is primal for~$(G,H)$, it follows that~$K$ is balanced, with~$d_K=d(G,J_{\max})=m(G,H)$.
Using assumption~\eqref{eq:conditionPhi}, we thus have~$n^{-1/d_K} \ll p \ll n^{\beta-1/d_K}$.
Invoking Lemma~\ref{lem:scattered} there is a constant~$C>0$ such that,~\whp{},
\[
\max_{x \in [n]} Z_K(x) \le C \lambda^{\vr{G_{\min}}{K}},
\]
where $G_{\min}$ is a smallest primal for~$K$, which in turn gives~$d_K=d_{G_{\min}}$.
Since~$J_{\max}$ is a vertex-disjoint union of the graphs~$K$ and~$G$,
and~$G_{\min} \subseteq K$, we infer that~$e_{G_{\min}} = \er{G}{G \cup G_{\min}}$ and~$v_{G_{\min}} = \vr{G}{G \cup G_{\min}}$.
Recalling~$\lambda = np^{d_K} = np^{d_{G_{\min}}}$, it now follows analogously to~\eqref{eq:muK:Jmax} that
\begin{equation*}
\lambda^{\vr{G_{\min}}{K}} = \frac{n^{v_K} p^{e_K}}{n^{v_{G_{\min}}} p^{e_{G_{\min}}}} \asymp \frac{\mu_K}{\mu_{G_{\min}}} \asymp \frac{\mu_{G, J_{\max}}}{\mu_{G,G \cup G_{\min}}} \le \frac{\mu_{G, J_{\max}}}{\Phi_{G,H}} ,
\end{equation*}
which together with~$1 \le 1/\eps = \eps/\eps^2$ and~\eqref{eq:numu} completes the proof of~\eqref{eq:vertexbounded} for suitable~$D>0$.
\end{proof}
\section{Further cases}\label{sec:further}
\subsection{Unique and grounded primal case (\refT{thm_unique})}\label{s_unique}
In this section we prove \refT{thm_unique} by adapting the arguments from \refS{s_nogrounded} (focusing on the unique primal~$J=J_{\max}$).
The key difference is that here we can use the $0$- and $1$-statements of our main result \refT{thm_strictly_balanced} to deduce that~$X_{G,J}(\xx)$ is not concentrated for some~$\xx$, or concentrated for all~$\xx$, respectively.
This then allows us to prove the desired $0$- and $1$-statements, since each copy of~$J$ again extends to the `right' number of copies of~$H$ (by Theorem~\ref{thm_generaltail}~\ref{thm_tail1}, as in Section~\ref{s_nogrounded}); see \eqref{eq:JHconc}--\eqref{eq:GJconc} and~\eqref{eq:GJoff} below.
\begin{proof}[Proof of \refT{thm_unique}]%
If~$\Phi_{G,H} \to 0$, then the $0$-statement holds by \refT{thm_generaltail}~\ref{thm_tail0}.
Therefore we henceforth can assume~$\Phi_{G,H} = \Omega(1)$, which by \refR{rem:Phibig}~\ref{eq:Phibig:ii} is equivalent to
\begin{equation}\label{eq:overthreshold}
p = \Omega\bigl(n^{-1/m(G,H)}\bigr).
\end{equation}
Note that the proof of~\eqref{eq:sparser} relies only on~\eqref{eq:conditionPhip} (which is the same as \eqref{eq:overthreshold}), the fact that $J_{\max}$ is the maximal primal (which also holds trivially for~$J$ in the current setting),
and that we may assume $\eps \ge n^{-\alpha}$ for sufficiently small~$\alpha>0$ (which we may also assume here).
Hence by the same argument as for~\eqref{eq:sparser},
after rescaling by~$\operatorname{aut}(J,H)$ (see~\eqref{eq:YX}--\eqref{eq:numu}) we here obtain that, \whp{},
\begin{equation}\label{eq:JHconc}
\max_{\y \in \oset{v_J}}|Y_{J,H}(\y) - \nu_{J,H}| < \tfrac{1}{2} \eps \nu_{J,H}.
\end{equation}
We start with the~$1$-statement. Since~$\mu_{G,J} \ge \Phi_{G,H}$ by definition, the assumption~$\eps^2\Phi_{G,H} \ge C \log n$ implies~$\eps^2\mu_{G,J} \ge C \log n$. By uniqueness of the primal~$J$, the rooted graph $(G,J)$ is strictly balanced.
Therefore~\eqref{eq:main:strbal} of~\refT{thm_strictly_balanced} implies (after rescaling by~$\operatorname{aut}(G,J)$) for suitable~$\alpha, C > 0$ that, \whp{},
\begin{equation}\label{eq:GJconc}
\max_{\xx \in \oset{v_G}} |Y_{G,J}(\xx) - \nu_{G,J}| < \tfrac{1}{4}\eps \nu_{G,J}.
\end{equation}
The $1$-statement of \refT{thm_unique} now follows from~\eqref{eq:JHconc} and~\eqref{eq:GJconc}
by exactly the same reasoning with which~\eqref{eq:goodcount} and~\eqref{eq:XRG} from Section~\ref{sec:ext:1}
implied the $1$-statement of \refT{thm_nogrounded}.
We now turn to the~$0$-statement. We again plan to apply~\eqref{eq:main:strbal} of~\refT{thm_strictly_balanced} to the strictly balanced rooted graph~$(G,J)$,
for which we need to check that the assumption~$\eps^2 \Phi_{G,H} \le c \log n$ implies the required condition~$\eps^2 \mu_{G,J} \le c\log n$.
We will do this by showing that~$\Phi_{G,H} = \mu_{G,J}$ for~$n$ large enough.
First, note that the assumptions~$\eps \ge n^{-\alpha}$ and~$\eps^2 \Phi_{G,H} \le c \log n$ imply~$\Phi_{G,H} = O(n^{2\alpha}\log n)$.
By~\eqref{eq:overthreshold} and the contrapositive of \refR{rem:Phibig}~\ref{eq:Phibig:iv} we can thus assume that, say,
\begin{equation}\label{eq:closetohreshold}
p \asymp n^{\theta - 1/m(G,H)} \quad \text{ with } \quad \theta = \theta(n,p) \in [0,3\alpha].
\end{equation}
Since the primal~$J$ is unique, we have~$d(G,J)=m(G,H)$, and~$d(G,K)< m(G,H)$ when~$G \subsetneq K \subseteq H$ satisfies $J \neq K$.
Hence there exists a constant~$\gamma=\gamma(G,J,H)>0$
such that, for any $G \subsetneq K \subseteq H$,
\begin{equation}\label{eq:mu:minimal}
\mu_{G,K} \asymp \left( np^{d(G,K)} \right)^{\vr{G}{K}} \asymp \left( n^{1 - \frac{d(G,K)}{m(G,H)} + \theta d(G,K)} \right)^{\vr{G}{K}} = \begin{cases}
\Omega(n^{\gamma}) & \text{if~$K \neq J$,} \\
O(n^{3\alpha \er{G}{J}}) & \text{if~$K = J$.}
\end{cases}
\end{equation}
By taking~$\alpha > 0$ small enough, it follows that~$\Phi_{G,H} = \mu_{G,J}$ for~$n$ large enough,
which (as discussed) establishes~$\eps^2 \mu_{G,J} \le c \log n$.
Therefore~\eqref{eq:main:strbal} of~\refT{thm_strictly_balanced} implies (after rescaling by $\operatorname{aut}(G,J)$) that, \whp{},
\begin{equation}\label{eq:GJoff}
\max_{\xx \in \oset{v_G}} |Y_{G,J}(\xx) - \nu_{G,J}| \ge 3\eps \nu_{G,J}.
\end{equation}
The $0$-statement of \refT{thm_unique} now follows from~\eqref{eq:GJoff} and~\eqref{eq:JHconc}
by the same (routine) reasoning with which~\eqref{eq:badJ}--\eqref{eq:sparser} from Section~\ref{sec:ext:0}
implied the $0$-statement of \refT{thm_nogrounded}.
\end{proof}
\begin{remark}[Theorem~\ref{thm_unique}: stronger $1$-statement]\label{rem:thm_unique1}
The above proof yields, in view of Remarks~\ref{rem:thm_generaltail}--\ref{rem:thm_strictly_balanced}, the following stronger conclusion:
for any fixed~$\tau > 0$ there is a constant~$C=C(\tau,G,H)>0$ such that
the~$1$-statement in~\eqref{eq:thm:unique} of Theorem~\ref{thm_unique}
holds with probability~$1 - o(n^{-\tau})$.
\end{remark}
\subsection{Strictly balanced and ungrounded case (\refT{thm_strictly_balanced})}\label{sec:ext:non}
In this section we prove the threshold~\eqref{eq:main:strbal:non} of~\refT{thm_strictly_balanced}~(ii) for strictly balanced rooted graphs~$(G,H)$ that are not grounded,
which turns out to be a simple corollary of~\refT{thm_nogrounded}.
The crux is that, by decreasing~$\alpha>0$ (if~necessary), we can ensure that the {$0$-}~and {$1$-statement} conditions in~\eqref{eq:main:strbal:non} and~\eqref{eq:main:nogrounded} coincide.
\begin{proof}[Proof of~\eqref{eq:main:strbal:non} of~\refT{thm_strictly_balanced}]
By assumption the unique primal~$H$ is not grounded, so \refT{thm_nogrounded} applies.
Decreasing the constant~$\alpha>0$ from \refT{thm_nogrounded}, we can assume that~$\beta \ge 3\alpha$, where~$\beta>0$ is the constant given by \refL{lem:StrBal}~\ref{eq:StrBal:density}.
We now distinguish two ranges of~$p=p(n)$.
First, when~$p \le n^{-1/d(G,H)+\beta}$, then~\eqref{eq:lem:density:subs} from \refL{lem:StrBal} implies that~$\mu=\Phi$ for~$n$ large enough (since~\eqref{eq:lem:density:subs} implies~$\mu_{G,H}/\mu_{G,J} \asymp n^{v_H - v_J}p^{e_H - e_J} \ll 1$ for all~$G \subseteq J \subsetneq H$ with~$e_J > e_G$).
Second, when~$p \ge n^{-1/d(G,H)+\beta} \ge n^{-1/m(G,H)+3\alpha}$, then~$\eps \ge n^{-\alpha}$ and \refR{rem:Phibig}~\ref{eq:Phibig:iv} imply that~$\min\{\eps^2 \mu, \eps^2\Phi\} \ge n^{-2\alpha} \cdot \Phi = \Omega(n^{\alpha}) \to \infty$.
Since in both ranges the {$0$-}~and {$1$-statement} conditions in~\eqref{eq:main:strbal:non} and~\eqref{eq:main:nogrounded} coincide,
it follows that \refT{thm_nogrounded} implies~\eqref{eq:main:strbal:non}.
\end{proof}
\section{Cautionary examples (\refP{prop:counterexample:2} and~\ref{prop:counterexample:1})}\label{sec:counterexample}
In this section we prove Propositions~\ref{prop:counterexample:2}--\ref{prop:counterexample:1}
for the rooted graphs~(e) and~(f) depicted in~\refF{counterexample}.
The proof idea for~\refP{prop:counterexample:2} is to proceed in two rounds for a fixed vertex~$x$:
using~\refT{thm_strictly_balanced} we first find about~$\mu_{G,K_4}$ many~$(G,K_4)$-extensions of~$\xx=(x)$,
which we then extend to about~$\mu_{G,H}$ many~$(G,H)$-extensions of~$\xx$.
The crux is that most of the relevant~$(K_4,H)$-extensions from the second round evolve nearly independently,
which ultimately allows us to surpass the conditions of Spencer's result~\eqref{eq_Spencer_SB} and~\refT{thm_strictly_balanced} for~$(K_4,H)$.
\begin{proof}[Proof of \refP{prop:counterexample:2}]
Recalling~$\omega = np^2$, by assumption we have~$\eps^2 \mu_{G,K_4} \asymp \eps^2\omega^3 \gg \log n$ and~$\eps^2 \mu_{K_4,H} \le \mu_{K_4,H} \asymp \omega \ll \log n$, which readily implies~$\log \omega \asymp \log \log n$ and~$p=n^{-1/2+o(1)}$.
Now it is not difficult to verify that~$\Phi_{G,H} \asymp \mu_{G,K_4} \asymp \omega^3$ (either directly, or similarly as for~\eqref{eq:mu:minimal} from \refS{s_unique}).
Turning to the details of the $1$-statement,
we start with the auxiliary claim that, whp, for each vertex~$x$ the following event~$\cP_x$~holds:
\begin{romenumerate2}
\item\label{eq:Pv:i}
The vertex-neighbourhood~$\Gamma_x$ of~$x$ has size~$|\Gamma_x| \le 9np$.
\item\label{eq:Pv:ii}
The collection~$\cT_x$ of all triangles spanned by~$\Gamma_x$
has size~$|\cT_x| = (1\pm \eps/9) \tbinom{n-1}{3}p^6$.
\item\label{eq:Pv:iii}
Every vertex~$y \in \Gamma_x$ is contained in at most~$D:=15$ triangles from~$\cT_x$.
\end{romenumerate2}
Indeed, invoking the $1$-statement of \refT{thm_strictly_balanced} with~$H$ equal to~$K_4$ and~$G$ being the root vertex~$v$,
from $\eps ^2 \mu_{G,K_4} \asymp \eps^2\omega^3 \gg \log n$ it follows that, whp,~\ref{eq:Pv:ii} holds for all vertices~$x$.
Since~$np = n^{1/2+o(1)} \gg \log n$, using standard Chernoff bounds it is routine to see that, whp,~\ref{eq:Pv:i} holds for all vertices~$x$.
We claim that if~\ref{eq:Pv:iii}~fails for some~$y \in \Gamma_x$, then there are $4$~triangles in~$\cT_x$ containing $y$ that form either a flower (share no vertices other than~$y$) or a book (all contain~$yz$ for some~$z \in \Gamma_x$):
to see this, note that if we assume the contrary, then for a maximal flower (with at most~$3$ triangles) each edge of it is contained in at most~$2$ other $\cT_x$-triangles, whence there are at most~$3 + 6 \cdot 2 = 15$ triangles in~$\cT_x$ containing~$y$.
The probability that there is either a \mbox{$4$-flower} or \mbox{$4$-book} with all vertices connected to some extra vertex~$x$ is at most $n^{10}p^{21} + n^7p^{16} = n^{-1/2 + o(1)} \to 0$.
It follows that, whp, properties~\ref{eq:Pv:i}--\ref{eq:Pv:iii} hold for all vertices~$x$, establishing the~claim.
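For completeness, the exponents in the probability estimate above follow by plugging in~$p = n^{-1/2+o(1)}$: indeed
\[
n^{10}p^{21} = n^{10 - 21/2 + o(1)} = n^{-1/2+o(1)}
\qquad\text{and}\qquad
n^{7}p^{16} = n^{7 - 8 + o(1)} = n^{-1+o(1)} .
\]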
We now fix a root vertex~$x$, and expose the edges of~${\mathbb G}_{n,p}$ in two rounds:
in the first round we expose all edges incident to~$x$ and all edges inside~$\Gamma_x$,
and then in the second round we expose all remaining edges.
We henceforth condition on the outcome of the first exposure round, and assume that~$\cP_x$ holds.
As usual, to avoid clutter we shall omit this conditioning from our notation.
Given distinct vertices~$a,b \in \Gamma_x$,
let~$Y_{a,b}$ denote the number of common neighbours of~$a$ and~$b$ in~$[n] \setminus (\{x\} \cup \Gamma_x)$.
Note that~$\eps \omega \gg \sqrt{\log n/\omega} \gg 1$ by assumption.
Since, by~\ref{eq:Pv:ii}, $|\cT_x| \asymp n^3p^6 = \omega ^3 \ll \eps \omega^4 \asymp \eps \mu$,
using~\ref{eq:Pv:iii} it is not difficult to see that
\begin{equation}\label{eq:Xx}
Z_x := \sum_{abc \in \cT_x}(Y_{a,b}+Y_{b,c} + Y_{a,c}) \qquad \text{satisfies} \qquad \bigl|X_{(x)}-Z_x\bigr| \: \le \: 3|\cT_x| \cdot D \ll \eps \mu/2 .
\end{equation}
Using~\ref{eq:Pv:i} and~$\eps \omega \gg 1$ (see above) we infer $1+|\Gamma_x| \le 10np = n^{1/2+o(1)} \ll n/\omega \ll \eps n$,
and together with~\ref{eq:Pv:ii} it then follows that, say,
\begin{equation}\label{eq:EZv}
\E Z_x = 3|\cT_x| \cdot \bigpar{n-1-|\Gamma_x|} p^2 = (1\pm \eps/8) \mu .
\end{equation}
In~\eqref{eq:Xx} we now write each~$Y_{a',b'}$ as a sum of indicators of length~$2$~paths,
which enables us to estimate the lower tail of~$Z_x$ via Janson's inequality.
By distinguishing between pairs of edge-overlapping paths that share one or two endpoints,
using~\ref{eq:Pv:iii} it is standard to see that the relevant~$\Delta$ term is at most~$\E Z_x \cdot (2D p + 2D) = O(\E Z_x)$, say.
Using~$\eps^2 \E Z_x \asymp \eps^2 \omega^4 \gg \log n$,
by invoking~\cite[Theorem~1]{RiordanWarnke2015} it then follows~that
\begin{equation}\label{eq:Zv:upper}
\Pr(Z_x \le (1-\eps/8) \E Z_x) \le \exp\bigpar{- \Omega(\eps^2 \E Z_x)} = o(n^{-1}) .
\end{equation}
Using~\ref{eq:Pv:iii} we also see that any path shares an edge with a total of at most~$2 D = O(1)$ paths,
which enables us to estimate the upper tail of~$Z_x$ via concentration inequalities for random variables with `controlled dependencies'.
In particular, by invoking~\cite[Proposition~2.44]{JLR} (see also~\cite[Theorem~9]{W14}) it follows~that
\begin{equation}\label{eq:Zv:lower}
\Pr(Z_x \ge (1+\eps/8) \E Z_x) \le \exp\bigpar{- \Omega(\eps^2 \E Z_x)} = o(n^{-1}) .
\end{equation}
To sum up, \eqref{eq:Xx}--\eqref{eq:Zv:lower} and~$1-\eps/2 < (1 \pm \eps/8)^2 < 1 + \eps/2$ imply~$\Pr(|X_{(x)}-\mu| \ge \eps \mu \mid \cP_x) = o(n^{-1})$,
which readily completes the proof of the desired $1$-statement (since, whp,~$\cP_x$ holds for all~$n$ vertices~$x$).
\end{proof}
The proof idea for \refP{prop:counterexample:1} is to find a copy of~$K_4$ containing an edge with extremely large codegree.
To this end we proceed in two steps, inspired by~\cite[Lemma~3]{SW18}:
in the first step we find~$\Theta(n)$ many vertex-disjoint copies of~$K_4$,
and in the second step we then find the desired edge with large codegree.
\begin{proof}[Proof of \refP{prop:counterexample:1}]%
Note that~$\mu \asymp \omega^5$.
As in the proof of \refP{prop:counterexample:2}, we again have~$\log \omega \asymp \log \log n$ and $\Phi_{G,H} \asymp \mu_{G,K_4} \asymp \omega^3$,
so~$\eps^2 \Phi_{G,H}\asymp \eps^2 \mu_{G, K_4} \gg \log n$ by assumption.
Noting~$0.39< 2/5$, we~define
\begin{equation}\label{eq:CE:defz}
z := \Bigceil{2 \bigpar{(1+\eps) \mu}^{1/2}} \asymp \omega^{5/2} = o(\log n/\log \omega) .
\end{equation}
Turning to the details of the desired $0$-statement,
let~$X^v_{K_4}$ denote the size of the largest collection of vertex-disjoint copies of~$K_4$ spanned by the vertices in~$W:=\{1, \ldots, \lfloor n/2 \rfloor \}$.
It is routine to check that the minimum of~${|W|^{v_G}p^{e_G}}$, taken over all~${G \subseteq K_4}$ with~${v_G \ge 1}$, equals~$|W| \approx n/2$ for~$n$ large enough.
Since~${\mathbb G}_{n,p}[W]$ has the same distribution as~${\mathbb G}_{|W|,p}$,
by invoking~\cite[Theorem~3.29]{JLR} there is a constant~$c>0$ such~that
\begin{equation}\label{eq:XK4v}
\Pr(X^v_{K_4} \ge cn) = 1 - o(1).
\end{equation}
We now condition on the edges spanned by~$W$, and assume that~$X^v_{K_4} \ge cn$.
To avoid clutter, we shall again omit this conditioning from our notation (as in the proof of \refP{prop:counterexample:2}).
We henceforth fix~$\ceil{cn}$ vertex-disjoint copies of~$K_4$ spanned by~$W$,
and from the~$i$-th such copy
we pick an edge~${\{v_i,w_i\}}$ and a further vertex~${x_i \not\in \{v_i,w_i\}}$.
Defining~$Z_i$ as the number of vertices in~$[n] \setminus W$ that are common neighbours of~$v_i$ and~$w_i$,
using~$\tbinom{m}{z} \ge (m/z)^z$ for~$m \ge z$ together with~$np^2 = \omega = o(z)$ and~\eqref{eq:CE:defz}, it routinely follows~that
\begin{equation*}
\Pr(Z_i \ge z)
\ge
\binom{\lceil n/2 \rceil}{z}p^{2z} \bigpar{1-p^2}^{\lceil n/2 \rceil-z} \ge \Bigpar{\frac{np^2}{2z}}^{z} \e^{-np^2} = \e^{-\Theta(z\log \omega)} \ge n^{-o(1)} .
\end{equation*}
Note that~$Z_i \ge z$ implies $X_{(x_i)} \ge \binom{z}{2} \ge \bigpar{z/2}^2 \ge (1+\eps) \mu$, where the middle inequality uses~$z \ge 2$ and the last one uses the choice~\eqref{eq:CE:defz} of~$z$.
Since the random variables~$Z_i$ depend on disjoint sets of independent edges, it then follows that
\begin{equation*}
\Pr(\max_{x \in [n]}X_{(x)} < (1+\eps)\mu) \le \Pr(\max_{1 \le i \le \lceil cn \rceil}\hspace{-0.25em}Z_i < z) = \hspace{-0.125em}\prod_{1 \le i \le \lceil cn \rceil} \hspace{-0.375em} \Pr(Z_i < z) \le \Bigpar{1-n^{-o(1)}}^{\lceil cn \rceil} = o(1) .
\end{equation*}
Hence~$\Pr(\max_{x \in [n]}X_{(x)} \ge (1+\eps)\mu \mid X^v_{K_4} \ge cn ) = 1-o(1)$,
which together with~\eqref{eq:XK4v} completes the~proof.
\end{proof}
\pagebreak[3]
\section{Concluding remarks}\label{sec:conclusion}
The results and problems of this paper can also be viewed through the lens of \emph{extreme value theory},
where a standard goal is to show that a (suitably shifted and normalized) maximum converges to a non-degenerate distribution.
To see the connection, note that the proof of \refT{thm_strictly_balanced}~(i) describes an interval on which~$\max_{\xx \in \oset{v_G}} X_\xx$ is \whp{} concentrated.
Our setting concerns discrete random variables (which can have complicated behaviour, cf.~\cite[Section 8.5]{FHR}),
with a correlation structure that seems quite unusual for the field.
Hence, as a first step, it would already be interesting to establish a `law of large numbers' result
(even for a restricted class of~$(G,H)$, such as strictly balanced~ones),
which is the content of the following~problem.
\begin{problem}\label{prb:extreme_val}
Determine for what rooted graphs~$(G,H)$ and edge probabilities~$p = p(n)$ there is a sequence~$(a_n)$ of real positive numbers such that~${(\max_{\xx} X_{\xx} - \mu)/a_n}$ converges to~$1$ in probability (as~$n \to \infty$).
\end{problem}
\pagebreak[3]
\small
\bibliographystyle{plain}
| {
    "timestamp": "2019-11-11T02:15:53",
    "arxiv_id": "1911.03012",
    "url": "https://arxiv.org/abs/1911.03012",
    "title": "Counting extensions revisited",
    "subjects": "Combinatorics (math.CO); Probability (math.PR)",
    "abstract": "We consider rooted subgraphs in random graphs, i.e., extension counts such as (i) the number of triangles containing a given vertex or (ii) the number of paths of length three connecting two given vertices. In 1989, Spencer gave sufficient conditions for the event that, with high probability, these extension counts are asymptotically equal for all choices of the root vertices. For the important strictly balanced case, Spencer also raised the fundamental question as to whether these conditions are necessary. We answer this question by a careful second moment argument, and discuss some intriguing problems that remain open."
} |
https://arxiv.org/abs/1706.08052 | Mean perimeter and mean area of the convex hull over planar random walks | We investigate the geometric properties of the convex hull over $n$ successive positions of a planar random walk, with a symmetric continuous jump distribution. We derive the large $n$ asymptotic behavior of the mean perimeter. In addition, we compute the mean area for the particular case of isotropic Gaussian jumps. While the leading terms of these asymptotics are universal, the subleading (correction) terms depend on finer details of the jump distribution and describe a "finite size effect" of discrete-time jump processes, allowing one to accurately compute the mean perimeter and the mean area even for small $n$, as verified by Monte Carlo simulations. This is particularly valuable for applications dealing with discrete-time jump processes and ranging from the statistical analysis of single-particle tracking experiments in microbiology to home range estimations in ecology.

\section{Introduction}
Consider a set of $n$ points with position vectors $\{\vec r_1, \vec
r_2,\ldots, \vec r_n\}$ in a $d$-dimensional space. The most natural
and perhaps the simplest way to characterize the {\it shape} of this
set of points is by drawing the convex hull around this set: a convex
hull is the unique minimal convex polytope that encloses all the
points. This unique polytope is convex since the line segment joining
any two points on the surface of the polytope is fully contained
within the polytope. Properties of such convex polytopes have been
widely studied in mathematics, computer science (image processing and
pattern recognition) and in the physics of crystallography (the Wulff
construction). In two dimensions, where the convex hull is a polygon,
there are many other applications, most notably in ecology where the
home range of animals or the spread of an epidemic is typically
estimated by a convex hull. For a review on the history and
applications of convex hulls, see Ref.~\cite{Majumdar2010a}.
When the points $\{\vec r_1, \vec r_2,\ldots, \vec r_n\}$ are drawn
randomly from a joint distribution $P(\vec r_1, \vec r_2,\ldots, \vec
r_n)$, the associated convex hull also becomes random and
characterizing its statistical properties is a challenging problem,
since the convex hull is a highly nontrivial functional of the random
variables $\{\vec r_1, \vec r_2,\ldots, \vec r_n\}$. For instance,
what can one say about the statistics of the surface area $S_d$ or the
volume $V_d$ of the convex hull, for a given joint distribution
$P(\vec r_1, \vec r_2,\ldots, \vec r_n)$? Even finding the mean
surface area $\langle S_d\rangle$ or the mean volume $\langle
V_d\rangle$, for arbitrary joint distribution, is a formidably
difficult problem. In the special case when the points are
independent and identically distributed, i.e., when the joint
distribution factorizes as $P(\vec r_1, \vec r_2,\ldots, \vec r_n)=
\prod_{k=1}^n P(\vec r_k)$ (with $P(\vec r_k)$ representing the
marginal distribution), several results on the statistics of the
surface and volume of the convex hull are known
(see~\cite{Majumdar2010a} for a historical review). However, for {\it
correlated} points where the joint distribution does not factorize,
very few results are available.
The simplest example of a set of correlated points corresponds to the
case of a random walk in $d$-dimensional continuous space, where $\vec
r_k$ represents the position of the walker at step $k$, starting at
the origin at step $0$. The position evolves via the Markov rule,
$\vec r_k= \vec r_{k-1}+ \vec \eta_k$, where $\vec \eta_k$ represents
the jump at step $k$, and one assumes that $\vec \eta_k$ are
independent and identically distributed random variables, each drawn
from some prescribed distribution $p(\vec \eta_k)$. The walk evolves
up to $n$ steps generating the vertices $\{\vec r_1, \vec r_2,\ldots,
\vec r_n\}$ of its trajectory. There is a unique convex hull for each
sample of this trajectory and what can one say about the mean surface
area or the mean volume of this convex hull, given the jump
distribution $p(\vec \eta_k)$? This is the basic problem that we
address in this paper. We show that, at least for $d=2$ (planar random
walks), it is possible to obtain precise {\it explicit} results for
all $n$ for the mean perimeter of the convex hull of the walk, for a
large class of jump distributions $p(\vec \eta)$, including in
particular L\'evy flights where the jump distribution has a fat tail.
We also obtain similar results for the mean area of the convex hull,
but under additional assumptions on the jump distribution.
This problem concerning the convex hull of a random walk becomes
somewhat simpler in the special case of the Brownian limit, where
several results are known. Consider, for example a jump distribution
$p(\vec \eta_k)$ with zero mean and a finite variance $\sigma^2$. In
this case, the walk converges in the large $n$ limit to the Brownian
motion. In other words, one can consider the continuous-time limit,
as $\sigma^2\to 0$ and $n\to \infty$ with $n\sigma^2= 2\, D\, t$ being
fixed (here $D$ is called the diffusion constant and $t$ is the
duration of the walk). In this Brownian limit and for $d=2$, the mean
perimeter and the mean area have long been known exactly.
Tak\'acs~\cite{Takacs1980} computed the mean perimeter
\begin{equation}
\langle S_2\rangle = \sqrt{16 \pi\, D\, t}\, ,
\label{eq:perim_BM}
\end{equation}
while El Bachir~\cite{ElBachir83} and Letac~\cite{Letac93}
computed the mean area
\begin{equation}
\langle V_2\rangle = \pi\, D\, t\, .
\label{eq:area_BM}
\end{equation}
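These Brownian-limit formulas are straightforward to probe numerically. The following sketch (an illustrative check, not part of the original analysis; it assumes \texttt{numpy} and a pure-Python monotone-chain hull) approximates planar Brownian motion by an $n$-step Gaussian walk and compares the sampled mean hull perimeter and area with Eqs.~(\ref{eq:perim_BM}) and (\ref{eq:area_BM}); at finite $n$ both ratios stay a few percent below $1$, which is precisely the finite-size effect quantified later in this paper.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone-chain algorithm; returns the hull vertices in order."""
    pts = sorted(map(tuple, points))
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = build(pts), build(reversed(pts))
    return np.array(lower[:-1] + upper[:-1])

def perimeter_area(hull):
    d = np.diff(np.vstack([hull, hull[:1]]), axis=0)
    L = np.sum(np.hypot(d[:, 0], d[:, 1]))
    x, y = hull[:, 0], hull[:, 1]
    A = 0.5 * abs(np.sum(x * np.roll(y, -1) - y * np.roll(x, -1)))  # shoelace formula
    return L, A

rng = np.random.default_rng(0)
n, trials, sigma = 1000, 400, 1.0
D_t = n * sigma**2 / 2  # D*t for time step tau = 1, D = sigma^2/(2*tau)
L_mc = A_mc = 0.0
for _ in range(trials):
    walk = np.vstack([[0.0, 0.0],
                      np.cumsum(rng.normal(0.0, sigma, size=(n, 2)), axis=0)])
    L, A = perimeter_area(convex_hull(walk))
    L_mc += L / trials
    A_mc += A / trials

print(L_mc / np.sqrt(16 * np.pi * D_t))  # ~0.98 (Takacs, up to finite-n corrections)
print(A_mc / (np.pi * D_t))              # ~0.94 (El Bachir/Letac, slower convergence)
```

The few-percent deficit of both ratios is not a numerical artifact: it is the discrete-walk correction that the subleading terms derived below capture explicitly.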
For a planar Brownian bridge of duration $t$ (where the walker returns
to the origin after time $t$), the mean perimeter $\langle
S_2\rangle_{\rm bridge} = \sqrt{\pi^3\, D\, t}$ was computed by
Goldman~\cite{Goldman96}, while the mean area $\langle V_2\rangle_{\rm
bridge}=(2\pi/3)\, D\, t$ was computed relatively recently by
Randon-Furling {\it et al.}~\cite{Randon09}. An interesting extension
of this problem in $d=2$ is to the case of $N$ independent planar
Brownian motions (or Brownian bridges)~\cite{Randon09,Majumdar2010a}.
This is relevant in the context of the home range of animals, where
$N$ represents the size of an animal population and the trajectory of
each animal is approximated by a Brownian motion during their foraging
period. For a fixed population size $N$, the mean perimeter and the
mean area of the convex hull were computed exactly: $\langle S_2\rangle
= \alpha_N \, \sqrt{D\,t}$ and $\langle V_2 \rangle = \beta_N\, D\,
t$, where the prefactors $\alpha_N$ and $\beta_N$ were found to have
nontrivial $N$ dependence \cite{Randon09,Majumdar2010a}. For $d>2$,
very few exact results are known for this problem. For a single
$(N=1)$ Brownian motion, the mean surface area and the mean volume of
the convex hull were recently computed by Eldan~\cite{Eldan2014}:
$\langle S_d \rangle=\frac{2(4\pi D t)^{(d-1)/2}}{\Gamma(d)}$ and
$\langle V_d\rangle= \frac{(\pi D t)^{d/2}}{\Gamma(d/2+1)^2}$ (see
also \cite{Kabluchko16b} for another derivation and extension to
Brownian bridges). However, for $N>1$ and $d>2$, no exact result is
available. Finally, going beyond the mean surface and the mean
volume, very few results are known for higher moments (see the review
\cite{Majumdar2010a} for results on variance) or even the full
distribution of the surface or the volume of the convex hull of
Brownian motion (see Refs.~\cite{Wade15,Wade15b} for a recent
discussion on the distribution of the perimeter in $d=2$ and $N=1$).
Very recently, the full distribution (including the large deviation
tails) of the perimeter and the area of $N\ge 1$ planar Brownian
motions were calculated numerically~\cite{Claussen15,Dwenter16}. Some
rigorous results on the convex hulls of L\'evy processes were recently
derived \cite{Molchanov12,Molchanov16}.
If one is interested only in the mean area or the mean volume of the
convex hull of a generic stochastic process (not necessarily just a
random walk), a particular simplification occurs in $d=2$ (planar
case) where several analytical results can be derived by adapting
Cauchy's formula~\cite{Cauchy1832,Santalo} for arbitrary closed convex
curves in $d=2$. Indeed by employing Cauchy's formula for every
realization of a random planar convex hull, it was shown in
Refs.~\cite{Randon09,Majumdar2010a} that the problem of computing the
mean perimeter and the mean area of an {\it arbitrary} two dimensional
stochastic process (can in general be non-Markovian and in
discrete-time) can be mapped to computing the extremal statistics
associated with the one dimensional component of the process (see
Section 3 for the precise mapping). This mapping was introduced
originally in~\cite{Randon09} to compute $\langle S_2\rangle$ and
$\langle V_2\rangle$ exactly for $N\ge 1$ planar Brownian motions.
Since then, it has been used for a number of continuous-time planar
processes: random acceleration process~\cite{Reymbaut11}, branching
Brownian motion with applications to animal epidemic
outbreak~\cite{Dumonteil13}, anomalous diffusion
processes~\cite{Lukovic13} and also to a single Brownian motion
confined to a half-space~\cite{Chupeau2015a}.
The objective of this paper is to go beyond the continuous-time limit
and obtain results for the convex hull of a discrete-time planar
random walk of $n$ steps (with $n$ large but finite) with arbitrary
jump distribution, including for instance L\'evy flights. Indeed, in
any realistic experiment or simulation, the points of the trajectory
are always discrete. For example, recently proposed local convex hull
estimators \cite{Lanoiselee17} are based on a relatively small number
of points, where one cannot apply the Brownian limiting results
reviewed above. The first rigorous result for a two-dimensional
discrete random walk, modeled as a sum of independent random variables
in the complex plane, was derived for the mean perimeter of the convex
hull by Spitzer and Widom \cite{Spitzer1961},
\begin{equation} \label{eq:Spitzer}
\langle L_n\rangle = 2\sum_{k=1}^n\frac{\langle \vert x_k + i y_k \vert \rangle}{k}
\end{equation}
(here $x_k+iy_k$ is the complex-valued position of the walker after
$k$ steps). Although the formula (\ref{eq:Spitzer}) looks deceptively
simple, an {\it explicit} computation of the mean $\langle L_n
\rangle$ is difficult using Eq. (\ref{eq:Spitzer}), in particular
its behavior for large but finite $n$. For the case of zero mean and
finite variance jump distributions, the leading $n^{\frac12}$ term in
$\langle L_n\rangle$ was identified in \cite{Spitzer1961}, but the
relevant subleading terms were not known, to the best of our
knowledge. For other results on the statistics of $L_n$, see
Ref.~\cite{Snyder93}. The Spitzer-Widom formula (\ref{eq:Spitzer})
was extended to generic $d$-dimensional random walks in
Ref. \cite{Vysotsky15}, in which exact combinatorial expressions for
the expected surface area and the expected volume of the convex hull
were derived. However, these expressions are not suitable for the
asymptotic analysis at large $n$. Several other geometrical
properties of the convex hull of random walks are known: for example,
the exact formula for the mean number of facets of the convex polytope
of a $d$-dimensional random walk, for $d=2$~\cite{Baxter61} and $d >
2$~\cite{Kabluchko16,Randon17}. But in this paper, we will restrict
ourselves only to the mean perimeter $\langle L_n\rangle$ and the mean
area $\langle A_n\rangle$ of a planar random walk of $n$ steps and our
main goal is to derive explicitly not only the leading term in
$\langle L_n\rangle$ and $\langle A_n\rangle$ for large $n$, but also
the subleading terms.
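For one concrete case the Spitzer-Widom sum can be made fully explicit. For isotropic Gaussian jumps with per-coordinate variance $\sigma^2$, the modulus $|x_k + i y_k|$ is Rayleigh distributed with scale $\sigma\sqrt{k}$, so $\langle |x_k + i y_k|\rangle = \sigma\sqrt{\pi k/2}$. The short sketch below (an illustrative evaluation, assuming \texttt{numpy}) then shows how slowly $\langle L_n\rangle$ approaches its leading large-$n$ behavior, which is the practical motivation for the subleading terms derived in this paper.

```python
import numpy as np

# For isotropic Gaussian jumps with per-coordinate variance sigma^2, the modulus
# |x_k + i y_k| is Rayleigh distributed with scale sigma*sqrt(k), so
# <|x_k + i y_k|> = sigma*sqrt(pi*k/2) and Eq. (eq:Spitzer) becomes an explicit sum.
sigma = 1.0
for n in (10, 100, 1000, 10000):
    k = np.arange(1, n + 1)
    L_n = 2 * np.sum(sigma * np.sqrt(np.pi * k / 2) / k)  # = sigma*sqrt(2*pi)*sum k^(-1/2)
    leading = sigma * np.sqrt(8 * np.pi * n)              # Brownian-limit leading term
    print(n, L_n / leading)
# The ratio approaches 1 roughly like 1 + zeta(1/2)/(2*sqrt(n)), i.e. slowly:
# it is still ~0.93 at n = 100, which is why the subleading terms matter.
```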
Our strategy is to adapt the mapping between the convex hull of a
$2$-d process and the extreme statistics of the $1$-d component
process, mentioned above, to the case of a single discrete-time planar
random walk with generic jump distributions. Using this strategy, we
are able to compute explicitly the leading and subleading terms of the
mean perimeter of the convex hull of a planar random walk of $n$ steps
with arbitrary symmetric continuous jump distributions for large but
finite $n$. The mean area is also computed but only for the
particular case of isotropic Gaussian jumps. The rest of the paper is
organized as follows. Section \ref{sec:outline} outlines the class of
considered planar random walks and the main results. In
Sec. \ref{sec:theory}, we explain the derivation steps. In
Sec. \ref{sec:simu}, several explicit examples are presented and used
to illustrate the accuracy of the derived asymptotic relations by
comparison with Monte Carlo simulations. In
Sec. \ref{sec:discussion}, we discuss the main results, their
applications, and conclusions. \ref{sec:Aderivation} and
\ref{sec:Aexamples} contain some technical details of the derivation
and exactly solvable examples, respectively.
\section{The model and the main results}
\label{sec:outline}
We consider a discrete-time random walker in the plane whose jumps are
random, independent, and identically distributed. Starting from the
origin, the walker produces a sequence of $(n+1)$ points $\{(x_0,y_0),
(x_1,y_1), \ldots, (x_n,y_n)\}\subset \mathbb R^2$ after $n$ jumps such that
\begin{equation} \label{eq:RW_def}
\fl
(x_0,y_0) = (0,0), \qquad (x_k,y_k) = (x_{k-1}, y_{k-1}) + (\eta_k^x,\eta_k^y) \quad (k=1,2,\ldots,n),
\end{equation}
where the jumps $\vec \eta_k = (\eta_k^x,\eta_k^y)$ at the $k$-th step
are independent from step to step, and at each step they are drawn
from a prescribed joint probability density function (PDF) $p(x,y)$,
i.e.,
\begin{equation}
\mathbb P\{\eta_k^x \in (x, x+dx), \eta_k^y \in (y, y+dy)\} = p(x,y)\, dx\,dy\, .
\label{noisepdf}
\end{equation}
We emphasize that the starting point $(x_0,y_0)$ is not random and for
convenience, we choose $x_0 = y_0 = 0$ (the origin). The convex
hull constructed over these $(n+1)$ points is the minimal convex
polygon that encloses all these points (see Fig. \ref{fig:conhull_rw}
for an illustration). We are interested in the perimeter $L_n$ and
the area $A_n$ of the convex hull which are random variables given
that the points are generated as successive positions of a planar
random walk. We aim at computing {\it exactly} the leading and
subleading terms of the mean perimeter, $\langle L_n\rangle$, and the
mean area, $\langle A_n\rangle$, of the convex hull for large $n$.
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{figure1.pdf}
\end{center}
\caption{
Illustration of the convex hull of a $7$-step planar random
walk. The walk starts at the origin $O$ and makes independent jumps at
each step (shown by arrows). After $7$ steps, the convex hull (shown
by dashed red lines) is constructed around the points of the
trajectory.}
\label{fig:conhull_rw}
\end{figure}
As explained in Sec. \ref{sec:theory}, our computation relies on two
key results: (i) Cauchy's formula for the perimeter and the area of a
closed convex curve, that allows one to reduce the original planar
problem to the analysis of one-dimensional projections, and (ii) the
Pollaczek-Spitzer formula describing the distribution of the maximum
of partial sums of independent symmetric continuously distributed
random variables~\cite{Pollaczek52,Spitzer56}. To use the
Pollaczek-Spitzer formula, we need thus to assume that the joint
probability density $p(x,y)$ is continuous and centrally symmetric:
\begin{equation} \label{eq:p_symm}
p(-x,-y) = p(x,y) .
\end{equation}
In particular, our results will not be applicable to a classical
random walk on the square lattice because its distribution is not
continuous. In the following, we outline the main results that will
be derived in Sec. \ref{sec:theory}.
The mean perimeter $\langle L_n\rangle$ is computed for a very general
class of symmetric continuous jump distributions. Writing the Fourier
transform of $p(x,y)$ as
\begin{equation} \label{eq:hatrho_xi_general}
\hat{\rho}_\theta(k) = \int\limits_{-\infty}^\infty dx
\int\limits_{-\infty}^\infty dy \, p(x,y) \, e^{ik (x\cos\theta + y \sin\theta)} ,
\end{equation}
one can characterize the behavior of the mean perimeter according to
the asymptotic properties of $\hat{\rho}_\theta(k)$ as $k\to 0$. We
assume a general expansion
\begin{equation} \label{eq:hatrho_mu}
\hat{\rho}_\theta(k) \simeq 1 - |a_\theta k|^\mu + o(|k|^\mu) \qquad (k\to 0) ,
\end{equation}
with the scaling exponent $0 < \mu \leq 2$, and a scale $a_\theta >
0$. When $0 < \mu \leq 1$, the mean perimeter of the convex hull is
infinite. We therefore focus on the case $1 < \mu \leq 2$.
First, we derive an exact formula for the generating function of
$\langle L_n\rangle$ which is valid for any $1 < \mu \leq 2$.
Extracting the asymptotic large $n$ behavior of $\langle L_n\rangle$
from this general formula is, however, nontrivial. We distinguish the
following cases.
\begin{enumerate}
\item
When the jump variance is finite ($\mu = 2$), the mean perimeter
is shown to behave as
\begin{equation} \label{eq:Lmean_asympt0}
\langle L_n\rangle \simeq C_0 \, n^{\frac12} + C_1 + o(1) \qquad (n \gg 1),
\end{equation}
with
\begin{equation} \label{eq:CLn}
C_0 = \frac{\sqrt{2}}{\sqrt{\pi}} \int\limits_0^{2\pi} d\theta \, \sigma_\theta , \qquad
C_1 = \int\limits_0^{2\pi} d\theta \, \sigma_\theta \, \gamma_\theta ,
\end{equation}
where $\sigma_\theta$ and $\gamma_\theta$ are given by
\begin{eqnarray}
\label{eq:sigma_theta}
\sigma_\theta^2 &=& - \lim\limits_{k\to 0} \frac{\partial^2 \hat{\rho}_\theta(k)}{\partial k^2} =
\langle (\eta^x \cos \theta + \eta^y \sin\theta)^2 \rangle = \frac{a_\theta^2}{2} \,, \\
\label{eq:gamma}
\gamma_\theta &=& \frac{1}{\pi \sqrt{2}} \int\limits_0^\infty \frac{dk}{k^2}
\ln \left(\frac{1 - \hat{\rho}_\theta\bigl(\sqrt{2}\,k/\sigma_\theta\bigr)}{k^2}\right) .
\end{eqnarray}
If in addition the fourth-order moment of the jump distribution is
finite, one also obtains the second subleading term,
\begin{equation} \label{eq:Lmean_asympt}
\langle L_n\rangle \simeq C_0 \, n^{\frac12} + C_1 + C_2 \, n^{-\frac12} + o(n^{-\frac12}) \qquad (n \gg 1),
\end{equation}
with
\begin{equation}
C_2 = \frac{C_0}{8} + \frac{\sqrt{2}}{24\sqrt{\pi}} \int\limits_0^{2\pi} d\theta \, \sigma_\theta \, {\mathcal K}_\theta
\end{equation}
and
\begin{equation}
\label{eq:a4}
{\mathcal K}_\theta = \frac{1}{\sigma_\theta^4} \lim\limits_{k\to 0} \frac{\partial^4 \hat{\rho}_\theta(k)}{\partial k^4}
= \frac{\langle (\eta^x \cos \theta + \eta^y \sin\theta)^4 \rangle}{\langle (\eta^x \cos \theta + \eta^y \sin\theta)^2 \rangle^2} \,.
\end{equation}
Higher-order corrections can also be derived under further moments
assumptions. Note that the integral expression for the coefficient
$C_0$ in front of the leading term $n^{1/2}$ first appeared in
\cite{Spitzer1961}. In Sec. \ref{sec:simu}, we will show that the
asymptotic formula (\ref{eq:Lmean_asympt}) is very accurate even for
small $n$.
\item
When the jump variance is infinite (i.e., $1 < \mu < 2$), one needs to
consider the subleading term in the small $k$ asymptotics of
$\hat{\rho}_\theta(k)$:
\begin{equation} \label{eq:hatrho_nu}
\hat{\rho}_\theta(k) \simeq 1 - |a_\theta k|^\mu + b_\theta |k|^\nu + o(|k|^\nu) \qquad (k\to 0) ,
\end{equation}
with the subleading exponent $\nu > \mu$ and a coefficient $b_\theta$.
Depending on the subleading exponent $\nu$, we distinguish two cases:
\subitem
(1) if $\mu < \nu < \mu+1$, one has
\begin{equation} \label{eq:Ln_Levy1}
\langle L_n\rangle \simeq C_0 \, n^{1/\mu} + C_1 \, n^{1-(\nu-1)/\mu} + o(n^{1-(\nu-1)/\mu}) \qquad (n\gg 1),
\end{equation}
with
\begin{eqnarray} \label{eq:C0_Levy1}
C_0 &=& \frac{\mu \,\Gamma(1- 1/\mu)}{\pi} \int\limits_0^{2\pi} d\theta \, a_\theta , \\
\label{eq:C1_Levy1}
C_1 &=& - \frac{\Gamma((\nu-1)/\mu)}{\pi (\mu+1-\nu)} \int\limits_0^{2\pi} d\theta \, a_\theta^{1-\nu} \, b_\theta .
\end{eqnarray}
Note that the coefficient $C_0$ also appears in the mean perimeter of
the convex hull of continuous-time symmetric stable processes
\cite{Molchanov12}.
\subitem
(2) if $\nu > \mu+1$, one has
\begin{equation} \label{eq:Ln_Levy2}
\langle L_n\rangle \simeq C_0 \, n^{1/\mu} + C_1 + o(1) \qquad (n\gg 1),
\end{equation}
with $C_0$ from Eq. (\ref{eq:C0_Levy1}) and
\begin{equation} \label{eq:C1_Levy2}
C_1 = \int\limits_0^{2\pi} d\theta \, \gamma_\theta,
\end{equation}
where
\begin{equation} \label{eq:gamma_Levy2}
\gamma_\theta = \frac{1}{\pi} \int\limits_0^{\infty} \frac{dk}{k^2} \ln \left(\frac{1- \hat{\rho}_\theta(k)}{(ak)^\mu}\right) .
\end{equation}
For instance, for a L\'evy symmetric alpha-stable distribution with
$\hat{\rho}(k) = \exp(-|ak|^{\mu})$, one gets \cite{Comtet05}
\begin{equation} \label{eq:gamma_Levystable}
\gamma = a \, \frac{\zeta(1/\mu)}{(2\pi)^{1/\mu} \sin(\pi/(2\mu))} \, ,
\end{equation}
where $\zeta(z)$ is the Riemann zeta function.
\end{enumerate}
The results above thus cover a very general class of jump distributions.
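As a consistency check of case (i), consider isotropic Gaussian jumps with unit per-coordinate variance: there $\sigma_\theta = 1$, ${\mathcal K}_\theta = 3$, and $\gamma_\theta = \zeta(1/2)/\sqrt{2\pi}$ (the Gaussian value given below in Eq.~(\ref{eq:gamma_Gauss})), while the Spitzer-Widom formula yields the exact mean perimeter $\sqrt{2\pi}\,\sum_{k=1}^n k^{-1/2}$. The sketch below (an illustrative check, assuming \texttt{numpy}) confirms that the three-term asymptotics (\ref{eq:Lmean_asympt}) is accurate already at small $n$.

```python
import numpy as np

# Case (i) for isotropic Gaussian jumps with unit per-coordinate variance:
# sigma_theta = 1, gamma_theta = zeta(1/2)/sqrt(2*pi), K_theta = 3, hence
#   C0 = 2*sqrt(2*pi),  C1 = sqrt(2*pi)*zeta(1/2),  C2 = sqrt(2*pi)/2,
# while the Spitzer-Widom sum gives <L_n> = sqrt(2*pi) * sum_k k^(-1/2) exactly.
zeta_half = -1.4603545088095868  # zeta(1/2)
C0 = 2 * np.sqrt(2 * np.pi)
C1 = np.sqrt(2 * np.pi) * zeta_half
C2 = np.sqrt(2 * np.pi) / 2
for n in (10, 50, 200):
    exact = np.sqrt(2 * np.pi) * np.sum(1 / np.sqrt(np.arange(1, n + 1)))
    asympt = C0 * np.sqrt(n) + C1 + C2 / np.sqrt(n)
    print(n, exact, asympt, exact - asympt)
# Already at n = 10 the three-term asymptotics is accurate to a few times 1e-3.
```

This agreement mirrors the Euler-Maclaurin expansion $\sum_{k\le n} k^{-1/2} = 2\sqrt{n} + \zeta(1/2) + \tfrac{1}{2}n^{-1/2} + O(n^{-3/2})$, whose three terms correspond exactly to $C_0$, $C_1$ and $C_2$.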
In turn, our method of computation of the mean area requires two
additional strong assumptions: (a) the independence of the jumps along
$x$ and $y$ coordinates, i.e., $p(x,y)= p(x) p(y)$ and (b) the
isotropy of the jump PDF, i.e., $p(x,y)$ should depend only on the
distance $r=\sqrt{x^2+y^2}$ but not on the direction of the jump.
According to the Porter-Rosenzweig theorem~\cite{Porter60}, only the
Gaussian jump distribution with identical variance $\sigma^2$ along
$x$ and $y$ directions, i.e., $p(x,y)=
\frac{1}{2\pi \sigma^2}\,\exp[-(x^2+y^2)/{2\sigma^2}]$, satisfies
both properties (a) and (b). Our result for the mean area is thus
only valid for this Gaussian distribution:
\begin{equation} \label{eq:Amean_asympt}
\sigma^{-2} \langle A_n \rangle = \frac{\pi}{2} n + \gamma \sqrt{8\pi} \, n^{\frac12} + \pi ({\mathcal K}/12 + \gamma^2) + o(1) ,
\end{equation}
with ${\mathcal K} = 3$ and
\begin{equation} \label{eq:gamma_Gauss}
\gamma = \frac{\zeta(1/2)}{\sqrt{2\pi}} = - 0.58259 \ldots
\end{equation}
Recently, an exact formula for the mean area of the convex hull of a
Gaussian random walk was derived \cite{Kabluchko16b}. In the
isotropic case, the formula reads
\begin{equation} \label{eq:sum_Gauss}
\sigma^{-2} \langle A_n \rangle = \frac12 \sum\limits_{i=1}^n \sum\limits_{j=1}^{n-i} \frac{1}{\sqrt{ij}} \,.
\end{equation}
While the result in Eq. (\ref{eq:sum_Gauss}) is very useful for finite
$n$, deriving the large $n$ asymptotics of this double sum (including
up to two subleading terms as in Eq. (\ref{eq:Amean_asympt})) seems
somewhat complicated. Our method, in contrast, gives a more direct
access to the asymptotics. Moreover, one can check numerically that
our asymptotic formula (\ref{eq:Amean_asympt}) agrees accurately with
the exact expression (\ref{eq:sum_Gauss}) even for moderately large
$n$.
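That numerical comparison takes only a few lines. The sketch below (an illustrative check, assuming \texttt{numpy}) evaluates the double sum (\ref{eq:sum_Gauss}) in $O(n)$ operations via a cumulative sum and compares it with the asymptotic formula (\ref{eq:Amean_asympt}), using ${\mathcal K}=3$ and $\gamma$ from Eq.~(\ref{eq:gamma_Gauss}).

```python
import numpy as np

gamma = -1.4603545088095868 / np.sqrt(2 * np.pi)  # zeta(1/2)/sqrt(2*pi)

def A_exact(n):
    # 0.5 * sum_{i=1}^{n} sum_{j=1}^{n-i} (i*j)^(-1/2), evaluated in O(n)
    inv_sqrt = 1.0 / np.sqrt(np.arange(1, n + 1))
    c = np.concatenate([[0.0], np.cumsum(inv_sqrt)])  # c[m] = sum_{j<=m} j^(-1/2)
    return 0.5 * np.sum(inv_sqrt * c[n - np.arange(1, n + 1)])

def A_asympt(n):
    K = 3.0  # value of K_theta for Gaussian jumps
    return np.pi / 2 * n + gamma * np.sqrt(8 * np.pi * n) + np.pi * (K / 12 + gamma**2)

for n in (50, 200, 1000):
    print(n, A_exact(n), A_asympt(n), A_exact(n) - A_asympt(n))
# The difference decays with n, consistent with the o(1) error term.
```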
The leading term of Eq. (\ref{eq:Amean_asympt}) was shown to be valid
for a generic random walk with increments of a finite variance (see
Proposition 3.3 in \cite{Wade15}). Moreover, our numerical
simulations (see Sec. \ref{sec:exp_radial} and
Fig. \ref{fig:exp_radial}) suggest that the obtained formula
(\ref{eq:Amean_asympt}) (including the subleading terms) may be
applicable for some other isotropic processes. In other words, the
technical assumption about the independence of the jumps along $x$ and
$y$ might be relaxed in the future. This statement is conjectural, as
it is based solely on numerical simulations for particular jump
distributions. In turn, the isotropy assumption is important, as
illustrated by numerical simulations.
The large $n$ asymptotic relations (\ref{eq:Lmean_asympt},
\ref{eq:Ln_Levy1}, \ref{eq:Ln_Levy2}, \ref{eq:Amean_asympt}) are the
main results of the paper. Setting $t = n\tau$ and $D =
2\sigma^2/(4\tau)$ with a time step $\tau$, one recovers from
Eqs. (\ref{eq:Lmean_asympt}, \ref{eq:Amean_asympt}) the same leading
terms as in Eq. (\ref{eq:perim_BM}, \ref{eq:area_BM}) for Brownian
motion (note that we write $2\sigma^2$ in $D$ because $\sigma^2$ is
the variance of jumps along one direction). It is thus not surprising
that the leading term in Eq. (\ref{eq:Lmean_asympt}) is universal
because its derivation is valid for any planar random walk with a
symmetric and continuous jump distribution and having a finite
variance $\sigma^2$.
Thinking of Brownian motion as a limit of random walks, the subleading
terms in Eq. (\ref{eq:Lmean_asympt}) can be understood as ``finite
size'' corrections. The first subleading term is valid under the same
assumptions as the leading term, although the coefficient
$\gamma_\theta$ depends on the jump distribution (see examples in
Sec. \ref{sec:simu}). In turn, the second subleading term depends on
the kurtosis ${\mathcal K}_\theta$ and thus requires an additional assumption
that ${\mathcal K}_\theta$ is finite.
\section{Main steps leading to the derivation of results}
\label{sec:theory}
\subsection{Reduction to a one-dimensional problem}
\begin{figure}
\begin{center}
\includegraphics[width=120mm]{figure2.pdf}
\end{center}
\caption{
The support function $M(\theta)$ for a closed convex curve (left) and
for a set of points $\{(x_0,y_0),(x_1,y_1),\ldots,(x_n,y_n)\}$
(right). $M(\theta)$ is the distance between the two open circles.}
\label{fig:domain}
\end{figure}
We start with Cauchy's formula for the perimeter $L$ and the area $A$
of an arbitrary convex domain ${\cal C}$ with a reasonably smooth
boundary $\gamma_{\cal C}$~\cite{Cauchy1832,Santalo}. Let the
boundary $\gamma_{\cal C}$ be parameterized as $(X(s),Y(s))$ with a
curvilinear coordinate $s$ ranging from $0$ to $1$. Setting the
origin of coordinates inside the domain, one defines the support
function $M(\theta)$ as the distance from the origin to the closest
straight line that does not cross the domain and is perpendicular to
the vector from the origin in direction $\theta$
(Fig. \ref{fig:domain}). In other words,
\begin{equation}
M(\theta) = \max\limits_{0\leq s \leq 1} \bigl\{ X(s) \cos \theta + Y(s) \sin \theta\bigr\} .
\end{equation}
Cauchy showed that~\cite{Cauchy1832}
\begin{eqnarray}
L &=& \int\limits_0^{2\pi } d\theta \, M(\theta) , \\
A &=& \frac12\int\limits_0^{2\pi } d\theta \, \bigl(M^2(\theta) - [M'(\theta)]^2\bigr) .
\end{eqnarray}
For a simple derivation of this formula see Ref.~\cite{Majumdar2010a}.
A straightforward calculation of $M(\theta)$ for a convex hull over a
set of points may seem to be hopeless, as one would need first to
construct the convex hull by identifying and ordering its vertices
among the given set of points and then to compute $M(\theta)$. The
key idea is that $M(\theta)$ can be found directly from the vertices
of the trajectory as \cite{Spitzer1961,Randon09}
\begin{equation}
M(\theta) = \max\limits_{0\leq k \leq n} \bigl\{ x_k \cos\theta + y_k \sin \theta \bigr\} .
\end{equation}
Moreover, given that the maximum for a fixed $\theta$ is realized by
a certain vertex (with index $k^*$ which discretely changes with
$\theta$), one also obtains the derivative:
\begin{equation} \label{eq:Mprime}
M'(\theta) = - x_{k^*} \sin\theta + y_{k^*} \cos \theta .
\end{equation}
When the points $(x_k,y_k)$ are random, the perimeter and the area of
the convex hull are random variables. We focus on the mean values
$\langle L_n\rangle$ and $\langle A_n\rangle$:
\begin{eqnarray} \label{eq:Lmean}
\langle L_n \rangle & = & \int\limits_0^{2\pi } d\theta \, \langle M(\theta)\rangle , \\
\label{eq:Amean}
\langle A_n \rangle & = & \frac12 \int\limits_0^{2\pi } d\theta \, \bigl(\langle M^2(\theta)\rangle - \langle [M'(\theta)]^2 \rangle\bigr),
\end{eqnarray}
i.e., the computation is reduced to the first two moments of
$M(\theta)$ and to the mean $\langle [M'(\theta)]^2 \rangle$. The
important observation is that, for a fixed direction $\theta$, one
needs to characterize the maximum of the projection of points
$(x_k,y_k)$ onto that direction
\begin{equation}
M(\theta) = \max\limits_{0\leq k \leq n} \{ z_k^\theta\}, \qquad
z_k^\theta = x_k \cos \theta + y_k \sin \theta .
\end{equation}
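This reduction is easy to see in action. The sketch below (an illustrative check, assuming \texttt{numpy}) evaluates $M(\theta)$ and $M'(\theta)$ on an angular grid for a single sampled Gaussian walk, approximates Cauchy's integrals for $L$ and $A$ by Riemann sums, and compares the outcome with the perimeter and area obtained by constructing the hull explicitly (monotone chain plus the shoelace formula).

```python
import numpy as np

rng = np.random.default_rng(1)
walk = np.vstack([[0.0, 0.0], np.cumsum(rng.normal(size=(500, 2)), axis=0)])
x, y = walk[:, 0], walk[:, 1]

# Support function M(theta) and its derivative, directly from the raw points:
theta = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
proj = np.outer(np.cos(theta), x) + np.outer(np.sin(theta), y)
kstar = np.argmax(proj, axis=1)                  # vertex index realizing the max
M = proj[np.arange(len(theta)), kstar]
Mp = -x[kstar] * np.sin(theta) + y[kstar] * np.cos(theta)

dtheta = theta[1] - theta[0]
L_cauchy = np.sum(M) * dtheta                    # L = int M(theta) dtheta
A_cauchy = 0.5 * np.sum(M**2 - Mp**2) * dtheta   # A = (1/2) int (M^2 - M'^2) dtheta

# Direct check: build the hull (Andrew's monotone chain) and measure it.
def convex_hull(points):
    pts = sorted(map(tuple, points))
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    return np.array(build(pts)[:-1] + build(reversed(pts))[:-1])

hull = convex_hull(walk)
d = np.diff(np.vstack([hull, hull[:1]]), axis=0)
L_hull = np.sum(np.hypot(d[:, 0], d[:, 1]))
hx, hy = hull[:, 0], hull[:, 1]
A_hull = 0.5 * abs(np.sum(hx * np.roll(hy, -1) - hy * np.roll(hx, -1)))

print(L_cauchy / L_hull, A_cauchy / A_hull)  # both ratios are ~1
```

The small residual discrepancy in the area comes from the jump discontinuities of $M'(\theta)$ at the angles where the maximizing vertex $k^*$ switches; refining the angular grid makes it vanish.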
The projection of a random walk is also a random walk. In fact, we
can write according to Eq. (\ref{eq:RW_def})
\begin{equation} \label{eq:zk}
z_0^\theta = 0 , \qquad z_k^\theta = z_{k-1}^\theta + \xi_k^\theta \quad (k=1,2,\ldots,n),
\end{equation}
with
\begin{equation} \label{eq:xik}
\xi_{k}^\theta = \eta_k^x \cos \theta + \eta_k^y \sin \theta.
\end{equation}
The probability density of $\xi_k^\theta$, $\rho_\theta(z)$, is fully
determined by that of the jump $(\eta_k^x,\eta_k^y)$. In particular,
its Fourier transform $\hat{\rho}_\theta(k)$ is related to $p(x,y)$ by
Eq. (\ref{eq:hatrho_xi_general}). The symmetry (\ref{eq:p_symm})
implies that $\hat{\rho}_\theta(-k) = \hat{\rho}_\theta(k)$ and thus
the density $\rho_\theta(z)$ is symmetric.
Having discussed the general jump distributions, let us mention two
particular cases that will be important later.
\vskip 0.3cm
\noindent (a) in the case of
independent jumps along $x$ and $y$ coordinates, one has $p(x,y) =
p_x(x) p_y(y)$, and thus
\begin{equation} \label{eq:hatrho_xi}
\hat{\rho}_\theta(k) = \hat{p}_x(k\cos\theta) \, \hat{p}_y(k\sin\theta) ,
\end{equation}
where $\hat{p}_x$ and $\hat{p}_y$ are the Fourier transforms of
$p_x(x)$ and $p_y(y)$, respectively.
\vskip 0.3cm
\noindent (b) For isotropic jumps,
$p(x,y)$ depends only on the radial coordinate, $p(x,y)dxdy = p_r(r)
dr \, d\phi/(2\pi)$, where $p_r(r)$ is the radial density (that
includes the factor $r$ from the Jacobian). From
Eq. (\ref{eq:hatrho_xi_general}), one gets
\begin{equation} \label{eq:hatrho_xi_iso}
\hat{\rho}(k) = \int\limits_0^\infty dr \, p_r(r) \, J_0(|k|r),
\end{equation}
in which the integration over the angular coordinate $\phi$ eliminated
the dependence on $\theta$ and resulted in the Bessel function of the
first kind, $J_0(|k|r)$.
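As a quick sanity check (illustrative, assuming \texttt{numpy}), one can verify Eq.~(\ref{eq:hatrho_xi_iso}) for isotropic Gaussian jumps, whose radial density is $p_r(r) = (r/\sigma^2)\,e^{-r^2/(2\sigma^2)}$ and whose Fourier transform must come out as $e^{-\sigma^2 k^2/2}$; the Bessel function is evaluated through its integral representation so that the check stays dependency-free.

```python
import numpy as np

def trap(y, h):
    # trapezoidal rule on a uniform grid of spacing h (along the last axis)
    return h * (y.sum(axis=-1) - 0.5 * (y[..., 0] + y[..., -1]))

phi = np.linspace(0, np.pi, 2001)
def J0(x):
    # integral representation J0(x) = (1/pi) * int_0^pi cos(x*sin(phi)) dphi
    return trap(np.cos(np.outer(x, np.sin(phi))), phi[1] - phi[0]) / np.pi

s = 1.0                                          # per-coordinate jump std
r = np.linspace(0, 10, 2001)
p_r = (r / s**2) * np.exp(-r**2 / (2 * s**2))    # radial density (Jacobian included)
for k in (0.5, 1.0, 2.0):
    rho_hat = trap(p_r * J0(k * r), r[1] - r[0])
    print(k, rho_hat, np.exp(-(s * k)**2 / 2))   # the two columns agree
```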
\subsection{Formal solution of the one-dimensional problem}
The formal exact solution of the one-dimensional problem can be
obtained via the Pollaczek-Spitzer formula
\cite{Pollaczek52,Spitzer56}. This formula characterizes the maximum
of partial sums of independent identically distributed random
variables $\xi_k$ with a symmetric and continuous density $\rho(z)$:
\begin{equation}
M_n = \max\{ 0, \xi_1, \xi_1 + \xi_2, \ldots, \xi_1 + \xi_2 + \ldots + \xi_n\}
\end{equation}
(in this subsection, we temporarily drop the subscript and superscript
$\theta$ from all the variables; it will be restored at the end).
Considering $\xi_k$ as jumps of a random walker, $z_{k} = z_{k-1} +
\xi_k$ (with $z_0 = 0$), one can also write
\begin{equation}
M_n = \max\{ z_0, z_1, z_2, \ldots, z_n\}.
\end{equation}
Pollaczek and later Spitzer showed that the cumulative distribution
$Q_n(z) = \mathbb P\{ M_n \leq z\}$ of $M_n$ satisfies the following identity
for $0 \leq s \leq 1$ and $\lambda
\geq 0$
\begin{equation} \label{eq:PS_identity}
\sum\limits_{n=0}^\infty s^n \, \langle e^{-\lambda M_n} \rangle =
\sum\limits_{n=0}^\infty s^n \int\limits_0^\infty dz \, e^{-\lambda z} Q'_n(z) = \frac{\phi(s,\lambda)}{\sqrt{1-s}} \, ,
\end{equation}
with
\begin{equation} \label{eq:phi}
\phi(s,\lambda) = \exp\left( - \frac{\lambda}{\pi} \int\limits_0^\infty dk \, \frac{\ln(1 - s \hat{\rho}(k))}{\lambda^2 + k^2} \right) ,
\end{equation}
where $Q'_n(z) = dQ_n(z)/dz$ is the probability density of the maximum
\cite{Pollaczek52,Spitzer56}. In principle, all the moments of $M_n$
can be obtained from the formula in Eq. (\ref{eq:PS_identity}).
However, in practice, deriving explicitly the moments of $M_n$ by
inverting this formula is highly nontrivial~\cite{Majumdar09}. For
example, the expected maximum of a discrete-time random walk $\langle
M_n\rangle$ appears in a number of different problems, from packing
algorithms in computer science~\cite{Coffman98}, all the way to the
survival probability of a single or multiple walkers in presence of a
trap~\cite{Comtet05,MCZ2006,ZMC2007,ZMC2009,Franke2012,MMS2017}. It
has also appeared in the context of the order, gap and record
statistics of random
walks~\cite{SM2012,SM_review,MMS2013,WMS2012,GMS2017} and $\langle
M_n\rangle$ has been analyzed for large $n$ (for the leading and the
next subleading term) in detail using the Pollaczek-Spitzer formula in
Eq. (\ref{eq:PS_identity}). Here, in addition to calculating the
first three terms in the asymptotic expansion of $\langle M_n\rangle$
for $n\gg 1$, we also calculate the large $n$ behavior of the second
moment $\langle M_n^2 \rangle$, that we need for the computation of
the mean area of the convex hull.
In fact, the Pollaczek-Spitzer formula also determines the generating
functions for all moments of $M_n$:
\begin{equation} \label{eq:hm}
h_m(s) = \sum\limits_{n=0}^\infty s^n \, \langle M_n^m \rangle =
(-1)^m \lim\limits_{\lambda\to 0} \frac{\partial^m}{\partial \lambda^m} \frac{\phi(s,\lambda)}{\sqrt{1-s}} \, .
\end{equation}
By considering a general asymptotic expansion
\begin{equation}
\hat{\rho}(k) \simeq 1 - |ak|^\mu + o(|k|^\mu) \qquad (k\to 0)
\end{equation}
with an exponent $0 < \mu \leq 2$ and a scale $a > 0$, we derive in
\ref{sec:Agenerating} the exact expressions
\begin{equation} \label{eq:h1}
h_1(s) = \frac{1}{\pi(1-s)} \int\limits_0^\infty \frac{dk}{k^2} \ln \left(\frac{1- s \hat{\rho}(k)}{1-s}\right) \qquad (0 \leq s < 1),
\end{equation}
which is valid for any $1<\mu \le 2$ (note that $\langle M_n\rangle =
\infty$ for $0 < \mu \leq 1$), and
\begin{equation} \label{eq:h2}
h_2(s) = (1-s) [h_1(s)]^2 + \frac{a^2 s}{(1-s)^2} \qquad (0 \leq s < 1),
\end{equation}
which is valid for $\mu = 2$ (note that $\langle M_n^2\rangle =
\infty$ for $0 < \mu < 2$). The exact relations (\ref{eq:h1},
\ref{eq:h2}) are new results which allow one to study the first two
moments of the maximum $M_n$. In the next subsection, we will analyze
the expansion of Eqs. (\ref{eq:h1}, \ref{eq:h2}) as $s\to 1$ in order
to determine the asymptotic behavior of the moments $\langle
M_n\rangle$ and $\langle M_n^2\rangle$ as $n\to \infty$. We consider
separately jumps with a finite variance, and L\'evy flights.
\subsection{Mean perimeter and mean area of the convex hull}
\subsubsection{Mean perimeter for jumps with a finite variance.}
Given a generic continuous jump distribution $p(x,y)$ satisfying the
property in Eq. (\ref{eq:p_symm}), we determine $\hat{\rho}_\theta(k)$
using Eq. (\ref{eq:hatrho_xi_general}). Furthermore, by examining the
small $k$ behavior of $\hat{\rho}_\theta(k)$, we determine the
$\theta$-dependent variance $\sigma_\theta^2$ and the
$\theta$-dependent kurtosis ${\mathcal K}_\theta$, using respectively Eqs.
(\ref{eq:sigma_theta}) and (\ref{eq:a4}). In addition, knowing
$\hat{\rho}_\theta(k)$ from Eq. (\ref{eq:hatrho_xi_general}), we also
determine $\gamma_\theta$ in Eq. (\ref{eq:gamma}). Equipped with
these three quantities $\sigma_\theta$, ${\mathcal K}_\theta$ and
$\gamma_\theta$, we show in \ref{sec:Aderivation} that the leading
large $n$ terms of the first two moments of $M_n$ are given by
\begin{eqnarray} \label{eq:Mn}
\frac{\langle M_n \rangle}{\sigma_\theta} &\simeq&
\frac{\sqrt{2}}{\sqrt{\pi}}\, n^{\frac12} + \gamma_\theta
+ \frac{{\mathcal K}_\theta + 3}{12\sqrt{2\pi}}\, n^{-\frac12} + o(n^{-\frac12}), \\
\label{eq:Mn2}
\frac{\langle M_n^2\rangle}{\sigma_\theta^2} &\simeq& n +
\frac{\sqrt{8}\, \gamma_\theta}{\sqrt{\pi}}\, n^{\frac 12} + ({\mathcal K}_\theta/12 +
\gamma_\theta^2) + o(1) \, .
\end{eqnarray}
For the mean perimeter of the convex hull, we will only need the first
moment in Eq. (\ref{eq:Mn}). Indeed, using Eq. (\ref{eq:Lmean}), the
integration of the expansion (\ref{eq:Mn}) over $\theta$ from $0$ to
$2\pi$ yields the announced result (\ref{eq:Lmean_asympt}) for the
mean perimeter of the convex hull. The result for the second moment
in Eq. (\ref{eq:Mn2}) will be needed later to determine the mean area
$\langle A_n\rangle$.
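As a numerical sanity check of the expansion (\ref{eq:Mn}) in the Gaussian case, one can use the classical consequence of Spitzer's identity, $\langle M_n\rangle = \sum_{k=1}^{n}\mathbb{E}[S_k^{+}]/k$ (not derived here), which for standard Gaussian jumps gives $\mathbb{E}[S_k^{+}]=\sqrt{k/(2\pi)}$. The short Python sketch below (illustrative only) compares this exact value with the three-term asymptotics, using the Gaussian values $\sigma=1$, ${\mathcal K}=3$ and $\gamma = C_1/(2\pi\sigma)$ with the numerical value of $C_1$ quoted later in the text:

```python
import math

# Exact mean maximum for i.i.d. standard Gaussian jumps, via the classical
# consequence of Spitzer's identity <M_n> = sum_{k=1}^n E[S_k^+]/k,
# with E[S_k^+] = sqrt(k/(2*pi)) since S_k ~ N(0, k).
def mean_max_exact(n):
    return sum(math.sqrt(k / (2 * math.pi)) / k for k in range(1, n + 1))

# Three-term asymptotics of Eq. (Mn) with sigma = 1, kurtosis K = 3 and
# gamma = -3.6605/(2*pi), the Gaussian value quoted in the text.
def mean_max_asympt(n, gamma=-3.6605 / (2 * math.pi)):
    return (math.sqrt(2 * n / math.pi) + gamma
            + (3 + 3) / (12 * math.sqrt(2 * math.pi)) / math.sqrt(n))

for n in (10, 100, 1000):
    print(n, round(mean_max_exact(n), 5), round(mean_max_asympt(n), 5))
```

Already at $n=10$ the three-term expansion reproduces the exact mean maximum to a few parts in $10^4$.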
\subsubsection{Mean perimeter for L\'evy flights.}
\label{sec:Levy}
When the variance of jumps is infinite, one gets the small-$k$ expansion
in Eq. (\ref{eq:hatrho_mu}), with the scaling exponent $0 < \mu < 2$.
When $0 < \mu \leq 1$, the mean perimeter of the convex hull is
infinite. Throughout this section, we focus on the case $1 < \mu <
2$, in which the first moment of jumps is finite (and zero due to the
assumption of a symmetric distribution), whereas the variance is
infinite. In this case, the leading behavior of the mean maximum of
partial sums is universal \cite{Comtet05}
\begin{equation} \label{eq:Mn_mu}
\langle M_n \rangle \simeq a_\theta \frac{ \mu \Gamma(1- 1/\mu)}{\pi} \, n^{1/\mu} + o(n^{1/\mu}) \qquad (n\gg 1).
\end{equation}
However, the subleading term depends on finer details of the jump
distribution. In order to determine the subleading term, we consider
the expansion (\ref{eq:hatrho_nu}) with the subleading term $b_\theta
|k|^\nu$ such that $\nu > \mu$. We distinguish two cases: $\mu < \nu
< \mu+1$ and $\nu > \mu + 1$. In \ref{sec:ALevy}, we derive the
following asymptotics results:
\begin{equation} \label{eq:Mn_Levy1}
\fl
\langle M_n\rangle \simeq a_\theta \, \frac{\mu \Gamma(1- 1/\mu)}{\pi} \, n^{1/\mu}
- a_\theta^{1-\nu} \, b_\theta \, \frac{\Gamma((\nu-1)/\mu)}{\pi (\mu+1-\nu)} \, n^{1-(\nu-1)/\mu} + o(n^{1-(\nu-1)/\mu}) \quad (n\gg 1)
\end{equation}
for $\mu < \nu < \mu+1$, and
\begin{equation} \label{eq:Mn_Levy2}
\langle M_n\rangle \simeq a_\theta \, \frac{\mu \Gamma(1- 1/\mu)}{\pi} \, n^{1/\mu} + \gamma_\theta + o(1) \quad (n\gg 1)
\end{equation}
for $\nu > \mu + 1$, with $\gamma_\theta$ given by
Eq. (\ref{eq:gamma_Levy2}). The asymptotic relation
(\ref{eq:Mn_Levy2}) was first derived in \cite{Comtet05} for the
particular case $\nu = 2\mu$. One can see that for $\mu < \nu <
\mu+1$, the subleading term of $\langle M_n\rangle$ grows with $n$,
whereas for $\nu > \mu + 1$ the subleading term is constant.
Higher-order corrections can be derived as well.
Finally, using the Cauchy formula (\ref{eq:Lmean}), the integration of
Eqs. (\ref{eq:Mn_Levy1}, \ref{eq:Mn_Levy2}) over $\theta$ from $0$ to
$2\pi$ yields Eqs. (\ref{eq:Ln_Levy1}, \ref{eq:Ln_Levy2}) for the mean
perimeter of the convex hull, announced in Section \ref{sec:outline}.
\subsubsection{Mean area for isotropic Gaussian jumps.}
According to Eq. (\ref{eq:Amean}), the expansion (\ref{eq:Mn2})
determines the first contribution to the mean area. This contribution
was calculated for an arbitrary symmetric continuous jump distribution
with a finite variance. The second contribution to the mean area
comes from $\langle [M'(\theta)]^2\rangle$ that has to be computed
separately. We recall that $M'(\theta)$ is given by
Eq. (\ref{eq:Mprime}). Our computation of this contribution relies on
two additional simplifying assumptions: (a) the jumps along $x$ and
$y$ coordinates are independent and (b) the jump process is isotropic,
i.e., the distribution of jumps does not depend on their direction.
In this case, using the isotropy condition (b), we get
\begin{eqnarray}
\langle [M(\theta)]^2\rangle &=& \langle [M(0)]^2\rangle = \langle x_{k^*}^2 \rangle , \\
\langle [M'(\theta)]^2\rangle &=& \langle [M'(0)]^2\rangle = \langle y_{k^*}^2 \rangle .
\end{eqnarray}
The disentanglement of $\langle [M(\theta)]^2\rangle$ and $\langle
[M'(\theta)]^2\rangle$ allows one to compute the latter one by using
the following argument. We recall that $k^*$ is the index of the
maximal position among $x_k$, i.e., its statistics is fully determined
by the jumps along $x$ coordinate. Once this statistics is known, the
mean $\langle [M'(0)]^2\rangle$ can be found by taking the conditional
expectation of $y^2_{k^*}$ at any fixed value $k^*$ and then the
expectation with respect to the distribution of $k^*$. Now, once we
condition $k^*$, i.e., the time step at which the $x_k$'s achieve
their maximum, the $y_k$ process will, in general, be affected by this
conditioning. However, if $x_k$ and $y_k$ are independent (which
happens when the jump process satisfies property (a) above), we get
\begin{equation}
\langle [M'(0)]^2\rangle = \sigma^2 \, \langle k^* \rangle ,
\end{equation}
where $\langle [\eta_k^y]^2\rangle = \sigma^2$. It remains to find
$\langle k^* \rangle$. For symmetric and continuous jump
distributions, it follows from symmetry that $\langle k^* \rangle=n/2$
independent of the details of the jump PDF. This can also be deduced
formally by noting that the time step $k^*$, at which the
maximum of $x_k$ is achieved, has a universal distribution,
independent of the jump distribution (given that the latter is
continuous and symmetric)~\cite{Majumdar09}:
\begin{equation}
P_n(k^* = k) = \binom{2k}{k} \binom{2(n-k)}{n-k} 2^{-2n} .
\end{equation}
This is the direct consequence of the Sparre Andersen theorem. From
this distribution, one easily computes the mean value as
\begin{equation}
\langle k^* \rangle = \frac{n}{2} \, ,
\end{equation}
and thus
\begin{equation} \label{eq:Mprime_average}
\langle [M'(\theta)]^2\rangle = \sigma^2 \, \frac{n}{2} \,.
\end{equation}
This yields Eq. (\ref{eq:Amean_asympt}) for the mean area of the
convex hull for an isotropic jump process with independent jumps along
$x$ and $y$ coordinates. As discussed in Sec. \ref{sec:outline}, the
only process satisfying two requirements (a) and (b) is the isotropic
Gaussian process. Our formula (\ref{eq:Amean_asympt}) therefore applies
only in this case. In the general case, one would need
to compute the joint distribution of the maximum position $k^*$ and
both values $x_{k^*}$ and $y_{k^*}$ which is more complicated and
remains an open problem.
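The universal distribution of $k^*$ and the value $\langle k^*\rangle = n/2$ can be verified directly from the discrete arcsine law above; the following short check (purely illustrative) confirms both the normalization and the mean:

```python
from math import comb

# Discrete arcsine law for the index k* of the maximum of a symmetric
# continuous random walk (Sparre Andersen):
# P_n(k* = k) = C(2k, k) * C(2(n-k), n-k) * 4^(-n).
def arcsine_pmf(n):
    return [comb(2 * k, k) * comb(2 * (n - k), n - k) / 4 ** n
            for k in range(n + 1)]

n = 25
pmf = arcsine_pmf(n)
total = sum(pmf)                                  # should be 1
mean_kstar = sum(k * p for k, p in enumerate(pmf))  # should be n/2
print(total, mean_kstar)
```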
\section{Examples and simulations}
\label{sec:simu}
In this section, we illustrate the above general results on several
examples of symmetric planar random walks. We derive the explicit
values of the relevant parameters that determine the mean perimeter
and the mean area. We also investigate the effect of anisotropy of
the jump distribution on the convex hull properties. Finally, we
compare our theoretical predictions to the results of Monte Carlo
simulations. For this purpose, we generate $10^5$ planar random
walks, compute the convex hull for each generated trajectory with
$n+1$ points by using the Matlab function \texttt{convhull}, and determine
its perimeter and area. These simulations yield the representative
statistics of perimeters and areas from which the mean values are
computed.
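A minimal Python analogue of this Monte Carlo procedure (using SciPy's \texttt{ConvexHull} in place of Matlab's \texttt{convhull}; note that in two dimensions SciPy reports the hull perimeter as \texttt{area} and the enclosed area as \texttt{volume}) reads as follows, with the sample size reduced for illustration:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_stats(n_steps, n_walks, rng):
    """Mean perimeter and mean area of the convex hull of n-step walks."""
    perims, areas = [], []
    for _ in range(n_walks):
        steps = rng.standard_normal((n_steps, 2))            # isotropic Gaussian jumps
        pts = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])  # n+1 positions, z_0 = 0
        hull = ConvexHull(pts)
        perims.append(hull.area)    # in 2D, ConvexHull.area is the perimeter
        areas.append(hull.volume)   # in 2D, ConvexHull.volume is the enclosed area
    return float(np.mean(perims)), float(np.mean(areas))

rng = np.random.default_rng(1)
n = 100
L_mean, A_mean = hull_stats(n, 2000, rng)
# Compare with sqrt(8*pi) ~ 5.013 and pi/2 ~ 1.571 plus finite-n corrections:
print(L_mean / np.sqrt(n), A_mean / n)
```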
\subsection{Gaussian jumps}
We first consider the basic example of Gaussian jumps which are
independent along $x$ and $y$ coordinates and characterized by
variances $\sigma_x^2$ and $\sigma_y^2$. Substituting the jump
probability density,
\begin{equation}
p(x,y) = \frac{\exp\bigl(-\frac{x^2}{2\sigma^2_x}\bigr)}{\sqrt{2\pi}\, \sigma_x} \,
\frac{\exp\bigl(-\frac{y^2}{2\sigma^2_y}\bigr)}{\sqrt{2\pi} \, \sigma_y} \, ,
\end{equation}
into Eq. (\ref{eq:hatrho_xi_general}) yields $\hat{\rho}_\theta(k) =
e^{-k^2 \sigma^2_\theta/2}$, with $\sigma_\theta^2 = \sigma_x^2
\cos^2\theta + \sigma_y^2 \sin^2\theta$. One can see that the
anisotropy only affects the variance $\sigma_\theta^2$, whereas the
two other relevant parameters, $\gamma$ and ${\mathcal K}$, which are rescaled
by variance, do not depend on $\theta$. One finds ${\mathcal K} = 3$, whereas
the integral in Eq. (\ref{eq:gamma}) was computed exactly in
\cite{Comtet05} and provided in Eq. (\ref{eq:gamma_Gauss}).
Assuming (without loss of generality) that $\sigma_x \geq \sigma_y$,
we set
\begin{equation} \label{eq:sigma_Gauss}
\sigma \equiv \frac{1}{2\pi} \int\limits_0^{2\pi} d\theta \, \sigma_\theta = \frac{2}{\pi} E\biggl(\sqrt{1 - (\sigma_y/\sigma_x)^2}\biggr) \, \sigma_x ,
\end{equation}
where $E(k)$ is the complete elliptic integral of the second kind (for
the isotropic case, $\sigma_x = \sigma_y = \sigma$). With this
notation, we get the expansion coefficients
\begin{equation} \label{eq:Cj_Gauss}
\sigma^{-1} C_0 = \sqrt{8\pi} , \qquad \sigma^{-1} C_1 = 2\pi \gamma = -3.6605 \ldots , \qquad
\sigma^{-1} C_2 = \frac{\sqrt{\pi}}{\sqrt{2}} \,.
\end{equation}
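The effective width $\sigma$ of Eq. (\ref{eq:sigma_Gauss}) is easy to evaluate numerically; the sketch below (illustrative, using SciPy's \texttt{ellipe}, whose argument is the parameter $m = k^2$ rather than the modulus $k$) also cross-checks the elliptic-integral form against the direct angular average of $\sigma_\theta$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe

# Effective sigma of Eq. (sigma_Gauss) for independent Gaussian jumps with
# sigma_x >= sigma_y.  SciPy's ellipe(m) is E(k) with m = k^2.
def sigma_eff(sx, sy):
    return (2 / np.pi) * ellipe(1 - (sy / sx) ** 2) * sx

# Cross-check: direct angular average of sigma_theta.
def sigma_direct(sx, sy):
    f = lambda t: np.sqrt(sx**2 * np.cos(t)**2 + sy**2 * np.sin(t)**2)
    return quad(f, 0, 2 * np.pi)[0] / (2 * np.pi)

print(sigma_eff(5, 1), sigma_direct(5, 1))  # the anisotropic example of the text
```

For $\sigma_x = 5$, $\sigma_y = 1$ both routes give the value $3.3439\ldots$ used below, and the isotropic case reduces to $\sigma_x$ as it must.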
Figure \ref{fig:Gauss_iso} shows the rescaled mean perimeter, $\langle
L_n\rangle/n^{1/2}$, and the rescaled mean area, $\langle
A_n\rangle/n$, for an isotropic planar random walk with independent
Gaussian jumps ($\sigma_x = \sigma_y = 1$). The results of Monte
Carlo simulations are in perfect agreement with our theoretical
predictions (\ref{eq:Lmean_asympt}, \ref{eq:Amean_asympt}). One can
see that the subleading terms play an important role. In fact, if one
kept only the leading term and omitted the subleading terms, one would
get the horizontal dotted line. This corresponds to the case of
Brownian motion, in which only the leading term is present, see
Eqs. (\ref{eq:perim_BM}, \ref{eq:area_BM}). The subleading terms
account for the discrete-time character of random walks which is
particularly important for moderate values of $n$. In order to
highlight the role of the third term in the asymptotic expansions, we
also draw by dashed line the asymptotics without this term. As
expected, the third term improves the quality of the theoretical
prediction at small $n$. Note also that the asymptotic relation is
slightly less accurate for the mean area.
\begin{figure}
\begin{center}
\includegraphics[width=75mm]{figure3a.pdf}
\includegraphics[width=75mm]{figure3b.pdf}
\end{center}
\caption{
The rescaled mean perimeter, $\langle L_n\rangle/n^{1/2}$, (left), and
the rescaled mean area, $\langle A_n\rangle/n$, (right), for isotropic
planar random walks with independent Gaussian jumps, with $\sigma_x =
\sigma_y = 1$. The results of Monte Carlo simulations (shown by
circles) are in perfect agreement with our theoretical predictions
(\ref{eq:Lmean_asympt}, \ref{eq:Amean_asympt}) (shown by solid line).
The dotted horizontal line presents the coefficients $\sqrt{8\pi}$ and
$\pi/2$ of the leading term, whereas the dashed line illustrates the
theoretical predictions with only two principal terms. }
\label{fig:Gauss_iso}
\end{figure}
In Fig. \ref{fig:Gauss_ani}, we consider the convex hull for
an anisotropic random walk with independent Gaussian jumps, with
$\sigma_x = 5$ and $\sigma_y = 1$. In this case, one can use the
asymptotic formula (\ref{eq:Lmean_asympt}), in which the expansion
coefficients $C_j$ are given by Eq. (\ref{eq:Cj_Gauss}), with an
effective variance $\sigma^2$ computed in Eq. (\ref{eq:sigma_Gauss}).
In this example, $\sigma = 5 \frac{2}{\pi} E\bigl(\sqrt{1-1/25}\bigr)
= 3.3439\ldots$. For the mean perimeter, one observes a perfect
agreement between the theoretical predictions and Monte Carlo
simulations. In turn, our asymptotic formula (\ref{eq:Amean_asympt})
for the mean area is not applicable in the anisotropic case, as also
confirmed by simulations (not shown).
\begin{figure}
\begin{center}
\includegraphics[width=75mm]{figure4.pdf}
\end{center}
\caption{
The rescaled mean perimeter, $\langle L_n\rangle/n^{1/2}$, for
anisotropic planar random walk with independent Gaussian jumps, with
$\sigma_x = 5$ and $\sigma_y = 1$. The results of Monte Carlo
simulations (shown by circles) are in perfect agreement with our
theoretical prediction (\ref{eq:Lmean_asympt}) (shown by solid line).
The dotted horizontal line presents the coefficient $\sigma
\sqrt{8\pi}$ of the leading term (with $\sigma = 5 \frac{2}{\pi}
E\bigl(\sqrt{1-1/25}\bigr) = 3.3439\ldots$), whereas the dashed line
illustrates the theoretical predictions with only two principal
terms.}
\label{fig:Gauss_ani}
\end{figure}
\subsection{Exponentially distributed radial jumps}
\label{sec:exp_radial}
The next common model has exponentially distributed radial jumps with
uniform angular distribution. This is a particular realization of a
``run-and-tumble'' model of bacterial motion
\cite{Berg72,Berg,Lauga09}. Substituting the radial density $p_r(r) =
\sigma^{-1}\, e^{-r/\sigma}$ into Eq. (\ref{eq:hatrho_xi_iso}) yields
$\hat{\rho}(k) = \bigl(1 + (k\sigma)^2\bigr)^{-1/2}$. One gets ${\mathcal K} =
9$ and
\begin{equation}
\gamma = \frac{1}{\pi \sqrt{2}} \int\limits_0^\infty \frac{dk}{k^2} \ln \biggl(\frac{1 - \bigl(1 + 2k^2\bigr)^{-1/2}}{k^2}\biggr)
= -0.8183\ldots
\end{equation}
from which the expansion coefficients are
\begin{equation}
\sigma^{-1} C_0 = \sqrt{8\pi} , \qquad \sigma^{-1} C_1 = - 5.1416\ldots , \qquad \sigma^{-1} C_2 = \sqrt{2\pi} .
\end{equation}
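The value of $\gamma$ for this model can be reproduced by a direct numerical quadrature of the integral above; in the sketch below (illustrative) the difference $1 - (1+2k^2)^{-1/2}$ is computed via \texttt{expm1}/\texttt{log1p} to avoid cancellation at small $k$:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of gamma for exponentially distributed radial jumps
# (sigma = 1).  The combination 1 - (1 + 2k^2)^{-1/2} is evaluated in a
# cancellation-free way as -expm1(-0.5*log1p(2k^2)).
def integrand(k):
    one_minus_rho = -np.expm1(-0.5 * np.log1p(2.0 * k * k))
    return np.log(one_minus_rho / (k * k)) / (k * k)

val, _ = quad(integrand, 0.0, np.inf, limit=200)
gamma = val / (np.pi * np.sqrt(2.0))
print(gamma, 2.0 * np.pi * gamma)  # gamma and the coefficient C_1/sigma
```

The quadrature reproduces $\gamma = -0.8183\ldots$ and hence $\sigma^{-1}C_1 = 2\pi\gamma = -5.1416\ldots$.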
Figure \ref{fig:exp_radial} illustrates the obtained results. As for
isotropic Gaussian jumps, there is a perfect agreement between theory
and simulations for the mean perimeter. We also present the results
for the mean area. We recall that the theoretical formula
(\ref{eq:Amean_asympt}) was derived under the assumption of
independent jumps along $x$ and $y$ coordinates, which evidently fails
for the exponential radial jumps. In spite of this failure, the
theoretical formula (\ref{eq:Amean_asympt}) is in perfect agreement
with Monte Carlo simulations, except for very small $n$. This
empirical observation suggests that it may be possible to relax this
technical assumption in the future, at least for large $n$.
\begin{figure}
\begin{center}
\includegraphics[width=75mm]{figure5a.pdf}
\includegraphics[width=75mm]{figure5b.pdf}
\end{center}
\caption{
The rescaled mean perimeter, $\langle L_n\rangle/n^{1/2}$, (left), and
the rescaled mean area, $\langle A_n\rangle/n$, (right), for isotropic
planar random walk with exponentially distributed radial jumps, with
$\sigma = 1$. The results of Monte Carlo simulations (shown by
circles) are in perfect agreement with our theoretical predictions
(\ref{eq:Lmean_asympt}, \ref{eq:Amean_asympt}) (shown by solid line).
The dotted horizontal line presents the coefficients $\sqrt{8\pi}$ and
$\pi/2$ of the leading term, whereas the dashed line illustrates the
theoretical predictions with only two principal terms.}
\label{fig:exp_radial}
\end{figure}
\subsection{Independent exponentially distributed jumps}
We also consider the case when the jumps along the $x$ and $y$
coordinates are independent and exponentially distributed, with two
densities $p_x(x) = \frac12 \sigma_x^{-1} e^{-|x|/\sigma_x}$ and
$p_y(y) = \frac12 \sigma_y^{-1} e^{-|y|/\sigma_y}$. The Fourier
transforms are $\hat{p}_x(k) = (1 + (k\sigma_x)^2)^{-1}$ and
$\hat{p}_y(k) = (1 + (k\sigma_y)^2)^{-1}$ so that
\begin{equation}
\hat{\rho}_\theta(k) = \frac{1}{1 + (k\sigma_x)^2 \cos^2 \theta} \, \frac{1}{1 + (k\sigma_y)^2 \sin^2 \theta} \,.
\end{equation}
We get $\sigma^2_\theta = 2(\sigma_x^2 \cos^2\theta + \sigma_y^2
\sin^2\theta)$ and
\begin{equation}
{\mathcal K}_\theta = 24 \frac{\sigma_x^4 \cos^4\theta + \sigma_x^2 \sigma_y^2 \sin^2\theta \cos^2\theta + \sigma_y^4 \sin^4\theta}{\sigma_\theta^4} \,.
\end{equation}
Using the identity (\ref{eq:auxil1}), we compute explicitly
\begin{equation}
\gamma_\theta = \frac{\sqrt{2} \sigma_x \sigma_y |\sin\theta \cos\theta| - \sigma_\theta (\sigma_x |\cos\theta| + \sigma_y |\sin\theta|)}{\sigma_\theta^2} .
\end{equation}
In the particular case $\sigma_x = \sigma_y = \sigma$, one gets
$\sigma^2_\theta = 2\sigma^2$, ${\mathcal K}_\theta = 6(1 - \sin^2\theta
\cos^2\theta)$, and
\begin{equation}
\gamma_\theta = \frac{|\sin\theta \cos\theta| - |\cos\theta| - |\sin\theta|}{\sqrt{2}} \,.
\end{equation}
For this case, we obtain from Eqs. (\ref{eq:CLn})
\begin{equation}
\sigma^{-1} C_0 = 4\sqrt{\pi} , \qquad \sigma^{-1} C_1 = -6 , \qquad \sigma^{-1} C_2 = \frac{11\sqrt{\pi}}{8}\,.
\end{equation}
Figure \ref{fig:exp_indep} illustrates the excellent agreement between
theory and simulations.
\begin{figure}
\begin{center}
\includegraphics[width=75mm]{figure6.pdf}
\end{center}
\caption{
The rescaled mean perimeter, $\langle L_n\rangle/n^{1/2}$, for
anisotropic planar random walk with independent exponentially
distributed jumps, with $\sigma_x = \sigma_y = 1$. The results of
Monte Carlo simulations (shown by circles) are in perfect agreement
with our theoretical prediction (\ref{eq:Lmean_asympt}) (shown by
solid line). The dotted horizontal line presents the coefficient
$4\sqrt{\pi}$ of the leading term, whereas the dashed line illustrates
the theoretical predictions with only two principal terms.}
\label{fig:exp_indep}
\end{figure}
\subsection{Radial L\'evy jumps}
Now, we investigate the example of radial L\'evy flights with infinite
variance (but finite mean) and uniform angle distribution. Among
various heavy-tailed jump distributions (e.g., Pareto distributions),
we choose for our illustrative purposes the distribution
\begin{equation} \label{eq:Levy_distrib}
\mathbb P\{ \xi > r\} = \bigl(1 + (r/R)^2\bigr)^{-\alpha} ,
\end{equation}
with a scale $R > 0$ and the scaling exponent $\mu = 2\alpha$, with
$\frac12 < \alpha < 1$. For this distribution,
Eq. (\ref{eq:hatrho_xi_iso}) yields a simple closed formula
\cite{Gradshteyn}
\begin{equation}
\hat{\rho}(k) = \frac{2^{1-\alpha}}{\Gamma(\alpha)} (|k|R)^{\alpha} K_{\alpha}(|k|R) ,
\end{equation}
where $K_\alpha(z)$ is the modified Bessel function of the second kind.
The asymptotic behavior of $K_\alpha(z)$ as $z\to 0$ implies that, as $k\to 0$,
\begin{equation}
\hat{\rho}(k) \simeq 1 - \frac{\pi \, (|k|R)^{2\alpha}}{2^{2\alpha} \sin(\pi\alpha) \Gamma(\alpha)\Gamma(\alpha+1)}
+ \frac{(kR)^2}{4(1-\alpha)} + O(|k|^{2+2\alpha}) .
\end{equation}
Comparing this expansion to Eq. (\ref{eq:hatrho_nu}), one can identify
\begin{equation} \label{eq:aR}
\fl
a = R \left(\frac{\pi}{2^{2\alpha} \sin(\pi\alpha) \Gamma(\alpha)\Gamma(\alpha+1)}\right)^{\frac{1}{2\alpha}} \,, \qquad
b = \frac{R^2}{4(1-\alpha)} \,, \qquad \nu = 2.
\end{equation}
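This identification can be checked numerically: with $\alpha = 3/4$ and $R = 1$ (so $\mu = 3/2$), the closed Bessel-$K$ form of $\hat{\rho}(k)$ agrees at small $k$ with the two-term expansion above, the residual being of the expected order $O(k^{2+2\alpha})$. A minimal sketch:

```python
import numpy as np
from scipy.special import kv, gamma as Gamma

# Small-k check of rho_hat for the radial Levy distribution (alpha = 0.75,
# R = 1) against its closed Bessel-K form.
alpha, R = 0.75, 1.0
rho = lambda k: 2**(1 - alpha) / Gamma(alpha) * (k * R)**alpha * kv(alpha, k * R)

# Coefficient a^{2 alpha} of the leading singular term, Eq. (aR):
A = np.pi / (2**(2 * alpha) * np.sin(np.pi * alpha)
             * Gamma(alpha) * Gamma(alpha + 1))

k = 1e-3
approx = A * k**(2 * alpha) - k**2 / (4 * (1 - alpha))
print(1 - rho(k), approx)
```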
In Fig. \ref{fig:levy_radial}, the mean perimeter computed by Monte
Carlo simulations for $\mu = 1.5$ and $R = 1$ is compared to the
theoretical prediction (\ref{eq:Ln_Levy1}). One can see that the
agreement is good but worse than for the earlier examples with a
finite variance. One obvious reason is that here we have determined
only two terms, whereas Eq. (\ref{eq:Lmean_asympt}) contains three
terms.
\begin{figure}
\begin{center}
\includegraphics[width=85mm]{figure7.pdf}
\end{center}
\caption{
The rescaled mean perimeter, $\langle L_n\rangle/n^{1/\mu}$, for
isotropic planar random walk with radial L\'evy flights whose lengths
are distributed according to Eq. (\ref{eq:Levy_distrib}) with $\mu =
1.5$ and $R = 1$. The results of Monte Carlo simulations (shown by
circles) agree well with the theoretical prediction
(\ref{eq:Ln_Levy1}) (shown by solid line). }
\label{fig:levy_radial}
\end{figure}
\subsection{Independent L\'evy $\alpha$-stable symmetric jumps}
Finally, we investigate L\'evy $\alpha$-stable symmetric jumps, with
independent displacements along $x$ and $y$ coordinates given by
$\hat{p}_x(k) = \hat{p}_y(k) = \exp(-|ak|^\mu)$, with $1 < \mu <2$.
Using Eq. (\ref{eq:hatrho_xi}), one thus gets $\hat{\rho}_\theta(k) =
\exp(-|a_\theta k|^\mu)$, with
\begin{equation}
a_\theta = a \bigl(|\cos \theta|^\mu + |\sin \theta|^\mu\bigr)^{1/\mu} ,
\end{equation}
so that $\nu = 2\mu$ and $b_\theta = a_\theta^{2\mu}/2$. The mean
perimeter is determined by Eq. (\ref{eq:Ln_Levy2}), with
\begin{equation}
C_0 = a \, \frac{\mu \Gamma(1-1/\mu)}{\pi} \int\limits_0^{2\pi} d\theta \, \bigl(|\cos \theta|^\mu + |\sin \theta|^\mu\bigr)^{1/\mu} ,
\end{equation}
and $\gamma$ given by Eq. (\ref{eq:gamma_Levystable}) which is
independent of $\theta$.
For $\mu = 3/2$, we obtain numerically $a^{-1} C_0 = 8.6275\ldots$ and
$a^{-1} C_1 = -5.2151\ldots$. Figure \ref{fig:levy_stable} shows the
good agreement between the theoretical prediction (\ref{eq:Ln_Levy2})
and Monte Carlo simulations.
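The quoted value of $C_0$ follows from a one-dimensional quadrature of the angular integral above; the sketch below (illustrative) reproduces it for $\mu = 3/2$, $a = 1$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

# Numerical evaluation of C_0 for independent Levy alpha-stable jumps with
# mu = 3/2 and a = 1, from the angular integral quoted in the text.
mu = 1.5
pref = mu * Gamma(1 - 1 / mu) / np.pi
integrand = lambda t: (np.abs(np.cos(t))**mu + np.abs(np.sin(t))**mu)**(1 / mu)
I, _ = quad(integrand, 0, 2 * np.pi, points=[np.pi / 2, np.pi, 3 * np.pi / 2])
C0 = pref * I
print(C0)  # the text quotes 8.6275...
```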
\begin{figure}
\begin{center}
\includegraphics[width=85mm]{figure8.pdf}
\end{center}
\caption{
The rescaled mean perimeter, $\langle L_n\rangle/n^{1/\mu}$, for
anisotropic planar random walk with independent L\'evy $\alpha$-stable
jump distribution with $\mu = 1.5$ and $a = 1$. The results of Monte
Carlo simulations (shown by circles) agree well with the theoretical
prediction (\ref{eq:Ln_Levy2}) (shown by solid line). }
\label{fig:levy_stable}
\end{figure}
\section{Discussion and conclusion}
\label{sec:discussion}
To summarize, we have presented exact asymptotic results for the mean
perimeter of the convex hull of an $n$-step discrete-time random walk
in a plane, with a generic continuous jump distribution satisfying the
central symmetry assumption in Eq. (\ref{eq:p_symm}). Explicit
results, along with simulations confirming them have been presented
for several examples of such jump distributions. For the mean area of
the convex hull, we have derived exact results for isotropic Gaussian
jump distributions. For jumps with a finite variance, our results
provide precise estimates of the deviations from the Brownian limit
and explain the discrepancies between the asymptotic Brownian limit
results and observed simulations for finite but large $n$.
The obtained results are particularly valuable for applications
dealing with discrete-time random processes, e.g., home range
estimation in ecology. Given that the tracks of animal displacements
are typically recorded at discrete time steps (e.g., daily
observations) and relatively short, the subleading terms play an
important role. The asymptotic formulas can also be used for
calibrating new estimators, based on the local convex hull, that were
proposed for the analysis of intermittent processes in microbiology
\cite{Lanoiselee17}. Finally, the knowledge of the mean perimeter of
the convex hull can be used to estimate the scaling exponent and the
scale of symmetric L\'evy flights, for which the conventional mean and
variance estimators are useless.
There are many interesting open problems that may possibly be
addressed using the methods presented here. For example, the
numerical evidence suggests a possible extension of the derived
asymptotic formula for the mean area to other isotropic processes,
beyond the Gaussian case.
Also, it would be interesting to extend our results to the case of the
convex hull of planar discrete-time random bridges (where the walker
is constrained to come back to the starting point after $n$ discrete
jumps). For such discrete-time bridges, there are recent exact
results on the statistics of the first two maxima and the gap between
them~\cite{MMS2014} which may be useful for the convex hull problem.
One can also consider the problem with many independent discrete-time
walkers. Finally, it would be interesting to study the statistics of
the perimeter and the area for random walks with jump distributions
that violate the reflection property in Eq. (\ref{eq:p_symm}), for
example, for walks in presence of a drift or a potential.
\section*{Acknowledgments}
DG acknowledges the support under Grant No. ANR-13-JSV5-0006-01 of the
French National Research Agency.
% Summability characterizations of positive sequences, https://arxiv.org/abs/2101.03402
\section{Introduction}
Kummer's test is an advanced theoretical test which provides necessary and sufficient conditions that ensure convergence and divergence of series of positive terms. Below we present the statement of this result. Its proof and some additional historical background may be found in~\cite{Ludmila}, \cite{Knopp}, \cite{Tong:2004}.
\begin{theorem}
(Kummer's test)\label{Kummer}
Consider the series \(\sum a_{n}\) where \(\{a_{n}\}\) is a~sequence of positive real numbers.
\begin{enumerate}
\item[(i)] The series \(\sum a_{n}\) converges if and only if there exist a~sequence \(\{q_{n}\}\), a~real number \(c>0\) and an integer \(N\geq 1\) for which
\[
q_{n}\frac{a_{n}}{a_{n+1}}-q_{n+1}\geq c, \qquad n \geq N.
\]
\item[(ii)] The series \(\sum a_{n}\) diverges if and only if there exist a~sequence \(\{q_{n}\}\) and an integer \(N\geq 1\) for which
\(\sum \frac{1}{q_{n}}\) is a~divergent series and
\[
q_{n}\frac{a_{n}}{a_{n+1}}-q_{n+1}\leq 0, \qquad n \geq N.
\]
\end{enumerate}
\end{theorem}
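The choice $q_n = n$ in Theorem~\ref{Kummer} recovers Raabe's test. As a small numerical illustration (not part of any proof), one can evaluate the Kummer term $q_n a_n/a_{n+1} - q_{n+1}$ for a convergent and a divergent example: for $a_n = 1/n^2$ it equals $(n+1)/n \geq c = 1$, certifying convergence by part (i), while for $a_n = 1/n$ it vanishes identically and $\sum 1/q_n$ diverges, certifying divergence by part (ii).

```python
# Kummer's test with q_n = n (i.e. Raabe's test):
# kummer_term = q_n * a_n / a_{n+1} - q_{n+1}.
def kummer_term(a, q, n):
    return q(n) * a(n) / a(n + 1) - q(n + 1)

q = lambda n: n
# a_n = 1/n^2: term equals (n+1)/n >= 1, so the series converges (part (i)).
conv_terms = [kummer_term(lambda n: 1 / n**2, q, n) for n in range(1, 1001)]
# a_n = 1/n: term is identically 0 and sum 1/q_n diverges (part (ii)).
div_terms = [kummer_term(lambda n: 1 / n, q, n) for n in range(1, 1001)]
print(min(conv_terms), max(abs(t) for t in div_terms))
```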
Besides providing an extremely far-reaching characterization of convergence and divergence of series with positive terms, the importance of Kummer's test is chiefly attested by its implications. For instance, Bertrand's test, Gauss's test and Raabe's test~\cite{Tong:2004} are all special cases of Theorem~\ref{Kummer}. Kummer's test may also be useful to characterize convergence in normed vector spaces~\cite[p. 7]{Muscat}, and applications of this test
can be found in other branches of Analysis, such as difference equations~\cite{Gyori}, as well.
On the other hand, turning our focus to series of the form \(\sum c_n a_n\), there are only a few results dealing with this type of series. Abel's test and the test of Dedekind and du Bois-Reymond (see, for instance,~\cite[p. 315]{Knopp},~\cite{Hadamard}) are probably the most famous, since they deal with general series of complex numbers. These tests provide conditions that ensure convergence by means of independent assumptions on \(\{c_{n}\}\) and \(\{a_{n}\}\). In this context, the main feature of our results (Theorem~\ref{thm1} and Theorem~\ref{thm2}) is that they characterize the relation between the sequences \(\{a_{n}\}\) and \(\{c_{n}\}\) that yields necessary and sufficient conditions for the convergence and divergence of the series \(\sum c_n a_n\), respectively. Moreover, we present some examples and interesting consequences of this characterization. In particular, generalized versions of Raabe's, Bertrand's and Gauss's tests for convergence and divergence of series of the form \(\sum c_n a_n\)
are obtained. Another important consequence of Theorem~\ref{thm1} is that it is possible to
show that Olivier's theorem (see, for instance~\cite[p. 124]{Knopp} or~\cite{Const}, \cite{Salat} for more information)
still holds when the monotonicity assumption on the sequence of positive terms \(\{a_n\}\)
is replaced by an additional assumption on an auxiliary sequence. We also present consequences of Theorem~\ref{thm1} when it is combined with the well-known Abel summation formula
and the Cauchy condensation theorem. We refer to~\cite[pp. 120 and 313]{Knopp} for more details on these results.
The rest of the paper is organized as follows.
In Section~\ref{subsum}, we present necessary and sufficient conditions for convergence/divergence of series generated by subsequences by extending Theorem~\ref{Kummer}.
In Section~\ref{cnan} we present the results dealing with convergence and divergence of series of the form \(\sum c_n a_n\). The main idea is to obtain necessary and sufficient conditions by means of an extension of Theorems~\ref{conv} and~\ref{div}. As we show, we characterize the relation between the sequences \(\{c_n\}\) and \(\{a_n\}\) that ensures convergence and divergence of the series.
In Section~\ref{EC} we present some consequences of the results obtained.
\section{An extension of Kummer's test: I}\label{subsum}
In this section we present a~first extension of Theorem~\ref{Kummer}. Its main feature is that it shows that it is possible to obtain information about the summability of a~sequence of positive real numbers based on the relation between non-consecutive elements of this sequence. In particular, the idea is to characterize the summability of a~sequence by comparing it to the elements of the translated sequence \(\{a_{n+m}, \ n\geq 1\}\), for some~\(m \ge 1\).
The first main result of this section is presented below.
\begin{theorem}\label{conv}
Let \(\{a_n\}\) be a~sequence of positive real numbers and \(m\geq 1\) any
fixed positive integer.
If there exists a~positive sequence \(\{q_n\}\) such that
\[
q_n\frac{a_n}{a_{n+m}}-q_{n+m}\geq c,
\]
for some \(c>0\), for all \(n\) sufficiently large, then \(\sum a_{n}\) converges.
The converse holds as well.
\end{theorem}
\begin{proof}
From the assumption we get that
\[
q_n a_n-a_{n+m}q_{n+m}\geq ca_{n+m},
\]
for all \(n>N\), for some \(N\) large.
Hence
\[
\sum_{n = N+1}^{N+k} (q_n a_n-a_{n+m}q_{n+m})\geq c \sum_{n = N+1}^{N+k} a_{n+m},
\]
for all \(k\geq 1\). That is, by the telescopic sum and considering without loss of generality \(k > m\), we have
\begin{align*}
q_{N+1}a_{N+1} + \dots + q_{N+m}a_{N+m} - a_{N+k+1}q_{N+k+1} - \dots -{ }&{ }a_{N+k+m}q_{N+k+m} \\
&{ }\geq c \sum_{n = N+1}^{N+k} a_{n+m},
\end{align*}
for all \(k>m\). Since \(\{a_n\}\) and \(\{q_n\}\) are positive, the left-hand side of the previous inequality is less than \(q_{N+1} a_{N+1}+ \dots + q_{N+m}a_{N+m}\), and then the series
\(\sum a_{n+m}\) converges. Therefore, \(\sum a_{n}\) also converges.
Conversely, suppose that \(\sum a_n\) converges and, for each positive integer \(m\geq 1\), write \(S_m := \sum_{n\geq 1} a_{n+m-1}\).
Let us define \(\{q_n\}\) as
\[
q_n = \frac{S_{m}-\sum_{i = 1}^{n}a_{i+m-1}}{a_{n}}, \qquad n = 1, 2, 3, \dots ,
\]
thus, for this \(\{q_n\}\) we have that
\begin{align*}
q_n \frac{a_n}{a_{n+m}}-q_{n+m}
{ }&{ }= \frac{\sum_{i = n+1}^{n+m}a_{i+m-1}}{a_{n+m}}\\
{ }&{ }= 1+\frac{a_{n+m+1}+ \dots +a_{n+2m-1}}{a_{n+m}}\\
{ }&{ }> 1,
\end{align*}
for all \(n\geq 1\).
The proof is concluded.
\end{proof}
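As a numerical sanity check (our own illustrative sketch, not part of the proof; the choices \(a_n = 2^{-n}\), \(m = 2\) and \(q_n = 1\) are assumptions made for the example), the hypothesis of Theorem~\ref{conv} can be verified exactly in rational arithmetic:

```python
from fractions import Fraction

# Illustrative check of the sufficient condition with a_n = 2^{-n},
# m = 2 and the constant sequence q_n = 1 (choices are ours):
# q_n * a_n / a_{n+m} - q_{n+m} = 2^m - 1 = 3 for every n, so the
# hypothesis holds with c = 3 and sum a_n converges.
m = 2
a = lambda n: Fraction(1, 2**n)
q = lambda n: Fraction(1)

for n in range(1, 200):
    assert q(n) * a(n) / a(n + m) - q(n + m) == 3

# The partial sums indeed approach sum_{n>=1} 2^{-n} = 1:
partial = sum(a(n) for n in range(1, 60))
assert partial == 1 - Fraction(1, 2**59)
```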
We proceed by presenting a~divergence version of the previous theorem.
\begin{theorem}\label{div}
Let \(\{a_n\}\) be a~sequence of positive real numbers and \(m\geq 1\) a~fixed positive integer. If there exists a~positive sequence \(\{q_n\}\) such that \(\sum \frac{1}{q_{n}}\) diverges, \(q_n a_n\geq c>0\),
and
\[
q_n\frac{a_n}{a_{n+m}}-q_{n+m}\leq 0,
\]
for all \(n\) sufficiently large,
then
\(\sum_{n = 1}^{\infty} a_{n}\) diverges.
The converse holds, as well.
\end{theorem}
\begin{proof}
From the assumptions we obtain that there exists \(N>0\) such that
\[
q_n\frac{a_n}{a_{n+m}}-q_{n+m}\leq 0,
\]
for all \(n\geq N\).
Hence, using \(q_n a_n\geq c\),
\[
c\frac{1}{q_{n+m}}\leq a_{n+m},
\]
for all \(n>N\). Since \(\sum 1/q_n\) diverges, we obtain from the comparison test that \(\sum a_n\) diverges.
Conversely, suppose that \(\sum a_{n}\) diverges. Define for each \(n\geq 1\)
\[
q_n = \frac{\sum_{i = 1}^{n}a_i}{a_n}.
\]
Note that the definition implies \(q_1 = 1\), hence \(a_n q_n = \sum_{i = 1}^{n}a_{i}\geq a_1\), for all \(n\geq 1\), that is, \(a_n q_n\geq a_1 q_1>0\) for all \(n\geq 1\).
Clearly
\[
q_{n}\frac{a_{n}}{a_{n+m}}-q_{n+m}\leq 0,
\]
for all \(n\geq 1\).
Let us now show that \(\sum \frac{1}{q_{n}}\)
diverges. From the divergence of \(\sum a_{n}\), given any positive integer \(k\) there exists a~positive integer \(n\geq k\) such that
\begin{equation}\label{aux11}
a_{k}+\dots+a_{n}\geq a_1+\dots+a_{k-1}.
\end{equation}
Due to~\eqref{aux11},
\begin{align*}
\sum_{j = k}^{n}\frac{1}{q_{j}}
{ }&{ }= \frac{a_{k}}{a_{1}+ \dots +a_{k}}+\dots+\frac{a_{n}}{a_{1}+ \dots +a_{n}}\\
{ }&{ }\geq \frac{a_{k}}{a_{1}+ \dots +a_{n}}+\dots+\frac{a_{n}}{a_{1}+ \dots +a_{n}}\\
{ }&{ }= \frac{1}{\frac{a_{1}+ \dots +a_{k-1}}{a_{k}+ \dots +a_{n}} +1}\\
{ }&{ }\geq \frac{1}{2}.
\end{align*}
Hence the sequence of partial sums of \(\sum \frac{1}{q_{n}}\) is not a~Cauchy sequence. Therefore the series \(\sum \frac{1}{q_{n}}\) diverges.
\end{proof}
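The following sketch (the choices \(a_n = 1/n\) and \(q_n = n\) are illustrative assumptions of ours) checks the hypotheses of Theorem~\ref{div} exactly, recovering the divergence of the harmonic series:

```python
from fractions import Fraction

# Illustrative check of Theorem div with a_n = 1/n, q_n = n, any m >= 1:
# q_n * a_n / a_{n+m} - q_{n+m} = (n+m) - (n+m) = 0 <= 0, q_n * a_n = 1 > 0,
# and sum 1/q_n is the (divergent) harmonic series; hence sum a_n diverges.
a = lambda n: Fraction(1, n)
q = lambda n: Fraction(n)

for m in (1, 2, 5):
    for n in range(1, 100):
        assert q(n) * a(n) / a(n + m) - q(n + m) == 0
        assert q(n) * a(n) == 1

# The partial sums of sum a_n grow without bound (roughly like log n):
s = sum(float(a(n)) for n in range(1, 100000))
assert s > 11  # H_99999 is about 12.09
```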
\section{Extension of Kummer's test: II}\label{cnan}
Let us now turn our attention to series of the form \(\sum c_{n}a_{n}\) with positive terms.
The central idea in the following result is that it characterizes
the relation between the sequences \(\{c_{n}\}\) and \(\{a_{n}\}\) in order to ensure the convergence of the series.
The reader will note that the proof follows the same lines as the proof of Theorem~\ref{conv}, and also that it could be obtained by some changes in the proof of Theorem~\ref{Kummer}; nevertheless, as the reader will also note, our proof
provides important information about the relation between the sequences \(\{a_n\}\) and \(\{c_n\}\).
\begin{theorem}\label{thm1}
Consider the series \(\sum c_{n}a_{n}\) with \(\{a_{n}\}\) and \(\{c_{n}\}\) sequences of positive real numbers.
The series \(\sum c_{n}a_{n}\) converges if and only if there exist a~sequence \(\{q_{n}\}\) of positive real numbers and a~positive integer \(N\geq 1\) for which
\[
q_{n}\frac{a_{n}}{a_{n+1}}-q_{n+1}\geq c_{n+1}, \quad n\geq N.
\]
\end{theorem}
\begin{proof}
Let us first show that the stated condition implies that \(\sum c_{n}a_{n}\) converges. For this, note that the condition
\[
q_{n}\frac{a_{n}}{a_{n+1}}-q_{n+1}\geq c_{n+1}, \quad n\geq N
\]
implies that
\begin{equation}\label{auxx}
a_{n}q_{n}\geq a_{n+1}(q_{n+1}+ c_{n+1}), \quad n\geq N.
\end{equation}
That is,
\begin{align*}
a_{N}q_{N}
{ }&{ }\geq a_{N+1}(q_{N+1}+c_{N+1})\\
{ }&{ }\geq a_{N+2}(q_{N+2}+c_{N+2})+a_{N+1}c_{N+1}\\
{ }&{\ \,}\vdots\\
{ }&{ }\geq a_{N+k}q_{N+k} + \sum_{i = 1}^{k}c_{N+i}a_{N+i}\\
{ }&{ }\geq \sum_{i = 1}^{k}c_{N+i}a_{N+i}>0,
\end{align*}
for all integer \(k\geq 0\). This implies the convergence of \(\sum c_{n}a_{n}\).
For the converse, suppose that \(\sum c_{n}a_{n}\) converges, set \(S:= \sum c_{n}a_{n}\), and define
\begin{equation}\label{pn}
q_{n} = \,\frac{S-\sum_{i = 1}^{n}c_{i}a_{i}}{a_{n}}, \qquad n = 1, 2, 3, \dots .
\end{equation}
For this \(\{q_{n}\}\), clearly \(q_{n}>0\) for all \(n\geq 1\) and it is easy to check that
\[
q_{n}\frac{a_{n}}{a_{n+1}}-q_{n+1} = c_{n+1}, \quad n\geq 1.
\qedhere
\]
\end{proof}
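The converse construction~\eqref{pn} can also be checked numerically. The sketch below (the choices \(c_n = 1\) and \(a_n = 1/n^2\), hence \(S = \pi^2/6\), are illustrative assumptions) verifies that the constructed \(\{q_n\}\) satisfies the stated identity up to rounding:

```python
import math

# Numerical sanity check of the converse construction (pn) in Theorem thm1
# for the illustrative choice c_n = 1, a_n = 1/n^2 (so S = pi^2/6):
# the constructed q_n satisfies q_n * a_n/a_{n+1} - q_{n+1} = c_{n+1}.
S = math.pi**2 / 6
a = lambda n: 1.0 / n**2
c = lambda n: 1.0

def tail(n):  # S - sum_{i=1}^n c_i a_i
    return S - sum(c(i) * a(i) for i in range(1, n + 1))

q = lambda n: tail(n) / a(n)

for n in range(1, 50):
    lhs = q(n) * a(n) / a(n + 1) - q(n + 1)
    assert abs(lhs - c(n + 1)) < 1e-9  # equality, up to rounding
```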
Some remarks:
\begin{enumerate}
\item[(i)] One can observe that it is, of course, possible to
reduce any series to this form, as any number can be
expressed as the product of two other numbers. Success in applying the above
theorem will depend on the skill with which the terms are so split up.
\item[(ii)] Note that in the first part of Theorem~\ref{thm1}, the assumption of
positivity of the sequences \(\{a_{n}\}\) and \(\{c_{n}\}\)
can be replaced by the following assumptions: \(\{a_{n}\}\) is positive and \(\{c_n\}\) is such that \(\sum_{i = 1}^{k}c_{i}a_{i}>0 \) for all
\(k\) sufficiently large.
\end{enumerate}
Next, we present a~version of Kummer's test for divergent series
of the form \(\sum c_{n}a_{n}\).
The reader will note that it is more restrictive than Theorem~\ref{Kummer}-\((ii)\); however, it may be suitable in some cases.
\begin{theorem}\label{thm2}
Consider the series \(\sum c_{n}a_{n}\) with \(\{a_{n}\}\) and \(\{c_{n}\}\) sequences of positive real numbers.
\begin{enumerate}
\item[(i)] Suppose that
there exist a~positive sequence \(\{q_{n}\}\) and a~positive integer \(N\) for which
\[
q_{n}\frac{a_{n}}{a_{n+1}}-q_{n+1}\leq -c_{n+1}, \quad n\geq N
\]
with \(\sum \frac{1}{q_{n}}\) being a~divergent series. Then \(\sum a_{n}\), \(\sum \frac{1}{c_{n}}\), \(\sum (q_{n}-c_{n})a_{n}\) and \(\sum q_{n}a_{n}\) diverge. If, in addition, \(\sum \frac{c_{n}}{q_{n}}\) diverges
then \(\sum c_{n}a_{n}\) diverges.
\item[(ii)] Suppose that both series \(\sum c_{n}a_{n}\) and \(\sum a_{n}\) diverge. Also,
suppose that for every \(m\in\mathbb{N}\) there exists \(r\geq m\), \(r\in\mathbb{N}\), such that
\[
a_{m}+\dots+a_{r}\geq c_{m}a_{m}+\dots+c_{r}a_{r}.
\]
Then there exist a~sequence \(\{q_{n}\}\) and a~positive integer \(N\geq 1\) such that
\(\sum\frac{1}{q_{n}}\) diverges
and
\[
q_{n}\frac{a_{n}}{a_{n+1}}-q_{n+1}\leq -c_{n+1}, \quad n\geq N.
\]
\end{enumerate}
\end{theorem}
\begin{proof}
To prove \((i)\) note that
\(\{q_{n}\}\) satisfies
\begin{gather}
a_{n+1}\geq \frac{q_{n}a_{n}}{q_{n+1}-c_{n+1}}, \quad n\geq N,\label{aux111} \\
0<q_{n+1}-c_{n+1}<q_{n+1},\quad n\geq N \label{aux1} \\
\textup{and} \nonumber \\
0<c_{n+1}<q_{n+1}, \quad n\geq N. \nonumber
\end{gather}
By the last inequality and the comparison test we see that \(\sum \frac{1}{c_{n}}\) diverges. Next, using~\eqref{aux111} successively we see that
\begin{gather*}
a_{N+1}\geq \frac{q_{N}a_{N}}{q_{N+1}-c_{N+1}}, \\
a_{N+2}\geq \frac{q_{N+1}a_{N+1}}{q_{N+2}-c_{N+2}}\geq \frac{a_{N}q_{N}q_{N+1}}{(q_{N+2}-c_{N+2})(q_{N+1}-c_{N+1})},
\end{gather*}
and in general,
\begin{equation}\label{aux}
a_{N+k+1} \geq \frac{a_{N}q_{N}q_{N+1} \dots q_{N+k}}{(q_{N+1}-c_{N+1}) \dots (q_{N+k+1}-c_{N+k+1})}, \quad k\geq 0.
\end{equation}
From~\eqref{aux1} and~\eqref{aux} we get
\begin{equation}\label{auxDiverg}
a_{N+k+1}> \frac{a_{N}q_N}{q_{N+k+1}}, \quad k\geq 0.
\end{equation}
Thus
\[
\sum_{k = 0}^{\infty}a_{N+k+1}> a_{N}q_N\sum_{k = 0}^{\infty}\frac{1}{q_{N+k+1}}
\]
and therefore \(\sum a_{n}\) diverges.
From~\eqref{aux}
\[
(q_{N+k+1}-c_{N+k+1})a_{N+k+1}\geq \frac{a_{N}q_{N}q_{N+1} \dots q_{N+k}}{(q_{N+1}-c_{N+1}) \dots (q_{N+k}-c_{N+k})}, \quad k\geq 0,
\]
and applying once again~\eqref{aux1} we obtain that
\[
q_{N+k+1}a_{N+k+1}>(q_{N+k+1}-c_{N+k+1})a_{N+k+1}\geq a_{N}q_{N}>0, \quad k\geq 0.
\]
This last set of inequalities implies that
\[
\lim_{k\to\infty} q_{N+k+1}a_{N+k+1}\neq 0
\quad \textup{ and } \quad
\lim_{k\to\infty} (q_{N+k+1}-c_{N+k+1})a_{N+k+1}\neq 0,
\]
so both series \(\sum q_{n}a_{n}\) and \(\sum (q_{n}-c_{n})a_{n}\) diverge.
Note that from~\eqref{auxDiverg} we obtain that
\begin{equation}\label{auxDiverg1}
c_{N+k+1}a_{N+k+1}> a_{N}q_N\frac{c_{N+k+1}}{q_{N+k+1}}, \quad k\geq 0.
\end{equation}
Therefore, if \(\sum \frac{c_{n}}{q_{n}} \) diverges, then it is clear that
\(\sum c_{n}a_{n}\) diverges.
In order to prove \((ii)\) define
\[
q_{n} = \frac{\sum_{i = 1}^{n}c_{i}a_{i}}{a_{n}}, \quad n\geq 1.
\]
Clearly, this is a~sequence of positive real numbers that satisfies
\[
q_{n}\frac{a_{n}}{a_{n+1}}-q_{n+1}\leq -c_{n+1}, \quad n\geq 1.
\]
Let us show that \(\sum \frac{1}{q_{n}}\) diverges by showing that the sequence \(\{s_{k}\}\) of partial sums, \(s_{k} = \sum_{i = 1}^{k}\frac{1}{q_{i}}\), \(k\geq 1\), is not a~Cauchy sequence.
Since \(\sum c_{n}a_{n}\) is divergent, given \(m\in \mathbb{N}\) there exists \(k > m\), \(k\in\mathbb{N}\), such that
\begin{equation}\label{aux2}
c_{m}a_{m}+\dots+c_{k}a_{k}>c_{1}a_{1}+\dots+c_{m-1}a_{m-1}.
\end{equation}
Also, from the hypothesis, there exists \(r\geq m\) such that
\begin{equation}\label{aux3}
a_{m}+\dots+a_{r}\geq c_{m}a_{m}+\dots+c_{r}a_{r}.
\end{equation}
Next, we split the proof in two cases: \(k\leq r\) and \(k>r\).
If \(k\leq r\), from~\eqref{aux2} we see that
\begin{equation}\label{aux112}
c_{m}a_{m}+\dots+c_{k}a_{k}+\dots+c_{r}a_{r}\geq c_{m}a_{m}+\dots+c_{k}a_{k}>c_{1}a_{1}+\dots+c_{m-1}a_{m-1}.
\end{equation}
Thus, by~\eqref{aux112} and~\eqref{aux3}
\begin{align*}
\sum_{n = m}^{r}\frac{1}{q_{n}}
{ }&{ }= \frac{a_{m}}{c_{1}a_{1}+ \dots +c_{m}a_{m}}+\dots+\frac{a_{r}}{c_{1}a_{1}+ \dots +c_{r}a_{r}}\\
{ }&{ }\geq \frac{a_{m}+ \dots +a_{r}}{c_{1}a_{1}+ \dots +c_{r}a_{r}}\\
{ }&{ }\geq \frac{c_{m}a_{m}+ \dots +c_{r}a_{r}}{c_{1}a_{1}+ \dots +c_{r}a_{r}}\\
{ }&{ }= \frac{1}{\frac{c_{1}a_{1}+ \dots +c_{m-1}a_{m-1}}{c_{m}a_{m}+ \dots +c_{r}a_{r}}+ 1}\\
{ }&{ }>\frac{1}{2}
\end{align*}
and \(\{s_k\}\) is not a~Cauchy sequence. On the other hand, if \(k>r\) we can use the hypothesis again (now applied to \(m_1 = r+1\)) to obtain \(r_{1}\geq r+1\) such that
\[
a_{r+1}+\dots+a_{r_{1}}\geq c_{r+1}a_{r+1}+\dots+c_{r_{1}}a_{r_1}.
\]
Again, we can use the same argument to conclude that there exists \(r_{2}\geq r_{1}+1\) such that
\[
a_{r_{1}+1}+\dots+a_{r_{2}}\geq c_{r_{1}+1}a_{r_{1}+1}+\dots+c_{r_{2}}a_{r_{2}}.
\]
This procedure can be applied a~finite number of times in order to obtain \(r_{j} \geq k\) for which
\[
a_{r_{(j-1)}+1}+\dots+a_{r_{j}}\geq c_{r_{(j-1)}+1}a_{r_{(j-1)}+1}+\dots+c_{r_{j}}a_{r_{j}}.
\]
Summing up~\eqref{aux3} with all these previous inequalities we obtain that
\[
a_{m}+\dots+a_{r_{j}}\geq c_{m}a_{m}+\dots+c_{r_{j}}a_{r_{j}}
\]
with \(k\leq r_j\).
This reduces the proof to the previous case, which has already been treated.
\end{proof}
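The construction in the proof of part \((ii)\) can be illustrated numerically. In the sketch below (the choices \(c_n = 1\) and \(a_n = 1/n\) are our own illustrative assumptions) one has \(q_n = n H_n\), where \(H_n\) is the \(n\)th harmonic number, and the defining inequality holds with equality:

```python
import math

# Sketch of the construction in Theorem thm2(ii) for c_n = 1, a_n = 1/n:
# q_n = (sum_{i<=n} c_i a_i)/a_n = n*H_n and one checks that
# q_n * a_n/a_{n+1} - q_{n+1} = -1 = -c_{n+1} exactly.
H = [0.0]
for n in range(1, 2001):
    H.append(H[-1] + 1.0 / n)

a = lambda n: 1.0 / n
q = lambda n: n * H[n]

for n in range(1, 1999):
    lhs = q(n) * a(n) / a(n + 1) - q(n + 1)
    assert abs(lhs + 1.0) < 1e-9

# sum 1/q_n = sum 1/(n*H_n) diverges (very slowly, like log log n):
s = sum(1.0 / q(n) for n in range(1, 2000))
assert s > 2.0
```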
\section{Some examples and consequences}\label{EC}
The main goal in this section is to present some of the implications of the main results of this paper.
The next three theorems are extensions of the Raabe, Bertrand and Gauss tests derived from
Theorem~\ref{thm1} and Theorem~\ref{thm2}. For more information about these tests we refer to~\cite{Ludmila}, \cite{Knopp} and references therein.
Consider the sequences
\[
R_n^{-} = n \frac{a_n}{a_{n+1}}-(n+1)- c_{n+1} \quad \textrm{and} \quad \,R_n^{+} = n \frac{a_n}{a_{n+1}}-(n+1)+ c_{n+1},
\]
for every positive integer \(n\).
\begin{theorem}[Raabe's test]
Let \(\sum c_n a_n\) be a~series of positive terms and suppose
that \(\liminf R_n^{-} = R_1\) and \(\limsup R_n^{+} = R_2\). If
\begin{enumerate}
\item[(i)] \(R_1> 0\), then \(\sum c_n a_n\) converges;
\item[(ii)] \(R_2< 0\) and \(\sum c_n/n \) diverges, then \(\sum c_n a_n\) diverges.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item[(i)] If \(R_1> 0\), then for all \(n\) sufficiently large we have that
\[
n\frac{a_n}{a_{n+1}}-(n+1)- c_{n+1}\geq 0,
\]
hence Theorem~\ref{thm1}, with \(q_n = n\) for all \(n\geq 1\), implies that the series \(\sum c_n a_n\) converges.
\item[(ii)] If \(R_2< 0\), then for all \(n\) sufficiently large
\[
n\frac{a_n}{a_{n+1}}-(n+1)+ c_{n+1}\leq 0.
\]
Again, take \(q_{n} = n\) for all \(n\geq1\). So, due to the divergence of \(\sum c_n/n \), Theorem~\ref{thm2} implies that \(\sum c_n a_n\) diverges.
\qedhere
\end{enumerate}
\end{proof}
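A concrete instance of the generalized Raabe test (the choices \(c_n = 1\) and \(a_n = 1/n^3\) are illustrative assumptions of ours) can be verified in exact rational arithmetic:

```python
from fractions import Fraction

# Generalized Raabe test with c_n = 1, a_n = 1/n^3:
# R_n^- = n*a_n/a_{n+1} - (n+1) - c_{n+1} = 1 + 3/n + 1/n^2,
# so liminf R_n^- = 1 > 0 and sum c_n a_n = sum 1/n^3 converges.
a = lambda n: Fraction(1, n**3)
c = lambda n: Fraction(1)

for n in range(1, 500):
    R_minus = n * a(n) / a(n + 1) - (n + 1) - c(n + 1)
    assert R_minus == 1 + Fraction(3, n) + Fraction(1, n**2)
    assert R_minus > 0
```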
\begin{theorem}[Bertrand's test]
Let \(\sum c_n a_n\) be a~series of positive terms.
\begin{enumerate}
\item[(i)] If
\[
\frac{a_n}{a_{n+1}}> 1+\frac{1}{n}+\frac{\theta_n +c_{n+1}}{n\ln(n)},
\]
for some sequence \(\{\theta_n\}\), such that \(\theta_{n}\geq \theta>1\), for all \(n\geq 1\), then \(\sum c_n a_n\) converges.
\item[(ii)] If
\[
\frac{a_n}{a_{n+1}}\leq 1+\frac{1}{n}+\frac{\theta_n -c_{n+1}}{n\ln(n)},
\]
for some sequence \(\{\theta_n\}\), such that \(\theta_{n}\leq \theta<1\), for all \(n\geq 1\), and \(\sum \frac{c_n}{n\ln(n)} \) diverges, then \(\sum c_n a_n\) diverges.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item[(i)] From the assumption
we get
\[
n\ln(n)\frac{a_n}{a_{n+1}}\geq n\ln(n)+\ln(n)+ c_{n+1}+\theta_n,
\]
for all \(n\) sufficiently large.
That is,
\[
n\ln(n)\frac{a_n}{a_{n+1}}-(n+1)\ln(n+1)\geq (n+1)\ln\left(\frac{n}{n+1}\right)+\theta_n+ c_{n+1},
\]
for all \(n\) sufficiently large.
It follows from the assumption on \(\{\theta_n\}\) that
\[
(n+1)\ln\left(\frac{n}{n+1}\right)+\theta_n> 0,
\]
for all \(n\) sufficiently large, hence we conclude that
\[
n\ln(n)\frac{a_n}{a_{n+1}}-(n+1)\ln(n+1)> c_{n+1},
\]
for all \(n\) sufficiently large.
Therefore, the convergence of \(\sum c_{n}a_{n}\) follows from an application of Theorem~\ref{thm1}.
\item[(ii)] It suffices to note that
\[
n\ln(n)\frac{a_n}{a_{n+1}}-(n+1)\ln(n+1)\leq (n+1)\ln\left(\frac{n}{n+1}\right)+\theta_n- c_{n+1},
\]
for all \(n\) sufficiently large.
Since \((n+1)\ln\left(\frac{n}{n+1}\right)+\theta_n< 0\) for all \(n\) sufficiently large,
we obtain
\[
n\ln(n)\frac{a_n}{a_{n+1}}-(n+1)\ln(n+1)< - c_{n+1},
\]
for all \(n\) sufficiently large. The conclusion follows from Theorem~\ref{thm2}.
\qedhere
\end{enumerate}
\end{proof}
\begin{theorem}[Gauss's test]\label{gauss}
Let \(\sum c_n a_n\) be a~series of positive terms, \(\gamma\geq 1\) and \(\{\theta_n\}\) a~bounded sequence of real numbers.
\begin{enumerate}
\item[(i)] Suppose that there exists a~\(\mu \in\mathbb{R}\) such that \(\theta_{n}\geq (1-\mu)n^{\gamma-1}\) holds for all \(n\) sufficiently large. If
\[
\frac{a_n}{a_{n+1}}\geq 1+ \frac{c_{n+1}}{n}+\frac{\mu}{n}+\frac{\theta_n}{n^{\gamma}},
\]
holds for all \(n\) sufficiently large, then
\(\sum c_n a_n\) converges.
\item[(ii)] Suppose that there exists a~\(\mu \in\mathbb{R}\) such that \(\theta_{n}\leq (1-\mu)n^{\gamma-1}\) holds for all \(n\) sufficiently large. If \(\sum c_n/n \) diverges and
\[
\frac{a_n}{a_{n+1}}\leq 1- \frac{c_{n+1}}{n}+\frac{\mu}{n}+\frac{\theta_n}{n^{\gamma}},
\]
for all \(n\) sufficiently large, then
\(\sum c_n a_n\) diverges.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item[(i)] From the assumption we obtain that
\[
n\frac{a_n}{a_{n+1}}-(n+1)\geq c_{n+1}+(\mu-1)+\frac{\theta_n}{n^{\gamma-1}},
\]
for all \(n\) sufficiently large.
Taking \(N>0\) such that \(\mu-1+\frac{\theta_n}{n^{\gamma-1}}\geq 0\),
for all \(n>N\), we conclude that
\[
n\frac{a_n}{a_{n+1}}-(n+1)\geq c_{n+1},
\]
for all \(n>N\). Therefore, by Theorem~\ref{thm1}, the series \(\sum c_n a_n\) converges.
\item[(ii)] Due to the assumptions in \((ii)\), we have that \(\mu-1 +\frac{\theta_n}{n^{\gamma-1}}\leq 0\) and
\[
n\frac{a_n}{a_{n+1}}-(n+1)\leq -c_{n+1}+(\mu-1)+\frac{\theta_n}{n^{\gamma-1}}\leq -c_{n+1},
\]
for all \(n\) sufficiently large. The conclusion follows from an application of Theorem~\ref{thm2}.
\qedhere
\end{enumerate}
\end{proof}
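The following sketch illustrates part \((i)\) of Gauss's test with illustrative assumptions of ours: \(c_n = 1\), \(a_n = 1/(n(n+1))\), \(\mu = 1\), \(\gamma = 1\) and \(\theta_n = 0\) (so \(\theta_n \geq (1-\mu)n^{\gamma-1} = 0\) holds, and the ratio hypothesis holds with equality):

```python
from fractions import Fraction

# Gauss's test (i) with c_n = 1, a_n = 1/(n(n+1)), mu = 1, gamma = 1,
# theta_n = 0:  a_n/a_{n+1} = (n+2)/n = 1 + c_{n+1}/n + mu/n,
# hence sum c_n a_n converges.
a = lambda n: Fraction(1, n * (n + 1))

for n in range(1, 300):
    assert a(n) / a(n + 1) == 1 + Fraction(1, n) + Fraction(1, n)

# Indeed sum 1/(n(n+1)) telescopes: its partial sums are 1 - 1/(k+1).
partial = sum(a(n) for n in range(1, 1000))
assert partial == 1 - Fraction(1, 1000)
```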
Theorem~\ref{thm1} also allows us to provide a~different approach to the well-known Cauchy condensation test, which we present in the next lemma.
\begin{lemma}[Cauchy's condensation test, {\cite[p. 120]{Knopp}}]\label{CC}
Let \(\{a_n\}\) be a~decreasing sequence of positive numbers.
Then \(\sum a_{n}\) converges if, and only if, \(\sum 2^{n}a_{2^{n}}\) converges.
\end{lemma}
For a~decreasing sequence \(\{a_{n}\}\) of positive real numbers, combining Lemma~\ref{CC} with Theorem~\ref{thm1}, we obtain the following characterization of convergence.
\begin{theorem}
Let \(\sum a_{n}\) be a~series with \(\{a_n\}\) a~decreasing sequence of positive numbers. Then \(\sum a_{n}\) converges if, and only if,
there exists a~sequence \(\{q_{n}\}\) of positive numbers such that
\[
q_{n}-2 q_{n+1}\geq 2 a_{2^{n+1}},
\]
for all \(n\) sufficiently large.
\end{theorem}
\begin{proof}
By Lemma~\ref{CC}, \(\sum a_n\) converges if, and only if,
\(\sum 2^n a_{2^n}\) converges. On the other hand, an application of Theorem~\ref{thm1}, with \(2^n\) in the role of \(a_{n}\) and \(a_{2^{n}}\) in the role of \(c_n\),
shows that \(\sum 2^n a_{2^n}\) converges if, and only if,
there exists a~sequence \(\{q_n\}\) of positive real numbers
such that
\[
q_n-2 q_{n+1}\geq 2 a_{2^{n+1}},
\]
for all \(n\) sufficiently large.
The proof is concluded.
\end{proof}
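As a hedged example for the condensation-based criterion (the choices \(a_n = 1/n^2\) and \(q_n = 4^{-n}\) are illustrative assumptions of ours), the condition of the theorem holds with equality:

```python
from fractions import Fraction

# For the decreasing positive sequence a_n = 1/n^2, the choice q_n = 4^{-n}
# gives q_n - 2*q_{n+1} = 4^{-n}/2 = 2*a_{2^{n+1}}, so the condition of the
# theorem holds (with equality) and sum 1/n^2 converges.
a = lambda n: Fraction(1, n * n)
q = lambda n: Fraction(1, 4**n)

for n in range(1, 60):
    assert q(n) - 2 * q(n + 1) == 2 * a(2 ** (n + 1))
```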
To close this section of applications we present a~result related to Olivier's theorem, which is stated below.
\begin{lemma}[{\cite[p. 124]{Knopp} or \cite{Const}}] \label{Olivier}
Let \(\{a_{n}\}\) be a~summable decreasing sequence of positive real numbers. Then \(\lim n\,a_{n} = 0\).
\end{lemma}
We are going to show that it is possible to recover the same asymptotic behavior of Olivier's theorem for \(\{a_n\}\) without the monotonicity assumption on \(\{a_n\}\).
Instead of using monotonicity, we consider an additional assumption on the sequence \(\{q_{n}\}\) (the auxiliary sequence of Theorem~\ref{thm1}).
\begin{theorem}
Suppose that \(\{a_{n}\}\) is a~sequence of positive numbers. We have that \(\sum a_n\) converges
if, and only if, there exists a~sequence \(\{q_n\}\) of positive numbers such that
\[
q_{n}\frac{n+1}{n}-q_{n+1}\geq (n+1)a_{n+1},
\]
for all \(n\) sufficiently large.
Moreover, if \(\{q_n\}\) satisfies
\[
\lim_{n\to\infty}\left(q_{n}\frac{n+1}{n}-q_{n+1}\right) = 0,
\]
then \(\lim n a_{n} = 0\).
\end{theorem}
\begin{proof}
It is clear that \(\sum a_{n}\) converges if and only if \(\sum \frac{1}{n}\,(na_{n})\) also converges.
From Theorem~\ref{thm1}, applied with \(1/n\) in the role of \(a_n\) and \(n a_n\) in the role of \(c_n\), we conclude that \(\sum a_n\) converges if, and only if, there exists
a sequence \(\{q_{n}\}\) of positive numbers such that
\[
q_{n}\frac{n+1}{n}-q_{n+1}\geq (n+1)a_{n+1},
\]
for all \(n\) sufficiently large.
Hence, \(\lim n a_n = 0\) certainly occurs when
the sequence \(\{q_n\}\) above is such that
\[
\lim_{n\to\infty}\left(q_{n}\frac{n+1}{n}-q_{n+1}\right) = 0.
\qedhere
\]
\end{proof}
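The theorem can be illustrated numerically (the choice \(a_n = 1/n^2\) and the resulting \(q_n = n(S - \sum_{i\leq n} a_i)\) with \(S = \pi^2/6\) are illustrative assumptions of ours):

```python
import math

# Olivier-type conclusion for a_n = 1/n^2 without using monotonicity:
# with q_n = n*(S - sum_{i<=n} a_i) and S = pi^2/6 one has
# q_n*(n+1)/n - q_{n+1} = (n+1)*a_{n+1} exactly, which tends to 0,
# and indeed n*a_n = 1/n -> 0.
S = math.pi**2 / 6
a = lambda n: 1.0 / n**2

def q(n):
    return n * (S - sum(a(i) for i in range(1, n + 1)))

for n in (1, 10, 100, 1000):
    gap = q(n) * (n + 1) / n - q(n + 1)
    assert abs(gap - (n + 1) * a(n + 1)) < 1e-8
assert 2000 * a(2000) < 1e-3  # n*a_n -> 0
```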
For more information on this asymptotic behavior of summable sequences of positive numbers we refer to~\cite{Lifly}, \cite{Const}, \cite{Salat} and references therein.
% arXiv:2101.03402 -- Summability characterizations of positive sequences
% https://arxiv.org/abs/2005.05500
\title{Binary polynomial power sums vanishing at roots of unity}
\begin{abstract}
Let $c_1(x),c_2(x),f_1(x),f_2(x)$ be polynomials with rational coefficients. With obvious exceptions, there can be at most finitely many roots of unity among the zeros of the polynomials $c_1(x)f_1(x)^n+c_2(x)f_2(x)^n$ with $n=1,2,\ldots$. We estimate the orders of these roots of unity in terms of the degrees and the heights of the polynomials $c_i$ and $f_i$.
\end{abstract}
\section{Introduction}
\label{sintr}
Let $c_1(x),c_2(x),f_1(x),f_2(x)$ be non-zero polynomials in ${\mathbb Q}[x]$. We denote by ${\bf u}:=\{u_n(x)\}_{n\ge 1}\subset {\mathbb Q}[x]$
the sequence of polynomials given by
\begin{equation}
\label{eq:1}
u_n(x)=c_1(x)f_1(x)^n+c_2(x)f_2(x)^n\quad {\text{\rm for~all}}\quad n\ge 1.
\end{equation}
We study roots of unity $\zeta$ such that ${u_n(\zeta)=0}$ for some~$n$. It can happen accidentally that $u_n(x)$ is the zero polynomial for some~$n$. We ignore these~$n$.
We would like to show that aside from some exceptional situations, the following holds true:
there exist at most finitely many roots of unity~$\zeta$ such that for some~$n$ the polynomial $u_n(x) $ is not identically zero but ${u_n(\zeta)=0}$.
The following example shows that we indeed have to exclude some exceptional cases.
\begin{example}
\label{exinf}
Let $a,b$ be integers with $b$ non-zero,
and assume that
$$
c_2(x)/c_1(x)=\delta x^a, \qquad f_2(x)/f_1(x)=\varepsilon x^b, \qquad \delta,\varepsilon\in \{1,- 1\}.
$$
We then get
$$
u_n(x)=c_1(x)f_1(x)^n(1+\delta \varepsilon^n x^{a+bn})
$$
and we see that if $x=\zeta$ is such that $\zeta^{a+bn}=-\delta\varepsilon^n$, then ${u_n(\zeta)=0}$.
The condition that $b\ne 0$ ensures that $u_n(x)$ is non-zero for $n$ sufficiently large (in fact, for all $n$ except possibly one, namely $n=-a/b$), and every $u_n(x)$ vanishes at the roots of unity of order ${|a+bn|}$ or ${2|a+bn|}$, depending on the sign of $\delta\varepsilon^n$.
\end{example}
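The vanishing described in the example is easy to observe numerically. The following sketch checks the simplest instance $c_1=c_2=1$, $f_1=1$, $f_2=x$ (so $a=0$, $b=1$, $\delta=\varepsilon=1$), where $u_n(x)=1+x^n$ vanishes at every primitive $(2n)$th root of unity:

```python
import cmath

# Numerical check of Example exinf in the case c_1 = c_2 = 1, f_1 = 1,
# f_2 = x (a = 0, b = 1, delta = epsilon = 1): u_n(x) = 1 + x^n
# vanishes at each primitive (2n)-th root of unity.
for n in range(1, 30):
    zeta = cmath.exp(1j * cmath.pi / n)  # a primitive (2n)-th root of unity
    u = 1 + zeta**n                       # zeta^n = e^{i*pi} = -1
    assert abs(u) < 1e-9
```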
It turns out that this example is the only case when the polynomials $u_n(x)$ vanish at infinitely many roots of unity. We have the following theorem.
\begin{theorem}
\label{thm:thmmain}
Let ${c_1(x),c_2(x),f_1(x),f_2(x)\in \Q[x]}$ be non-zero polynomials. For a positive integer~$n$ define $u_n(x)$ as in~\eqref{eq:1}. Then the following two conditions are equivalent.
\begin{enumerate}
\item
\label{iinf}
There exist infinitely many roots of unity~$\zeta$ such that for some~$n$ the polynomial $u_n(x) $ is not identically zero but ${u_n(\zeta)=0}$.
\item
\label{iex}
There exist ${a,b \in \Z}$ with ${b\ne 0}$ and ${\delta,\varepsilon \in \{1,-1\}}$ such that
$$
c_2(x)/c_1(x)=\delta x^a, \qquad f_2(x)/f_1(x)=\varepsilon x^b.
$$
\end{enumerate}
\end{theorem}
It is not hard to derive this theorem from classical results on unlikely intersections, such as the theorem of Bombieri, Masser, Zannier and Maurin~\cite{BHMZ10,BMZ99,Ma08}. See also the recent work of Ostafe and Shparlinski~\cite{Os16,OS20}, especially Theorem~2.11 and Corollary~2.14 in~\cite{OS20}.
However, we are mainly interested in a quantitative statement: when condition~\ref{iex} of Theorem~\ref{thm:thmmain} is not satisfied, we want to bound the orders of the roots of unity~$\zeta$ such that ${u_n(\zeta)=0}$ for some~$n$, in terms of the degrees and the heights of our polynomials $f_i,c_i$. To the best of our knowledge, no quantitative version of the Bombieri-Masser-Zannier-Maurin theorem is available which would imply such a bound.
To state our result, let us recall the definition of the height of a non-zero polynomial in $\Q[x]$. The height of a primitive vector ${{\mathbf a}=(a_1, \ldots,a_k)\in \Z^{k}}$
(\textit{primitive} means that ${\gcd(a_1,\ldots, a_k)=1}$) is defined by
$$
{\mathrm{h}}({\mathbf a}):=\log\max\{|a_1|, \ldots, |a_k|\}.
$$
In general, given a non-zero vector ${{\mathbf a}\in \Q^{k}}$, there exists ${\lambda \in \Q^\times}$, well defined up to multiplication by~$\pm1$, such that ${{\mathbf a}^\ast=\lambda {\mathbf a}}$ is primitive, and we set ${{\mathrm{h}}({\mathbf a}):={\mathrm{h}}({\mathbf a}^\ast)}$.
We define the height of a non-zero polynomial ${g(x)\in \Q[x]}$ as the height of the vector of its coefficients. More generally, we define the height of a non-zero vector ${(g_1, \ldots, g_k)\in \Q[x]^k}$ as the height of the vector formed of the coefficients of all polynomials $g_1, \ldots, g_k$.
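The height of a polynomial in $\Q[x]$ is straightforward to compute from this definition. The sketch below (a hypothetical helper of ours, not taken from the text) clears denominators, reduces the coefficient vector to a primitive integer vector, and returns the logarithm of the maximal absolute value of its entries:

```python
from fractions import Fraction
from functools import reduce
from math import gcd, log

# Height of a non-zero polynomial over Q, following the definition in the
# text: pass the rational coefficients, get log of the max entry of the
# associated primitive integer vector.
def height(coeffs):
    coeffs = [Fraction(c) for c in coeffs]
    lcm_den = reduce(lambda x, y: x * y // gcd(x, y),
                     (c.denominator for c in coeffs), 1)
    ints = [int(c * lcm_den) for c in coeffs]
    g = reduce(gcd, (abs(v) for v in ints))
    ints = [v // g for v in ints]           # primitive vector
    return log(max(abs(v) for v in ints))

# h(6x + 4) = h((3, 2)) = log 3;  h(x/2 + 1/3) = h((3, 2)) = log 3
assert abs(height([6, 4]) - log(3)) < 1e-12
assert abs(height([Fraction(1, 2), Fraction(1, 3)]) - log(3)) < 1e-12
```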
We have the following theorem.
\begin{theorem}
\label{thm:thmmain1}
Let ${c_1(x),c_2(x),f_1(x),f_2(x)\in \Q[x]}$ be non-zero polynomials such that condition~\ref{iex} of Theorem~\ref{thm:thmmain} is not satisfied.
Set
\begin{align*}
D&:=\max\{\deg c_1,\deg c_2,\deg f_1, \deg f_2\},\\
X&:=\max\{3,{\mathrm{h}}(c_1,c_2),{\mathrm{h}}(f_1,f_2)\}.
\end{align*}
Let~$m$ be a positive integer and~$\zeta$ a primitive $m$th root of unity such that for some~$n$ the polynomial $u_n(x) $ is not identically zero but ${u_n(\zeta)=0}$. Then
\begin{equation}
\label{eupperm}
m\le e^{100D(X+D)}.
\end{equation}
\end{theorem}
The numerical constant~$100$ here is rather loose; probably, one can replace it by~$4$ or so.
One may ask whether there is a bound for~$m$ which depends only on one of the parameters~$D$ or~$X$. The following examples show that this is not the case.
\begin{example}
Consider
${u_n(x)=(2x)^n-2^m}$,
for which
$$
(c_1(x),c_2(x),f_1(x),f_2(x))=(1,-2^m,2x,1), \qquad X=\max\{3,m\log 2\}.
$$
Then
${u_m(x)=2^m(x^m-1)}$
vanishes at primitive $m$th roots of unity, and we have ${m\ge X/\log2}$ (provided ${m\ge 5}$). Hence no bound independent of~$X$ is possible.
\end{example}
\begin{example}
\label{exd}
Consider
${u_n(x)=x^n+x^D+1}$,
for which
$$
(c_1(x),c_2(x),f_1(x),f_2(x))=(1,x^D+1,x,1).
$$
Then $u_{2D}=(x^{3D}-1)/(x^D-1)$ vanishes at primitive $3D$th roots of unity, so we have ${m\ge 3D}$. Hence no bound independent of~$D$ is possible.
\end{example}
One may also ask whether in Theorem~\ref{thm:thmmain1} one can bound~$n$ such that $u_n(x)$ vanishes at a root of unity. The answer is ``no'' in general. Indeed, if polynomials ${c_1(x)f_1(x)}$ and ${c_2(x)f_2(x)}$ have a common root, then every $u_n(x)$ will vanish at that root. But even if ${c_1(x)f_1(x)}$ and ${c_2(x)f_2(x)}$ do not simultaneously vanish at some root of unity, it is still possible that $u_n(x)$ vanishes at a root of unity for infinitely many~$n$. This is, for instance, the case for the sequence ${u_n(x)=x^n+x^D+1}$ from Example~\ref{exd}: it vanishes at primitive $3D$th roots of unity whenever ${n\equiv 2D\bmod 3D}$. Nevertheless, we can bound the \textit{smallest}~$n$ with this property. Here is the precise statement.
\begin{theorem}
\label{thsmallestn}
In the set-up of Theorem~\ref{thm:thmmain}, assume that, for a given~$m$, the set of positive integers~$n$ with the property ``the polynomial $u_n(x)$ is not identically~$0$ but vanishes at an $m$th root of unity'' is not empty. Then the smallest~$n$ in this set satisfies
$$
n \le m(\log m)^3(X+\log D).
$$
More precisely, either there exists~$n$ in this set satisfying ${n\le 2m}$, or every~$n$ in this set satisfies ${n\le m(\log m)^3(X+\log D)}$.
\end{theorem}
Throughout the article we use standard notation. We denote by $\varphi(n)$ the Euler function, by $\mu(n)$ the Möbius function, by $\Lambda(n)$ the von Mangoldt function and by ${\omega(n)}$ the number of prime divisors of~$n$ counted without multiplicity.
Theorems~\ref{thm:thmmain} and~\ref{thm:thmmain1} are proved in Section~\ref{sproofs}, and Theorem~\ref{thsmallestn} is proved in Section~\ref{ssmallestn}.
In Sections~\ref{sheights},~\ref{scyclo} and~\ref{sprim} we collect various auxiliary facts used in the proof. In particular, in Section~\ref{sprim} we revisit Schinzel's classical Primitive Divisor Theorem~\cite{Sc74}. We obtain a version of this theorem fully explicit in all parameters, which is a key ingredient in our proof of Theorem~\ref{thm:thmmain1}.
\section{Heights}
\label{sheights}
All results of this section are well known, but sometimes we prefer to give a short proof rather than to look for a bibliographical reference.
Recall the definition of the absolute logarithmic (projective) height. Let
$$
\bar\alpha=(\alpha_0,\alpha_1, \ldots, \alpha_k)\in \bar\Q^{k+1}
$$
be a non-zero vector of algebraic numbers. Pick a number field~$K$ containing all~$\alpha_i$ and normalize the absolute values of~$K$ to extend the standard absolute values of~$\Q$. With this normalization, the height
of~$\bar\alpha$ is defined by
\begin{equation}
\label{eheight}
{\mathrm{h}}(\bar\alpha)=d^{-1}\sum_{v\in M_K}d_v\log\max\{|\alpha_0|_v,\ldots, |\alpha_k|_v\},
\end{equation}
where ${d=[K:\Q]}$ and ${d_v=[K_v:\Q_v]}$ is the local degree. This definition is known to be independent of the choice of~$K$ and invariant under multiplication of~$\bar\alpha$ by a non-zero algebraic number: ${{\mathrm{h}}(\lambda\bar\alpha)={\mathrm{h}}(\bar\alpha)}$ for ${\lambda \in \bar\Q^\times}$. When ${\bar\alpha \in \Q^{k+1}}$ this definition coincides with the definition of height from Section~\ref{sintr}.
Separating the contributions of infinite and finite places, we can rewrite equation~\eqref{eheight} as
\begin{equation}
\label{eheightold}
\begin{aligned}
{\mathrm{h}}(\bar\alpha)&= d^{-1}\sum_{K\stackrel\sigma\hookrightarrow \C} \log \max\{|\alpha_0^\sigma|,\ldots, |\alpha_k^\sigma|\}\\
&+ d^{-1}\sum_{{\mathfrak{p}}}\max\{-\nu_{\mathfrak{p}}(\alpha_0), \ldots, -\nu_{\mathfrak{p}}(\alpha_k)\}\log{\mathcal{N}}{\mathfrak{p}},
\end{aligned}
\end{equation}
where the first sum is over the complex embeddings of~$K$, the second sum is over the finite primes of~$K$, and ${\mathcal{N}}{\mathfrak{p}} $ denotes the absolute norm of~${\mathfrak{p}}$.
Now we define the height ${\mathrm{h}}(g)$ of a non-zero polynomial~$g$ with algebraic coefficients (in one or in several variables), or, more generally, the height ${{\mathrm{h}}(g_1,\ldots, g_k)}$ of a vector of such polynomials as the height of the vector of all coefficients of those polynomials (ordered somehow).
With a standard abuse of notation, for ${\alpha\in \bar\Q}$ we write ${{\mathrm{h}}(\alpha)}$ for ${{\mathrm{h}}(1,\alpha)}$. If~$\alpha$ belongs to a number field~$K$ then
\begin{align}
\label{ehplus}
{\mathrm{h}}(\alpha)&=d^{-1}\sum_{v\in M_K}d_v\log^+|\alpha|_v\\
\label{ehmin}
&=d^{-1}\sum_{v\in M_K}-d_v\log^-|\alpha|_v \qquad (\alpha\ne 0),
\end{align}
where ${\log^+=\max\{\log, 0\}}$ and ${\log^-=\min \{\log,0\}}$.
\begin{lemma}
\label{lhpol}
Let ${\alpha\in \bar\Q}$ and let ${f(x)\in\bar\Q[x]}$ be a polynomial of degree at most~$D$. Then
\begin{equation}
\label{ehfa}
{\mathrm{h}}(f(\alpha))\le D{\mathrm{h}}(\alpha)+ {\mathrm{h}}(1,f)+\log(D+1).
\end{equation}
More generally, if ${g(x)\in\bar\Q[x]}$ is another polynomial of degree less or equal to~$D$ and ${g(\alpha) \ne0}$ then
\begin{equation}
\label{ehfga}
{\mathrm{h}}(f(\alpha)/g(\alpha))\le D{\mathrm{h}}(\alpha)+ {\mathrm{h}}(g,f)+\log(D+1).
\end{equation}
If ${f(\alpha) =0}$ then
\begin{equation}
\label{ehroot}
{\mathrm{h}}(\alpha)\le {\mathrm{h}}(f)+\log 2.
\end{equation}
Furthermore, let~$r$ be a non-negative integer. Then
\begin{equation}
\label{ehder}
{\mathrm{h}}(1,f^{(r)}/r!) \le {\mathrm{h}}(1,f)+D\log2.
\end{equation}
\end{lemma}
\begin{proof}
We start by proving~\eqref{ehfga}. By definition,
$$
{\mathrm{h}}(f(\alpha)/g(\alpha))={\mathrm{h}}(1,f(\alpha)/g(\alpha))={\mathrm{h}}(g(\alpha), f(\alpha)).
$$
Write
$$
f(x)=a_Dx^D+\cdots+a_0, \qquad g(x)=b_Dx^D+\cdots+b_0.
$$
Let~$K$ be a number field containing~$\alpha$ and the coefficients of $f,g$. We set ${d=[K:\Q]}$. For ${v\in M_K}$ we have
$$
|f(\alpha)|_v \le
\begin{cases}
(D+1)|f|_v\max\{1,|\alpha|_v\}^D,& v\mid \infty,\\
|f|_v\max\{1,|\alpha|_v\}^D,& v<\infty,
\end{cases}
$$
where ${|f|_v =\max\{|a_0|_v, \ldots,|a_D|_v\}}$, and similarly for $g(\alpha)$.
Hence
\begin{align*}
{\mathrm{h}}(g(\alpha), f(\alpha))&\le d^{-1}\sum_{v\in M_K}d_v\log\max\{|g(\alpha)|_v,|f(\alpha)|_v\}\\
&\le d^{-1}\sum_{v\in M_K}d_v(\log\max\{|f|_v,|g|_v\}+D\log^+|\alpha|_v) \\
&+d^{-1}\sum_{\substack{v\in M_K\\v\mid \infty}}d_v\log(D+1)\\
&= {\mathrm{h}}(g,f)+D{\mathrm{h}}(\alpha)+ \log(D+1),
\end{align*}
which proves~\eqref{ehfga}.
For~\eqref{ehroot} see \cite[Proposition~3.6(1)]{BB13}. Finally, we have
$$
\frac{f^{(r)}}{r!}(x)=\sum_{k=r}^D\binom kr a_k x^{k-r}.
$$
Since
$$
\binom kr \le 2^k\le 2^D,
$$
we have
$$
\left|\frac{f^{(r)}}{r!}\right|_v \le
\begin{cases}
2^D|f|_v, & v\mid \infty,\\
|f|_v, & v<\infty.
\end{cases}
$$
Hence
\begin{align*}
{\mathrm{h}}\left(1, \frac{f^{(r)}}{r!}\right) &= d^{-1}\sum_{v\in M_K}d_v\log^+\left|\frac{f^{(r)}}{r!}\right|_v\\
&\le d^{-1}\sum_{v\in M_K}d_v\log^+|f|_v +d^{-1}\sum_{\substack{v\in M_K\\v\mid \infty}}d_vD\log2\\
&= {\mathrm{h}}(1,f)+D\log2.
\end{align*}
The lemma is proved.
\end{proof}
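As a quick illustration of~\eqref{ehfa}, take ${f(x)=2x^2+1}$ and ${\alpha=1/3}$. Then ${f(\alpha)=11/9}$, so that ${{\mathrm{h}}(f(\alpha))=\log 11}$, while the right-hand side of~\eqref{ehfa} is
$$
2{\mathrm{h}}(1/3)+{\mathrm{h}}(1,f)+\log 3=\log 9+\log2+\log3=\log 54.
$$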
\begin{lemma}
\label{lhdivide}
Let ${f_1(x), \ldots, f_k(x)\in \bar \Q[x]}$ be non-zero polynomials of degrees not exceeding~$D$, and let ${g(x) \in \bar\Q[x]}$ be a common divisor of ${f_1, \ldots, f_k}$ (in the ring ${\bar\Q[x]}$). Then
$$
{\mathrm{h}}(f_1/g, \ldots, f_k/g) \le {\mathrm{h}}(f_1, \ldots, f_k) +(D+k-1)\log2.
$$
\end{lemma}
\begin{proof}
Consider the polynomial
$$
f(x,y_1, \ldots, y_{k-1}):=f_1(x)y_1+\cdots +f_{k-1}(x)y_{k-1}+ f_k(x) \in \bar\Q[x,y_1, \ldots, y_{k-1}].
$$
Applying Theorem~1.6.13 from \cite{BG06}, we obtain
$$
{\mathrm{h}}(f/g) \le {\mathrm{h}}(f/g)+{\mathrm{h}}(g) \le {\mathrm{h}}(f) +(D+k-1)\log2.
$$
Since
$$
{\mathrm{h}}(f_1/g, \ldots, f_k/g)= {\mathrm{h}}(f/g), \qquad {\mathrm{h}}(f_1, \ldots, f_k)={\mathrm{h}}(f),
$$
the result follows.
\end{proof}
\begin{lemma}
\label{lvals}
Let~$K$ be a number field of degree~$d$ and ${\alpha\in K}$. Then
\begin{equation}
\label{evalsone}
\sum_{\nu_{\mathfrak{p}}(\alpha)<0}\log{\mathcal{N}}{\mathfrak{p}} \le d{\mathrm{h}}(\alpha), \qquad \sum_{\nu_{\mathfrak{p}}(\alpha)>0}\log{\mathcal{N}}{\mathfrak{p}} \le d{\mathrm{h}}(\alpha),
\end{equation}
where the first sum is over (finite) primes~${\mathfrak{p}}$ of~$K$ with ${\nu_{\mathfrak{p}}(\alpha)<0}$, the second sum over those with ${\nu_{\mathfrak{p}}(\alpha)>0}$, and in the second sum we assume ${\alpha\ne 0}$.
More generally, let ${\alpha_1, \ldots, \alpha_k\in K}$. Then
\begin{equation}
\label{evalsmany}
\sum_{\substack{\nu_{\mathfrak{p}}(\alpha_i)<0\ \text{for} \\\text{some}\ i\in \{1,\ldots,k\}}}\log{\mathcal{N}}{\mathfrak{p}} \le d{\mathrm{h}}(\bar\alpha),
\qquad \bar\alpha=(1,\alpha_1, \ldots, \alpha_k).
\end{equation}
\end{lemma}
\begin{proof}
Inequality~\eqref{evalsmany} is immediate from~\eqref{eheightold} (note that ${\alpha_0=1}$), and both statements in~\eqref{evalsone} are special cases of~\eqref{evalsmany}.
\end{proof}
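For example, for ${K=\Q}$ and ${\alpha=12/5}$ we have ${{\mathrm{h}}(\alpha)=\log12}$; the first sum in~\eqref{evalsone} equals $\log5$ and the second equals ${\log2+\log3=\log6}$, both indeed at most $\log 12$.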
\begin{lemma}[``Liouville's inequality'']
\label{lliouv}
Let~$K$ and~$\alpha$ be as in Lemma~\ref{lvals}, ${\alpha \ne 0}$. Let ${S\subset M_K}$ be any set of places of~$K$ (finite or infinite). Then
$$
e^{-d{\mathrm{h}}(\alpha)}\le \prod_{v\in S}|\alpha|_v^{d_v}\le e^{d{\mathrm{h}}(\alpha)}.
$$
In particular, if ${\sigma_1, \ldots, \sigma_r:K\hookrightarrow\C}$ are some distinct complex embeddings of~$K$ then
$$
\prod_{i=1}^r |\alpha^{\sigma_i}|\ge e^{-d{\mathrm{h}}(\alpha)}.
$$
\end{lemma}
We omit the proof, which is well-known and easy.
\section{Cyclotomic polynomials}
\label{scyclo}
We denote by ${\Phi_m(T)}$ the $m$th cyclotomic polynomial. We will systematically use the identity
\begin{equation}
\label{ecyclomu}
\Phi_m(T) = \prod_{d\mid m}(T^d-1)^{\mu(m/d)},
\end{equation}
where~$\mu$ denotes the M\"obius function.
In this section we study values of cyclotomic polynomials at algebraic points. We give an asymptotic expression for the height of $\Phi_m(\gamma)$ as ${\gamma \in \bar\Q}$ is fixed and ${m\to\infty}$. We also estimate the absolute value of $\Phi_m(\gamma)$ from below.
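To illustrate~\eqref{ecyclomu}, take ${m=12}$: the divisors ${d\in\{1,3\}}$ do not contribute, because ${\mu(12)=\mu(4)=0}$, and we obtain
$$
\Phi_{12}(T)=\frac{(T^{12}-1)(T^2-1)}{(T^6-1)(T^4-1)}=T^4-T^2+1.
$$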
The results of this section can be viewed as totally explicit versions of some results from \cite[Section~3]{BBL13}, and we follow~\cite{BBL13} rather closely. We note however that all this goes back to the 1974 work of Schinzel~\cite{Sc74} or even earlier.
\subsection{The height}
\begin{theorem}
\label{tas}
Let~$\gamma$ be an algebraic number. Then
$$
{\mathrm{h}}(\Phi_m(\gamma))=\varphi(m){\mathrm{h}}(\gamma)+O_1\bigl(2^{\omega(m)}\log (\pi m)\bigr).
$$
\end{theorem}
Recall that ${A=O_1(B)}$ means that ${|A|\le B}$.
To prove this theorem we need some preparations. We follow \cite[Section~3]{BBL13} with some changes.
\begin{proposition}
\label{pcyc}
For a positive integer~$m$ we have
\begin{equation}
\label{ephinz}
\max_{|z|\le1}\log|\Phi_m(z)|\le 2^{\omega(m)}\log (\pi m),
\end{equation}
the maximum being over the unit disc on the complex plane. (We use the convention ${\log0=-\infty}$.) For ${0<\varepsilon\le 1/2}$ we also have
\begin{equation}
\label{ephitriv}
\min_{|z|\le1-\varepsilon}\log|\Phi_m(z)|\ge -2^{\omega(m)}\log \frac1\varepsilon.
\end{equation}
\end{proposition}
\begin{proof}
By the maximum principle, it suffices to prove that~\eqref{ephinz} holds for complex~$z$ with ${|z|=1}$. Thus, fix such~$z$. We will actually prove a slightly sharper bound
\begin{equation}
\label{ecircle}
\log|\Phi_m(z)|\le (2^{\omega(m)-1}+1)\log m+2^{\omega(m)}\log\pi.
\end{equation}
We can write~$z$ in a unique way as ${z=\zeta e^{2\pi i\theta/m}}$, where~$\zeta$ is an $m$th root of unity (not necessarily primitive) and ${-1/2<\theta\le 1/2}$. We may assume ${\theta\ne 0}$, because for the finitely many~$z$ with ${\theta=0}$ the bound extends by continuity. Moreover, replacing~$z$ by~$\bar z$ if necessary (which changes neither $|z|$ nor $|\Phi_m(z)|$, the polynomial~$\Phi_m$ having rational coefficients), we may assume ${0<\theta\le 1/2}$.
Let~$\ell$ be the exact order of~$\zeta$; thus, ${\ell\mid m}$ and~$\zeta$ is a primitive $\ell$th root of unity. Let~$d$ be any other divisor of~$m$. If ${\ell\nmid d}$ then ${d\le m/2}$ and
$$
2\ge |z^d-1| \ge 2\sin(\pi d/2m)\ge 2d/m.
$$
(We use the inequality ${|\sin x|\ge (2/\pi)x}$ which holds for ${|x|\le\pi/2}$.) This implies that
\begin{equation}
\label{elndivd}
\bigl|\log |z^d-1| \bigr|\le\log (m/d).
\end{equation}
And if ${\ell\mid d}$ then we have
${|z^d-1| = 2\sin(\pi \theta d/m)}$, which implies that
$$
2\pi\theta d/m \ge |z^d-1|\ge 4\theta d/m.
$$
Writing ${d=d'\ell}$,
this implies that
\begin{equation}
\label{eldivd}
\log |z^{d'\ell}-1| = \log d'-\log \frac m{2\ell\theta}+O_1\left(\log\pi\right).
\end{equation}
Using~\eqref{ecyclomu} we obtain
\begin{align*}
\log|\Phi_m(z)|
&= \sum_{\genfrac{}{}{0pt}{}{d\mid m}{\ell\nmid d}} \mu\left(\frac md\right) \log |z^d-1|+ \sum_{d'\mid m/\ell}\mu\left(\frac{m/\ell}{d'}\right) \log |z^{\ell d'}-1|\\
&\le \sum_{d\mid m} \left|\mu\left(\frac md\right)\right| \log \frac md+ \sum_{d'\mid m/\ell}\mu\left(\frac{m/\ell}{d'}\right) \left(\log d'-\log \frac m{2\ell\theta}\right)\\
&\hphantom{\le{}}+O_1(2^{\omega(m/\ell)}\log\pi)\\
&=2^{\omega(m)-1}\sum_{p\mid m}\log p +\Lambda\left(\frac m\ell\right) +\delta\log(2\theta)+ O_1(2^{\omega(m/\ell)}\log\pi),
\end{align*}
where ${\delta=0}$ if ${\ell<m}$ and ${\delta=1}$ if ${\ell=m}$. Since ${\log(2\theta)\le 0}$,
this proves~\eqref{ecircle}.
The proof of~\eqref{ephitriv} is much easier.
When ${|z|\le 1-\varepsilon}$, we have
$$
2\ge |z^d-1|\ge 1-|z|^d\ge 1-|z|\ge \varepsilon.
$$
Since ${0<\varepsilon\le 1/2}$ this implies that ${\bigl|\log|z^d-1|\bigr|\le \log (1/\varepsilon)}$. We obtain
$$
\bigl|\log |\Phi_m(z)|\bigr|=\left|\sum_{d\mid m}\mu\left(\frac md\right) \log|z^d-1|\right| \le 2^{\omega(m)}\log \frac1\varepsilon.
$$
In particular,~\eqref{ephitriv} holds.
\end{proof}
\begin{corollary}
\label{ccyc}
Let~$m$ be a positive integer and ${z\in \C}$. Then
$$
\log^+|\Phi_m(z)|= \varphi(m)\log^+|z|+O_1\bigl(2^{\omega(m)}\log (\pi m)\bigr),
$$
where ${\log^+=\max\{\log, 0\}}$.
\end{corollary}
\begin{proof}
For ${|z|\le 1}$ this is Proposition~\ref{pcyc}. If ${ |z|>1}$ then
\begin{equation}
\label{ephire}
\log|\Phi_m(z)|=\varphi(m)\log|z|+\log|\Phi_m(z^{-1})|,
\end{equation}
and ${\log|\Phi_m(z^{-1})|\le 2^{\omega(m)}\log (\pi m)}$ by Proposition~\ref{pcyc}. This already implies the upper bound
$$
\log^+|\Phi_m(z)|\le \varphi(m)\log^+|z|+2^{\omega(m)}\log (\pi m).
$$
The lower bound
\begin{equation}
\label{ephilo}
\log^+|\Phi_m(z)|\ge \varphi(m)\log^+|z|-2^{\omega(m)}\log (\pi m)
\end{equation}
is trivial when ${m=1}$, so we will assume ${m\ge 2}$ in the sequel. In the case ${1<|z|\le m/(m-1)}$ we have
$$
\log^+|\Phi_m(z)| \ge 0 \ge \varphi(m)\log\frac{m}{m-1}-1\ge \varphi(m)\log^+|z|-1,
$$
which is much better than wanted. Finally, if ${|z|\ge m/(m-1)}$, then
$$
\log|\Phi_m(z^{-1})|\ge -2^{\omega(m)}\log m
$$
by~\eqref{ephitriv} with ${\varepsilon=1/m}$. Hence~\eqref{ephilo} follows from~\eqref{ephire} in this case.
\end{proof}
\paragraph{Proof of Theorem~\ref{tas}.}
We use~\eqref{ehplus} with ${\alpha=\Phi_m(\gamma)}$. For ${v\in M_K}$ we have
$$
\log^+|\Phi_m(\gamma)|_v=
\begin{cases}
\varphi(m)\log^+|\gamma|_v+O_1\bigl(2^{\omega(m)}\log (\pi m)\bigr), & v\mid \infty,\\
\varphi(m)\log^+|\gamma|_v, & v<\infty.
\end{cases}
$$
Indeed, the archimedean case is Corollary~\ref{ccyc}, and the non-archimedean case is obvious. Summing up, the result follows.
\qed
\subsection{The lower bound}
The following result is proved in \cite[Corollary~4.2]{BL20} as a consequence of Baker's theory of logarithmic forms.
\begin{proposition}
\label{pabs}
Let~$\gamma$ be a complex algebraic number of degree~$d$, not a root of unity, and~$n$ a positive integer. Then
\begin{equation*}
|\gamma^n-1|\ge
e^{-10^{12}d^4({\mathrm{h}}(\gamma)+1)\log (n+1)}.
\end{equation*}
\end{proposition}
\begin{corollary}
\label{carch}
Let~$\gamma$ and~$m$ be as in Proposition~\ref{pabs}. Then
\begin{equation}
\label{elowerreal}
\log |\Phi_m(\gamma)|\ge -10^{12}d^4({\mathrm{h}}(\gamma)+1)\cdot 2^{\omega(m)}\log (m+1).
\end{equation}
\end{corollary}
\begin{proof}
If ${|\gamma|\ge 1}$ then
$$
\log|\Phi_m(\gamma)|=\varphi(m)\log|\gamma|+\log|\Phi_m(\gamma^{-1})|\ge \log|\Phi_m(\gamma^{-1})|.
$$
Hence, replacing, if necessary,~$\gamma$ by~$\gamma^{-1}$, we may assume ${|\gamma|\le 1}$.
We have
\begin{equation}
\label{esumagain}
\log|\Phi_m(\gamma)|=\sum_{n\mid m}\mu\left(\frac mn\right) \log |\gamma^n-1|.
\end{equation}
Proposition~\ref{pabs} implies that
$$
2\ge |\gamma^n-1|\ge e^{-10^{12}d^4({\mathrm{h}}(\gamma)+1)\log (n+1)}.
$$
Hence for ${1\le n\le m}$ we have
$$
\bigl|\log |\gamma^n-1|\bigr|\le 10^{12}d^4({\mathrm{h}}(\gamma)+1)\log (m+1).
$$
Substituting this to~\eqref{esumagain}, we obtain
$$
\bigl|\log |\Phi_m(\gamma)|\bigr|\le 10^{12}d^4({\mathrm{h}}(\gamma)+1)\cdot2^{\omega(m)}\log (m+1).
$$
In particular, we proved~\eqref{elowerreal}.
\end{proof}
\section{Schinzel's Primitive Divisor Theorem}
\label{sprim}
Let~$\gamma$ be a non-zero algebraic number, not a root of unity. We consider the sequence
$$
u_n=u_n(\gamma)=\gamma^n-1.
$$
(Note that in this section $(u_n)$ is a numerical sequence, while in the other sections it is a sequence of polynomials.) A prime ${\mathfrak{p}}$ of the number field ${K=\Q(\gamma)}$ is called a \textit{primitive divisor} of~$u_n$ if
$$
\nu_{\mathfrak{p}}(u_n) >0, \qquad \nu_{\mathfrak{p}}(u_k)=0 \quad (k=1, \ldots, n-1).
$$
For further use, let us fix here some basic properties of primitive divisors. Recall that ${\Phi_n(T)}$ denotes the $n$th cyclotomic polynomial, and~${\mathcal{N}}{\mathfrak{p}}$ is the absolute norm of~${\mathfrak{p}}$.
\begin{proposition}
\label{pprim}
Assume that~${\mathfrak{p}}$ is a primitive divisor of~$u_n$. Then~$n$ divides ${{\mathcal{N}}{\mathfrak{p}}-1}$ and ${\nu_{\mathfrak{p}}(\Phi_n(\gamma))\ge 1}$. In particular, ${n<{\mathcal{N}}{\mathfrak{p}}}$.
\end{proposition}
The proofs are very easy and we omit them.
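For instance, take ${\gamma=2}$, so that ${K=\Q}$. The prime~$31$ is a primitive divisor of ${u_5=2^5-1=31}$, and, in accordance with Proposition~\ref{pprim}, we have ${5\mid 31-1}$ and
$$
31\mid \Phi_5(2)=2^4+2^3+2^2+2+1=31.
$$
On the other hand, ${u_6=2^6-1=63=3^2\cdot7}$ admits no primitive divisor, since ${3\mid u_2}$ and ${7\mid u_3}$.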
Schinzel~\cite{Sc74} proved that~$u_n$ admits a primitive divisor for ${n\ge n_0(d)}$, where~$d$ is the degree of~$\gamma$. This was an improvement upon the earlier work~\cite{PS68}, where the same was proved under the assumption ${n\ge n_0(\gamma)}$.
Stewart~\cite{St77} made Schinzel's result explicit, but he imposed an additional hypothesis ${\gamma=\alpha/\beta}$, where ${\alpha,\beta\in {\mathcal{O}}_K}$ are coprime algebraic integers. Here we obtain a fully explicit version of Schinzel's result without any extra hypothesis.
\begin{theorem}
\label{thschin}
Let~$\gamma$ be an algebraic number of degree~$d$, not a root of unity. Assume that
\begin{equation}
\label{ehypo}
n\ge \max\{2^{d+1},10^{30}d^{9}\}.
\end{equation}
Then ${u_n=\gamma^n-1}$ admits a primitive divisor.
\end{theorem}
Theorem~\ref{thschin} is a consequence of the following result, appearing, albeit in a different setting, in Schinzel's work.
\begin{proposition}
\label{pup}
In the above set-up, assume that~$u_n$ does not admit a primitive divisor. Then
\begin{equation}
\label{eup}
{\mathrm{h}}(\Phi_n(\gamma)) \le 10^{13}d^4
({\mathrm{h}}(\gamma)+1)\cdot2^{\omega(n)}\log (n+1).
\end{equation}
\end{proposition}
\subsection{Proof of Proposition~\ref{pup}}
We start from the following well-known fact.
\begin{lemma}
\label{lwellknown}
Let~$K$ be a number field of degree~$d$ and~$p$ a prime number. Let~${\mathfrak{p}}$ be a prime of~$K$ above~$p$ of ramification index~$e_{\mathfrak{p}}$ (that is, ${e_{\mathfrak{p}}=\nu_{\mathfrak{p}}(p)}$). Let ${\xi\in K}$ satisfy
$$
\nu_{\mathfrak{p}}(\xi-1) >\frac {e_{\mathfrak{p}}}{p-1}.
$$
Then for any positive integer~$n$ we have
$$
\nu_{\mathfrak{p}}(\xi^n-1)=\nu_{\mathfrak{p}}(\xi-1)+\nu_{\mathfrak{p}}(n).
$$
\end{lemma}
The proof of the lemma can be found, for instance, in \cite[Lemma~1]{PS68}.
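To illustrate the lemma, take ${K=\Q}$, a prime ${p\ge 3}$ and ${\xi=1+p}$, so that ${\nu_p(\xi-1)=1>1/(p-1)}$. The lemma then yields the familiar ``lifting the exponent'' identity
$$
\nu_p\bigl((1+p)^n-1\bigr)=1+\nu_p(n),
$$
which can also be checked directly from the binomial expansion of ${(1+p)^n}$.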
\begin{lemma}
\label{lschin}
Let~$\gamma$ be an algebraic number of degree~$d$, not a root of unity, and~$n$ an integer satisfying ${n\ge 2^{d+1}}$. Let~${\mathfrak{p}}$ be a prime of the field $\Q(\gamma)$ which is not a primitive divisor of ${u_n=\gamma^n-1}$. Then
${\nu_{\mathfrak{p}}(\Phi_n(\gamma))\le \nu_{\mathfrak{p}}(n)}$.
\end{lemma}
This is Schinzel's~\cite{Sc74} crucial ``Lemma~4''. Since his set-up is slightly different, we reproduce the proof here.
\begin{proof}
We may assume that ${\nu_{\mathfrak{p}}(\gamma^n-1)>0}$, since there is nothing to prove otherwise. In particular, ${\nu_{\mathfrak{p}}(\gamma)=0}$.
For ${k=0,1,2,\ldots}$ denote by~$\ell_k$ the multiplicative order of ${\gamma \bmod {\mathfrak{p}}^k}$; that is,~$\ell_k$ is the smallest positive integer~$\ell$ with the property ${\nu_{\mathfrak{p}}(\gamma^\ell- 1)\ge k}$. Clearly, ${\nu_{\mathfrak{p}}(\gamma^n -1)\ge k}$ if and only if ${\ell_k\mid n}$. Together with~\eqref{ecyclomu} this implies that for every~$k$ the following holds:
\begin{equation}
\label{eschinzel}
\nu_{\mathfrak{p}}\bigl(\Phi_n(\gamma)\bigr)= \sum_{i=1}^k \sum_{\ell_i\mid m\mid n}\mu\left(\frac nm\right)+\sum_{\ell_{k+1}\mid m\mid n}\mu\left(\frac nm\right)\bigl(\nu_{\mathfrak{p}}(\gamma^m-1) -k\bigr).
\end{equation}
Let~$p$ be the rational prime below~${\mathfrak{p}}$ and ${e_{\mathfrak{p}}=\nu_{\mathfrak{p}}(p)}$ the ramification index. We will apply~\eqref{eschinzel} with
$$
k=\left\lfloor \frac {e_{\mathfrak{p}}}{p-1}\right\rfloor,
$$
which will be our choice of~$k$ from now on.
We claim that
\begin{equation}
\label{eclaim}
n>\ell_{k+1}.
\end{equation}
We postpone the proof of~\eqref{eclaim} (which is a bit messy) until later, and now complete the proof of the lemma assuming validity of~\eqref{eclaim}.
Since ${n>\ell_{k+1}\ge\ell_i}$ for ${i=1, \ldots, k}$, the double sum in~\eqref{eschinzel} vanishes. Also,
if ${\ell_{k+1}\mid m}$ then
$$
\nu_{\mathfrak{p}}(\gamma^m-1)=\nu_{\mathfrak{p}}(\gamma^{\ell_{k+1}}-1)+\nu_{\mathfrak{p}}\left(\frac{m}{\ell_{k+1}}\right)
$$
by Lemma~\ref{lwellknown}. Hence~\eqref{eschinzel} can be rewritten as
\begin{equation}
\label{eschinzelbis}
\nu_{\mathfrak{p}}\bigl(\Phi_n(\gamma)\bigr)= \sum_{\ell_{k+1}\mid m\mid n}\mu\left(\frac nm\right)\bigl(\nu_{\mathfrak{p}}(\gamma^{\ell_{k+1}}-1) -k\bigr)+ \sum_{\ell_{k+1}\mid m\mid n}\mu\left(\frac nm\right)\nu_{\mathfrak{p}}\left(\frac{m}{\ell_{k+1}}\right).
\end{equation}
Since ${n>\ell_{k+1}}$, the first sum in~\eqref{eschinzelbis} vanishes. As for the second sum, it vanishes (just being empty) if ${\ell_{k+1}\nmid n}$. From now on assume that ${\ell_{k+1}\mid n}$ and set ${n'=n/\ell_{k+1}}$. We obtain
\begin{equation*}
\nu_{\mathfrak{p}}\bigl(\Phi_n(\gamma)\bigr)= e_{\mathfrak{p}}\sum_{m'\mid n'}\mu\left(\frac {n'}{m'}\right)\nu_p\left(m'\right)=
\begin{cases}
e_{\mathfrak{p}},& \text{$n'$ is a power of~$p$},\\
0,& \text{otherwise}.
\end{cases}
\end{equation*}
In any case we obtain ${\nu_{\mathfrak{p}}\bigl(\Phi_n(\gamma)\bigr)\le \nu_{\mathfrak{p}}(n)}$. This proves the lemma.
We are left with the claim~\eqref{eclaim}. Note first of all that
\begin{equation}
\label{eellone}
n>\ell_1
\end{equation}
because~${\mathfrak{p}}$ is not a primitive divisor of~$u_n$.
Another useful observation is that
\begin{equation}
\label{epelli}
\ell_{i+1}\le p\ell_i\qquad (i=1,2, \ldots).
\end{equation}
Indeed,
$$
\gamma^{p\ell_i}-1=\sum_{j=1}^{p-1}\binom pj(\gamma^{\ell_i}-1)^j+(\gamma^{\ell_i}-1)^p,
$$
which implies that ${\nu_{\mathfrak{p}}(\gamma^{p\ell_i}-1)>\nu_{\mathfrak{p}}(\gamma^{\ell_i}-1)}$, proving~\eqref{epelli}.
If ${k=0}$ then~\eqref{eclaim} is~\eqref{eellone}. Now assume that ${k\ge 1}$. In this case
\begin{equation}
\label{ebet}
p-1\le e_{\mathfrak{p}}\le d.
\end{equation}
On the other hand, let ${p^{f_{\mathfrak{p}}}={\mathcal{N}}{\mathfrak{p}}}$ be the absolute norm of~${\mathfrak{p}}$. Clearly,
$$
\ell_1\le p^{f_{\mathfrak{p}}}-1\le p^{d/e_{\mathfrak{p}}}-1.
$$
In the special case ${p=3}$, ${e_{\mathfrak{p}}=d=2}$ we have ${k=1}$ and
${\ell_2\le p\ell_1 \le 6}$.
Since ${n\ge 2^{d+1}=8}$ by the hypothesis, this proves~\eqref{eclaim} in this special case.
From now on we assume that ${d\ge 3}$ when ${p=3}$.
Using~\eqref{epelli} iteratively, we obtain
$$
\ell_{k+1}\le p^k\ell_1 <p^{e_{\mathfrak{p}}/(p-1)+d/e_{\mathfrak{p}}}\le \max_{p-1\le t\le d}p^{t/(p-1)+d/t}=p^{1+d/(p-1)}.
$$
We have to show that
\begin{equation*}
p^{1+d/(p-1)}\le 2^{d+1}.
\end{equation*}
This is true by inspection in the cases
$$
p=2, \qquad p=3,\ d\ge 3, \qquad p=5, \ d\ge 4.
$$
Now assume that ${p\ge 7}$, in which case ${d\ge 6}$. Since ${p\le d+1}$, we have
$$
p^{1+d/(p-1)} \le (d+1) \cdot7^{d/6}.
$$
A calculation shows that
${(d+1) \cdot7^{d/6} \le 2^{d+1}}$
for ${d\ge 6}$. This completes the proof of~\eqref{eclaim}.
\end{proof}
\paragraph{Proof of Proposition~\ref{pup}.}
We use~\eqref{ehmin} with ${\alpha=\Phi_n(\gamma)}$.
For ${v\in M_K}$ we have
$$
-\log^-|\Phi_n(\gamma)|_v\le
\begin{cases}
10^{12}d^4({\mathrm{h}}(\gamma)+1)\cdot2^{\omega(n)}\log (n+1), & v\mid \infty,\\
-\log|n|_v, & v<\infty.
\end{cases}
$$
Indeed, the archimedean case is Corollary~\ref{carch}, and the non-archimedean case is Lemma~\ref{lschin}. Summing up, we obtain
$$
{\mathrm{h}}(\Phi_n(\gamma))\le 10^{12}d^4({\mathrm{h}}(\gamma)+1)\cdot2^{\omega(n)}\log (n+1)+\log n,
$$
which is sharper than~\eqref{eup}.
\qed
\subsection{Proof of Theorem~\ref{thschin}}
Assume~$u_n$ does not have a primitive divisor, but~$n$ satisfies~\eqref{ehypo}. We have, in particular, ${n\ge 10^{30}}$.
Comparing Proposition~\ref{pup} and Theorem~\ref{tas}, we obtain
\begin{align*}
\varphi(n){\mathrm{h}}(\gamma)&\le
10^{13}d^4 %
({\mathrm{h}}(\gamma)+1)\cdot2^{\omega(n)}\log (n+1)+2^{\omega(n)}\log (\pi n)\\
&\le
10^{14}d^4
({\mathrm{h}}(\gamma)+1)\cdot2^{\omega(n)}\log (n+1).
\end{align*}
Since~$\gamma$ is not a root of unity, we have
\begin{equation}
\label{evout}
d{\mathrm{h}}(\gamma) \ge 2(\log(3d))^{-3},
\end{equation}
see \cite[Corollary~2]{Vo96}. Hence
$$
\varphi(n){\mathrm{h}}(\gamma)\le
10^{15}d^5 (\log(3d))^3
{\mathrm{h}}(\gamma)\cdot2^{\omega(n)}\log (n+1),
$$
which implies
\begin{equation}
\label{ealmost}
\varphi(n)\le
10^{15}d^5 (\log(3d))^3
\cdot2^{\omega(n)}\log (n+1).
\end{equation}
For ${n\ge 10^{30}}$ we have
\begin{equation}
\label{eboundphomega}
\varphi(n) \ge 0.5 \frac n{\log\log n},\qquad
\omega(n)\le \frac{\log n}{\log\log n-1.2},
\end{equation}
see \cite[Theorem~15]{RS62} and~\cite[Theorem~13]{Ro83}.
Hence for ${n\ge 10^{30}}$
\begin{align*}
2^{\omega(n)}\frac{n}{\varphi(n)}\log(n+1) &\le n^{(\log2)/(\log\log(10^{30})-1.2)} \cdot 2(\log\log n) \cdot \log(n+1)\\
&\le n^{1/3}.
\end{align*}
Using this, we deduce from~\eqref{ealmost} the inequality
${n^{2/3}\le 10^{15}d^5 (\log(3d))^3 }$.
This is incompatible with~\eqref{ehypo}: indeed, ${n\ge 10^{30}d^9}$ implies ${n^{2/3}\ge 10^{20}d^6}$, which exceeds ${10^{15}d^5 (\log(3d))^3}$ because ${10^5d>(\log(3d))^3}$ for every ${d\ge 1}$. \qed
\section{Proof of Theorems~\ref{thm:thmmain} and~\ref{thm:thmmain1}}
\label{sproofs}
Since condition~\ref{iex} of Theorem~\ref{thm:thmmain} trivially implies condition~\ref{iinf} (see Example~\ref{exinf}), it suffices to prove Theorem~\ref{thm:thmmain1}. Thus, in the sequel:
\begin{itemize}
\item
$c_i(x)$ and $f_i(x)$ are polynomials not satisfying condition~\ref{iex} of Theorem~\ref{thm:thmmain} and
$$
u_n(x)=c_1(x)f_1(x)^n+c_2(x)f_2(x)^n \qquad (n=1,2,\ldots);
$$
\item
$m$ and~$n$ are positive integers such that ${u_n(\zeta)=0}$ for a primitive $m$th root of unity~$\zeta$; since ${u_n(x)\in \Q[x]}$, this is equivalent to
\begin{equation}
\Phi_m(x)\mid u_n(x).
\end{equation}
\end{itemize}
\subsection{Some reductions}
We start with some general observations.
\begin{itemize}
\item
We may assume that
\begin{equation}
\label{enonv}
c_1(\zeta)c_2(\zeta)f_1(\zeta)f_2(\zeta) \ne 0.
\end{equation}
Otherwise ${\varphi(m)\le D}$, and, using
\begin{equation}
\label{ephgeroot}
\varphi(m)\ge m^{1/2} \qquad (m\ne 2,6)
\end{equation}
(see~\cite{Va67}), we obtain ${m\le \max\{6, D^2\}}$, which is much sharper than what we want to prove.
\item
We may assume that at least one of $f_1,f_2$ is a non-constant polynomial. Otherwise ${\deg u_n(x)\le D}$, and we again obtain ${\varphi(m)\le D}$.
\item
We may assume that ${n>D}$. Otherwise ${\deg u_n(x) \le D+D^2}$, and, using~\eqref{ephgeroot} we obtain ${m\le \max\{6, (D+D^2)^2\}}$, again much sharper than the wanted result.
\item
Replacing ${c_i(x)}$ and ${f_i(x)}$ by
$$
\tilde{c}_i(x):=c_i(x)/\gcd(c_1(x),c_2(x)), \qquad \tilde{f}_i(x):=f_i(x)/\gcd(f_1(x),f_2(x)),
$$
respectively, we may assume that the polynomials $c_1,c_2$ are coprime in the ring $\Q[x]$, and so are $f_1,f_2$:
\begin{equation}
\label{ecoprime}
\gcd(c_1(x),c_2(x))=\gcd(f_1(x),f_2(x)) =1.
\end{equation}
Lemma~\ref{lhdivide} implies that
$$
{\mathrm{h}}(\tilde{c}_1,\tilde{c}_2) \le {\mathrm{h}}(c_1,c_2)+(D+1)\log2 \le X+(D+1)\log2,
$$
and similarly for ${{\mathrm{h}}(\tilde{f}_1,\tilde{f}_2)}$. Hence, to prove~\eqref{eupperm} in the general case, it suffices to prove
\begin{equation}
\label{eupperngam}
m\le e^{30D(X+D)}
\end{equation}
in the ``coprime case'', that is, assuming~\eqref{ecoprime}.
\end{itemize}
We distinguish several cases according to the nature of roots of our polynomials:
\begin{enumerate}
\item
$f_1(x)f_2(x)$ admits a root which is non-zero and not a root of unity;
\item
$f_1(x)f_2(x)$ vanishes at a root of unity;
\item
$f_1(x)f_2(x)$ vanishes only at~$0$.
\end{enumerate}
These cases are treated separately in the subsequent subsections.
\subsection{The polynomial $f_1(x)f_2(x)$ admits a root~$\gamma$ which is non-zero and not a root of unity}
\label{ssgamma}
By symmetry, we may assume that~$\gamma$ is a root of $f_1(x)$.
Since the statement of Theorem~\ref{thm:thmmain1} is invariant under multiplication of the polynomials $c_1,c_2$ by the same non-zero rational number, we may assume that the polynomial $c_1(x)$ is monic. Similarly, we may assume that $f_1(x)$ is monic.
Denote ${K=\Q(\gamma)}$. Then
$$
d:=[K:\Q]\le D.
$$
Since ${X\ge 3}$, the right-hand side of~\eqref{eupperngam} exceeds ${10^{30}D^9}$.
Hence we may assume that
$$
m>\max\{ 2^{d+1}, 10^{30}d^9\}.
$$
Theorem~\ref{thschin} together with Proposition~\ref{pprim} implies now that there exists a prime~${\mathfrak{p}}$ of~$K$ such that ${\nu_{\mathfrak{p}}(\Phi_m(\gamma))>0}$ and
\begin{equation*}
m<{\mathcal{N}}{\mathfrak{p}}.
\end{equation*}
So we only have to bound ${\mathcal{N}}{\mathfrak{p}}$.
\subsubsection{The numbers~$\beta$ and~$\delta$}
We have ${f_2(\gamma)\ne 0}$ by~\eqref{ecoprime}. However, it is possible that ${c_2(\gamma)=0}$. Denote by~$r$ the order of vanishing of $c_2(x)$ at~$\gamma$, and set
$$
\beta=\frac{c_2^{(r)}(\gamma)}{r!}, \qquad \delta=f_2(\gamma).
$$
These are non-zero elements of the number field~$K$.
We claim that one of the following holds:
\begin{align}
\label{elocal}
\nu_{\mathfrak{p}}(\alpha)&<0 \quad\text{for some coefficient $\alpha$ of~$c_1$ or~$f_1$ or~$c_2$ or~$f_2$};\\
\label{ebeta}
\nu_{\mathfrak{p}}(\beta) &>0;\\
\label{edelta}
\nu_{\mathfrak{p}}(\delta)&>0.
\end{align}
Indeed, since ${\nu_{\mathfrak{p}}(\Phi_m(\gamma))>0}$, there exists a primitive $m$th root of unity~$\zeta$ and a prime ${{\mathfrak{P}}\mid {\mathfrak{p}}}$ of the field $K(\zeta)$ such that
$$
\nu_{\mathfrak{P}}(\zeta-\gamma) >0.
$$
Now, if~\eqref{elocal} does not hold, then our four polynomials belong to ${{\mathcal{O}}_{\mathfrak{P}}[x]}$, where~${\mathcal{O}}_{\mathfrak{P}}$ is the local ring of~${\mathfrak{P}}$. Moreover, since~$f_1$ is monic, ${\gamma\in {\mathcal{O}}_{\mathfrak{P}}}$. Hence the polynomials
$$
F(x):=\frac{c_1(x)f_1(x)^n}{(x-\gamma)^r}, \qquad G(x) :=\frac{c_2(x)}{(x-\gamma)^r}
$$
belong to ${\mathcal{O}}_{\mathfrak{P}}[x]$ as well. Note that $F(x)$ is indeed a polynomial, and moreover
$$
F(\gamma) =0,
$$
because ${n>D\ge r}$.
We have ${\beta=G(\gamma)}$ and
${F(\zeta) =-G(\zeta)f_2(\zeta)^n}$ (because ${u_n(\zeta)=0}$).
This implies the following congruences in the ring~${\mathcal{O}}_{\mathfrak{P}}$:
\begin{align*}
\beta\delta^n \equiv G(\zeta)f_2(\zeta)^n \equiv -F(\zeta)\equiv -F(\gamma) \equiv 0\mod{\mathfrak{P}}.
\end{align*}
Hence either ${\beta\equiv0\bmod {\mathfrak{P}}}$ or ${\delta\equiv0\bmod {\mathfrak{P}}}$, which means that one of~\eqref{ebeta} or~\eqref{edelta} holds true.
\subsubsection{Estimates}
Now we are ready to estimate ${\mathcal{N}}{\mathfrak{p}}$. Using Lemma~\ref{lvals}, we obtain
\begin{equation}
\log{\mathcal{N}}{\mathfrak{p}} \le \max\{{\mathrm{h}}(1,c_1),{\mathrm{h}}(1,c_2),{\mathrm{h}}(1,f_1), {\mathrm{h}}(1,f_2), {\mathrm{h}}(\beta), {\mathrm{h}}(\delta)\}.
\end{equation}
Since $f_1(x)$ is a monic polynomial, we have
\begin{equation}
\label{ehfoneftwo}
{\mathrm{h}}(1,f_1),{\mathrm{h}}(1,f_2)\le {\mathrm{h}}(f_1,f_2) \le X,
\end{equation}
and similarly for $c_1,c_2$.
Furthermore, using Lemma~\ref{lhpol}, we find
\begin{align*}
{\mathrm{h}}(\gamma) & \le {\mathrm{h}}(f_1)+\log2\\
& \le X+\log2,\\
{\mathrm{h}}(\delta) &\le {\mathrm{h}}(1,f_2)+ D{\mathrm{h}}(\gamma) + \log(D+1)\\
&\le (D+1)X+2D,\\
{\mathrm{h}}(\beta) &\le {\mathrm{h}}(1, c_2^{(r)}/r!) + D{\mathrm{h}}(\gamma) + \log(D+1) \\
& \le {\mathrm{h}}(1,c_2) + D\log2 +DX+D\log2+\log(D+1) \\
&\le (D+1)X+2D.
\end{align*}
This implies that
$$
\log{\mathcal{N}}{\mathfrak{p}} \le (D+1)X+2D <3DX.
$$
Since ${m< {\mathcal{N}}{\mathfrak{p}}}$, this proves~\eqref{eupperngam}.
\subsection{The polynomial $f_1(x)f_2(x)$ vanishes at a root of unity~$\xi$}
We may assume that ${f_1(\xi)=0}$. Then ${f_2(\xi)\ne 0}$ by~\eqref{ecoprime}.
Let us describe our argument informally. Since ${f_1(\xi)/f_2(\xi)=0}$, there exists ${\varepsilon>0}$ such that ${|f_1(z)/f_2(z)|\le 1/2}$ when ${|z-\xi|\le\varepsilon}$.
Now assume that ${u_n(\zeta)=0}$ for some primitive $m$th root of unity~$\zeta$. Using~\eqref{enonv}, we may write
\begin{equation}
\label{ealphanow}
0\ne\alpha:=\frac{c_2(\zeta)}{c_1(\zeta)}=-\left(\frac{f_1(\zeta)}{f_2(\zeta)}\right)^n.
\end{equation}
Let ${\Q(\zeta)\stackrel\sigma\hookrightarrow \C}$ be a complex embedding of the field~$\Q(\zeta)$ such that~$\zeta^\sigma$ belongs to the $\varepsilon$-neighborhood of~$\xi$. Then
${|\alpha^\sigma| \le (1/2)^n}$.
Define
\begin{equation}
\label{ebetanow}
\beta:=\prod_{|\zeta^\sigma-\xi|\le \varepsilon} \alpha^\sigma,
\end{equation}
the product being over all~$\sigma$ as above. Since the $\varepsilon$-neighborhood of~$\xi$ contains a positive proportion of primitive $m$th roots of unity, we have
$$
-\log|\beta|\gg n\varphi(m),
$$
where the implied constant depends on our polynomials $c_i$ and $f_i$ and on our choice of~$\varepsilon$.
On the other hand, ${\alpha\ne 0}$, and ${{\mathrm{h}}(\alpha) \ll1}$ by Lemma~\ref{lhpol}. Hence Liouville's inequality (Lemma~\ref{lliouv}) implies that
$$
-\log|\beta| = \sum_{|\zeta^\sigma-\xi|\le \varepsilon}-\log |\alpha^\sigma| \ll [\Q(\zeta):\Q]=\varphi(m).
$$
This bounds~$n$.
All of this will be made explicit in Subsection~\ref{sssexpl}. But first, we establish some simple lemmas.
\subsubsection{Some lemmas}
\begin{lemma}
\label{lcountcoprime}
Let ${a,b\in \R}$, ${a<b}$, and let~$m$ be a positive integer. Denote by ${\varphi(m,a,b)}$ the number of integers~$k$ coprime with~$m$ and satisfying ${a\le k\le b}$. Then
$$
\varphi(m,a,b) = \frac{b-a}m\varphi(m)+O_1(2^{\omega(m)}).
$$
\end{lemma}
For the proof, see \cite[Lemma~2.3]{FGL17}.
\begin{lemma}
\label{lcountroots}
Let~$\varepsilon$ satisfy ${0<\varepsilon\le 1}$ and let~$\xi$ be a complex number on the unit circle; that is, ${|\xi|=1}$. Let~$m$ be a positive integer. Then there exist at least ${\pi^{-1}\varepsilon\varphi(m)- 2^{\omega(m)}}$ primitive $m$th roots of unity~$\zeta$ satisfying ${|\zeta-\xi|\le \varepsilon}$.
\end{lemma}
\begin{proof}
Write ${\xi=e^{2\pi \theta i}}$ with ${\theta\in \R}$, and let ${\eta>0}$ be the smallest positive real number with the property ${2\sin (\pi \eta)=\varepsilon}$. Note that ${1/6\ge \eta>(2\pi)^{-1}\varepsilon}$. If~$k$ is an integer satisfying
$$
m(\theta-\eta) \le k\le m(\theta+\eta), \qquad \gcd(m,k)=1,
$$
then ${\zeta:=e^{2\pi i k/m}}$ is a primitive $m$th root of unity satisfying ${|\zeta-\xi|\le \varepsilon}$.
Lemma~\ref{lcountcoprime} implies that there are at least ${2\eta\varphi(m)- 2^{\omega(m)}}$ choices for~$k$, with distinct~$k$ giving rise to distinct~$\zeta$ (this is because ${\eta\le 1/6}$). Since ${\eta \ge (2\pi)^{-1}\varepsilon}$, the result follows.
\end{proof}
\begin{lemma}
\label{leps}
Let ${f_1(x),f_2(x)\in \C[x]}$ be polynomials of degrees bounded by~$D$, and with coefficients bounded by ${H\ge 1}$ in absolute value. Let ${\xi\in \C}$ be such that
$$
|\xi|\le 1, \qquad f_1(\xi)=0, \qquad f_2(\xi)=\delta\ne 0.
$$
Set
$$
\varepsilon= \frac{\min\{|\delta|,1\}}{3D^2H}.
$$
Then for ${z\in \C}$ satisfying ${|z-\xi|\le \varepsilon}$ we have ${|f_1(z)/f_2(z)|\le 1/2}$.
\end{lemma}
\begin{proof}
Since ${|\xi|\le 1}$ and ${\varepsilon \le 1/(3D)}$, we have, for ${|z-\xi|\le\varepsilon}$, the trivial estimates
$$
|f_i'(z)|\le \frac12D(D+1)H(1+\varepsilon)^{D-1}\le D^2H \qquad (i=1,2).
$$
Hence for ${|z-\xi|\le\varepsilon}$ we have
\begin{equation*}
|f_1(z)|\le D^2H\varepsilon \le \frac13|\delta|, \qquad
|f_2(z)|\ge |\delta|-D^2H\varepsilon \ge \frac23|\delta|.
\end{equation*}
This proves the lemma.
\end{proof}
\subsubsection{The estimates}
\label{sssexpl}
As in Subsection~\ref{ssgamma} we may assume that~$f_1$ is monic, which implies that we have~\eqref{ehfoneftwo}. In particular, the coefficients of~$f_1$ and~$f_2$ are bounded in absolute value by ${H:=e^X}$. Set ${\delta=f_2(\xi)}$.
Note that the degree of~$\xi$ is at most~$D$, and ${{\mathrm{h}}(\xi)=0}$ because~$\xi$ is a root of unity.
Using Lemmas~\ref{lhpol} and~\ref{lliouv}, we estimate
$$
|\delta|\ge e^{-{\mathrm{h}}(f_2(\xi))} \ge e^{-{\mathrm{h}}(1,f_2)-\log(D+1)}\ge ((D+1)H)^{-1}.
$$
Setting ${\varepsilon =(6D^3H^2)^{-1}}$, Lemma~\ref{leps} implies that
$$
\left|\frac{f_1(z)}{f_2(z)}\right|\le 1/2
$$
for ${z\in \C}$ with ${|z-\xi|\le \varepsilon}$.
Now define~$\alpha$ and~$\beta$ as in~\eqref{ealphanow},~\eqref{ebetanow}. Then
\begin{equation}
\label{ebetasmall}
-\log|\beta|\ge nr\log2,
\end{equation}
where~$r$ is the number of embeddings ${\Q(\zeta)\stackrel\sigma\hookrightarrow\C}$ such that ${|\zeta^\sigma-\xi|\le \varepsilon}$. Denote by ${\sigma_1, \ldots, \sigma_r}$ all those~$\sigma$.
Lemmas~\ref{lliouv} and~\ref{lhpol} imply that
\begin{align*}
-\log|\beta|&=\sum_{i=1}^r-\log|\alpha^{\sigma_i}|\\
&\le [\Q(\zeta):\Q] {\mathrm{h}}(\alpha) \\
&\le \varphi(m) ({\mathrm{h}}(c_1,c_2)+\log(D+1))\\
&\le \varphi(m) (X+\log(D+1)).
\end{align*}
Together with~\eqref{ebetasmall} this implies that
\begin{equation}
\label{erphm}
n\le \frac{\varphi(m)}{r\log2}(X+\log(D+1)),
\end{equation}
so we only have to bound~$r$ from below.
Lemma~\ref{lcountroots} implies that
$$
r\ge \pi^{-1}\varepsilon \varphi(m)-2^{\omega(m)},
$$
where we recall that
${\varepsilon=(6D^3H^2)^{-1}}$ with ${H=e^X}$.
Using~\eqref{eboundphomega} with~$n$ replaced by~$m$, a messy but trivial calculation shows that either ${m\le e^{30D(X+D)}}$ (as we want) or
${2^{\omega(m)} \le (2\pi)^{-1}\varepsilon \varphi(m)}$.
Thus, ${r\ge (2\pi)^{-1}\varepsilon \varphi(m)}$, which, substituted into~\eqref{erphm}, gives
$$
n\le 100 D^4e^{3X}.
$$
Then
$$
\varphi(m) \le \deg u_n(x) \le 200D^5e^{3X},
$$
and, using~\eqref{ephgeroot}, we deduce from this an estimate much sharper than~\eqref{eupperngam}.
\subsection{The only root of $f_1(x)f_2(x)$ is~$0$}
We may assume that ${f_1(x)=1}$ and ${f_2(x)=\kappa x^b}$, where ${\kappa\in \Q^\times}$ and
$$
1\le b\le D<n.
$$
We recall the following theorem of Mann~\cite{Ma65}.
\begin{theorem}
Let ${a_0,a_1,\ldots,a_k\in\Q^\times}$ and $x_0=1,x_1,\ldots,x_k$ be roots of unity such that
\begin{equation}
\label{eq:Mann}
a_0x_0+a_1x_1+\cdots+a_kx_k=0.
\end{equation}
Assume that
\begin{equation}
\label{eq:nondeg}
\sum_{i\in I} a_i x_i\ne 0
\end{equation}
for every non-empty proper subset ${I\subset \{0,\ldots,k\}}$. Then ${x_i^m=1}$ for all~$i$,
where
$$
m=\prod_{p\le k+1} p.
$$
\end{theorem}
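Mann's theorem can be illustrated numerically in the simplest non-trivial case ${k=2}$ with ${a_0=a_1=a_2=1}$ (an example of our own choosing, not taken from the text): here every vanishing sum ${1+x_1+x_2=0}$ in roots of unity is automatically non-degenerate, since a vanishing proper sub-sum would force some ${x_i=0}$, and the theorem predicts ${x_i^m=1}$ with ${m=2\cdot3=6}$.

```python
import cmath
import math

# Brute-force search for vanishing sums 1 + x1 + x2 = 0 with x1, x2
# roots of unity of order at most N.  For these coefficients every solution
# is non-degenerate, and Mann's theorem predicts x_i^6 = 1.
N = 30
roots = [cmath.exp(2j * math.pi * k / n)
         for n in range(1, N + 1) for k in range(n)]

solutions = [(x1, x2) for x1 in roots for x2 in roots
             if abs(1 + x1 + x2) < 1e-9]

# every solution found consists of sixth roots of unity (in fact, of the two
# primitive cube roots of unity)
all_sixth = all(abs(x ** 6 - 1) < 1e-6 for pair in solutions for x in pair)
```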
For us, we label
$$
c_i(x)=\sum_{j=0}^D c_{i,j} x^j\quad {\text{\rm for}}\quad i=1,2,
$$
and we get
\begin{equation}
\label{eq:Mann1}
\sum_{j=0}^D c_{1,j} \zeta^j+\sum_{j=0}^D c_{2,j}\kappa^n \zeta^{j+nb}=0.
\end{equation}
This almost looks like the equation from Mann's theorem \eqref{eq:Mann}, except that the non-degeneracy condition \eqref{eq:nondeg} might fail. So, let us study~\eqref{eq:Mann1}. Let $C$ be the set of non-zero coefficients among $c_{1,j}$ and $c_{2,j}\kappa^n$ for $0\le j\le D$. If $c\in C$, then ${c=c_{\ell,j}\kappa^{\delta n}}$ for some $\ell\in \{1,2\}$ and $j\in \{0,\ldots,D\}$, and we put $x_c=\zeta^{j+\delta nb}$, where $\delta=0$ if $\ell=1$ and $\delta=1$ if $\ell=2$. With these conventions, equation \eqref{eq:Mann1} becomes
$$
\sum_{c\in C} cx_c=0.
$$
This splits into a certain number of non-degenerate equations. That is, there is a partition $C_1\cup C_2\cup \cdots \cup C_t=C$ such that $\sum_{c\in C_i} cx_c=0$ for ${i=1,\ldots,t}$ and each of these sub-equations is non-degenerate in the sense that it has no zero proper sub-sums. Clearly, ${\#C_i\ge 2}$ for each~$i$.
We analyze two sub-cases.
\subsubsection{We have $\#C_i\ge 3$ for some ${i\in \{1,\ldots,t\}}$}
\label{ssgethree}
Then $C_i$ contains two coefficients with the same~$\ell$. We assume that ${\ell=1}$ (the case ${\ell=2}$ reduces to ${\ell=1}$ upon replacing~$\zeta$ by~$\zeta^{-1}$), and we let $j_1<j_2$ be the two smallest indices such that $c_{1,j_1}$ and~$c_{1,j_2}$ belong to $C_i$. Then the equation is
$$
c_{1,j_1}\zeta^{j_1}+c_{1,j_2}\zeta^{j_2}+\sum_{\substack{c_{\ell,j}\kappa^{\delta n}\in C_i\\ \ell=2 ~{\text{\rm or}}~j>j_2}} c_{\ell,j} \kappa^{n\delta} \zeta^{j+n\delta b}=0.
$$
Dividing by $\zeta^{j_1}$, we get
$$
c_{1,j_1}+c_{1,j_2}\zeta^{j_2-j_1}+\sum_{\substack{c_{\ell,j}\kappa^{\delta n}\in C_i\\ \ell=2 ~{\text{\rm or}}~j>j_2}} c_{\ell,j} \kappa^{n\delta} \zeta^{j-j_1+n\delta b}=0.
$$
We are now in the position to apply Mann's theorem to conclude that
$$
\zeta^{(j_2-j_1)m_1}=1, \qquad m_1\mid \prod_{p\le \#C_i} p \mid \prod_{p\le 2D+2}p,
$$
because ${\#C_i\le 2D+2}$.
Since ${|j_2-j_1|\le D}$, we have
\begin{equation}
\label{emzero}
m\le D\prod_{p\le 2D+2}p.
\end{equation}
The inequality ${\sum_{p\le x}\log p \le 1.02x}$ holds for all ${x>0}$; see \cite[Theorem~9]{RS62}. Hence
$$
\log m\le \log D+\sum_{p\le 2D+2}\log p \le 4D,
$$
which is much sharper than what we need.
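The Rosser--Schoenfeld bound quoted above is easy to confirm numerically for small $x$ (a sanity check only; the cited theorem covers all ${x>0}$):

```python
import math

def theta_values(limit):
    # Chebyshev theta: theta(x) = sum of log p over primes p <= x,
    # returned as (x, theta(x)) pairs for x = 2..limit, via a prime sieve
    sieve = [True] * (limit + 1)
    vals, total = [], 0.0
    for n in range(2, limit + 1):
        if sieve[n]:
            total += math.log(n)
            for q in range(n * n, limit + 1, n):
                sieve[q] = False
        vals.append((n, total))
    return vals

# the Rosser-Schoenfeld bound sum_{p <= x} log p <= 1.02 x, checked up to 5000
worst_ratio = max(t / n for n, t in theta_values(5000))
```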
\subsubsection{We have $\#C_i=2$ for all $i=1,\ldots,t$}
In fact, we may assume not only that $\#C_i=2$ but also that each $C_i$ contains exactly one $c_{1,j_1}$ and one $c_{2,j_2}\kappa^n$; otherwise the argument from Subsection~\ref{ssgethree} applies, and we again have~\eqref{emzero}. So, let
$$
c_{1,j_1}\zeta^{j_1}+c_{2,j_2}\kappa^n\zeta^{j_2+nb}=0.
$$
We then get ${\zeta^{j_2-j_1+nb}=-(c_{1,j_1}/c_{2,j_2}) \kappa^{-n}}$. The pair $(j_1,j_2)$ depends on~$i$. Assume first that the differences $j_2-j_1$ are not the same for all~$i$; that is, there are two values of~$i$ corresponding to pairs, say, $(j_1,j_2)$ and $(j_1',j_2')$ such that ${j_2'-j_1'\ne j_2-j_1}$. We obtain
$$
\zeta^{(j_2-j_1)-(j_2'-j_1')}=\frac{c_{1,j_1}/c_{2,j_2}}{c_{1,j_1'}/c_{2,j_2'}}
$$
and the number on the right is a root of unity belonging to~$\Q$. Hence it is $\pm1$. The exponent on the left satisfies
$$
0\ne \bigl|(j_2-j_1)-(j_2'-j_1')\bigr|\le 2D.
$$
Hence ${m\le 4D}$, again better than required.
Now let us assume that ${j_2=j_1+a}$
with the same~$a$ for all $i$. In this case
$c_{2,j_1+a}=\lambda c_{1,j_1}$ with the same ${\lambda \in \Q^\times}$ holds for all the~$i$ as well. This makes the rational function $c_2(x)/c_1(x)$ equal to $\lambda x^{a}$, and so
$$
u_n(x)=c_1(x)(1+\lambda\kappa^n x^{a+nb}).
$$
Since ${u_n(\zeta)=0}$ but ${c_1(\zeta)\ne 0}$, we must have ${1+\lambda\kappa^n \zeta^{a+nb}=0}$, which means that ${\lambda\kappa^n}$ is a rational root of unity, so~$\pm1$. Now we have two options: either both~$\lambda$ and~$\kappa$ are $\pm1$, or neither is. The first option means that condition~\ref{iex} of Theorem~\ref{thm:thmmain} is satisfied, which is against our hypothesis. Hence ${\lambda\kappa^n=\pm1}$, but ${\lambda,\kappa\ne \pm1}$.
Clearly, ${{\mathrm{h}}(\kappa)={\mathrm{h}}(f_1,f_2)\le X}$ and ${{\mathrm{h}}(\lambda)={\mathrm{h}}(c_1,c_2)\le X}$. Since~$\kappa$ is a rational number distinct from~$0$ and from $\pm1$, its numerator or denominator (say, the former) is at least~$2$ in absolute value. It follows that the denominator of ${\lambda=\pm\kappa^{-n}}$ is at least $2^n$ in absolute value. But the denominator of~$\lambda$ cannot exceed ${e^{{\mathrm{h}}(\lambda)}\le e^X}$. We obtain ${2^n\le e^X}$, which implies ${n\le X/\log 2}$. Hence
$$
\varphi(m)\le \deg u_n(x) \le D+DX/\log 2,
$$
which implies a much sharper estimate for~$m$ than the wanted~\eqref{eupperngam}.
Theorem~\ref{thm:thmmain1} is proved.
\section{Proof of Theorem~\ref{thsmallestn}}
\label{ssmallestn}
Let~$\zeta$ be an $m$th primitive root of unity such that the set
\begin{equation}
\label{eprop}
\{n\in \Z_{>0}: \text{$u_n(x)$ is not identically~$0$, but ${u_n(\zeta)=0}$}\}
\end{equation}
is not empty. If
${c_1(\zeta)f_1(\zeta)=c_2(\zeta)f_2(\zeta)=0}$
then the set~\eqref{eprop} consists of all positive integers, and in particular contains~$1$.
If, say, ${c_1(\zeta)f_1(\zeta)\ne 0}$ and the set~\eqref{eprop} is non-empty, then
$$
c_1(\zeta)f_1(\zeta)c_2(\zeta)f_2(\zeta)\ne0.
$$
Denoting
$$
\eta=\frac{f_1(\zeta)}{f_2(\zeta)}, \qquad \theta=-\frac{c_2(\zeta)}{c_1(\zeta)},
$$
the set~\eqref{eprop} consists of those~$n$ with the property ${\eta^n=\theta}$. If~$\eta$ is a root of unity, then its order divides $2m$, and there exists a positive ${n\le 2m}$ such that ${\eta^n=\theta}$. If~$\eta$ is not a root of unity, then
${n={\mathrm{h}}(\theta)/{\mathrm{h}}(\eta)}$. We have ${{\mathrm{h}}(\theta) \le X+\log(D+1)}$ by Lemma~\ref{lhpol}, and ${\varphi(m){\mathrm{h}}(\eta) \ge 2(\log\varphi(m))^{-3}}$, see~\eqref{evout}. Hence
$$
n\le m(\log m)^3(X+\log D).
$$
Theorem~\ref{thsmallestn} is proved.
\subsection*{Acknowledgements}
Yu.~B. was partially supported by the Indian Government SPARC Project P445. F. L. was supported in part by Grant NUM2020 from the Wits CoEMaSS. Part of this work was done while F. L. was visiting the Max Planck Institute for Mathematics in Bonn from September 2019 to February 2020. He thanks this institution for its support, hospitality and excellent working conditions.
We thank Yann Bugeaud, Philipp Habegger, Alina Ostafe and Igor Shpar\-linski for helpful discussions. We also thank the referee for the encouraging report and many useful suggestions that helped us to improve the presentation.
{\footnotesize
\bibliographystyle{amsplain}
| {
"timestamp": "2020-11-24T02:14:40",
"yymm": "2005",
"arxiv_id": "2005.05500",
"language": "en",
"url": "https://arxiv.org/abs/2005.05500",
"abstract": "Let $c_1(x),c_2(x),f_1(x),f_2(x)$ be polynomials with rational coefficients. With obvious exceptions, there can be at most finitely many roots of unity among the zeros of the polynomials $c_1(x)f_1(x)^n+c_2(x)f_2(x)^n$ with $n=1,2\\ldots$. We estimate the orders of these roots of unity in terms of the degrees and the heights of the polynomials $c_i$ and $f_i$.",
"subjects": "Number Theory (math.NT)",
"title": "Binary polynomial power sums vanishing at roots of unity",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9890130586647623,
"lm_q2_score": 0.8128673133042217,
"lm_q1q2_score": 0.803936387819616
} |
https://arxiv.org/abs/physics/0503214 | Optimal supply against fluctuating demand | Sornette et al. claimed that the optimal supply does not agree with the average demand, by analyzing a bakery model where a daily demand fluctuates with a uniform distribution. In this note, we extend the model to general probability distributions, and obtain the formula of the optimal supply for Gaussian distribution, which is more realistic. Our result is useful in a real market to earn the largest income on average. | \section{Introduction}
Sornette et al.\ (1999) claimed that the optimal supply does
not agree with the average demand, contrary to the common sense in
economy. They considered a bakery model where a daily demand
fluctuates with a uniform distribution, and derived the formula of the
optimal supply. Although their result is reasonable and
meaningful, it is not clear how the result will change if we consider
a different distribution.
In this note, we extend the model to general probability
distributions, and calculate the optimal supply for the
Gaussian distribution, which is more realistic in a market.
\section{Model and analysis of Sornette, Stauffer and Takayasu}
In this section we review the model and analysis of Sornette
et al.\ Let us consider a bakery shop where baked croissants are sold
every day. A question is how many croissants should be baked a day to
make the maximal profit.
We define the variables as follows.
\begin{itemize}
\item $x$: the selling price of a croissant.
\item $y$: its production cost.
\item $s$: the production number of croissants per day (supply).
\item $n$: the number of croissants requested by customers per day
(demand).
\item $D$: the average demand, i.e., $D\equiv\langle n\rangle$.
\end{itemize}
The expectation of the total profit $L(s)$ is given by
\begin{equation}\label{L}
L(s)\equiv\langle x~{\rm min}(n,s)-ys\rangle
=x\int^s_0nP(n)dn+xs\int^{\infty}_sP(n)dn-ys,
\end{equation}
where $P(n)$ is the probability distribution of $n$.
Sornette et al.\ assumed, for simplicity, a uniform distribution,
\begin{equation}\label{uniform}
P_u(n)\equiv
\left\{\begin{array}{ll}
1/2\delta ~& {\rm for} ~~~D-\delta\le n\le D+\delta\\
0 ~& {\rm for} ~~~n<D-\delta,~D+\delta<n.
\end{array}\right.
\end{equation}
Then one can integrate (\ref{L}) as
\begin{equation}
L(s)=-{x\over4\delta}\left\{s-D-\delta\left(1-{2y\over
x}\right)\right\}^2
+(x-y)\left(D-{\delta y\over x}\right).
\end{equation}
$L(s)$ attains its maximum value when $s$ equals
\begin{equation}\label{su}
s_{{\rm max}}\equiv D+\delta\left(1-{2y\over x}\right).
\end{equation}
This shows that, if the cost-to-price ratio $y/x$ is larger (smaller)
than one half, the optimal supply $s_{{\rm max}}$ is smaller (larger)
than the average demand $D$.
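As a sanity check of formula (\ref{su}) (with parameter values of our own choosing), one can maximize $L(s)$ numerically for a uniform demand and compare the maximizer with ${D+\delta(1-2y/x)}$:

```python
def expected_profit(s, x, y, D, delta, steps=2000):
    # L(s) = <x min(n, s)> - y s for demand n ~ Uniform(D - delta, D + delta),
    # with the expectation computed by the midpoint rule
    lo, width = D - delta, 2 * delta
    avg_min = sum(min(lo + (i + 0.5) * width / steps, s)
                  for i in range(steps)) / steps
    return x * avg_min - y * s

# illustrative parameters: price x = 1, cost y = 0.4, D = 100, delta = 30
x, y, D, delta = 1.0, 0.4, 100.0, 30.0
grid = [D - delta + 0.1 * i for i in range(601)]
s_best = max(grid, key=lambda s: expected_profit(s, x, y, D, delta))
s_formula = D + delta * (1 - 2 * y / x)   # = 106: supply 6 above average demand
```

With a cost below half the price (here ${y/x=0.4}$) the numerical maximizer indeed lies above the average demand, at ${s_{\rm max}\approx 106}$.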
\section{Optimal supply for Gaussian distribution}
Let us re-analyze (\ref{L}) for general probability distributions. We
do not have to integrate (\ref{L}) directly, because what we want to
know is the optimal supply $s_{{\rm max}}$, which is given by
\begin{equation}\label{dL}
{dL\over ds}(s_{{\rm max}})=x\int^{\infty}_{s_{{\rm max}}}P(n)dn-y=0.
\end{equation}
This simple equation gives the optimal supply for general probability
distributions.
If we assume a Gaussian distribution,
\begin{equation}
P_G(n)\equiv{1\over\sqrt{2\pi}\sigma}
\exp\left[-{(n-D)^2\over2\sigma^2}\right],
\end{equation}
the integral in (\ref{dL}) can be expressed as
\begin{equation}
\int^{\infty}_{s_{{\rm max}}}P_G(n)dn=\frac12-\frac12
{\rm Erf}\left({s_{{\rm max}}-D\over\sqrt{2}\sigma}\right),
\end{equation}
where Erf is the error function, which is defined as
\begin{equation}
{\rm Erf}~z\equiv{2\over\sqrt{\pi}}\int^z_0e^{-t^2}dt.
\end{equation}
Then we arrive at the formula of the optimal supply for the Gaussian
distribution,
\begin{eqnarray}\label{sG}
{s_{{\rm max}}-D\over\sigma}
&=&\sqrt{2}{\rm Erf}^{-1}\left(1-\frac{2y}x\right) \nonumber\\
&=&\sqrt{{\pi\over2}}\left(1-\frac{2y}x\right)
+{\sqrt{2}\pi^{\frac32}\over24}\left(1-\frac{2y}x\right)^3
+O\left[\left(1-\frac{2y}x\right)^5\right]
~~~({\rm Gaussian}).
\end{eqnarray}
Because the cost is usually in the range $0.3x<y<0.7x$, which reads
$|1-2y/x|<0.4$, the first-order approximation in
(\ref{sG}) is sufficient in most cases.
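Since ${\rm Erf}^{-1}$ has no elementary closed form, formula (\ref{sG}) can be checked with the inverse normal CDF from the Python standard library (parameter values below are our own illustrative choices). The stationarity condition (\ref{dL}) says precisely that $s_{{\rm max}}$ is the $(1-y/x)$-quantile of the demand distribution, and ${\Phi^{-1}(1-y/x)=\sqrt{2}\,{\rm Erf}^{-1}(1-2y/x)}$:

```python
import math
from statistics import NormalDist

def optimal_supply_gaussian(D, sigma, x, y):
    # dL/ds = 0  <=>  P(n > s) = y/x  <=>  s = D + sigma * Phi^{-1}(1 - y/x),
    # where Phi^{-1}(1 - y/x) = sqrt(2) * inverse_erf(1 - 2y/x)
    return D + sigma * NormalDist().inv_cdf(1 - y / x)

def first_order_approx(D, sigma, x, y):
    # leading term of the series: s ~ D + sigma * sqrt(pi/2) * (1 - 2y/x)
    return D + sigma * math.sqrt(math.pi / 2) * (1 - 2 * y / x)
```

For ${y/x=1/2}$ both expressions give ${s_{\rm max}=D}$, and in the typical range ${|1-2y/x|<0.4}$ the first-order term is accurate to a few percent of $\sigma$.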
For reference, we rewrite the result for the uniform distribution
(\ref{su}). Because the variance of the uniform distribution (\ref{uniform}) is
evaluated as $\sigma^2=\delta^2/3$, (\ref{su}) is rewritten as
\begin{equation}\label{su2}
{s_{{\rm max}}-D\over\sigma}=\sqrt{3}\left(1-\frac{2y}x\right)
~~~({\rm uniform}).
\end{equation}
We see that the difference between $\sqrt{\pi/2}\approx1.25$ in
(\ref{sG}) and $\sqrt{3}\approx1.73$ in (\ref{su2}) is not negligible.
Contrary to the speculation of
Sornette {\it et al.}, however, the critical value of the
cost-to-price ratio, $y/x=1/2$, is unchanged.
Because the Gaussian distribution is more realistic,
our simple formula (\ref{sG}) is useful
in a real market for earning the largest income on average.
| {
"timestamp": "2005-03-30T00:04:50",
"yymm": "0503",
"arxiv_id": "physics/0503214",
"language": "en",
"url": "https://arxiv.org/abs/physics/0503214",
"abstract": "Sornette et al. claimed that the optimal supply does not agree with the average demand, by analyzing a bakery model where a daily demand fluctuates with a uniform distribution. In this note, we extend the model to general probability distributions, and obtain the formula of the optimal supply for Gaussian distribution, which is more realistic. Our result is useful in a real market to earn the largest income on average.",
"subjects": "Physics and Society (physics.soc-ph); General Finance (q-fin.GN)",
"title": "Optimal supply against fluctuating demand",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9888419680942664,
"lm_q2_score": 0.8128673178375734,
"lm_q1q2_score": 0.8037973183700137
} |
https://arxiv.org/abs/2211.03255 | Minimal Area of a Voronoi Cell in a Packing of Unit Circles | We present a new self-contained proof of the well-known fact that the minimal area of a Voronoi cell in a unit circle packing is equal to $2\sqrt{3}$, and the minimum is achieved only on a perfect hexagon. The proof is short and, in our opinion, instructive. | \section{Introduction}
This work originated from attempts to rely on the proof in \cite{H} of the fact that the minimal area of a Voronoi cell in a unit circle packing in $\bbR^2$ is equal to $2\sqrt{3}$. Unfortunately, this proof contains some gaps (see Appendix), and our efforts to recover the proof resulted in an alternative approach presented below.
The result itself immediately implies the strong/local version of the theorem of Thue and Fejes T\'oth \cite{Th, To} on dense unit circle packings in $\bbR^2$. Additional applications include the derivation of so-called Peierls estimates in two-dimensional lattice hard-core models of statistical mechanics \cite{MSS1}.
\section{Definitions and basic properties}
For any $\bx \in \bbR^2$ and $r > 0$ denote by $B_r(\bx)$ an open disk of radius $r$ centered at $\bx$. A shorthand notation $B(\bx)$ is used for the {\it unit} disk having $r=1$. A collection of non-overlapping open unit disks is called {\it admissible} and is denoted by $\{B(\bx_i)\}$. An admissible collection $\{B(\bx_i)\}$ represents a {\it unit circle packing}. (It is clear that an admissible collection is finite or countable, as inside each disk there exists a point with rational coordinates.) An admissible collection $\{B(\bx_i)\}$ can be identified with the corresponding collection of centers $\{\bx_i\}$ which is also called admissible. Each element of $\{\bx_i\}$ is called an {\it occupied} point in $\bbR^2$; respectively, we speak about admissible collections of occupied points. Clearly, $|\bx_{i'}-\bx_{i''}| \ge 2$ for any distinct $\bx_{i'}, \bx_{i''} \in \{\bx_i\}$, where $|\bx_{i'}-\bx_{i''}|$ denotes the Euclidean distance between $\bx_{i'},\bx_{i''} \in \bbR^2$.
For each $\bx_{i'} \in \{\bx_i\}$ the corresponding {\it Voronoi cell} is defined as
$$V(\bx_{i'}) = \left\{\bz \in \bbR^2:\;\;|\bx_{i'}-\bz| \le \inf_{i'' \not = i'}|\bx_{i''}-\bz|\right\}.$$
If for $\bx_{i''}$ the intersection $V(\bx_{i'}) \cap V(\bx_{i''})$ contains more than one point of $\bbR^2$ then we say that $\bx_{i''}$ is a {\it
neighboring occupied point} for $\bx_{i'}$ or simply $\bx_{i''}$ is a {\it neighbor} of $\bx_{i'}$.
Observe that $V(\bx)$ can be unbounded. If $V(\bx)$ is bounded, i.e., $V(\bx) \subset B_{r}(\bx)$ for some $0 < r <\infty$, then $B(\by) \subset B_{4r}(\bx)$ for any neighboring occupied point $\by$. Due to the admissibility requirement the number of such $\by$ cannot exceed $|B_{4r}(\cdot)|/|B(\cdot)| < \infty$, where $|\cdot|$ denotes the area of the corresponding disk.
Our aim is to find the minimal possible area of a Voronoi cell among all Voronoi cells in all admissible collections of occupied points. Suppose that a Voronoi cell with minimal or close to minimal area can be found in an admissible collection containing unbounded Voronoi cells. Then this collection can be completed without breaking the admissibility by a finite or countable set of additional occupied points such that the resulting admissible collection has bounded Voronoi cells only. For that reason from now on we mainly consider admissible collections without unbounded Voronoi cells.
The rest of this section verifies some standard properties of Voronoi cells which makes the entire argument self-contained.
\medskip
{\bf Lemma 1.} {\sl For an occupied point $\bx$ in an admissible collection the corresponding Voronoi cell $V(\bx)$ contains the closure of $B(\bx)$.}
\medskip
{\bf Proof.} If for a point $\bz \in B(\bx)$ there exists an occupied point $\by \not = \bx$ with $|\by-\bz| < |\bx-\bz| < 1$ then by the triangle inequality $|\by-\bx| < 2$ which contradicts the admissibility of the collection. \qed
\medskip
{\bf Lemma 2.} {\sl For an occupied point $\bx$ in an admissible collection the corresponding Voronoi cell $V(\bx)$ is a convex subset of $\bbR^2$.}
\medskip
{\bf Proof.} For an occupied point $\by \not = \bx$ and any $\bz \in V(\bx)$ one has $|\bz-\bx|^2 \le |\bz-\by|^2$ and consequently
$$|\bx|^2 - 2\bz\cdot \bx \le |\by|^2 - 2\bz\cdot \by.$$ Consider two distinct points $\bz', \bz'' \in V(\bx)$ and suppose that $\bz = \lambda \bz' + (1-\lambda) \bz''$, where $0 \le \lambda \le 1$. Then
$$\beacl
|\bz -\bx|^2 &= |\bz|^2 + \lambda(|\bx|^2-2 \bz'\cdot \bx) + (1-\lambda)(|\bx|^2-2 \bz''\cdot \bx)\cr
&\le |\bz|^2 + \lambda(|\by|^2-2 \bz'\cdot \by) + (1-\lambda)(|\by|^2-2 \bz''\cdot \by) \cr
&= |\bz -\by|^2,\ena
$$
which establishes the lemma. \qed
\medskip
{\bf Lemma 3.} {\sl For an occupied point $\bx$ in an admissible collection the boundary of the corresponding Voronoi cell $V(\bx)$ is piecewise linear.}
\medskip
{\bf Proof.} Consider two different occupied points $\bx, \by$ with $V(\bx)\cap V(\by)$ containing more than one $\bbR^2$ point. Let $\bz', \bz'' \in V(\bx)\cap V(\by)$ and $\bz' \not= \bz''$. Now suppose that $\bz = \lambda \bz' + (1-\lambda) \bz''$, $0 \le \lambda \le 1$. Then
$$\beacl
|\bz -\bx|^2 &= |\bz|^2 + \lambda(|\bx|^2-2 \bz'\cdot \bx) + (1-\lambda)(|\bx|^2-2 \bz''\cdot \bx)\cr
&= |\bz|^2 + \lambda(|\by|^2-2 \bz'\cdot \by) + (1-\lambda)(|\by|^2-2 \bz''\cdot \by) \cr
&= |\bz -\by|^2,\ena
$$
i.e., $\bz \in V(\bx)\cap V(\by)$. Also note that for any two distinct occupied points $\bx, \by$ the set
$$\left\{ \bz \in \bbR^2:\;\; |\bz -\bx| \le |\bz -\by| \right\}$$
is a closed half-plane. Consequently, $V(\bx)$ is an intersection of a finite or countable number of closed half-planes. \qed
\medskip
Lemmas~1-3 are valid for both bounded and unbounded Voronoi cells in an admissible collection of occupied points. If $V(\bx)$ is bounded then these lemmas imply that $V(\bx)$ is a convex polygon containing the unit disk $B(\bx)$. Furthermore, each neighboring occupied point $\by$ is the reflection of $\bx$ with respect to the common side of $V(\bx)$ and $V(\by)$.
The clockwise circular order of sides of a bounded polygon $V(\bx)$ generates the clockwise circular order of the corresponding neighboring occupied points. Connecting the neighboring points in this circular order we obtain a so-called {\it polygon of neighbors}. Together with $\bx$, each side of this polygon uniquely defines a {\it constituting triangle} such that the entire polygon of neighbors is partitioned into constituting triangles. By construction, the vertices of $V(\bx)$ are the centers of the circumcircles of these constituting triangles. The angle of the constituting triangle at vertex $\bx$ is called the {\it constituting angle}.
\medskip
{\bf Lemma 4.} {\sl The circumradius of a constituting triangle is not shorter than $2/\sqrt{3}$.}
\medskip
{\bf Proof.} At least one angle of a constituting triangle, say angle $\alpha$, is not larger than $\pi / 3$. By the admissibility requirement the length $a$ of the opposite triangle side is not shorter than $2$. According to the sine theorem for triangles the triangle circumradius
$$r = {a \over 2 \sin \alpha} \ge {2 \over 2 \sin {\pi \over 3}} = {2\over\sqrt{3}},$$
which establishes the lemma. \qed
\section{Results}
Consider an admissible collection $\{\bx_i\}$ which forms a triangular lattice with the shortest distance between sites equal to $2$. Then for each $\bx_i$ the corresponding polygon of neighbors is a perfect hexagon with the side length equal to $2$. Correspondingly, $V(\bx_i)$ is a perfect hexagon with the side length equal to $2 / \sqrt{3}$ and the area $|V(\bx_i)|= 2 \sqrt{3}$.
\medskip
{\bf Theorem.} {\sl The minimal area of a Voronoi cell in an admissible configuration of occupied points is equal to $2 \sqrt{3}$. The minimum is achieved only on occupied points $\bx$ having a perfect hexagon with side length $2/\sqrt{3}$ as its Voronoi cell, or equivalently, a perfect hexagon with side length $2$ as the corresponding polygon of neighbors.}
\medskip
{\bf Proof.} In view of properties of a Voronoi cell presented in Lemmas~1-4 the problem is reduced to finding the polygon of minimal area among all polygons $P$ having the following properties:
\begin{description}
\item{(i)} $P$ is convex.
\item{(ii)} $P$ contains a unit disk.
\item{(iii)} The distances from the vertices of $P$ to the center of this disk are not shorter than~${2 \over \sqrt{3}}$.\end{description}
\noindent
The last property is a weaker replacement of the requirement for a Voronoi cell to have the corresponding neighboring occupied points at distances not shorter than $2$ from each other. It turns out that this weaker requirement is enough to establish the theorem.
For a convex polygon of area $a$ and one of its angles of measure $\alpha$ define the {\it angular density} (with respect to this angle) as the ratio $E={a \over \alpha}$. Now take any polygon satisfying (i)-(iii) and consider several rays originating at the center $\bo$ of the contained unit disk.
Let the angles between the clockwise consecutive rays be
smaller than
$\pi$. These rays partition the polygon into several convex polygons each located inside
the angle $\alpha_j$ between the corresponding two clockwise consecutive rays. Clearly, the area $a$ of the original polygon $P$ can be calculated as
$a= \sum_j E_j \alpha_j$, where $E_j$ is the corresponding angular density (with respect to angle $\alpha_j$).
For any vertex $\bv$ of $P$
the convex hull of this vertex and the unit disk centered at $\bo$ belongs to $P$.
If $|\bo-\bv|=r$ then this convex hull is bounded by two straight segments $[\bv\ba]$ and $[\bv\bc]$ of length $\sqrt{r^2-1}$ and the arc of the unit circle connecting $\ba$ with $\bc$ and having length $2\pi - 2 \arctan\sqrt{r^2-1}$. The area of the quadrilateral $[\bo\ba\bv\bc]$ is equal to $\sqrt{r^2-1}$ and $|\angle \ba\bo\bc|=2\arctan\sqrt{r^2-1}$. Therefore, the corresponding angular density is $E_{\bv} = {\sqrt{r^2-1}\over 2 \arctan\sqrt{r^2-1}}$. Take any point $\bb$ inside the segment $[\bv\bc]$ and consider the angular density $\overline E_{\bv}$ of the smaller quadrilateral $[\bo\ba\bb\bv]$ with respect to the angle $\angle \ba\bo\bb$. Let $|\bb-\bc| = x < \sqrt{r^2-1}$. Then
$$\overline E_{\bv} = {\sqrt{r^2-1} - {x \over 2} \over 2\arctan \sqrt{r^2-1} - \arctan x} > {\sqrt{r^2-1} \over 2\arctan \sqrt{r^2-1}} = E_{\bv}.$$
For two clockwise consecutive polygon vertices $\bv_i$ and $\bv_{i+1}$ consider the corresponding convex hulls and two corresponding quadrilaterals $[\bo\ba_i\bv_i\bc_i]$ and $[\bo\ba_{i+1}\bv_{i+1}\bc_{i+1}]$. Observe that consecutive open quadrilaterals are either adjacent (have a common side) or intersecting. (In the case when they are separated by
a non-zero angle $\angle \bc_i\bo\ba_{i+1}$ the segment $[\bv_i \bv_{i+1}]$ intersects the interior of the unit disk which contradicts property (ii).) With this observation at hand, denote by $\bb_i$ the intersection point of segments $[\bv_i\bc_i]$ and $[\bv_{i+1}\ba_{i+1}]$. According to the displayed equation above, the angular density of $[\bo\ba_i\bv_i\bb_i]$ is larger than the angular density of $[\bo\ba_i\bv_i\bc_i]$ and the angular density of $[\bo\bb_i\bv_{i+1}\bc_{i+1}]$ is larger than the angular density of $[\bo\ba_{i+1}\bv_{i+1}\bc_{i+1}]$.
\def\cQ{\mathcal Q}
Consider now the union of the quadrilaterals $[\bo\ba_i\bv_i\bc_i]$ over all vertices $\bv_i$. It is a polygon $Q$ (generally, non-convex) contained in $P$. Between each two vertices $\bv_i$ and $\bv_{i+1}$ the polygon $Q$ may contain an additional vertex $\bb_i$ that is the intersection point introduced above. The angular density of $[\bo\bb_{i-1}\bv_i\bb_i]$ is not smaller than the angular density of $[\bo\ba_i\bv_i\bc_i]$, and the two angular densities are equal only if $\bb_{i-1}=\ba_i$ and $\bb_i=\bc_i$. For $r \ge {2 \over \sqrt{3}}$ the minimum of $E_{\bv} = {\sqrt{r^2-1}\over 2\arctan\sqrt{r^2-1}}$ equals $\sqrt{3} \over \pi$ and is achieved only at $r = {2 \over \sqrt{3}}$. Thus, the total area of the union of the quadrilaterals $[\bo\ba_i\bv_i\bc_i]$ (or equivalently the union of mutually disjoint open quadrilaterals $[\bo\bb_{i-1}\bv_i\bb_{i}]$) is not smaller than ${\sqrt{3} \over \pi} 2\pi = 2\sqrt{3}$. Obviously, this minimum is achieved only when the interiors of the quadrilaterals $[\bo\ba_i\bv_i\bc_i]$ are disjoint (i.e., $[\bo\ba_i\bv_i\bc_i] = [\bo\bb_{i-1}\bv_i\bb_{i}]$) and $|\bo-\bv_i| = {2 \over \sqrt{3}}$ for all $i$. In this case the corresponding polygon is the perfect hexagon with side length ${2 \over \sqrt{3}}$.
Indeed, if $V(\bx)$ is a hexagon with $r_i > {2 \over \sqrt{3}}$ for some $i$ then $|\angle \ba_i \bo \bc_i| > {2 \pi \over 6}$. Therefore, such a hexagon has the angular density larger than minimal for some non-zero angle. Consequently, its area is larger than $2\sqrt{3}$.
Any polygon with $n > 6$ vertices contains at least two overlapping quadrilaterals $[\bo \ba_i\bv_i\bc_i]$ even if $r_i = {2 \over \sqrt{3}}$ for all $i$ because $n {2 \pi \over 6} > 2\pi$. Therefore, the angular density becomes larger than minimal for some non-zero angle. Hence, any polygon satisfying (i)-(iii) with more than 6 vertices has a non-minimal area.
A polygon with $n = 3,4$ or 5 vertices necessarily has at least one $|\angle \ba_i \bo \bc_i| \ge {2\pi \over n}$ and therefore $r_i \ge {1 \over \cos{\pi \over n}} > {2 \over \sqrt{3}}$. Thus, the corresponding angular density $E_{\bv_i}$ is again greater than $\sqrt{3} \over \pi$ for some non-zero angle; consequently, the area of the entire polygon is larger than the minimal area $2\sqrt{3}$.~\qed
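The key quantities in the proof are easy to verify numerically (a sanity check, not part of the argument): the angular density ${E(r)=\sqrt{r^2-1}/(2\arctan\sqrt{r^2-1})}$ is increasing in $r$, attains the value $\sqrt{3}/\pi$ at ${r=2/\sqrt{3}}$, and ${(\sqrt{3}/\pi)\cdot 2\pi=2\sqrt{3}}$ is exactly the area of the perfect hexagon with inradius $1$ and side $2/\sqrt{3}$:

```python
import math

def angular_density(r):
    # E(r) = sqrt(r^2 - 1) / (2 * atan(sqrt(r^2 - 1)))
    t = math.sqrt(r * r - 1)
    return t / (2 * math.atan(t))

r0 = 2 / math.sqrt(3)
E_min = angular_density(r0)      # expected value: sqrt(3) / pi

# area of the perfect hexagon with apothem 1 and side 2/sqrt(3):
# six triangles of base 2/sqrt(3) and height 1
hexagon_area = 6 * 0.5 * (2 / math.sqrt(3)) * 1.0
```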
\section{Appendix}
The problem with the argument in \cite{H} is that at some point the proof considers the ``most critical case'', but it is not specified what quantity is optimized at this ``most critical case'' and why this quantity is important. The desired quantity seems to be the area excess in $V(\bx)$ over the area of the contained unit disk which can be attributed to a single non-close neighbor. (In the terminology of \cite{H} a neighbor $\by$ is non-close to the center $\bx$ of $V(\bx)$ iff $|\bx-\by| > 2.3$.) By considering this ``most critical case'' for a single non-close neighbor the proof in \cite{H} concludes that the corresponding area excess is at least $0.21$. After that the proof claims that the presence of two non-close neighbors implies that the area excess is at least $0.42$. This additivity assumption is actually wrong as individual excesses can overlap, and one can give a counterexample showing two non-close neighbors with the total attributed area excess smaller than $0.42$.
| {
"timestamp": "2022-11-08T02:17:35",
"yymm": "2211",
"arxiv_id": "2211.03255",
"language": "en",
"url": "https://arxiv.org/abs/2211.03255",
"abstract": "We present a new self-contained proof of the well-known fact that the minimal area of a Voronoi cell in a unit circle packing is equal to $2\\sqrt{3}$, and the minimum is achieved only on a perfect hexagon. The proof is short and, in our opinion, instructive.",
"subjects": "Metric Geometry (math.MG)",
"title": "Minimal Area of a Voronoi Cell in a Packing of Unit Circles",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9859363713038173,
"lm_q2_score": 0.8152324871074608,
"lm_q1q2_score": 0.8037673601077159
} |
https://arxiv.org/abs/2101.02995 | The set of ratios of derangements to permutations in digraphs is dense in $[0, 1/2]$ | A permutation in a digraph $G=(V, E)$ is a bijection $f:V \rightarrow V$ such that for all $v \in V$ we either have that $f$ fixes $v$ or $(v, f(v)) \in E$. A derangement in $G$ is a permutation that does not fix any vertex. In [1] it is proved that in any digraph, the ratio of derangements to permutations is at most $1/2$. Answering a question posed in [1], we show that the set of possible ratios of derangements to permutations in digraphs is dense in the interval $[0, 1/2]$. | \section{Introduction}
A {\em permutation} in a digraph $G=(V, E)$ is a bijection $f:V \rightarrow V$ such that for all $v \in V$ we either have that $f$ fixes $v$ or $(v, f(v)) \in E$. A {\em derangement} in $G$ is a permutation that does not fix any vertex. We define the parameter $(d/p)_G$ to be the ratio of derangements to permutations in $G$.
Bucic, Devlin, Hendon, Horne and Lund \cite{BDHHL} showed that $(d/p)_G \le 1/2$ for all digraphs $G$, with equality if and only if $G$ is a directed cycle. They also gave a construction (the blow-up of a directed cycle) that achieves ratios arbitrarily close to, but not equal to, $1/2$. Let $S=\{(d/p)_G\; :\; G \;\;\text{is a digraph}\}$ be the set of values arising as such a ratio. In \cite{BDHHL} they analyzed the ratio $(d/p)_G$ for the random graph $G=G(n, m)$, and as a corollary of this analysis they showed that $S$ is dense in $[0, 1/e]$. This corollary follows from two facts: $(d/p)_G$ is concentrated around its mean, and by choosing a suitable value of $m$ one can make the expected ratio $(d/p)_G$ close to any given value in $[0, 1/e]$. At the end of \cite{BDHHL} they ask whether $S$ is dense in $[0, 1/2]$. Our main theorem, below, answers this question in the affirmative.
\begin{maintheorem}\label{thm:main}
The set of possible ratios of derangements to permutations in digraphs is dense in $[0, 1/2]$.
\end{maintheorem}
The construction we use, described in more detail later, is a random subgraph of the blow-up of a directed cycle. The main part of the proof is an application of the second moment method (see \cite{FK}, for example, for an introduction to the method) to show that the number of derangements and permutations are concentrated around their expectations.
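The definitions above can be checked by exhaustive search on small digraphs. The Python sketch below (with a hypothetical helper \texttt{digraph\_counts}, not part of \cite{BDHHL}) counts admissible permutations directly and confirms that a directed $3$-cycle attains the extremal ratio $1/2$.

```python
from itertools import permutations

def digraph_counts(n, edges):
    """Brute-force count of (derangements, permutations) in a digraph on
    vertices 0..n-1: a permutation f is admissible if every vertex v is
    either fixed or (v, f(v)) is an edge; a derangement fixes no vertex."""
    E = set(edges)
    derangs = perms = 0
    for f in permutations(range(n)):
        if all(f[v] == v or (v, f[v]) in E for v in range(n)):
            perms += 1
            if all(f[v] != v for v in range(n)):
                derangs += 1
    return derangs, perms

# A directed 3-cycle: the only admissible maps are the identity and the
# rotation, so the ratio of derangements to permutations is exactly 1/2.
d, p = digraph_counts(3, [(0, 1), (1, 2), (2, 0)])
assert (d, p) == (1, 2)
```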
\section{Proof of Theorem \ref{thm:main}}
\subsection{Outline}
First we outline the proof. Suppose we are given a fixed real number $r \in [0, 1/2]$. We will show that there exists a sequence of digraphs $G_k$ such that the ratio of derangements to permutations in $G_k$ is $r+o(1)$ as $k \rightarrow \infty$ (which proves Theorem \ref{thm:main}). If $r=0$ or $1/2$ this is trivial. Indeed, for $r=0$ observe that a digraph with one vertex and no edges has no derangements and one permutation, and for $r=1/2$ observe that any directed cycle has one derangement and two permutations. So we assume $0 < r < 1/2$.
Our construction is as follows. As defined in \cite{BDHHL}, let the digraph $D_{k,\l}$, where $k\geq 1$ and $\l\geq 2$, have vertices $v_{ij}$ for $i\in [k]$ and $j\in[\l]$ such that $(v_{ij}, v_{i'j'})\in E(D_{k,\l})$ if and only if $j'=j+1\, \text{mod}\, \l$. In other words, $D_{k, \l}$ is the blow-up of a directed $\l$-cycle where each vertex is expanded to a set of $k$ vertices. We let $V_j = \{v_{ij}: i \in [k]\}$ denote the $j$th part.
As was shown in \cite{BDHHL}, the number of derangements on $D_{k,\l}$ is $(k!)^\l$ and the number of permutations on $D_{k,\l}$ is $\sum_{i=0}^k\left(\binom{k}{i} (k-i)!\right)^\l$. Hence, $(d/p)_{k,\l}=\left(\sum_{i=0}^k \left(\frac{1}{i!} \right)^\l\right)^{-1}$ can be made arbitrarily close to 1/2 by choosing $\ell$ large enough (even for large $k$). This construction yields a graph for which the ratio of derangements to permutations is arbitrarily close to 1/2 but not exactly 1/2. We will also use this construction, but we will randomly remove some edges. By taking a random subgraph we can ``interpolate'' between $D_{k, \l}$ (a dense digraph whose ratio of derangements to permutations is close to $1/2$) and a sparse random digraph (whose ratio is $0$).
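For small $k$ and $\l$ these counting formulas can be verified against brute-force enumeration. The sketch below (illustrative only; the helper \texttt{blowup\_counts} is hypothetical) does so for $D_{2,3}$.

```python
from itertools import permutations
from math import comb, factorial

def blowup_counts(k, l):
    """Brute-force (derangements, permutations) in D_{k,l}, the blow-up of a
    directed l-cycle with parts of size k; vertex (i, j) is the i-th vertex
    of part j, with all edges from part j to part j+1 (mod l)."""
    verts = [(i, j) for j in range(l) for i in range(k)]
    idx = {v: t for t, v in enumerate(verts)}
    E = {(idx[(i, j)], idx[(i2, (j + 1) % l)])
         for j in range(l) for i in range(k) for i2 in range(k)}
    n = k * l
    derangs = perms = 0
    for f in permutations(range(n)):
        if all(f[v] == v or (v, f[v]) in E for v in range(n)):
            perms += 1
            if all(f[v] != v for v in range(n)):
                derangs += 1
    return derangs, perms

k, l = 2, 3
d, p = blowup_counts(k, l)
assert d == factorial(k) ** l                                          # (k!)^l
assert p == sum((comb(k, i) * factorial(k - i)) ** l for i in range(k + 1))
```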
In this paper all asymptotics are as $k \rightarrow \infty$; the parameter $\l$ is treated as fixed. We use standard big-O, little-o and $\Omega$ notation. We write $x \sim y$ if $x=(1+o(1))y$. All logarithms are base $e$.
\subsection{Proof details}
Let the random graph $G_{k,\l}(m)$ be chosen uniformly from among all subgraphs of $D_{k,\l}$ with $m$ edges. We will fix some $p, \l$ and let $m = pk^2 \l$ (so $p$ is the probability that any particular edge of $D_{k,\l}$ becomes an edge of $G_{k,\l}(m)$). Let the random variables $X, Y$ be the number of derangements and permutations in $G_{k,\l}(m)$ respectively. Let $\mc{D}, \mc{P}$ be the collections of all derangements and permutations on $D_{k,\l}$.
\subsubsection{First moments of $X, Y$}
We have
\begin{align}
\mathbb{E}[X] &= \sum_{D \in \mc{D}}\mathbb{P}[D\subseteq G_{k, \l}]=(k!)^\l\frac{\binom{k^2\l-k \l}{m-k \l} }{\binom{k^2\l}{m}}\nonumber\\
&=(k!)^\l \left(\frac{m}{k^2\ell} \right)^{k\ell}\exp\left\{\frac{k^2 \ell^2}{2}\left(\frac{1}{k^2\ell}-\frac{1}{m}\right)+O\left(\frac{k^3}{m^2} + \frac{k}{m} \right) \right\}\nonumber\\
&\sim (k!)^\l p^{k\ell}\exp\left\{\frac{ \ell}{2}\left(1-\frac{1}{p}\right) \right\}\label{eqn:EX}
\end{align}
where on the second line we have used the following fact:
\begin{fact}\label{fact:edgeprob}
\[
\frac{\binom{a-x}{b-x} }{\binom{a}{b}} = \frac{(b)_x}{(a)_x} = \rbrac{\frac ba}^x \exp\cbrac{\frac{x^2}{2}\rbrac{\frac1a - \frac1b} + O\rbrac{\frac{x^3}{b^2} + \frac xb}}.
\]
\end{fact}
For completeness we include the proof although it is well-known.
\begin{proof}
\begin{align*}
\frac{(b)_x}{(a)_x} &= \rbrac{\frac ba}^x \cdot \frac{1 \rbrac{1-\frac 1b}\rbrac{1-\frac 2b} \cdots \rbrac{1-\frac {x-1}b} }{1 \rbrac{1-\frac 1a}\rbrac{1-\frac 2a} \cdots \rbrac{1-\frac {x-1}a}}\\
&= \rbrac{\frac ba}^x \cdot \exp\cbrac{\sum_{i=0}^{x-1} \sbrac{ \ln\rbrac{1-\frac{i}{b} }- \ln\rbrac{1-\frac{i}{a}} }}\\
&= \rbrac{\frac ba}^x \cdot \exp\cbrac{\sum_{i=0}^{x-1} \sbrac{ -\frac ib + \frac ia + O\rbrac{\frac{i^2}{a^2} + \frac{i^2}{b^2}} }}\\
&= \rbrac{\frac ba}^x \cdot \exp\cbrac{ \frac{x(x-1)}{2} \rbrac{\frac1a - \frac1b} + O\rbrac{\frac{x^3}{b^2}} }\\
&= \rbrac{\frac ba}^x \exp\cbrac{\frac{x^2}{2}\rbrac{\frac1a - \frac1b} + O\rbrac{\frac{x^3}{b^2} + \frac xb}}.
\end{align*}
\end{proof}
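Fact \ref{fact:edgeprob} is asymptotic, but it is easy to probe numerically. The sketch below (an illustration, for one arbitrary choice of $a$, $b$, $x$) computes the exact falling-factorial ratio as a product and compares it with the main term of the estimate.

```python
from math import exp, log

# Exact ratio C(a-x, b-x)/C(a, b) = (b)_x/(a)_x, computed as a product to
# avoid huge binomial coefficients, versus the estimate of Fact 1.
a, b, x = 10 ** 6, 4 * 10 ** 5, 50
exact = 1.0
for i in range(x):
    exact *= (b - i) / (a - i)
estimate = (b / a) ** x * exp((x ** 2 / 2) * (1 / a - 1 / b))
# The error term O(x^3/b^2 + x/b) is roughly 1e-4 for these values.
assert abs(log(exact) - log(estimate)) < 1e-3
```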
Before we calculate $\mathbb{E}[Y]$ we introduce a function $f_\l(x)$. For any integer $\l \ge 1$, let
\begin{equation}\label{eqn:deff}
f_\l(x) := \sum_{i=0}^\infty\frac{ x^{i \l}}{(i!)^\l} .
\end{equation}
Note that the above power series for $f_\l(x)$ converges for all $x$ and therefore in particular each $f_\l$ is continuous in $x$. We have
\begin{align}
\mathbb{E}[Y] &= \sum_{P \in \mc{P}}\mathbb{P}[P\subseteq G_{k, \l}]=\sum_{i=0}^k\left( \binom{k}{i}(k-i)!\right)^\l \frac{\binom{k^2\l-(k-i) \l}{m-(k-i) \l} }{\binom{k^2\l}{m} }\nonumber\\
&= \sum_{i=0}^k\left( \frac{k!}{i!}\right)^\l \left(\frac{m}{k^2\ell} \right)^{(k-i) \l} \exp\left\{\frac{(k-i)^2 \l^2}{2}\left(\frac{1}{k^2\ell}-\frac{1}{m}\right)+O\left(\frac{k^3 }{m^2} + \frac km \right) \right\}\nonumber\\
&= (k!)^\l p^{k\l} \sum_{i=0}^k\left( \frac{1}{i!}\right)^\l p^{-i \l} \exp\left\{\frac{ \l}{2}\left(1-\frac{1}{p}\right) + O\rbrac{\frac{i+1}{k}} \right\} \label{eqn:EY1}.\end{align}
We split the above sum into two ranges of $i$. Note that for $0 \le i \le \sqrt{k}$ we have $\exp\cbrac{O\rbrac{\frac{i+1}{k}}} = 1+O\rbrac{\frac{1}{\sqrt{k}}}$, while for $\sqrt{k} \le i \le k$ we have $\exp\cbrac{O\rbrac{\frac{i+1}{k}}}=O(1)$. Thus line \eqref{eqn:EY1} becomes
\begin{equation}
(k!)^\l p^{k\l} \sbrac{ \rbrac{1+ O\rbrac{\frac{1}{\sqrt{k}}}}\exp\left\{\frac{ \l}{2}\left(1-\frac{1}{p}\right) \right\}\sum_{0 \le i \le \sqrt{k}} \left( \frac{1}{i!}\right)^\l p^{-i \l} + O(1)\sum_{\sqrt{k}<i \le k} \left( \frac{1}{i!}\right)^\l p^{-i \l} } \label{eqn:EY2}.
\end{equation}
As $k \rightarrow \infty$ we have $$\sum_{0 \le i \le \sqrt{k}} \left( \frac{1}{i!}\right)^\l p^{-i \l} \rightarrow f_\l(1/p), $$ and
$$\sum_{\sqrt{k} < i \le k} \left( \frac{1}{i!}\right)^\l p^{-i \l} \le \sum_{i=\sqrt{k}}^{\infty} \left( \frac{1}{i!}\right)^\l p^{-i \l} =o(1) $$ since the latter is the tail of a convergent series. Thus, returning to our estimate of $\mathbb{E}[Y]$ on line \eqref{eqn:EY2}, we have
\begin{equation}
\mathbb{E}[Y] \sim (k!)^\l p^{k\l} \exp\left\{\frac{ \l}{2}\left(1-\frac{1}{p}\right) \right\}f_\l(1/p). \label{eqn:EY}
\end{equation}
\subsubsection{Choosing $p, \l$}
Now that we know $\mathbb{E}[X], \mathbb{E}[Y]$ we will choose $p, \l$ to make sure that the ratio of $\mathbb{E}[X]$ to $\mathbb{E}[Y]$ is close to $r$. Using lines \eqref{eqn:EX} and \eqref{eqn:EY} we have
\[
\frac{\mathbb{E}[X]}{\mathbb{E}[Y]} \sim \frac{1}{f_\l\rbrac{\frac1p}},
\]
so we would like to choose $\l$ and $0<p<1$ so that $f_\l(1/p) = 1/r$. We have
\[
\lim_{x \rightarrow \infty} f_\l(x) = \infty , \qquad f_\l(1) = \sum_{i=0}^\infty\left( \frac{1}{i!}\right)^\l = 1 + 1 + \frac{1}{2^\l} + \frac{1}{6^\l} + \frac{1}{24^\l} + \ldots.
\]
Note that we can make $f_\l(1)$ arbitrarily close to 2 by taking $\l$ large. Indeed, we have $f_\l(1) \ge 2$ and
\begin{align*}
f_\l(1) = 2+ \sum_{i\ge 2}\left( \frac{1}{i!}\right)^\l &\leq 2+ \sum_{i\geq 2} \left( \frac{1}{2^{i-1} }\right)^\ell =2+ \frac{1}{2^\ell - 1}.
\end{align*}
Since $r<1/2$, we can choose $\l$ so that $f_{\l}(1) < 1/r$. Then by the intermediate value theorem there is some $x \in (1, \infty)$ such that $f_\l(x) = 1/r$. We choose $p$ to be the value $1/x$, so $0<p<1$ and $f_\l(1/p) = 1/r$. So we view $\l$ and $p$ as constants determined entirely by $r$.
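The choice of $p$ from $r$ via the intermediate value theorem can also be carried out numerically. The sketch below (hypothetical helpers \texttt{f} and \texttt{choose\_p}, not part of the paper) bisects for the root of $f_\l(x) = 1/r$.

```python
from math import factorial

def f(l, x, terms=60):
    """Partial sum of f_l(x) = sum_{i>=0} x^{i*l}/(i!)^l; the tail is
    negligible for moderate x, since the series converges everywhere."""
    return sum(x ** (i * l) / factorial(i) ** l for i in range(terms))

def choose_p(r, l):
    """Bisect for x in (1, infinity) with f_l(x) = 1/r and return p = 1/x,
    mirroring the intermediate value argument in the text."""
    lo, hi = 1.0, 2.0
    while f(l, hi) < 1 / r:        # grow the bracket until f_l(hi) >= 1/r
        hi *= 2
    for _ in range(80):
        mid = (lo + hi) / 2
        if f(l, mid) < 1 / r:
            lo = mid
        else:
            hi = mid
    return 2 / (lo + hi)

r, l = 0.4, 3          # f_3(1) <= 2 + 1/(2^3 - 1) < 1/r = 2.5, as required
p = choose_p(r, l)
assert 0 < p < 1
assert abs(f(l, 1 / p) - 1 / r) < 1e-9
```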
\subsubsection{Second moments of $X, Y$}
In this section we show that $\mathbb{E}[X^2] \sim \mathbb{E}[X]^2$ and $\mathbb{E}[Y^2] \sim \mathbb{E}[Y]^2$. This will complete the proof, since then by the second moment method we have that
\[
\frac XY \sim \frac{\mathbb{E}[X]}{\mathbb{E}[Y]} \sim \frac{1}{f_\l(1/p)} = r.
\]
with probability approaching 1 as $k$ goes to infinity.
To help us estimate $\mathbb{E}[X^2], \mathbb{E}[Y^2]$ we will find the function $h(a, b)$ (defined below) useful. Suppose we have some fixed matching $B$ of $b$ many edges in the graph $K_{a, a}$. Then by inclusion-exclusion the number of perfect matchings that do not have any edges from $B$ is
\[
h(a, b) := \sum_{w=0}^b (-1)^w \binom{b}{w} (a-w)!.
\]
Note that we always have $h(a, b) \le a!$. We will now observe that, roughly speaking, $h(a,b)\approx \frac{a!}{e}$ whenever $b$ is close to $a$ and $a \rightarrow \infty$. More formally, we have the following fact.
\begin{fact}\label{fact:hest}
Suppose $a-a^{1/10} \le b \le a$. Then we have
\[
h(a, b) = \rbrac{1 + O(a^{-4/5})} \frac{a!}{e}
\]
as $a \rightarrow \infty$.
\end{fact}
\begin{proof}
We have \begin{align}\label{eqn:h}
h(a,b)=\sum_{0 \le w \le b} (-1)^w \binom{b}{w} (a-w)!= a! \sum_{0 \le w \le b} \frac{(-1)^w}{w!}\frac{(b)_w}{(a)_w}.
\end{align}
Now, for $0 \le w \le a^{1/10}$ we have by Fact \ref{fact:edgeprob} that
\begin{align*}
\frac{(b)_w}{(a)_w} &= \rbrac{\frac ba}^w \exp\cbrac{ \frac{w^2}{2} \rbrac{\frac1a - \frac1b} + O\rbrac{\frac{w^3}{b^2} + \frac wb}}\\
&= \rbrac{1 + O\rbrac{a^{-9/10}}}^{O\rbrac{a^{1/10}}} \exp \cbrac{O(a^{-4/5})} = 1 + O(a^{-4/5}).
\end{align*}
Meanwhile for $w \ge a^{1/10}$ we have that the corresponding term in line \eqref{eqn:h} has absolute value
\begin{align*}
\frac{1}{w!}\frac{(b)_w}{(a)_w} \le \frac{1}{(a^{1/10})!} = \exp\cbrac{ - \Omega\rbrac{a^{1/10} \log a}}
\end{align*}
by Stirling's approximation. Thus, the sum of all such terms in line \eqref{eqn:h} is at most
\[
b \exp\cbrac{ - \Omega\rbrac{a^{1/10} \log a}} = O(a^{-4/5})
\]
(this bound is quite comfortable). By the Alternating Series Test we have that
\[
\sum_{0 \le w \le a^{1/10}} \frac{(-1)^w}{w!} = \frac 1e + O\rbrac{\frac{1}{(a^{1/10})!}} = \frac 1e + O(a^{-4/5}).
\]
Breaking up the sum for $h(a, b)$ we have \begin{align*}
h(a, b) &= a! \sbrac{\sum_{0 \le w \le a^{1/10}} \frac{(-1)^w}{w!}\frac{(b)_w}{(a)_w} + \sum_{a^{1/10}< w \le b} \frac{(-1)^w}{w!}\frac{(b)_w}{(a)_w}} \\
&= a! \sbrac{\rbrac{1 + O(a^{-4/5})} \sum_{0 \le w \le a^{1/10}} \frac{(-1)^w}{w!} + O(a^{-4/5}) } \\
&= \rbrac{1 + O(a^{-4/5})} \frac{a!}{e}.
\end{align*}
\end{proof}
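The inclusion-exclusion formula for $h(a,b)$ and the $a!/e$ asymptotics of Fact \ref{fact:hest} are easy to check numerically; the sketch below is a direct transcription of the definition, for illustration only.

```python
from math import comb, factorial, e

def h(a, b):
    """Number of perfect matchings of K_{a,a} avoiding a fixed partial
    matching of b edges, by inclusion-exclusion."""
    return sum((-1) ** w * comb(b, w) * factorial(a - w) for w in range(b + 1))

# h(a, a) is the classical derangement number D_a = round(a!/e) ...
assert h(12, 12) == round(factorial(12) / e)
# ... and h(a, b) ~ a!/e when b is close to a; for a = 50, b = a - 1 the
# relative deviation is about 1/a, well inside the O(a^{-4/5}) window.
assert abs(h(50, 49) / (factorial(50) / e) - 1) < 0.05
```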
We find that
\begin{align}
\mathbb{E}[X^2] &= \sum_{D,D' \in \mc{D}}\mathbb{P}[D,D'\subseteq G_{k, \l}(m)]=(k!)^\l\sum_{D' \in \mc{D}}\mathbb{P}[D_0,D'\subseteq G_{k, \l}(m)]\nonumber\\
&=(k!)^\l\sum_{b=0}^{k\l}\left[\frac{\binom{k^2\l-(2k \l-b)}{m-(2k \l-b)} }{\binom{k^2\l}{m} }\sum_{\vec{b} \in S_b}\prod_{c=1}^\l\binom{k}{b_c}h(k-b_c, k-b_c) \right].\label{eqn:EX2}
\end{align}
where $D_0 \in \mc{D}$ is an arbitrary fixed derangement (by symmetry the inner sum does not depend on this choice), and in the inner sum, $S_b$ is the set of $\l$-dimensional vectors $\vec{b} = (b_1, \ldots, b_\l)$ whose components are nonnegative integers summing to $b$.
By Fact \ref{fact:edgeprob}, if $b \le k^{1/10}$ then we have
\begin{align*}
\frac{\binom{k^2\l-(2k \l-b)}{m-(2k \l-b)} }{\binom{k^2\l}{m} } &= p^{2k\ell-b}\exp\left\{\frac{(2k\l-b)^2}{2}\left(\frac{1}{k^2\ell}-\frac{1}{m}\right)+O\left(\frac{k^3}{m^2}+\frac{k}{m}\right) \right\} \\
& = \rbrac{1 + O(k^{-9/10})} p^{2k\ell-b}\exp\left\{2\ell \left(1-\frac{1}{p}\right) \right\}
\end{align*}
and by Fact \ref{fact:hest} we have
$ h(k-b_c, k-b_c) =\rbrac{1 + O(k^{-4/5})} \frac{(k-b_c)!}{e}$. Therefore the term corresponding to $b$ in \eqref{eqn:EX2} is
\begin{align*}
\frac{\binom{k^2\l-(2k \l-b)}{m-(2k \l-b)} }{\binom{k^2\l}{m} }\sum_{\vec{b} \in S_b}\prod_{c=1}^\l & \binom{k}{b_c}h(k-b_c, k-b_c)\\
&= \rbrac{1 + O(k^{-4/5})}p^{2k\ell-b}\exp\left\{2\ell \left(1-\frac{1}{p}\right) \right\} \sum_{\vec{b}\in S_b}\prod_{c=1}^\l\binom{k}{b_c} \frac{(k-b_c)!}{e} \\
&= \rbrac{1 + O(k^{-4/5})}(k!)^{\l} p^{2k\ell-b}\exp\left\{2\ell \left(1-\frac{1}{p}\right) - \l \right\} \sum_{\vec{b}\in S_b} \prod_{c=1}^\l \frac{1}{b_c!} \\
&=\rbrac{1 + O(k^{-4/5})}(k!)^{\l} p^{2k\ell-b}\exp\left\{\ell \left(1-\frac{2}{p}\right) \right\} \frac{\l^b}{b!}
\end{align*}
where on the last line we used the multinomial formula. Meanwhile if $b \ge k^{1/10}$ then the term corresponding to $b$ in \eqref{eqn:EX2} is
\begin{align*}
\frac{\binom{k^2\l-(2k \l-b)}{m-(2k \l-b)} }{\binom{k^2\l}{m} }\sum_{\vec{b} \in S_b}\prod_{c=1}^\l & \binom{k}{b_c}h(k-b_c, k-b_c) \le \sum_{\vec{b}\in S_b}\prod_{c=1}^\l\binom{k}{b_c} (k-b_c)!\\
& = (k!)^\l \frac{\l^b}{b!} = (k!)^\l \cdot \exp\cbrac{-\Omega\rbrac{k^{1/10} \log k}}
\end{align*}
and so the sum of all terms in \eqref{eqn:EX2} with $b \ge k^{1/10}$ is at most
\[
k\l \cdot (k!)^\l \cdot \exp\cbrac{-\Omega\rbrac{k^{1/10} \log k}} = (k!)^\l \cdot O\rbrac{k^{-4/5}}.
\]
Therefore
\begin{align*}
\mathbb{E}[X^2] &=(k!)^\l\sum_{b=0}^{k\l}\left[\frac{\binom{k^2\l-(2k \l-b)}{m-(2k \l-b)} }{\binom{k^2\l}{m} }\sum_{\vec{b} \in S_b}\prod_{c=1}^\l\binom{k}{b_c}h(k-b_c, k-b_c) \right]\\
&= (k!)^\l \sbrac{\sum_{0 \le b \le k^{1/10} } \rbrac{1 + O(k^{-4/5})}(k!)^{\l} p^{2k\ell-b}\exp\left\{\ell \left(1-\frac{2}{p}\right) \right\} \frac{\l^b}{b!} + (k!)^\l \cdot O\rbrac{k^{-4/5}}}\\
& = \rbrac{1 + O(k^{-4/5})} (k!)^{2\l}p^{2k\l}\exp\left\{\ell \left(1-\frac{2}{p}\right) \right\} \sum_{0 \le b \le k^{1/10} } p^{-b} \frac{\l^b}{b!}\\
& = \rbrac{1 + O(k^{-4/5})} (k!)^{2\l}p^{2k\l}\exp\left\{\ell \left(1-\frac{2}{p}\right) \right\}\cdot \rbrac{\exp\left\{\frac{\l}{p} \right\} + O(k^{-4/5}) } \\
& = \rbrac{1 + O(k^{-4/5})} (k!)^{2\l}p^{2k\l}\exp\left\{\ell \left(1-\frac{1}{p}\right) \right\} \\
& \sim \mathbb{E}[X]^2
\end{align*}
\vspace{1cm}
For $\mathbb{E}[Y^2]$ we find an exact expression to be cumbersome, but the following upper bound will suffice:
\begin{align}
\mathbb{E}[Y^2] &\le \sum_{\substack{0 \le i, j \le k\\ 0 \le b \le k \l \\ \vec{b} \in S_b}} \frac{\binom{k^2\ell-(2k\ell-(i+j)\ell-b)}{m-(2k\ell-(i+j)\ell-b)} }{\binom{k^2\ell}{m} } \left(\frac{k!}{i!}\right)^\ell \prod_{c=1}^\ell \binom{k-i}{b_c}\binom{k-b_c}{j} h(k-j-b_c, k-i-b_c-2j) \label{eqn:EYsquared}
\end{align}
The term corresponding to a tuple $(i, j, b, \vec{b})$ above is an upper bound on the contribution to $\mathbb{E}[Y^2]$ due to pairs of permutations $(P, P')$ such that $P$ fixes $i$ vertices per part, $P'$ fixes $j$ vertices per part, and $P$ and $P'$ share a total of $b$ edges where $b_c$ of the shared edges are between $V_c$ and part $V_{c+1}$. The first factor is the edge probability, and the next factor is the number of choices for $P$. The next factor is an upper bound on the number of choices for $P'$. Indeed, we choose the edges of $P'$ from $V_c$ to $V_{c+1}$ by first choosing $b_c$ edges of $P$ to be shared, then we choose $j$ vertices in $V_c$ to be fixed by $P'$, and finally we choose a matching between the remaining vertices (the vertices of $V_c \cup V_{c+1}$ that are not fixed by $P'$ and are not endpoints of the $b_c$ shared edges already chosen). This matching must avoid any edges of $P$, and the vertices to be matched induce at least $k-i-b_c-2j$ edges of $P$, explaining the last factor above.
We will now estimate the significant terms in \eqref{eqn:EYsquared}. Assume $i, j, b \le k^{1/10}$. Then by Fact
\ref{fact:edgeprob} \begin{align*}
\frac{\binom{k^2\ell-(2k\ell-(i+j)\ell-b)}{m-(2k\ell-(i+j)\ell-b)} }{\binom{k^2\ell}{m} } &= p^{2k\ell-(i+j)\ell-b}\exp\left\{\frac{(2k\ell-(i+j)\ell-b)^2}{2k^2\l}\rbrac{1-\frac{1}{p}} +O\rbrac{\frac{1}{k}}\right\} \\
&= p^{2k\ell-(i+j)\ell-b}\exp\left\{2\ell\left(1-\frac{1}{p}\right) +O\rbrac{k^{-9/10}}\right\}.
\end{align*}
Next we estimate
\begin{align*}
h(k-j-b_c, k-i-b_c-2j) = \rbrac{1+O\rbrac{k^{-4/5}}} \frac{(k-j-b_c)!}{e}
\end{align*}
by Fact \ref{fact:hest}. So the product in \eqref{eqn:EYsquared} is
\begin{align}
\prod_{c=1}^\ell \binom{k-i}{b_c}\binom{k-b_c}{j} h(k-j-b_c, k-i-b_c-2j) & = \rbrac{1+O\rbrac{k^{-4/5}}} \prod_{c=1}^\ell \frac{(k-i)_{b_c}}{b_c!}\frac{(k-b_c)_j}{j!} \frac{(k-j-b_c)!}{e} \nonumber \\
& \le \rbrac{1+O\rbrac{k^{-4/5}}}\rbrac{\frac{k!}{ej!}}^\l \prod_{c=1}^\ell \frac{1}{b_c!}.\label{eqn:prodest1}
\end{align}
The sum of terms in \eqref{eqn:EYsquared} corresponding to small $i, j, b$ is at most
\begin{align}
&\rbrac{1+O\rbrac{k^{-4/5}}} \sum_{\substack{0 \le i, j, b \le k^{1/10} \\ \vec{b} \in S_b}} p^{2k\ell-(i+j)\ell-b}\exp\left\{2\ell\left(1-\frac{1}{p}\right) \right\} \left(\frac{k!}{i!}\right)^\ell \rbrac{\frac{k!}{ej!}}^\l \prod_{c=1}^\ell \frac{1}{b_c!}\nonumber \\
& =\rbrac{1+O\rbrac{k^{-4/5}}} \exp\left\{\ell\left(1-\frac{2}{p}\right) \right\}\sum_{0 \le i, j, b \le k^{1/10}} p^{2k\ell-(i+j)\ell-b} \left(\frac{k!}{i!}\right)^\ell \rbrac{\frac{k!}{j!}}^\l \frac{\l^b}{b!}\nonumber \\
& \le \rbrac{1+O\rbrac{k^{-4/5}}} (k!)^{2\l} p^{2k\l}\exp\left\{\ell\left(1-\frac{2}{p}\right) \right\}\sum_{0 \le i, j , b} \frac{p^{-i\ell} }{(i!)^\l} \cdot \frac{p^{-j\ell} }{(j!)^\l} \cdot \frac{(\l/p)^b}{b!} \nonumber\\
&= \rbrac{1+O\rbrac{k^{-4/5}}} (k!)^{2\l} p^{2k\l}\exp\left\{\ell\left(1-\frac{1}{p}\right) \right\} f_\l\rbrac{\frac1p}^2 \nonumber\\
&\sim \mathbb{E}[Y]^2 \nonumber
\end{align}
where on the second-to-last line we have used
\[
\sum_{0 \le i} \frac{p^{-i\ell} }{(i!)^\l} = f_\l\rbrac{\frac1p}, \qquad \qquad \sum_{0 \le b } \frac{(\l/p)^b}{b!} = \exp\cbrac{\frac {\l}{p}}.
\]
It remains to show that the sum of all other terms (i.e. terms where $i, j,$ or $b$ is at least $k^{1/10}$) is negligible compared to $\mathbb{E}[Y]^2$, which is of order $(k!)^{2\l} p^{2k\l}$. Note that by Fact
\ref{fact:edgeprob} \begin{align*}
\frac{\binom{k^2\ell-(2k\ell-(i+j)\ell-b)}{m-(2k\ell-(i+j)\ell-b)} }{\binom{k^2\ell}{m} } &= p^{2k\ell-(i+j)\ell-b}\exp\left\{\frac{(2k\ell-(i+j)\ell-b)^2}{2k^2\l}\rbrac{1-\frac{1}{p}} +O\rbrac{\frac{1}{k}}\right\} \\
&= O\rbrac{p^{2k\ell-(i+j)\ell-b}}.
\end{align*}
Thus, the sum (over $\vec{b}$) of terms corresponding to a fixed triple $(i, j, b)$ in line \eqref{eqn:EYsquared} is big-O of
\begin{align*}
& p^{2k\ell-(i+j)\ell-b} \sum_{\vec{b} \in S_b} \left(\frac{k!}{i!}\right)^\ell \prod_{c=1}^\ell \binom{k-i}{b_c}\binom{k-b_c}{j} (k-j-b_c)! \nonumber\\
& \le p^{2k\ell-(i+j)\ell-b} \left(\frac{k!}{i!}\right)^\ell \left(\frac{k!}{j!}\right)^\ell \sum_{\vec{b} \in S_b} \prod_{c=1}^\ell \frac{1}{b_c!} \nonumber\\
& =\rbrac{(k!)^{2\l} p^{2k\l} } \cdot \rbrac{ \frac{\l^b}{p^{(i+j)\ell+b}(i!)^\ell(j!)^\l b!} }.
\end{align*}
It is easy to see that if $i, j$ or $b$ is at least $k^{1/10}$ then the second factor above is $\exp\cbrac{-\Omega\rbrac{k^{1/10} \log k}}$. Since the number of triples $(i, j, b)$ is polynomial in $k$, the sum of all such terms (i.e. where $i, j$ or $b$ is at least $k^{1/10}$) is $o\rbrac{(k!)^{2\l} p^{2k\l} }$ which is a negligible contribution to $\mathbb{E}[Y^2]$. Therefore we have $\mathbb{E}[Y^2]\sim \mathbb{E}[Y]^2.$
\section{Remarks and Open Problems}
The reader should note that we did not use a ``binomial" random construction (e.g. keep each edge of $D_{k, \ell}$ with probability $p$ independently) because such a model lacks the concentration we need here. Indeed, for example Janson (\cite{J94}) showed that the number of perfect matchings in $G(n, p)$ is not concentrated even when it is quite large, while the number of perfect matchings of $G(n, m)$ is concentrated. We tried to use a binomial random construction and found that the second moments were too large, which in light of Janson's result makes sense (for example derangements in our graph are just a union of several perfect matchings on bipartite graphs).
There are still interesting open problems in \cite{BDHHL}. In particular it is still open whether $S$, the set of possible ratios $(d/p)_G$, is equal to $\mathbb{Q} \cap [0, 1/2]$. Here we would like to pose another open problem that is mostly unrelated to our result. In particular, we ask about stability for digraphs whose ratio $(d/p)_G$ is close to $1/2$: do such digraphs have to resemble the blow-up of a directed cycle?
\bibliographystyle{abbrv}
| {
"timestamp": "2021-01-11T02:14:09",
"yymm": "2101",
"arxiv_id": "2101.02995",
"language": "en",
"url": "https://arxiv.org/abs/2101.02995",
"abstract": "A permutation in a digraph $G=(V, E)$ is a bijection $f:V \\rightarrow V$ such that for all $v \\in V$ we either have that $f$ fixes $v$ or $(v, f(v)) \\in E$. A derangement in $G$ is a permutation that does not fix any vertex. In [1] it is proved that in any digraph, the ratio of derangements to permutations is at most $1/2$. Answering a question posed in [1], we show that the set of possible ratios of derangements to permutations in digraphs is dense in the interval $[0, 1/2]$.",
"subjects": "Combinatorics (math.CO)",
"title": "The set of ratios of derangements to permutations in digraphs is dense in $[0, 1/2]$",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9859363725435203,
"lm_q2_score": 0.8152324826183822,
"lm_q1q2_score": 0.8037673566924162
} |
https://arxiv.org/abs/2112.02556 | Windmills of the minds: an algorithm for Fermat's Two Squares Theorem | The two squares theorem of Fermat is a gem in number theory, with a spectacular one-sentence "proof from the Book". Here is a formalisation of this proof, with an interpretation using windmill patterns. The theory behind involves involutions on a finite set, especially the parity of the number of fixed points in the involutions. Starting as an existence proof that is non-constructive, there is an ingenious way to turn it into a constructive one. This gives an algorithm to compute the two squares by iterating the two involutions alternatively from a known fixed point. | \section{Introduction}
\label{sec:introduction}
Fermat's two squares theorem, dated back to 1640, states that a prime $n$ that is one more than a multiple of $4$ can be uniquely expressed as a sum of odd and even squares (Section~\ref{sec:sum-of-two-squares}, Theorem~\ref{thm:fermat-two-squares-thm}).
Of the many proofs of this classical number theory result, this one-sentence proof by Zagier~\cite{Zagier-1990-acm} caused a sensation in 1990:
\bigskip
\begin{mquote}
\textit{
\\
The involution on the finite set
\[
S = \{(x,y,z) \in \mathbb{N}^{3} \mid \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}} \}
\]
defined by
\begin{equation}
\label{eqn:zagier-map}
\HOLinline{(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})} \longmapsto
\begin{cases}
(x + 2 z,z,y - z - x) & \text{if }x < y - z \\
(2 y - x,y,x + z - y) & \text{if }y - z < x < 2y\\
(x - 2 y,x + z - y,y) & \text{if }x > 2y
\end{cases}
\end{equation}
has exactly one fixed point, so \HOLinline{\ensuremath{|}\HOLFreeVar{S}\ensuremath{|}} is odd, and the involution defined by
$\HOLinline{(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})} \longmapsto \HOLinline{(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{z}\HOLSymConst{,}\HOLFreeVar{y})}$
also has a fixed point.
\\
}
\end{mquote}
Those who are perplexed by this multi-line sentence are not alone.
Even for readers who know that an involution is a self-inverse function, and that its fixed points are the values it leaves unchanged, the proof is not obvious at a glance!
This theorem is listed as number 20 in Formalizing 100 Theorems~\cite{Wiedijk-2020}, and there are many formal proofs of it.
Some are based on textbook proofs, others follow the ideas in Zagier's proof.
All show the existence of the two squares, but only a few (Coq~\cite{Thery-2004} and Lean~\cite{Hughes-2019}) include the uniqueness part.
Therefore, a formalisation of this one-sentence proof, in a constructive way, is an interesting exercise in theorem-proving.
As a bonus, the exercise is a path of discovery due to recent progress in understanding this proof.
As Don Zagier remarked after the one sentence, his proof was a condensed version of a 1984 proof by Roger Heath-Brown~\cite{Heath-Brown-1984-acm}, who in turn acknowledged prior work in number theory taken up by Joseph Liouville~\cite{Williams-2010-alt}.
This one-sentence proof invokes two involutions: the second one is obvious, but the first one in Equation~\eqref{eqn:zagier-map} has been called ``black magic''~\cite{Trimble-Lama-2008}. The algebraic formulation of this involution has been given a geometric interpretation by Alexander Spivak~\cite{Spivak-2007} in 2007. These are the windmills (Section~\ref{sec:windmills}). They explain why the magic works, and suggest an interplay of the involutions to identify fixed points of each other. Moreover, this provides an algorithm to find the two squares in Fermat's theorem.
Thus the one-sentence proof can be made constructive, as elucidated by Zagier~\cite{Zagier-2013-acm} in 2013.
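Zagier's map \eqref{eqn:zagier-map} can also be explored computationally. The Python sketch below (illustrative only; the paper's formal development is in HOL4) transcribes the three cases, enumerates the set $S$ for a small prime, and checks the claims of the one-sentence proof: the map is an involution on $S$, $|S|$ is odd, and $(1,1,k)$ is its unique fixed point.

```python
def zagier(t):
    """The involution of equation (1), transcribed case by case; the boundary
    cases x = y - z and x = 2y cannot occur when n is prime."""
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - z - x)
    elif x < 2 * y:
        return (2 * y - x, y, x - y + z)
    else:
        return (x - 2 * y, x - y + z, y)

def windmill_set(n):
    """Enumerate S = {(x, y, z) : n = x^2 + 4yz} for a prime n = 1 (mod 4)
    (all coordinates are then automatically positive)."""
    triples = []
    for x in range(1, n):
        r = n - x * x
        if r <= 0:
            break
        if r % 4 == 0:
            m = r // 4
            triples.extend((x, y, m // y) for y in range(1, m + 1) if m % y == 0)
    return triples

n = 29                                    # prime, n = 4k + 1 with k = 7
S = windmill_set(n)
assert len(S) % 2 == 1                    # |S| is odd
assert all(zagier(zagier(t)) == t for t in S)           # an involution on S
assert [t for t in S if zagier(t) == t] == [(1, 1, 7)]  # unique fixed point
```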
\subsection{Contribution}
\label{sec:contribution}
This paper gives the first formal proof of an algorithm to compute the two squares in Fermat's two squares theorem, by following a constructive version of Zagier's proof in HOL4.
As noted before, Zagier's proof has been formalised, in HOL Light~\cite{Harrison-2010}, in NASA PVS~\cite{Narkawicz-2012-acm} and in Coq~\cite{Dubach-Muehlboeck-2021-acm}, although not in this constructive form.
All the ideas used in this paper can be found in Shiu~\cite{Shiu-1996} and Zagier~\cite{Zagier-2013-acm}.
The novel feature of this work is an elegant and pictorial approach to our formalisation.
The emphasis is on providing formal definitions and developing appropriate theories, not only for the present work, but also for supporting further work.
\subsection{Overview}
\label{sec:overview}
Major features in this formalisation are:
\begin{itemize}[leftmargin=*]
\item the groundwork for Zagier's proof in Section~\ref{sec:sum-of-two-squares},
\item the two involutions for windmills in Section~\ref{sec:windmill-involutions},
\item the existence and uniqueness of two squares in Section~\ref{sec:two-squares-theorem},
\item an algorithm to compute the two squares in Section~\ref{sec:algorithm},
\item theories of involutions and iterations in Section~\ref{sec:orbits}, and
\item a correctness proof of our algorithm in Section~\ref{sec:correctness}.
\end{itemize}
After a review of the work done, we conclude in Section~\ref{sec:conclusion}.
\subsection{Notation}
\label{sec:notations}
Statements starting with a turnstile ($\vdash$) are HOL4 theorems,
automatically pretty-printed to \LaTeX{} from the relevant \text{theory} in the HOL4 development.
Generally, our notation allows an appealing combination of quantifiers ($\forall, \exists, \exists{!}$),
logical connectives (\HOLTokenConj{} for ``and'', \HOLTokenDisj{} for ``or'', \HOLTokenNeg{} for ``not'', also \HOLTokenImp{} for ``implies'' and \HOLTokenEquiv{} for ``if and only if''),
set theory ($\in$ for ``element of'', $\times$ for Cartesian product, and comprehensions such as \HOLinline{\HOLTokenLeftbrace{}\HOLBoundVar{x}\;\HOLTokenBar{}\;\HOLBoundVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLNumLit{6}\HOLTokenRightbrace{}}),
and functional programming (\HOLTokenLambda{} for abstraction, and juxtaposition for application).
Repeated application of a function $f$ is indicated by exponents,
\textit{e.g.}, \HOLinline{\HOLFreeVar{f}\;(\HOLFreeVar{f}\;(\HOLFreeVar{f}\;\HOLFreeVar{x}))\;\HOLSymConst{=}\;\HOLFreeVar{f}\ensuremath{\sp{\HOLNumLit{3}}(\HOLFreeVar{x})}}.
For a function $f$ from set $S$ to set $T$,
we write \HOLinline{\HOLFreeVar{f}\;\ensuremath{:}\;\HOLFreeVar{S}\;\ensuremath{\leftrightarrow}\;\HOLFreeVar{T}} to mean a bijection.
The empty set is denoted by \HOLinline{\HOLSymConst{\HOLTokenEmpty{}}},
and a finite set, denoted by \HOLinline{\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}},
has cardinality \HOLinline{\ensuremath{|}\HOLFreeVar{S}\ensuremath{|}}.
The set of natural numbers is denoted by $\mathbb{N}$, counting from $0$,
and \HOLinline{\HOLConst{count}\;\HOLFreeVar{n}}\;\HOLTokenDefEquality{}\;\HOLinline{\HOLTokenLeftbrace{}\HOLBoundVar{x}\;\HOLTokenBar{}\;\HOLBoundVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{n}\HOLTokenRightbrace{}},
where \HOLTokenDefEquality{} means `equality by definition'.
For a natural number $n \in \mathbb{N}$,
\HOLinline{\HOLConst{square}\;\HOLFreeVar{n}} means it is a square: \HOLinline{\HOLSymConst{\HOLTokenExists{}}\HOLBoundVar{k}.\;\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLBoundVar{k}\HOLSymConst{\ensuremath{\sp{2}}}},
\HOLinline{\HOLConst{prime}\;\HOLFreeVar{n}} means it is a prime, and
\HOLinline{\HOLConst{\HOLConst{even}}\;\HOLFreeVar{n}} or \HOLinline{\HOLConst{\HOLConst{odd}}\;\HOLFreeVar{n}} denotes its parity.
The integer quotient and remainder of $m$ divided by $n$ are written as \HOLinline{\HOLFreeVar{m}\;\HOLConst{\HOLConst{div}}\;\HOLFreeVar{n}} and \HOLinline{\HOLFreeVar{m}\;\HOLConst{\HOLConst{mod}}\;\HOLFreeVar{n}}, respectively.
We write \HOLinline{\HOLFreeVar{n}\;\HOLConst{\ensuremath{\mid}}\;\HOLFreeVar{m}} when $n$ divides $m$,
which is equivalent to \HOLinline{\HOLFreeVar{m}\;\ensuremath{\equiv}\;\HOLNumLit{0}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLFreeVar{n}\ensuremath{)}} when \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}}.
These are basic notations. Others will be introduced as they first appear.
\paragraph*{HOL4 Sources}
\ifdefined
Proof scripts are located in a repository at {\small\url{https://bitbucket.org/jhlchan/project/src/master/fermat/twosq/}}.
The scripts are compiled using HOL4, version \texttt{af01322db666}.
In this paper, each theorem has \emph{\script{windmill}{197}},
which is hyperlinked to the appropriate line of the corresponding proof script in the repository.
\else
Proof scripts of all theorems are located in a repository (omitted for anonymous review).
The scripts are compiled using HOL4, version \texttt{6dcb52a09341}.
In this paper, each theorem has \emph{\script{windmill}{197}},
which is hyperlinked to the appropriate line of the corresponding proof script in the repository.
(For anonymous review, this feature has been removed. See Appendix~\ref{app:cross-reference-theorems} for a cross-reference of theorems in this paper and the proof scripts in supplementary material.)
\fi
\section{Sum of Two Squares}
\label{sec:sum-of-two-squares}
The only even prime is \HOLinline{\HOLNumLit{2}\;\HOLSymConst{=}\;\HOLNumLit{1}\ensuremath{{\sp{\HOLNumLit{2}}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}\ensuremath{{\sp{\HOLNumLit{2}}}}}, a sum of two squares.
An odd prime, upon division by 4, leaves a remainder of either $1$ or $3$.
Only an odd prime of the first type can be expressed as a sum of two squares,
as supported by numerical evidence from Table~\ref{tbl:odd-primes-sum}.
\begin{table}[h]
\caption{Examples of odd primes that can be expressed as a sum of two squares.}
\Description{This table provides examples of odd primes that can be expressed as a sum of two squares.}
\label{tbl:odd-primes-sum}
\[
\begin{array}{r@{\quad\ee\quad}l@{\quad\ee\quad}l}
5 & 4(1) + 1 & 1^{2} + 2^{2}\\
13 & 4(3) + 1 & 3^{2} + 2^{2}\\
17 & 4(4) + 1 & 1^{2} + 4^{2}\\
29 & 4(7) + 1 & 5^{2} + 2^{2}\\
37 & 4(9) + 1 & 1^{2} + 6^{2}\\
41 & 4(10) + 1 & 5^{2} + 4^{2}\\
53 & 4(13) + 1 & 7^{2} + 2^{2}\\
61 & 4(15) + 1 & 5^{2} + 6^{2}\\
\end{array}
\]
\end{table}
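The pattern of Table~\ref{tbl:odd-primes-sum} is easy to test by brute force. As an informal cross-check, outside the HOL4 development, the following Python sketch confirms that an odd prime below $100$ is a sum of two squares exactly when it leaves remainder $1$ upon division by $4$:

```python
from math import isqrt

def is_prime(n):
    """Trial division: n is prime when no d in 2..isqrt(n) divides it."""
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

def sum_of_two_squares(n):
    """Return (a, b) with n == a*a + b*b, or None if no such pair exists."""
    for a in range(isqrt(n) + 1):
        b = isqrt(n - a * a)
        if a * a + b * b == n:
            return (a, b)
    return None

# An odd prime is a sum of two squares exactly when it is 1 mod 4.
for p in range(3, 100, 2):
    if is_prime(p):
        assert (sum_of_two_squares(p) is not None) == (p % 4 == 1)
```

Such a finite check is, of course, evidence rather than proof.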
\noindent
Pierre de Fermat, in a letter to Marin Mersenne on Christmas day 1640,
claimed that he had an ``irrefutable'' proof of this:
\begin{theorem}[\textbf{Two Squares Theorem}]
\label{thm:fermat-two-squares-thm}
\script{twoSquares}{619}
A prime $n$ can be expressed uniquely as the sum of an odd square and an even square
if and only if \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}} for some $k$.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{prime}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;(\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}\;\HOLSymConst{\HOLTokenEquiv{}}\\
\;\;\;\;\;\;\;\;\;\;\HOLSymConst{\HOLTokenUnique{}}(\HOLBoundVar{u}\HOLSymConst{,}\HOLBoundVar{v}).\;\HOLConst{\HOLConst{odd}}\;\HOLBoundVar{u}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLConst{\HOLConst{even}}\;\HOLBoundVar{v}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLBoundVar{u}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLBoundVar{v}\HOLSymConst{\ensuremath{\sp{2}}})
\end{HOLmath}
\end{theorem}
\noindent
This paper concentrates on formalising an elementary proof of this result by Roger Heath-Brown, later simplified by Don Zagier.
As shown in his one-sentence proof in Section~\ref{sec:introduction},
the idea is this: look at the representations of $n$ not by squares, but in another form.
Consider the following set $S_{n}$ of triples $(x,y,z)$:
\begin{equation}
\label{eqn:mills-set}
S_{n} = \{(x,y,z) \in \mathbb{N}\times{\mathbb{N}}\times{\mathbb{N}}
\mid \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}} \}.
\end{equation}
For a prime $n$ of the form $4k + 1$, we have $(1,1,k) \in S_{n}$.
Thus the set $S_{n}$ is non-empty, and there are only finitely many triples in $S_{n}$.
A triple with \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{z}} will give \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{\sp{2}}}}, \textit{i.e.}, a sum of two squares.
If we can show that $S_{n}$ has only one such triple,
we have a proof of Fermat's Theorem~\ref{thm:fermat-two-squares-thm}, with both existence and uniqueness.
Meanwhile, some general theories will be developed as an exercise in formal proofs, so that they can be applied to similar problems.
In addition, we extend the theories to establish not only an algorithm, but also a proof of its correctness, to compute the two unique squares for primes of the form \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}}.
\begin{figure*}[h]
\begin{center}
\begin{tikzpicture}[scale=0.25]
\windmill{0}{0}{3}{8}{1}
\node at (1.5,2.2) {$x$};
\node at (4.5,4.5) {$y$};
\node at (8.5,3.5) {$z$};
\windmill{14}{0}{3}{4}{2}
\node at (15.5,2.2) {$x$};
\node at (16.0,5.5) {$y$};
\node at (18.5,4.2) {$z$};
\windmill{27}{-1}{5}{2}{2}
\node at (29.5,3.4) {$x$};
\node at (28.0,6.6) {$y$};
\node at (29.5,5.0) {$z$};
\end{tikzpicture}
\caption{Typical windmills, where \HOLinline{\HOLConst{windmill}\;\HOLFreeVar{x}\;\HOLFreeVar{y}\;\HOLFreeVar{z}\;\HOLSymConst{=}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}}. The rightmost one has \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{z}}.}
\Description{This figure shows typical windmills. The central square is x by x, the four rectangles arranged clockwise around the square are all y by z. The rightmost one has y and z equal.}
\label{fig:windmill-sample}
\end{center}
\end{figure*}
\subsection{Windmills}
\label{sec:windmills}
The following expression will be our main focus:
\begin{definition}
\label{def:windmill-def}
A windmill consists of a central square with four identical rectangular arms.
\begin{HOLmath}
\;\;\HOLConst{windmill}\;\HOLFreeVar{x}\;\HOLFreeVar{y}\;\HOLFreeVar{z}\;\HOLTokenDefEquality{}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}
\end{HOLmath}
\end{definition}
\noindent
Some typical windmills are shown in Figure~\ref{fig:windmill-sample}.
The first term \HOLinline{\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}} is given by a central square of side $x$,
and the second term \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}} is given by four arms, each a rectangle of width $y$ and height $z$, arranged clockwise around the square.
Therefore each triple in the set $S_{n}$ of Equation~\eqref{eqn:mills-set} can be represented by a windmill, that is,
each triple $(x,y,z)$ satisfies \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLConst{windmill}\;\HOLFreeVar{x}\;\HOLFreeVar{y}\;\HOLFreeVar{z}}.
Given a prime \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}},
we shall look for a windmill with four square arms (the one on the far right in Figure~\ref{fig:windmill-sample}),
\textit{i.e.}, \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{z}}, so that \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}} \ee \HOLinline{\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;(\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y})\HOLSymConst{\ensuremath{\sp{2}}}}.
First, we collect all triples $(x,y,z)$ which are solutions of \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}}:
\begin{definition}
\label{def:mills-def}
\noindent
The mills of a number is its set of windmill triples.
\begin{HOLmath}
\;\;\HOLConst{mills}\;\HOLFreeVar{n}\;\HOLTokenDefEquality{}\;\HOLTokenLeftbrace{}(\HOLBoundVar{x}\HOLSymConst{,}\HOLBoundVar{y}\HOLSymConst{,}\HOLBoundVar{z})\;\HOLTokenBar{}\;\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLConst{windmill}\;\HOLBoundVar{x}\;\HOLBoundVar{y}\;\HOLBoundVar{z}\HOLTokenRightbrace{}
\end{HOLmath}
\end{definition}
\noindent
This is the formal definition of the set $S_{n}$ of Equation~\eqref{eqn:mills-set}.
The conditions for a proper windmill, with all lengths nonzero, are:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLSymConst{\HOLTokenNeg{}}\HOLConst{square}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\ensuremath{\not\equiv}\;\HOLNumLit{0}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLSymConst{\HOLTokenForall{}}\HOLBoundVar{x}\;\HOLBoundVar{y}\;\HOLBoundVar{z}.\;(\HOLBoundVar{x}\HOLSymConst{,}\HOLBoundVar{y}\HOLSymConst{,}\HOLBoundVar{z})\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{mills}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenImp{}}\;\HOLBoundVar{x}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLBoundVar{y}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLBoundVar{z}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}
\end{HOLmath}
\begin{equation}
\label{eqn:mills-triple-nonzero}
\end{equation}
When $n$ is a square, say $n = x^{2}$, then $\HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}}(0)$ for any value of $y$.
This would make \HOLinline{\HOLConst{mills}\;\HOLFreeVar{n}} infinite. Otherwise:
\begin{theorem}
\label{thm:mills-finite}
\script{windmill}{714}
The number of windmills for a number $n$ is finite if and only if $n$ is not a square.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;(\HOLConst{mills}\;\HOLFreeVar{n})\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLSymConst{\HOLTokenNeg{}}\HOLConst{square}\;\HOLFreeVar{n}
\end{HOLmath}
\end{theorem}
\noindent
Given an odd $n$ that is not a square, we can determine all its windmill triples $(x,y,z)$ by noting that, since \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}} is even, $x$ must be odd, and $y$ and $z$ form the product \HOLinline{\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}\;\HOLSymConst{=}\;(\HOLFreeVar{n}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}})\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{4}}.
In Table~\ref{tbl:windmills-29} this is worked out for \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{29}}, using successive odd $x$ and factors for the product $yz$.
The corresponding windmills are shown in Figure~\ref{fig:windmills-29}.
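This enumeration is entirely mechanical. As an informal aid, a Python sketch (ours, independent of the HOL4 scripts) computes the windmill triples of an odd $n$ by exactly this method:

```python
def windmill_triples(n):
    """All (x, y, z) in positive integers with n = x*x + 4*y*z, for odd n."""
    triples = []
    x = 1  # x must be odd: n is odd while 4*y*z is even
    while x * x < n:
        if (n - x * x) % 4 == 0:
            yz = (n - x * x) // 4
            for y in range(1, yz + 1):       # y runs over the divisors of yz
                if yz % y == 0:
                    triples.append((x, y, yz // y))
        x += 2
    return triples
```

For $n = 29$ it returns the five triples of Table~\ref{tbl:windmills-29}.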
\begin{table*}[h]
\caption{Determine all the windmill triples of \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{29}}, by odd $x$ and factors of $yz$.}
\Description{This table shows how to determine all the windmill triples for n = 29, using odd x and factors of the product yz.}
\label{tbl:windmills-29}
\begin{tabular}{r@{\quad}@{\quad}r@{\quad\ee\quad}r@{\quad}@{\quad}l@{\quad}l}
odd $x$ & \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}} & \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}} & triple $(x,y,z)$ & comment\\
\hline
$1$ & $29 - 1^{2}$ & $28 \ee 4(7)$ & $(1,1,7), (1,7,1)$ & factors of $7$ are $1, 7$.\\
$3$ & $29 - 3^{2}$ & $20 \ee 4(5)$ & $(3,1,5), (3,5,1)$ & factors of $5$ are $1, 5$.\\
$5$ & $29 - 5^{2}$ & $4 \ee 4(1)$ & $(5,1,1)$ & factor of $1$ is $1$.\\
\end{tabular}
\end{table*}
\begin{figure*}[h]
\begin{center}
\begin{tikzpicture}[scale=0.22]
\draw[step=1, color=white!60!black] (0,0) grid (65,17);
\windmill{8}{8}{1}{1}{7}
\windmill{23}{7}{3}{1}{5}
\windmill{34}{6}{5}{1}{1}
\windmill{44}{7}{3}{5}{1}
\windmill{57}{8}{1}{7}{1}
\node at (4,1) {$(1,1,7)$};
\node at (22,1) {$(3,1,5)$};
\node at (36,1) {$(5,1,1)$};
\node at (45,1) {$(3,5,1)$};
\node at (55,1) {$(1,7,1)$};
\end{tikzpicture}
\caption{All the windmills of \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{29}}, determined from Table~\ref{tbl:windmills-29}.}
\Description{This figure shows all the windmills of n = 29, determined from the previous table.}
\label{fig:windmills-29}
\end{center}
\end{figure*}
When a number $n$ has the form \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}},
\[
n \ee 1^{2} + 4(1)k \ee \HOLinline{\HOLConst{windmill}\;\HOLNumLit{1}\;\HOLNumLit{1}\;\HOLFreeVar{k}},
\]
showing that \HOLinline{\HOLConst{mills}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLSymConst{\HOLTokenEmpty{}}}:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}\;\HOLSymConst{\HOLTokenImp{}}\;(\HOLNumLit{1}\HOLSymConst{,}\HOLNumLit{1}\HOLSymConst{,}\HOLFreeVar{n}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{4})\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{mills}\;\HOLFreeVar{n}
\end{HOLmath}
Moreover, when this form corresponds to a prime, this is the only triple $(x,y,z)$ with \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{=}\;\HOLFreeVar{y}}:
\begin{theorem}
\label{thm:mills-trivial-prime}
\script{windmill}{428}
For a prime of the form \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}}, the only windmill with the first and second parameters equal is \HOLinline{\HOLConst{windmill}\;\HOLNumLit{1}\;\HOLNumLit{1}\;\HOLFreeVar{k}}.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{prime}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLSymConst{\HOLTokenForall{}}\HOLBoundVar{x}\;\HOLBoundVar{z}.\;\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLConst{windmill}\;\HOLBoundVar{x}\;\HOLBoundVar{x}\;\HOLBoundVar{z}\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLBoundVar{x}\;\HOLSymConst{=}\;\HOLNumLit{1}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLBoundVar{z}\;\HOLSymConst{=}\;\HOLFreeVar{n}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{4}
\end{HOLmath}
\end{theorem}
\begin{proof}
Note that \HOLinline{\HOLFreeVar{k}\;\HOLSymConst{=}\;\HOLFreeVar{n}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{4}} for prime \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}}.
Consider \HOLinline{(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{mills}\;\HOLFreeVar{n}} with \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{=}\;\HOLFreeVar{y}}.
This implies,
\[
\HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLConst{windmill}\;\HOLFreeVar{x}\;\HOLFreeVar{x}\;\HOLFreeVar{z}} \ee \HOLinline{\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{x}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}\;\HOLSymConst{=}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{}}(\HOLFreeVar{x}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z})}.
\]
Therefore \HOLinline{\HOLFreeVar{x}\;\HOLConst{\ensuremath{\mid}}\;\HOLFreeVar{n}}, so $x = 1$ or $x = n$.
As prime $n$ is not a square, \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{n}}, ruling out $x = n$.
Hence \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{=}\;\HOLNumLit{1}}, so \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLNumLit{1}}, and \HOLinline{\HOLFreeVar{z}\;\HOLSymConst{=}\;\HOLFreeVar{k}}.
\end{proof}
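As a numerical sanity check of Theorem~\ref{thm:mills-trivial-prime}, again outside the formal development, a brute-force Python sketch confirms the uniqueness for small primes of the form $4k + 1$:

```python
def triples_with_x_eq_y(n):
    """All (x, x, z) with n = x*x + 4*x*z = x * (x + 4*z), x and z nonzero."""
    return [(x, x, z)
            for x in range(1, n)
            for z in range(1, n)
            if x * (x + 4 * z) == n]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

# For every prime n = 4k + 1 below 100, the only such triple is (1, 1, k).
for n in range(5, 100, 4):
    if is_prime(n):
        assert triples_with_x_eq_y(n) == [(1, 1, n // 4)]
```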
\subsection{Involution}
\label{sec:involution}
We are going to study involutions on \HOLinline{\HOLConst{mills}\;\HOLFreeVar{n}}, the set of windmills for $n$.
A function $f$ is an involution on a set $S$, denoted by \HOLinline{\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}}, when it is its own inverse:
\begin{HOLmath}
\;\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLTokenDefEquality{}\;\HOLSymConst{\HOLTokenForall{}}\HOLBoundVar{x}.\;\HOLBoundVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenImp{}}\;\HOLFreeVar{f}\;\HOLBoundVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;(\HOLFreeVar{f}\;\HOLBoundVar{x})\;\HOLSymConst{=}\;\HOLBoundVar{x}
\end{HOLmath}
That is, $f$ is a bijection \HOLinline{\HOLFreeVar{f}\;\ensuremath{:}\;\HOLFreeVar{S}\;\ensuremath{\leftrightarrow}\;\HOLFreeVar{S}}, pairing up $x$ and~\HOLinline{\HOLFreeVar{f}\;\HOLFreeVar{x}}, both in $S$. When \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{=}\;\HOLFreeVar{f}\;\HOLFreeVar{x}}, the element $x$ is fixed by the involution $f$.
We define the following sets:
\begin{definition}
\label{def:involute-pairs-fixes-def}
The pairs and fixes of an involution $f$ on a set $S$.
\begin{HOLmath}
\;\;\HOLConst{pairs}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLTokenDefEquality{}\;\HOLTokenLeftbrace{}\HOLBoundVar{x}\;\HOLTokenBar{}\;\HOLBoundVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLBoundVar{x}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLBoundVar{x}\HOLTokenRightbrace{}\\
\;\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLTokenDefEquality{}\;\HOLTokenLeftbrace{}\HOLBoundVar{x}\;\HOLTokenBar{}\;\HOLBoundVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLBoundVar{x}\;\HOLSymConst{=}\;\HOLBoundVar{x}\HOLTokenRightbrace{}\\
\end{HOLmath}
\end{definition}
\noindent
Clearly they are disjoint.
The subset $\HOLinline{\HOLConst{pairs}\;\HOLFreeVar{f}\;\HOLFreeVar{S}}$ consists of distinct involute pairs, so its cardinality is even:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenImp{}}\;\HOLConst{\HOLConst{even}}\;\ensuremath{|}\HOLConst{pairs}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\ensuremath{|}
\end{HOLmath}
Since $S$ is the disjoint union of these two subsets, both \HOLinline{\ensuremath{|}\HOLFreeVar{S}\ensuremath{|}} and \HOLinline{\ensuremath{|}\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\ensuremath{|}} have the same parity. This leads to:
\begin{theorem}
\label{thm:involute-two-fixes-both-odd}
\script{involuteFix}{1182}
If two involutions act on the same finite set $S$, their fixes have the same parity.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;(\HOLConst{\HOLConst{odd}}\;\ensuremath{|}\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\ensuremath{|}\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLConst{\HOLConst{odd}}\;\ensuremath{|}\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}\ensuremath{|})
\end{HOLmath}
\end{theorem}
\noindent
We shall meet two such involutions on \HOLinline{\HOLConst{mills}\;\HOLFreeVar{n}}, a set which is finite for non-square $n$ (by Theorem~\ref{thm:mills-finite}).
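Theorem~\ref{thm:involute-two-fixes-both-odd} can be exercised on a toy example. The following Python sketch (ours, unrelated to the HOL4 scripts) takes two involutions on a small set of integers and compares their fixed points:

```python
def pairs(f, s):
    return {x for x in s if f(x) != x}

def fixes(f, s):
    return {x for x in s if f(x) == x}

s = set(range(-5, 6))      # {-5, ..., 5}, closed under negation
neg = lambda x: -x         # an involution: neg(neg(x)) == x
ident = lambda x: x        # the identity map, trivially an involution

# pairs always has even size, so |fixes| carries the parity of |s|,
# and any two involutions on s must agree on that parity.
assert len(pairs(neg, s)) % 2 == 0
assert len(fixes(neg, s)) % 2 == len(fixes(ident, s)) % 2
```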
\section{Windmill Involutions}
\label{sec:windmill-involutions}
Zagier's one-sentence proof rests on the interplay of two involutions on the set of windmills (\HOLinline{\HOLConst{mills}\;\HOLFreeVar{n}}) for a prime \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}}.
\subsection{Flip Map}
\label{sec:flip-map}
The first involution just swaps the $y$ and $z$ in the triple $(x,y,z)$:
\begin{definition}
\label{def:flip-def}
The flip map for a triple.
\begin{HOLmath}
\;\;\HOLConst{flip}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})\;\HOLTokenDefEquality{}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{z}\HOLSymConst{,}\HOLFreeVar{y})
\end{HOLmath}
\end{definition}
\noindent
The set \HOLinline{\HOLFreeVar{S}\;\HOLSymConst{=}\;\HOLConst{mills}\;\HOLFreeVar{n}} of windmill triples of a number $n$ can be partitioned by $y, z$ into:
\[
\begin{array}{c}
S_{y < z} \ee \{(x,y,z) \in S \mid y < z\}\\
S_{y \ee z} \ee \{(x,y,z) \in S \mid y \ee z\}\\
S_{y > z} \ee \{(x,y,z) \in S \mid y > z\}\\
\end{array}
\]
An example for \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{29}} is shown in Figure~\ref{fig:windmills-29-by-flip}.
Clearly there is a bijection: $\HOLConst{flip}\colon S_{y < z} \leftrightarrow S_{y > z}$,
and $S_{y \ee z} \ee \HOLinline{\HOLConst{fixes}\;\HOLConst{flip}\;\HOLFreeVar{S}}$.
Moreover, the inverse of \HOLConst{flip} is itself:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{flip}\;(\HOLConst{flip}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z}))\;\HOLSymConst{=}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})
\end{HOLmath}
showing that:
\begin{theorem}
\label{thm:flip-involute-mills}
\script{windmill}{933}
The flip map is an involution on the set of windmills.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{flip}\;\HOLConst{involute}\;\HOLConst{mills}\;\HOLFreeVar{n}
\end{HOLmath}
\end{theorem}
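Both the involution property and the partition can be checked concretely on the windmills of $n = 29$, whose triples were determined in Table~\ref{tbl:windmills-29} (a Python sketch, outside the formal development):

```python
def flip(t):
    x, y, z = t
    return (x, z, y)

# The five windmill triples of n = 29:
mills_29 = [(1, 1, 7), (3, 1, 5), (5, 1, 1), (3, 5, 1), (1, 7, 1)]

for t in mills_29:
    assert flip(t) in mills_29    # flip maps the set into itself
    assert flip(flip(t)) == t     # and is its own inverse

# Its fixed points are exactly the triples with y == z:
assert [t for t in mills_29 if flip(t) == t] == [(5, 1, 1)]
```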
\begin{figure*}[h]
\begin{center}
\begin{tikzpicture}[scale=0.22]
\draw[step=1, color=white!60!black] (0,0) grid (65,17);
\windmill{8}{8}{1}{1}{7}
\windmill{23}{7}{3}{1}{5}
\windmill{34}{6}{5}{1}{1}
\windmill{44}{7}{3}{5}{1}
\windmill{57}{8}{1}{7}{1}
\node at (4,1) {$(1,1,7)$};
\node at (22,1) {$(3,1,5)$};
\node at (36,1) {$(5,1,1)$};
\node at (45,1) {$(3,5,1)$};
\node at (55,1) {$(1,7,1)$};
\coordinate (a) at (0.5,0.5);
\coordinate (b) at (31,16.5);
\node[draw,thick,color=purple,fit= (a) (b),rounded corners=.55cm,inner sep=2pt] {};
\coordinate (a) at (32.5,0.5);
\coordinate (b) at (40,16.5);
\node[draw,thick,color=purple,fit= (a) (b),rounded corners=.55cm,inner sep=2pt] {};
\coordinate (a) at (41.5,0.5);
\coordinate (b) at (64.5,16.5);
\node[draw,thick,color=purple,fit= (a) (b),rounded corners=.55cm,inner sep=2pt] {};
\begin{scope}[scale=10]
\coordinate (a) at (1.0,1.1);
\coordinate (b) at (6.0,1.1);
\draw[<->] (a) to [bend left,looseness=0.8] node[midway,below] {\HOLConst{flip}} (b);
\coordinate (a) at (2.5,1.2);
\coordinate (b) at (4.6,1.2);
\draw[<->] (a) to [bend left,looseness=0.8] node[midway,below] {\HOLConst{flip}} (b);
\coordinate (a) at (3.6,0.5);
\coordinate (b) at (3.6,0.51);
\draw[<->] (a) to [out=-30, in=-150, looseness=200]
node[midway,above] {\HOLConst{flip}} (b);
\end{scope}
\end{tikzpicture}
\caption{Partition of windmills of \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{29}} for \HOLConst{flip}: those with $y < z, y \ee z$, and $y > z$. Note left and right pairing.}
\Description{This figure shows a partition of the windmills of n = 29 for the flip map: those with y < z, y = z, and y > z. Note the left and right pairing between y < z and y > z.}
\label{fig:windmills-29-by-flip}
\end{center}
\end{figure*}
\subsection{Zagier Map}
\label{sec:zagier-map}
The other involution is the one devised by Don Zagier, as shown in Equation~\eqref{eqn:zagier-map}:
\begin{definition}
\label{def:zagier-def}
The Zagier map for a triple.
\begin{HOLmath}
\;\;\HOLConst{zagier}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})\;\HOLTokenDefEquality{}\\
\;\;\;\;\HOLKeyword{if}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{y}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{z}\;\HOLKeyword{then}\;(\HOLFreeVar{x}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}\HOLSymConst{,}\HOLFreeVar{z}\HOLSymConst{,}\HOLFreeVar{y}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{z}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{x})\\
\;\;\;\;\HOLKeyword{else}\;\HOLKeyword{if}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\;\HOLKeyword{then}\;(\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{x}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{z}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{y})\\
\;\;\;\;\HOLKeyword{else}\;(\HOLFreeVar{x}\;\HOLSymConst{\ensuremath{-}}\;\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{x}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{z}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{y})
\end{HOLmath}
\end{definition}
\noindent
Algebraically, this is indeed an involution, as HOL4 can verify without a blink:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{z}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}\;\HOLSymConst{\HOLTokenImp{}}\;\HOLConst{zagier}\;(\HOLConst{zagier}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z}))\;\HOLSymConst{=}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})
\end{HOLmath}
\begin{equation}
\label{eqn:zagier-involute}
\end{equation}
That HOL4 can verify this directly from the definition is a showcase of its excellent algebraic simplifier, especially for natural numbers.
However, we would like to see the magic behind it, in terms of the geometry of windmills.
Note that this definition differs slightly from Equation~\eqref{eqn:zagier-map} since the else-parts include boundary cases.
They actually correspond to improper windmills, and they are irrelevant for the values of $n$ satisfying Equation~\eqref{eqn:mills-triple-nonzero}.
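Definition~\ref{def:zagier-def} transcribes directly into executable form. The following Python sketch (ours, mirroring the HOL4 definition) lets one check the involution property of Equation~\eqref{eqn:zagier-involute} on the windmill triples of $n = 29$:

```python
def zagier(t):
    # Transcription of the Zagier map definition above; with the guards
    # as given, every subtraction below is non-negative, so Python's
    # integers behave like HOL4's natural numbers here.
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - z - x)
    elif x < 2 * y:
        return (2 * y - x, y, x + z - y)
    else:
        return (x - 2 * y, x + z - y, y)

# The windmill triples of n = 29, from the table above:
mills_29 = [(1, 1, 7), (3, 1, 5), (5, 1, 1), (3, 5, 1), (1, 7, 1)]

for t in mills_29:
    assert zagier(t) in mills_29      # the map stays within the set
    assert zagier(zagier(t)) == t     # and is its own inverse
```

Among these five triples, only $(1, 1, 7)$, the one with $x = y$, is left fixed.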
\subsection{Mind of a Windmill}
\label{sec:windmill-mind}
The main purpose of introducing windmills is to read their minds.
\begin{figure*}[h]
\begin{center}
\begin{tikzpicture}[scale=0.28]
\windmill{0}{0}{3}{8}{1}
\windmill{14}{0}{3}{8}{1}
\windmill{27}{-1}{5}{1}{4}
\node at (1.5,2.2) {$x$};
\node at (5.0,4.5) {$y$};
\node at (8.5,3.5) {$z$};
\draw[decorate,thick,decoration={brace,amplitude=5pt,mirror,raise=2pt}]
(14,2.8) -- (17,2.8) node[midway,below,yshift=-7pt] {$x$};
\draw[decorate,thick,decoration={brace,amplitude=5pt,raise=2pt}]
(13,4.2) -- (18,4.2) node[midway,above,yshift=7pt] {$x'$};
\node at (29.5,3.4) {$x'$};
\node at (27.5,8.6) {$y'$};
\node at (28.5,6.2) {$z'$};
\mind{13}{-1}{5}
\end{tikzpicture}
\caption{A typical \HOLinline{\HOLConst{windmill}\;\HOLFreeVar{x}\;\HOLFreeVar{y}\;\HOLFreeVar{z}\;\HOLSymConst{=}\;\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}}, with a mind (in dashes) and transforms to another windmill.}
\Description{This figure shows a typical windmill, with a mind in dashes, on the left. The figure also illustrates how the left windmill transforms to another windmill on the right, with the same mind.}
\label{fig:windmill-mind}
\end{center}
\end{figure*}
Referring to Figure~\ref{fig:windmill-mind}, a windmill has a mind (marked in dashes in the middle), which is the largest central square, with side $x'$, that can be fitted with the four arms.
When \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLeq{}}\;\ensuremath{\HOLFreeVar{x}\sp{\prime{}}}}, the original square \HOLinline{\HOLFreeVar{x}\ensuremath{{\sp{\HOLNumLit{2}}}}} can grow to the mind \HOLinline{\ensuremath{\HOLFreeVar{x}\sp{\prime{}}}\ensuremath{{\sp{\HOLNumLit{2}}}}}, forming another windmill but keeping the overall shape (on the right).
Conversely, going from right to left, we can use the mind as a reference to shrink the square term from \HOLinline{\ensuremath{\HOLFreeVar{x}\sp{\prime{}}}\ensuremath{{\sp{\HOLNumLit{2}}}}} to \HOLinline{\HOLFreeVar{x}\ensuremath{{\sp{\HOLNumLit{2}}}}} by trimming four sides,
thereby restoring the arms to the original shape.
Transforming a windmill's square term through the mind is the geometric interpretation of Equation~\eqref{eqn:zagier-map}.
\begin{table*}[h]
\caption{The five cases of the Zagier map, transforming a triple $(x,y,z)$ to $(x',y',z')$.}
\Description{This table shows the five cases of the Zagier map, transforming a triple (x,y,z) to (x',y',z').}
\label{tbl:zagier-map}
\begin{tabular}{c@{\quad}l@{\quad}l@{\quad}r@{\quad}l@{\quad}r@{\quad}r@{\quad}r@{\quad}l@{\quad}l}
Case & Type & condition & Mind & Picture & $x'$ & $y'$ & $z'$ & condition & Type\\
\hline
$1$ & \multirow{ 2}{*}{$x < y$} & $x < y - z$ & $x + 2z$ & Figure~\ref{fig:zagier-map}~(a)
& $x + 2z$ & $z$ & $y - x - z$ & $2y' < x'$ & \multirow{ 2}{*}{$y' < x'$}\\
$2$ & & $y - z < x$ & $2y - x$ & Figure~\ref{fig:zagier-map}~(b)
& $2y - x$ & $y$ & $x + z - y$ & $x' < 2y'$ & \\
\hline
$3$ & $x = y$ & & $x$ & Figure~\ref{fig:zagier-map}~(c)
& $x$ & $y$ & $z$ & & $x' = y'$\\
\hline
$4$ & \multirow{ 2}{*}{$y < x$} & $x < 2y$ & $x$ & Figure~\ref{fig:zagier-map}~(d)
& $2y - x$ & $y$ & $x + z - y$ & $y' - z' < x'$ & \multirow{ 2}{*}{$x' < y'$}\\
$5$ & & $2y < x$ & $x$ & Figure~\ref{fig:zagier-map}~(e)
& $x - 2y$ & $x + z - y$ & $y$ & $x' < y' - z'$ & \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.3]
\windmill{0}{0}{3}{8}{2}
\windmill{15}{-2}{7}{2}{3}
\mind{-2}{-2}{7}
\mind{15}{-2}{7}
\node at (1.5,2.2) {$x$};
\node at (4.0,5.6) {$y$};
\node at (8.5,4.0) {$z$};
\draw[->,thick] (10,3) -- (12,3) node[midway,above] {\HOLConst{zagier}};
\node at (18.5,4.0) {$x'$};
\node at (26.0,4.0) {$y'$};
\node at (23.5,1.5) {$z'$};
\draw[color=red,ultra thick] (5,3) -- (8,3);
\draw[color=red,ultra thick] (22,3) -- (25,3);
\draw[decorate,thick,decoration={brace,amplitude=5pt,mirror,raise=2pt}]
(5,3) -- (8,3);
\draw[decorate,thick,decoration={brace,amplitude=5pt,mirror,raise=2pt}]
(22,3) -- (25,3);
\node at (0.8,10) {\large{(a) Case $1$: \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{y}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{z}}.}};
\node at (33,3) {$\large{
\begin{array}{r@{\;=\;}l}
x' & x + 2z\\
y' & z\\
z' & y - x - z\\
\end{array}
}$};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.3]
\windmill{0}{0}{3}{6}{4}
\windmillr{15}{-3}{9}{6}{1}
\mind{-3}{-3}{9}
\mind{15}{-3}{9}
\node at (1.5,2.2) {$x$};
\node at (3.5,7.5) {$y$};
\node at (6.5,4.5) {$z$};
\draw[->,thick] (10,3) -- (12,3) node[midway,above] {\HOLConst{zagier}};
\node at (19.5,5.0) {$x'$};
\node at (21.0,7.6) {$y'$};
\node at (17.0,6.8) {$z'$};
\draw[color=red,ultra thick] (0,6) -- (0,7);
\draw[color=red,ultra thick] (18,6) -- (18,7);
\draw[decorate,thick,decoration={brace,amplitude=2pt,raise=2pt}]
(0,6) -- (0,7);
\draw[decorate,thick,decoration={brace,amplitude=2pt,raise=2pt}]
(18,6) -- (18,7);
\node at (6,10) {};
\node at (6,9) {\large{(b) Case $2$: \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{z}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{x}} and \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{y}}, so \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}}.}};
\node at (33,3) {$\large{
\begin{array}{r@{\;=\;}l}
x' & 2y - x\\
y' & y\\
z' & x + z - y\\
\end{array}
}$};
\node at (0,-4) {};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.3]
\windmill{0}{0}{4}{4}{2}
\windmill{16}{0}{4}{4}{2}
\mind{0}{0}{4}
\mind{16}{0}{4}
\node at (2.2,3.5) {$x$};
\node at (2.2,6.5) {$y$};
\node at (4.5,5.2) {$z$};
\draw[->,thick] (10,3) -- (12,3) node[midway,above] {\HOLConst{zagier}};
\node at (18.0,3.5) {$x'$};
\node at (18.0,6.5) {$y'$};
\node at (14.8,5.2) {$z'$};
\draw[color=red,ultra thick] (0,4) -- (0,6);
\draw[color=red,ultra thick] (16,4) -- (16,6);
\draw[decorate,thick,decoration={brace,amplitude=4pt,raise=2pt}]
(0,4) -- (0,6);
\draw[decorate,thick,decoration={brace,amplitude=4pt,raise=2pt}]
(16,4) -- (16,6);
\node at (7,9) {};
\node at (6.5,8) {\large{(c) Case $3$: \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{x}}, so \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{z}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{x}} and \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}}.}};
\node at (33,3) {$\large{
\begin{array}{r@{\;=\;}l}
x' & 2y - x = x\\
y' & y\\
z' & x + z - y = z\\
\end{array}
}$};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.3]
\windmill{0}{0}{6}{4}{2}
\windmillr{18}{2}{2}{4}{4}
\mind{0}{0}{6}
\mind{16}{0}{6}
\node at (3.0,5.5) {$x$};
\node at (2.5,8.5) {$y$};
\node at (4.5,6.8) {$z$};
\draw[->,thick] (10,3) -- (12,3) node[midway,above] {\HOLConst{zagier}};
\node at (19.0,3.6) {$x'$};
\node at (18.0,8.6) {$y'$};
\node at (14.5,6.0) {$z'$};
\draw[color=red,ultra thick] (0,4) -- (0,8);
\draw[color=red,ultra thick] (16,4) -- (16,8);
\draw[decorate,thick,decoration={brace,amplitude=5pt,raise=2pt}]
(0,4) -- (0,8);
\draw[decorate,thick,decoration={brace,amplitude=5pt,raise=2pt}]
(16,4) -- (16,8);
\node at (3,11) {};
\node at (3,10) {\large{(d) Case $4$: \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{x}}, but \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}}.}};
\node at (34,3) {$\large{
\begin{array}{r@{\;=\;}l}
x' & 2y - x\\
y' & y\\
z' & x + z - y\\
\end{array}
}$};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.3]
\windmill{0}{0}{7}{3}{2}
\windmill{19}{3}{1}{6}{3}
\mind{0}{0}{7}
\mind{16}{0}{7}
\node at (3.5,6.2) {$x$};
\node at (1.5,9.5) {$y$};
\node at (3.5,8.0) {$z$};
\draw[->,thick] (10,3) -- (12,3) node[midway,above] {\HOLConst{zagier}};
\node at (19.5,3.5) {$x'$};
\node at (22.5,7.8) {$y'$};
\node at (25.6,5.8) {$z'$};
\draw[color=yellow,ultra thick] (0,9) -- (3,9);
\draw[color=yellow,ultra thick] (4,-2) -- (7,-2);
\draw[color=yellow,ultra thick] (16,3) -- (19,3);
\draw[color=yellow,ultra thick] (20,4) -- (23,4);
\draw[decorate,thick,decoration={brace,amplitude=5pt,mirror,raise=2pt}]
(0,9) -- (3,9);
\draw[decorate,thick,decoration={brace,amplitude=5pt,mirror,raise=2pt}]
(4,-2) -- (7,-2);
\draw[decorate,thick,decoration={brace,amplitude=5pt,mirror,raise=2pt}]
(16,3) -- (19,3);
\draw[decorate,thick,decoration={brace,amplitude=5pt,mirror,raise=2pt}]
(20,4) -- (23,4);
\node at (3,12) {};
\node at (3,11) {\large{(e) Case $5$: \HOLinline{\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{x}}, so \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{x}}.}};
\node at (34,3) {$\large{
\begin{array}{r@{\;=\;}l}
x' & x - 2y\\
y' & x + z - y\\
z' & y\\
\end{array}
}$};
\end{tikzpicture}
\caption{All five cases of the Zagier map, from $(x,y,z)$ to $(x',y',z')$ through the mind of a windmill.}
\Description{This figure shows all the 5 cases of the Zagier map, transform through the mind of a windmill.}
\label{fig:zagier-map}
\end{center}
\end{figure*}
The Zagier map transforms $(x,y,z)$ to $(x',y',z')$ via the mind of the windmill, keeping its overall shape.
There are three types, depending on whether $x < y$, $x = y$, or $y < x$.
Both the first and last types are divided into two cases, as the geometry for the mind is different.
Altogether there are five cases, as analysed in Table~\ref{tbl:zagier-map}, and illustrated in Figure~\ref{fig:zagier-map}.\footnote{Dubach and Muehlboeck~\cite{Dubach-Muehlboeck-2021-acm} also identified five types for windmills.}
Although five cases of the Zagier map have been identified, note that the transformation rule:
\[
(x',y',z') \ee (2y - x, y, x + z - y)
\]
happens to be the same for case $2$ and case $4$.
The same rule actually applies to case $3$, which has \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{=}\;\HOLFreeVar{y}}.
Thus the Zagier map can be succinctly expressed as in Definition~\ref{def:zagier-def} with only three branches.
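Outside the prover, the three-branch definition can be exercised directly. The following Python sketch is a transliteration for illustration only; the name \texttt{zagier} and the branch rules mirror Definition~\ref{def:zagier-def}, and plain integer arithmetic stands in for HOL's natural-number subtraction (the two agree here because all triple components are positive):

```python
def zagier(t):
    """A sketch of the three-branch Zagier map on windmill triples."""
    x, y, z = t
    if x < y - z:                        # small central square
        return (x + 2 * z, z, y - x - z)
    elif x < 2 * y:                      # the rule shared by cases 2, 3 and 4
        return (2 * y - x, y, x + z - y)
    else:                                # case 5
        return (x - 2 * y, x + z - y, y)
```

For example, on the windmills of $29$ the map sends $(1,7,1)$ to $(3,1,5)$ and fixes $(1,1,7)$, matching the figures below.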
Moreover, we can define the mind of a windmill triple as (see Table~\ref{tbl:zagier-map}):
\begin{HOLmath}
\;\;\HOLConst{mind}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})\;\HOLTokenDefEquality{}\\
\;\;\;\;\HOLKeyword{if}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{y}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{z}\;\HOLKeyword{then}\;\HOLFreeVar{x}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{z}\\
\;\;\;\;\HOLKeyword{else}\;\HOLKeyword{if}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{y}\;\HOLKeyword{then}\;\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\;\HOLSymConst{\ensuremath{-}}\;\HOLFreeVar{x}\\
\;\;\;\;\HOLKeyword{else}\;\HOLFreeVar{x}
\end{HOLmath}
and verify that the mind is an invariant under the Zagier map for any triple:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{mind}\;(\HOLConst{zagier}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z}))\;\HOLSymConst{=}\;\HOLConst{mind}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})
\end{HOLmath}
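As a concrete check of this invariance outside HOL4, the two definitions can be transliterated and tested on the windmills of $29$ (a Python sketch, for illustration only; positive components make integer subtraction agree with natural-number subtraction):

```python
def zagier(t):
    # three-branch Zagier map, transliterated from the HOL definition
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - x - z)
    elif x < 2 * y:
        return (2 * y - x, y, x + z - y)
    else:
        return (x - 2 * y, x + z - y, y)

def mind(t):
    # the mind of a windmill triple, as defined above
    x, y, z = t
    if x < y - z:
        return x + 2 * z
    elif x < y:
        return 2 * y - x
    else:
        return x

# the mind is unchanged by the Zagier map, e.g. on the windmills of 29
triples = [(1, 1, 7), (1, 7, 1), (3, 1, 5), (3, 5, 1), (5, 1, 1)]
invariant = all(mind(zagier(t)) == mind(t) for t in triples)
```

The computed minds $1, 3, 3, 5, 5$ also exhibit the pairing by minds visible in Figure~\ref{fig:windmills-29-by-zagier}.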
Referring again to Table~\ref{tbl:zagier-map},
the windmills in \HOLinline{\HOLFreeVar{S}\;\HOLSymConst{=}\;\HOLConst{mills}\;\HOLFreeVar{n}} can be partitioned into three triple types:
\[
\begin{array}{c@{\quad}l}
S_{x < y} \ee \{(x,y,z) \in S \mid x < y\}
& \text{covering cases $1$ and $2$}\\
S_{x \ee y} \ee \{(x,y,z) \in S \mid x \ee y\}
& \text{covering case $3$}\\
S_{x > y} \ee \{(x,y,z) \in S \mid x > y\}
& \text{covering cases $4$ and $5$}\\
\end{array}
\]
Such a partition for \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{29}} is shown in Figure~\ref{fig:windmills-29-by-zagier}.
Table~\ref{tbl:zagier-map} also shows that, for triples with proper windmills:
\begin{itemize}[leftmargin=*]
\item a triple of case $1$ maps to case $5$ and vice versa,
\item a triple of case $2$ maps to case $4$ and vice versa, and
\item a triple of case $3$ maps to itself.
\end{itemize}
Therefore the Zagier map is its own inverse for proper triples.
Combining Equation~\eqref{eqn:zagier-involute} and Equation~\eqref{eqn:mills-triple-nonzero}
for the windmills of a prime, we have:
\begin{theorem}
\label{thm:zagier-involute-mills-prime}
\script{windmill}{1475}
The Zagier map is an involution on \HOLinline{\HOLConst{mills}\;\HOLFreeVar{n}} for a prime $n$.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{prime}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenImp{}}\;\HOLConst{zagier}\;\HOLConst{involute}\;\HOLConst{mills}\;\HOLFreeVar{n}
\end{HOLmath}
\end{theorem}
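The involution property is easy to observe numerically. The sketch below (Python, for illustration; \texttt{mills} enumerates the positive triples with $x^{2} + 4yz = n$, and \texttt{zagier} transliterates the three-branch map) checks that the map sends the set to itself and is self-inverse on it:

```python
import math

def zagier(t):
    # three-branch Zagier map (transliteration, illustration only)
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - x - z)
    elif x < 2 * y:
        return (2 * y - x, y, x + z - y)
    else:
        return (x - 2 * y, x + z - y, y)

def mills(n):
    """All triples (x, y, z) of positive integers with x^2 + 4yz = n."""
    s = set()
    for x in range(1, math.isqrt(n) + 1):
        r = n - x * x
        if r > 0 and r % 4 == 0:
            m = r // 4
            for y in range(1, m + 1):
                if m % y == 0:
                    s.add((x, y, m // y))
    return s
```

For the prime $29$ this recovers exactly the five windmills of the figures, and applying \texttt{zagier} twice returns each triple unchanged.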
\begin{figure*}[h]
\begin{center}
\begin{tikzpicture}[scale=0.22]
\draw[step=1, color=white!60!black] (0,0) grid (65,17);
\windmill{8}{8}{1}{1}{7}
\windmill{24}{8}{1}{7}{1}
\windmill{35}{7}{3}{5}{1}
\windmill{47}{7}{3}{1}{5}
\windmill{58}{6}{5}{1}{1}
\node at (4,1) {$(1,1,7)$};
\node at (22,1) {$(1,7,1)$};
\node at (36,1) {$(3,5,1)$};
\node at (46,1) {$(3,1,5)$};
\node at (60,1) {$(5,1,1)$};
\coordinate (a) at (0.5,0.5);
\coordinate (b) at (16.5,16.5);
\node[draw,thick,color=purple,fit= (a) (b),rounded corners=.55cm,inner sep=2pt] {};
\coordinate (a) at (18,0.5);
\coordinate (b) at (40,16.5);
\node[draw,thick,color=purple,fit= (a) (b),rounded corners=.55cm,inner sep=2pt] {};
\coordinate (a) at (41.5,0.5);
\coordinate (b) at (64.5,16.5);
\node[draw,thick,color=purple,fit= (a) (b),rounded corners=.55cm,inner sep=2pt] {};
\mind[ultra thick]{8}{8}{1}
\mind{23}{7}{3}
\mind{34}{6}{5}
\mind{47}{7}{3}
\mind{58}{6}{5}
\begin{scope}[scale=10]
\coordinate (a) at (2.5,1.2);
\coordinate (b) at (4.6,1.2);
\draw[<->] (a) to [bend left] node[midway,below] {\HOLConst{zagier}} (b);
\coordinate (a) at (3.4,0.5);
\coordinate (b) at (6.0,0.5);
\draw[<->] (a) to [bend right, looseness=0.8] node[midway,above] {\HOLConst{zagier}} (b);
\coordinate (a) at (1.2,0.6);
\coordinate (b) at (1.2,0.61);
\draw[<->] (a) to [out=-30, in=-150, looseness=250]
node[midway,above] {\HOLConst{zagier}} (b);
\end{scope}
\end{tikzpicture}
\caption{Partition of windmills of \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{29}} for \HOLConst{zagier}: those with $x \ee y, x < y$, and $x > y$. Note pairing by minds.}
\Description{This figure shows a partition of windmills of n = 29 for the Zagier map, those with x = y, x < y, and x > y. Note the pairing of minds between those x < y and x > y.}
\label{fig:windmills-29-by-zagier}
\end{center}
\end{figure*}
\section{Two Squares Theorem}
\label{sec:two-squares-theorem}
Now we have enough tools to formalise Fermat's two squares theorem.
\subsection{Existence of Two Squares}
\label{sec:existence}
For the Zagier map,
it is straightforward to verify, as indicated in Table~\ref{tbl:zagier-map}, that only a triple of case $3$ can map to itself:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}\;\HOLSymConst{\HOLTokenImp{}}\;(\HOLConst{zagier}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})\;\HOLSymConst{=}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLFreeVar{x}\;\HOLSymConst{=}\;\HOLFreeVar{y})
\end{HOLmath}
Hence $S_{x \ee y} \ee \HOLinline{\HOLConst{fixes}\;\HOLConst{zagier}\;(\HOLConst{mills}\;\HOLFreeVar{n})}$.
Applying Theorem~\ref{thm:mills-trivial-prime}, which characterises such triples, shows that for certain primes $S_{x \ee y}$ is a singleton:
\begin{theorem}
\label{thm:zagier-fixes-prime}
\script{twoSquares}{162}
A prime of the form \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}} has only $(1,1,k)$ fixed by the Zagier map.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{prime}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLConst{fixes}\;\HOLConst{zagier}\;(\HOLConst{mills}\;\HOLFreeVar{n})\;\HOLSymConst{=}\;\HOLTokenLeftbrace{}(\HOLNumLit{1}\HOLSymConst{,}\HOLNumLit{1}\HOLSymConst{,}\HOLFreeVar{n}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{4})\HOLTokenRightbrace{}
\end{HOLmath}
\end{theorem}
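This theorem can be spot-checked numerically for small primes of the form $4k + 1$ (a Python sketch, for illustration; \texttt{mills} and \texttt{zagier} are our transliterations of the HOL definitions):

```python
import math

def zagier(t):
    # three-branch Zagier map (transliteration)
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - x - z)
    elif x < 2 * y:
        return (2 * y - x, y, x + z - y)
    else:
        return (x - 2 * y, x + z - y, y)

def mills(n):
    # all positive triples (x, y, z) with x^2 + 4yz = n
    s = set()
    for x in range(1, math.isqrt(n) + 1):
        r = n - x * x
        if r > 0 and r % 4 == 0:
            m = r // 4
            for y in range(1, m + 1):
                if m % y == 0:
                    s.add((x, y, m // y))
    return s

def zagier_fixes(n):
    # the fixed points of the Zagier map on the windmills of n
    return {t for t in mills(n) if zagier(t) == t}
```

For $n = 13, 29, 41$ the only fixed point is $(1, 1, n\ \textrm{div}\ 4)$, as the theorem predicts.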
\noindent
The fixed points of two involutions play crucial roles in the existence of two squares for Theorem~\ref{thm:fermat-two-squares-thm}:
\begin{theorem}[\textbf{Two Squares Existence}]
\label{thm:fermat-two-squares-exists}
\script{twoSquares}{441}
A prime of the form \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}} is a sum of two squares of different parity.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{prime}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLSymConst{\HOLTokenExists{}}(\HOLBoundVar{u}\HOLSymConst{,}\HOLBoundVar{v}).\;\HOLConst{\HOLConst{odd}}\;\HOLBoundVar{u}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLConst{\HOLConst{even}}\;\HOLBoundVar{v}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLBoundVar{u}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLBoundVar{v}\HOLSymConst{\ensuremath{\sp{2}}}
\end{HOLmath}
\end{theorem}
\begin{proof}
A prime is not a square, so \HOLinline{\HOLConst{mills}\;\HOLFreeVar{n}} is finite by Theorem~\ref{thm:mills-finite},
and both the Zagier and flip maps are involutions on \HOLinline{\HOLConst{mills}\;\HOLFreeVar{n}},
by Theorem~\ref{thm:zagier-involute-mills-prime} and Theorem~\ref{thm:flip-involute-mills}.
Note that the Zagier map has a single fixed point by Theorem~\ref{thm:zagier-fixes-prime}.
Thus \HOLinline{\ensuremath{|}\HOLConst{fixes}\;\HOLConst{zagier}\;(\HOLConst{mills}\;\HOLFreeVar{n})\ensuremath{|}\;\HOLSymConst{=}\;\HOLNumLit{1}},
so \HOLinline{\ensuremath{|}\HOLConst{fixes}\;\HOLConst{flip}\;(\HOLConst{mills}\;\HOLFreeVar{n})\ensuremath{|}} is odd by Theorem~\ref{thm:involute-two-fixes-both-odd}.
Hence \HOLinline{\HOLConst{fixes}\;\HOLConst{flip}\;(\HOLConst{mills}\;\HOLFreeVar{n})\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLSymConst{\HOLTokenEmpty{}}}, containing a triple $(x,y,y)$.
Thus \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLConst{windmill}\;\HOLFreeVar{x}\;\HOLFreeVar{y}\;\HOLFreeVar{y}} $\ee$ \HOLinline{\HOLFreeVar{x}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}\HOLSymConst{\ensuremath{\sp{2}}}}.
Take \HOLinline{\HOLFreeVar{u}\;\HOLSymConst{=}\;\HOLFreeVar{x}}, and \HOLinline{\HOLFreeVar{v}\;\HOLSymConst{=}\;\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{y}}, then \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLFreeVar{u}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{v}\HOLSymConst{\ensuremath{\sp{2}}}}.
Evidently \HOLinline{\HOLFreeVar{v}} is even, and $u$ is odd since $n$ is odd.
\end{proof}
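The counting argument in this proof can be replayed concretely for a small prime. The sketch below (Python, with \texttt{mills}, \texttt{zagier} and \texttt{flip} as our transliterations, used only for illustration) checks each step for $n = 29$: one Zagier fixed point, hence an odd (so nonempty) set of flip fixed points, hence a representation $n = u^{2} + v^{2}$:

```python
import math

def zagier(t):
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - x - z)
    elif x < 2 * y:
        return (2 * y - x, y, x + z - y)
    else:
        return (x - 2 * y, x + z - y, y)

def flip(t):
    # swap the two rectangle arms of the windmill
    x, y, z = t
    return (x, z, y)

def mills(n):
    # all positive triples (x, y, z) with x^2 + 4yz = n
    s = set()
    for x in range(1, math.isqrt(n) + 1):
        r = n - x * x
        if r > 0 and r % 4 == 0:
            m = r // 4
            for y in range(1, m + 1):
                if m % y == 0:
                    s.add((x, y, m // y))
    return s

n = 29
S = mills(n)
zagier_fixes = {t for t in S if zagier(t) == t}   # exactly one
flip_fixes = {t for t in S if flip(t) == t}       # hence odd, so nonempty
x, y, _ = next(iter(flip_fixes))                  # a triple (x, y, y)
u, v = x, 2 * y                                   # n = u^2 + v^2
```

For $n = 29$ the flip fixed point is $(5, 1, 1)$, giving $29 = 5^{2} + 2^{2}$.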
\noindent
Current formalisations of Zagier's proof (HOL Light~\cite{Harrison-2010}, NASA PVS~\cite{Narkawicz-2012-acm} and Coq~\cite{Dubach-Muehlboeck-2021-acm}), or its close relative Heath-Brown's proof (Mizar~\cite{Riccardi-2009} and ProofPower~\cite{Arthan-2016}), stop at just showing the existence of two squares for the primes in \text{Fermat's} Theorem~\ref{thm:fermat-two-squares-thm},
most likely because this already meets the Formalizing 100 Theorems challenge~\cite{Wiedijk-2020}.
See also related work in Section~\ref{sec:related-work}.
\subsection{Uniqueness of Two Squares}
\label{sec:uniqueness}
The uniqueness of the two squares in Fermat's Theorem~\ref{thm:fermat-two-squares-thm}
is a consequence of the following property of a prime:
\begin{theorem}[\textbf{Two Squares Uniqueness}]
\label{thm:fermat-two-squares-unique}
\script{twoSquares}{205}
If a prime $n$ can be expressed as a sum of two squares, the expression is unique up to commutativity.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{prime}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLFreeVar{a}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{b}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLFreeVar{c}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{d}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLTokenLeftbrace{}\HOLFreeVar{a};\;\HOLFreeVar{b}\HOLTokenRightbrace{}\;\HOLSymConst{=}\;\HOLTokenLeftbrace{}\HOLFreeVar{c};\;\HOLFreeVar{d}\HOLTokenRightbrace{}
\end{HOLmath}
\end{theorem}
\noindent
The proof is purely number-theoretic; it has also been formalised by Laurent Th{\'e}ry in Coq~\cite{Thery-2004}.
Moreover, we have:
\begin{theorem}
\label{thm:mod-4-not-squares}
\script{helperTwosq}{419}
A number of the form \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{3}} cannot be expressed as a sum of two squares.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{3}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}\;\HOLSymConst{\HOLTokenImp{}}\;\HOLSymConst{\HOLTokenForall{}}\HOLBoundVar{u}\;\HOLBoundVar{v}.\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLBoundVar{u}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLBoundVar{v}\HOLSymConst{\ensuremath{\sp{2}}}
\end{HOLmath}
\end{theorem}
\noindent
This is an elementary result about the possible remainders after division by 4:
while a number, such as $u$ or $v$, may have a remainder $0, 1, 2$, or $3$,
a square, such as $u^{2}$ or $v^{2}$, can only have a remainder $0$ or $1$.
Thus the sum of such remainders can never be $3$.
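Since residues modulo $4$ repeat with period $4$, the remainder argument reduces to a four-case check, which can be done exhaustively (a minimal Python sketch):

```python
# squares can only leave remainder 0 or 1 after division by 4,
# so a sum of two squares leaves remainder 0, 1 or 2 -- never 3
square_residues = {(k * k) % 4 for k in range(4)}
sum_residues = {(a + b) % 4 for a in square_residues for b in square_residues}
```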
Now we can complete the proof of Fermat's two squares Theorem~\ref{thm:fermat-two-squares-thm}:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{prime}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;(\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}\;\HOLSymConst{\HOLTokenEquiv{}}\\
\;\;\;\;\;\;\;\;\;\;\HOLSymConst{\HOLTokenUnique{}}(\HOLBoundVar{u}\HOLSymConst{,}\HOLBoundVar{v}).\;\HOLConst{\HOLConst{odd}}\;\HOLBoundVar{u}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLConst{\HOLConst{even}}\;\HOLBoundVar{v}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLBoundVar{u}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLBoundVar{v}\HOLSymConst{\ensuremath{\sp{2}}})
\end{HOLmath}
\begin{proof}
For the forward direction $(\Rightarrow)$,
existence is given by Theorem~\ref{thm:fermat-two-squares-exists}, and
uniqueness is provided by Theorem~\ref{thm:fermat-two-squares-unique}.
For the converse direction $(\Leftarrow)$,
an odd prime with \HOLinline{\HOLFreeVar{n}\;\ensuremath{\not\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}} cannot be a sum of two squares by Theorem~\ref{thm:mod-4-not-squares}.
\end{proof}
\section{Two Squares Algorithm}
\label{sec:algorithm}
To make Zagier's proof constructive, we need to compute the single triple fixed by the flip map.
Let $n$ be a prime of the form \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}}.
By Theorem~\ref{thm:zagier-fixes-prime}, the only Zagier fixed point is \HOLinline{\HOLFreeVar{u}\;\HOLSymConst{=}\;(\HOLNumLit{1}\HOLSymConst{,}\HOLNumLit{1}\HOLSymConst{,}\HOLFreeVar{k})}, meaning \HOLinline{\HOLConst{zagier}\;\HOLFreeVar{u}\;\HOLSymConst{=}\;\HOLFreeVar{u}}.
To change the triple $u$, applying \HOLConst{flip} is the obvious choice.
To keep changing the triple, \HOLConst{zagier} should be applied.
Thus by applying the composition \HOLinline{\HOLConst{zagier}\;\HOLSymConst{\HOLTokenCompose}\;\HOLConst{flip}} repeatedly from the known Zagier fixed point, there is hope that the chain will lead to the only flip fixed point.
Figure~\ref{fig:zagier-flip-29} shows that this is indeed the case for \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{29}}.
\begin{figure*}[h]
\begin{center}
\begin{tikzpicture}[scale=0.22]
\draw[step=1, color=white!60!black] (0,0) grid (65,17);
\windmill{8}{8}{1}{1}{7}
\windmill{24}{8}{1}{7}{1}
\windmill{38}{7}{3}{1}{5}
\windmill{50}{7}{3}{5}{1}
\windmill{58}{6}{5}{1}{1}
\node at (4,1) {$(1,1,7)$};
\node at (22,1) {$(1,7,1)$};
\node at (38,1) {$(3,1,5)$};
\node at (51,1) {$(3,5,1)$};
\node at (60,1) {$(5,1,1)$};
\mind[ultra thick]{8}{8}{1}
\mind{23}{7}{3}
\mind{38}{7}{3}
\mind{49}{6}{5}
\mind{58}{6}{5}
\begin{scope}[scale=10]
\coordinate (a) at (1.0,1.1);
\coordinate (b) at (2.2,1.1);
\draw[->] (a) to [bend left] node[midway,above] {\HOLConst{flip}} (b);
\coordinate (a) at (2.8,0.6);
\coordinate (b) at (3.8,0.6);
\draw[->] (a) to [bend right] node[midway,below] {\HOLConst{zagier}} (b);
\coordinate (a) at (4.2,1.2);
\coordinate (b) at (5.2,1.2);
\draw[->] (a) to [bend left] node[midway,above] {\HOLConst{flip}} (b);
\coordinate (a) at (5.2,0.5);
\coordinate (b) at (6.0,0.5);
\draw[->] (a) to [bend right] node[midway,below] {\HOLConst{zagier}} (b);
\end{scope}
\end{tikzpicture}
\caption{The iteration chain of \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{29}} by the composition \HOLinline{\HOLConst{zagier}\;\HOLSymConst{\HOLTokenCompose}\;\HOLConst{flip}}, from Zagier fix to flip fix.}
\Description{This figure show the iteration chain of n = 29, by the composition of first flip then Zagier. The chain starts from Zagier fix, ends in flip fix.}
\label{fig:zagier-flip-29}
\end{center}
\end{figure*}
In terms of windmills, the flip map keeps the central square, but flips the arms of rectangles from $y$-by-$z$ to $z$-by-$y$. This generally changes the mind of the windmill. The Zagier map keeps the mind, but changes the central square.
Similar to the mind being an invariant of the Zagier map,
the absolute difference $\left|y - z\right|$ is an invariant of the flip map.
If the Zagier map can reduce this difference, the successive iterations of \HOLinline{\HOLConst{zagier}\;\HOLSymConst{\HOLTokenCompose}\;\HOLConst{flip}} will be able to locate the flip fixed point.
\subsection{Flip Fix Search}
\label{sec:flip-fix-search}
To find the fixed point of the flip map, we can experiment with this pseudo-code:
\bigskip
\fbox{\begin{minipage}{0.36\textwidth}
\begin{list}{$\circ$}{}
\item \emph{Input}: a number \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}}.
\item \emph{Output}: a triple fixed by the flip map.
\item \emph{Method}:
\item start with \HOLinline{\HOLFreeVar{u}\;\HOLSymConst{=}\;(\HOLNumLit{1}\HOLSymConst{,}\HOLNumLit{1}\HOLSymConst{,}\HOLFreeVar{k})}, the Zagier fix.
\item while ($u$ is not a flip fix) :
\item \qquad $u \leftarrow \HOLinline{(\HOLConst{zagier}\;\HOLSymConst{\HOLTokenCompose}\;\HOLConst{flip})\;\HOLFreeVar{u}}$
\item end while.
\end{list}
\end{minipage}}
\bigskip
\noindent
In an HOL4 interactive session, this pseudo-code can be implemented directly as:\footnote{This pseudo-code can be implemented directly in any programming language that supports while-loops and tuples.}
\begin{definition}
\label{def:two-sq-def}
Computing the flip fixed point of \HOLinline{\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}} using a \HOLConst{WHILE} loop.
\begin{HOLmath}
\;\;\HOLConst{two_sq}\;\HOLFreeVar{n}\;\HOLTokenDefEquality{}\\
\;\;\;\;\HOLConst{WHILE}\;((\HOLSymConst{\HOLTokenNeg{}})\;\HOLSymConst{\HOLTokenCompose}\;\HOLConst{found})\;(\HOLConst{zagier}\;\HOLSymConst{\HOLTokenCompose}\;\HOLConst{flip})\;(\HOLNumLit{1}\HOLSymConst{,}\HOLNumLit{1}\HOLSymConst{,}\HOLFreeVar{n}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{4}),\\
\quad\textrm{where}
\;\;\HOLConst{found}\;(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})\;\HOLTokenDefEquality{}\;\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{z}
\end{HOLmath}
\end{definition}
\noindent
This simple while-loop may or may not terminate; we shall take up this issue in Section~\ref{sec:termination}.
For primes of the form \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}}, it terminates and appears to work.
To prove its correctness, we shall develop a theory of permutation iteration, then apply the theory to this algorithm.
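For readers who want to run the search outside HOL4, the while-loop transliterates directly. The Python sketch below mirrors Definition~\ref{def:two-sq-def}; the helper names are ours, and termination here is only empirical:

```python
def zagier(t):
    # three-branch Zagier map on windmill triples
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - x - z)
    elif x < 2 * y:
        return (2 * y - x, y, x + z - y)
    else:
        return (x - 2 * y, x + z - y, y)

def flip(t):
    # swap the two rectangle arms
    x, y, z = t
    return (x, z, y)

def two_sq(n):
    """Search for the flip fixed point, starting from the Zagier fix."""
    u = (1, 1, n // 4)
    while u[1] != u[2]:            # until found (x, y, z) with y = z
        u = zagier(flip(u))
    return u
```

For $n = 29$ the search returns $(5, 1, 1)$, giving $29 = 5^{2} + 2^{2}$, exactly the chain of Figure~\ref{fig:zagier-flip-29}.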
\section{Permutation Orbits}
\label{sec:orbits}
In general, the composition of two involutions is no longer an involution, but just a permutation.
Let $\varphi \colon S \rightarrow S$ be a permutation,
a bijection on the set $S$, denoted by \HOLinline{\HOLFreeVar{\varphi}\;\HOLConst{\HOLConst{permutes}}\;\HOLFreeVar{S}}.
For an element \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}},
the iterates $\varphi(x)$, \HOLinline{\HOLFreeVar{\varphi}\ensuremath{\sp{\HOLNumLit{2}}(\HOLFreeVar{x})}}, \HOLinline{\HOLFreeVar{\varphi}\ensuremath{\sp{\HOLNumLit{3}}(\HOLFreeVar{x})}}, \textit{etc.}, form its \emph{orbit}.
The smallest positive index $n$ such that \HOLinline{\HOLFreeVar{\varphi}\ensuremath{\sp{\HOLFreeVar{n}}(\HOLFreeVar{x})}\;\HOLSymConst{=}\;\HOLFreeVar{x}} is called the \emph{period} of $x$ under $\varphi$.
If such a positive index does not exist, the period is defined to be $0$.
In HOL4, the definition makes use of \HOLConst{OLEAST}, the optional \HOLConst{LEAST} operator:
\begin{definition}
\label{def:period-def}
The period of an element under function iteration is the least positive index at which the iterate returns to the element, and zero if no such index exists.
\begin{HOLmath}
\;\;\HOLConst{\HOLConst{period}}\;\HOLFreeVar{\varphi}\;\HOLFreeVar{x}\;\HOLTokenDefEquality{}\\
\;\;\;\;\HOLKeyword{case}\;\HOLConst{OLEAST}\;\HOLBoundVar{k}.\;\HOLNumLit{0}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLBoundVar{k}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{\varphi}\ensuremath{\sp{\HOLBoundVar{k}}(\HOLFreeVar{x})}\;\HOLSymConst{=}\;\HOLFreeVar{x}\;\HOLKeyword{of}\\
\;\;\;\;\HOLTokenBar{}\;\HOLConst{\HOLConst{none}}\;\ensuremath{\triangleright}\;\HOLNumLit{0}\\
\;\;\;\;\HOLTokenBar{}\;\HOLConst{\HOLConst{some}}\;\HOLBoundVar{k}\;\ensuremath{\triangleright}\;\HOLBoundVar{k}
\end{HOLmath}
\end{definition}
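Operationally, the \HOLConst{OLEAST} search corresponds to iterating until the starting element recurs. A small Python sketch, with an explicit search bound standing in for \HOLConst{OLEAST}'s partiality (the bound is our addition for illustration):

```python
def period(phi, x, bound=10_000):
    """Least k with 0 < k <= bound and phi^k(x) = x, else 0 (no wrap-around)."""
    y = phi(x)
    for k in range(1, bound + 1):
        if y == x:
            return k
        y = phi(y)
    return 0
```

For example, adding $2$ modulo $6$ is a permutation of $\{0, \ldots, 5\}$ in which every element has period $3$, while a non-recurring map such as the successor on the naturals yields period $0$ within the bound.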
\noindent
When the set $S$ is finite, the iterates cannot all be distinct.
Thus the permutation orbit of any $x \in S$ is finite, with a nonzero period,
denoted by \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{period}\;\HOLFreeVar{\varphi}\;\HOLFreeVar{x}}:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{\varphi}\;\HOLConst{\HOLConst{permutes}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLSymConst{\HOLTokenExists{}}\HOLBoundVar{p}.\;\HOLNumLit{0}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLBoundVar{p}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLBoundVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;\HOLFreeVar{\varphi}\;\HOLFreeVar{x}
\end{HOLmath}
and by definition the period is minimal, which means that no iterate returns to $x$ at a positive index below the period:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLNumLit{0}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{j}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{j}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLConst{\HOLConst{period}}\;\HOLFreeVar{\varphi}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenImp{}}\;\HOLFreeVar{\varphi}\ensuremath{\sp{\HOLFreeVar{j}}(\HOLFreeVar{x})}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLFreeVar{x}
\end{HOLmath}
This yields a criterion for when an iteration index is a multiple of the period:
\begin{theorem}
\label{thm:iterate-period-mod}
\script{iteration}{653}
For a nonzero period $p$ of $x$, $x$ is fixed by the $k$-th iterate of $\varphi$ if and only if $k$ is a multiple of period~$p$.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLNumLit{0}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;\HOLFreeVar{\varphi}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;(\HOLFreeVar{\varphi}\ensuremath{\sp{\HOLFreeVar{k}}(\HOLFreeVar{x})}\;\HOLSymConst{=}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLFreeVar{k}\;\ensuremath{\equiv}\;\HOLNumLit{0}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLFreeVar{p}\ensuremath{)})
\end{HOLmath}
\end{theorem}
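For instance, taking $\varphi = \HOLConst{zagier} \circ \HOLConst{flip}$ on the windmills of $29$ (Python transliterations, for illustration), the orbit of $(1, 1, 7)$ has period $5$, and the criterion can be checked exhaustively for small $k$:

```python
def zagier(t):
    x, y, z = t
    if x < y - z:
        return (x + 2 * z, z, y - x - z)
    elif x < 2 * y:
        return (2 * y - x, y, x + z - y)
    else:
        return (x - 2 * y, x + z - y, y)

def flip(t):
    x, y, z = t
    return (x, z, y)

def iterate(phi, k, x):
    # the k-th iterate phi^k(x)
    for _ in range(k):
        x = phi(x)
    return x

phi = lambda t: zagier(flip(t))
x = (1, 1, 7)                      # a windmill of 29
p = next(k for k in range(1, 100) if iterate(phi, k, x) == x)
```

Here $p = 5$, and indeed $\varphi^{k}(x) = x$ exactly when $k$ is a multiple of $5$.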
\noindent
Moreover, the period is the same for all iterates in the same orbit:
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{\varphi}\;\HOLConst{\HOLConst{permutes}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{\varphi}\ensuremath{\sp{\HOLFreeVar{j}}(\HOLFreeVar{x})}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLConst{\HOLConst{period}}\;\HOLFreeVar{\varphi}\;\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;\HOLFreeVar{\varphi}\;\HOLFreeVar{x}
\end{HOLmath}
\subsection{Involution Composition}
\label{sec:involution-composition}
When the permutation \HOLinline{\HOLFreeVar{\varphi}\;\HOLSymConst{=}\;\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}, a composition of two involutions $f$ and \HOLinline{\HOLFreeVar{g}}, we shall investigate whether their fixed points are connected by a chain of composition iterations.
Note the following pattern of function application:
\begin{equation*}
\label{eqn:function-assoc}
\begin{split}
f\ \circ\ (\HOLinline{\HOLFreeVar{g}}\ \circ\ f)\ \circ\ (\HOLinline{\HOLFreeVar{g}}\ \circ\ f)\ \circ\ (\HOLinline{\HOLFreeVar{g}}\ \circ\ f)\\
\ee (f\ \circ\ \HOLinline{\HOLFreeVar{g}})\ \circ\ (f\ \circ\ \HOLinline{\HOLFreeVar{g}})\ \circ\ (f\ \circ\ \HOLinline{\HOLFreeVar{g}})\ \circ\ f
\end{split}
\end{equation*}
by associativity.
Also, $(\HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}})^{-1} \ee \HOLinline{\HOLFreeVar{g}}^{-1}\ \circ\ f^{-1} \ee \HOLinline{\HOLFreeVar{g}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{f}}$ for involutions,
so the inverse is simply a reversal of the application order in this case.
Let \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}} for \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}}.
With these notations, we can establish some basic results:
\begin{theorem}
\label{thm:involute-period-1}
\script{iterateCompose}{558}
When $f$ fixes $x$, the period for $x$ is $1$ if and only if \HOLinline{\HOLFreeVar{g}} also fixes $x$.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;(\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLNumLit{1}\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S})
\end{HOLmath}
\end{theorem}
\noindent
Pick an element $x$ in the set $S$.
For involutions, an iterate of (\HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}) can be equal to another iterate of (\HOLinline{\HOLFreeVar{g}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{f}}):
\begin{theorem}
\label{thm:involute-mod-period}
\script{iterateCompose}{401}
The $i$-th iterate of (\HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}) equals the $j$-th iterate of (\HOLinline{\HOLFreeVar{g}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{f}}) if and only if (\HOLinline{\HOLFreeVar{i}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{j}}) is a multiple of period $p$.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;((\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{i}}(\HOLFreeVar{x})}\;\HOLSymConst{=}\;(\HOLFreeVar{g}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{f})\ensuremath{\sp{\HOLFreeVar{j}}(\HOLFreeVar{x})}\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLFreeVar{i}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{j}\;\ensuremath{\equiv}\;\HOLNumLit{0}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLFreeVar{p}\ensuremath{)})
\end{HOLmath}
\end{theorem}
\noindent
When $f$ fixes point $x$,
the iterates \HOLinline{(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{i}}(\HOLFreeVar{x})}} and \HOLinline{(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{j}}(\HOLFreeVar{x})}} are related precisely when the sum (\HOLinline{\HOLFreeVar{i}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{j}}) is a multiple of the period:
\begin{theorem}
\label{thm:involute-two-fix-orbit-1}
\script{iterateCompose}{583}
When $f$ fixes $x$, the $i$-th and $j$-th iterate of (\HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}) differ by one $f$ application if and only if (\HOLinline{\HOLFreeVar{i}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{j}}) is a multiple of period $p$.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;((\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{i}}(\HOLFreeVar{x})}\;\HOLSymConst{=}\;\HOLFreeVar{f}\;((\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{j}}(\HOLFreeVar{x})})\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLFreeVar{i}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{j}\;\ensuremath{\equiv}\;\HOLNumLit{0}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLFreeVar{p}\ensuremath{)})
\end{HOLmath}
\end{theorem}
\noindent
There is a related result, with a similar proof:
\begin{theorem}
\label{thm:involute-two-fix-orbit-2}
\script{iterateCompose}{689}
When $f$ fixes $x$, the $i$-th and $j$-th iterate of (\HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}) differ by one \HOLinline{\HOLFreeVar{g}} application if and only if (\HOLinline{\HOLFreeVar{i}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{j}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}}) is a multiple of period $p$.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;((\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{i}}(\HOLFreeVar{x})}\;\HOLSymConst{=}\;\HOLFreeVar{g}\;((\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{j}}(\HOLFreeVar{x})})\;\HOLSymConst{\HOLTokenEquiv{}}\\
\;\;\;\;\;\;\;\;\;\;\HOLFreeVar{i}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{j}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}\;\ensuremath{\equiv}\;\HOLNumLit{0}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLFreeVar{p}\ensuremath{)})
\end{HOLmath}
\end{theorem}
\noindent
These theorems are useful in the study of iteration orbits starting from fixed points.
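As an informal sanity check outside the HOL4 development, the relation in the first theorem can be exercised numerically. The sketch below uses made-up involutions on the integers modulo $12$ (negation and a reflection, chosen only for this example) and tests the equivalence for every pair $(i, j)$:

```python
def compose_orbit_check():
    n = 12
    f = lambda x: (-x) % n        # involution: f(f(x)) == x
    g = lambda x: (5 - x) % n     # involution: g(g(x)) == x
    phi = lambda x: f(g(x))       # phi = f o g: apply g first, then f

    x = 0                         # f fixes x, since f(0) == 0
    y, p = phi(x), 1              # period p: least k > 0 with phi^k(x) == x
    while y != x:
        y, p = phi(y), p + 1

    def it(k):                    # phi^k(x)
        z = x
        for _ in range(k):
            z = phi(z)
        return z

    # Theorem: phi^i(x) == f(phi^j(x))  iff  (i + j) is a multiple of p
    for i in range(2 * p):
        for j in range(2 * p):
            assert (it(i) == f(it(j))) == ((i + j) % p == 0)
    return p
```

Here the period turns out to be $12$, and the assertion holds for all pairs.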
\subsection{Period Parity}
\label{sec:period-parity}
Given involutions $f$ and \HOLinline{\HOLFreeVar{g}} on a finite set $S$ and an element $x \in S$, the iterates \HOLinline{(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{j}}(\HOLFreeVar{x})}} form an orbit, with length equal to the period \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}}.
Figure~\ref{fig:orbits-even-odd} shows two orbits, one with an even period, the other with an odd period.
\begin{figure*}[h]
\centering
\begin{tikzpicture}[scale=1.7,
ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},
every fit/.style={ellipse,draw,inner sep=5pt}]
\node[ele,label=left:{\tiny{$x$}}] (a) at (0,0.5) {};
\node[ele,label=below:{\tiny{$\varphi\ x$}}] (b) at (1,0) {};
\node[ele,label=below:{\tiny{$\varphi^{2}\ x$}}] (c) at (2,0) {};
\node[ele,label=right:{\tiny{$\varphi^{3}\ x$}}] (d) at (3,0.5) {};
\node[ele,label=above:{\tiny{$\varphi^{4}\ x$}}] (e) at (2,1) {};
\node[ele,label=above:{\tiny{$\varphi^{5}\ x$}}] (f) at (1,1) {};
\node[ele,label=above:{\tiny{$\varphi^{6}\ x\qquad$}}] (g) at (0,0.5) {};
\draw[->,dashed,shorten <=2pt,shorten >=2] (a) -- (b);
\draw[->,dashed,shorten <=2pt,shorten >=2] (b) -- (c);
\draw[->,dashed,shorten <=2pt,shorten >=2] (c) -- (d);
\draw[->,dashed,shorten <=2pt,shorten >=2] (d) -- (e);
\draw[->,dashed,shorten <=2pt,shorten >=2] (e) -- (f);
\draw[->,dashed,shorten <=2pt,shorten >=2] (f) -- (g);
\node[ele,draw,fill=white] (ab) at ($(a)!0.5!(b)$) {};
\node[ele,draw,fill=white] (bc) at ($(b)!0.5!(c)$) {};
\node[ele,draw,fill=white] (cd) at ($(c)!0.5!(d)$) {};
\node[ele,draw,fill=white] (de) at ($(d)!0.5!(e)$) {};
\node[ele,draw,fill=white] (ef) at ($(e)!0.5!(f)$) {};
\node[ele,draw,fill=white] (fg) at ($(f)!0.5!(g)$) {};
\draw[thick,color=purple] (a) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (ab);
\draw[thick,color=blue] (ab) to [bend right] node[midway,below] {\tiny{$f$}} (b);
\draw[thick,color=purple] (b) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (bc);
\draw[thick,color=blue] (bc) to [bend right] node[midway,below] {\tiny{$f$}} (c);
\draw[thick,color=purple] (c) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (cd);
\draw[thick,color=blue] (cd) to [bend right] node[midway,below] {\tiny{$f$}} (d);
\draw[thick,color=purple] (d) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (de);
\draw[thick,color=blue] (de) to [bend right] node[midway,above] {\tiny{$f$}} (e);
\draw[thick,color=purple] (e) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (ef);
\draw[thick,color=blue] (ef) to [bend right] node[midway,above] {\tiny{$f$}} (f);
\draw[thick,color=purple] (f) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (fg);
\draw[thick,color=blue] (fg) to [bend right] node[midway,above] {\tiny{$f$}} (g);
\end{tikzpicture}
\begin{tikzpicture}[scale=1.7,
ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},
every fit/.style={ellipse,draw,inner sep=5pt}]
\node[ele,label=left:{\tiny{$x$}}] (a) at (0,0.5) {};
\node[ele,label=below:{\tiny{$\varphi\ x$}}] (b) at (1,0) {};
\node[ele,label=below:{\tiny{$\varphi^{2}\ x$}}] (c) at (2,0) {};
\node[ele,label=right:{\tiny{$\varphi^{3}\ x$}}] (d) at (3,0.5) {};
\node[ele,label=above:{\tiny{$\varphi^{4}\ x$}}] (e) at (2.5,1) {};
\node[ele,label=above:{\tiny{$\varphi^{5}\ x$}}] (f) at (1.5,1) {};
\node[ele,label=above:{\tiny{$\varphi^{6}\ x$}}] (g) at (0.5,1) {};
\node[ele,label=above:{\tiny{$\varphi^{7}\ x\qquad$}}] (h) at (0,0.5) {};
\draw[->,dashed,shorten <=2pt,shorten >=2] (a) -- (b);
\draw[->,dashed,shorten <=2pt,shorten >=2] (b) -- (c);
\draw[->,dashed,shorten <=2pt,shorten >=2] (c) -- (d);
\draw[->,dashed,shorten <=2pt,shorten >=2] (d) -- (e);
\draw[->,dashed,shorten <=2pt,shorten >=2] (e) -- (f);
\draw[->,dashed,shorten <=2pt,shorten >=2] (f) -- (g);
\draw[->,dashed,shorten <=2pt,shorten >=2] (g) -- (h);
\node[ele,draw,fill=white] (ab) at ($(a)!0.5!(b)$) {};
\node[ele,draw,fill=white] (bc) at ($(b)!0.5!(c)$) {};
\node[ele,draw,fill=white] (cd) at ($(c)!0.5!(d)$) {};
\node[ele,draw,fill=white] (de) at ($(d)!0.5!(e)$) {};
\node[ele,draw,fill=white] (ef) at ($(e)!0.5!(f)$) {};
\node[ele,draw,fill=white] (fg) at ($(f)!0.5!(g)$) {};
\node[ele,draw,fill=white] (gh) at ($(g)!0.5!(h)$) {};
\draw[thick,color=purple] (a) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (ab);
\draw[thick,color=blue] (ab) to [bend right] node[midway,below] {\tiny{$f$}} (b);
\draw[thick,color=purple] (b) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (bc);
\draw[thick,color=blue] (bc) to [bend right] node[midway,below] {\tiny{$f$}} (c);
\draw[thick,color=purple] (c) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (cd);
\draw[thick,color=blue] (cd) to [bend right] node[midway,below] {\tiny{$f$}} (d);
\draw[thick,color=purple] (d) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (de);
\draw[thick,color=blue] (de) to [bend right] node[midway,above] {\tiny{$f$}} (e);
\draw[thick,color=purple] (e) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (ef);
\draw[thick,color=blue] (ef) to [bend right] node[midway,above] {\tiny{$f$}} (f);
\draw[thick,color=purple] (f) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (fg);
\draw[thick,color=blue] (fg) to [bend right] node[midway,above] {\tiny{$f$}} (g);
\draw[thick,color=purple] (g) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (gh);
\draw[thick,color=blue] (gh) to [bend right] node[midway,above] {\tiny{$f$}} (h);
\end{tikzpicture}
\caption{Orbits of \HOLinline{\HOLFreeVar{\varphi}\;\HOLSymConst{=}\;\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}} for a point $x$. The left orbit has even period $6$; the right one has odd period $7$.}
\Description{This figure shows the orbits for point x of the composition: first g than f. Left one has even period 6, right one has odd period 7.}
\label{fig:orbits-even-odd}
\end{figure*}
In the figure, black dots are the iterates of \HOLinline{\HOLFreeVar{\varphi}\;\HOLSymConst{=}\;\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}, joined by dashed arrows; white dots are the intermediate points, reached by applying \HOLinline{\HOLFreeVar{g}} first and then $f$ along the solid arcs.
Since $f$ and \HOLinline{\HOLFreeVar{g}} are involutions, each arc can be traversed in either direction: forward or backward.
Let $\alpha$ denote a fixed point of $f$, and $\beta$ denote a fixed point of \HOLinline{\HOLFreeVar{g}},
\textit{i.e.}, \HOLinline{\HOLFreeVar{f}\;\HOLFreeVar{\alpha}\;\HOLSymConst{=}\;\HOLFreeVar{\alpha}}, and \HOLinline{\HOLFreeVar{g}\;\HOLFreeVar{\beta}\;\HOLSymConst{=}\;\HOLFreeVar{\beta}}.
We shall look at how these fixed points are related, which is crucial in the correctness proof of our algorithm (see Definition~\ref{def:two-sq-def}).
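The two kinds of orbit can be reproduced with small concrete involutions. The following Python sketch (illustrative only, with toy reflections on the integers modulo $12$) computes the orbit of $x = 0$ under $\varphi = f \circ g$ for two choices of $g$, giving one even and one odd period:

```python
def orbit(phi, x):
    """Orbit [x, phi(x), ..., phi^(p-1)(x)]; its length is the period p."""
    pts, y = [x], phi(x)
    while y != x:
        pts.append(y)
        y = phi(y)
    return pts

n = 12
f = lambda x: (-x) % n                            # involution (negation mod 12)
even_orbit = orbit(lambda x: f((2 - x) % n), 0)   # g(x) = (2 - x) % n
odd_orbit  = orbit(lambda x: f((4 - x) % n), 0)   # g(x) = (4 - x) % n
# even_orbit has length 6 (even period); odd_orbit has length 3 (odd period)
```

The first choice of $g$ yields period $6$, the second period $3$, matching the even/odd dichotomy of the figure.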
\subsection{Fixed Point Period Even}
\label{sec:fixed-point-period-even}
Consider an orbit with even period starting with $\alpha$, a fixed point of $f$.
Figure~\ref{fig:orbit-fix-even} shows such an orbit on the left, with the simplified picture obtained by merging identical points on the right.
\begin{figure*}[h]
\centering
\begin{tikzpicture}[scale=1.7,
ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},
every fit/.style={ellipse,draw,inner sep=5pt}]
\node[ele,label=left:{\tiny{$\alpha$}}] (a) at (0,0.5) {};
\node[ele,label=below:{\tiny{$\varphi\ \alpha$}}] (b) at (1,0) {};
\node[ele,label=below:{\tiny{$\varphi^{2}\ \alpha$}}] (c) at (2,0) {};
\node[ele,label=right:{\tiny{$\varphi^{3}\ \alpha$}}] (d) at (3,0.5) {};
\node[ele,label=above:{\tiny{$\varphi^{4}\ \alpha$}}] (e) at (2,1) {};
\node[ele,label=above:{\tiny{$\varphi^{5}\ \alpha$}}] (f) at (1,1) {};
\node[ele,label=above:{\tiny{$\varphi^{6}\ \alpha\qquad$}}] (g) at (0,0.5) {};
\draw[->,dashed,shorten <=2pt,shorten >=2] (a) -- (b);
\draw[->,dashed,shorten <=2pt,shorten >=2] (b) -- (c);
\draw[->,dashed,shorten <=2pt,shorten >=2] (c) -- (d);
\draw[->,dashed,shorten <=2pt,shorten >=2] (d) -- (e);
\draw[->,dashed,shorten <=2pt,shorten >=2] (e) -- (f);
\draw[->,dashed,shorten <=2pt,shorten >=2] (f) -- (g);
\node[ele,draw,fill=white] (ab) at ($(a)!0.5!(b)$) {};
\node[ele,draw,fill=white] (bc) at ($(b)!0.5!(c)$) {};
\node[ele,draw,fill=white] (cd) at ($(c)!0.5!(d)$) {};
\node[ele,draw,fill=white] (de) at ($(d)!0.5!(e)$) {};
\node[ele,draw,fill=white] (ef) at ($(e)!0.5!(f)$) {};
\draw[thick,color=blue] (a) to [out=130, in=-130,looseness=50]
node[midway,left] {\tiny{$f$}} (a);
\draw[thick,color=purple] (a) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (ab);
\draw[thick,color=blue] (ab) to [bend right] node[midway,below] {\tiny{$f$}} (b);
\draw[thick,color=purple] (b) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (bc);
\draw[thick,color=blue] (bc) to [bend right] node[midway,below] {\tiny{$f$}} (c);
\draw[thick,color=purple] (c) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (cd);
\draw[thick,color=blue] (cd) to [bend right] node[midway,below] {\tiny{$f$}} (d);
\draw[thick,color=purple] (d) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (de);
\draw[thick,color=blue] (de) to [bend right] node[midway,above] {\tiny{$f$}} (e);
\draw[thick,color=purple] (e) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (ef);
\draw[thick,color=blue] (ef) to [bend right] node[midway,above] {\tiny{$f$}} (f);
\draw[thick,color=purple] (f) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (g);
\DoubleLine[0.3pt]{f}{ab}{red}{red}
\DoubleLine[0.3pt]{ef}{b}{red}{red}
\DoubleLine[0.3pt]{e}{bc}{red}{red}
\DoubleLine[0.3pt]{de}{c}{red}{red}
\DoubleLine[0.3pt]{cd}{d}{red}{red}
\path[ultra thick, glow=green] (cd) -- (d);
\end{tikzpicture}
\begin{tikzpicture}[scale=1.7,
ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},
every fit/.style={ellipse,draw,inner sep=5pt}]
\node[ele,label=left:{\tiny{$\alpha$}}] (a) at (0,0.5) {};
\node[ele,label=below:{\tiny{$\varphi\ \alpha$}}] (b) at (1,0) {};
\node[ele,label=below:{\tiny{$\varphi^{2}\ \alpha$}}] (c) at (2,0) {};
\node[ele,label=right:{\tiny{$\varphi^{3}\ \alpha$}}] (d) at (3,0.5) {};
\node[ele,label=above:{\tiny{$\varphi^{4}\ \alpha$}}] (e) at (2,1) {};
\node[ele,label=above:{\tiny{$\varphi^{5}\ \alpha$}}] (f) at (1,1) {};
\node[ele,label=above:{\tiny{$\varphi^{6}\ \alpha\qquad$}}] (g) at (0,0.5) {};
\draw[->,dashed,shorten <=2pt,shorten >=2] (a) -- (b);
\draw[->,dashed,shorten <=2pt,shorten >=2] (b) -- (c);
\draw[->,dashed,shorten <=2pt,shorten >=2] (c) -- (d);
\draw[->,dashed,shorten <=2pt,shorten >=2] (d) -- (e);
\draw[->,dashed,shorten <=2pt,shorten >=2] (e) -- (f);
\draw[->,dashed,shorten <=2pt,shorten >=2] (f) -- (g);
\draw[thick,color=blue] (a) to [out=130, in=-130,looseness=50]
node[midway,left] {\tiny{$f$}} (a);
\draw[thick,color=blue] (f) to [bend right] node[midway,left] {\tiny{$f$}} (b);
\draw[thick,color=blue] (e) to [bend left] node[midway,right] {\tiny{$f$}} (c);
\draw[thick,color=blue] (d) to [out=50, in=-50,looseness=50]
node[midway,right] {\tiny{$f$}} (d);
\draw[thick,color=purple] (c) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (d);
\draw[thick,color=purple] (e) to [bend left] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (b);
\draw[thick,color=purple] (f) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (g);
\end{tikzpicture}
\caption{Orbit from an $f$ fixed point $\alpha$ with even period $6$. Identical points on the left (marked by two parallel lines) are merged on the right (move white dot to black dot). In particular, on the left the two vertices of the shaded line are the same, forming a fixed point of $f$ on the right.}
\Description{This figure shows an orbit from an f fixed point alpha, with even period 6. Identical points on the left, marked by two parallel lines, are merged on the right, by moving white dot to black dot. In particular, on the left the two vertices of the shaded line are the same, forming a fixed point of f on the right.}
\label{fig:orbit-fix-even}
\end{figure*}
This orbit is formed by taking the left diagram of Figure~\ref{fig:orbits-even-odd},
but identifying the black dot on $\alpha$ (the leftmost one) with its preceding white dot from $f$, since \HOLinline{\HOLFreeVar{f}\;\HOLFreeVar{\alpha}\;\HOLSymConst{=}\;\HOLFreeVar{\alpha}}, giving the left $f$-loop.
This node $\alpha$ is now preceded by two \HOLinline{\HOLFreeVar{g}}-arcs, one from a black dot and one from a white dot. However, \HOLinline{\HOLFreeVar{g}} is an involution, which is injective, so the two dots are identical. The same reasoning shows that all the dots linked by double lines are identical, so that the orbit on the left can be simplified to the one on the right, taking only black dots.
Moreover, the rightmost black dot and its preceding white dot from $f$ must be the same, due to the \HOLinline{\HOLFreeVar{g}}-arcs from identical dots.
This means the half-period iterate, the rightmost black dot, is another fixed point of $f$, say $\alpha'$.
Note that $\alpha' \ne \alpha$: otherwise the orbit would close up after half the period, contradicting the minimality of the period.
This example motivates the following:
\begin{theorem}
\label{thm:involute-two-fixes-even}
\script{iterateCompose}{884}
When $f$ fixes $x$, and (\HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}) has an even period $p$ for $x$,
then $f$ also fixes \HOLinline{(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{p}\;\HOLConst{div}\;2}(\HOLFreeVar{x})}}, which is not $x$ itself.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{y}\;\HOLSymConst{=}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{p}\;\HOLConst{div}\;2}(\HOLFreeVar{x})}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLConst{\HOLConst{even}}\;\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLFreeVar{x}
\end{HOLmath}
\end{theorem}
\begin{proof}
First we show that $f$ fixes $y$.
Let \HOLinline{\HOLFreeVar{h}\;\HOLSymConst{=}\;\HOLFreeVar{p}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{2}}.
Since period $p$ is even, $p \ee \HOLinline{\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{h}\;\HOLSymConst{=}\;\HOLFreeVar{h}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{h}}$.
This implies that \HOLinline{\HOLFreeVar{h}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{h}\;\ensuremath{\equiv}\;\HOLNumLit{0}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLFreeVar{p}\ensuremath{)}},
so \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{f}\;\HOLFreeVar{y}} by Theorem~\ref{thm:involute-two-fix-orbit-1}.
Since (\HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}) is a permutation, \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}}, so \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}}.
Next we show that \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLFreeVar{x}}.
Suppose \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{x}}.
Since for finite $S$ the period \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}},
Theorem~\ref{thm:iterate-period-mod} shows that
$p$ divides \HOLinline{\HOLFreeVar{h}\;\HOLSymConst{=}\;\HOLFreeVar{p}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{2}}.
Since $h < p$, this forces $h = 0$, hence \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLNumLit{1}}, which is not even, contradicting the hypothesis.
\end{proof}
\noindent
Therefore if a fixed point of $f$ has an even period under \HOLinline{\HOLFreeVar{\varphi}\;\HOLSymConst{=}\;\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}, it is not alone. This leads directly to:
\begin{corollary}
\label{cor:involute-fix-singleton-odd}
\script{iterateCompute}{1009}
If $f$ fixes only a single $x$, then \HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}} has an odd period for $x$.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{=}\;\HOLTokenLeftbrace{}\HOLFreeVar{x}\HOLTokenRightbrace{}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLConst{\HOLConst{odd}}\;\HOLFreeVar{p}
\end{HOLmath}
\end{corollary}
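The corollary can be checked numerically on a toy example: negation modulo an odd number is an involution whose only fixed point is $0$, and for every reflection $g$ the period at $0$ indeed comes out odd. A minimal Python sketch (not part of the formal development):

```python
def corollary_check():
    n = 9                               # odd modulus
    f = lambda x: (-x) % n              # involution with a unique fixed point
    assert [x for x in range(n) if f(x) == x] == [0]
    for c in range(1, n):
        g = lambda x, c=c: (c - x) % n  # involution for every c
        phi = lambda x, g=g: f(g(x))    # phi = f o g
        x, p = phi(0), 1                # period of phi at the fixed point 0
        while x != 0:
            x, p = phi(x), p + 1
        assert p % 2 == 1               # Corollary: the period is odd
    return True
```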
\subsection{Fixed Point Period Odd}
\label{sec:fixed-point-period-odd}
Now consider an orbit with odd period starting with $\alpha$, a fixed point of $f$.
Figure~\ref{fig:orbit-fix-odd} shows such an orbit on the left, with the simplified picture obtained by merging identical points on the right.
This orbit is formed by taking the right diagram of Figure~\ref{fig:orbits-even-odd},
but identifying the black dot on $\alpha$ (the leftmost one) with its preceding white dot from $f$, since \HOLinline{\HOLFreeVar{f}\;\HOLFreeVar{\alpha}\;\HOLSymConst{=}\;\HOLFreeVar{\alpha}}, giving the left $f$-loop.
The same reasoning as the even period orbit of Section~\ref{sec:fixed-point-period-even} shows that all the dots linked by double lines are identical, so that the orbit on the left can be simplified to the one on the right, again taking only black dots.
\begin{figure*}[h]
\centering
\begin{tikzpicture}[scale=1.7,
ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},
every fit/.style={ellipse,draw,inner sep=5pt}]
\node[ele,label=left:{\tiny{$\alpha$}}] (a) at (0,0.5) {};
\node[ele,label=below:{\tiny{$\varphi\ \alpha$}}] (b) at (1,0) {};
\node[ele,label=below:{\tiny{$\varphi^{2}\ \alpha$}}] (c) at (2,0) {};
\node[ele,label=right:{\tiny{$\varphi^{3}\ \alpha$}}] (d) at (3,0.5) {};
\node[ele,label=above:{\tiny{$\varphi^{4}\ \alpha$}}] (e) at (2.5,1) {};
\node[ele,label=above:{\tiny{$\varphi^{5}\ \alpha$}}] (f) at (1.5,1) {};
\node[ele,label=above:{\tiny{$\varphi^{6}\ \alpha$}}] (g) at (0.5,1) {};
\node[ele,label=above:{\tiny{$\varphi^{7}\ \alpha\qquad$}}] (h) at (0,0.5) {};
\draw[->,dashed,shorten <=2pt,shorten >=2] (a) -- (b);
\draw[->,dashed,shorten <=2pt,shorten >=2] (b) -- (c);
\draw[->,dashed,shorten <=2pt,shorten >=2] (c) -- (d);
\draw[->,dashed,shorten <=2pt,shorten >=2] (d) -- (e);
\draw[->,dashed,shorten <=2pt,shorten >=2] (e) -- (f);
\draw[->,dashed,shorten <=2pt,shorten >=2] (f) -- (g);
\draw[->,dashed,shorten <=2pt,shorten >=2] (g) -- (h);
\node[ele,draw,fill=white] (ab) at ($(a)!0.5!(b)$) {};
\node[ele,draw,fill=white] (bc) at ($(b)!0.5!(c)$) {};
\node[ele,draw,fill=white] (cd) at ($(c)!0.5!(d)$) {};
\node[ele,draw,fill=white] (de) at ($(d)!0.5!(e)$) {};
\node[ele,draw,fill=white] (ef) at ($(e)!0.5!(f)$) {};
\node[ele,draw,fill=white] (fg) at ($(f)!0.5!(g)$) {};
\draw[thick,color=blue] (a) to [out=130, in=-130,looseness=50]
node[midway,left] {\tiny{$f$}} (a);
\draw[thick,color=purple] (a) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (ab);
\draw[thick,color=blue] (ab) to [bend right] node[midway,below] {\tiny{$f$}} (b);
\draw[thick,color=purple] (b) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (bc);
\draw[thick,color=blue] (bc) to [bend right] node[midway,below] {\tiny{$f$}} (c);
\draw[thick,color=purple] (c) to [bend right] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (cd);
\draw[thick,color=blue] (cd) to [bend right] node[midway,below] {\tiny{$f$}} (d);
\draw[thick,color=purple] (d) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (de);
\draw[thick,color=blue] (de) to [bend right] node[midway,above] {\tiny{$f$}} (e);
\draw[thick,color=purple] (e) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (ef);
\draw[thick,color=blue] (ef) to [bend right] node[midway,above] {\tiny{$f$}} (f);
\draw[thick,color=purple] (f) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (fg);
\draw[thick,color=blue] (fg) to [bend right] node[midway,above] {\tiny{$f$}} (g);
\draw[thick,color=purple] (g) to [bend right] node[midway,above] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (h);
\DoubleLine[0.3pt]{g}{ab}{red}{red}
\DoubleLine[0.3pt]{fg}{b}{red}{red}
\DoubleLine[0.3pt]{f}{bc}{red}{red}
\DoubleLine[0.3pt]{ef}{c}{red}{red}
\DoubleLine[0.3pt]{e}{cd}{red}{red}
\DoubleLine[0.3pt]{de}{d}{red}{red}
\path[ultra thick, glow=green] (de) -- (d);
\end{tikzpicture}
\begin{tikzpicture}[scale=1.7,
ele/.style={fill=black,circle,minimum width=.8pt,inner sep=1pt},
every fit/.style={ellipse,draw,inner sep=5pt}]
\node[ele,label=left:{\tiny{$\alpha$}}] (a) at (0,0.5) {};
\node[ele,label=below:{\tiny{$\varphi\ \alpha$}}] (b) at (1,0) {};
\node[ele,label=below:{\tiny{$\varphi^{2}\ \alpha$}}] (c) at (2,0) {};
\node[ele,label=right:{\tiny{$\varphi^{3}\ \alpha$}}] (d) at (3,0.5) {};
\node[ele,label=above:{\tiny{$\varphi^{4}\ \alpha$}}] (e) at (2.5,1) {};
\node[ele,label=above:{\tiny{$\varphi^{5}\ \alpha$}}] (f) at (1.5,1) {};
\node[ele,label=above:{\tiny{$\varphi^{6}\ \alpha$}}] (g) at (0.5,1) {};
\node[ele,label=above:{\tiny{$\varphi^{7}\ \alpha\qquad$}}] (h) at (0,0.5) {};
\draw[->,dashed,shorten <=2pt,shorten >=2] (a) -- (b);
\draw[->,dashed,shorten <=2pt,shorten >=2] (b) -- (c);
\draw[->,dashed,shorten <=2pt,shorten >=2] (c) -- (d);
\draw[->,dashed,shorten <=2pt,shorten >=2] (d) -- (e);
\draw[->,dashed,shorten <=2pt,shorten >=2] (e) -- (f);
\draw[->,dashed,shorten <=2pt,shorten >=2] (f) -- (g);
\draw[->,dashed,shorten <=2pt,shorten >=2] (g) -- (h);
\draw[thick,color=purple] (f) to [bend left] node[midway,right] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (b);
\draw[thick,color=purple] (g) to [bend left] node[midway,below] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (a);
\draw[thick,color=purple] (e) to [bend left] node[midway,right] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (c);
\draw[thick,color=purple] (d) to [out=50, in=-50,looseness=50]
node[midway,right] {\tiny{\HOLinline{\HOLFreeVar{g}}}} (d);
\draw[thick,color=blue] (a) to [out=130, in=-130,looseness=50]
node[midway,left] {\tiny{$f$}} (a);
\draw[thick,color=blue] (d) to [bend right] node[midway,above] {\tiny{$f$}} (e);
\draw[thick,color=blue] (c) to [bend right] node[midway,right] {\tiny{$f$}} (f);
\draw[thick,color=blue] (b) to [bend right] node[midway,left] {\tiny{$f$}} (g);
\end{tikzpicture}
\caption{Orbit from an $f$ fixed point $\alpha$ with odd period $7$. Identical points on the left (marked by two parallel lines) are merged~on the right (move white dot to black dot). In particular, on the left the two vertices of the shaded line are the same, forming a fixed point of $g$ on the right.}
\Description{This figure shows an orbit from an f fixed point alpha with odd period 7. Identical points on the left, marked by two parallel lines, are merged on the right, by moving white dot to black dot. In particular, on the left the two vertices of the shaded line are the same, forming a fixed point of g on the right.}
\label{fig:orbit-fix-odd}
\end{figure*}
Moreover, the rightmost black dot and its preceding white dot from \HOLinline{\HOLFreeVar{g}} must be the same, due to the $f$-arcs from identical dots.
This means the half-period iterate, the rightmost black dot, must be a fixed point of \HOLinline{\HOLFreeVar{g}}, say $\beta$.
If \HOLinline{\HOLFreeVar{\beta}\;\HOLSymConst{=}\;\HOLFreeVar{\alpha}}, then period \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLNumLit{1}}, in accordance with Theorem~\ref{thm:involute-period-1}.
This example motivates the following:
\begin{theorem}
\label{thm:involute-two-fixes-odd}
\script{iterateCompose}{980}
When $f$ fixes $x$, and (\HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}) has an odd period $p$ for $x$,
then \HOLinline{\HOLFreeVar{g}} fixes \HOLinline{(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{p}\;\HOLConst{div}\;2}(\HOLFreeVar{x})}}, which is not $x$ itself if and only if \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{1}}.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{y}\;\HOLSymConst{=}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLFreeVar{p}\;\HOLConst{div}\;2}(\HOLFreeVar{x})}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLConst{\HOLConst{odd}}\;\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;(\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLNumLit{1})
\end{HOLmath}
\end{theorem}
\begin{proof}
First we show that \HOLinline{\HOLFreeVar{g}} fixes $y$.
Let \HOLinline{\HOLFreeVar{h}\;\HOLSymConst{=}\;\HOLFreeVar{p}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{2}}.
Since period $p$ is odd, $p \ee \HOLinline{\HOLNumLit{2}\HOLSymConst{\ensuremath{}}\HOLFreeVar{h}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}\;\HOLSymConst{=}\;\HOLFreeVar{h}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{h}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}}$.
Thus \HOLinline{\HOLFreeVar{h}\;\HOLSymConst{\ensuremath{+}}\;\HOLFreeVar{h}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}\;\ensuremath{\equiv}\;\HOLNumLit{0}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLFreeVar{p}\ensuremath{)}},
so \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{g}\;\HOLFreeVar{y}} by Theorem~\ref{thm:involute-two-fix-orbit-2}.
As (\HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}) is a permutation, \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLFreeVar{S}}, so \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}}.
Theorem~\ref{thm:involute-period-1} ensures that: \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLNumLit{1}}.
\end{proof}
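As with the even case, the odd-period theorem can be illustrated numerically. The sketch below (illustrative only) uses negation and a reflection modulo $9$, where the period at the fixed point $0$ is $9$, and confirms that the half-period iterate is fixed by $g$:

```python
def odd_theorem_check():
    n = 9
    f = lambda x: (-x) % n         # involution fixing x = 0
    g = lambda x: (2 - x) % n      # involution
    phi = lambda x: f(g(x))        # phi(x) = (x - 2) % n

    x = 0
    pts, y = [x], phi(x)           # collect the orbit of x under phi
    while y != x:
        pts.append(y)
        y = phi(y)
    p = len(pts)                   # period: 9, which is odd
    y = pts[p // 2]                # half-period iterate
    assert p % 2 == 1
    assert g(y) == y               # Theorem: g fixes the half-period iterate
    assert (y == x) == (p == 1)    # and y == x exactly when p == 1
    return p, y
```

Here the half-period iterate is $1$, a fixed point of $g$ distinct from $x = 0$.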
\subsection{Fixed Point Orbits}
\label{sec:fixed-point-orbits}
Let \HOLinline{\HOLFreeVar{\varphi}\;\HOLSymConst{=}\;\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}},
and $\alpha, \beta$ be fixed points of $f, \HOLinline{\HOLFreeVar{g}}$, respectively.
Theorem~\ref{thm:involute-two-fixes-even} and Theorem~\ref{thm:involute-two-fixes-odd} show that:
\begin{itemize}[leftmargin=*]
\item if the period $p$ of $\alpha$ is even, its orbit has another fixed point of $f$ at the \HOLinline{\HOLFreeVar{h}\;\HOLSymConst{=}\;\HOLFreeVar{p}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{2}} iterate: \HOLinline{\HOLFreeVar{\varphi}\ensuremath{\sp{\HOLFreeVar{h}}(\HOLFreeVar{\alpha})}}.
\item if the period $p$ of $\alpha$ is odd, its orbit has another fixed point of \HOLinline{\HOLFreeVar{g}} at the \HOLinline{\HOLFreeVar{h}\;\HOLSymConst{=}\;\HOLFreeVar{p}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{2}} iterate: \HOLinline{\HOLFreeVar{\beta}\;\HOLSymConst{=}\;\HOLFreeVar{\varphi}\ensuremath{\sp{\HOLFreeVar{h}}(\HOLFreeVar{\alpha})}}.
\end{itemize}
Figure~\ref{fig:orbit-fix-even} and Figure~\ref{fig:orbit-fix-odd} also suggest that these orbits contain no further fixed points: the only fixed point of either $f$ or $g$, apart from the starting point, occurs at the halfway point of the orbit.
Thus, fixed point orbits lead directly from one fixed point to another. Indeed, if any other intermediate iterate were a fixed point, the iteration path would turn back there, since both $f$ and $g$ are involutions; this would yield an orbit with a shorter period, contradicting the minimality of the period of an orbit.
Such considerations lead to the following stronger forms of Theorem~\ref{thm:involute-two-fixes-even} and Theorem~\ref{thm:involute-two-fixes-odd}:
\begin{theorem}
\label{thm:involute-two-fixes-even-odd}
\script{iterateCompose}{1100}
When $f$ fixes $x$, the $j$-th iterate of (\HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}) from $x$, for $0 < j < p$, is a fixed point of $f$ (when the period $p$ is even) or of \HOLinline{\HOLFreeVar{g}} (when $p$ is odd) if and only if $j$ is the half-period \HOLinline{\HOLFreeVar{p}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{2}}.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLConst{\HOLConst{even}}\;\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLSymConst{\HOLTokenForall{}}\HOLBoundVar{j}.\;\HOLNumLit{0}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLBoundVar{j}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLBoundVar{j}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\;\;\;\;\;\;((\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLBoundVar{j}}(\HOLFreeVar{x})}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLBoundVar{j}\;\HOLSymConst{=}\;\HOLFreeVar{p}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{2})\\
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLConst{\HOLConst{odd}}\;\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLSymConst{\HOLTokenForall{}}\HOLBoundVar{j}.\;\HOLNumLit{0}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLBoundVar{j}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLBoundVar{j}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\;\;\;\;\;\;((\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\ensuremath{\sp{\HOLBoundVar{j}}(\HOLFreeVar{x})}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenEquiv{}}\;\HOLBoundVar{j}\;\HOLSymConst{=}\;\HOLFreeVar{p}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{2})
\end{HOLmath}
\end{theorem}
This completes our tour of the theory of permutation orbits and fixed points.
These results provide the key to a formal proof that our iterative two-squares algorithm is correct.
\section{Correctness of Algorithm}
\label{sec:correctness}
The algorithm to compute the flip fixed point from the known Zagier fixed point, given in Definition~\ref{def:two-sq-def}, makes use of a while-loop.
A while-loop consists of a guard $G$ and a body $B$, starting from an element $x$. The body is a function on $x$, producing the iterates
$B(x)$, \HOLinline{\HOLFreeVar{B}\ensuremath{\sp{\HOLNumLit{2}}(\HOLFreeVar{x})}}, \HOLinline{\HOLFreeVar{B}\ensuremath{\sp{\HOLNumLit{3}}(\HOLFreeVar{x})}}, \textit{etc}.
The guard is a predicate on each iterate: the loop continues only while the guard holds for the current iterate.
In HOL4, the \HOLConst{WHILE} loop with guard $G$ and body $B$ starting with $x$ is defined as:
\begin{HOLmath}
\;\;\HOLConst{WHILE}\;\HOLFreeVar{G}\;\HOLFreeVar{B}\;\HOLFreeVar{x}\;\HOLTokenDefEquality{}\;\HOLKeyword{if}\;\HOLFreeVar{G}\;\HOLFreeVar{x}\;\HOLKeyword{then}\;\HOLConst{WHILE}\;\HOLFreeVar{G}\;\HOLFreeVar{B}\;(\HOLFreeVar{B}\;\HOLFreeVar{x})\;\HOLKeyword{else}\;\HOLFreeVar{x}
\end{HOLmath}
from which one can easily show by induction that:
\begin{HOLmath}
\HOLTokenTurnstile{}(\HOLSymConst{\HOLTokenForall{}}\HOLBoundVar{j}.\;\HOLBoundVar{j}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{k}\;\HOLSymConst{\HOLTokenImp{}}\;\HOLFreeVar{G}\;(\HOLFreeVar{B}\ensuremath{\sp{\HOLBoundVar{j}}(\HOLFreeVar{x})}))\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLConst{WHILE}\;\HOLFreeVar{G}\;\HOLFreeVar{B}\;\HOLFreeVar{x}\;\HOLSymConst{=}\\
\;\;\;\;\;\;\;\;\;\HOLKeyword{if}\;\HOLFreeVar{G}\;(\HOLFreeVar{B}\ensuremath{\sp{\HOLFreeVar{k}}(\HOLFreeVar{x})})\;\HOLKeyword{then}\;\HOLConst{WHILE}\;\HOLFreeVar{G}\;\HOLFreeVar{B}\;(\HOLFreeVar{B}\ensuremath{\sp{\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}}(\HOLFreeVar{x})})\;\HOLKeyword{else}\;\HOLFreeVar{B}\ensuremath{\sp{\HOLFreeVar{k}}(\HOLFreeVar{x})}
\end{HOLmath}
giving this expected result:
\begin{theorem}
\label{thm:iterate-while-thm}
\script{iterateCompute}{922}
The \HOLinline{\HOLConst{WHILE}} loop delivers the first body iterate that fails the guard test.
\begin{HOLmath}
\HOLTokenTurnstile{}(\HOLSymConst{\HOLTokenForall{}}\HOLBoundVar{j}.\;\HOLBoundVar{j}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{k}\;\HOLSymConst{\HOLTokenImp{}}\;\HOLFreeVar{G}\;(\HOLFreeVar{B}\ensuremath{\sp{\HOLBoundVar{j}}(\HOLFreeVar{x})}))\;\HOLSymConst{\HOLTokenConj{}}\;\HOLSymConst{\HOLTokenNeg{}}\HOLFreeVar{G}\;(\HOLFreeVar{B}\ensuremath{\sp{\HOLFreeVar{k}}(\HOLFreeVar{x})})\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLConst{WHILE}\;\HOLFreeVar{G}\;\HOLFreeVar{B}\;\HOLFreeVar{x}\;\HOLSymConst{=}\;\HOLFreeVar{B}\ensuremath{\sp{\HOLFreeVar{k}}(\HOLFreeVar{x})}
\end{HOLmath}
\end{theorem}
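Theorem~\ref{thm:iterate-while-thm} can be checked against an executable model. The following Python sketch (ours, for illustration only; the formal development lives in the HOL4 scripts) transcribes the defining equation of \HOLConst{WHILE} together with the body iterates:

```python
def while_loop(G, B, x):
    # WHILE G B x = if G x then WHILE G B (B x) else x
    while G(x):
        x = B(x)
    return x

def iterate(B, j, x):
    # the j-th body iterate B^j(x)
    for _ in range(j):
        x = B(x)
    return x
```

For example, with guard $G(v) \equiv v < 10$ and body $B(v) = v + 3$ from $x = 0$, the guard holds for the iterates $0, 3, 6, 9$ and fails first at $12 = B^4(0)$, which is exactly the value \texttt{while\_loop} returns.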
\subsection{Iterate with WHILE}
\label{sec:iterate-while}
From Section~\ref{sec:orbits}, we learn that for two involutions $f$ and \HOLinline{\HOLFreeVar{g}},
a fixed point $\alpha$ of $f$ is paired up with a fixed point $\beta$ of \HOLinline{\HOLFreeVar{g}}
whenever the period of $\alpha$ under the composition \HOLinline{\HOLFreeVar{\varphi}\;\HOLSymConst{=}\;\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}} is odd.
In fact, $\beta$ lies in the orbit of $\alpha$ at the halfway point: the iterate at half the period.
Since a while-loop also delivers an iterate, we have:
\begin{theorem}
\label{thm:involute-involute-fixes-while}
\script{iterateCompose}{1536}
For two involutions $f$ and \HOLinline{\HOLFreeVar{g}}, if $f$ fixes $x$ and the period of $x$ under \HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}} is odd,
then a WHILE loop iterating \HOLinline{\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}} from $x$ reaches a fixed point of \HOLinline{\HOLFreeVar{g}}.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{f}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{g}\;\HOLConst{involute}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\\
\;\;\;\;\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{f}\;\HOLFreeVar{S}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLConst{\HOLConst{odd}}\;\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLConst{WHILE}\;(\HOLTokenLambda{}\HOLBoundVar{t}.\;\HOLFreeVar{g}\;\HOLBoundVar{t}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLBoundVar{t})\;(\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g})\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}
\end{HOLmath}
\end{theorem}
\begin{proof}
Let guard \HOLinline{\HOLFreeVar{G}\;\HOLSymConst{=}\;(\HOLTokenLambda{}\HOLBoundVar{t}.\;\HOLFreeVar{g}\;\HOLBoundVar{t}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLBoundVar{t})}, and body \HOLinline{\HOLFreeVar{B}\;\HOLSymConst{=}\;\HOLFreeVar{f}\;\HOLSymConst{\HOLTokenCompose}\;\HOLFreeVar{g}}.
If period \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLNumLit{1}},
then \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}} by Theorem~\ref{thm:involute-period-1}.
So \HOLinline{\HOLSymConst{\HOLTokenNeg{}}\HOLFreeVar{G}\;\HOLFreeVar{x}},
and \HOLinline{\HOLConst{WHILE}\;\HOLFreeVar{G}\;\HOLFreeVar{B}\;\HOLFreeVar{x}\;\HOLSymConst{=}\;\HOLFreeVar{x}} since the condition is not met at the start.
Therefore \HOLinline{\HOLConst{WHILE}\;\HOLFreeVar{G}\;\HOLFreeVar{B}\;\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}}.
If period \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{1}}, let \HOLinline{\HOLFreeVar{h}\;\HOLSymConst{=}\;\HOLFreeVar{p}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{2}},
and \HOLinline{\HOLFreeVar{z}\;\HOLSymConst{=}\;\HOLFreeVar{B}\ensuremath{\sp{\HOLFreeVar{h}}(\HOLFreeVar{x})}}.
Since \HOLinline{\HOLNumLit{1}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{p}}, $0 < h < p$.
Also $f$ and \HOLinline{\HOLFreeVar{g}} are involutions, so \HOLinline{\HOLFreeVar{B}\;\HOLConst{\HOLConst{permutes}}\;\HOLFreeVar{S}}.
Hence \HOLinline{\HOLFreeVar{z}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}} by Theorem~\ref{thm:involute-two-fixes-odd},
so \HOLinline{\HOLSymConst{\HOLTokenNeg{}}\HOLFreeVar{G}\;\HOLFreeVar{z}}.
We claim \HOLinline{\HOLSymConst{\HOLTokenForall{}}\HOLBoundVar{j}.\;\HOLBoundVar{j}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{h}\;\HOLSymConst{\HOLTokenImp{}}\;\HOLFreeVar{G}\;(\HOLFreeVar{B}\ensuremath{\sp{\HOLBoundVar{j}}(\HOLFreeVar{x})})}.
To see this, let \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{B}\ensuremath{\sp{\HOLFreeVar{j}}(\HOLFreeVar{x})}}, which is an element of $S$.
If \HOLinline{\HOLFreeVar{j}\;\HOLSymConst{=}\;\HOLNumLit{0}}, then \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{x}}.
Since period \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{1}},
\HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenNotIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}} by Theorem~\ref{thm:involute-period-1}, so \HOLinline{\HOLFreeVar{G}\;\HOLFreeVar{y}}.
If \HOLinline{\HOLFreeVar{j}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}}, then $0 < j < h < p$, and \HOLinline{\HOLFreeVar{j}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLFreeVar{h}}.
Hence \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{\HOLTokenNotIn{}}\;\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}} by Theorem~\ref{thm:involute-two-fixes-even-odd},
so \HOLinline{\HOLFreeVar{G}\;\HOLFreeVar{y}} again. The claim is proved.
By the claim and \HOLinline{\HOLSymConst{\HOLTokenNeg{}}\HOLFreeVar{G}\;\HOLFreeVar{z}},
apply Theorem~\ref{thm:iterate-while-thm} to conclude
$\HOLinline{\HOLConst{WHILE}\;\HOLFreeVar{G}\;\HOLFreeVar{B}\;\HOLFreeVar{x}\;\HOLSymConst{=}\;\HOLFreeVar{z}} \in \HOLinline{\HOLConst{fixes}\;\HOLFreeVar{g}\;\HOLFreeVar{S}}$.
\end{proof}
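As a concrete sanity check (our own toy example, not taken from the HOL4 scripts), take the set $\{0,1,\dots,n-1\}$ for an odd $n$ with the involutions $f(i) = -i$ and $g(i) = 1 - i$, both modulo $n$. The composition steps backwards through the set, so the fixed point $0$ of $f$ has odd period $n$, and the loop stops at the half-period iterate, which is the fixed point of $g$:

```python
n = 11
f = lambda i: (-i) % n       # involution; its only fixed point is 0
g = lambda i: (1 - i) % n    # involution; its only fixed point is (n + 1) // 2
B = lambda i: f(g(i))        # composition f o g: i |-> (i - 1) mod n

x, steps = 0, 0              # start at the fixed point of f
while g(x) != x:             # the guard of the WHILE loop
    x, steps = B(x), steps + 1

assert x == (n + 1) // 2     # a fixed point of g ...
assert steps == n // 2       # ... reached at the half-period iterate
```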
\subsection{Two Squares by WHILE}
\label{sec:two-squares-while}
We have developed the theory to show that the algorithm in Section~\ref{sec:algorithm} is correct:
\begin{theorem}
\label{thm:two-sq-thm}
\script{twoSquares}{840}
For a prime of the form \HOLinline{\HOLNumLit{4}\HOLSymConst{\ensuremath{}}\HOLFreeVar{k}\;\HOLSymConst{\ensuremath{+}}\;\HOLNumLit{1}},
the two squares algorithm of Definition~\ref{def:two-sq-def} gives a flip fixed point.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{prime}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;\HOLConst{two_sq}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLConst{flip}\;(\HOLConst{mills}\;\HOLFreeVar{n})
\end{HOLmath}
\end{theorem}
\begin{proof}
Let \HOLinline{\HOLFreeVar{S}\;\HOLSymConst{=}\;\HOLConst{mills}\;\HOLFreeVar{n}}, \HOLinline{\HOLFreeVar{\varphi}\;\HOLSymConst{=}\;\HOLConst{zagier}\;\HOLSymConst{\HOLTokenCompose}\;\HOLConst{flip}},
\HOLinline{\HOLFreeVar{u}\;\HOLSymConst{=}\;(\HOLNumLit{1}\HOLSymConst{,}\HOLNumLit{1}\HOLSymConst{,}\HOLFreeVar{n}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{4})}, and period \HOLinline{\HOLFreeVar{p}\;\HOLSymConst{=}\;\HOLConst{\HOLConst{period}}\;\HOLFreeVar{\varphi}\;\HOLFreeVar{u}}.
By Definition~\ref{def:two-sq-def},
and noting that \HOLinline{(\HOLSymConst{\HOLTokenNeg{}})\;\HOLSymConst{\HOLTokenCompose}\;\HOLConst{found}\;\HOLSymConst{=}\;(\HOLTokenLambda{}\HOLBoundVar{t}.\;\HOLConst{flip}\;\HOLBoundVar{t}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLBoundVar{t})},
this is to show: \HOLinline{\HOLConst{WHILE}\;(\HOLTokenLambda{}\HOLBoundVar{t}.\;\HOLConst{flip}\;\HOLBoundVar{t}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLBoundVar{t})\;\HOLFreeVar{\varphi}\;\HOLFreeVar{u}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLConst{flip}\;\HOLFreeVar{S}}.
Since a prime is not a square, we have \HOLinline{\HOLConst{\HOLConst{finite}}\;\HOLFreeVar{S}}.
Now \HOLinline{\HOLFreeVar{\varphi}\;\HOLConst{\HOLConst{permutes}}\;\HOLFreeVar{S}} as the Zagier map and the flip map are both involutions,
by Theorem~\ref{thm:zagier-involute-mills-prime} and Theorem~\ref{thm:flip-involute-mills},
and \HOLinline{\HOLConst{fixes}\;\HOLConst{zagier}\;\HOLFreeVar{S}\;\HOLSymConst{=}\;\HOLTokenLeftbrace{}\HOLFreeVar{u}\HOLTokenRightbrace{}} by Theorem~\ref{thm:zagier-fixes-prime}.
Thus \HOLinline{\HOLFreeVar{u}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLConst{zagier}\;\HOLFreeVar{S}},
and period $p$ is odd by Corollary~\ref{cor:involute-fix-singleton-odd}.
So \HOLinline{\HOLConst{WHILE}\;(\HOLTokenLambda{}\HOLBoundVar{t}.\;\HOLConst{flip}\;\HOLBoundVar{t}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLBoundVar{t})\;\HOLFreeVar{\varphi}\;\HOLFreeVar{u}\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{fixes}\;\HOLConst{flip}\;\HOLFreeVar{S}} by Theorem~\ref{thm:involute-involute-fixes-while}.
\end{proof}
\noindent
It is almost trivial to convert \HOLinline{\HOLConst{two_sq}\;\HOLFreeVar{n}} to the following algorithm:
\begin{definition}
\label{def:two-squares-def}
Compute the two squares for Fermat's two squares theorem.
\begin{HOLmath}
\;\;\HOLConst{two_squares}\;\HOLFreeVar{n}\;\HOLTokenDefEquality{}\;(\HOLKeyword{let}\;(\HOLBoundVar{x}\HOLSymConst{,}\HOLBoundVar{y}\HOLSymConst{,}\HOLBoundVar{z})\;=\;\HOLConst{two_sq}\;\HOLFreeVar{n}\;\HOLKeyword{in}\;(\HOLBoundVar{x}\HOLSymConst{,}\HOLBoundVar{y}\;\HOLSymConst{\ensuremath{+}}\;\HOLBoundVar{z}))
\end{HOLmath}
\end{definition}
\noindent
giving the two squares in a pair,
and its correctness is readily demonstrated:
\begin{theorem}
\label{thm:two-squares-thm}
\script{twoSquares}{1041}
The algorithm of Definition~\ref{def:two-squares-def} indeed gives Fermat's two squares.
\begin{HOLmath}
\HOLTokenTurnstile{}\HOLConst{prime}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}\;\HOLSymConst{\HOLTokenImp{}}\\
\;\;\;\;\;\;\;(\HOLKeyword{let}\;(\HOLBoundVar{u}\HOLSymConst{,}\HOLBoundVar{v})\;=\;\HOLConst{two_squares}\;\HOLFreeVar{n}\;\HOLKeyword{in}\;\HOLFreeVar{n}\;\HOLSymConst{=}\;\HOLBoundVar{u}\HOLSymConst{\ensuremath{\sp{2}}}\;\HOLSymConst{\ensuremath{+}}\;\HOLBoundVar{v}\HOLSymConst{\ensuremath{\sp{2}}})
\end{HOLmath}
\end{theorem}
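For experimentation outside HOL4, the algorithm admits a direct transcription. The Python sketch below is ours: it uses ordinary integers in place of HOL4's natural numbers, and the branch conditions follow Zagier's published map (as discussed in Section~\ref{sec:lessons}, the boundary cases differ slightly from Definition~\ref{def:zagier-def}, but they do not arise on the orbit used here):

```python
def flip(t):
    # flip (x, y, z) = (x, z, y)
    x, y, z = t
    return (x, z, y)

def zagier(t):
    # Zagier's involution on windmill triples with x^2 + 4*y*z = n
    x, y, z = t
    if x + z < y:                      # i.e. x < y - z
        return (x + 2 * z, z, y - x - z)
    elif x < 2 * y:
        return (2 * y - x, y, x - y + z)
    else:
        return (x - 2 * y, x - y + z, y)

def two_squares(n):
    # n is assumed to be a prime with n % 4 == 1
    t = (1, 1, n // 4)           # the unique Zagier fixed point
    while flip(t) != t:          # guard: not yet a flip fixed point
        t = zagier(flip(t))      # body: zagier o flip
    x, y, z = t                  # here y == z, so n = x^2 + (y + z)^2
    return (x, y + z)
```

Running \texttt{two\_squares(97)} gives \texttt{(9, 4)}, matching the HOL4 evaluation in Table~\ref{tbl:sample-run}.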
\begin{table*}[h]
\caption{Running Fermat's two squares algorithm in a HOL4 session, with timing.}
\Description{This table shows a sample run of Fermat's two squares algorithm in a HOL4 session, with timing information.}
\label{tbl:sample-run}
\begin{tabular}{p{0.8\textwidth}}
\begin{verbatim}
> time EVAL ``two_squares 97``;
runtime: 0.00770s, gctime: 0.00086s, systime: 0.00077s.
val it = |- two_squares 97 = (9,4): thm
> time EVAL ``two_squares 1999999913``;
runtime: 2m23s, gctime: 14.7s, systime: 11.3s.
val it = |- two_squares 1999999913 = (1093,44708): thm
> time EVAL ``two_squares 12345678949``;
runtime: 6m02s, gctime: 37.5s, systime: 26.0s.
val it = |- two_squares 12345678949 = (110415,12418): thm
> EVAL ``9 * 9 + 4 * 4``;
val it = |- 9 * 9 + 4 * 4 = 97: thm
> EVAL ``1093 * 1093 + 44708 * 44708``;
val it = |- 1093 * 1093 + 44708 * 44708 = 1999999913: thm
> EVAL ``110415 * 110415 + 12418 * 12418``;
val it = |- 110415 * 110415 + 12418 * 12418 = 12345678949: thm
\end{verbatim}
\end{tabular}
\end{table*}
Table~\ref{tbl:sample-run} shows a sample run in a HOL4 session on a typical laptop,
using \HOLConst{EVAL} for evaluation, with the prefix \HOLConst{time} to obtain timing statistics.
Note that these \HOLConst{EVAL} executions are based on optimised symbolic rewriting in HOL4, and are thus orders of magnitude slower than running native code.
\paragraph*{Other algorithms}
A prime has a finite set of windmill triples, by Theorem~\ref{thm:mills-finite}.
Fermat's two squares for a prime $n$ with \HOLinline{\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}}, which must exist by Theorem~\ref{thm:fermat-two-squares-exists},
can be found by a brute-force search: subtract successive odd squares from $n$, and check whether the difference is a square. Although there are better ways to test for a square than the square-root test, they are not simple to implement.
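For comparison, this brute-force search can be sketched in a few lines (a Python sketch of ours, not part of the formal development; \texttt{math.isqrt} plays the role of the square-root test):

```python
import math

def two_squares_brute(n):
    # assumes a representation exists, e.g. n prime with n % 4 == 1;
    # subtract successive odd squares from n and test the difference
    a = 1
    while a * a <= n:
        r = n - a * a
        b = math.isqrt(r)        # the simple square-root test
        if b * b == r:
            return (a, b)
        a += 2
    return None                  # no representation found
```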
Don Zagier, after his one-sentence proof, referred to an effective algorithm by Wagon~\cite{Wagon-1990-acm} to compute the two squares. The algorithm requires finding a quadratic non-residue of the given prime $n$.
The advantage of our algorithm in Definition~\ref{def:two-squares-def} over such alternative methods is that only addition and subtraction are performed.
The implementation is rather straightforward. The issue of termination is discussed next.
\subsection{Terminating Condition}
\label{sec:termination}
As mentioned in Section~\ref{sec:flip-fix-search}, for our algorithm the WHILE loop may or may not terminate. To guarantee termination, convert the WHILE loop to a countdown loop, as follows. First, ensure that the input number $n$ is not a square, so that \HOLinline{\HOLConst{mills}\;\HOLFreeVar{n}} is finite (Theorem~\ref{thm:mills-finite}), and check \HOLinline{\HOLFreeVar{n}\;\ensuremath{\equiv}\;\HOLNumLit{1}\;\ensuremath{(}\ensuremath{\bmod}\;\HOLNumLit{4}\ensuremath{)}}, so that \HOLinline{(\HOLNumLit{1}\HOLSymConst{,}\HOLNumLit{1}\HOLSymConst{,}\HOLFreeVar{n}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{4})\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{mills}\;\HOLFreeVar{n}}, \textit{i.e.}, \HOLinline{\HOLConst{mills}\;\HOLFreeVar{n}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLSymConst{\HOLTokenEmpty{}}}.
Obviously, for any triple \HOLinline{(\HOLFreeVar{x}\HOLSymConst{,}\HOLFreeVar{y}\HOLSymConst{,}\HOLFreeVar{z})\;\HOLSymConst{\HOLTokenIn{}}\;\HOLConst{mills}\;\HOLFreeVar{n}}, each of $x$, $y$ and $z$ is less than $n$, hence \HOLinline{\ensuremath{|}\HOLConst{mills}\;\HOLFreeVar{n}\ensuremath{|}\;\HOLSymConst{\HOLTokenLt{}}\;\HOLFreeVar{n}\HOLSymConst{\ensuremath{\sp{3}}}}.
Now, use a countdown loop from \HOLinline{\HOLFreeVar{n}\HOLSymConst{\ensuremath{\sp{3}}}} to $0$, starting with the triple \HOLinline{(\HOLNumLit{1}\HOLSymConst{,}\HOLNumLit{1}\HOLSymConst{,}\HOLFreeVar{n}\;\HOLConst{\HOLConst{div}}\;\HOLNumLit{4})} for the \HOLinline{\HOLConst{zagier}\;\HOLSymConst{\HOLTokenCompose}\;\HOLConst{flip}} iteration.
The iterations trace an orbit. At the halfway point, the orbit hits either a flip fixed point, detected by \HOLinline{\HOLFreeVar{y}\;\HOLSymConst{=}\;\HOLFreeVar{z}}, when the period is odd (Theorem~\ref{thm:involute-two-fixes-odd}), or another Zagier fixed point, detected by \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{=}\;\HOLFreeVar{y}}, when the period is even (Theorem~\ref{thm:involute-two-fixes-even}). These checks provide actual exits from the countdown loop, much earlier than the count dropping to zero.
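This countdown variant can be sketched as follows (a Python sketch of ours, with ordinary integers standing in for HOL4's natural numbers; the map definitions are repeated so the fragment is self-contained, and the $n^3$ bound is deliberately crude):

```python
def flip(t):
    x, y, z = t
    return (x, z, y)

def zagier(t):
    # branch conditions as in Zagier's published map
    x, y, z = t
    if x + z < y:
        return (x + 2 * z, z, y - x - z)
    elif x < 2 * y:
        return (2 * y - x, y, x - y + z)
    else:
        return (x - 2 * y, x - y + z, y)

def two_squares_safe(n):
    # countdown loop: guaranteed to stop within n**3 steps
    if n % 4 != 1:
        return None
    t = (1, 1, n // 4)
    if t[1] == t[2]:                  # n = 5: start is already a flip fixed point
        return (t[0], t[1] + t[2])
    for _ in range(n ** 3):
        t = zagier(flip(t))
        x, y, z = t
        if y == z:                    # flip fixed point: odd period
            return (x, y + z)
        if x == y:                    # another Zagier fixed point: even period
            return None               # cannot happen when n is prime
    return None                       # count exhausted (n not suitable)
```

For a prime $n \equiv 1 \pmod 4$ the loop exits at the flip fixed point; hitting another Zagier fixed point instead (\texttt{x == y}) signals an even period, which cannot happen for a prime.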
\subsection{Lessons Learnt}
\label{sec:lessons}
This formalisation work can be a self-contained project in a theorem-proving workshop.
The ideas are simple, but formulating the theorems properly is not simple.
For example, at first the author wanted to prove:
\begin{equation*}
\label{eqn:zagier-inv}
\HOLinline{\HOLSymConst{\HOLTokenForall{}}\HOLBoundVar{x}\;\HOLBoundVar{y}\;\HOLBoundVar{z}.\;\HOLConst{zagier}\;(\HOLConst{zagier}\;(\HOLBoundVar{x}\HOLSymConst{,}\HOLBoundVar{y}\HOLSymConst{,}\HOLBoundVar{z}))\;\HOLSymConst{=}\;(\HOLBoundVar{x}\HOLSymConst{,}\HOLBoundVar{y}\HOLSymConst{,}\HOLBoundVar{z})}.
\end{equation*}
The interactive session produces several subgoals that the author could not resolve immediately.
A comparison of Definition~\ref{def:zagier-def} with Equation~\eqref{eqn:zagier-map}
shows differences in boundary cases.
Finally, some insight from windmills explains why the boundary cases can be ignored, and provides the pre-condition \HOLinline{\HOLFreeVar{x}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}\;\HOLSymConst{\HOLTokenConj{}}\;\HOLFreeVar{z}\;\HOLSymConst{\HOLTokenNotEqual{}}\;\HOLNumLit{0}}; see Equation~\eqref{eqn:zagier-involute}. The result is Theorem~\ref{thm:zagier-involute-mills-prime}.
Explaining that the Zagier map is an involution through the mind of a windmill poses some challenges. Definition~\ref{def:zagier-def} of the Zagier map has $3$ branches, so the initial effort is to treat just $3$ cases. The first case is immediate, but the second case runs into a mess. It is only after drawing a lot of windmills that the author realises these finer points:
\begin{itemize}[leftmargin=*]
\item there are $3$ types: $x < y, x \ee y$, and $x > y$ for the windmill triple $(x,y,z)$,
\item the $3$ types are further subdivided due to the geometry of the mind, giving $5$ cases in total,
\item the $5$ cases can be condensed into $3$ branches, as the definition shows.
\end{itemize}
The result is Table~\ref{tbl:zagier-map} in Section~\ref{sec:windmill-mind}.
For the permutation orbits in Section~\ref{sec:orbits},
the proofs about relations between iterates start as long-winded arguments treating the if-part and the only-if part separately.
Writing them up for this paper prompted the author to rethink the logic.
The polished proofs simply employ a chain of logical equivalences.
Fixed point orbits have either even or odd period, as treated in Section~\ref{sec:fixed-point-period-even} and Section~\ref{sec:fixed-point-period-odd}.
The drawing of the diagrams helps to refine the proofs to be short and sweet, making good use of theorems already proved.
For the correctness proof of the algorithm using a while-loop in Section~\ref{sec:correctness}, the author initially applied Hoare logic assertions to derive the desired iterate upon loop exit.
\ifdefined
This is awkward, as pointed out by Michael Norrish who knows the HOL4 theorem-prover inside out.
\else
This is awkward, as pointed out by (omitted for anonymous review).
\fi
The reason is that \HOLinline{\HOLConst{WHILE}} is \emph{defined} as iteration of the body in HOL4.
The section has since been rewritten.
\paragraph*{Development Effort}
The proofs have been streamlined after several revisions.
Such refinements result in the script line counts for various theories developed, shown in Table~\ref{tbl:hol4-line-counts}.
\begin{table}[h]
\caption{Statistics of various theories in this work.}
\Description{This table gives the statistics of various theories in this formalisation work.}
\label{tbl:hol4-line-counts}
\[
\begin{array}{llr}
\text{HOL4 Theory} & \text{Description} & \text{\#Lines}\\
\hline
\text{involute} & \text{basic involution} & 231\\
\text{iteration} & \text{function iteration and period} & 917\\
\text{iterateCompose} & \text{iteration of involute composition} & 1648\\
\text{iterateCompute} & \text{iteration period computation} & 939\\
\text{windmill} & \text{windmills and their involutions} & 1844\\
\text{twoSquares} & \text{two-squares by windmills} & 1317\\
\end{array}
\]
\end{table}
\noindent
The scripts are fully documented, including the traditional proofs as comments before each theorem. Although comments almost double the script size, the line counts are still indicative of the effort required to convert ideas into formal proofs.
\subsection{Related Work}
\label{sec:related-work}
As noted in Section~\ref{sec:introduction}, Fermat's two squares theorem has been formalised in several theorem provers. However, none of these formal proofs is constructive, in the sense that none comes with a formal proof of an algorithm to compute the two squares for a prime satisfying the theorem.
Fermat's two squares theorem has two parts: existence and uniqueness.
All formal proofs include the existence part (see Theorem~\ref{thm:fermat-two-squares-exists}), using classic and modern existence proofs: the method of infinite descent is used in one system, Gaussian integers are employed in three systems, and both Heath-Brown's proof and Zagier's proof are treated in two systems.
Only two formal proofs include the uniqueness part (see Theorem~\ref{thm:fermat-two-squares-unique}): Th{\'e}ry~\cite{Thery-2004} proved by algebraic identities and divisibility, and Hughes~\cite{Hughes-2019} proved by unique factorisation of Gaussian integers.
Recently, Dubach and Muehlboeck~\cite{Dubach-Muehlboeck-2021-acm} formalised Zagier's proof using involutions in Coq's Mathematical Components Library. They illustrated their proof using the windmills as per this paper, and extended the use of involutions on the same set to formalise also an integer-partition proof of Fermat's two squares theorem by Christopher~\cite{Christopher-2016-acm}.
A summary of these formal proofs, in chronological order, is given in Table~\ref{tbl:chronology-formalise-two-squares}.
\begin{table*}[h]
\caption{Chronology of formalisation of Fermat's two squares theorem.}
\Description{This table lists, in chronological order, the formal proofs of Fermat's two squares theorem, by various authors in different theorem provers.}
\label{tbl:chronology-formalise-two-squares}
\begin{tabular}{llll}
Year & Author(s)[reference] & Theorem Prover & Comment\\
\hline
2004 & Laurent Th{\'e}ry~\cite{Thery-2004} & Coq & Gaussian integers, with uniqueness\\
2007 & Roelof Oosterhuis~\cite{Oosterhuis-2007} & Isabelle & Euler's proof with infinite descent\\
2009 & Marco Riccardi~\cite{Riccardi-2009} & Mizar & Heath-Brown's proof with involutions\\
2010 & John Harrison~\cite{Harrison-2010} & HOL Light & Zagier's proof with involutions\\
2012 & Anthony Narkawicz~\cite{Narkawicz-2012-acm} & NASA PVS & Zagier's proof with involutions\\
2015 & Mario Carneiro~\cite{Carneiro-2015} & MetaMath & Gaussian integers\\
2016 & Rob Arthan~\cite{Arthan-2016} & ProofPower & Heath-Brown's proof with involutions\\
2019 & Chris Hughes~\cite{Hughes-2019} & Lean & Principal Ideal Ring of Gaussian integers, with uniqueness\\
2021 & Dubach and Muehlboeck~\cite{Dubach-Muehlboeck-2021-acm} & Coq & Zagier's and Christopher's proofs with involutions\\
\end{tabular}
\end{table*}
\section{Conclusion}
\label{sec:conclusion}
About Fermat's two squares theorem,
G. H. Hardy wrote in his 1940 essay \emph{A Mathematician's Apology}~\cite[Section 13]{Hardy-1940-acm}:
\bigskip
\begin{mquote}[1em]
\textit{This is Fermat's theorem, which is ranked, very justly, as one of the finest of arithmetic. Unfortunately, there is no proof within the comprehension of anybody but a fairly expert mathematician.}
\end{mquote}
\bigskip
\noindent
This work has been a rewarding exercise in formalisation,
delivering a proof of Fermat's Theorem~\ref{thm:fermat-two-squares-thm} using only natural numbers, involutions, and counting.
There is a certain sense of mathematical beauty when a non-trivial result can be shown by elementary means, borrowing elegant ideas by Zagier and Spivak.
Moreover, by developing a theory of involution iteration, an algorithm to compute the two squares of the theorem can be formally shown to be correct.
\paragraph*{Future Work}
The theory in Section~\ref{sec:orbits}, about orbits and fixed points, can be developed using group actions, since the iteration indices form a cyclic group under addition \HOLConst{mod} $p$, where $p$ is the orbit period.
One can exploit the symmetry in permutation orbits, especially for permutations arising from two involutions, to improve the algorithm, as shown in the analysis by Shiu~\cite{Shiu-1996}.
In HOL4, this direction can start from the algebra of group theory in Chan and Norrish~\cite{Chan-Norrish-cpp-2012}.
The performance of the two-squares algorithm described in Definition~\ref{def:two-sq-def} can be formally analysed using an approach in Chan~\cite{Chan-ANU-2019-acm}.
\ifdefined
\section*{Acknowledgements}
\label{sec:acknowledgements}
\addcontentsline{toc}{section}{Acknowledgments}
Many thanks to Michael Norrish for his careful review of the draft, providing useful advice and helpful recommendations to improve this paper.
The author is also grateful to the anonymous reviewers who pointed out typographical errors and suggested clarifications.
This paper has been revised to incorporate their comments.
\else
\fi
\ifdefined
\else
\section*{Appendices}
% arXiv:2112.02556 --- Windmills of the minds: an algorithm for Fermat's Two Squares Theorem
% Subjects: Logic in Computer Science (cs.LO); Number Theory (math.NT)
% Source: https://arxiv.org/abs/1509.07716
% Title: The width of quadrangulations of the projective plane
\begin{abstract}
We show that every $4$-chromatic graph on $n$ vertices, with no two vertex-disjoint odd cycles, has an odd cycle of length at most $\tfrac12\,(1+\sqrt{8n-7})$. Let $G$ be a non-bipartite quadrangulation of the projective plane on $n$ vertices. Our result immediately implies that $G$ has edge-width at most $\tfrac12\,(1+\sqrt{8n-7})$, which is sharp for infinitely many values of $n$. We also show that $G$ has face-width (equivalently, contains an odd cycle transversal of cardinality) at most $\tfrac14(1+\sqrt{16 n-15})$, which is a constant away from the optimal; we prove a lower bound of $\sqrt{n}$. Finally, we show that $G$ has an odd cycle transversal of size at most $\sqrt{2\Delta n}$ inducing a single edge, where $\Delta$ is the maximum degree. This last result partially answers a question of Nakamoto and Ozeki.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
Erd\H os~\cite{Erd74} asked whether there is a constant $c$ such that every
$n$-vertex $4$-chromatic graph has an odd cycle of length at most $c\sqrt n$.
Kierstead, Szemer\'edi and Trotter~\cite{KST84} proved the conjecture
with $c=8$, and the constant was gradually brought down to $c=2$~\cite{Jia01,Nil99}.
A natural question, asked by Ngoc and Tuza~\cite{NgoTuz95}, is to determine
the infimum of $c$ such that every $4$-chromatic graph on $n$ vertices has an
odd cycle of length at most $c\sqrt n$.
A construction due to Gallai~\cite{Gal63} shows that $c>1$, and this
was subsequently improved to $c>\sqrt 2$ by Ngoc and Tuza~\cite{NgoTuz95},
and independently by Youngs~\cite{You96}. The graphs they used---the so-called
\emph{generalized Mycielski graphs}---are a subclass of a rich family of graphs
known as \emph{non-bipartite projective quadrangulations}. These are graphs that
embed in the projective plane so that all faces are bounded by four edges, but
are not bipartite. This family of graphs plays an important role in the
study of the chromatic number of graphs on surfaces: it was shown by Youngs~\cite{You96} that all such
graphs are $4$-chromatic. Gimbel and Thomassen~\cite{GimTho97} later proved
that triangle-free projective-planar graphs are $3$-colorable if and
only if they do not contain a non-bipartite projective quadrangulation,
and used this to show that the $3$-colorability of triangle-free
projective-planar graphs can be decided in polynomial time.
Thomassen~\cite{Tho04} also used projective quadrangulations to give
negative answers to two questions of Bollob\'as~\cite{Bol78} about
$4$-chromatic graphs.
Let $G$ be a non-bipartite projective quadrangulation. A key
property of such a graph is that any cycle in $G$ is contractible on the
surface if and only if it has even length. In particular, the length of a
shortest odd cycle in $G$ is precisely the \emph{edge-width} of $G$,
the length of a shortest non-contractible cycle. Since any two
non-contractible closed curves on the projective plane intersect, it
also follows that $G$ does not contain two vertex-disjoint odd cycles.
The interest in the study of odd cycles in $4$-colorable graphs also
comes from the following question of Erd\H os~\cite{Erd68}: does every
$5$-chromatic $K_5$-free graph contain a pair of vertex-disjoint odd
cycles? Erd\H os's question may be rephrased as follows: is every
$K_5$-free graph without two vertex-disjoint odd cycles
$4$-colorable? This was answered in the affirmative by Brown and
Jung~\cite{BroJun69}. The non-bipartite projective quadrangulations
provide an infinite family of graphs showing that $4$ cannot be
replaced by $3$. Note that Erd\H os's question was generalized by
Lov\'asz and became known as the Erd\H os--Lov\'asz Tihany
Conjecture. So far only a few cases of the conjecture have been
proved.
\smallskip
Our first theorem settles the problem of Ngoc and Tuza for the case of
$4$-chromatic graphs with no two vertex-disjoint odd cycles (and in
particular, for non-bipartite projective
quadrangulations).
\begin{theorem}\label{thm:oddcycle2}
Let $G$ be a $4$-chromatic graph on $n$ vertices without two
vertex-disjoint odd cycles. Then $G$ contains an odd cycle of length
at most $\tfrac12(1+\sqrt{8n-7})$.
\end{theorem}
Note that in the generalized Mycielski graphs found by Ngoc and Tuza~\cite{NgoTuz95},
and independently by Youngs~\cite{You96}, the shortest odd cycles have precisely this number
of vertices, so Theorem~\ref{thm:oddcycle2} is sharp for infinitely
many values of $n$.
\smallskip
An \emph{odd cycle transversal} in a graph $G$ is a set of vertices
$S$ such that $G-S$ is bipartite. Since any two odd cycles intersect
in a non-bipartite projective quadrangulation $G$, it follows that
any odd cycle in $G$ is also an odd cycle transversal of $G$. The following
slightly more general result holds. If $\gamma$ is a non-contractible
closed curve whose intersection with $G$ is a subset $S \subseteq V(G)$
(the minimum size of such a set $S$ is called the \emph{face-width} of $G$),
then $G-S$ is bipartite. It follows that the minimum size of an odd cycle
transversal of $G$ cannot exceed the face-width of $G$, and it can be proved
that the two parameters are indeed equal.
\Cref{thm:oddcycle2} immediately implies that a non-bipartite
projective quadrangulation on $n$ vertices has an odd cycle
transversal with at most $\tfrac12(1+\sqrt{8n-7}) \approx \sqrt{2n}$ vertices. Our next
theorem improves the bound to roughly $\sqrt{n}$.
\begin{theorem}
\label{thm:OCT-upper}
Let $G$ be a non-bipartite projective quadrangulation on $n$
vertices. Then $G$ has an odd cycle transversal of size at most
$\tfrac14+\sqrt{n-\tfrac{15}{16}}$.
\end{theorem}
The next result shows that this bound is almost optimal.
\begin{theorem}
\label{thm:OCT-lower}
There are infinitely many values of $n$ for which there are non-bipartite
projective quadrangulations on $n$ vertices containing no odd cycle
transversal of size less than $\sqrt n$.
\end{theorem}
\smallskip
Nakamoto and Ozeki (private communication) have asked whether every
$n$-vertex non-bipartite projective quadrangulation can be
$4$-colored so that one color class has size $1$ and another has
size $o(n)$. While we were unable to answer their question in general,
our final theorem gives a positive answer when the maximum degree is
$o(n)$.
\begin{theorem}\label{thm:sqrtD}
Let $G$ be a non-bipartite projective quadrangulation on $n$
vertices, with maximum degree $\Delta$. There exists an odd cycle
transversal of size at most $\sqrt{2\Delta n}$ inducing a single edge.
\end{theorem}
The rest of the paper is organized as follows. In
\Cref{sec:preliminaries} we introduce the necessary terminology and
prove a number of lemmas that will be used later. In
\Cref{sec:oddcycle} we prove \Cref{thm:oddcycle2} using a theorem of
Lins on graphs embedded in the projective plane (an equivalent form of
the Okamura--Seymour theorem in Combinatorial Optimisation), and the
Two Disjoint Odd Cycles Theorem of Lov\'asz. In \Cref{sec:OCT} we
prove \Cref{thm:OCT-upper} using a theorem of Randby~\cite{Ran97}, and
then show that a similar result holds for any $4$-vertex-critical graph in
which any two odd cycles intersect. As a consequence of
\Cref{thm:OCT-upper}, we also deduce an (almost tight) lower bound on the independence
number of non-bipartite projective quadrangulations. In
Section~\ref{sec:jap}, we prove Theorem~\ref{thm:sqrtD} and finally,
we conclude with some open problems in Section~\ref{sec:conclusion}.
\section{Preliminaries}
\label{sec:preliminaries}
Our graph theoretic terminology is standard, and follows Bondy and
Murty~\cite{BonMur08}. For the notions from algebraic topology, the
reader is referred to~\cite{Mun84}.
We denote the (real) projective plane by ${\mathbb P}^2$.
We will use the following properties of the projective plane, which
can be proved using basic algebraic topology. Namely, every simple closed
curve $\gamma:[0,1] \to {\mathbb P}^2$ is either contractible (null-homotopic)
or one-sided, and any two non-contractible simple closed curves in general
position intersect transversally an odd number of times.
Given a non-bipartite projective quadrangulation $G$, it is not hard to show
(see e.g.~\cite[Lemma 3.1]{KaiSte15}) that all contractible closed walks
in $G$ have even length, and all non-contractible closed walks in $G$ have
odd length.
Given a quadrangulation $G=(V,E)$ of the projective plane ${\mathbb P}^2$, a
\emph{support set} $S$ is a circular sequence of vertices of $V$ (with
repetitions allowed), such that any two consecutive vertices of $S$
lie on a common face of $G$. Since $G$ is a quadrangulation, it
follows that two distinct consecutive vertices of $S$ are either adjacent or
have a common neighbor (in this case, we say that they are
\emph{opposite}). The \emph{size} of $S$ is the number of pairs of
consecutive vertices of $S$, the \emph{order} of $S$ is the number of
pairs of consecutive vertices of $S$ that are adjacent in $G$, and the
\emph{parity} of $S$ is the parity of the order of $S$. Note that
to any support set $S$ of $G$ we can associate a closed curve of ${\mathbb P}^2$
meeting $G$ in $S$, and encountering the vertices of $S$ in their
circular order. We denote such a curve by $\rho(S)$. Conversely, to
any curve $\rho$ meeting $G$ in a subset $S\subseteq V$ we can
associate a support set $S(\rho)$ whose circular order coincides with
the order in which $\rho$ visits the vertices of $S$.
Given a support set $S$ of $G$, and two consecutive vertices
$v_i,v_{i+1}$ of $S$ that are opposite, a \emph{shift} in $S$ at
$(v_i,v_{i+1})$ is the support set obtained from $S$ by adding a
common neighbor $u_i$ of $v_i$ and $v_{i+1}$ between $v_i$ and
$v_{i+1}$ in the support set ($u_i$ may be chosen arbitrarily among
the common neighbors of $v_i$ and $v_{i+1}$ in $G$). Observe that
replacing a pair of opposite vertices by two pairs of adjacent
vertices does not change the parity of the support set. Moreover, if
$S'$ is obtained from $S$ by a shift, then $\rho(S)$ and $\rho(S')$
are homologous. For convenience, we record this observation as a lemma.
\begin{lemma}\label{lem:shift}
Let $G$ be a non-bipartite quadrangulation of ${\mathbb P}^2$ and $S$ a
support set of $G$. If a support set $S'$ is obtained from $S$ by a
sequence of shifts, then $S$ and $S'$ have the same parity and
$\rho(S)$ and $\rho(S')$ are homologous.
\end{lemma}
Recall that a closed walk
in a non-bipartite quadrangulation of ${\mathbb P}^2$ is odd if and only if it
is non-contractible. The next lemma shows a similar result for
support sets.
\begin{lemma}\label{lem:parity}
If $G$ is a non-bipartite quadrangulation of ${\mathbb P}^2$ and $S$ a
support set of $G$, then $S$ is odd if and only if $\rho(S)$ is
non-contractible. In particular, if $S$ is odd, then $G-S$ is
bipartite.
\end{lemma}
\begin{proof}
For any two consecutive vertices $v_i$ and $v_{i+1}$ of $S$ that are
opposite, we make a shift at $(v_i,v_{i+1})$. Let $S'$ be the support
set thus obtained. By Lemma~\ref{lem:shift}, $S$ and $S'$ have the
same parity and $\rho(S)$ and $\rho(S')$ are homologous. By the
definition of $S'$, any two consecutive vertices are adjacent. It
follows that $S'$ is a closed walk in $G$. Since a closed walk in a
non-bipartite quadrangulation of ${\mathbb P}^2$ is odd if and only if it is
non-contractible, $S$ is odd if and only if $\rho(S)$ is
non-contractible.
Assume now that $S$ is odd, and so $\rho(S)$ is non-contractible. Then
the removal of $S$ yields a graph embedded in the plane, with all inner
faces bounded by an even number of edges. Therefore, $G-S$ is bipartite.
\end{proof}
\begin{lemma}\label{lem:shortest}
Let $G$ be a non-bipartite quadrangulation of ${\mathbb P}^2$ and $S$ an odd
support set of $G$. Then there is an odd support set $S' \subseteq S$,
with order at most the order of $S$, such that the vertices of $S'$
are pairwise distinct, and two vertices of $S'$ are adjacent or
opposite if and only if they are consecutive.
\end{lemma}
\begin{proof}
Let $S'\subseteq S$ be an odd support set of order at most the order
of $S$, and (with respect to these properties) with minimum
size. Assume first that some vertex appears at least twice in
$S'$. Then we can divide $S'$ into two support sets of different
parities. In particular there is an odd support set $S''\subseteq S'$,
of order at most the order of $S'$, and of size less than the size of
$S'$. This contradicts the minimality of $S'$.
Assume now that two non-consecutive vertices $u$ and $v$ of $S'$ are
opposite or adjacent. Again, we can divide $S'$ into two support sets
(where $u$ and $v$ are now consecutive), of order at most the order of
$S'$ plus one. Since $S'$ is odd, the two support sets have
different parities, so one of them is odd (and therefore has order at
most the order of $S'$), which contradicts the minimality of
$S'$.
\end{proof}
The same proof also gives the following similar result, which will be needed
later.
\begin{lemma}\label{lem:shortest2}
Let $G$ be a non-bipartite quadrangulation of ${\mathbb P}^2$ and $S$ an odd
support set of $G$ corresponding to a non-contractible closed walk in $G$. Then $G$
contains an odd cycle with vertex set $S' \subseteq S$.
\end{lemma}
\section{Short odd cycles}
\label{sec:oddcycle}
A key result is the following corollary of a theorem of Lins~\cite{Lin81}
(see also~\cite[Corollary 2.4]{ArcBon97}).
\begin{lemma}\label{lem:lins}
Let $G=(V,E)$ be a graph embedded in the projective plane, let $\ell$ be
the length of a shortest non-contractible cycle of $G$, and let $\ell^*$
be the length of a shortest non-contractible cycle of its dual graph
$G^*$. Then $|E|\ge \ell \cdot \ell^*$.
\end{lemma}
We now show that the following result is a fairly simple consequence of
Lemma~\ref{lem:lins}.
\begin{theorem}\label{thm:oddcycle}
Let $G$ be a non-bipartite projective quadrangulation on $n$
vertices. Then $G$ contains an odd cycle of length at most $\tfrac12(1+\sqrt{8n-7})$.
\end{theorem}
\begin{proof}
A standard application of Euler's formula shows that $G$ has
$m=2n-2$ edges. Let
$\ell$ be the length of a shortest odd cycle in $G$. By
Lemma~\ref{lem:lins}, we know that the dual graph $G^*$ of $G$
contains a non-contractible cycle of length $\ell^* \le
(2n-2)/\ell$. Let $C^*$ be such a cycle, and let
$(f_1,f_2,\ldots,f_k)$ be the faces of $G$ corresponding to the
vertices of $C^*$. Note that any two consecutive faces $f_i,f_{i+1}$
in $C^*$ share an edge, which we call $e_i$. For any $1\le i \le
k+1$ (where the indices $1$ and $k+1$ coincide), we will choose a vertex $v_i$
in each edge $e_i$, in a specific way. We start by choosing $v_1$ in
$e_1$ arbitrarily, and for any $i>1$ we distinguish two cases. If
$v_{i-1} \in e_i$, then we set $v_i=v_{i-1}$, and otherwise we
choose for $v_i$ an endpoint of $e_i$ adjacent to $v_{i-1}$ on the face $f_i$ (note that
such a vertex always exists). Let $S$ be the set of vertices thus
chosen (each maximal sequence of consecutive vertices
$v_i,v_{i+1},\ldots,v_{j}$ such that $v_i=v_{i+1}=\cdots =v_{j}$ is
reduced to a single vertex $v_i$).
Any two consecutive vertices are
adjacent, so we obtain a closed walk in $G$ of length at most
$\ell^*+1$ that is homotopic to $C^*$, and therefore non-contractible.
It follows that $S$ is an odd support set where any
two consecutive vertices are adjacent. By Lemma~\ref{lem:shortest2},
$G$ contains an odd cycle of length at most $\ell^*+1\le
1+(2n-2)/\ell$. Since $\ell$ is the length of a shortest odd cycle of
$G$, this gives $\ell\le 1+(2n-2)/\ell$, that is, $\ell^2-\ell \le 2n-2$, so
$\ell\le \tfrac12(1+\sqrt{8n-7})$, as desired.
\end{proof}
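For completeness, we recall the standard Euler formula computation used at the beginning of the proof. Since $G$ is embedded in ${\mathbb P}^2$, we have $n-m+f=1$, where $f$ is the number of faces of $G$; since every face is bounded by four edges and every edge lies on two faces, $4f=2m$. Substituting $f=m/2$ yields
\[
n-m+\tfrac{m}{2}=1, \qquad\text{that is,}\qquad m=2n-2.
\]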
A \emph{$k$-separation} in a graph $G$ is a pair $(G_1,G_2)$ of
subgraphs of $G$ such that $V(G) = V(G_1) \cup V(G_2)$, $|V(G_1)\cap
V(G_2)|=k$, $E(G_1)\cap E(G_2)=\emptyset$, and $E(G_i)\cup V(G_i -
G_{3-i})\neq \emptyset$ for $i=1,2$. A graph $G$ is said to be
\emph{internally $4$-connected} if $G$ is 3-connected and for every
3-separation $(G_1, G_2)$ in $G$, $|G_1|\le 4$ or $|G_2|\le 4$.
The following characterisation of graphs without two vertex-disjoint
odd cycles will play a key role in our proofs. It was first proved by
Lov\'asz using Seymour's characterisation of regular matroids (see~\cite{Sey95}).
A simpler proof was recently given by Kawarabayashi and Ozeki~\cite{KawOze13}.
\begin{theorem}
\label{thm:disjoint-cycle}
Let $G$ be an internally $4$-connected graph. Then $G$ has no two
vertex-disjoint odd cycles if and only if $G$ satisfies one of the
following conditions:
\begin{enumerate}[{\normalfont (i)}]
\item $G-v$ is bipartite, for some $v \in V(G)$;
\item $G-\{e_1,e_2,e_3\}$ is bipartite for some edges $e_1, e_2, e_3 \in E(G)$
such that $e_1, e_2, e_3$ form a triangle;
\item $|V(G)| \leq 5$;
\item $G$ can be embedded into the projective plane so that every face boundary
has even length.
\end{enumerate}
\end{theorem}
In order to extend Theorem~\ref{thm:oddcycle} to
$4$-chromatic graphs without two vertex-disjoint odd cycles, we will
need the following technical lemma about precoloring
extension in bipartite graphs.
\begin{lemma}\label{lem:ext}
Let $G$ be a bipartite graph and $X$ be a subset of $V(G)$ of size
at most $3$. Then any (proper) precoloring of $X$
extends to a $3$-coloring of $G$, unless
\begin{enumerate}[{\normalfont (i)}]
\item $X =\{x,y,z\}$ for distinct $x,y,z$,
\item $x$, $y$, and $z$ are on the same side of the bipartition of $G$,
\item in the precoloring, $x$, $y$ and $z$ have pairwise different colors, and
\item any pair of vertices in $\{x,y,z\}$ have a common neighbor in $G$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $A,B$ be the bipartition of $G$. If in the precoloring of $X$, $A$
contains at most two different colors, then we color each vertex of $A-X$ with one of these two
colors, and each vertex of $B-X$ with the third color. This yields a
(proper) 3-coloring of $G$. Otherwise, by
symmetry, $X=\{x,y,z\}\subseteq A$ and $x,y,z$ have distinct colors; this
proves conditions (i)--(iii) of the lemma.
Assume that $x$ and $y$ have no common neighbor
in $G$ (and thus in $B$), and $x,y,z$ are colored $1,2,3$
respectively. Then we color each vertex of $A-X$ with color 3, each
neighbor of $x$ with color 2, and the remaining vertices of $B$ with
color 1. Since $x$ and $y$ have no common neighbor, this is a proper
3-coloring of $G$ extending the precoloring of $X$.
\end{proof}
A $k$-chromatic graph $G$ is said to be \emph{$k$-vertex-critical} if for any vertex $v$,
$G-v$ is $(k-1)$-colorable. A graph is \emph{projective} if it can be embedded in ${\mathbb P}^2$.
We will need the following direct consequence of a result of Gimbel and
Thomassen~\cite[Theorem 5.4]{GimTho97}:
\begin{lemma}\label{lem:GT}
Let $G$ be a simple triangle-free projective graph. If $G$ is $4$-vertex-critical,
then $G$ is a (non-bipartite) projective quadrangulation.
\end{lemma}
\begin{proof}
Since $G$ is a $4$-chromatic triangle-free projective graph, $G$ contains
a non-bipartite projective quadrangulation $H$ as a subgraph by a result
of Gimbel and Thomassen~\cite[Theorem 5.4]{GimTho97}. Since $H$ is itself
$4$-chromatic and $G$ is vertex-critical, $H$ must be a spanning
subgraph of $G$. If $H$ is a proper subgraph of $G$, then there is an edge
$e \in E(G) \setminus E(H)$. Both end vertices of $e$ must lie on the
boundary of the same face in some embedding of $H$ in ${\mathbb P}^2$, so
adding $e$ to $H$ creates a triangle or a pair of parallel edges, contradicting
the hypothesis of the lemma. Therefore $G=H$, so $G$ is a non-bipartite
projective quadrangulation.
\end{proof}
We now extend Theorem~\ref{thm:oddcycle} to
$4$-chromatic graphs without two vertex-disjoint odd cycles.
\begin{proof}[Proof of Theorem~\ref{thm:oddcycle2}]
We prove the result by induction on $n$. We can assume that $G$ is
$4$-vertex-critical. In
particular, $G$ is connected and does not contain a clique cutset (a
clique whose removal disconnects the graph). We may assume that $G$
has at least six vertices (otherwise $G$ has four or five vertices
and contains a triangle and the result clearly holds). As a
consequence, we may also assume that $G$ is triangle-free, since
otherwise $G$ has an odd cycle of length
$3\le \tfrac12(1+\sqrt{8n-7})$.
\smallskip
Assume first that $G$ is internally $4$-connected. In this case we can
apply Theorem~\ref{thm:disjoint-cycle}. As $G$ is $4$-chromatic, triangle-free, and
has at least six vertices, none of cases (i)--(iii) applies. It follows
that $G$ can be embedded into the projective plane. Since $G$ is
$4$-vertex-critical and triangle-free, by Lemma~\ref{lem:GT} it
is a non-bipartite projective quadrangulation and the result follows
directly from Theorem~\ref{thm:oddcycle}.
\smallskip
Assume now that $G$ is not internally $4$-connected. Since $G$ has no
clique-cutset, it is $2$-connected. Hence there exist graphs
$G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$, and sets $X_i \subseteq V_i$ of two or
three vertices ($i=1,2$) with $|X_1|=|X_2|$, such that $G_1[X_1]$ and
$G_2[X_2]$ are equal (as labelled graphs), and $G$ can be obtained
from $G_1,G_2$ by identifying $X_1$ in $G_1$ and $X_2$ in $G_2$ (call
$X$ the corresponding set of vertices in $G$, inducing the same
graph as $G_1[X_1]$ and $G_2[X_2]$). Moreover, if
$|X_1|=|X_2|=3$, then for $i=1,2$, $V_i - X_i$ contains at least two
vertices (this follows from the definition of internal
$4$-connectivity). Note that since $G$ is triangle-free, $X$ is
bipartite. In what follows, by a slight abuse of notation, we give
the same name to a vertex of $X_1$, the vertex of $X_2$ it is
identified with, and the resulting vertex of $X$.
Assume that $|X|=2$, say $X=\{x,y\}$. Since
$G$ has no clique-cutset, $x$ and $y$ are non-adjacent. If $G$
contains an odd cycle disjoint from $X$ (say in $G_2- X_2$), then since $G$ has no two
vertex-disjoint odd cycles, $G_1$ is bipartite. Since $G$ is
$4$-vertex-critical, $G_2$ is 3-colorable and by Lemma~\ref{lem:ext} any
$3$-coloring of $G_2$ extends to $G_1$, a contradiction. It follows that every odd cycle of $G$ intersects
$X$, so $G-X$ is bipartite. As $X$ is a stable set, $G$ is
$3$-colorable, which is a contradiction.
We can now assume that $|X|=3$, say $X=\{x,y,z\}$. Assume first that
$X$ is a stable set. If all odd cycles of $G$ intersect $X$, then
$G-X$ is bipartite and since $X$ is stable, $G$ is $3$-colorable, a
contradiction. Otherwise $G$ contains an odd cycle disjoint from $X$
(say in $G_2- X_2$). Then $G_1$ is bipartite, and by
Lemma~\ref{lem:ext}, $x,y,z$ are on the same side of the bipartition of
$X$, and in each $3$-coloring of $G_2$ they have three distinct
colors. Moreover, each pair of vertices among $x,y,z$ has a common
neighbor in $G_1$. Let $H$ be the graph obtained from $G_2$ by adding
a vertex $v$ adjacent to $x,y,z$. Since $G_1-X_1$ contains at least
two vertices, $H$ has fewer vertices than $G$. Note that $H$ is
$4$-chromatic (since for any $3$-coloring of $H-v$, the neighbors of $v$
have three distinct colors), and has no two vertex-disjoint odd cycles:
each odd cycle $C$ of $H$ is either disjoint from $v$, and is
therefore an odd cycle of $G$, or intersects $X$ in two vertices, say
$x$ and $y$, and corresponds to an odd cycle of $G$ coinciding with
$C$ in $G_2$ and whose intersection with $G_1-X$ is a single common
neighbor of $x$ and $y$ in $G_1$ (which is known to exist). By the
induction hypothesis, $H$ (and therefore $G$) has an odd cycle of
length at most $\tfrac12(1+\sqrt{8n-7})$.
Since $G$ is triangle-free, we can assume that $G[X]$ has two
non-adjacent vertices, say $y,z$, while $x$ is adjacent to at least one
of them, say $y$. By Lemma~\ref{lem:ext}, none of $G_1,G_2$ is
bipartite and in particular, every odd cycle intersects $X$. Note that
$G-\{y,z\}$ is not bipartite (since otherwise $G$ would be
$3$-colorable), so we can assume that $G_2$ has an odd cycle $C_2$
containing $x$ and avoiding $y,z$. Therefore every odd cycle of
$G_1$ contains $x$, so $G_1-x$ is bipartite, say with bipartition
$A,B$. Take a $3$-coloring $c$ of $G_2$, and assume without loss of generality that $x$ has color
$1$. If $y,z$ are both in $A$ and $x,y,z$ do not have pairwise distinct
colors, then $c$ easily extends to a $3$-coloring of $G_1$. Similarly,
if $y,z$ have distinct colors and are in distinct partite sets, then
$c$ easily extends to a $3$-coloring of $G_1$. So we can assume that
either
\begin{enumerate}
\item $y,z \in A$, $y$ has color $2$, and $z$ has color $3$, or
\item $y\in A$, $z\in B$, and $y,z$ are both colored $2$.
\end{enumerate}
Let $B_x$ be the set of
neighbors of $x$ in $B$. We color $A-y$ with color $3$, $B_x$ with
color $2$, and $B-(B_x\cup X)$ with color $1$. Since $G$ is triangle-free
there are no edges between $y$ and $B_x$, so the resulting $3$-coloring
of $G$ is proper, which contradicts the fact that $G$ is
$4$-chromatic. This concludes the proof of Theorem~\ref{thm:oddcycle2}.
\end{proof}
\section{Small odd cycle transversals}
\label{sec:OCT}
A (multi)graph $G$ embedded in a surface $\Sigma$ is \emph{minimal of
face-width $k$} if the face-width of $G$ is $k$, while for any edge
$e$ of $G$, the face-width of $G/e$ (the (multi)graph embedded in $\Sigma$
obtained from $G$ by contracting $e$) and the face-width of $G-e$ are
less than $k$.
We will use the following result of Randby~\cite{Ran97}:
\begin{theorem}\label{thm:ran97}
For any integer $k$, if a multigraph embedded in ${\mathbb P}^2$ is minimal of
face-width $k$, then it contains exactly $2k^2-k$ edges.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:OCT-upper}]
Let $G$ be a non-bipartite quadrangulation on $n$ vertices of the
projective plane ${\mathbb P}^2$, and let $k$ be the face-width of $G$. By
Euler's formula, $G$ has $m=2n-2$ edges. If $G$ is not minimal with
face-width $k$, we delete or contract edges of $G$ until we obtain a
(multi)graph $H$ that is minimal with face-width $k$. Note that $H$
has at most $m=2n-2$ edges by construction, and exactly $2k^2-k$ edges
by Theorem~\ref{thm:ran97}. It follows that $2k^2-k\le 2n-2$ and so
$k\le \tfrac14+\sqrt{n-\tfrac{15}{16}}$, as desired.
\end{proof}
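For completeness, the quadratic inequality $2k^2-k\le 2n-2$ in the last step is solved as follows:
\[
k \le \frac{1+\sqrt{1+8(2n-2)}}{4} = \frac{1+\sqrt{16n-15}}{4}
= \frac14+\sqrt{n-\frac{15}{16}}.
\]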
We believe that Theorem~\ref{thm:OCT-upper} can be extended to
$4$-chromatic graphs with no two vertex-disjoint odd cycles, in the same
way Theorem~\ref{thm:oddcycle2} extends
Theorem~\ref{thm:oddcycle}. However, we have only been able to prove
that $4$-vertex-critical graphs with no two vertex-disjoint odd cycles satisfy
the result.
\begin{theorem}
\label{thm:OCT-upper2}
Let $G$ be a $4$-vertex-critical graph on $n$ vertices without two
vertex-disjoint odd cycles. Then $G$ has an odd cycle transversal of
cardinality at most $\tfrac14+\sqrt{n-\tfrac{15}{16}}$.
\end{theorem}
\begin{proof}
The proof proceeds by induction on $n$. We can assume that $G$ has
at least nine vertices (by checking small $4$-vertex-critical graphs, for
instance using~\cite{CGSZ15}) and thus no odd cycle transversal on
at most three vertices (in particular, $G$ is triangle-free). If $G$ is
not internally $4$-connected then there exist graphs $G_1=(V_1,E_1)$
and $G_2=(V_2,E_2)$, and sets $X_i \subseteq V_i$ with at most three vertices
($i=1,2$) with $|X_1|=|X_2|$, such that $G_1[X_1]$ and $G_2[X_2]$
are equal (as labelled graphs), and $G$ can be obtained from
$G_1,G_2$ by identifying $X_1$ in $G_1$ and $X_2$ in $G_2$ (let $X$
be the corresponding set of vertices in $G$, inducing the same graph as
$G_1[X_1]$ and $G_2[X_2]$). Moreover, $G_1,G_2$ have the property
that if $|X_1|=|X_2|=3$, then for $i=1,2$, $V_i - X_i$ contains at
least two vertices.
If every odd cycle of $G$ intersects $X$, then $X$ is an odd cycle
transversal of size at most 3, which is a contradiction. It follows
that $G$ contains an odd cycle disjoint from $X$ (say in
$G_2-X_2$). Since any two odd cycles intersect, $G_1$ is bipartite.
Recall that $G$ is $4$-vertex-critical,
so $G_2$ is 3-colorable, and since no 3-coloring of $G_2$ extends to
$G_1$ (otherwise $G$ would be 3-colorable), by
Lemma~\ref{lem:ext}, we have (i) $X =\{x_1,x_2,x_3\}$ for distinct
$x_1,x_2,x_3$, (ii) $x_1$, $x_2$, and $x_3$ are on the same side of
the bipartition of $G_1$, (iii) in any 3-coloring of $G_2$, $x_1$,
$x_2$ and $x_3$ have pairwise different colors, and (iv) any pair of
vertices in $\{x_1,x_2,x_3\}$ have a common neighbor in $G_1$. If
there is a vertex $y$ in $G_1$, adjacent to each of $x_1,x_2,x_3$,
then $G-(V_1-(X\cup\{y\}))$ is $4$-chromatic, which contradicts the fact that
$G$ is $4$-vertex-critical (since $G_1-X_1$ contains at least
two vertices). It follows that no vertex of $G_1$ is adjacent to each
of $x_1,x_2,x_3$. So there is a set $Y=\{y_1,y_2,y_3\}$ of three vertices in $G_1$, such
that $y_1$ is adjacent to $x_2$ and $x_3$, $y_2$ is adjacent to $x_1$
and $x_3$, and $y_3$ is adjacent to $x_1$ and $x_2$. Since a
$4$-vertex-critical graph has minimum degree at least 3, $G_1$ contains at
least seven vertices. If $G_1$ has at least eight vertices, then remove
from $G$ all the vertices of $V_1-(X\cup Y)$, and add a vertex $z$
adjacent to $y_1,y_2,y_3$. The resulting graph $H$
is $4$-vertex-critical, has no two vertex-disjoint odd cycles, and is smaller
than $G$. By the induction hypothesis, $H$ has an odd cycle transversal $T$ with at
most $\tfrac14+\sqrt{n-\tfrac{15}{16}}$ vertices. Note that
$z$ does not appear in a minimum odd cycle transversal of $H$, so we
can assume that $z\not\in T$. It is easy to check that $T$ is
also an odd cycle transversal of $G$.
\smallskip
\begin{figure}[htbp]
\centering \includegraphics[scale=1]{7vg}
\caption{A 7-vertex bipartite graph.} \label{fig:7vg}
\end{figure}
By the above paragraph, we can assume that for any decomposition of
$G$ into $G_1$ and $G_2$ as above, on some set $X$ of at most three
vertices, $G_1$ is the seven-vertex graph depicted in Figure~\ref{fig:7vg}. In
particular, $X$ induces a stable set of size $3$. Moreover, it is not hard to
check that no vertex cutset of $G$ of size at most $3$ intersects $G_1-X$,
otherwise $G$ would contain a vertex cutset of size at most $2$. It
follows that $G$ can be constructed from some graph $G_0$ and a family
$t_1,t_2,\ldots,t_k$ of triples of vertices of $G_0$ by pasting the $X$-part
of a copy $G_i$ of the graph of Figure~\ref{fig:7vg} onto each triple $t_i$;
see Figure~\ref{fig:expl}, top left. Let $H$ be the graph obtained from
$G_0$ by adding, for each $1\le i \le k$, a vertex $z_i$ adjacent to
the vertices of $t_i$; see Figure~\ref{fig:expl},
bottom left. Observe that $H$ is $4$-vertex-critical (otherwise $G$
would be $3$-colorable), internally $4$-connected, and any two odd cycles
intersect. Note also that $H$ is triangle-free, since otherwise $G$
would also contain a triangle. By Theorem~\ref{thm:disjoint-cycle}, $H$ has an embedding
in the projective plane, and by Lemma~\ref{lem:GT}, $H$ is a non-bipartite
quadrangulation of the projective plane. We now replace each $z_i$ by
$G_i-X$ (see Figure~\ref{fig:expl}, right) and observe that $G$ is itself a quadrangulation
of the projective plane. By Theorem~\ref{thm:OCT-upper}, $G$ has an
odd cycle transversal with at most $\tfrac14+\sqrt{n-\tfrac{15}{16}}$ vertices, which concludes the proof.
\end{proof}
\begin{figure}[htbp]
\centering \includegraphics[scale=1]{expl}
\caption{Graphs $G$, $H$, and their representations as projective quadrangulations.} \label{fig:expl}
\end{figure}
\medskip
We now prove that the bound in Theorem~\ref{thm:OCT-upper} is at most $\tfrac14$
away from the optimum.
For a positive integer $k$, let $[k]$ denote the set
$\{0,\ldots,k-1\}$. Let $P_k$ be a path with vertex set $[k]$, with
vertices in the increasing order along $P_k$.
For $k\geq 2$, we define $G_k$ as the graph obtained from the
Cartesian product $P_k \Box P_k$ by adding the edges joining $(0,j)$
to $(k-1,k-j-1)$, and those joining $(j,0)$ to $(k-j-1,k-1)$. The
graphs $G_k$ embed as quadrangulations in ${\mathbb P}^2$; see \Cref{fig:grids}
for embeddings of $G_2$, $G_3$ and $G_4$ in ${\mathbb P}^2$.
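As a sanity check, observe that $G_k$ has $n=k^2$ vertices and
\[
2k(k-1)+(2k-2)=2k^2-2=2n-2
\]
edges: the Cartesian product $P_k \Box P_k$ contributes $2k(k-1)$ edges, while the two families of added edges contribute only $2k-2$ further edges, since the edge joining $(0,0)$ to $(k-1,k-1)$ and the edge joining $(0,k-1)$ to $(k-1,0)$ belong to both families. This matches the edge count $m=2n-2$ of an $n$-vertex quadrangulation of ${\mathbb P}^2$.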
\begin{figure}[ht]
\centering
\begin{tikzgraph}[scale=1.2,thin]
\def3{1.5}
\def1.5{0.5}
\draw[dashed] (0,0) circle (3);
\foreach\i in {1,...,4}
{
\path (45+90*\i:3) coordinate (c\i);
}
\draw (-1.5,1.5)--(-1.5,-1.5)
(-1.5,-1.5)--(1.5,-1.5)
(1.5,-1.5)--(1.5,1.5)
(1.5,1.5)--(-1.5,1.5)
(c1)--(-1.5,1.5)
(c2)--(-1.5,-1.5)
(c3)--(1.5,-1.5)
(c4)--(1.5,1.5);
\foreach \i in {-1,1}
{
\foreach \j in {-1,1}
{
\draw ({\i/2},{\j/2}) node[vertex] {};
}
}
\end{tikzgraph}
\hfil
\begin{tikzgraph}[scale=0.75,thin]
\def3{2.4}
\def1.5{1}
\def0{0}
\draw[dashed] (0,0) circle (3);
\foreach\i in {-1.5,...,1.5}
{
\path (\i,{sqrt(3^2-abs(\i^2))}) coordinate (n\i)
(\i,-{sqrt(3^2-abs(\i^2))}) coordinate (s\i)
(-{sqrt(3^2-abs(\i^2))},\i) coordinate (e\i)
({sqrt(3^2-abs(\i^2))},\i) coordinate (w\i);
}
\foreach\i in {1,...,4}
{
\path (45+90*\i:3) coordinate (c\i);
}
\foreach\i in {0}
{
\draw (n\i)--(s\i)
(e\i)--(w\i);
}
\draw (-1.5,1.5)--(-1.5,-1.5)
(-1.5,-1.5)--(1.5,-1.5)
(1.5,-1.5)--(1.5,1.5)
(1.5,1.5)--(-1.5,1.5)
(c1)--(-1.5,1.5)
(c2)--(-1.5,-1.5)
(c3)--(1.5,-1.5)
(c4)--(1.5,1.5);
\foreach \i in {-1.5,...,1.5}
{
\foreach \j in {-1.5,...,1.5}
{
\draw (\i,\j) node[vertex] {};
}
}
\end{tikzgraph}
\hfil
\begin{tikzgraph}[scale=0.6,thin]
\def3{3}
\def1.5{1.5}
\draw[dashed] (0,0) circle (3);
\foreach\i in {-3,-1,1,3}
{
\path ({\i/2},{sqrt(3^2-abs((\i/2)^2))}) coordinate (n\i)
({\i/2},-{sqrt(3^2-abs((\i/2)^2))}) coordinate (s\i)
(-{sqrt(3^2-abs((\i/2)^2))},{\i/2}) coordinate (e\i)
({sqrt(3^2-abs((\i/2)^2))},{\i/2}) coordinate (w\i);
}
\foreach\i in {1,...,4}
{
\path (45+90*\i:3) coordinate (c\i);
}
\foreach\i in {-1,1}
{
\draw (n\i)--(s\i)
(e\i)--(w\i);
}
\draw (-1.5,1.5)--(-1.5,-1.5)
(-1.5,-1.5)--(1.5,-1.5)
(1.5,-1.5)--(1.5,1.5)
(1.5,1.5)--(-1.5,1.5)
(c1)--(-1.5,1.5)
(c2)--(-1.5,-1.5)
(c3)--(1.5,-1.5)
(c4)--(1.5,1.5);
\foreach \i in {-3,-1,1,3}
{
\foreach \j in {-3,-1,1,3}
{
\draw ({\i/2},{\j/2}) node[vertex] {};
}
}
\end{tikzgraph}
\caption{The graphs $G_2$, $G_3$ and $G_4$ embedded in the projective plane ${\mathbb P}^2$.}
\label{fig:grids}
\end{figure}
\begin{proof}[Proof of \Cref{thm:OCT-lower}]
The argument is quite similar to that given in~\cite{Ree99}
for \emph{Escher walls}, a related construction of Lov\'asz and
Schrijver. Assume for the sake of contradiction that $G_k$ has an odd
cycle transversal $T$ of size at most $k-1$. Then for some $\ell \in
[k]$, $T$ is disjoint from the $\ell$-th row $R_\ell$ of $G_k$ (all
the vertices $(i,\ell)$, with $i \in [k]$). Consider, for each $i \in
[k]$, the set of vertices $L_i=\{(i,j)\,|\,j\in [\ell]\}\cup
\{(k-i-1,k-j-1)\,|\,j\in [k-\ell-1]\}$. Note that the sets $L_i$, $i
\in [k]$, are vertex-disjoint, so $T$ is disjoint from one of them, say
$L_m$. It follows that the set of vertices $C=L_m \cup \{(i,\ell)\,|\,
m \le i \le k-m-1\}\subseteq L_m \cup R_\ell$, is disjoint from $T$.
Observe that $C$ contains an odd cycle, which contradicts the fact
that $T$ was an odd cycle transversal.
\end{proof}
\Cref{thm:OCT-upper} immediately implies the following almost optimal lower bound on
the independence number of projective quadrangulations, which may be
new.
\begin{corollary}
Let $G$ be a non-bipartite projective quadrangulation on $n$
vertices. Then $\alpha(G) \geq \tfrac12\left(n-\tfrac14-\sqrt{n-15/16}\right)$.
Moreover, the graph $G_k$ satisfies
$\alpha(G_k) = \frac n2-\frac{\sqrt{n}}2$.
\end{corollary}
\begin{proof}
By \Cref{thm:OCT-upper} $G$ has an odd cycle transversal $S$ such
that $|S|\le \tfrac14+\sqrt{n-15/16}$. The graph $G-S$ is bipartite
on more than $n-\tfrac14-\sqrt{n-15/16}$ vertices, so at least one
color class of $G-S$ has more than
$\tfrac12\left(n-\tfrac14-\sqrt{n-15/16}\right)$ vertices.
We now focus on $G_k$. Since it contains an odd cycle transversal
$T$ of $k=\sqrt{n}$ vertices (for instance, take $T=\{(i,i)\,|\, i
\in [k]\}$), it also contains a stable set of size
$\tfrac12(n-\sqrt{n})$. We now prove that this bound is tight. For
the sake of contradiction, assume that there is a stable set $S$
of size more than $\frac n2-\frac{\sqrt{n}}2=\frac12
(k^2-k)$. Assume first that $k$ is even, and for $i\in [k/2]$,
let $K_i=R_i \cup R_{k-1-i}$. Recall that the $\ell$-th row
$R_\ell$ of $G_k$ consists of the vertices $(i,\ell)$, with $i \in
[k]$. Observe that each $K_i$ contains a cycle of length $2k$, and
since $S$ is a stable set, $|S \cap K_i| \le k$. If $|S\cap K_i| \le k-1$
for every $i\in [k/2]$, then $S$ contains at most
$\tfrac{k}2(k-1)=\frac12 (k^2-k)$ vertices, which is a contradiction.
Therefore $|S \cap K_i| = k$ for some index $i\in [k/2]$. As $(0,i)$
and $(k-1,k-1-i)$ are adjacent it follows that for each $j\in [k]$,
$(j,i)\in S$ if and only if $(j,k-1-i) \in S$. Since $k$ is even,
we also have that for each $j\in [k]$, $(j,i)\in S$ if and only if
$(k-1-j,i)\not\in S$.
Let $C_\ell$ be the $\ell$-th column of $G_k$, i.e., the vertices
$(\ell,j)$ with $j \in [k]$. By the same argument as above, there
exists an index $j$ such that $|S\cap L_j|=k$, where $L_j=C_j \cup C_{k-1-j}$.
As before, we have $(j,i)\in S$ if and only if $(j,k-1-i) \not\in S$,
which contradicts the previous paragraph.
The proof of the case when $k$ is odd is quite similar and we therefore
omit it. The only difference is that in this case the middle row
$R_{\lfloor k/2\rfloor}$ and the middle column $C_{\lfloor k/2\rfloor}$ each
induce a cycle on $k$ vertices, which therefore contains at most
$\tfrac12(k-1)$ vertices of $S$.
\end{proof}
\section{Almost independent odd cycle transversals}\label{sec:jap}
Recall that by Theorem~\ref{thm:oddcycle}, every
non-bipartite projective quadrangulation contains an odd cycle of
length $O(\sqrt{n})$. This odd cycle is also an odd cycle
transversal, and by increasing its size by at most one, we can even make
sure that it induces a \emph{proper} subgraph of an odd cycle, i.e.,
a union of paths (in particular, a bipartite graph). Therefore,
every non-bipartite projective quadrangulation can be properly colored with colors
$1,2,3,4$ in such a way that only $O(\sqrt{n})$ vertices are colored 1
or 2.
Nakamoto and Ozeki (private communication) have asked
whether the $4$-coloring might be chosen in such a way that one color class has
size $1$ and another has size $o(n)$. We now give an
affirmative answer to their question for graphs with sublinear maximum
degree.
We start with a lemma, whose proof is quite similar to that of
Theorem~\ref{thm:oddcycle}. In what follows, the neighborhood of a vertex $v$ is denoted by $N(v)$.
\begin{lemma}\label{lem:neighborhood}
Let $G=(V,E)$ be a non-bipartite quadrangulation of ${\mathbb P}^2$ and $S$
an odd support set of $G$. Then there is a subset
$T \subseteq \bigcup_{v \in S} N(v)$
such that $G[T]$ contains a single edge, and $G-T$ is
bipartite.
\end{lemma}
\begin{proof}
Recall that the order of a support set $S$ of $G$ is the number of
pairs of consecutive vertices of $S$ that are adjacent in $G$.
We prove that there is a support set of order $1$ that is included in
$\bigcup_{i=1}^s N(v_i)$.
This support set can be obtained as follows. Choose a pair $v_i,v_{i+1}$ of
consecutive vertices of $S$ that are adjacent, and let
$v_j,v_{j+1}$ be the next pair of consecutive vertices that are
adjacent (possibly, $j=i+1$). Then all pairs of consecutive vertices
$v_k,v_{k+1}$, with $i<k<j$, are opposite. For each vertex $v_k$, $i<k\le j$,
let $s_k$ be a sequence of consecutive neighbors of $v_k$, in their circular order
around $v_k$, such that $s_{i+1}$ starts with $v_i$, $s_j$ ends with
$v_{j+1}$, and for any $i<k< j$, the last vertex of $s_k$ and the
first vertex of $s_{k+1}$ coincide (see Figure~\ref{fig:neighborhood}). We then replace the
sequence $v_i,v_{i+1},\ldots,v_{j+1}$ in $S$ by the concatenation of
the sequences $s_k$, for $i<k\le j$ (where the last vertex of each
sequence $s_k$ is identified with the first vertex of $s_{k+1}$). Note that
any two consecutive vertices in this new subsequence are opposite.
\smallskip
\begin{figure}[htbp]
\centering \includegraphics[scale=1]{neighborhood}
\caption{Obtaining an odd support set of order 1. Vertices of $S$ are
depicted with white dots and vertices of $T$ are depicted with white
squares.} \label{fig:neighborhood}
\end{figure}
We repeat the operation, starting at the next pair (after $v_{j+1}$)
of consecutive vertices that are adjacent, until only one such pair
remains. The support set thus obtained has order $1$ (and is therefore odd)
and is a subset of $\bigcup_{v\in S} N(v)$, as desired. By
Lemma~\ref{lem:shortest}, there is a support set $T\subseteq
\bigcup_{v\in S} N(v)$ of order $1$ that is not self-intersecting and such
that there is a unique pair of adjacent vertices in $T$, and these
two vertices are consecutive. It follows that $T$ induces a subgraph
with a unique edge, and by Lemma~\ref{lem:parity}, $G-T$ is
bipartite.
\end{proof}
We now turn to the proof of \Cref{thm:sqrtD}.
\begin{proof}[Proof of Theorem~\ref{thm:sqrtD}]
Let $C$ be a shortest odd cycle in $G$, and let $\ell$ be the length of
$C$. Assume first that $\ell\le \sqrt{2n/\Delta}$. Since an odd cycle
is an odd support set, it follows from Lemma~\ref{lem:neighborhood} that
the union of the neighborhoods of the vertices of $C$ contains a set
$T$ of vertices inducing a single edge, and such that $G-T$ is
bipartite. Since $G$ has maximum degree at most $\Delta$, $T$
contains at most $\ell \Delta\le \sqrt{2n \Delta}$ vertices, as
desired.
Assume now that $\ell\ge \sqrt{2n/\Delta}$. Since $G$ has
$2n-2$ edges, the dual graph $G^*$ of $G$ has a non-contractible cycle $C^*$ of length
less than $2n/\ell$ by Lemma~\ref{lem:lins}. Let
$(f_1,f_2,\ldots,f_k)$ be the faces of $G$ corresponding to the
vertices of $C^*$. Note that any two consecutive faces $f_i,f_{i+1}$ in
$C^*$ share an edge, call it $e_i$. For any edge $e_i$, we choose
one of the endpoints $v_i$ of $e_i$ as follows: we start by choosing
$v_1$ in $e_1$ arbitrarily, and for any $i>1$ we distinguish two
cases. If $v_{i-1} \in e_i$, then we set $v_i=v_{i-1}$, and otherwise
we choose for $v_i$ the vertex opposite to $v_{i-1}$ in $f_i$ (note
that in this case such a vertex is necessarily an endpoint of
$e_i$). Note that $S=(v_i\,|\,1\le i \le
k)$ is a support set in $G$, and $\rho(S)$ is homologous to
$C^*$, and thus non-contractible.
By Lemma~\ref{lem:parity}, $S$ is an odd support set (in
particular $v_k$ and $v_1$ are adjacent, since all the other pairs of
consecutive vertices of $S$ are opposite). By
Lemma~\ref{lem:shortest}, $G$ contains a set $S'$ of less than
$2n/\ell \le \sqrt{2n \Delta} $ vertices such that $G-S'$ is bipartite and $S'$ induces a
subgraph of $G$ with a single edge. This concludes the proof of
Theorem~\ref{thm:sqrtD}.
\end{proof}
Note that a subgraph with a single edge has a proper $2$-coloring such
that one of the color classes is a singleton. It follows that
$n$-vertex projective quadrangulations with
$\Delta=o(n)$ can be $4$-colored in such a way that one color class is a
singleton and another has size $o(n)$.
\section{Conclusion}\label{sec:conclusion}
We have seen that Theorem~\ref{thm:oddcycle2} is sharp for infinitely
many values of $n$. A natural question is whether the generalized
Mycielski graphs are the only extremal graphs.
As for the problem of finding a smallest odd cycle transversal, we
believe that Theorem~\ref{thm:OCT-upper} is not sharp and that the
right bound should be $\sqrt{n}$, which would be tight by
Theorem~\ref{thm:OCT-lower}.
It was proved by Tardif~\cite{Tar01} that the generalized Mycielski graphs have
\emph{fractional} chromatic number $2+o(\tfrac1{n})$, so a natural
question is whether the same holds for projective quadrangulations of
large edge-width. Note that it was proved by Goddyn~\cite{Goddyn}
(see also~\cite{DGMVZ05}), in the same spirit as the result of Youngs~\cite{You96}
on the chromatic number, that the \emph{circular} chromatic number of a projective
quadrangulation is either $2$ or $4$.
\bibliographystyle{plain}
| {
"timestamp": "2015-09-28T02:10:30",
"yymm": "1509",
"arxiv_id": "1509.07716",
"language": "en",
"url": "https://arxiv.org/abs/1509.07716",
"abstract": "We show that every $4$-chromatic graph on $n$ vertices, with no two vertex-disjoint odd cycles, has an odd cycle of length at most $\\tfrac12\\,(1+\\sqrt{8n-7})$. Let $G$ be a non-bipartite quadrangulation of the projective plane on $n$ vertices. Our result immediately implies that $G$ has edge-width at most $\\tfrac12\\,(1+\\sqrt{8n-7})$, which is sharp for infinitely many values of $n$. We also show that $G$ has face-width (equivalently, contains an odd cycle transversal of cardinality) at most $\\tfrac14(1+\\sqrt{16 n-15})$, which is a constant away from the optimal; we prove a lower bound of $\\sqrt{n}$. Finally, we show that $G$ has an odd cycle transversal of size at most $\\sqrt{2\\Delta n}$ inducing a single edge, where $\\Delta$ is the maximum degree. This last result partially answers a question of Nakamoto and Ozeki.",
"subjects": "Combinatorics (math.CO)",
"title": "The width of quadrangulations of the projective plane"
} |
https://arxiv.org/abs/2206.15452 | On residues of rounded shifted fractions with a common numerator | For any positive integer $n$ along with parameters $\alpha$ and $\nu$, we define and investigate $\alpha$-shifted, $\nu$-offset, floor sequences of length $n$. We find exact and asymptotic formulas for the number of integers in such a sequence that are in a particular congruence class. As we will see, these quantities are related to certain problems of counting lattice points contained in regions of the plane bounded by conic sections. We give specific examples for the number of lattice points contained in elliptical regions and make connections to a few well-known rings of integers, including the Gaussian integers and Eisenstein integers. | \section{Introduction}
For a fixed positive integer $n$, consider the integer sequence
\begin{equation}\label{seq:intro}
\Floor{\frac{n}{1}}, \Floor{\frac{n}{2}}, \Floor{\frac{n}{3}}, \dots, \Floor{\frac{n}{n}},
\end{equation}
where $\Floor{x}$ denotes the floor function. Among the $n$ terms in this sequence, how many are odd? At first glance, it may seem reasonable to expect the proportion of odd numbers in the sequence to be roughly half. For $n=10$, the sequence is $10, 5, 3, 2, 2, 1, 1, 1, 1, 1$, of which $7/10=70\%$ are odd. In general, terms in the second half of the sequence are all 1 and thus odd, implying the proportion of odd terms is always at least $50\%$. Looking at more data, this proportion appears to hover around $69\%$ as $n$ grows.
This leads to a few natural questions. Why $69\%$? What happens if we replace the floor function with the ceiling function? Or if we round to the nearest integer?
In order to answer these questions, we consider three integer sequences defined, for positive integers $n$, in terms of the floor, ceiling, and nearest integer rounding functions:
\begin{align*}
\Fseq_n &= \#\left\{k\in\Z : 1\le k\le n,\, \Floor{n/k}\text{ is odd}\right\},\\
\Cseq_n &= \#\left\{k\in\Z : 1\le k\le n,\, \Ceil{n/k}\text{ is odd}\right\},\\
\Rseq_n &= \#\left\{k\in\Z : 1\le k\le n,\, \nint{n/k}\text{ is odd}\right\},
\end{align*}
where $\Ceil{x}$ denotes the ceiling function and $\nint{x}$ the nearest integer rounding function\footnote{The definition of the nearest integer function can be ambiguous for half-integers. A common convention is to round to the nearest even integer. Since we are interested in parity, we instead choose to always round half-integers up. Hence, $\nint{2.5}=3$, $\nint{3.5}=4$, and so on. As we will see, the asymptotic formula for $\Rseq_n$ will be the same regardless of how we choose to round half-integers.}.
\begin{figure}
\centering
\begin{tabular}{c|rrrrrrrrrrrrrrrrrrrr}
$n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \hline
$\Fseq_n$ & 1 & 1 & 3 & 2 & 4 & 4 & 6 & 4 & 7 & 7 & 9 & 7 & 9 & 9 & 13 & 10 & 12 & 12 & 14 & 12 \\
$\Cseq_n$ & 1 & 1 & 2 & 1 & 3 & 2 & 3 & 2 & 5 & 3 & 4 & 3 & 6 & 5 & 6 & 3 & 7 & 6 & 7 & 6 \\
$\Rseq_n$ & 1 & 1 & 2 & 2 & 4 & 3 & 4 & 4 & 6 & 7 & 6 & 5 & 9 & 8 & 9 & 9 & 10 & 10 & 11 & 12
\end{tabular}
\caption{Terms in the sequences $\Fseq_n$, $\Cseq_n$, $\Rseq_n$ for $1\le n\le 20$}
\label{fig:first-20-terms}
\end{figure}
The first 20 terms of each of these three sequences can be found in Figure~\ref{fig:first-20-terms}. The sequences $\Fseq_n$ and $\Cseq_n$ appear in The On-Line Encyclopedia of Integer Sequences (\cite{OEIS}) as, respectively, sequences \href{https://oeis.org/A059851}{A059851} and \href{https://oeis.org/A330926}{A330926}.
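The three counts are cheap to reproduce by brute force. The following sketch (verification code written for this note, not part of the paper) recomputes the rows of Figure~\ref{fig:first-20-terms} using only integer arithmetic; since $\nint{x}=\Floor{x+1/2}$ with half-integers rounded up, $\nint{n/k}=\Floor{(2n+k)/(2k)}$.

```python
# Brute-force counts of odd values among floor(n/k), ceil(n/k), nint(n/k)
# for k = 1..n. nint rounds half-integers up, so nint(n/k) = (2n + k) // (2k).
def F(n):
    return sum(1 for k in range(1, n + 1) if (n // k) % 2 == 1)

def C(n):
    return sum(1 for k in range(1, n + 1) if ((n + k - 1) // k) % 2 == 1)

def R(n):
    return sum(1 for k in range(1, n + 1) if ((2 * n + k) // (2 * k)) % 2 == 1)

print([F(n) for n in range(1, 21)])
print([C(n) for n in range(1, 21)])
print([R(n) for n in range(1, 21)])
```

Working entirely in integers avoids any floating-point ambiguity at the half-integer rounding boundary.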
Plots of $\Fseq_n$, $\Cseq_n$, and $\Rseq_n$, for $1\le n\le 1000$, appear in Figure~\ref{fig:first-three-sequences}.
The plots in Figure~\ref{fig:first-three-sequences} suggest that each sequence is roughly linear. As we will show, this is true asymptotically. We will find asymptotic formulas for each of these sequences, and from those we will conclude that
\[
\lim\limits_{n\to\infty}\dfrac{\Fseq_n}{n}=\log2,
\quad
\lim\limits_{n\to\infty}\dfrac{\Cseq_n}{n}=1-\log2,
\quad\text{ and }
\lim\limits_{n\to\infty}\dfrac{\Rseq_n}{n}=\frac{\pi}{2}-1.
\]
In particular, the proportion of odd terms in Sequence~\eqref{seq:intro} is asymptotically $\log2\approx 0.693147$. This explains the $69\%$ mentioned in the first paragraph.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{first-three-sequences.eps}
\caption{Plots of $y=\Fseq_n$ (red, top-most), $y=\Cseq_n$ (black, middle), and $y=\Rseq_n$ (blue, bottom-most), for $1\le n\le 1000$}
\label{fig:first-three-sequences}
\end{figure}
Our results follow from two classical results in analytic number theory: the \emph{Dirichlet divisor problem} (Theorem~\ref{thm:dirichlet}), in which one wants to count the number of lattice points in the first quadrant of the plane beneath a hyperbola; and the \emph{Gauss circle problem} (Theorem~\ref{thm:gauss}), in which one wants to count the number of lattice points contained within a circle in the plane.
Each of these classical results leads to a geometric interpretation for one of our sequences. In Proposition~\ref{prop:F_n-Dirichlet}, we will see that $\Fseq_n$ is the difference of the numbers of lattice points in two hyperbolic regions in the plane. In Proposition~\ref{prop:gauss-R_n}, we will see that the number of lattice points contained in a circle of radius $\sqrt{2n}$ is equal to $4\Rseq_n+4n+1$.
We then consider more general sequences. Since $\nint{x}=\Floor{x+1/2}$, we can think of the nearest integer function as the usual floor function shifted by $1/2$. If we replace this $1/2$ with an arbitrary real number $\alpha$ and also incorporate a real number $\nu$ to offset the numerator $n$, we obtain an \emph{$\alpha$-shifted, $\nu$-offset, floor sequence of length $n$}
\begin{equation}\label{seq:alpha-shift-intro}
\Floor{\frac{n-\nu}{1}+\alpha},
\Floor{\frac{n-\nu}{2}+\alpha},
\dots,
\Floor{\frac{n-\nu}{n}+\alpha}.
\end{equation}
We can determine how many of the integers in Sequence~\eqref{seq:alpha-shift-intro} are odd. A natural problem, which we will address, is to find $\alpha$ and $\nu$ for which (asymptotically) half of these integers are odd and half are even. We will see that there is a unique value for $\alpha\in[0,1]$, independent of $\nu$, for which this occurs, and numerically approximate it.
Another problem is to count the number of integers in Sequence~\eqref{seq:alpha-shift-intro} that belong to a given congruence class of integers. We will find exact and asymptotic formulas for these counts. In particular, by Corollary~\ref{cor:N_m-slope}, for any $m\in\N$ and $\alpha\in[0,1)$, the proportion of integers in an $\alpha$-shifted, $\nu$-offset, floor sequence of length $n$ that are congruent to 1 modulo $m$ is asymptotically equal to
\[
\frac{-\alpha}{1-\alpha}+
\int\limits_0^1 \frac{(1-x)x^{-\alpha}}{1-x^m}\mathrm{d}x
\]
and, for $2\le r\le m$, the proportion of such integers that are congruent to $r$ modulo $m$ is asymptotically equal to
\[
\int\limits_0^1 \frac{(1-x)x^{r-1-\alpha}}{1-x^m}\mathrm{d}x.
\]
Finally, we will look more closely at connections between the number of integers in a certain congruence class in an $\alpha$-shifted, $\nu$-offset, floor sequence of length $n$ and the number of lattice points in a certain region of the plane. We will demonstrate the connections by obtaining formulas in terms of these counts for the number of lattice points in the following elliptical regions: $x^2+y^2\le n$, $x^2+xy+y^2\le n$, and $x^2+2y^2\le n$, each for any $n\in\N$. Our results here follow from the theory of binary quadratic forms.
\subsection{Notation}
$\N$ denotes the set of positive integers. All logarithms in this paper use base $\ee$. The cardinality of a finite set $A$ is denoted $\#A$. We use big O notation as follows. For functions $f(x)$ and $g(x)$, if there exist constants $M$ and $a$ such that $|f(x)|\le M g(x)$ for all $x\ge a$, then we write $f(x)=\bigO(g(x))$.
\subsection{Organization}
This paper is organized as follows. In Section~\ref{sec:F_n and C_n}, we find exact and asymptotic formulas for $\Fseq_n$ and $\Cseq_n$. In Section~\ref{sec:Rseq_n}, we do the same for $\Rseq_n$. In Section~\ref{sec:generalized}, for integers $r,m$ with $1\le r\le m$, and for $\alpha\in[0,1)$, we find exact and asymptotic formulas for $\Num_{n,\alpha,\nu,r,m}$, the number of integers in the $\alpha$-shifted, $\nu$-offset, floor sequence of length $n$ that are congruent to $r$ modulo $m$. Finally, in Section~\ref{sec:applications}, we have two tasks. The first task is to compute a shift $\alpha=\alpha_0$ for which (asymptotically) the $\alpha$-shifted, $\nu$-offset, floor sequence of length $n$ contains as many odd terms as even terms. The second task is to determine the number of lattice points in certain elliptical regions of the plane in terms of the sequences $\Num_{n,\alpha,\nu,r,m}$.
\section{The floor and ceiling sequences}\label{sec:F_n and C_n}
\subsection{The floor sequence}
As was mentioned in the introduction, for $n\in\N$, $\Fseq_n$ is equal to the number of odd integers in the floor sequence
\begin{equation}\label{seq:floor}
\Floor{\frac{n}{1}}, \Floor{\frac{n}{2}}, \dots, \Floor{\frac{n}{n}},
\end{equation}
where the floor function is defined, for $x\in\mathbb{R}$, by $\Floor{x}=\max\{z\in\Z : z\le x\}$. In this subsection, we will find an exact formula for $\Fseq_n$. Then, with that formula and the solution to Dirichlet's divisor problem, we will find an asymptotic formula for $\Fseq_n$.
We start by counting the number of integers in Sequence~\eqref{seq:floor} that are greater than or equal to a given integer.
\begin{lem}\label{lem:consec_flr}
Fix $n\in\N$. For $k\in \mathbb{N}$, Sequence~\eqref{seq:floor}
contains $\Floor{n/k}$ terms that are greater than or equal to $k$.
\end{lem}
\begin{proof}
Via the division algorithm, there are $q,r\in \mathbb{Z}$ for which $n=kq+r$ with $0\leq r< k$. Then $\Floor{n/k}=q$. We will show that precisely the first $q$ terms in Sequence~\eqref{seq:floor} are greater than or equal to $k$.
Note that
\[
\flrf{n}{q}
= \flrf{kq+r}{q}
= \Floor{k+\frac{r}{q}}\ge k.
\]
Now consider an arbitrary term $\Floor{n/d}$ in Sequence~\eqref{seq:floor}. Then $1\leq d\leq n$. If $d < q$, then $n/d>n/q$ and hence $\Floor{n/d}\ge\Floor{n/q}\ge k$. On the other hand, if $d > q$, then $d\ge q+1$. This means $kd \ge k(q+1) > n$ and therefore $k>n/d\geq \Floor{n/d}$. We have already shown that $\Floor{n/d}\geq k$ if $d=q$. We conclude that $\Floor{n/d}\geq k$ if and only if $d \in \left\{1,2,\dots ,q\right\}$.
\end{proof}
We can now find an exact formula for $\Fseq_n$.
\begin{prop}\label{prop:floor_seq}
For $n\in\N$,
\[
\Fseq_n
=\sum\limits_{d=1}^n\Floor{\frac{n}{d}}(-1)^{d+1}.
\]
\end{prop}
\begin{proof}
We wish to count the number of odd terms in Sequence~\eqref{seq:floor}. Here we can use Lemma~\ref{lem:consec_flr} to note that for a given $k$ there are $\Floor{n/k}$ terms whose value is at least $k$, and thus $\Floor{n/k}-\Floor{n/(k+1)}$ terms whose value is exactly $k$. To compute $\Fseq_n$, we sum for all odd $k$. We find
\[
\Fseq_n
=\sum\limits_{\substack{1\le k\leq n \\ k \text{ odd}}} \left(\flrf{n}{k}-\flrf{n}{k+1}\right)
=\sum\limits_{d=1}^n\Floor{\frac{n}{d}}(-1)^{d+1},
\]
as desired.
\end{proof}
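The alternating-sum formula can be checked against the definition over an illustrative range (a sketch for verification, not part of the paper):

```python
# Compare F_n computed from the definition with the alternating sum
# F_n = sum_{d=1}^n (-1)^{d+1} * floor(n/d) of the proposition.
def F_direct(n):
    return sum(1 for k in range(1, n + 1) if (n // k) % 2 == 1)

def F_formula(n):
    return sum((-1) ** (d + 1) * (n // d) for d in range(1, n + 1))

assert all(F_direct(n) == F_formula(n) for n in range(1, 301))
print(F_direct(300), F_formula(300))
```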
Next, we wish to find an asymptotic formula for $\Fseq_n$. To start, we will manipulate the formula for $\Fseq_n$ obtained in Proposition~\ref{prop:floor_seq} by considering odd and even $d$ separately.
\begin{align*}
\Fseq_n
&=\sum\limits_{d\text{ odd}} \Floor{\frac{n}{d}}-\sum\limits_{d\text{ even}} \Floor{\frac{n}{d}}\\
&=\sum\limits_{d\text{ odd}} \Floor{\frac{n}{d}}+\sum\limits_{d\text{ even}} \Floor{\frac{n}{d}}-\sum\limits_{d\text{ even}} \Floor{\frac{n}{d}}-\sum\limits_{d\text{ even}} \Floor{\frac{n}{d}}\\
&=\sum\limits_{d=1}^n\Floor{\frac{n}{d}} - 2\sum\limits_{b=1}^{\Floor{n/2}}\Floor{\frac{n}{2b}}\\
&=\sum\limits_{d=1}^n\Floor{\frac{n}{d}} - 2\sum\limits_{b=1}^{\Floor{n/2}}\Floor{\frac{n/2}{b}}.
\end{align*}
We will now use $\tau(n)$, the multiplicative function which counts the number of positive divisors of $n$, and $\mathrm{D}(x)=\sum\limits_{m\le x}\tau(m)$, the \emph{divisor summatory function}, where the summation is taken over positive integers $m$. With this function, we have the following:
\[
\mathrm{D}(n)
= \sum\limits_{m=1}^n\tau(m)
= \sum\limits_{m=1}^n\sum\limits_{d\mid m}1
= \sum\limits_{d=1}^n\Floor{\frac{n}{d}}.
\]
Geometrically, $\mathrm{D}(n)$ is the number of lattice points in the interior and boundary of the region in the $xy$-plane bounded by the graphs of $x=1$, $y=1$, and the hyperbola $xy=n$.
Observe that $\Fseq_n=\mathrm{D}(n)-2\mathrm{D}(n/2)$. For $n\in\N$, we define two regions. Let $H_{1,n}$ be the hyperbolic region in the $xy$-plane with $x,y\ge1$ bounded by the graphs of the hyperbola $xy=n$ and the hyperbola $xy=n/2$. Let $H_{2,n}$ be the region in the $xy$-plane bounded by the graphs of $x=1$, $y=1$, and the hyperbola $xy=n/2$. Then $\mathrm{D}(n)$ counts the number of lattice points in $H_{1,n}\cup H_{2,n}$, and $\mathrm{D}(n/2)$ counts the number of lattice points in $H_{2,n}$. This gives us a geometric interpretation for $\Fseq_n$.
\begin{prop}\label{prop:F_n-Dirichlet}
For $n\in\N$, $\Fseq_n$ is equal to the number of lattice points in $H_{1,n}$ minus the number of lattice points in $H_{2,n}$. For $H_{1,n}$, we include all points on the boundary except for those on the boundary curve $xy=n/2$. For $H_{2,n}$, we include all points on the boundary.
\end{prop}
\begin{figure}
\centering
\includegraphics[scale=.7]{hyperbola-points.eps}
\caption{For $n=17$, the graphs of $xy=17$ and $xy=17/2$ along with lattice points in the hyperbolic regions $H_{1,17}$ (circles) and in $H_{2,17}$ (stars).}
\label{fig:hyperbola-F_n}
\end{figure}
\begin{example}
To illustrate Proposition~\ref{prop:F_n-Dirichlet}, we have drawn the regions $H_{1,n}$ and $H_{2,n}$ for $n=17$ in Figure~\ref{fig:hyperbola-F_n}. Since $H_{1,17}$ contains 32 points and $H_{2,17}$ contains 20 points, we find $\Fseq_{17}=32-20=12$.
\end{example}
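Both the identity $\Fseq_n=\mathrm{D}(n)-2\mathrm{D}(n/2)$ and the $n=17$ lattice-point counts above are easy to verify numerically (a verification sketch, not part of the paper):

```python
# D(n) = sum_{d <= n} floor(n/d); for integer n, D(n/2) = sum_b floor(n/(2b)).
def D(n):
    return sum(n // d for d in range(1, n + 1))

def D_half(n):  # D(n/2), valid whether n/2 is an integer or a half-integer
    return sum(n // (2 * b) for b in range(1, n // 2 + 1))

def F(n):
    return sum(1 for k in range(1, n + 1) if (n // k) % 2 == 1)

# For n = 17: H_{1,17} holds D(17) - D(17/2) = 52 - 20 = 32 lattice points.
print(D(17), D_half(17), F(17))  # -> 52 20 12
```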
If we have an asymptotic formula for $\mathrm{D}(x)$, known as the \emph{Dirichlet divisor problem}, then we get an asymptotic formula for $\Fseq_n$. The following theorem of Dirichlet is exactly what we need. (For details, see, e.g., \cite[Theorem 3.3]{Apostol1976}. We will use a slightly modified version of this approach in the proof of Proposition~\ref{prop:sum-of-floors-to-sum-of-fractions} later in this paper.)
\begin{thm}\label{thm:dirichlet}
For all $x\ge1$,
\[
\mathrm{D}(x)
=x\log{x}+(2\gamma-1)x+\bigO(\sqrt{x}),
\]
where $\gamma = \lim\limits_{n\to\infty}\left(-\log{n}+\sum\limits_{k=1}^n 1/k\right)\approx 0.577216$ is the Euler-Mascheroni constant.
\end{thm}
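A quick empirical look at the Dirichlet approximation (my sanity check, not from the paper; the true error is in fact known to be $o(\sqrt{x})$, so the bound below is comfortable):

```python
from math import log, sqrt

gamma = 0.5772156649015329  # Euler-Mascheroni constant

def D(n):  # divisor summatory function at an integer argument
    return sum(n // d for d in range(1, n + 1))

n = 100000
err = D(n) - (n * log(n) + (2 * gamma - 1) * n)
print(err / sqrt(n))  # the normalized error stays bounded
```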
Next, let $F(x)=\mathrm{D}(x)-2\mathrm{D}(x/2)$. Then $\Fseq_n=F(n)$ and, via Theorem~\ref{thm:dirichlet}, we have $F(x)=x\log2+\bigO(\sqrt{x})$. Restricting to integers $n$, we obtain the following result.
\begin{prop}\label{prop:floor_seq_asymp}
For $n\in\N$,
$\Fseq_n
= n\log2+\bigO\left(\sqrt{n}\right)$,
and hence $\lim\limits_{n\to\infty}\frac{1}{n}\Fseq_n=\log{2}\approx0.693147$.
\end{prop}
\subsection{The ceiling sequence}
As was mentioned in the introduction, for $n\in\N$, $\Cseq_n$ is equal to the number of odd integers in the ceiling sequence
\begin{equation}\label{seq:ceiling}
\Ceil{\frac{n}{1}}, \Ceil{\frac{n}{2}}, \dots, \Ceil{\frac{n}{n}},
\end{equation}
where the ceiling function is defined, for $x\in\mathbb{R}$, by $\Ceil{x}=\min\{z\in\Z : z\ge x\}$. In this subsection, we will find a relation between $\Cseq_n$ and $\Fseq_{n-1}$ which, along with results from the previous subsection, will lead to exact and asymptotic formulas for $\Cseq_n$.
\begin{prop}\label{prop:ceiling_seq}
For $n\in\N$,
\[
\Cseq_n
= n-\Fseq_{n-1} = n+\sum\limits_{d=1}^{n-1} \Floor{\frac{n-1}{d}}(-1)^{d}.
\]
\end{prop}
\begin{proof}
We begin by showing that $\Ceil{n/k}=\Floor{(n-1)/k}+1$ for all $k\in\mathbb{N}$. As in Lemma~\ref{lem:consec_flr}, via the division algorithm, there are $q,r\in \mathbb{Z}$ for which $n=kq+r$ with $0\leq r< k$.
If $r=0$, then $n/k=q$ and so $\Ceil{n/k}=\Ceil{q}=q$. We also have
\[
\flrf{n-1}{k}=\left\lfloor\frac{n}{k}-\frac{1}{k}\right\rfloor
=\left\lfloor q-\frac{1}{k}\right\rfloor
=q-1.
\]
Thus, $\Ceil{n/k}=\Floor{(n-1)/k}+1$.
If $r\ne0$, then $1\le r<k$. We have
\[
\clf{n}{k}
=\clf{kq+r}{k}
=\left\lceil q+\frac{r}{k}\right\rceil
= q+1
\]
and
\[
\flrf{n-1}{k}
=\flrf{kq+r-1}{k}
=\left\lfloor q+\frac{r-1}{k}\right\rfloor
=q,
\]
with the final equality following from the fact that $1\le r<k$. Thus, $\Ceil{n/k}=\Floor{(n-1)/k}+1$.
Since we have $\Ceil{n/k}=\Floor{(n-1)/k}+1$ for all $k\in\N$, for each pair of integers $\left(\Ceil{n/k},\Floor{(n-1)/k}\right)$, exactly one integer is odd. Thus, the total number of odd integers in the two sequences
\[
\Ceil{\frac{n}{1}},\Ceil{\frac{n}{2}},\dots,\Ceil{\frac{n}{n}}
\text{ and }
\Floor{\frac{n-1}{1}},\Floor{\frac{n-1}{2}},\dots,\Floor{\frac{n-1}{n}}
\] is $n$. The number of odd integers in the first sequence is $\Cseq_n$. Since the last term in the second sequence is $\Floor{(n-1)/n}=0$, the number of odd integers in the second sequence is $\Fseq_{n-1}$. Thus, $\Fseq_{n-1}+\Cseq_n=n$. The stated result then follows from Proposition~\ref{prop:floor_seq}.
\end{proof}
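The key relation $\Cseq_n=n-\Fseq_{n-1}$ (with $\Fseq_0=0$) checks out numerically; a brute-force sketch, written for verification rather than taken from the paper:

```python
# Verify C_n = n - F_{n-1} directly from the floor/ceiling definitions.
def F(n):
    return sum(1 for k in range(1, n + 1) if (n // k) % 2 == 1)

def C(n):
    return sum(1 for k in range(1, n + 1) if ((n + k - 1) // k) % 2 == 1)

assert all(C(n) == n - F(n - 1) for n in range(1, 301))
print(C(20), 20 - F(19))  # -> 6 6
```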
We immediately obtain the following asymptotic formula for $\Cseq_n$.
\begin{cor}\label{cor:ceiling_seq_asymp}
For $n\in\N$, $\Cseq_n
= (1-\log2)n+\bigO\left(\sqrt{n}\right)$,
and hence $
\lim\limits_{n\to\infty}\frac{1}{n}\Cseq_n
=1-\log2 \approx 0.306853$.
\end{cor}
\begin{proof}
Combining Proposition~\ref{prop:ceiling_seq} with the asymptotic formula for $\Fseq_n$ in Proposition~\ref{prop:floor_seq_asymp}, we find
\[
\Cseq_n
= n - \Fseq_{n-1}
= n - (n-1)\log2 + \bigO\left(\sqrt{n-1}\right)
= n-n\log2 + \bigO\left(\sqrt{n}\right).
\]
The limit result follows.
\end{proof}
\begin{remark}\label{rmk:an-and-bn}
If we just wanted an asymptotic formula for $\Cseq_n$ without finding an exact formula first, we could have used the asymptotic formula for $\Fseq_n$, the observation that
\[
\Ceil{n/k} =
\begin{dcases*}
\Floor{n/k} & if $k\mid n$
\\ \Floor{n/k}+1 & if $k\nmid n$,
\end{dcases*}
\]
and the following lemma.
\begin{lem}\label{lem:num_divisors_of_n}
For $n\in\N$, the number of positive divisors of $n$ is at most $2\sqrt{n}$.
\end{lem}
\begin{proof}
Suppose $n=de$ for positive integers $d\le e$. Then $1\le d\le \sqrt{n}$. It follows that there are at most $\sqrt{n}$ pairs of divisors $d,e$, and thus at most $2\sqrt{n}$ positive divisors of $n$.
\end{proof}
Thus, whenever $k$ does not divide $n$, exactly one of $\Floor{n/k}$ and $\Ceil{n/k}$ is odd. When $k$ does divide $n$, then either both $\Floor{n/k}$ and $\Ceil{n/k}$ are odd, or neither is. Since there are at most $2\sqrt{n}$ divisors $d$ of $n$, we have
\[
\Fseq_n+\Cseq_n \in [n-2\sqrt{n},n+2\sqrt{n}].
\]
Thus $\Fseq_n+\Cseq_n = n + \bigO\left(\sqrt{n}\right)$,
from which we conclude that $\Cseq_n=n-\Fseq_n+\bigO\left(\sqrt{n}\right) = (1-\log2)n+\bigO\left(\sqrt{n}\right)$. This gives an alternate proof of Corollary~\ref{cor:ceiling_seq_asymp}.
\end{remark}
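The two ingredients of the remark, the divisor bound of Lemma~\ref{lem:num_divisors_of_n} and the resulting window for $\Fseq_n+\Cseq_n$, can be confirmed over a small range (a verification sketch, not part of the paper):

```python
from math import isqrt

def tau(n):  # number of positive divisors of n
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def F(n):
    return sum(1 for k in range(1, n + 1) if (n // k) % 2 == 1)

def C(n):
    return sum(1 for k in range(1, n + 1) if ((n + k - 1) // k) % 2 == 1)

for n in range(1, 200):
    assert tau(n) ** 2 <= 4 * n            # tau(n) <= 2*sqrt(n), as integers
    assert abs(F(n) + C(n) - n) <= tau(n)  # only divisors k can break parity
print(tau(36), isqrt(36))
```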
\section{The nearest integer sequence}\label{sec:Rseq_n}
As was mentioned in the introduction, for $n\in\N$, $\Rseq_n$ counts the number of odd integers in the nearest integer (rounding) sequence
\begin{equation}\label{seq:nint-seq}
\nint{\frac{n}{1}}, \nint{\frac{n}{2}}, \dots, \nint{\frac{n}{n}}
\end{equation}
where the nearest integer function is defined, for $x\in\mathbb{R}$, by
\[\nint{x}=\max\{z\in\Z : |z-x|\le |z'-x|\text{ for all } z'\in\Z\}.\]
(In other words, we round to the nearest integer, and half-integers are rounded up.) In this section, we will find an exact formula for $\Rseq_n$. Then, with that formula and the solution to Gauss's circle problem, we will find an asymptotic formula for $\Rseq_n$.
We should first note that $\nint{n/k}=\Floor{n/k+1/2}$, and thus Sequence~\eqref{seq:nint-seq} is equal to the sequence \begin{equation}\label{seq:rounding-seq}
\Floor{\frac{n}{1}+\frac{1}{2}}, \Floor{\frac{n}{2}+\frac{1}{2}}, \Floor{\frac{n}{3}+\frac{1}{2}}, \dots, \Floor{\frac{n}{n}+\frac{1}{2}}.
\end{equation}
Just as we did in obtaining an exact formula for $\Fseq_n$ (Proposition~\ref{prop:floor_seq}), we begin with a lemma.
\begin{lem}\label{lem:round_count}
Fix $n\in\N$. For $k\in\N$, let $g(k)$ equal the number of terms in Sequence~\eqref{seq:rounding-seq} that are greater than or equal to $k$. Then $g(1)=n$ and, for $k\ge2$, $g(k)=\Floor{2n/(2k-1)}$.
\end{lem}
\begin{proof}
Consider an arbitrary term $\Floor{n/d+1/2}$ in Sequence~\eqref{seq:rounding-seq}, so that $1\le d\le n$. We first observe that $\Floor{n/d+1/2}\ge\Floor{n/n+1/2} = 1$. Hence, $g(1)=n$.
Next, for $k\ge2$, we have either $d\le 2n/(2k-1)$ or $d>2n/(2k-1)$. We'll consider these cases separately.
If $d\leq 2n/(2k-1)$, then $k-1/2 \leq n/d$ which implies $k \leq n/d+1/2$. Since $k$ is an integer, we take the floor of both sides to find $k \leq\Floor{n/d+1/2}$.
If $d>2n/(2k-1)$, then $k-1/2>n/d$, which implies $k > n/d+1/2 \geq \Floor{n/d+1/2}$.
Thus, for $k\ge2$, we have $\Floor{n/d+1/2}\geq k$ for $d \in\left\{1, 2, \dots,\Floor{2n/(2k-1)}\right\}$. Since $\Floor{2n/(2k-1)}\le n$, we conclude that $g(k)=\Floor{2n/(2k-1)}$.
\end{proof}
With this lemma, we can now find an exact formula for $\Rseq_n$.
\begin{prop}\label{prop:rounding_seq}
For $n\in\N$,
\[
\Rseq_n
= -n+\sum\limits_{d=1}^{n}\Floor{\frac{2n}{2d-1}}(-1)^{d+1}.
\]
\end{prop}
\begin{proof}
We wish to count the number of odd terms in Sequence~\eqref{seq:nint-seq}. By Lemma~\ref{lem:round_count}, there are $n$ terms that are at least 1, and, for $k\ge2$, there are $\Floor{2n/(2k-1)}$ terms that are at least $k$. We can use this to count the number of terms that are equal to a given value.
For $k=1$, there are $n-\Floor{2n/3}$ terms that are equal to 1. For $k\ge2$, there are $\Floor{2n/(2k-1)}-\Floor{2n/(2k+1)}$ terms that are equal to $k$. Summing over odd $k$, we find
\begin{align*}
\Rseq_n
&=n-\Floor{\frac{2n}{3}}+\sum\limits_{\substack{3\le k\le n \\ k\text{ odd}}}\left(\Floor{\frac{2n}{2k-1}}-\Floor{\frac{2n}{2k+1}}\right) \\
&=-n+\sum\limits_{\substack{1\le k\leq n\\ k\text{ odd}}} \left(\flrf{2n}{2k-1}-\flrf{2n}{2k+1}\right)\\
&=-n+\sum\limits_{d=1}^n\Floor{\frac{2n}{2d-1}}(-1)^{d+1},
\end{align*}
which completes the proof.
\end{proof}
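As with $\Fseq_n$, the exact formula is easy to test against the definition (a verification sketch, not part of the paper):

```python
# Check R_n = -n + sum_{d=1}^n (-1)^{d+1} floor(2n/(2d-1)) against the
# definition, with nint(n/k) computed in integers as (2n + k) // (2k).
def R_direct(n):
    return sum(1 for k in range(1, n + 1) if ((2 * n + k) // (2 * k)) % 2 == 1)

def R_formula(n):
    return -n + sum((-1) ** (d + 1) * (2 * n // (2 * d - 1))
                    for d in range(1, n + 1))

assert all(R_direct(n) == R_formula(n) for n in range(1, 301))
print(R_direct(20), R_formula(20))  # -> 12 12
```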
Next, we will work toward an asymptotic formula for the summation in $\Rseq_n$. We will use results of Jacobi and Gauss.
To start, we need some notation. For $n\in\N$, $r\in\Z$, and $m\in\N$, let $\divs_{r,m}(n)$ denote the number of positive divisors of $n$ that are congruent to $r$ modulo $m$. We will be interested in values of this function for $m=4$ and $r$ equal to 1 or 3. For example, with $n=45$, we have $\divs_{1,4}(45)=4$ and $\divs_{3,4}(45)=2$ because the positive divisors of 45 are 1, 3, 5, 9, 15, and 45.
\begin{lem}
For $n\in\N$,
\[
\Rseq_n
= -n + \sum\limits_{k=1}^{2n}\left(\divs_{1,4}(k)-\divs_{3,4}(k)\right).
\]
\end{lem}
\begin{proof}
Each integer $d$ divides $\Floor{2n/d}$ integers in the interval $[1,2n]$. If we want to count the number of divisors that are 1 modulo 4 for all of the integers from 1 to $2n$, we see that 1 is a divisor of $\Floor{2n/1}$ terms, 5 is a divisor of $\Floor{2n/5}$ terms, 9 is a divisor of $\Floor{2n/9}$ terms, and so on. In other words, we get
\[
\sum\limits_{k=1}^{2n}\divs_{1,4}(k)
=\Floor{\frac{2n}{1}}+\Floor{\frac{2n}{5}}+\Floor{\frac{2n}{9}}+\dots.
\]
Similarly, if we want to add up the number of divisors that are 3 modulo 4 for all of the integers from 1 to $2n$, we get
\[
\sum\limits_{k=1}^{2n}\divs_{3,4}(k)
=\Floor{\frac{2n}{3}}+\Floor{\frac{2n}{7}}+\Floor{\frac{2n}{11}}+\dots.
\]
Note that the terms in each of the two above summations are eventually all 0, and thus these are finite sums.
Hence,
\begin{align*}
\sum\limits_{d=1}^n\Floor{\frac{2n}{2d-1}}(-1)^{d+1}
&= \Floor{\frac{2n}{1}}-\Floor{\frac{2n}{3}}+\Floor{\frac{2n}{5}}-\Floor{\frac{2n}{7}}+\cdots+(-1)^{n+1}\Floor{\frac{2n}{2n-1}} \\
&= \sum\limits_{k=1}^{2n}\left(\divs_{1,4}(k)-\divs_{3,4}(k)\right).
\end{align*}
The stated result then follows from the formula for $\Rseq_n$ in Proposition~\ref{prop:rounding_seq}.
\end{proof}
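The divisor-counting identity is also easy to confirm by brute force; the sketch below (our code, not part of the proof) computes $\divs_{1,4}$ and $\divs_{3,4}$ by trial division.

```python
def divisors_mod(n, r, m):
    # number of positive divisors of n congruent to r modulo m
    return sum(1 for d in range(1, n + 1) if n % d == 0 and d % m == r)

def R_via_divisors(n):
    # -n + sum_{k=1}^{2n} (d_{1,4}(k) - d_{3,4}(k)), as in the lemma
    return -n + sum(divisors_mod(k, 1, 4) - divisors_mod(k, 3, 4)
                    for k in range(1, 2 * n + 1))

def R_direct(n):
    # direct count of odd round-half-up quotients, floor(n/k + 1/2) = (2n+k)//(2k)
    return sum(1 for k in range(1, n + 1) if (2 * n + k) // (2 * k) % 2 == 1)

assert divisors_mod(45, 1, 4) == 4 and divisors_mod(45, 3, 4) == 2
assert all(R_via_divisors(n) == R_direct(n) for n in range(1, 120))
```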
Next, let $\Jacr_2(n)$ be the number of representations of $n$ as a sum of two integer squares. More precisely, $\Jacr_2(n)=\#\{(a,b)\in\Z^2 : a^2+b^2=n\}$. For example, still with $n=45$, $\Jacr_2(45)=8$ because
\[
45=(\pm3)^2+(\pm6)^2=(\pm6)^2+(\pm3)^2,
\]
which is a total of 8 combinations. Jacobi's two-square theorem relates $\Jacr_2(n)$ to the number of divisors of $n$ that are 1 modulo 4 and that are 3 modulo 4.
\begin{thm}[Jacobi's two-square theorem]\label{thm:jacobi}
For $n\in\N$, $\Jacr_2(n)=4(\divs_{1,4}(n)-\divs_{3,4}(n))$.
\end{thm}
Jacobi proved this theorem, along with theorems about the number of representations of $n$ using four squares, using six squares, and using eight squares, in 1829 with the use of elliptic theta functions. (See \cite{Grosswald1985}.)
One may also prove Theorem~\ref{thm:jacobi} in the context of the Gaussian integers, which is the ring
\[
\Z[\ii]=\{a+b\ii : a,b\in\Z,\, \ii^2=-1\}.
\]
The norm of the Gaussian integer $a+b\ii$ is $a^2+b^2$. It follows that $\Jacr_2(n)$ is equal to the number of Gaussian integers with norm equal to a given positive integer $n$. With an understanding of what the prime elements of $\Z[\ii]$ are, along with the fact that $\Z[\ii]$ is a unique factorization domain, one may prove Theorem~\ref{thm:jacobi}. (For details, see, e.g., \cite[Theorem 278]{HardyWright1979} or a wonderful video by 3Blue1Brown on YouTube \cite{3b1b-pi-prime-regularities}.)
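Jacobi's two-square theorem can likewise be verified numerically for small $n$; the code below is our own illustrative check and simply enumerates representations.

```python
import math

def r2(n):
    # number of ways to write n as an ordered sum of two integer squares
    m = math.isqrt(n)
    return sum(1 for a in range(-m, m + 1) for b in range(-m, m + 1)
               if a * a + b * b == n)

def jacobi_rhs(n):
    # 4 * (d_{1,4}(n) - d_{3,4}(n)): +1 per divisor = 1 (mod 4), -1 per divisor = 3 (mod 4)
    return 4 * sum(1 if d % 4 == 1 else -1
                   for d in range(1, n + 1) if n % d == 0 and d % 2 == 1)

assert r2(45) == 8                  # the example above
assert all(r2(n) == jacobi_rhs(n) for n in range(1, 400))
```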
Visualizing $\Z[\ii]$ as a lattice in the complex plane (with $a+b\ii\in\Z[\ii]$ corresponding to the point $(a,b)$ in the plane), Jacobi's two-square theorem says that the number of lattice points on a circle of radius $\sqrt{n}$ centered at the origin is $4(\divs_{1,4}(n)-\divs_{3,4}(n))$. In general, the point $(a,b)$ is on a circle of radius $\sqrt{a^2+b^2}$ centered at the origin. Thus, adding the numbers of points on circles of radii $\sqrt{1}, \sqrt{2},\dots,\sqrt{2n}$ gives us the total number of lattice points different from the origin in the interior or on the boundary of a circle of radius $\sqrt{2n}$ centered at the origin. Back to our formula for $\Rseq_n$, we now have
\begin{equation}\label{eqn:Rseq_n-circle}
\Rseq_n
= -n + (1/4)\cdot \#\{(a,b)\in\Z^2 : 0<a^2+b^2\le 2n\}.
\end{equation}
Let $\mathrm{C}(x)$ denote the number of lattice points in the interior or on the boundary of a circle of radius $\sqrt{x}$ centered at the origin. Finding an asymptotic formula for $\mathrm{C}(x)$ is known as the \emph{Gauss circle problem}. Equation~\eqref{eqn:Rseq_n-circle} shows us how to calculate $\mathrm{C}(2n)$ in terms of $\Rseq_n$.
\begin{prop}\label{prop:gauss-R_n}
For $n\in\N$, the number of lattice points in the interior or on the boundary of a circle of radius $\sqrt{2n}$ centered at the origin is $4\Rseq_n+4n+1$. That is, $\mathrm{C}(2n)=4\Rseq_n+4n+1$.
\end{prop}
\begin{figure}
\centering
\includegraphics[scale=.5]{gauss-circle.eps}
\caption{The number of lattice points within and on a circle of radius $6=\sqrt{2\cdot18}$ is, by Proposition~\ref{prop:gauss-R_n}, $\mathrm{C}(2\cdot18)=4\Rseq_{18}+4\cdot18+1=4\cdot10+4\cdot18+1=113$.}
\label{fig:gauss-circle}
\end{figure}
See Figure~\ref{fig:gauss-circle} for an example with $n=18$, which uses data from Figure~\ref{fig:first-20-terms}.
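Proposition~\ref{prop:gauss-R_n} is easy to confirm by brute force. The sketch below (our own code) counts lattice points directly and reproduces the count $\mathrm{C}(36)=113$ from the figure.

```python
import math

def lattice_count(x):
    # C(x): lattice points (a, b) with a^2 + b^2 <= x
    m = math.isqrt(int(x))
    return sum(1 for a in range(-m, m + 1) for b in range(-m, m + 1)
               if a * a + b * b <= x)

def R(n):
    # odd round-half-up quotients of n/k, via floor(n/k + 1/2) = (2n + k) // (2k)
    return sum(1 for k in range(1, n + 1) if (2 * n + k) // (2 * k) % 2 == 1)

assert lattice_count(36) == 113     # the n = 18 example
assert all(lattice_count(2 * n) == 4 * R(n) + 4 * n + 1 for n in range(1, 120))
```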
Geometrically, we see that $\mathrm{C}(x)$ is a non-decreasing function of $x$. For any $n\in\N$,
\[
0
\le \mathrm{C}(2n+2)-\mathrm{C}(2n)
= 4\Rseq_{n+1}+4(n+1)+1-4\Rseq_{n}-4n-1
= 4\Rseq_{n+1}+4-4\Rseq_n.
\]
Thus, $\Rseq_{n+1}-\Rseq_n\ge-1$. This proves the following corollary.
\begin{cor}
The sequence $\Rseq_n$ decreases by at most 1 in any step.
\end{cor}
A decrease of 1 from $\Rseq_n$ to $\Rseq_{n+1}$ occurs precisely when $2n+2$ and $2n+1$ each cannot be written as a sum of two integer squares. This occurs when the prime factorizations of $2n+2$ and $2n+1$ each contain some prime which is 3 modulo 4 raised to an odd power. From our data in Figure~\ref{fig:first-20-terms}, we see a decrease by 1 for $n=5$. Observe that $2n+2=12=2^2\cdot3^1$ and $2n+1=11^1$. Neither 11 nor 12 is a sum of two integer squares.
For comparison, there is no bound for the amount in which the sequences $\Fseq_n$ and $\Cseq_n$ can decrease in any step. Indeed, for $k\in\N$ and $n=2^k$, $\Fseq_n-\Fseq_{n-1}=\Cseq_n-\Cseq_{n-1}=-(k-1)$.
We now want an asymptotic formula for $\Rseq_n$. To get there, we use an asymptotic result for $\mathrm{C}(x)$ that is due to Gauss.
\begin{thm}\label{thm:gauss}
For $x\ge1$, $\mathrm{C}(x) = \pi x + \bigO(\sqrt{x})$.
\end{thm}
This result appears widely in the literature. See, for instance, \cite[Theorem 41]{Rademacher77} or \cite[Chapter 2, Section 7]{Grosswald1985}.
We can now compute an asymptotic formula for $\Rseq_n$.
\begin{prop}\label{prop:rounding_seq_asymp}
For $n\in\N$, $\Rseq_n = (\pi/2-1)n+\bigO\left(\sqrt{n}\right)$,
and hence
$\lim\limits_{n\to\infty}\frac{1}{n}\Rseq_n
=\pi/2-1
\approx 0.570796$.
\end{prop}
\begin{proof}
Combining Proposition~\ref{prop:gauss-R_n} and Theorem~\ref{thm:gauss}, we find
\begin{align*}
\Rseq_n
&= -n + (1/4)\cdot \left(\mathrm{C}(2n) - 1\right) \\
&= -n + (1/4)\cdot \left(2\pi n - 1 + \bigO(\sqrt{2n})\right) \\
&= -n + (\pi/2) n + \bigO\left(\sqrt{n}\right),
\end{align*}
which proves the first part. The limit behavior follows.
\end{proof}
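A small numerical experiment (ours, not part of the proof) illustrates the convergence of $\Rseq_n/n$ toward $\pi/2-1$; the tolerances below are deliberately generous, chosen only to make the check robust against the $\bigO(\sqrt{n})$ error term.

```python
import math

def R(n):
    # count of k in [1, n] with round-half-up(n/k) odd, in exact integer arithmetic
    return sum(1 for k in range(1, n + 1) if (2 * n + k) // (2 * k) % 2 == 1)

limit = math.pi / 2 - 1            # ~ 0.570796
for n in (100, 1000, 4000):
    print(n, R(n) / n)             # approaches the limit as n grows

assert abs(R(4000) / 4000 - limit) < 0.1
```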
\begin{remark}
When we defined $\nint{x}$ in the introduction, we chose to always round up half-integers. Suppose we choose a different convention for rounding half-integers (e.g., always rounding down, or always rounding to the nearest even integer, or something else). Call the new rounding function $\nintp{x}$ and consider the resulting sequence
\[
\Rseq_n'
=\#\left\{1\le k\le n : \nintp{n/k}\text{ is odd}\right\}.
\]
To compute $\left|\Rseq_n'-\Rseq_n\right|$, we need only consider those $k$ for which $n/k$ is a half-integer. But $n/k$ is a half-integer when $n/k=l/2$ for odd $l$, which means $k$ is a divisor of $2n$. By Lemma~\ref{lem:num_divisors_of_n}, $2n$ has at most $2\sqrt{2n}$ divisors. Thus, there are at most $2\sqrt{2n}$ such $k$, and so
\[
|\Rseq_n'-\Rseq_n|
\le 2\sqrt{2n},
\]
from which we conclude $\Rseq_n'=\Rseq_n+\bigO\left(\sqrt{n}\right)=n(\pi/2-1)+\bigO\left(\sqrt{n}\right)$.
\end{remark}
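For concreteness, here is a quick comparison (our code) of the round-half-up convention with round-half-down, $\nintp{x}=\Ceil{x-1/2}$; the bound $2\sqrt{2n}$ from the remark is checked directly.

```python
import math

def R_half_up(n):
    # floor(n/k + 1/2) = (2n + k) // (2k)
    return sum(1 for k in range(1, n + 1) if (2 * n + k) // (2 * k) % 2 == 1)

def R_half_down(n):
    # ceil(n/k - 1/2) = ceil((2n - k)/(2k)) = -((k - 2n) // (2k)),
    # using floor division on the negation
    return sum(1 for k in range(1, n + 1) if -((k - 2 * n) // (2 * k)) % 2 == 1)

assert all(abs(R_half_down(n) - R_half_up(n)) <= 2 * math.sqrt(2 * n)
           for n in range(1, 400))
```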
\section{Counting by congruence class with shifted floors}\label{sec:generalized}
We can now generalize our results from the preceding sections in three ways.
First, we note that we can write $\nint{x}$ in terms of the floor function, by $\nint{x}=\Floor{x+1/2}$. We may therefore think of $\nint{x}$ as a \emph{$1/2$-shifted floor function} and the corresponding sequence
\[
\nint{\frac{n}{1}}, \nint{\frac{n}{2}}, \dots, \nint{\frac{n}{n}}
=
\Floor{\frac{n}{1}+\frac{1}{2}}, \Floor{\frac{n}{2}+\frac{1}{2}}, \dots, \Floor{\frac{n}{n}+\frac{1}{2}}
\]
as a \emph{$1/2$-shifted floor sequence of length $n$}. We will consider an arbitrary shift $\alpha\in\mathbb{R}$.
Second, we now have sequences of length $n$ where the $k$th term (for $k\in\{1,2,\dots,n\}$) is $\Floor{n/k + \alpha}$. We can offset the numerator $n$ by some real number $\nu$, resulting in a general term of the form $\Floor{(n-\nu)/k+\alpha}$.
Third, in our earlier work we focused on the number of odd terms in each sequence, which means counting the number of terms in each sequence which are congruent to 1 modulo 2. We will instead count the number of terms in each sequence that are congruent to $r$ modulo $m$ for integers $r$ and $m$ with $m\ge1$.
Let's set some notation. For $\alpha,\nu\in\mathbb{R}$, the \emph{$\alpha$-shifted, $\nu$-offset, floor sequence of length $n$} is
\begin{equation}\label{seq:alpha-shift}
\Floor{\frac{n-\nu}{1}+\alpha}, \Floor{\frac{n-\nu}{2}+\alpha}, \dots, \Floor{\frac{n-\nu}{n}+\alpha}.
\end{equation}
For $r\in\Z$ and $m\in\N$, let $\Num_{n,\alpha,\nu,r,m}$ equal the number of integers in Sequence~\eqref{seq:alpha-shift} that are congruent to $r$ modulo $m$. In other words, let
\[
\Num_{n,\alpha,\nu,r,m}
=\#\left\{1\le k\le n : \Floor{\frac{n-\nu}{k}+\alpha}\equiv r\pmod*{m} \right\}.
\]
Connecting to our earlier work, we see that for $n\in\N$, $\Fseq_n=\Num_{n,0,0,1,2}$ and $\Rseq_n=\Num_{n,1/2,0,1,2}$.
Since every integer is in exactly one congruence class modulo $m$, we immediately see that
\begin{equation}\label{eqn:sum-is-n}
\sum\limits_{r=1}^m \Num_{n,\alpha,\nu,r,m}
=n
\end{equation}
for all $\alpha$, $\nu$, and $m$. If we let $m=1$, we find $\Num_{n,\alpha,\nu,r,1}=n$ for all $\alpha$, $\nu$, and $r$. In what follows, we will assume $m\ge2$. Furthermore, noting that $\Num_{n,\alpha,\nu,r,m} = \Num_{n,\alpha-1,\nu,r-1,m}$, we may suppose $\alpha\in[0,1)$. Next, since the difference $\Num_{n,\alpha,\nu,r,m}-\Num_{n-1,\alpha,\nu-1,r,m}$ is 1 or 0, depending on whether or not $\Floor{(n-\nu)/n+\alpha}$ is congruent to $r$ modulo $m$, we may suppose $\nu\in[0,1)$. Finally, it will be useful to take $r\in\{1,2,\dots,m\}$, the least positive representative of each congruence class modulo $m$.
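The notation is easy to exercise in code; the checks below (ours) confirm Equation~\eqref{eqn:sum-is-n} and the identifications $\Fseq_n=\Num_{n,0,0,1,2}$ and $\Rseq_n=\Num_{n,1/2,0,1,2}$. To sidestep floating-point edge cases at half-integers, the shifts are passed as exact fractions.

```python
from fractions import Fraction
import math

def N(n, alpha, nu, r, m):
    # number of k in [1, n] with floor((n - nu)/k + alpha) congruent to r mod m;
    # alpha and nu are exact Fractions, so the floors are computed exactly
    return sum(1 for k in range(1, n + 1)
               if math.floor((n - nu) / k + alpha) % m == r % m)

half = Fraction(1, 2)
for n in range(1, 80):
    # Equation (sum-is-n): the residue classes mod m partition the sequence
    assert sum(N(n, half, Fraction(1, 4), r, 3) for r in range(1, 4)) == n
    # special cases from the text: F_n and R_n
    assert N(n, Fraction(0), Fraction(0), 1, 2) == sum(
        1 for k in range(1, n + 1) if (n // k) % 2 == 1)
    assert N(n, half, Fraction(0), 1, 2) == sum(
        1 for k in range(1, n + 1) if (2 * n + k) // (2 * k) % 2 == 1)
```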
\subsection{A sum of differences of floors}
Our first task is to write $\Num_{n,\alpha,\nu,r,m}$ as a sum of differences of floors. We will generalize the approaches taken in writing $\Fseq_n$ (Proposition~\ref{prop:floor_seq}) and $\Rseq_n$ (Proposition~\ref{prop:rounding_seq}) as summations involving differences of floors.
The following lemma generalizes Lemma~\ref{lem:consec_flr} and Lemma~\ref{lem:round_count}.
\begin{lem}\label{lem:consec-general}
For $\alpha,\nu\in[0,1)$, $k\in\N$, and $n\in\N$ with $n\alpha\ge\nu$, let $g(k)$ equal the number of integers in an $\alpha$-shifted, $\nu$-offset, floor sequence of length $n$ (Sequence~\eqref{seq:alpha-shift}) that are greater than or equal to $k$. Then
\[
g(k) =
\begin{dcases*}
n & if $k=1$,\\
\Floor{(n-\nu)/(k-\alpha)} & if $k\ge2$.
\end{dcases*}
\]
\end{lem}
\begin{proof}
To start, note that $\nu<1\le n$ for all $n\in\N$. Thus $n-\nu>0$, which implies $(n-\nu)/1+\alpha, (n-\nu)/2+\alpha, \dots, (n-\nu)/n+\alpha$ is a decreasing sequence. Looking at the final term, since $\alpha\ge\nu/n$, we have
\[
\frac{n-\nu}{n}+\alpha
\ge \frac{n-\nu}{n}+\frac{\nu}{n}
=1.
\]
Thus, every term in this sequence is at least 1, and hence their floors are at least 1. This means $g(1)=n$.
Now, suppose $k\ge2$ and let $t=(n-\nu)/(k-\alpha)$. Observe that $0<t<n-\nu\le n$. We will show that a term of Sequence~\eqref{seq:alpha-shift} is at least $k$ if and only if its position $j$ satisfies $j\le\Floor{t}$; since $\Floor{t}<n$, this gives $g(k)=\Floor{t}=\Floor{(n-\nu)/(k-\alpha)}$, as claimed.
First, note that
\begin{equation}\label{eq:ineq-chain}
\frac{n-\nu}{t}+\alpha
=
\frac{n-\nu}{(n-\nu)/(k-\alpha)}+\alpha
= k.
\end{equation}
Suppose $1\le j\le\Floor{t}$. Then $j\le t$, so
\[
\frac{n-\nu}{j}+\alpha
\ge
\frac{n-\nu}{t}+\alpha
=k,
\]
and hence $\Floor{(n-\nu)/j+\alpha}\ge k$.
Now suppose $\Floor{t}<j\le n$. Since $j$ is an integer, $j\ge\Floor{t}+1>t$, so
\[
\frac{n-\nu}{j}+\alpha
<
\frac{n-\nu}{t}+\alpha
=
k.
\]
Since $k$ is an integer, it follows that $\Floor{(n-\nu)/j+\alpha}<k$.
This completes the proof.
\end{proof}
Note that in the above proof, we considered the cases $k=1$ and $k\ge2$ separately. Our argument for $k\ge2$ doesn't work for $k=1$ because, for $n\alpha>\nu$, our value of $t$ would be $t=(n-\nu)/(1-\alpha)>n$, and we cannot have more than $n$ terms in a sequence of $n$ terms. This explains why we had to get rid of ``extra'' terms in the summation formula for $\Rseq_n$ (Proposition~\ref{prop:rounding_seq}), which involves $k=1$, $\alpha=1/2$, and $\nu=0$, whereas we had no such adjustment in the summation formula for $\Fseq_n$ (Proposition~\ref{prop:floor_seq}), which has $\alpha=\nu=0$.
In what follows, we will count the number of terms in Sequence~\eqref{seq:alpha-shift} that are congruent to $r$ modulo $m$. Since we have slightly different results for $k=1$ and for $k\ge2$ in Lemma~\ref{lem:consec-general}, we will have slightly different results for congruence classes with $r=1$ and with $2\le r\le m$ in the proposition (and subsequent results) below.
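Lemma~\ref{lem:consec-general} can be spot-checked with exact rational arithmetic; the code below is ours, and the parameters $\alpha=1/2$, $\nu=1/4$ are chosen so that $n\alpha\ge\nu$ holds for every $n\ge1$.

```python
from fractions import Fraction
import math

def g_direct(n, alpha, nu, k):
    # number of terms floor((n - nu)/j + alpha), j = 1..n, that are >= k
    return sum(1 for j in range(1, n + 1)
               if math.floor((n - nu) / j + alpha) >= k)

def g_formula(n, alpha, nu, k):
    # n for k = 1, floor((n - nu)/(k - alpha)) for k >= 2, as in the lemma
    return n if k == 1 else math.floor((n - nu) / (k - alpha))

alpha, nu = Fraction(1, 2), Fraction(1, 4)   # n*alpha >= nu for every n >= 1
for n in range(1, 60):
    for k in range(1, n + 3):
        assert g_direct(n, alpha, nu, k) == g_formula(n, alpha, nu, k)
```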
\begin{prop}\label{prop:generalized-sum-of-floors}
Suppose $m\ge2$ and $\alpha,\nu\in[0,1)$. Then for all $n\in\N$ with $n\alpha\ge\nu$,
\[
\Num_{n,\alpha,\nu,1,m}
= n-\Floor{\frac{(n-\nu)}{1-\alpha}}+\sum\limits_{i\ge0} \left(\Floor{\frac{(n-\nu)}{1+im-\alpha}} - \Floor{\frac{(n-\nu)}{2+im-\alpha}}\right)
\]
and, for $2\le r\le m$,
\[
\Num_{n,\alpha,\nu,r,m}
= \sum\limits_{i\ge0} \left(\Floor{\frac{(n-\nu)}{r+im-\alpha}} - \Floor{\frac{(n-\nu)}{r+1+im-\alpha}}\right).
\]
\end{prop}
\begin{proof}
By Lemma~\ref{lem:consec-general}, we have a formula for $g(d)$, the number of integers in Sequence~\eqref{seq:alpha-shift} that are greater than or equal to a given value $d$. Now, let $G(d)$ equal the number of integers in Sequence~\eqref{seq:alpha-shift} that are equal to a given value $d$. Then $G(d) = g(d)-g(d+1)$.
We first compute $G(1) = g(1)-g(2) = n - \Floor{(n-\nu)/(2-\alpha)}$. Then, for $d\ge2$,
\[
G(d)
= g(d)-g(d+1)
= \Floor{(n-\nu)/(d-\alpha)}-\Floor{(n-\nu)/(d+1-\alpha)}.
\]
To compute $\Num_{n,\alpha,\nu,r,m}$, we need to count the number of integers in Sequence~\eqref{seq:alpha-shift} that are congruent to $r$ modulo $m$. Since $1\le r\le m$, we have $\Num_{n,\alpha,\nu,r,m} = G(r) + G(r+m) + G(r+2m) + \dots$.
For $2\le r\le m$, we have
\[
\Num_{n,\alpha,\nu,r,m}
= \sum\limits_{i\ge0}G(r+im)
= \sum\limits_{i\ge0}\left(\Floor{\frac{(n-\nu)}{r+im-\alpha}} - \Floor{\frac{(n-\nu)}{r+1+im-\alpha}}\right).
\]
For $r=1$, we have
\[
\Num_{n,\alpha,\nu,1,m}
= \sum\limits_{i\ge0}G(1+im)
= n - \Floor{\frac{(n-\nu)}{2-\alpha}} + \sum\limits_{i\ge1}\left(\Floor{\frac{(n-\nu)}{1+im-\alpha}} - \Floor{\frac{(n-\nu)}{2+im-\alpha}}\right).
\]
Including the $i=0$ term in the summation, to make it more closely resemble the formula for $r\ne 1$, adds $\Floor{(n-\nu)/(1-\alpha)}-\Floor{(n-\nu)/(2-\alpha)}$; compensating by replacing $-\Floor{(n-\nu)/(2-\alpha)}$ with $-\Floor{(n-\nu)/(1-\alpha)}$ yields the stated result.
\end{proof}
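Both formulas of Proposition~\ref{prop:generalized-sum-of-floors} can be verified exactly for small parameters; this check is our own, again with exact fractions, and truncates the sums once the terms vanish.

```python
from fractions import Fraction
import math

def N_direct(n, alpha, nu, r, m):
    # direct count of terms congruent to r mod m
    return sum(1 for k in range(1, n + 1)
               if math.floor((n - nu) / k + alpha) % m == r % m)

def N_floor_sums(n, alpha, nu, r, m):
    # the proposition's formulas; terms vanish once r + i*m - alpha > n - nu
    s = sum(math.floor((n - nu) / (r + i * m - alpha))
            - math.floor((n - nu) / (r + 1 + i * m - alpha))
            for i in range(n + 1))
    if r == 1:
        return n - math.floor((n - nu) / (1 - alpha)) + s
    return s

alpha, nu = Fraction(1, 2), Fraction(1, 4)   # n*alpha >= nu for all n >= 1
for n in range(1, 50):
    for m in (2, 3, 5):
        for r in range(1, m + 1):
            assert N_direct(n, alpha, nu, r, m) == N_floor_sums(n, alpha, nu, r, m)
```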
\subsection{An asymptotic formula via summation}
Our next task is to evaluate the sum
\[
\sum\limits_{i\ge0}\left(\Floor{\frac{(n-\nu)}{r+im-\alpha}}-\Floor{\frac{(n-\nu)}{r+1+im-\alpha}}\right)
\]
that appears in Proposition~\ref{prop:generalized-sum-of-floors}.
We'll start by simplifying $\frac{1}{n-\nu}$ times this sum. For $x\in\mathbb{R}$, let $\{x\}$ denote the fractional part of $x$. (I.e., let $\{x\}=x-\Floor{x}$.) Then
\[
\frac{1}{n-\nu}\sum\limits_{i\ge0}\left(\Floor{\frac{n-\nu}{r+im-\alpha}}-\Floor{\frac{n-\nu}{r+1+im-\alpha}}\right)
=A_{\alpha,r,m}+\frac{1}{n-\nu}B_{n,\alpha,\nu,r,m}
\]
for the quantities
\[
A_{\alpha,r,m}=\sum\limits_{i\ge0}\left(\frac{1}{r+im-\alpha}-\frac{1}{r+1+im-\alpha}\right)
\]
and
\[
B_{n,\alpha,\nu,r,m}=\sum\limits_{i\ge0}\left(\fpf{n-\nu}{r+im-\alpha}-\fpf{n-\nu}{r+1+im-\alpha}\right).
\]
In general, $A_{\alpha,r,m}$ is an alternating series in which the absolute values of the terms decrease to zero, so it converges by the Alternating Series Test. The series $B_{n,\alpha,\nu,r,m}$ is also alternating. If its terms were decreasing in absolute value, then we would have $B_{n,\alpha,\nu,r,m}=\bigO(1)$ and thus $(1/n) B_{n,\alpha,\nu,r,m}=\bigO(1/n)$. Unfortunately, this isn't the case.
If we can get an asymptotic formula for $B_{n,\alpha,\nu,r,m}$, then we will have an asymptotic formula for $\Num_{n,\alpha,\nu,r,m}$. To start, we will revisit our results for $\Fseq_n$ and $\Rseq_n$ from earlier in this paper.
\begin{example}\label{ex:floor}
The floor sequence $\Fseq_n$.
For $\alpha=\nu=0$, $r=1$, and $m=2$, we have $\Num_{n,0,0,1,2}=\Fseq_n$ for all $n\in\N$. Then
\[
A_{0,1,2}
=\frac{1}{1}-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\dots
=\sum\limits_{k=1}^\infty (-1)^{k+1}\frac{1}{k}
\]
and
\[
B_{n,0,0,1,2}
=\fpf{n}{1}-\fpf{n}{2}+\fpf{n}{3}-\fpf{n}{4}+\dots
=\sum\limits_{k=1}^\infty (-1)^{k+1}\fpf{n}{k}.
\]
We see that $A_{0,1,2}=\log2$. (This is the Maclaurin series for $\ln(1+x)$, which converges for $-1<x\le 1$, evaluated at $x=1$.) By Proposition~\ref{prop:floor_seq_asymp}, $\Num_{n,0,0,1,2}=\Fseq_n=n\log2+\bigO\left(\sqrt{n}\right)$. By Proposition~\ref{prop:generalized-sum-of-floors}, $B_{n,0,0,1,2}=\bigO\left(\sqrt{n}\right)$.
\end{example}
\begin{example}\label{ex:rounding}
The rounding sequence $\Rseq_n$. For $\alpha=1/2$, $\nu=0$, $r=1$, and $m=2$, we have $\Num_{n,1/2,0,1,2}=\Rseq_n$ for all $n\in\N$. Then
\[
A_{1/2,1,2}
=\frac{2}{1}-\frac{2}{3}+\frac{2}{5}-\frac{2}{7}+\dots
=\sum\limits_{k=1}^\infty(-1)^{k+1}\frac{2}{2k-1}
\]
and
\[
B_{n,1/2,0,1,2}
=\fpf{2n}{1}-\fpf{2n}{3}+\fpf{2n}{5}-\fpf{2n}{7}+\dots
=\sum\limits_{k=1}^\infty(-1)^{k+1}\fpf{2n}{2k-1}.
\]
We see that $A_{1/2,1,2}=\pi/2$. (This is twice the Maclaurin series for $\arctan(x)$, which converges for $-1\le x\le 1$, evaluated at $x=1$.) By Proposition~\ref{prop:rounding_seq_asymp}, $\Num_{n,1/2,0,1,2}=\Rseq_n=(\pi/2-1)n+\bigO\left(\sqrt{n}\right)$.
Combining this with Proposition~\ref{prop:generalized-sum-of-floors}, we find $B_{n,1/2,0,1,2}=\bigO\left(\sqrt{n}\right)$.
\end{example}
As we will see with the following proposition, we have $B_{n,\alpha,\nu,r,m}=\bigO\left(\sqrt{n}\right)$ in general. For the proof, we will use Dirichlet's hyperbola method. Our method is adapted from the approach given in an answer on Mathematics Stack Exchange \cite{MSE-floor-summation} which proved that, for an increasing sequence of positive integers $b_1,b_2,b_3,\dots$,
\[
\sum\limits_{k\le n}\Floor{\frac{n}{b_k}}(-1)^k
= n\sum\limits_{k\le n}\frac{1}{b_k}(-1)^k + \bigO\left(\sqrt{n}\right).
\]
The terms in our series are not quite in this form, so we will prove a slightly more general result. To do so, we need a generalized notion of the term ``divides'' to work with real numbers.
\begin{defn}[Real-ly divides]
Let $a\in\mathbb{R}$ and $b\in\mathbb{Z}$. We say $a$ \emph{real-ly divides} $b$ if there is some $d\in\Z$ for which $\Ceil{da}=b$. We denote this by $a\reallydivs b$, and we say $a$ is a \emph{real divisor} of $b$.
\end{defn}
To see that this definition generalizes the usual definition of ``divides,'' let's suppose that we have $a,b\in\Z$ with $a\mid b$. Then there is some $d\in\Z$ for which $da=b$. Hence, $\Ceil{da}=da=b$, and so $a\reallydivs b$.
\begin{remark}
We will only use this definition of \emph{real-ly divides} with positive numbers. However, if one wants to use this with negative numbers as well, it may be beneficial to modify this definition so that it has some symmetry with positive and negative numbers. One could say a real number $a$ real-ly divides an integer $b$ if there is some integer $d$ for which one of the following holds: either $b\ge0$ and $\Ceil{da}=b$; or $b<0$ and $\Floor{da}=b$. With this, one would additionally have $a\reallydivs b$ if and only if $(-a)\reallydivs b$.
\end{remark}
The key property that we need is the following lemma.
\begin{lem}\label{lem:really-divides-summation}
Let $n\in\N$, $\nu\in[0,1)$, and $a\in\mathbb{R}$ with $a\ge1$. Then
\[
\Floor{\frac{n-\nu}{a}} = \sum\limits_{\substack{1\le d\le n-\nu \\ a\reallydivs d}} 1.
\]
\end{lem}
\begin{proof}
To start, we have $n-\nu>0$ and
\[
\Floor{(n-\nu)/a}
= \#\left\{ka : k\in\Z,\, k\ge 1,\, ka\le n-\nu\right\}.
\]
Since $a\ge1$, $\Ceil{k_1a}\ne\Ceil{k_2a}$ for all integers $k_1\ne k_2$. Thus,
\[
\Floor{(n-\nu)/a}
= \#\left\{\Ceil{ka} : k\in\Z,\, k\ge 1,\, ka\le n-\nu\right\}.
\]
But this is just the cardinality of the set of numbers that $a$ real-ly divides. We therefore have
\[
\Floor{(n-\nu)/a}
= \#\left\{d\in\Z : 1\le d\le n-\nu,\, a\reallydivs d\right\} = \sum\limits_{\substack{1\le d\le n-\nu \\ a\reallydivs d}} 1,
\]
as desired.
\end{proof}
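As a sanity check (our code, not part of the argument), the counting identity can be tested directly; for simplicity we take $\nu=0$, so that the bound $n-\nu$ is an integer, and implement the definition of real-ly divides by a finite search.

```python
import math

def really_divides(a, b):
    # a real-ly divides b if ceil(d * a) == b for some positive integer d;
    # since a >= 1 forces d <= b, a finite search suffices
    return any(math.ceil(d * a) == b for d in range(1, b + 1))

def count_real_divisor_multiples(a, x):
    # number of integers d with 1 <= d <= x that a real-ly divides
    return sum(1 for d in range(1, math.floor(x) + 1) if really_divides(a, d))

for n in range(1, 40):
    for a in (1, 1.5, 2.75, 3, math.sqrt(2)):
        # the lemma with nu = 0: floor(n/a) = #{1 <= d <= n : a really-divides d}
        assert math.floor(n / a) == count_real_divisor_multiples(a, n)
```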
We can now apply Dirichlet's hyperbola method to show $B_{n,\alpha,\nu,r,m}=\bigO\left(\sqrt{n}\right)$.
\begin{prop}\label{prop:sum-of-floors-to-sum-of-fractions}
For $\nu\in[0,1)$ and any increasing sequence $b_0,b_1,b_2,\dots$ of positive real numbers with the property that $b_k\ge1$ and $b_k\ge k$ for all $k$, we have, as $n\to\infty$,
\[
\sum\limits_{k\le n-\nu}\Floor{\frac{n-\nu}{b_k}}(-1)^k = n\sum\limits_{k\le n-\nu}\dfrac{(-1)^k}{b_k}+\bigO\left(\sqrt{n}\right).
\]
\end{prop}
\begin{proof}
Let $f(n)=\displaystyle\sum\limits_{k\le n-\nu} \Floor{\frac{n-\nu}{b_k}}(-1)^k$. To start, by Lemma~\ref{lem:really-divides-summation}, we have
\[
f(n)
= \sum\limits_{k\le n-\nu}(-1)^k\sum\limits_{\substack{d\le n-\nu \\ b_k\reallydivs d}} 1.
\]
Changing the order of summation,
\[
f(n)
= \sum\limits_{d\le n-\nu}\sum\limits_{\substack{b_k\reallydivs d \\ k\le n-\nu}} (-1)^k.
\]
We are summing over $d\le n-\nu$ and over real divisors $b_k$ of $d$; any such $b_k$ satisfies $b_k\le d\le n-\nu$. As we have assumed that $b_k\ge k$, this implies $k\le n-\nu$, so we need not state the condition $k\le n-\nu$ explicitly. We therefore have
\[
f(n)
= \sum\limits_{d\le n-\nu}\sum\limits_{b_k\reallydivs d} (-1)^k.
\]
If $b_k\reallydivs d$, then $\Ceil{b_kd'}=d$ for some integer $d'\ge1$, so that $d-1<b_kd'\le d$. Thus, instead of summing over $d\le n-\nu$, we may sum over pairs $(d',k)$ such that $b_kd'\le n-\nu$:
\[
f(n)
= \sum\limits_{b_kd'\le n-\nu}(-1)^k.
\]
We will now use Dirichlet's hyperbola method. Consider the region $R$ in the first quadrant of the $xy$-plane that is bounded by the hyperbola $xy=n-\nu$, the line $x=1$, and the line $y=1$. Let $A>0$. We split the region $R$ into 3 subregions: $R_1$, the portion of $R$ which lies above the line $y=(n-\nu)/A$; $R_2$, the portion of $R$ which lies to the right of the line $x=A$; and $R_3$, which is the rectangle $[1,A]\times[1,(n-\nu)/A]$. (See Figure~\ref{fig:hyperbola}.)
\begin{figure}
\centering
\hspace{1in}
\includegraphics[scale=.7]{hyperbola.eps}
\caption{The regions $R_1$, $R_2$, $R_3$}
\label{fig:hyperbola}
\end{figure}
It follows that for each combination of $b_k$ and $d'$ such that $b_kd'\le n-\nu$, there is a point $(x,y)=(d',b_k)$ in $R$. Hence, this point is in exactly one of $R_1$, $R_2$, $R_3$.
We can sum over points $(d',b_k)$ in $R_1\cup R_3$ and $R_2\cup R_3$, and then subtract a summation over $R_3$ because we have double counted. We have
\begin{equation}\label{eqn:dirichlet-eq-5}
f(n)
=
\sum\limits_{d'\le A}\sum\limits_{b_k\le (n-\nu)/d'}(-1)^k
+ \sum\limits_{b_k\le (n-\nu)/A}\sum\limits_{d'\le (n-\nu)/b_k}(-1)^k
- \sum\limits_{d'\le A}\sum\limits_{b_k\le (n-\nu)/A} (-1)^k.
\end{equation}
Since $b_0,b_1,b_2,\dots$ is an increasing sequence, $\sum\limits_{b_k\le x}(-1)^k\in\{0,1\}$. This summation is $\bigO(1)$. The first and third double sums in Equation~\eqref{eqn:dirichlet-eq-5} are $\bigO(A)$. Hence,
\[
f(n)
= \sum\limits_{b_k\le (n-\nu)/A}\sum\limits_{d'\le (n-\nu)/b_k}(-1)^k + \bigO(A)
=\sum\limits_{b_k\le (n-\nu)/A}(-1)^k\Floor{\frac{(n-\nu)}{b_k}}+\bigO(A).
\]
Then, since $\Floor{(n-\nu)/b_k}=(n-\nu)/b_k + \bigO(1)$,
\[
f(n) =(n-\nu) \sum\limits_{b_k\le (n-\nu)/A} \frac{(-1)^k}{b_k} + \bigO\left(A+\frac{(n-\nu)}{A}\right).
\]
Taking $A=\sqrt{n-\nu}$, we have
\[
f(n)
= (n-\nu)\sum\limits_{b_k\le \sqrt{n-\nu}}\frac{(-1)^k}{b_k}+\bigO\left(\sqrt{n-\nu}\right).
\]
All that remains is to modify the summation so that it runs over $k\le n-\nu$. Let $K_n=\min\{k : b_k>\sqrt{n-\nu}\}$. Then
\[
\left|\sum\limits_{K_n\le k\le n-\nu}\dfrac{(-1)^k}{b_k}\right|
\le\dfrac{1}{b_{K_n}}
<\dfrac{1}{\sqrt{n-\nu}}
=\bigO\left(\dfrac{1}{\sqrt{n-\nu}}\right),
\]
where the bound by $1/b_{K_n}$ holds because we have a finite alternating series whose terms decrease in absolute value.
Thus,
\begin{align*}
\sum\limits_{b_k\le\sqrt{n-\nu}}\dfrac{(-1)^k}{b_k}
&= \sum\limits_{k< K_n}\dfrac{(-1)^k}{b_k} \\
&= \sum\limits_{k\le n-\nu}\dfrac{(-1)^k}{b_k} - \sum\limits_{K_n\le k\le n-\nu}\dfrac{(-1)^k}{b_k} \\
&= \sum\limits_{k\le n-\nu}\dfrac{(-1)^k}{b_k}+\bigO\left(1/\sqrt{n-\nu}\right).
\end{align*}
Therefore,
\begin{align*}
f(n)
&= (n-\nu)\sum\limits_{b_k\le \sqrt{n-\nu}}\frac{(-1)^k}{b_k}+\bigO\left(\sqrt{n-\nu}\right) \\
&= (n-\nu)\sum\limits_{k\le n-\nu}\dfrac{(-1)^k}{b_k}+\bigO\left(\sqrt{n-\nu}\right) \\
&= (n-\nu)\sum\limits_{k\le n-\nu}\dfrac{(-1)^k}{b_k}+\bigO\left(\sqrt{n}\right).
\end{align*}
Finally, since the partial sums $\sum_{k\le n-\nu}(-1)^k/b_k$ are bounded, replacing the factor $n-\nu$ by $n$ changes the expression by only $\nu\cdot\bigO(1)=\bigO(1)$, and we get the stated result.
\end{proof}
Combining Proposition~\ref{prop:generalized-sum-of-floors} and Proposition~\ref{prop:sum-of-floors-to-sum-of-fractions}, we obtain an asymptotic formula for $\Num_{n,\alpha,\nu,r,m}$. We'll first give the result for $2\le r\le m$, followed by a slight modification to get the result for $r=1$. (In the proof of Proposition~\ref{prop:generalized-sum-of-floors}, we saw that counting with $r=1$ is slightly different than counting with $r\ne1$. Fortunately, via Equation~\eqref{eqn:sum-is-n}, if we can count for $r=2,\dots,m$, then we get a count for $r=1$ for free.)
\begin{cor}\label{cor:generalized-sum-of-floors}
For $2\le r\le m$, $\alpha,\nu\in[0,1)$, and $n\in\N$ with $n\alpha\ge\nu$,
\[
\Num_{n,\alpha,\nu,r,m}
= n\sum\limits_{i\ge0}\left(\frac{1}{r+im-\alpha}-\frac{1}{r+1+im-\alpha}\right) + \bigO\left(\sqrt{n}\right).
\]
\end{cor}
\begin{proof}
For integers $r,m$ with $2\le r\le m$ and for $\alpha\in[0,1)$, define the sequence $b_0,b_1,b_2,\dots$ as follows. For $i\ge0$, let $b_{2i}=(r+im)-\alpha$ and $b_{2i+1}=(r+im)-\alpha+1$. Then, for $\nu\in[0,1)$,
\begin{equation}\label{eqn:summation-floor-difference}
\sum\limits_{k\le n-\nu}\Floor{\frac{n-\nu}{b_k}}(-1)^k
=
\sum\limits_{i\ge0}\left(\Floor{\frac{n-\nu}{r+im-\alpha}}-\Floor{\frac{n-\nu}{r+1+im-\alpha}}\right).
\end{equation}
(While one series is finite and the other is infinite, note that the terms in the infinite series are zero for all $i>(n-\nu-r+\alpha)/m$.)
We wish to show the sequence $b_0,b_1,b_2,\dots$ satisfies the conditions of Proposition~\ref{prop:sum-of-floors-to-sum-of-fractions}. We will show $b_k\ge 1$, $b_k\ge k$, and $b_{k+1}>b_k$ for all $k\ge0$.
If $k$ is even, then $k=2i$ for some $i$ and we have $b_k=(r+km/2)-\alpha$. Since $m\ge2$, we have $b_k\ge k + r-\alpha \ge k$. If $k$ is odd, then $k=2i+1$ for some $i$ and we have $b_k=r+(k-1)m/2-\alpha+1$. Since $m\ge2$, we have $b_k\ge (k-1)+r-\alpha+1\ge k$. Thus, $b_k\ge k$ for all $k\ge0$. Additionally, since $b_0=r-\alpha\ge2-\alpha>1$ and $b_k\ge k\ge1$ for all $k\ge1$, we have $b_k\ge1$ for all $k\ge0$. Finally, for $k$ even, $b_{k+1}-b_k=1$, and for $k$ odd, $b_{k+1}-b_k=m-1\ge1$. Hence, $b_{k+1}>b_k$ for all $k\ge0$.
Since the sequence $b_0,b_1,b_2,\dots$ satisfies the conditions of Proposition~\ref{prop:sum-of-floors-to-sum-of-fractions}, we conclude that
\[
\sum\limits_{k\le n-\nu}\Floor{\frac{n-\nu}{b_k}}(-1)^k
= n\sum\limits_{k\le n-\nu}\frac{(-1)^k}{b_k} + \bigO\left(\sqrt{n}\right).
\]
Next, since we have an alternating series and $b_k\ge k$ for all $k$, we have
\[
\left|\sum\limits_{k>n-\nu}\frac{(-1)^k}{b_k} \right|
\le\frac{1}{b_{n}}
\le\frac{1}{n}
=\bigO\left(\frac{1}{n}\right).
\]
Thus,
\begin{align*}
\sum\limits_{k\le n-\nu}\Floor{\frac{n-\nu}{b_k}}(-1)^k
&= n\sum\limits_{k\ge0}\frac{(-1)^k}{b_k}-n\sum\limits_{k>n-\nu}\frac{(-1)^k}{b_k}+\bigO\left(\sqrt{n}\right) \\
&= n\sum\limits_{k\ge0}\frac{(-1)^k}{b_k} -n\bigO\left(\frac{1}{n}\right)+\bigO\left(\sqrt{n}\right) \\
&= n\sum\limits_{k\ge0}\frac{(-1)^k}{b_k}+\bigO(1)+ \bigO\left(\sqrt{n}\right).
\end{align*}
Combined with Equation~\eqref{eqn:summation-floor-difference}, we conclude
\[
\sum\limits_{i\ge0}\left(\Floor{\frac{n-\nu}{r+im-\alpha}}-\Floor{\frac{n-\nu}{r+1+im-\alpha}}\right)
=
n\sum\limits_{i\ge0}\left(\frac{1}{r+im-\alpha}-\frac{1}{r+1+im-\alpha}\right)+\bigO\left(\sqrt{n}\right).
\]
Together with Proposition~\ref{prop:generalized-sum-of-floors}, this proves the result for $2\le r\le m$.
\end{proof}
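The identity in Equation~\eqref{eqn:summation-floor-difference}, linking the interleaved sequence $b_k$ to the paired floors, is exact and easy to verify; the code below is ours and uses exact rationals throughout.

```python
from fractions import Fraction
import math

def lhs(n, alpha, nu, r, m):
    # sum over k <= n - nu of (-1)^k * floor((n - nu)/b_k), where
    # b_{2i} = r + i*m - alpha and b_{2i+1} = r + i*m - alpha + 1
    total, k = 0, 0
    while k <= n - nu:
        i, odd = divmod(k, 2)
        b_k = r + i * m - alpha + odd
        total += (-1) ** k * math.floor((n - nu) / b_k)
        k += 1
    return total

def rhs(n, alpha, nu, r, m):
    # paired-floor form; terms vanish once r + i*m - alpha > n - nu
    return sum(math.floor((n - nu) / (r + i * m - alpha))
               - math.floor((n - nu) / (r + 1 + i * m - alpha))
               for i in range(n + 1))

alpha, nu = Fraction(1, 2), Fraction(1, 4)
for n in range(1, 50):
    for m in (2, 3, 4):
        for r in range(2, m + 1):
            assert lhs(n, alpha, nu, r, m) == rhs(n, alpha, nu, r, m)
```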
We now use Equation~\eqref{eqn:sum-is-n} and Corollary~\ref{cor:generalized-sum-of-floors} to compute $\Num_{n,\alpha,\nu,r,m}$ for $r=1$.
\begin{cor}\label{cor:generalized-sum-of-floors-r=1}
For any $m\ge2$, $\alpha,\nu\in[0,1)$, and $n\in\N$ with $n\alpha\ge\nu$,
\[
\Num_{n,\alpha,\nu,1,m}
= \frac{-\alpha n}{1-\alpha}+n\sum\limits_{i\ge0}\left(\frac{1}{1+im-\alpha}-\frac{1}{2+im-\alpha}\right) + \bigO\left(\sqrt{n}\right).
\]
\end{cor}
\begin{proof}
By Equation~\eqref{eqn:sum-is-n},
\[
\Num_{n,\alpha,\nu,1,m}
= n-\sum\limits_{r=2}^m \Num_{n,\alpha,\nu,r,m}.
\]
Then, by Corollary~\ref{cor:generalized-sum-of-floors},
\begin{align*}
\Num_{n,\alpha,\nu,1,m}
& = n - \sum\limits_{r=2}^m \left[n\sum\limits_{i\ge0}\left(\frac{1}{r+im-\alpha}-\frac{1}{r+1+im-\alpha}\right)+\bigO\left(\sqrt{n}\right)\right] \\
& = n - \sum\limits_{r=2}^m n\sum\limits_{i\ge0}\left(\frac{1}{r+im-\alpha}-\frac{1}{r+1+im-\alpha}\right)+\bigO\left(\sqrt{n}\right) \\
& = n - n\sum\limits_{i\ge0}\sum\limits_{r=2}^m\left(\frac{1}{r+im-\alpha}-\frac{1}{r+1+im-\alpha}\right) +\bigO\left(\sqrt{n}\right),
\end{align*}
where we have changed the order of summation in the last step because we have a finite number of convergent alternating series. For each $i\ge0$, the inner sum over $r$ telescopes, and we can manipulate the result as follows:
\begin{align*}
\Num_{n,\alpha,\nu,1,m}
& = n - n\sum\limits_{i\ge0} \left(\frac{1}{2+im-\alpha}-\frac{1}{m+1+im-\alpha}\right)+\bigO\left(\sqrt{n}\right) \\
& = n -\frac{n}{1-\alpha}+\frac{n}{1-\alpha}- n\sum\limits_{i\ge0} \left(\frac{1}{2+im-\alpha}-\frac{1}{m+1+im-\alpha}\right)+\bigO\left(\sqrt{n}\right) \\
&= n - \frac{n}{1-\alpha} + n\sum\limits_{i\ge0} \left(\frac{1}{1+im-\alpha}-\frac{1}{2+im-\alpha}\right)+\bigO\left(\sqrt{n}\right).
\end{align*}
Note that we have merely inserted $-n/(1-\alpha)+n/(1-\alpha)$ into our expression, and that we have not changed the order of summation in doing so. Hence,
\[
\Num_{n,\alpha,\nu,1,m}
= \frac{-\alpha n}{1-\alpha} + n\sum\limits_{i\ge0} \left(\frac{1}{1+im-\alpha}-\frac{1}{2+im-\alpha}\right)+\bigO\left(\sqrt{n}\right),
\]
as desired.
\end{proof}
\subsection{An asymptotic formula via integration}
From integral calculus, we know how to evaluate sums like $1-1/2+1/3-1/4+\dots$ and $1-1/3+1/5-1/7+\dots$ via integration of Maclaurin series as mentioned in Example~\ref{ex:floor} and Example~\ref{ex:rounding}. We will take the same approach to evaluate the summation that appears in Corollary~\ref{cor:generalized-sum-of-floors} and Corollary~\ref{cor:generalized-sum-of-floors-r=1}.
The following proposition shows us how. As with previous work in this section, we will first obtain a result for $2\le r\le m$ and then use Equation~\eqref{eqn:sum-is-n} to obtain a result for $r=1$.
\begin{prop}\label{prop:alt-sum-integral-formula}
For integers $r,m$ with $2\le r\le m$, and for any $\alpha\in [0,1)$,
\[
\sum\limits_{i\ge0}\left(\frac{1}{r+im-\alpha}-\frac{1}{r+1+im-\alpha}\right)
=\int\limits_0^1\dfrac{(1-x)x^{r-1-\alpha}}{1-x^m}\mathrm{d}x.
\]
\end{prop}
\begin{proof}
To start, for $\beta=r-1-\alpha$, a positive real number, and for any non-negative integer $k$, let
\[
f_k(x)
=x^{\beta}\sum\limits_{i=0}^k\left(x^{im}-x^{im+1}\right).
\]
Then, for all $k\ge0$ we have
\[
|f_k(x)|
=\left|x^\beta (1-x)\sum\limits_{i=0}^k x^{im} \right|
= \left| x^\beta (1-x) \frac{1-x^{(k+1)m}}{1-x^m} \right|
= \left| \frac{x^\beta}{1+x+x^2+\dots+x^{m-1}} \right| \left|1-x^{(k+1)m}\right|,
\]
a product of the absolute values of two functions, each of which is at most 1 for all $x\in[0,1]$. (We are using the fact that $\beta>0$ here.)
Thus, $|f_k(x)|\le 1\cdot1=1$ for all $x\in[0,1]$. Since 1 is integrable on $[0,1]$, by the Dominated Convergence Theorem we have
\[
\lim\limits_{k\to\infty}\int\limits_0^1 f_k(x)\,\mathrm{d}x
= \int\limits_0^1 \lim\limits_{k\to\infty} f_k(x)\,\mathrm{d}x.
\]
Starting with the integral in the statement of this proposition, and applying this limit and integration interchange, we find
\begin{align*}
\int\limits_0^1\dfrac{(1-x)x^{r-1-\alpha}}{1-x^m}\mathrm{d}x
&= \int\limits_0^1 \lim\limits_{k\to\infty} f_k(x)\mathrm{d}x
= \lim\limits_{k\to\infty}\int\limits_0^1 f_k(x)\mathrm{d}x
= \lim\limits_{k\to\infty} \int\limits_0^1 x^{\beta}\sum\limits_{i=0}^k \left(x^{im}-x^{im+1}\right)\mathrm{d}x \\
&= \lim\limits_{k\to\infty}\sum\limits_{i=0}^k \left(\frac{x^{\beta+1+im}}{\beta+1+im} - \frac{x^{\beta+2+im}}{\beta+2+im}\right)\Bigg|_{x=0}^{x=1} \\
& = \lim\limits_{k\to\infty}\sum\limits_{i=0}^k\left(\frac{1}{\beta+1+im}-\frac{1}{\beta+2+im}\right) \\
& = \sum\limits_{i=0}^\infty\left(\frac{1}{r+im-\alpha}-\frac{1}{r+1+im-\alpha}\right),
\end{align*}
as desired.
\end{proof}
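Although not needed for the proof, the identity lends itself to a quick numerical sanity check: partial sums of the series should match a quadrature approximation of the integral. Below is a minimal sketch in Python (our own illustration, with the arbitrary choice $r=2$, $m=3$, $\alpha=1/2$); the midpoint rule sidesteps the removable singularity of the integrand at $x=1$.

```python
# Compare the series and the integral from the proposition numerically,
# for the illustrative choice r = 2, m = 3, alpha = 1/2.
r, m, alpha = 2, 3, 0.5

# Partial sum of sum_{i>=0} (1/(r+im-alpha) - 1/(r+1+im-alpha));
# paired terms decay like 1/(m*i)^2, so 10^5 terms suffice here.
series = sum(1.0 / (r + i * m - alpha) - 1.0 / (r + 1 + i * m - alpha)
             for i in range(100_000))

# Midpoint rule for the integral of (1-x) x^(r-1-alpha) / (1-x^m) on [0,1];
# midpoints avoid evaluating at the endpoint x = 1.
N = 200_000
integral = 0.0
for k in range(N):
    x = (k + 0.5) / N
    integral += (1 - x) * x ** (r - 1 - alpha) / (1 - x ** m)
integral /= N

assert abs(series - integral) < 1e-3
```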
Combining Corollary~\ref{cor:generalized-sum-of-floors} and Proposition~\ref{prop:alt-sum-integral-formula}, we obtain the following asymptotic formula for $\Num_{n,\alpha,\nu,r,m}$ with $2\le r\le m$, which we can extend to $r=1$ via Equation~\eqref{eqn:sum-is-n}. This is our main result.
\begin{thm}\label{thm:N_m-asymp}
Suppose $m\ge1$, $\alpha,\nu\in[0,1)$, and $n\in\N$ with $n\alpha\ge\nu$. Then
\[
\Num_{n,\alpha,\nu,1,m} = \frac{-\alpha n}{1-\alpha}+n\int\limits_0^1\dfrac{(1-x)x^{-\alpha}}{1-x^m}\mathrm{d}x + \bigO\left(\sqrt{n}\right),
\]
and, for $2\le r\le m$,
\[
\Num_{n,\alpha,\nu,r,m} = n\int\limits_0^1\dfrac{(1-x)x^{r-1-\alpha}}{1-x^m}\mathrm{d}x + \bigO\left(\sqrt{n}\right).
\]
\end{thm}
\begin{proof}
For $2\le r\le m$, the result follows from Corollary~\ref{cor:generalized-sum-of-floors} and Proposition~\ref{prop:alt-sum-integral-formula}. It remains to prove the result for $r=1$.
By Equation~\eqref{eqn:sum-is-n},
\[\Num_{n,\alpha,\nu,1,m}
= n - \sum\limits_{r=2}^m\Num_{n,\alpha,\nu,r,m}.\]
Thus,
\begin{align*}
\Num_{n,\alpha,\nu,1,m}
= n - \sum\limits_{r=2}^m\Num_{n,\alpha,\nu,r,m}
&= n - \sum\limits_{r=2}^m \left(n\int\limits_0^1\dfrac{(1-x)x^{r-1-\alpha}}{1-x^m}\mathrm{d}x+\bigO\left(\sqrt{n}\right)\right) \\
&= n - n\int\limits_0^1\dfrac{x^{-\alpha}(1-x)}{1-x^m}\sum\limits_{r=2}^m x^{r-1}\mathrm{d}x+\bigO\left(\sqrt{n}\right).
\end{align*}
The finite series inside the integral will cancel with the denominator nicely if we include one more term. We do so as follows:
\begin{align*}
\Num_{n,\alpha,\nu,1,m} - n\int\limits_0^1 \dfrac{(1-x)x^{-\alpha}}{1-x^m}\mathrm{d}x &= n - n\int\limits_0^1\dfrac{x^{-\alpha}(1-x)}{1-x^m}\sum\limits_{r=1}^m x^{r-1}\mathrm{d}x+\bigO\left(\sqrt{n}\right) \\
&=n-n\int\limits_0^1 x^{-\alpha}\mathrm{d}x +\bigO\left(\sqrt{n}\right).
\end{align*}
Since $0\le\alpha<1$, this improper integral converges to $1/(1-\alpha)$. Solving for $\Num_{n,\alpha,\nu,1,m}$, we find
\[
\Num_{n,\alpha,\nu,1,m} = n - \frac{n}{1-\alpha} + n\int\limits_0^1\dfrac{(1-x)x^{-\alpha}}{1-x^m}\mathrm{d}x + \bigO\left(\sqrt{n}\right),
\]
which simplifies to the stated result.
\end{proof}
Thus, $\Num_{n,\alpha,\nu,r,m}$ is asymptotically linear, and the formula is independent of $\nu$. We record the corresponding slope in the following corollary.
\begin{cor}\label{cor:N_m-slope}
For $\alpha,\nu\in[0,1)$ and $1\le r\le m$,
\[
\lim\limits_{n\to\infty}\frac{1}{n}\Num_{n,\alpha,\nu,r,m} =
\begin{dcases*}
\frac{-\alpha}{1-\alpha}+\int\limits_0^1\dfrac{(1-x)x^{-\alpha}}{1-x^m}\mathrm{d}x & if $r=1$,\\
\int\limits_0^1\dfrac{(1-x)x^{r-1-\alpha}}{1-x^m}\mathrm{d}x & if $2\le r\le m$.
\end{dcases*}
\]
\end{cor}
Via integration, we can compute specific values of $\lim\limits_{n\to\infty}\frac{1}{n}\Num_{n,\alpha,\nu,r,m}$ for $1\le r\le m\le 4$. For $\alpha=\nu=0$, exact and rounded values are in Figure~\ref{fig:shift-0-exact-values} and Figure~\ref{fig:shift-0-decimals}. For $\alpha=1/2$ and $\nu=0$, exact and rounded values are in Figure~\ref{fig:shift-1/2-exact-values} and Figure~\ref{fig:shift-1/2-decimals}.
\begin{figure}
\centering
\begin{tabular}{c||c|c|c|c|}
\backslashbox{$r$}{$m$} & 1 & 2 & 3 & 4 \\ \hline \hline
1 & $1$ & $\log\left(2\right)$ & $\frac{1}{9} \, \sqrt{3} \pi$ & $\frac{1}{8} \, \pi + \frac{1}{4} \, \log\left(2\right)$ \\ \hline
2 & & $-\log\left(2\right) + 1$ & $-\frac{1}{18} \, \sqrt{3} \pi + \frac{1}{2} \, \log\left(3\right)$ & $\frac{1}{8} \, \pi - \frac{1}{4} \, \log\left(2\right)$ \\ \hline
3 & & & $-\frac{1}{18} \, \sqrt{3} \pi - \frac{1}{2} \, \log\left(3\right) + 1$ & $-\frac{1}{8} \, \pi + \frac{3}{4} \, \log\left(2\right)$ \\ \hline
4 & & & & $-\frac{1}{8} \, \pi - \frac{3}{4} \, \log\left(2\right) + 1$ \\ \hline
\end{tabular}
\caption{Values of $\lim\limits_{n\to\infty}\frac{1}{n}\Num_{n,0,0,r,m}$ for $1\le r\le m\le 4$}
\label{fig:shift-0-exact-values}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{c||c|c|c|c|}
\backslashbox{$r$}{$m$} & 1 & 2 & 3 & 4 \\ \hline \hline
1 & $1.000000$ & $0.693147$ & $0.604600$ & $0.565986$ \\ \hline
2 & & $0.306853$ & $0.247006$ & $0.219412$ \\ \hline
3 & & & $0.148394$ & $0.127161$ \\ \hline
4 & & & & $0.087441$ \\ \hline
\end{tabular}
\caption{$\lim\limits_{n\to\infty}\frac{1}{n}\Num_{n,0,0,r,m}$ for $1\le r\le m\le 4$, rounded to 6 decimal places}
\label{fig:shift-0-decimals}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{c||c|c|c|c|}
\backslashbox{$r$}{$m$} & 1 & 2 & 3 & 4 \\ \hline \hline
1 & $1$ & $\frac{1}{2} \, \pi - 1$ & $\frac{1}{6} \, \sqrt{3} \pi + \frac{1}{2} \, \log\left(3\right) - 1$ & $\frac{1}{4} \, \pi + \frac{1}{4} \, \sqrt{2} \log\left(3+2\sqrt{2}\right) - 1$ \\ \hline
2 & & $-\frac{1}{2} \, \pi + 2$ & $\frac{1}{6} \, \sqrt{3} {\left(\pi - \sqrt{3} \log\left(3\right)\right)}$ & $\frac{1}{4} \, \pi {\left(\sqrt{2} - 1\right)}$ \\ \hline
3 & & & $-\frac{1}{3} \, \sqrt{3} \pi + 2$ & $\frac{1}{4} \, \pi - \frac{1}{4} \, \sqrt{2} \log\left(3+2\sqrt{2}\right)$ \\ \hline
4 & & & & $-\frac{1}{4} \, \pi {\left(\sqrt{2} + 1\right)} + 2$ \\ \hline
\end{tabular}
\caption{$\lim\limits_{n\to\infty}\frac{1}{n}\Num_{n,1/2,0,r,m}$ for $1\le r\le m\le 4$}
\label{fig:shift-1/2-exact-values}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{c||c|c|c|c|}
\backslashbox{$r$}{$m$} & 1 & 2 & 3 & 4 \\ \hline \hline
1 & $1.000000$ & $0.570796$ & $0.456206$ & $0.408623$ \\ \hline
2 & & $0.429204$ & $0.357594$ & $0.325323$ \\ \hline
3 & & & $0.186201$ & $0.162173$ \\ \hline
4 & & & & $0.103881$ \\ \hline
\end{tabular}
\caption{$\lim\limits_{n\to\infty}\frac{1}{n}\Num_{n,1/2,0,r,m}$ for $1\le r\le m\le 4$, rounded to 6 decimal places}
\label{fig:shift-1/2-decimals}
\end{figure}
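The tabulated constants can be double-checked numerically. The sketch below (our own verification code, not part of the original text) evaluates the slope through the series form $\sum_{i\ge0}\left(1/(r+im-\alpha)-1/(r+1+im-\alpha)\right)$ of the integral, together with the extra $-\alpha/(1-\alpha)$ term when $r=1$, and compares a few entries with their closed forms:

```python
from math import log, pi, sqrt

def slope(r, m, alpha, terms=200_000):
    """Limit of Num_{n,alpha,nu,r,m}/n via the series form of the integral.
    The tail after `terms` paired summands is O(1/(m^2 * terms))."""
    s = sum(1.0 / (r + i * m - alpha) - 1.0 / (r + 1 + i * m - alpha)
            for i in range(terms))
    if r == 1:
        s -= alpha / (1 - alpha)   # extra term in the r = 1 formula
    return s

# Compare against the closed forms in the tables.
assert abs(slope(1, 1, 0.0) - 1.0) < 1e-4
assert abs(slope(1, 2, 0.0) - log(2)) < 1e-4
assert abs(slope(1, 3, 0.0) - sqrt(3) * pi / 9) < 1e-4
assert abs(slope(2, 2, 0.0) - (1 - log(2))) < 1e-4
assert abs(slope(1, 2, 0.5) - (pi / 2 - 1)) < 1e-4
assert abs(slope(2, 2, 0.5) - (2 - pi / 2)) < 1e-4
```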
\section{Applications to finding parity and counting lattice points}\label{sec:applications}
Now that we have a formula for $\Num_{n,\alpha,\nu,r,m}$, we focus on a few applications: computing a floor shift which results in an asymptotic 50/50 split of even and odd terms; and counting lattice points in a few families of ellipses.
\subsection{Shifting for parity}
We return to the case where $m=2$ and consider the problem of determining a shift $\alpha$ so that half of the terms are odd and half are even. It amounts to computing $\alpha$ for which
\[
\lim\limits_{n\to\infty}\frac{1}{n}\Num_{n,\alpha,\nu,1,2}
=\lim\limits_{n\to\infty}\frac{1}{n}\Num_{n,\alpha,\nu,2,2}
=1/2.
\]
By Corollary~\ref{cor:N_m-slope} (with the formula for $r=2$ to avoid the extra term out front), we need $\alpha$ such that
\[
\int\limits_0^1\dfrac{(1-x)x^{1-\alpha}}{1-x^2}\mathrm{d}x
= \int\limits_0^1\dfrac{x^{1-\alpha}}{1+x}\mathrm{d}x
= 1/2.
\]
(Note that this is independent of $\nu$.) Since we're using $r=2$, for the remainder of this subsection, we will count even entries instead of odd entries in an $\alpha$-shifted floor sequence.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{f-alpha-plot.eps}
\caption{Plot of $\displaystyle y=f(\alpha) = \int\limits_0^1\dfrac{x^{1-\alpha}}{1+x}\mathrm{d}x$ for $\alpha\in[0,1]$.
}
\label{fig:f(x)-plot}
\end{figure}
Let
\[
f(\alpha)
=\int\limits_0^1\dfrac{x^{1-\alpha}}{1+x}\mathrm{d}x.
\]
Then $f(\alpha)$ is the (asymptotic) proportion of terms in an $\alpha$-shifted floor sequence of length $n$ that are even. We immediately see that $f$ is continuous, increasing, and concave up for $\alpha\in[0,1]$. Furthermore, $f(0)=1-\log2 < 1/2$ and $f(1)=\log2 > 1/2$. Thus, there is a unique shift $\alpha_0\in(0,1)$ for which $f(\alpha_0)=1/2$. A plot of $f$ appears in Figure~\ref{fig:f(x)-plot}. We see that the value of $\alpha$ for which $f(\alpha)=1/2$ is between $0.6$ and $0.7$.
Computing with Sage \cite{sage}, we can shrink the interval down. We compute
\[
f(0.682379227335) < 1/2
\text{ and }
f(0.682379227345) > 1/2.
\]
Hence, we have $f(\alpha_0)=1/2$ for
\[
\alpha_0
\approx 0.68237922734.
\]
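The value of $\alpha_0$ can be reproduced without Sage. In the sketch below (our own stdlib-only illustration), we expand $1/(1+x)$ as a geometric series, so that $f(\alpha)=\sum_{i\ge0}(-1)^i/(2-\alpha+i)$, accelerate the alternating tail with a half-term correction, and bisect:

```python
from math import log

def f(alpha, pairs=10_000):
    """f(alpha) = integral of x^(1-alpha)/(1+x) over [0,1], computed via
    the alternating series sum_{i>=0} (-1)^i / (2 - alpha + i)."""
    s = sum(1.0 / (2 - alpha + 2 * k) - 1.0 / (3 - alpha + 2 * k)
            for k in range(pairs))
    # half of the first omitted term corrects the alternating tail
    return s + 0.5 / (2 - alpha + 2 * pairs)

# sanity checks against the closed forms f(0) = 1 - log 2 and f(1) = log 2
assert abs(f(0) - (1 - log(2))) < 1e-8
assert abs(f(1) - log(2)) < 1e-8

# bisect f(alpha) = 1/2 on [0.6, 0.7]; f is increasing there
lo, hi = 0.6, 0.7
for _ in range(40):
    mid = (lo + hi) / 2
    if f(mid) < 0.5:
        lo = mid
    else:
        hi = mid
alpha0 = (lo + hi) / 2
assert abs(alpha0 - 0.68237922734) < 1e-6
```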
We can look at some data with this approximate value. A plot of $y=\Num_{n,0.68237922734,0,2,2}$ appears in Figure~\ref{fig:alpha_0_sequence_plot} along with the graph of $y=n/2$. Figure~\ref{fig:alpha_0_random_table} gives values of $\Num_{n,0.68237922734,0,2,2}$ for random $n$ with $10^5<n<10^6$.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{alpha_0_sequence.eps}
\caption{Plot of $y=\Num_{n,\alpha,0,2,2}$ for $\alpha=0.68237922734$ and $1\le n\le 1000$ (black) along with the graph of $y=n/2$ (blue).}
\label{fig:alpha_0_sequence_plot}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{|c|c|c|}\hline & & \\
$n$ & number of even terms
& proportion of even terms
\\ & & \\
\hline
$ 7015 $ & $ 3503 $ & $49.9359\%$ \\
$ 179220 $ & $ 89632 $ & $50.0123\%$ \\
$ 213788 $ & $ 106901 $ & $50.0033\%$ \\
$ 267093 $ & $ 133562 $ & $50.0058\%$ \\
$ 439675 $ & $ 219839 $ & $50.0003\%$ \\
$ 491213 $ & $ 245600 $ & $49.9987\%$ \\
$ 503521 $ & $ 251741 $ & $49.9961\%$ \\
$ 689325 $ & $ 344631 $ & $49.9954\%$ \\
$ 775294 $ & $ 387629 $ & $49.9977\%$ \\
$ 978029 $ & $ 489010 $ & $49.9995\%$ \\
\hline
\end{tabular}
\caption{The number and proportion of even terms in an $\alpha$-shifted, $\nu$-offset, floor sequence of length $n$ for $\alpha=0.68237922734$ and various random $n$ with $10^5<n<10^6$.}
\label{fig:alpha_0_random_table}
\end{figure}
\subsection{Lattice points in selected ellipses, number rings, and plane tilings}
As we saw with Proposition~\ref{prop:gauss-R_n}, the number of lattice points in a circle of radius $\sqrt{2n}$ centered at the origin is $4\Rseq_n+4n+1$, where $\Rseq_n=\Num_{n,1/2,0,1,2}$ is the $1/2$-shifted floor sequence of length $n$. If we think of the plane as being tiled with $1\times1$ squares, then we have a formula for the number of vertices contained in a circle of radius $\sqrt{2n}$ centered at any vertex.
In this subsection, we will find formulas involving $\Num_{n,\alpha,\nu,r,m}$ for the number of lattice points contained in the ellipses
\[
x^2+y^2=n,\quad x^2+xy+y^2=n,\quad\text{and } x^2+2y^2=n,
\]
for any $n\in\N$. We will also count vertices contained in a circle of radius equal to the square root of any integer for tilings of the plane by squares or triangles.
In Proposition~\ref{prop:generalized-sum-of-floors}, we wrote $\Num_{n,\alpha,\nu,1,m}$ with a summation involving a difference of floors. In what follows, it will be useful to have the formula for $\Num_{n,\alpha,\nu,1,m}$ in the case where $\alpha$ is rational. If we suppose $\alpha=p/q$, then
\begin{equation} \label{eqn:num-rational-alpha}
\Num_{n,p/q,\nu,1,m}=n - \Floor{\frac{(n-\nu)q}{q-p}} + \sum\limits_{i\ge0}\left(\Floor{\frac{(n-\nu)q}{q+qim-p}}-\Floor{\frac{(n-\nu)q}{2q+qim-p}} \right)
\end{equation}
for all $n\in\N$ with $n\alpha\ge\nu$.
Also, for $n\in\N$, recall the function $\divs_{r,m}(n)$, which counts the number of positive divisors of $n$ that are congruent to $r$ modulo $m$.
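Equation~\eqref{eqn:num-rational-alpha} can be checked directly against the counting definition of $\Num$. The sketch below is an illustration we added; exact rational arithmetic via `fractions.Fraction` avoids floating-point boundary issues in the floors.

```python
from fractions import Fraction
from math import floor

def num_direct(n, alpha, nu, r, m):
    """Count terms floor((n - nu)/j + alpha), j = 1..n, congruent to r mod m."""
    alpha, nu = Fraction(alpha), Fraction(nu)
    return sum(1 for j in range(1, n + 1)
               if floor((n - nu) / j + alpha) % m == r % m)

def num_formula(n, p, q, nu, m):
    """Right-hand side of the displayed formula for Num_{n, p/q, nu, 1, m}."""
    nu = Fraction(nu)
    total = n - floor((n - nu) * q / (q - p))
    for i in range(n + 1):   # all floors beyond i = n vanish
        total += (floor((n - nu) * q / (q + q * i * m - p))
                  - floor((n - nu) * q / (2 * q + q * i * m - p)))
    return total

# two parameter sets used later: alpha = nu = 1/2, m = 2, and alpha = nu = 0, m = 3
for n in range(1, 41):
    assert num_formula(n, 1, 2, Fraction(1, 2), 2) == \
        num_direct(n, Fraction(1, 2), Fraction(1, 2), 1, 2)
    assert num_formula(n, 0, 1, 0, 3) == num_direct(n, 0, 0, 1, 3)
```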
\subsubsection{Lattice points in the region $x^2+y^2\le n$}
In Proposition~\ref{prop:gauss-R_n}, we found a formula for the number of lattice points $(x,y)$ contained in the disc $x^2+y^2\le 2n$ in terms of $\Rseq_n$. This is a result for discs with radius equal to the square root of an even number. We'll extend the result to square roots of odd numbers as well, eventually obtaining a formula for the number of lattice points contained in the disc $x^2+y^2\le n$. If we think of the plane as being tiled with $1\times1$ squares, then our formula will give us the total number of vertices that lie in a disc of radius $\sqrt{n}$ centered at one of the vertices.
\begin{prop}\label{prop:circle-radius-n}
For $n\in\N$, let $F(n)$ be the number of lattice points in a disc of radius $\sqrt{n}$ centered at the origin. Then
\[
F(n) =
4\Num_{\Ceil{n/2},1/2,\{n/2\},1,2}+4\Floor{\frac{n}{2}}+1,
\]
where $\{n/2\}$ denotes the fractional part of $n/2$.
\end{prop}
\begin{proof}
Let $F(n)=\#\left\{(x,y)\in\Z^2 : x^2+y^2\le n\right\}$. We'll show that the formula works for even $n$ and for odd $n$.
If $n$ is even, then $n=2k$ for some $k\in\N$. By Proposition~\ref{prop:gauss-R_n}
\[
F(n)
= F(2k)
= 4\Rseq_k+4k+1=4\Num_{k,1/2,0,1,2}+4k+1=4\Num_{n/2,1/2,0,1,2}+2n+1.
\]
Note that $\Ceil{n/2}=k=n/2$, $\{n/2\}=\{k\}=0$, and $4\Floor{n/2}=4\Floor{k}=4k=2n$. This proves the formula for $F(n)$ with $n$ even.
If $n$ is odd, then $n=2k-1$ for some $k\in\N$. By Jacobi's two-square theorem (Theorem~\ref{thm:jacobi}),
\[
F(n)
= 1 + 4\sum\limits_{j=1}^n\left(\divs_{1,4}(j)-\divs_{3,4}(j)\right)
= 1 + 4\left(\Floor{\frac{n}{1}}-\Floor{\frac{n}{3}}+\Floor{\frac{n}{5}}-\Floor{\frac{n}{7}}+\dots\right).
\]
In order to get this alternating floor sum, we use $\alpha=\nu=1/2$, $r=1$, and $m=2$ with Equation~\eqref{eqn:num-rational-alpha}. We have
\[
\Num_{k,1/2,1/2,1,2}
=1-k+\sum\limits_{i\ge0}\left(\Floor{\frac{2k-1}{4i+1}}-\Floor{\frac{2k-1}{4i+3}}\right)
= \frac{1-n}{2}+\sum\limits_{i\ge0}\left(\Floor{\frac{n}{4i+1}}-\Floor{\frac{n}{4i+3}}\right)
\]
for all $k\ge\nu/\alpha=1$. Then,
\[
F(n)
= 1 + 4\left(\Num_{k,1/2,1/2,1,2}+\frac{n-1}{2}\right)
= 1 + 4\Num_{(n+1)/2,1/2,1/2,1,2}+2n-2.
\]
Note that $\Ceil{n/2}=k=(n+1)/2$, $\{n/2\}=\{k-1/2\}=1/2$, and $4\Floor{n/2}=4\Floor{k-1/2}=4(k-1)=2n-2$. This proves the formula for $F(n)$ with $n$ odd.
\end{proof}
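As a sanity check on the proposition (ours, not from the original text), the formula can be compared with a brute-force lattice-point count for small $n$; `fractions.Fraction` keeps the shifted floors exact.

```python
from fractions import Fraction
from math import floor, isqrt

def num(n, alpha, nu, r, m):
    """Count terms floor((n - nu)/j + alpha), j = 1..n, congruent to r mod m."""
    alpha, nu = Fraction(alpha), Fraction(nu)
    return sum(1 for j in range(1, n + 1)
               if floor((n - nu) / j + alpha) % m == r % m)

def F_formula(n):
    # ceil(n/2) = (n+1)//2 and {n/2} = (n mod 2)/2
    return (4 * num((n + 1) // 2, Fraction(1, 2), Fraction(n % 2, 2), 1, 2)
            + 4 * (n // 2) + 1)

def F_brute(n):
    r = isqrt(n)
    return sum(1 for x in range(-r, r + 1) for y in range(-r, r + 1)
               if x * x + y * y <= n)

for n in range(1, 80):
    assert F_formula(n) == F_brute(n), n
assert F_brute(13) == 45   # the disc of radius sqrt(13) from the example below
```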
\begin{example}\label{ex:gaussian-ints}
To illustrate Proposition~\ref{prop:circle-radius-n}, we'll compute the number of lattice points in a disc of radius $\sqrt{13}$. Note that $\Ceil{13/2}=7$. The number of lattice points in a disc of radius $\sqrt{13}$ involves the quantity $4\Num_{7,1/2,1/2,1,2}$. To compute this, we examine the length-7 sequence
\[
\Floor{\frac{7-1/2}{1}+\frac{1}{2}},\Floor{\frac{7-1/2}{2}+\frac{1}{2}},\dots,\Floor{\frac{7-1/2}{7}+\frac{1}{2}}
= 7,3,2,2,1,1,1,
\]
which contains 5 odd terms. Thus, $\Num_{7,1/2,1/2,1,2}=5$. By Proposition~\ref{prop:circle-radius-n}, the number of lattice points in a disc of radius $\sqrt{13}$ is therefore
\[
F(13)
=4\Num_{7,1/2,1/2,1,2}+4\Floor{\frac{13}{2}}+1
= 4\cdot5+4\cdot6+1=45.
\]
A disc of radius $\sqrt{13}$ appears in Fig.~\ref{fig:sqrt-13-disc}, and one confirms that it contains 45 lattice points.
\end{example}
\begin{figure}
\centering
\includegraphics[scale=.6]{circle-sqrt-13.eps}
\caption{The 45 lattice points in a disc of radius $\sqrt{13}$, which are also the 45 Gaussian integers with norm at most 13.}
\label{fig:sqrt-13-disc}
\end{figure}
\begin{remark}
We mentioned the ring of Gaussian integers, $\Z[\ii]$, earlier. For any $z=a+b\ii\in\Z[\ii]$, the norm of $z$ is $N(z)=N(a+b\ii)=a^2+b^2$. Thus, Proposition~\ref{prop:circle-radius-n} gives a formula for the number of Gaussian integers with norm at most $n$. Following from Example~\ref{ex:gaussian-ints}, we know there are 45 Gaussian integers with norm at most 13. We visualize these Gaussian integers in Figure~\ref{fig:sqrt-13-disc}.
Viewing $\Z[\ii]$ in the complex plane, the Gaussian integers are the vertices for a tiling of the plane by $1\times1$ square tiles. If we draw a circle of radius $\sqrt{n}$, for $n\in\N$, around any lattice point, then Proposition~\ref{prop:circle-radius-n} gives us a formula for the number of vertices contained in the circle. Any other $1\times1$ square tiling of the plane would involve a rotation and/or shift of this tiling. A circle of radius $\sqrt{n}$ centered at any vertex would contain the same number of lattice points. Thus, Proposition~\ref{prop:circle-radius-n} gives us a formula for the number of vertices contained in a circle of radius $\sqrt{n}$, for $n\in\N$, centered at any vertex of any $1\times1$ square tiling of the plane.
\end{remark}
\subsubsection{Lattice points in the region $x^2+xy+y^2\le n$}
Next, we consider the ellipse $x^2+xy+y^2=n$. In general, the quantity $ax^2+bxy+cy^2$, for constants $a,b,c$, is a \emph{binary quadratic form}. We take results about the number of representations of an integer $n$ by a binary quadratic form from \cite{Dickson1958}.
\begin{prop}[{\cite[Exercise XXII.2]{Dickson1958}}]\label{prop:dickson-x^2+xy+y^2}
Let $n\in\N$. Then the number of representations of $n=x^2+xy+y^2$, for integers $x,y$, is $6\left(\divs_{1,3}(n)-\divs_{2,3}(n) \right)$.
\end{prop}
\begin{cor}\label{cor:lattice-points-eisenstein-ellipse}
For $n\in\N$, let $F(n)$ be the number of lattice points contained in the elliptical region $x^2+xy+y^2\le n$. Then
\[
F(n)
= 6\Num_{n,0,0,1,3}+1.
\]
\end{cor}
\begin{proof}
Let $f(n)=\#\left\{(x,y)\in\Z^2 : x^2+xy+y^2=n\right\}$. Observe that $f(0)=1$. Thus,
\begin{equation}\label{eqn:f-step-2}
F(n) = \sum\limits_{k=0}^n f(k) = 1 + \sum\limits_{k=1}^n f(k).
\end{equation}
Next, by Proposition~\ref{prop:dickson-x^2+xy+y^2},
\[
\sum\limits_{k=1}^n f(k)
= 6\left(\Floor{\frac{n}{1}}-\Floor{\frac{n}{2}}+\Floor{\frac{n}{4}}-\Floor{\frac{n}{5}}+\Floor{\frac{n}{7}}-\dots\right),
\]
which is a finite sum since the floors are eventually zero. Using $\alpha=\nu=0$, $r=1$, and $m=3$ with Equation~\eqref{eqn:num-rational-alpha}, we get
\[
\Num_{n,0,0,1,3}
=\left(\Floor{\frac{n}{1}}-\Floor{\frac{n}{2}}+\Floor{\frac{n}{4}}-\Floor{\frac{n}{5}}+\Floor{\frac{n}{7}}-\dots\right)
\]
for all $n\in\N$. We conclude that $F(n) = 1 + 6\Num_{n,0,0,1,3}$,
as desired.
\end{proof}
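The corollary is easy to confirm by brute force for small $n$. A minimal sketch (our own check, not part of the paper):

```python
from math import isqrt

def num(n, r, m):
    # alpha = nu = 0: count floor(n/j), j = 1..n, congruent to r mod m
    return sum(1 for j in range(1, n + 1) if (n // j) % m == r)

def F_formula(n):
    return 6 * num(n, 1, 3) + 1

def F_brute(n):
    # x^2 + xy + y^2 = (x + y/2)^2 + (3/4) y^2, so |x|, |y| <= 2 sqrt(n/3)
    b = 2 * isqrt(n) + 2
    return sum(1 for x in range(-b, b + 1) for y in range(-b, b + 1)
               if x * x + x * y + y * y <= n)

for n in range(1, 60):
    assert F_formula(n) == F_brute(n), n
assert F_brute(30) == 109   # the ellipse x^2 + xy + y^2 = 30 from the example below
```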
\begin{example}\label{ex:ellipse-30}
We'll compute the number of lattice points contained in the ellipse $x^2+xy+y^2=30$. To do so, we consider the sequence
\[
\Floor{\frac{30}{1}},\Floor{\frac{30}{2}},\dots,\Floor{\frac{30}{30}} = 30, 15, 10, 7, 6, 5, 4, 3, 3, 3, 2, 2, 2, 2, 2, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1.
\]
There are 18 terms in this sequence that are congruent to 1 modulo 3. Hence,
the ellipse $x^2+xy+y^2=30$ contains $6\Num_{30,0,0,1,3}+1=6\cdot18+1=109$ lattice points. See Figure~\ref{fig:ellipse-30}.
\end{example}
\begin{figure}
\centering
\includegraphics[scale=.6]{x2pxypy2e30.eps}
\includegraphics[scale=.6]{eisensten-30.eps}
\caption{The 109 lattice points within the ellipse $x^2+xy+y^2=30$, and the 109 Eisenstein integers with norm at most 30.}
\label{fig:ellipse-30}
\end{figure}
From Theorem~\ref{thm:N_m-asymp} and Figure~\ref{fig:shift-0-exact-values}, we see that $\Num_{n,0,0,1,3}=\frac{1}{9}\sqrt{3}\pi n+\bigO\left(\sqrt{n}\right)$. We immediately obtain the following corollary.
\begin{cor}\label{cor:lattice-points-x2+xy+y2}
For $n\in\N$, the number of lattice points in the elliptical region $x^2+xy+y^2\le n$ is $\frac{2}{3}\sqrt{3}\pi n + \bigO\left(\sqrt{n}\right)$.
\end{cor}
\begin{remark}
Consider the ring of Eisenstein integers, $\Z[\omega]$, where $\omega=\frac{-1+\sqrt{-3}}{2}$, a primitive 3rd root of unity. For $z=a-b\omega\in\Z[\omega]$, the norm of $z$ is $N(z)=N(a-b\omega)=a^2+ab+b^2$. Thus, Corollary~\ref{cor:lattice-points-eisenstein-ellipse} gives a formula for the number of Eisenstein integers with norm at most $n$. Following from Example~\ref{ex:ellipse-30}, we know there are 109 Eisenstein integers with norm at most 30. We visualize these Eisenstein integers in Figure~\ref{fig:ellipse-30}.
Viewing $\Z[\omega]$ in the complex plane, the Eisenstein integers are the vertices for a plane tiling involving equilateral triangles of side length 1. If we draw a circle of radius $\sqrt{n}$ around any vertex, Corollary~\ref{cor:lattice-points-eisenstein-ellipse} gives us a formula for the number of vertices contained in the circle. Rotating the plane (and hence the tiles) about the center of the circle will leave the number of vertices in the circle unchanged. Thus, Corollary~\ref{cor:lattice-points-eisenstein-ellipse} gives us a formula for the number of vertices contained in a circle of radius $\sqrt{n}$, for $n\in\N$, centered at a vertex of any side-length 1 equilateral triangle tiling of the plane.
\end{remark}
\subsubsection{Lattice points in the region $x^2+2y^2\le n$}
We now consider the ellipse $x^2+2y^2=n$. The result below gives the number of representations of a natural number $n$ in terms of the binary quadratic form $x^2+2y^2$.
\begin{prop}[{\cite[Exercise XXII.1]{Dickson1958}}]
\label{prop:dickson-x^2+2y^2}
Let $n\in\N$. Then the number of representations of $n=x^2+2y^2$, for integers $x,y$, is $2\left(\divs_{1,8}(n)+\divs_{3,8}(n)-\divs_{5,8}(n)-\divs_{7,8}(n) \right)$.
\end{prop}
\begin{cor}\label{cor:x^2+2y^2}
Let $F(n)$ be the number of lattice points contained in the elliptical region $x^2+2y^2\le n$. Then $F(0)=1$, $F(1)=3$, $F(2)=5$, $F(5)=11$, and for all $n\in\N$ with $n\ne 1,2,5$,
\[
F(n)
= 1+2\Num_{\Ceil{n/4},3/4,\{-n/4\},1,2}+2\Num_{\Ceil{n/4},1/4,\{-n/4\},1,2}+2n+2\Floor{\frac{n}{3}}-4\Ceil{n/4}.
\]
\end{cor}
\begin{proof}
Let $f(n) = \#\left\{(x,y)\in\Z^2 : x^2+2y^2=n\right\}$. Observe that $f(0)=1$. To start, we have
\begin{equation}\label{eqn:f-step-1}
F(n)
= \sum\limits_{k=0}^n f(k)
= 1 + \sum\limits_{k=1}^n f(k).
\end{equation}
Next, by Proposition~\ref{prop:dickson-x^2+2y^2},
\[
\sum\limits_{k=1}^n f(k)
= 2\left(\Floor{\frac{n}{1}}+\Floor{\frac{n}{3}}-\Floor{\frac{n}{5}}-\Floor{\frac{n}{7}}+\Floor{\frac{n}{9}}+\dots\right),
\]
which is a finite sum since the floors are eventually zero. We'll need two alternating floor sums here -- one for $\Floor{n/1}-\Floor{n/5}+\Floor{n/9}-\dots$ and one for $\Floor{n/3}-\Floor{n/7}+\Floor{n/11}-\dots$. Each will need $q=4$ and $m=2$. As we did with Proposition~\ref{prop:circle-radius-n}, we will have $\nu\ne0$ here.
For $n\in\N$, let $a=\Ceil{n/4}$ and $b=4a-n$. Then $0\le b<4$. Using $\alpha=p/q=3/4$, $\nu=b/4$, $r=1$, and $m=2$ with Equation~\eqref{eqn:num-rational-alpha}, we get
\[
\Num_{a,3/4,b/4,1,2} = a - \Floor{\frac{n}{1}} + \sum\limits_{i\ge0}\left(\Floor{\frac{n}{1+4i}}-\Floor{\frac{n}{5+4i}}\right)
\]
for all $a\ge\nu/\alpha=b/3$.
Using $\alpha=p/q=1/4$, $\nu=b/4$, $r=1$, and $m=2$ with Equation~\eqref{eqn:num-rational-alpha}, we get
\[
\Num_{a,1/4,b/4,1,2} = a - \Floor{\frac{n}{3}} + \sum\limits_{i\ge0}\left(\Floor{\frac{n}{3+4i}}-\Floor{\frac{n}{7+4i}}\right)
\]
for all $a\ge\nu/\alpha=b$.
Both equations hold for $a\ge b$. Since $a\ge1$ and $0\le b<4$, this inequality is satisfied for all $a,b$ except for $(a,b)=(1,2), (1,3), (2,3)$, which correspond, respectively, to $n=2$, $n=1$, and $n=5$.
Thus, for $n\in\N$ with $n\ne 1,2,5$,
\begin{align*}
F(n)
&=1+2\left(\Num_{a,3/4,b/4,1,2}+\Num_{a,1/4,b/4,1,2}-a+\Floor{\frac{n}{3}}-a+n\right)\\
&=1+2\Num_{\Ceil{n/4},3/4,\{-n/4\},1,2}+2\Num_{\Ceil{n/4},1/4,\{-n/4\},1,2}+2n+2\Floor{\frac{n}{3}}-4\Ceil{\frac{n}{4}}.
\end{align*}
We can fill in the missing values by hand. We compute $f(1)=2$, $f(2)=2$, and $f(5)=0$. And by the formula above with $n=4$ (so that $\Ceil{n/4}=1$),
\[
F(4)
=1+2\Num_{1,3/4,0,1,2}+2\Num_{1,1/4,0,1,2}+8+2\Floor{4/3}-4
=1+2\cdot1+2\cdot1+8+2-4
=11.
\]
Thus, $F(0)=1$, $F(1)=F(0)+f(1)=1+2=3$, $F(2)=F(1)+f(2)=3+2=5$, and $F(5)=F(4)+f(5)=11+0=11$.
\end{proof}
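Again, the formula (including the three exceptional values) can be confirmed against a brute-force count. A sketch of ours, with exact rational arithmetic for the shifted floors:

```python
from fractions import Fraction
from math import floor, isqrt

def num(n, alpha, nu, r, m):
    """Count terms floor((n - nu)/j + alpha), j = 1..n, congruent to r mod m."""
    alpha, nu = Fraction(alpha), Fraction(nu)
    return sum(1 for j in range(1, n + 1)
               if floor((n - nu) / j + alpha) % m == r % m)

def F_formula(n):
    if n in (1, 2, 5):                    # exceptional values from the corollary
        return {1: 3, 2: 5, 5: 11}[n]
    a = (n + 3) // 4                      # ceil(n/4)
    nu = Fraction((-n) % 4, 4)            # {-n/4} = (4a - n)/4
    return (1 + 2 * num(a, Fraction(3, 4), nu, 1, 2)
              + 2 * num(a, Fraction(1, 4), nu, 1, 2)
              + 2 * n + 2 * (n // 3) - 4 * a)

def F_brute(n):
    bx, by = isqrt(n), isqrt(n // 2) + 1
    return sum(1 for x in range(-bx, bx + 1) for y in range(-by, by + 1)
               if x * x + 2 * y * y <= n)

for n in range(1, 80):
    assert F_formula(n) == F_brute(n), n
assert F_brute(29) == 65   # the ellipse x^2 + 2y^2 = 29 from the example below
```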
\begin{figure}
\centering
\includegraphics[scale=.6]{x2p2y2e29.eps}
\includegraphics[scale=.6]{sqrtm2lattice.eps}
\caption{The 65 lattice points within the ellipse $x^2+2y^2=29$, and the 65 elements of $\Z[\sqrt{-2}]$ with norm at most 29.}
\label{fig:x^2+2y^2=29}
\end{figure}
\begin{example}\label{ex:x^2+2y^2=29}
We'll compute the number of lattice points contained in the ellipse $x^2+2y^2=29$. We have $n=29$, $\Ceil{29/4}=8$, and $\{-29/4\}=3/4$.
To compute $\Num_{8,3/4,3/4,1,2}$, we consider the sequence
\[
\Floor{\frac{8-3/4}{1}+3/4},\Floor{\frac{8-3/4}{2}+3/4},\dots,\Floor{\frac{8-3/4}{8}+3/4}
= 8,4,3,2,2,1,1,1.
\]
There are 4 odd numbers, so $\Num_{8,3/4,3/4,1,2}=4$.
To compute $\Num_{8,1/4,3/4,1,2}$, we consider the sequence
\[
\Floor{\frac{8-3/4}{1}+1/4},\Floor{\frac{8-3/4}{2}+1/4},\dots,\Floor{\frac{8-3/4}{8}+1/4}
= 7,3,2,2,1,1,1,1.
\]
There are 6 odd numbers, so $\Num_{8,1/4,3/4,1,2}=6$.
Thus, the number of lattice points in this ellipse is
\[
F(29)
= 1+2\Num_{8,3/4,3/4,1,2}+2\Num_{8,1/4,3/4,1,2}+2\cdot29+2\Floor{\frac{29}{3}}-4\Ceil{\frac{29}{4}}
= 1+2\cdot4+2\cdot6+58+2\cdot9-32
= 65.
\]
See Figure~\ref{fig:x^2+2y^2=29}.
\end{example}
\begin{remark}
Consider the ring $\Z[\sqrt{-2}]$. For $z=a+b\sqrt{-2}\in\Z[\sqrt{-2}]$, the norm of $z$ is $N(z)=a^2+2b^2$. Thus, Corollary~\ref{cor:x^2+2y^2} gives a formula for the number of elements of $\Z[\sqrt{-2}]$ with norm at most $n$. Following from Example~\ref{ex:x^2+2y^2=29}, we know there are 65 elements of $\Z[\sqrt{-2}]$ with norm at most 29. We visualize these elements in Figure~\ref{fig:x^2+2y^2=29}.
\end{remark}
| {
"timestamp": "2022-07-01T02:24:11",
"yymm": "2206",
"arxiv_id": "2206.15452",
"language": "en",
"url": "https://arxiv.org/abs/2206.15452",
"abstract": "For any positive integer $n$ along with parameters $\\alpha$ and $\\nu$, we define and investigate $\\alpha$-shifted, $\\nu$-offset, floor sequences of length $n$. We find exact and asymptotic formulas for the number of integers in such a sequence that are in a particular congruence class. As we will see, these quantities are related to certain problems of counting lattice points contained in regions of the plane bounded by conic sections. We give specific examples for the number of lattice points contained in elliptical regions and make connections to a few well-known rings of integers, including the Gaussian integers and Eisenstein integers.",
"subjects": "Number Theory (math.NT)",
"title": "On residues of rounded shifted fractions with a common numerator",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.985718063977109,
"lm_q2_score": 0.815232489352,
"lm_q1q2_score": 0.8035893910952926
} |
} | https://arxiv.org/abs/1203.6668 | Making Markov chains less lazy | The mixing time of an ergodic, reversible Markov chain can be bounded in terms of the eigenvalues of the chain: specifically, the second-largest eigenvalue and the smallest eigenvalue. It has become standard to focus only on the second-largest eigenvalue, by making the Markov chain "lazy". (A lazy chain does nothing at each step with probability at least 1/2, and has only nonnegative eigenvalues.) An alternative approach to bounding the smallest eigenvalue was given by Diaconis and Stroock and by Diaconis and Saloff-Coste. We give examples to show that using this approach it can be quite easy to obtain a bound on the smallest eigenvalue of a combinatorial Markov chain which is several orders of magnitude below the best-known bound on the second-largest eigenvalue. | \section{Introduction}\label{s:intro}
Let $\mathcal{M}$ be an ergodic, reversible Markov chain with
finite state space $\Omega$ and transition matrix $P$.
It is well known that the eigenvalues of $\mathcal{M}$ satisfy
\[ 1 = \lambda_0 > \lambda_1 \geq \lambda_2 \geq \cdots
\geq \lambda_{N-1} > -1,\]
where $N=|\Omega|$.
We refer to $\lambda_{N-1}$ as the \emph{smallest eigenvalue}
of $\mathcal{M}$.
The connection between the mixing time of a Markov chain and its
eigenvalues is well-known (see~\cite[Proposition 1]{sinclair}):
\begin{equation}
\label{mix-time} \tau(\varepsilon) \leq (1-\lambda_\ast)^{-1}\,
\ln \frac{1}{\varepsilon\, \pi_{\min}}
\end{equation}
where $\tau(\varepsilon)$ denotes the mixing time of the Markov chain,
$\pi_{\min} = \min_{x\in \Omega} \pi(x)$ and
\[ \lambda_\ast = \max\{ \lambda_1, \, |\lambda_{N-1}|\}.\]
When studying the mixing time of a Markov chain $\mathcal{M}$
using (\ref{mix-time}),
the approach which has become standard is to make the chain $\mathcal{M}$
\emph{lazy} by replacing $P$ by $(I+P)/2$,
where $I$ denotes the identity matrix. Then all eigenvalues of the lazy
chain are nonnegative, and only the second-largest eigenvalue must
be investigated.
A lazy chain can be implemented so that its expected
running time is the same as the mixing time of the original chain.
So the problem with lazy chains is not their efficiency.
In our opinion, the main problem with lazy Markov chains is conceptual:
in order to prove that a Markov chain is fast, we first slow it down.
The device of using lazy Markov chains has been called
``crude''~\cite[p.~110]{SJ89} and
``unnatural''~\cite[Chapter 5]{jerrum-book}.
In this note, we aim to advertise an approach for bounding the
smallest eigenvalue of a Markov chain.
This approach was first proposed by Diaconis and Stroock
in 1991~\cite[Proposition 2]{DS91}, and a modified version was
presented by
Diaconis and Saloff-Coste two years later~\cite[p.702]{DSC}
(restated as Lemma~\ref{lazy} below).
The method of~\cite{DSC} has been applied in~\cite{DSC,DS91,goel},
but in the theoretical computer science community it has become common to
work with lazy chains. We urge researchers to first try
the approach of~\cite{DSC,DS91}
before choosing to work with a lazy version of their chain.
Finally we remark that in~\cite{directed} the author wrongly claimed
that their~\cite[Lemma 1.3]{directed} was new, when in fact it is precisely
the result of~\cite[p.702]{DSC}. We sincerely apologise for this error.
\subsection{The method}
See~\cite{jerrum-book} for Markov chain definitions not given here.
Write $\mathcal{G}$ for the underlying directed graph of the Markov chain
$\mathcal{M}$, where $\mathcal{G} = (\Omega, \Gamma)$
and each directed edge $e\in \Gamma$ corresponds to a transition of
$\mathcal{M}$.
If $P(x,x)>0$ then the edge $xx$ is called a \emph{self-loop}
at $x$. Define $Q(e) = Q(x,y) = \pi(x)P(x,y)$ for the edge $e=xy$.
A walk in $\mathcal{G}$ is a sequence of states
$x_0 x_1 \cdots x_\ell$
such that $P(x_j,x_{j+1})> 0$ for $j=0,\ldots, \ell-1$.
The walk is \emph{closed} if $x_\ell=x_0$.
If a walk has odd length then we call it an \emph{odd walk}.
For each $x\in\Omega$ let $w_x$ be an odd
walk from $x$ to $x$ in $\mathcal{G}$.
(Such a walk exists for each $x$, since the Markov chain
is aperiodic.) Define
$\mathcal{W} = \{ w_x : x\in\Omega\}$, a set
of ``canonical closed odd walks''.
For each transition $e\in\Gamma$ and each $w\in\mathcal{W}$,
let $r(e,w)$ denote the number of times that $e$ appears as a
directed edge of $w$. We can assume that $r(e,w)\leq 2$ for all
transitions $e$ (indeed, if $e$ is a self-loop then we can assume
that $r(e,w)\leq 1$). The \emph{congestion} of $\mathcal{W}$,
denoted by $\eta(\mathcal{W})$, is defined by
\[ \eta(\mathcal{W}) = \max_{e\in\Gamma} \, Q(e)^{-1}\,
\sum_{x\in\Omega,\, e\in w_x}\, r(e,w_x)\, \pi(x)\, |w_x|.\]
\begin{lemma} \emph{\cite[p.702]{DSC}}\
Suppose that $\mathcal{M}$ is a reversible, ergodic Markov chain with
state space $\Omega$, and let $\mathcal{W}$ be a set of odd walks
defined as above. Then
\[ (1 + \lambda_{N-1})^{-1} \leq \frac{\eta(\mathcal{W})}{2}.\]
\label{lazy}
\end{lemma}
If $|w_x|=1$ for all $x\in\Omega$ then
the bound of Lemma~\ref{lazy} simplifies further to
\begin{equation}
\label{all-loops}
(1+\lambda_{N-1})^{-1} \leq \dfrac{1}{2}\max_{x\in\Omega}
P(x,x)^{-1}.
\end{equation}
\begin{remark}
\emph{
Suppose that the graph underlying a Markov chain $\mathcal{M}$
can be obtained from a connected bipartite graph by adding loops
to an exponentially small proportion of states.
For example, many instances
of the \emph{knapsack chain}~\cite{MS99} satisfy this property.
Since every closed odd walk must traverse at least one of
these self-loop edges, it is very difficult to define a set of
canonical closed odd walks with low congestion.
So Lemma~\ref{lazy} is unlikely to be easy to apply in this case.
}
\end{remark}
\section{Applications of the method}\label{s:applications}
We illustrate the use of Lemma~\ref{lazy}
by applying it to three combinatorial Markov chains.
Our applications are all ergodic and reversible with uniform stationary
distribution, and no edge will be used more than once
in any walk $w_x$ that we define. In this case the congestion
can be simplified to
\begin{equation}
\label{simple}
\eta(\mathcal{W}) = \max_{e\in\Gamma} \, P(e)^{-1}
\, \sum_{x\in\Omega,\,\, e\in w_x} \, |w_x|,
\end{equation}
where $P(e) = P(x,y) = P(y,x)$ for the transition $e=xy$.
\subsection{The switch chain for sampling regular graphs}
Our first application is to the Markov chain for sampling regular
graphs known as the \emph{switch chain}.
A transition of the chain is performed as follows:
from the current state $G$ (a $d$-regular graph on vertex set $[n]$)
choose an unordered pair of non-incident edges uniformly at random,
let $G'$ be the multigraph obtained from $G$ by deleting these edges
and inserting a perfect matching
of their four endvertices, selected uniformly at random. If $G'$ has no
repeated edges then the new state is $G'$, otherwise it is $G$.
The lazy version
of this chain was analysed by
Cooper et al.~\cite{CDG,CDG-corrigendum}.
Clearly $P(G,G)\geq \nfrac{1}{3}$ for every
state $G$ of this chain, so by (\ref{all-loops}) we immediately
conclude that
\[ (1 + \lambda_{N-1})^{-1}\leq \nfrac{3}{2}.
\]
This is several orders of magnitude smaller than the best-known
bound on $(1-\lambda_1)^{-1}$, which is $O(d^{23} n^8)$
(see~\cite{CDG-corrigendum}).
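For concreteness, here is a minimal Python sketch of one switch transition (our own illustration, not the implementation analysed in~\cite{CDG}); choosing an incident pair of edges is treated as a rejection, which only adds to the holding probability. On $K_4$ every proposed switch either recreates the graph or is rejected, so the state never changes.

```python
import random
from itertools import combinations

def switch_step(edges):
    """One transition of the (non-lazy) switch chain.

    `edges` is a set of frozensets of size 2 describing a d-regular
    simple graph.  Returns the next state.
    """
    e1, e2 = random.sample(sorted(map(tuple, edges)), 2)
    if set(e1) & set(e2):                 # incident edges: stay put
        return edges
    a, b = e1
    c, d = e2
    # the three perfect matchings of {a,b,c,d}; one of them is {e1,e2},
    # so P(G,G) >= 1/3, as used in the text
    matchings = [({a, b}, {c, d}), ({a, c}, {b, d}), ({a, d}, {b, c})]
    m1, m2 = random.choice(matchings)
    new = (edges - {frozenset(e1), frozenset(e2)}) \
          | {frozenset(m1), frozenset(m2)}
    # reject if a repeated edge was created, which shows up here as
    # |new| < |edges|
    return new if len(new) == len(edges) else edges

# 3-regular example: the complete graph K4
G = {frozenset(e) for e in combinations(range(4), 2)}
random.seed(0)
H = switch_step(G)
degs = {v: sum(v in e for e in H) for v in range(4)}
assert all(d == 3 for d in degs.values())   # degrees are preserved
```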
\subsection{Jerrum and Sinclair's matchings chain }
The next application is to the well-known
Markov chain for sampling perfect and near-perfect matchings
of a fixed graph $G$. A transition of the chain is performed as
follows: from the current state $M$ (which is a
perfect or near-perfect matching of $G$), choose an edge $e\in E(G)$
uniformly at random. If $M$ is a perfect matching and $e\in M$ then the
new state is $M - \{e\}$. If $M$ is a near-perfect matching and both
endvertices of $e$ are unmatched in $M$ then the new state is $M \cup \{e\}$.
If $M$ is a near-perfect matching, and exactly one endvertex of $e$
is unmatched in $M$ then let $e'$ be the edge of $M$ which matches the other
endvertex of $e$: the new state is $(M - \{e'\})\cup\{e\}$.
In all other cases the new state is $M$.
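The transition rule above can be sketched as follows (an illustrative Python toy on the path $P_4$; the graph, constants and variable names are our own, not those of~\cite{JS89,JS96}):

```python
import random

EDGES = [frozenset(p) for p in [(0, 1), (1, 2), (2, 3)]]   # the path P_4
N = 4                                                      # vertices 0..3

def matching_step(M):
    """One transition of the Jerrum--Sinclair chain on P_4."""
    e = random.choice(EDGES)
    matched = {w for f in M for w in f}
    u, v = tuple(e)
    if e in M and len(matched) == N:            # perfect and e in M: delete
        return M - {e}
    if u not in matched and v not in matched:   # both endpoints free: add
        return M | {e}
    if (u in matched) != (v in matched):        # exactly one free: slide
        w = u if u in matched else v            # the matched endpoint of e
        e2 = next(f for f in M if w in f)       # the edge e' matching it
        return (M - {e2}) | {e}
    return M                                    # all other cases: stay

random.seed(1)
M = frozenset({EDGES[0], EDGES[2]})             # perfect matching {01, 23}
for _ in range(100):
    M = matching_step(M)
    assert len(M) in (1, 2)                     # perfect or near-perfect
```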
The lazy version of this chain was analysed by Jerrum and Sinclair~\cite{JS89,JS96}.
If $G$ is not itself a
perfect matching then $P(M,M)\geq 1/|E|$ for all states $M$
of the chain (that is, for all perfect or near-perfect matchings $M$
of $G$).
Therefore (\ref{all-loops}) implies that
\[ (1 + \lambda_{N-1})^{-1} \leq \frac{|E|}{2}.\]
This bound is at least a factor of $n^2$ smaller than the best-known
bound on $(1-\lambda_1)^{-1}$, which is $O(n|E|q(n))$
for graphs $G$ for which the ratio between the number of
near-perfect and perfect matchings is $q(n)$ (see~\cite{JS96}).
\subsection{A heat-bath chain for sampling contingency tables}
Our final application involves contingency tables.
Let $\mathbf{r}=(r_1,\ldots, r_m)$ and $\mathbf{c}=(c_1,\ldots, c_n)$
be two vectors of positive
integers with the same sum. A \emph{contingency table} with row sums $\mathbf{r}$ and
column sums $\mathbf{c}$ is an $m\times n$ matrix $X=(x_{i,j})$ with nonnegative
integer entries, such that
$\sum_{j=1}^n x_{i,j} = r_i$ for $i=1,\ldots, m$ and
$\sum_{i=1}^m x_{i,j} = c_j$ for $j=1,\ldots, n$.
Let $\Omega_{\mathbf{r},\mathbf{c}}$ denote the set of all contingency tables
with row sums $\mathbf{r}$ and column sums $\mathbf{c}$.
To avoid trivialities we assume throughout this section that
$\min\{ m,n\}\geq 2$.
Dyer and Greenhill~\cite{DG-contingency} proposed a Markov chain for sampling
contingency tables, which we will call the \emph{contingency chain}.
A transition of the chain is performed as follows: choose a $2\times 2$
subsquare of the current table uniformly at random, then
replace this $2\times 2$ subsquare by a uniformly chosen $2\times 2$
nonnegative integer matrix with the same row and column sums.
The \emph{lazy} contingency
chain does nothing at each step with probability $\nfrac{1}{2}$, and otherwise
performs a transition as described above.
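A heat-bath transition can be sketched as follows (our own Python illustration; it uses the fact that the $2\times 2$ nonnegative completions with fixed margins are parametrised by the top-left entry):

```python
import random
import numpy as np

def contingency_step(X, rng):
    """One (non-lazy) heat-bath transition of the contingency chain:
    pick a random 2x2 subsquare and resample it uniformly among all
    nonnegative completions with the same row and column sums."""
    m, n = X.shape
    i1, i2 = rng.sample(range(m), 2)
    j1, j2 = rng.sample(range(n), 2)
    r1 = X[i1, j1] + X[i1, j2]            # row sums of the subsquare
    r2 = X[i2, j1] + X[i2, j2]
    c1 = X[i1, j1] + X[i2, j1]            # first column sum of the subsquare
    # valid completions are parametrised by the top-left entry t
    t = rng.randint(max(0, c1 - r2), min(r1, c1))
    Y = X.copy()
    Y[i1, j1], Y[i1, j2] = t, r1 - t
    Y[i2, j1], Y[i2, j2] = c1 - t, r2 - (c1 - t)
    return Y

rng = random.Random(0)
X = np.array([[2, 1, 0],
              [0, 1, 2]])
r, c = X.sum(axis=1), X.sum(axis=0)
for _ in range(200):
    X = contingency_step(X, rng)
    assert (X >= 0).all()                 # margins are invariant
    assert (X.sum(axis=1) == r).all() and (X.sum(axis=0) == c).all()
```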
Cryan et al.~\cite{CDGJM} analysed the lazy contingency chain for a constant
number of rows. They proved that $(1-\lambda_1)^{-1}\leq n^{f(m)}$ for
$m$-rowed contingency tables with $n$ columns, where $m$ is constant
and $f(m)$ is an expression satisfying $f(m)\geq 68m^4$.
We now analyse the smallest eigenvalue of the (non-lazy) contingency chain.
There is always a positive probability that the next state $X'$ of the
contingency chain is equal to the current state $X$,
since the heat-bath step may simply replace the
chosen $2\times 2$ subsquare with its current contents.
However,
the minimum of $P(X,X)$ over all states $X$ depends on
$\mathbf{r}$ and $\mathbf{c}$.
(To see this, consider $2\times 2$ squares.)
We prefer a bound which depends only on $m$ and $n$, and so we do not
simply apply (\ref{all-loops}).
\begin{lemma}
\label{non-lazy-contingency}
Let $\mathbf{r}=(r_1,\ldots, r_m)$ and $\mathbf{c}=(c_1,\ldots, c_n)$
be vectors of positive
integers with a common sum which satisfy
\[ r_1\geq r_2\geq \cdots \geq r_m\quad \text{ and } \quad
c_1\geq c_2 \geq \cdots \geq c_n.\]
Suppose that $\min\{r_1,\, c_1\}\geq 2$ and $\max\{ m,n\}\geq 3$.
The smallest eigenvalue of the contingency chain on
$\Omega_{\mathbf{r},\mathbf{c}}$ satisfies
\[ (1+\lambda_{N-1})^{-1} \leq 45\, m^3n^3. \]
\end{lemma}
\begin{proof}
Write $[a] = \{ 1,2,\ldots, a\}$ for $a\in\mathbb{Z}^+$.
From $X = (x_{i,j})\in\Omega_{\mathbf{r},\mathbf{c}}$,
first suppose that there exists a
5-tuple $(i_1,i_2,i_3,j_1,j_2)$ such
that
\begin{itemize}
\item $i_1,i_2,i_3$ are distinct elements of $[m]$,
\item $j_1,j_2$ are distinct elements of $[n]$,
\item $x_{i_1,j_1},\, x_{i_2,j_1},\, x_{i_3,j_2}$ are all positive.
\end{itemize}
Then $(i_1,i_2,i_3,j_1,j_2)$ is called \emph{row-good for $X$}, and $X$ is called \emph{row-good}.
If $X$ is row-good, fix the lexicographically least 5-tuple $(i_1,i_2,i_3,j_1,j_2)$ which is row-good for
$X$ and consider the following sequence of three transitions on the $3\times 2$ subsquare defined
by rows $i_1,i_2,i_3$ and columns $j_1,j_2$:
\[ \begin{pmatrix} y_{1,1} & y_{1,2}\\ y_{2,1} & y_{2,2}\\ y_{3,1} & y_{3,2}\end{pmatrix} \,\, \Longrightarrow
\,\, \begin{pmatrix} y_{1,1}-1 & y_{1,2}+1\\ y_{2,1} & y_{2,2}\\ y_{3,1}+1 & y_{3,2}-1\end{pmatrix} \,\, \Longrightarrow
\,\, \begin{pmatrix} y_{1,1} & y_{1,2}\\ y_{2,1} -1 & y_{2,2}+1\\ y_{3,1}+1 & y_{3,2}-1\end{pmatrix} \,\, \Longrightarrow
\begin{pmatrix} y_{1,1} & y_{1,2}\\ y_{2,1} & y_{2,2}\\ y_{3,1} & y_{3,2}\end{pmatrix}.
\]
(For notational convenience we have written $y_{k,\ell}$ for $x_{i_k,j_\ell}$ in the above.)
Note that all intermediate matrices are nonnegative, due to the row-good property.
This defines a walk $w_X$ of length 3 from $X$ to $X$ in the graph underlying the contingency chain.
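One can check mechanically that the three moves displayed above are valid $2\times 2$ heat-bath transitions forming a closed walk (a small numpy verification, with an illustrative row-good table of our choosing):

```python
import numpy as np

# The three moves on the 3x2 subsquare, written as +/-1 update matrices.
M1 = np.array([[-1, +1], [0, 0], [+1, -1]])
M2 = np.array([[+1, -1], [-1, +1], [0, 0]])
M3 = np.array([[0, 0], [+1, -1], [-1, +1]])

Y = np.array([[1, 0], [1, 0], [0, 1]])   # row-good: y11, y21, y32 > 0
steps = [Y]
for M in (M1, M2, M3):
    steps.append(steps[-1] + M)

assert (steps[-1] == Y).all()                 # the walk is closed
assert all((S >= 0).all() for S in steps)     # intermediate tables are valid
assert all((S.sum(axis=0) == Y.sum(axis=0)).all() and
           (S.sum(axis=1) == Y.sum(axis=1)).all() for S in steps)
```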
We can define 5-tuples $(i_1,i_2,j_1,j_2,j_3)$ which are \emph{column-good for $X$}
in the analogous way, and say that
$X$ is column-good if there is a 5-tuple which is column-good for $X$. If $X$ is column-good then
taking the transpose of each matrix in the sequence of transitions above defines
an odd walk $w_X$ of length 3 from $X$ to $X$.
Finally, suppose that $X\in\Omega_{\mathbf{r},\mathbf{c}}$ is not row-good and
is not column-good.
Such an $X$ is said to be \emph{bad}.
Then no row or column of $X$ contains more than one positive entry. Since all row and column
sums are positive, it follows that $m=n\geq 3$ and that every row and column contains exactly one
positive entry.
Let $(i_1,i_2,i_3,j_1,j_2,j_3)$ be the lexicographically-least 6-tuple such that
\begin{itemize}
\item $i_1,i_2,i_3$ are distinct elements of $[m]$,
\item $j_1,j_2,j_3$ are distinct elements of $[n]$,
\item $x_{i_1,j_1} \geq 2$, while $x_{i_2,j_2}$ and $x_{i_3,j_3}$ are positive.
\end{itemize}
(The conditions on $\mathbf{r}$ and $\mathbf{c}$ guarantee that such a 6-tuple exists.) Consider the following sequence
of 5 transitions, performed on the $3\times 3$ subsquare defined by rows $i_1,i_2,i_3$ and columns
$j_1,j_2,j_3$:
\begin{align*}
\begin{pmatrix} y_{1,1} & 0 & 0\\ 0 & y_{2,2} & 0\\ 0 & 0 & y_{3,3}\end{pmatrix}
& \Longrightarrow \,\,
\begin{pmatrix} y_{1,1}-1 & 1 & 0\\ 1 & y_{2,2}-1 &0\\ 0 & 0 & y_{3,3}\end{pmatrix} \,\, \Longrightarrow \,\,
\begin{pmatrix} y_{1,1}-1 & 1 & 0\\ 0 & y_{2,2}-1 &1\\ 1 & 0 & y_{3,3}-1\end{pmatrix} \\
& \Longrightarrow \,\,
\begin{pmatrix} y_{1,1}-2 & 1 & 1\\ 1 & y_{2,2}-1 &0\\ 1 & 0 & y_{3,3}-1\end{pmatrix} \,\, \Longrightarrow \,\,
\begin{pmatrix} y_{1,1}-1& 0 & 1\\ 0 & y_{2,2} &0\\ 1 & 0 & y_{3,3}-1\end{pmatrix} \\
& \Longrightarrow \,\,
\begin{pmatrix} y_{1,1}& 0 & 0\\ 0 & y_{2,2} &0\\ 0 & 0 & y_{3,3}\end{pmatrix}.
\end{align*}
This defines a walk $w_X$ of length 5 from $X$ to $X$ in the graph underlying the chain.
Now we must analyse the set
$\mathcal{W} = \{ w_X : X\in\Omega_{\mathbf{r},\mathbf{c}}\}$ of odd walks
defined above.
Let $e=(Z,Z')$ be a transition of the contingency chain.
Then $Z$ and $Z'$ only differ in a $2\times 2$
subsquare defined by rows $i,i'$ and columns $j,j'$.
First we seek row-good $X$ with $e\in w_X$. Let $i''\not\in\{i,i'\}$
be another row index, and fix one of
the 6 ways to arrange $(i,i',i'',j,j')$ as $(i_1,i_2,i_3,j_1,j_2)$. This gives enough information to uniquely
identify a potential candidate for $X$. For example, if the transition $e$ involves rows $i_1$ and $i_3$ then
$X=Z$, while if the transition $e$ involves rows $i_2$ and $i_3$ then $X=Z'$. If $e$ involves rows $i_1$ and $i_2$
then $e$ is the second transition in the sequence, and $X$ can be obtained from $Z$ by reversing
the first transition in the sequence: namely, adding 1 to entries $(i_1,j_1)$ and $(i_3,j_2)$ and
subtracting 1 from entries $(i_1,j_2)$ and $(i_3,j_1)$. If $X$ is a valid contingency table then
$(i_1,i_2,i_3,j_1,j_2)$ is row-good for $X$. If it is the lexicographically
least such 5-tuple for $X$ then $e\in w_X$.
This identifies at most $12(m-2)$ tables $X$ such that $e\in w_X$.
(This is an overcount, but good enough for our purposes.)
By choosing a third column index $j''\not\in\{j,j'\}$, an analogous argument shows that there are at most
$12(n-2)$ column-good tables $X$ with $e\in w_X$.
Finally, we seek bad tables $X$ such that $e\in w_X$.
Choose a row index $i''\not\in\{ i,i'\}$ and a column index
$j''\not\in\{j,j'\}$, and fix one of the at most
36 ways to arrange $(i,i',i'',j,j',j'')$ as $(i_1,i_2,i_3,j_1,j_2,j_3)$. Now
each transition in the sequence alters a different $2\times 2$ subsquare except the first and fourth,
which both alter rows $i_1,i_2$ and columns $j_1,j_2$. Hence, arguing as above, there are at
most two choices for $X$, for each fixed 6-tuple. This gives at most $72(m-2)(n-2)$ bad tables
$X$ such that $e\in w_X$.
Combining all this, we find that the congestion parameter $\eta(\mathcal{W})$ satisfies
\[ \eta(\mathcal{W}) \leq \binom{m}{2}\,\binom{n}{2}\, \left(36(m-2)+36(n-2)+360(m-2)(n-2)\right)
\leq 90\, m^3n^3,
\]
and applying Lemma~\ref{lazy} completes the proof.
\end{proof}
Again we observe that this bound on $(1+\lambda_{N-1})^{-1}$ is several orders
of magnitude lower than the best-known
bound on the second-largest eigenvalue~\cite{CDGJM}.
\begin{remark}
It has recently been shown~\cite{GU} that the contingency chain described above has no negative eigenvalues.
We include Lemma~\ref{non-lazy-contingency} here to illustrate an application of Lemma~\ref{lazy}
involving walks of length greater than one.
\end{remark}
% Source: https://arxiv.org/abs/1212.4551, "Condition Numbers of Random Toeplitz and Circulant Matrices"
\section{Introduction}\label{sintro}
It is well known that random matrices tend to be well conditioned
\cite{D88}, \cite{E88}, \cite{ES05},
\cite{CD05}, \cite{SST06}, \cite{B11},
and this property can be exploited for advancing
matrix computations (see e.g.,
\cite{PGMQ}, \cite{PIMR10}, \cite{PQ10}, \cite{PQ12},
\cite{PQa}, \cite{PQZa},
\cite{PQZb}, \cite{PQZC},
\cite{PY09}).
Exploiting matrix structure in these
applications was supported empirically in the latter papers
and formally in \cite{T11}. An important step
in this direction is the estimation of the condition
numbers of
structured
matrices
stated as a challenge in \cite{SST06}.
We reply to this challenge
by estimating the condition numbers
of Gaussian random Toeplitz and circulant
matrices both formally
(see Sections \ref{scgrtm} and \ref{scgrcm})
and experimentally
(see Tables \ref{nonsymtoeplitz}--\ref{tabcondcirc}).
Our study shows that
Gaussian random Toeplitz and circulant
matrices do not tend to be ill conditioned,
and that the condition numbers of
Gaussian random circulant $n\times n$
matrices tend to grow extremely slowly as $n$ grows large.
Our numerical tests
(the contribution of the second author)
are in good accordance
with our formal estimates,
except that circulant matrices tended to be even
better conditioned in the tests than according to our formal
study.
Our results on Toeplitz matrices are
quite surprising because
the condition numbers
grow exponentially in $n$ as $n\rightarrow \infty$
for some large and important classes
of $n\times n$
Toeplitz matrices \cite{BG05},
which is the opposite of the behavior of
Gaussian random Toeplitz $n\times n$ matrices
as we proved and consistently observed in our tests.
Clearly, our study of
Toeplitz matrices can be equally applied to Hankel
matrices.
We organize our paper as follows.
We recall some definitions and basic results on general matrix computations
in the next
section and on Toeplitz, Hankel and circulant matrices in Section \ref{stplc}.
We define Gaussian random matrices and study their
ranks and extremal singular values in Section \ref{srvrm}.
In Sections \ref{scgrtm} and \ref{scgrcm} we extend this study
to Gaussian random
Toeplitz and circulant matrices, respectively.
In Section \ref{sexp} we cover numerical tests,
which constitute the contribution of the second author.
In Section \ref{simprel} we recall some applications
of random circulant and Toeplitz matrices, which
provide some implicit empirical
support for our estimates for their condition
numbers.
We end with conclusions in Section \ref{srel}.
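The phenomenon studied below can be previewed empirically with a short numpy sketch (illustrative only, and not the tests of Section \ref{sexp}): build one Gaussian random Toeplitz matrix and one Gaussian random circulant matrix and compute their condition numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Gaussian random Toeplitz matrix: 2n-1 i.i.d. N(0,1) parameters
t = rng.standard_normal(2 * n - 1)
T = np.array([[t[(i - j) + n - 1] for j in range(n)] for i in range(n)])

# Gaussian random circulant matrix: n i.i.d. N(0,1) parameters
v = rng.standard_normal(n)
C = np.array([[v[(i - j) % n] for j in range(n)] for i in range(n)])

cond_T = np.linalg.cond(T)
cond_C = np.linalg.cond(C)

# A circulant matrix is normal, so its singular values are the moduli
# of the DFT of its first column, and its condition number follows:
d = np.abs(np.fft.fft(v))
assert np.isclose(cond_C, d.max() / d.min())
print(cond_T, cond_C)
```

Typically both condition numbers stay modest, far from the exponential growth exhibited by some structured deterministic Toeplitz families.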
\section{Some definitions and basic results}\label{sdef}
Except for Theorem \ref{thcpw}
and its application in the proof of Theorem \ref{thcircsing}
we work in the field $\mathbb R$ of real numbers.
Next we recall some customary definitions of matrix computations
\cite{GL96}, \cite{S98}.
$A^T$ is the transpose of a
matrix $A$.
$||A||_h$ is its $h$-norm for
$h=1,2,\infty$. We write $||A||$ to denote the 2-norm $||A||_2$.
We have
\begin{equation}\label{eqnorm12}
\frac{1}{\sqrt m}||A||_1\le||A||\le \sqrt n ||A||_1,~~||A||_1=||A^T||_{\infty},~~
||A||^2\le||A||_1||A||_{\infty},
\end{equation}
for an $m\times n$ matrix $A$,
\begin{equation}\label{eqnorm12inf}
||AB||_h\le ||A||_h||B||_h~{\rm for}~h=1,2,\infty~{\rm and~any~matrix~product}~AB.
\end{equation}
Define an {\em SVD} or {\em full SVD} of an $m\times n$ matrix $A$ of a rank
$\rho$ as follows,
\begin{equation}\label{eqsvd}
A=S_A\Sigma_AT_A^T.
\end{equation}
Here
$S_AS_A^T=S_A^TS_A=I_m$, $T_AT_A^T=T_A^TT_A=I_n$,
$\Sigma_A=\diag(\widehat \Sigma_A,O_{m-\rho,n-\rho})$,
$\widehat \Sigma_A=\diag(\sigma_j(A))_{j=1}^{\rho}$,
$\sigma_j=\sigma_j(A)=\sigma_j(A^T)$
is the $j$th largest singular value of a matrix $A$
for $j=1,\dots,\rho$, and we write
$\sigma_j=0$ for $j>\rho$. These values have
the minimax property
\begin{equation}\label{eqminmax}
\sigma_j=\max_{{\rm dim} (\mathbb S)=j}~~\min_{{\bf x}\in \mathbb S,~||{\bf x}||=1}~~~||A{\bf x}||,~j=1,\dots,\rho,
\end{equation}
where $\mathbb S$ denotes linear spaces \cite[Theorem 8.6.1]{GL96}.
\begin{fact}\label{faccondsub}
If $A_0$ is a
submatrix of a
matrix $A$,
then
$\sigma_{j} (A)\ge \sigma_{j} (A_0)$ for all $j$.
\end{fact}
\begin{proof}
\cite[Corollary 8.6.3]{GL96} implies
the claimed bound
where $A_0$ is any block of columns of
the matrix $A$. Transposition of a matrix and permutations
of its rows and columns do not change singular values,
and thus we can extend the bounds to
all submatrices $A_0$.
\end{proof}
$A^+=T_A\diag(\widehat \Sigma_A^{-1},O_{n-\rho,m-\rho})S_A^T$ is the Moore--Penrose
pseudo-inverse of the matrix $A$ of (\ref{eqsvd}), and
\begin{equation}\label{eqnrm+}
||A^+||=1/\sigma_{\rho}(A)
\end{equation}
for
a matrix $A$ of a rank $\rho$.
$\kappa (A)=\frac{\sigma_1(A)}{\sigma_{\rho}(A)}=||A||~||A^+||$ is the condition
number of an $m\times n$ matrix $A$ of a rank $\rho$. Such a matrix is {\em ill conditioned}
if $\sigma_1(A)\gg\sigma_{\rho}(A)$ and is {\em well conditioned}
otherwise. See \cite{D83}, \cite[Sections 2.3.2, 2.3.3, 3.5.4, 12.5]{GL96},
\cite[Chapter 15]{H02}, \cite{KL94}, and \cite[Section 5.3]{S98}
on the estimation of the norms and condition numbers
of nonsingular matrices.
\section{Toeplitz, Hankel and $f$-circulant
matrices}\label{stplc}
A {\em Toep\-litz} $m\times n$ matrix $T_{m,n}=(t_{i-j})_{i,j=1}^{m,n}$
(resp. a {\em Hankel} matrix $H=(h_{i+j})_{i,j=1}^{m,n}$)
is defined by its first row and first (resp. last) column, that is, by
the vector $(t_h)_{h=1-n}^{m-1}$ (resp. $(h_g)_{g=2}^{m+n}$)
of dimension $m+n-1$. We write $T_n=T_{n,n}=(t_{i-j})_{i,j=1}^{n,n}$
(see equation (\ref{eqtz}) below).
${\bf e}_i$ is the $i$th coordinate vector of dimension $n$ for
$i=1,\dots,n$. The
reflection matrix
$J=J_n=({\bf e}_n~|~\dots~|~{\bf e}_1)$ is the Hankel $n\times n$ matrix
defined by its first column ${\bf e}_n$ and its last column
${\bf e}_1$. We have $J=J^T=J^{-1}$.
A lower {\em triangular Toep\-litz} $n\times n$ matrix $Z({\bf t})=(t_{i-j})_{i,j=1}^n$
(where $t_k=0$ for $k<0$)
is defined by its first column ${\bf t}=(t_h)_{h=0}^{n-1}$.
We write $Z({\bf t})^T=(Z({\bf t}))^T$.
$Z=Z_0=Z({\bf e}_2)$
is the downshift $n\times n$ matrix
(see (\ref{eqtz})). We have
$Z{\bf v}=(v_i)_{i=0}^{n-1}$ and
$Z({\bf v})=Z_0({\bf v})=\sum_{i=1}^{n}v_{i}Z^{i-1}$
for ${\bf v}=(v_i)_{i=1}^n$ and $v_0=0$,
\begin{equation}\label{eqtz}
T_n=\begin{pmatrix}t_0&t_{-1}&\cdots&t_{1-n}\\ t_1&t_0&\smallddots&\vdots\\ \vdots&\smallddots&\smallddots&t_{-1}\\ t_{n-1}&\cdots&t_1&t_0\end{pmatrix},~Z=\begin{pmatrix}
0 & & \dots & & 0\\
1 & \ddots & & & \\
\vdots & \ddots & \ddots & & \vdots \\
& & \ddots & 0 & \\
0 & & \dots & 1 & 0
\end{pmatrix}, ~Z_f=\begin{pmatrix}
0 & & \dots & & f\\
1 & \ddots & & & \\
\vdots & \ddots & \ddots & & \vdots \\
& & \ddots & 0 & \\
0 & & \dots & 1 & 0
\end{pmatrix}.
\end{equation}
Combine the equations $||Z({\bf v})||_1=||Z({\bf v})||_{\infty}=||{\bf v}||_1$
with (\ref{eqnorm12}) to obtain
\begin{equation}\label{eqttn}
||Z({\bf v})||\le ||{\bf v}||_1.
\end{equation}
\begin{theorem}\label{thgs}
Write $T_{k}=(t_{i-j})_{i,j=0}^{k-1}$ for $k=n,n+1$.
(a) Let the matrix $T_n$ be nonsingular and write
${\bf p}=T_n^{-1}{\bf e}_1$ and ${\bf q}=T_n^{-1}{\bf e}_{n}$.
If
$p_{1}={\bf e}_1^T{\bf p}\neq 0$,
then
$p_{1}T_n^{-1}=Z({\bf p})Z(J{\bf q})^T-Z(Z{\bf q})Z(ZJ{\bf p})^T.$
In parts (b) and (c) below let the matrix $T_{n+1}$ be nonsingular and write
$\widehat {\bf v}=(v_i)_{i=0}^n=T_{n+1}^{-1}{\bf e}_1$,
${\bf v}=(v_i)_{i=0}^{n-1}$, ${\bf v}'=(v_i)_{i=1}^{n}$,
$\widehat {\bf w}=(w_i)_{i=0}^n=T_{n+1}^{-1}{\bf e}_{n+1}$,
${\bf w}=(w_i)_{i=0}^{n-1}$, and ${\bf w}'=(w_i)_{i=1}^{n}$.
(b) If $v_0\neq 0$, then the matrix $T_n$ is nonsingular and
$v_0T_n^{-1}=Z({\bf v})Z(J{\bf w'})^T-Z({\bf w})Z(J{\bf v}')^T$.
(c) If $v_n\neq 0$, then the matrix $T_{1,0}=(t_{i-j})_{i=1,j=0}^{n,n-1}$ is nonsingular and
$v_nT_{1,0}^{-1}=Z({\bf w})Z(J{\bf v'})^T-Z({\bf v})Z(J{\bf w}')^T$.
\end{theorem}
\begin{proof}
See \cite{GS72} on parts (a) and (b); see \cite{GK72} on part (c).
\end{proof}
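Part (a) of Theorem \ref{thgs} can be verified numerically (an illustrative numpy check on one random $6\times 6$ Toeplitz matrix; the helper \texttt{LTZ} builds $Z({\bf v})$ and is our own naming):

```python
import numpy as np

def LTZ(v):
    """Lower triangular Toeplitz matrix Z(v) with first column v."""
    n = len(v)
    return np.array([[v[i - j] if i >= j else 0.0 for j in range(n)]
                     for i in range(n)])

rng = np.random.default_rng(2)
n = 6
t = rng.standard_normal(2 * n - 1)
T = np.array([[t[(i - j) + n - 1] for j in range(n)] for i in range(n)])

J = np.eye(n)[::-1]                       # reflection matrix
Z = np.diag(np.ones(n - 1), -1)           # downshift matrix
p = np.linalg.solve(T, np.eye(n)[:, 0])   # T^{-1} e_1
q = np.linalg.solve(T, np.eye(n)[:, -1])  # T^{-1} e_n

# p_1 T^{-1} = Z(p) Z(Jq)^T - Z(Zq) Z(ZJp)^T
lhs = p[0] * np.linalg.inv(T)
rhs = LTZ(p) @ LTZ(J @ q).T - LTZ(Z @ q) @ LTZ(Z @ J @ p).T
assert np.allclose(lhs, rhs)
```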
$Z_f=Z+f{\bf e}_1{\bf e}_n^T$ for a scalar $f\neq 0$
denotes the
$n\times n$ matrix of
$f$-{\em circular shift} (see (\ref{eqtz})).
An $f$-{\em circulant matrix} $Z_f({\bf v})=\sum_{i=1}^{n}v_iZ_f^{i-1}$
is a special Toep\-litz $n\times n$ matrix defined by its first column vector
${\bf v}=(v_i)_{i=1}^{n}$ and a scalar $f$.
An $f$-circulant matrix is called {\em circulant} if $f=1$ and {\em skew circulant} if $f=-1$.
By replacing $f$ with $0$ we arrive at a lower triangular
Toep\-litz matrix $Z({\bf v})$.
The following theorem implies that the inverses
(wherever they are defined) and pairwise products of
$f$-circulant $n\times n$ matrices are $f$-circulant and can be computed
in $O(n\log n)$ flops.
\begin{theorem}\label{thcpw} (See \cite{CPW74}.)
We have
$Z_1({\bf v})=\Omega^{-1}D(\Omega{\bf v})\Omega.$
More generally, for any $f\ne 0$, we have
$Z_{f^n}({\bf v})=U_f^{-1}D(U_f{\bf v})U_f$
where
$U_f=\Omega D({\bf f}),~~{\bf f}=(f^i)_{i=0}^{n-1}$,
$D({\bf u})=\diag(u_i)_{i=0}^{n-1}$ for a vector ${\bf u}=(u_i)_{i=0}^{n-1}$,
$\Omega=(\omega_n^{ij})_{i,j=0}^{n-1}$ is the $n\times n$ matrix of the
discrete Fourier transform at $n$ points,
$\omega_n={\rm exp}(\frac{2\pi}{n}\sqrt{-1})$ being
a primitive $n$-th root of $1$, and $\Omega^{-1}=\frac{1}{n}(\omega_n^{-ij})_{i,j=0}^{n-1}=\frac{1}{n}\Omega^H$.
\end{theorem}
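For $f=1$ the factorization of Theorem \ref{thcpw} can be checked directly (an illustrative numpy sketch with the DFT matrix built from the paper's convention $\omega_n=\exp(\frac{2\pi}{n}\sqrt{-1})$):

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
v = rng.standard_normal(n)

i = np.arange(n)
omega = np.exp(2j * np.pi / n)                 # primitive n-th root of 1
Omega = omega ** np.outer(i, i)                # (omega_n^{ij})_{i,j=0}^{n-1}

# Z_1(v) = sum_{k=1}^n v_k Z_1^{k-1} is the circulant with first column v
Z1 = np.zeros((n, n))
Z1[1:, :-1] = np.eye(n - 1)                    # the downshift Z ...
Z1[0, -1] = 1.0                                # ... made 1-circular
C = sum(v[k] * np.linalg.matrix_power(Z1, k) for k in range(n))

assert np.allclose(C[:, 0], v)                 # first column is v
assert np.allclose(C, np.linalg.inv(Omega) @ np.diag(Omega @ v) @ Omega)
```

In practice one multiplies by $\Omega$ and $\Omega^{-1}$ via the FFT in $O(n\log n)$ flops instead of forming these matrices.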
{\em Hankel} $m\times n$ matrices $H=(h_{i+j})_{i,j=1}^{m,n}$ can be
defined equivalently
as the products $H=TJ_n$ or $H=J_mT$ of
$m\times n$ Toep\-litz
matrices $T$ and the Hankel reflection matrices $J=J_m$ or $J_n$.
Note that $J=J^{-1}=J^T$ and obtain the following simple fact.
\begin{fact}\label{fath}
For $m=n$ we have
$T=HJ$, $H^{-1}=JT^{-1}$ and $T^{-1}=JH^{-1}$ if $H=TJ$,
whereas $T=JH$, $H^{-1}=JT^{-1}$ and $T^{-1}=H^{-1}J$ if $H=JT$.
Furthermore in both cases $\kappa (H)=\kappa (T)$.
\end{fact}
By using the equations above we can
readily extend any Toep\-litz matrix inversion algorithm
to Hankel
matrix inversion
and vice versa,
preserving the flop count and condition numbers.
E.g. $(JT)^{-1}=T^{-1}J$, $(TJ)^{-1}=JT^{-1}$,
$(JH)^{-1}=H^{-1}J$ and $(HJ)^{-1}=JH^{-1}$.
\section{Gaussian random matrices and their
ranks}\label{srvrm}
\begin{definition}\label{defcdf}
$F_{\gamma}(y)=$ Probability$\{\gamma\le y\}$ (for a real random variable $\gamma$)
is the {\em cumulative
distribution function (cdf)} of $\gamma$ evaluated at $y$.
$F_{g(\mu,\sigma)}(y)=\frac{1}{\sigma\sqrt {2\pi}}\int_{-\infty}^y \exp (-\frac{(x-\mu)^2}{2\sigma^2}) dx$
for a Gaussian random variable $g(\mu,\sigma)$ with a mean $\mu$ and a positive variance $\sigma^2$,
and so
\begin{equation}\label{eqnormal}
\mu-4\sigma\le g(\mu,\sigma) \le \mu+4\sigma~{\rm with ~a ~probability ~near ~1}.
\end{equation}
\end{definition}
\begin{definition}\label{defrndm}
A matrix (or a vector) is a {\em Gaussian random matrix (or vector)} with a mean
$\mu$ and a positive variance $\sigma^2$ if it is filled with
independent identically distributed Gaussian random
variables, all having the mean $\mu$ and variance $\sigma^2$.
$\mathcal G_{\mu,\sigma}^{m\times n}$ is the set of such
Gaussian random $m\times n$ matrices
(which are {\em standard} for $\mu=0$
and $\sigma^2=1$). By restricting this set
to Toeplitz or $f$-circulant matrices we obtain the sets
$\mathcal T_{\mu,\sigma}^{m\times n}$ and
$\mathcal Z_{f,\mu,\sigma}^{n\times n}$ of
{\em Gaussian random Toep\-litz}
and {\em Gaussian random $f$-circulant matrices},
respectively.
\end{definition}
\begin{definition}\label{defchi}
$\chi_{\mu,\sigma,n}(y)$ is the cdf of the norm
$||{\bf v}||=(\sum_{i=1}^n v_i^2)^{1/2}$ of a Gaussian random vector
${\bf v}=(v_i)_{i=1}^n\in \mathcal G_{\mu,\sigma}^{n\times 1}$. For
$y\ge 0$ we have
$\chi_{0,1,n}(y)= \frac {2}{2^{n/2}\Gamma(n/2)}\int_{0}^yx^{n-1}\exp(-x^2/2) dx$
where $\Gamma(h)=\int_0^{\infty}x^{h-1}\exp(-x) dx$, $\Gamma (n+1)=n!$ for nonnegative integers $n$.
\end{definition}
The total degree of a multivariate monomial is the sum of its degrees
in all its variables. The total degree of a polynomial is the maximal total degree of
its monomials.
\begin{lemma}\label{ledl} \cite{DL78}, \cite{S80}, \cite{Z79}.
For a set $\Delta$ of a cardinality $|\Delta|$ in any fixed ring
let a polynomial in $m$ variables have a total degree $d$ and let it not vanish
identically on this set. Then the polynomial vanishes in at most
$d|\Delta|^{m-1}$ points.
\end{lemma}
We assume that Gaussian random variables range
over infinite sets $\Delta$,
usually over the real line or an interval of it. Then
the lemma implies that a nonzero polynomial vanishes with probability 0.
Consequently a square Gaussian random general, Toeplitz or circulant
matrix is nonsingular
with probability 1
because its determinant is a polynomial
in the entries.
Likewise rectangular
Gaussian random general, Toeplitz and circulant
matrices have full rank with probability 1.
Hereafter,
wherever this causes no confusion,
we assume by default that
{\em Gaussian random
general, Toeplitz and circulant
matrices have full rank}.
\section{Extremal singular values of Gaussian random matrices}\label{ssvrm}
Besides having full rank with probability 1,
Gaussian random matrices in Definition \ref{defrndm} are likely to be well conditioned
\cite{D88}, \cite{E88}, \cite{ES05}, \cite{CD05}, \cite{B11}, and
even the sum $M+A$ for $M\in \mathbb R^{m\times n}$ and
$A\in \mathcal G_{\mu,\sigma}^{m\times n}$ is likely to
be well conditioned unless the ratio $\sigma/||M||$ is small
or large
\cite{SST06}.
The following theorem
states an upper bound
proportional to $y$ on
the cdf $F_{1/||A^+||}(y)$, that is
on the probability that the
smallest positive singular value $1/||A^+||=\sigma_l(A)$ of a Gaussian random matrix $A$
is less than a nonnegative scalar $y$ (cf. (\ref{eqnrm+}))
and consequently on the probability that the norm $||A^+||$
exceeds a positive scalar $x$.
The stated bound still holds if we replace the matrix $A$ by
$A-B$ for any fixed matrix $B$, and
for $B=O_{m,n}$
the bounds
can be strengthened
by a factor $y^{|m-n|}$ \cite{ES05}, \cite{CD05}.
\begin{theorem}\label{thsiguna}
Suppose
$A\in \mathcal G_{\mu,\sigma}^{m\times n}$,
$B\in \mathbb R^{m\times n}$,
$l=\min\{m,n\}$, $x>0$, and $y\ge 0$.
Then
$F_{\sigma_l(A-B)}(y)\le 2.35~\sqrt l y/\sigma$,
that is
${\rm Probability}\{||(A-B)^+||\ge 2.35x\sqrt {l}/\sigma\}\le 1/x$.
\end{theorem}
\begin{proof}
For $m=n$ this is \cite[Theorem 3.3]{SST06}. Apply
Fact \ref{faccondsub} to extend it to any pair $\{m,n\}$.
\end{proof}
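The linear decay of the cdf in Theorem \ref{thsiguna} (taken with $B=O$) is easy to probe by simulation; the following sketch is illustrative only, and the checkpoints $y$ are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma, trials = 50, 1.0, 200
# smallest singular value of 200 independent standard Gaussian matrices
smin = np.array([
    np.linalg.svd(rng.standard_normal((n, n)), compute_uv=False)[-1]
    for _ in range(trials)
])

# Theorem: P{sigma_n(A) <= y} <= 2.35 sqrt(n) y / sigma.  Compare the
# empirical cdf with this linear upper bound at a few points.
checks = {y: ((smin <= y).mean(), 2.35 * np.sqrt(n) * y / sigma)
          for y in (0.02, 0.05)}
for y, (empirical, bound) in checks.items():
    assert empirical <= bound
```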
The following two theorems supply lower bounds
$F_{||A||}(z)$ and
$F_{\kappa (A)}(y)$
on the probabilities
that $||A||\le z$
and $\kappa(A)\le y$ for two scalars $y$ and $z$,
respectively,
and a Gaussian random matrix $A$.
We do not use the second theorem, but state it for the sake of completeness
and only for square $n\times n$ matrices $A$.
The theorems
imply that
the functions
$1-F_{||A||}(z)$
and
$1-F_{\kappa (A)}(y)$
decay as
$z\rightarrow \infty$ and
$y\rightarrow \infty$, respectively,
and that the two decays are exponential in $-z^2$ and proportional
to $\sqrt{\log y}/y$, respectively.
For small values of $y\sigma$ and a fixed $n$
the lower bound of Theorem \ref{thmsiguna}
becomes negative, in which case
the theorem becomes trivial.
Unlike Theorem \ref{thsiguna}, in both theorems we assume that $\mu=0$.
\begin{theorem}\label{thsignorm} \cite[Theorem II.7]{DS01}.
Suppose $A\in \mathcal G_{0,\sigma}^{m\times n}$,
$h=\max\{m,n\}$ and
$z\ge 2\sigma\sqrt h$.
Then $F_{||A||}(z)\ge 1- \exp(-(z-2\sigma\sqrt h)^2/(2\sigma^2))$, and so
the norm $||A||$ is likely to have order $\sigma\sqrt h$.
\end{theorem}
\begin{theorem}\label{thmsiguna} \cite[Theorem 3.1]{SST06}.
Suppose
$0<\sigma\le 1$,
$y\ge 1$,
$A\in \mathcal G_{0,\sigma}^{n\times n}$. Then the matrix $A$
has full rank with
probability $1$ and
$F_{\kappa (A)}(y)\ge 1-(14.1+4.7\sqrt{(2\ln y)/n})n/ (y\sigma)$.
\end{theorem}
\begin{proof}
See \cite[the proof of Lemma 3.2]{SST06}.
\end{proof}
\section{Extremal singular values of Gaussian random Toeplitz matrices}\label{scgrtm}
A matrix
$T_n=(t_{i-j})_{i,j=1}^n$
is the sum of two triangular
Toeplitz matrices
\begin{equation}\label{eqt2tt}
T_n= Z({\bf t})+Z({\bf t_-})^T,~{\bf t}=(t_{i})_{i=0}^{n-1},~{\bf t}_-=(t'_{-i})_{i=0}^{n-1},~
t'_0=0.
\end{equation}
If $T_n\in \mathcal T_{\mu,\sigma}^{n\times n}$, then
$T_n$ has $2n-1$ pairwise independent entries in $\mathcal G_{\mu,\sigma}$. Thus
(\ref{eqttn}) implies that
$$ ||T_n||\le ||Z({\bf t})||+||Z({\bf t_-})^T||\le
||{\bf t}||_1+||{\bf t_-}||_1= ||(t_{i})_{i=1-n}^{n-1}||_1\le \sqrt {2n-1}~||(t_{i})_{i=1-n}^{n-1}||.$$
\noindent Recall Definition \ref{defrndm} and obtain
\begin{equation}\label{eqtn}
F_{||T_n||}(y)\ge \chi_{\mu,\sigma,2n-1}(y/\sqrt {2n-1}).
\end{equation}
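Bound (\ref{eqtn}) rests on the inequality $||T_n||\le\sqrt{2n-1}\,||(t_{i})_{i=1-n}^{n-1}||$ derived above, which a quick numpy check confirms for one sample (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
t = rng.standard_normal(2 * n - 1)                 # (t_i)_{i=1-n}^{n-1}
T = np.array([[t[(i - j) + n - 1] for j in range(n)] for i in range(n)])

spectral = np.linalg.norm(T, 2)                    # ||T_n||
bound = np.sqrt(2 * n - 1) * np.linalg.norm(t)     # sqrt(2n-1) ||t||
assert spectral <= bound
```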
Next we estimate
the norm $||T_n^{-1}||$
for
$T_{n}\in \mathcal T_{\mu,\sigma}^{n\times n}$.
\begin{lemma}\label{leinp} \cite[Lemma A.2]{SST06}.
For a nonnegative scalar $y$, a unit vector ${\bf t}\in \mathbb R^{n\times 1}$, and a vector
${\bf b}\in \mathcal G_{\mu,\sigma}^{n\times 1}$,
we have
$F_{|{\bf t}^T{\bf b}|}(y)\le \sqrt{\frac{2}{\pi}}\frac{y}{\sigma}$.
\end{lemma}
\begin{remark}\label{reinp}
The latter bound is independent of $\mu$ and $n$;
it holds for any $\mu$ even if
all coordinates of the vector ${\bf b}$ are fixed except for a
single coordinate in $\mathcal G_{\mu,\sigma}$.
\end{remark}
\begin{theorem}\label{thsigunat1}
Given a matrix
$T_{n}=(t_{i-j})_{i,j=1}^n\in \mathcal T_{\mu,\sigma}^{n\times n}$,
assumed to be nonsingular (cf. Section \ref{srvrm}),
write
$p_{1}={\bf e}_1^TT_n^{-1}{\bf e}_1$.
Then $||p_{1}T_n^{-1}||\le 2n/(\alpha \beta)$
for two random variables $\alpha$ and $\beta$
such that
\begin{equation}\label{eqprtinv}
F_{\alpha}(y)\le \sqrt{\frac{2n}{\pi}}\frac{y}{\sigma}~{\rm and}~
F_{\beta}(y)\le \sqrt{\frac{2n}{\pi}}\frac{y}{\sigma}~{\rm for}~y\ge 0.
\end{equation}
\end{theorem}
\begin{proof}
Recall from part (a) of Theorem \ref{thgs} that
$p_{1}T_n^{-1}=Z({\bf p})Z(J{\bf q})^T-Z(Z{\bf q})Z(ZJ{\bf p})^T$.
Therefore
$||p_{1}T_n^{-1}||\le ||Z({\bf p})||~||Z(J{\bf q})^T||+||Z(Z{\bf q})||~||Z(ZJ{\bf p})^T||$
for ${\bf p}=T_n^{-1}{\bf e}_1$, ${\bf q}=T_n^{-1}{\bf e}_n$, and $p_1={\bf p}^T{\bf e}_1$.
It follows that
$||p_{1}T_n^{-1}||\le ||Z({\bf p})||~||Z(J{\bf q})||+||Z(Z{\bf q})||~||Z(ZJ{\bf p})||$
since $||A||=||A^T||$ for all matrices $A$.
Furthermore
$||p_{1}T_n^{-1}||\le||{\bf p}||_1~||J{\bf q}||_1+||Z{\bf q}||_1~||ZJ{\bf p}||_1$
due to (\ref{eqttn}).
Clearly $||J{\bf v}||_1=||{\bf v}||_1$ and $||Z{\bf v}||_1\le ||{\bf v}||_1$
for every vector ${\bf v}$, and so (cf. (\ref{eqnorm12}))
\begin{equation}\label{eqtpq}
||p_{1}T_n^{-1}||\le 2 ||{\bf p}||_1~||{\bf q}||_1\le 2n ||{\bf p}||~||{\bf q}||.
\end{equation}
By definition the vector ${\bf p}$ is orthogonal
to the vectors $T_n{\bf e}_2,\dots,T_n{\bf e}_n$,
whereas ${\bf p}^TT_n{\bf e}_1=1$ (cf. \cite{SST06}).
Consequently the vectors $T_n{\bf e}_2,\dots,T_n{\bf e}_n$
uniquely define the vector
${\bf u}={\bf p}/||{\bf p}||$,
whereas
$|{\bf u}^TT_n{\bf e}_1|=1/||{\bf p}||$.
The last coordinate $t_{n-1}$ of the vector $T_n{\bf e}_1$
is independent of the vectors $T_n{\bf e}_2,\dots,T_n{\bf e}_n$
and consequently of the vector ${\bf u}$.
Apply
Remark \ref{reinp} to estimate the cdf of the random
variable $\alpha=1/||{\bf p}||=|{\bf u}^TT_n{\bf e}_1|$
and obtain that
$F_{\alpha}(y)\le \sqrt{\frac{2n}{\pi}}\frac{y}{\sigma}$ for $y\ge 0$.
Likewise the $n-1$ column vectors $T_n{\bf e}_1,\dots,T_n{\bf e}_{n-1}$
define the vector ${\bf v}=\beta{\bf q}$ for
$\beta=1/||{\bf q}||=|{\bf v}^TT_n{\bf e}_n|$.
The first coordinate $t_{1-n}$ of the vector $T_n{\bf e}_n$
is independent of the vectors $T_n{\bf e}_1,\dots,T_n{\bf e}_{n-1}$
and consequently of the vector ${\bf v}$.
Apply
Remark \ref{reinp} to
estimate the cdf of the random
variable $\beta$ and obtain that
$F_{\beta}(y)\le \sqrt{\frac{2n}{\pi}}\frac{y}{\sigma}$ for $y\ge 0$.
Finally combine these bounds on the cdfs $F_{\alpha}(y)$ and
$F_{\beta}(y)$ with (\ref{eqtpq}).
\end{proof}
By applying parts (b) and (c)
of Theorem \ref{thgs} instead of its part (a),
we similarly
deduce the
bounds $||v_0T_{n+1}^{-1}||\le \frac{2(n+1)}{\alpha\beta}$ and
$||v_nT_{n+1}^{-1}||\le \frac{2(n+1)}{\alpha\beta}$
for two pairs of random variables $\alpha$ and $\beta$
that
satisfy (\ref{eqprtinv}) with $n+1$ replacing $n$.
We have $p_{1}=\frac{\det T_{n-1}}{\det T_{n}}$,
$v_0=\frac{\det T_n}{\det T_{n+1}}$, and
$v_n=\frac{\det T_{0,1}}{\det T_{n+1}}$
for $T_{0,1}=(t_{i-j})_{i=0,j=1}^{n-1,n}$.
Next we bound the
geometric means
of the
ratios
$|\frac{\det T_{h+1}}{\det T_{h}}|$
for
$h=1,\dots,k-1$.
The quantities $1/|p_1|$ and $1/|v_0|$
are such ratios for $k=n-1$ and $k=n$,
respectively,
whereas the ratio $1/|v_n|$ is similar to
$1/|v_0|$ under slightly different notation.
\begin{theorem}\label{thhdmr}
Let $T_h\neq O$ denote $h\times h$ matrices
for $h=1,\dots,k$
whose entries have absolute values at most $t$
for a fixed scalar or random variable $t$, e.g. for $t=||T||$.
Furthermore let $T_1=(t)$.
Then the geometric mean $(\prod_{h=1}^{k-1}|\frac{\det T_{h+1}}{\det T_{h}}|)^{1/(k-1)}=\big(\frac{|\det T_{k}|}{t}\big)^{1/(k-1)}$
is at most $k^{\frac{1}{2}(1+\frac{1}{k-1})}t$.
\end{theorem}
\begin{proof}
The theorem follows from
Hadamard's upper bound
$|\det M|\le k^{k/2}t^k$, which holds
for any $k\times k$ matrix $M=(m_{i,j})_{i,j=1}^k$
with $\max_{i,j=1}^k|m_{i,j}|\le t$.
\end{proof}
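The two bounds in the proof are easy to verify numerically. The sketch below (an illustration, not part of the paper) checks Hadamard's bound $|\det M|\le k^{k/2}t^k$ and the resulting geometric-mean bound for a few random matrices $M$ playing the role of $T_k$, with $T_1=(t)$ and $t$ the largest absolute entry.

```python
import numpy as np

rng = np.random.default_rng(1)
checks = []
for k in (2, 5, 9):
    M = rng.uniform(-2.0, 2.0, size=(k, k))
    t = float(np.abs(M).max())
    detM = abs(float(np.linalg.det(M)))
    # Hadamard's bound |det M| <= k^{k/2} t^k
    checks.append(detM <= k ** (k / 2) * t ** k)
    # geometric-mean bound: (|det T_k|/t)^{1/(k-1)} <= k^{(1/2)(1+1/(k-1))} t
    checks.append((detM / t) ** (1.0 / (k - 1))
                  <= k ** (0.5 * (1 + 1.0 / (k - 1))) * t)
```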
The theorem says that
the geometric mean of the ratios $|\det T_{h+1}/\det T_{h}|$
for
$h=1,\dots,k-1$
is not greater than $k^{0.5+\epsilon(k)}t$
where $\epsilon(k)\rightarrow 0$ as $k\rightarrow \infty$.
Furthermore if
$T_n\in \mathcal T_{\mu,\sigma}^{n\times n}$
we can write
$t=||T||$
and
apply (\ref{eqtn}) to bound the cdf of $t$.
\section{Extremal singular values of Gaussian random circulant matrices}\label{scgrcm}
Next we estimate the norms of a random Gaussian $f$-circulant matrix
and its inverse.
\begin{theorem}\label{thcircsing}
Assume $y\ge 0$ and a circulant $n\times n$ matrix $T=Z_1({\bf v})$
for ${\bf v}\in \mathcal G_{\mu,\sigma}^{n\times 1}$. Then
(a) $F_{||T||}(y)\ge \chi_{\mu,\sigma,n} (\sqrt {\frac{2}{n}}y)$
for $\chi_{\mu,\sigma,n}(y)$ in Definition \ref{defchi} and
(b) $F_{1/||T^{-1}||}(y)\le \sqrt{\frac{2}{\pi}} \frac{ny}{\sigma}$.
\end{theorem}
\begin{proof}
For the matrix $T=Z_1({\bf v})$
we have both equation (\ref{eqt2tt}) and the bound
$||{\bf t_-}||_1\le||{\bf t}||_1$,
and so $||T||_1\le 2||{\bf t}||_1$. Now
part (a) of the theorem follows similarly to (\ref{eqtn}).
To prove part (b)
recall
Theorem \ref{thcpw} and write
$B=\Omega T\Omega^{-1}=D({\bf u})$,
${\bf u}=(u_i)_{i=0}^{n-1}=\Omega {\bf v}$. We have
$\sigma_j(T)=\sigma_j(B)$ for all $j$ because
$\frac{1}{\sqrt n}\Omega$
and $\sqrt n\Omega^{-1}$ are unitary matrices.
By combining the equations $u_i={\bf e}_i^T\Omega{\bf v}$, the bounds
$||\Re ({\bf e}_i^T\Omega)||\ge 1$ for all $i$,
and Lemma \ref{leinp}, deduce that
$F_{|\Re (u_i)|}(y)\le \sqrt{\frac{2}{\pi}} \frac{y}{\sigma}$
for $i=1,\dots,n$.
We have $F_{\sigma_n(B)}(y)=F_{\min_i|u_i|}(y)$ because
$B=\diag(u_i)_{i=0}^{n-1}$, and clearly $|u_i|\ge |\Re (u_i)|$.
Therefore $F_{1/||T^{-1}||}(y)=F_{\min_i|u_i|}(y)\le \sum_{i=1}^n F_{|\Re (u_i)|}(y)\le \sqrt{\frac{2}{\pi}} \frac{ny}{\sigma}$.
\end{proof}
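The diagonalization used in the proof is easy to check numerically: the eigenvalues of a circulant matrix $Z_1({\bf v})$ are the DFT of its first column, so for this normal matrix the singular values are the absolute values of the DFT. The following NumPy sketch (illustrative only; the dimension and seed are arbitrary) confirms this and computes the condition number from the DFT.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
v = rng.standard_normal(n)
# Column j of the circulant Z_1(v) is v cyclically shifted down by j positions.
C = np.column_stack([np.roll(v, j) for j in range(n)])
lam = np.fft.fft(v)          # eigenvalues of a circulant matrix: the DFT of v
sv = np.sort(np.linalg.svd(C, compute_uv=False))
cond = float(sv[-1] / sv[0])  # kappa(C) = sigma_1 / sigma_n
```

In particular, the condition number of a circulant matrix can be computed in $O(n\log n)$ time via the FFT, without forming the matrix.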
\begin{remark}\label{retcond}
Our extensive experiments suggest that
the estimates of Theorem \ref{thcircsing} are overly pessimistic
(cf. Table \ref{tabcondcirc}).
\end{remark}
Combining Theorem \ref{thcpw} with the minimax property (\ref{eqminmax}) implies that
$$\frac{1}{g(f)}\sigma_j(Z_1({\bf v}))\le \sigma_j(Z_f({\bf v}))\le g(f) \sigma_j(Z_1({\bf v}))$$
for all vectors ${\bf v}$, scalars $f\neq 0$,
$g(f)=\max\{|f|^2,{1/|f|^2}\}$, and $j=1,\dots,n$. Thus we can readily extend
the estimates of Theorem \ref{thcircsing} to $f$-circulant matrices for $f\neq 0$.
In particular Gaussian random $f$-circulant matrices
tend to be
well conditioned unless $f\approx 0$ or $1/f\approx 0$.
\section{Numerical Experiments}\label{sexp}
Our numerical experiments with random general, Hankel, Toeplitz and circulant matrices
have been performed in the Graduate Center of the City University of New York
on a Dell server with a dual core 1.86 GHz
Xeon processor and 2GB of memory running Windows Server 2003 R2. The test
Fortran code was compiled with the GNU gfortran compiler within the Cygwin
environment. Random numbers were generated with the random\_number
intrinsic Fortran function, assuming the uniform probability distribution
over the range $\{x:-1 \leq x < 1\}$. The tests have been designed by the first author
and performed by his coauthors.
We have computed the condition numbers of random general $n\times n$ matrices for
$n=2^k$, $k=5,6,\dots,$ with entries sampled in the range $[-1,1)$ as well as
complex general, Toeplitz, and circulant matrices
whose entries had real and imaginary parts sampled at random in the same range $[-1,1)$.
We performed 100 tests for each class of inputs, each dimension $n$, and each nullity $r$.
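A small-scale sketch of this experiment can be reproduced as follows (this is an illustration, not the original Fortran code: it uses NumPy, a different random number generator, fewer trials, and a single small dimension).

```python
import numpy as np

rng = np.random.default_rng(3)

def random_general(n):
    return rng.uniform(-1.0, 1.0, (n, n))

def random_toeplitz(n):
    d = rng.uniform(-1.0, 1.0, 2 * n - 1)   # one independent value per diagonal
    return np.array([[d[n - 1 + i - j] for j in range(n)] for i in range(n)])

def random_circulant(n):
    v = rng.uniform(-1.0, 1.0, n)
    return np.column_stack([np.roll(v, j) for j in range(n)])

n, trials = 64, 20
results = {}
for name, make in (("general", random_general),
                   ("toeplitz", random_toeplitz),
                   ("circulant", random_circulant)):
    conds = [float(np.linalg.cond(make(n))) for _ in range(trials)]
    # (min, mean, max, std) of the observed condition numbers
    results[name] = (min(conds), float(np.mean(conds)),
                     max(conds), float(np.std(conds)))
```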
Tables \ref{tab01}--\ref{tabcondcirc} display
the test results. The last four columns of each table
display the average (mean), minimum, maximum, and standard deviation
of the computed condition numbers of the input matrices, respectively. Namely we
computed
the values $\kappa (A)=||A||~||A^{-1}||$ for general, Toeplitz, and circulant matrices $A$ and
the values $\kappa_1 (A)=||A||_1~||A^{-1}||_1$ for Toeplitz matrices $A$.
We computed and displayed in Table \ref{tabcondtoep} the 1-norms of
Toeplitz matrices and their inverses rather than their 2-norms
to facilitate the computations in the case of inputs of large sizes.
Relationships (\ref{eqnorm12}) link
the 1-norms and 2-norms to one another, but
the empirical data in
Table \ref{nonsymtoeplitz} consistently show
even closer links
in all cases of
general, Toeplitz, and circulant $n\times n$ matrices $A$ with
$n=32,64,\dots, 1024$.
\begin{table}[h]
\caption{The norms of random general, Toeplitz and circulant $n\times n$ matrices and of their inverses}
\label{nonsymtoeplitz}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\textbf{matrix $A$}&\textbf{$n$}&\textbf{$||A||_1$}&\textbf{$||A||_2$}&\textbf{$\frac{||A||_1}{||A||_2}$}&\textbf{$||A^{-1}||_1$}&\textbf{$||A^{-1}||_2$}&\textbf{$\frac{||A^{-1}||_1}{||A^{-1}||_2}$}\\\hline
General & $32$ & $1.9\times 10^{1}$ & $1.8\times 10^{1}$ & $1.0\times 10^{0}$ & $4.0\times 10^{2}$ & $2.1\times 10^{2}$ & $1.9\times 10^{0}$ \\ \hline
General & $64$ & $3.7\times 10^{1}$ & $3.7\times 10^{1}$ & $1.0\times 10^{0}$ & $1.2\times 10^{2}$ & $6.2\times 10^{1}$ & $2.0\times 10^{0}$ \\ \hline
General & $128$ & $7.2\times 10^{1}$ & $7.4\times 10^{1}$ & $9.8\times 10^{-1}$ & $3.7\times 10^{2}$ & $1.8\times 10^{2}$ & $2.1\times 10^{0}$ \\ \hline
General & $256$ & $1.4\times 10^{2}$ & $1.5\times 10^{2}$ & $9.5\times 10^{-1}$ & $5.4\times 10^{2}$ & $2.5\times 10^{2}$ & $2.2\times 10^{0}$ \\ \hline
General & $512$ & $2.8\times 10^{2}$ & $3.0\times 10^{2}$ & $9.3\times 10^{-1}$ & $1.0\times 10^{3}$ & $4.1\times 10^{2}$ & $2.5\times 10^{0}$ \\ \hline
General & $1024$ & $5.4\times 10^{2}$ & $5.9\times 10^{2}$ & $9.2\times 10^{-1}$ & $1.1\times 10^{3}$ & $4.0\times 10^{2}$ & $2.7\times 10^{0}$ \\ \hline
Toeplitz & $32$ & $1.8\times 10^{1}$ & $1.9\times 10^{1}$ & $9.5\times 10^{-1}$ & $2.2\times 10^{1}$ & $1.3\times 10^{1}$ & $1.7\times 10^{0}$ \\ \hline
Toeplitz & $64$ & $3.4\times 10^{1}$ & $3.7\times 10^{1}$ & $9.3\times 10^{-1}$ & $4.6\times 10^{1}$ & $2.4\times 10^{1}$ & $2.0\times 10^{0}$ \\ \hline
Toeplitz & $128$ & $6.8\times 10^{1}$ & $7.4\times 10^{1}$ & $9.1\times 10^{-1}$ & $1.0\times 10^{2}$ & $4.6\times 10^{1}$ & $2.2\times 10^{0}$ \\ \hline
Toeplitz & $256$ & $1.3\times 10^{2}$ & $1.5\times 10^{2}$ & $9.0\times 10^{-1}$ & $5.7\times 10^{2}$ & $2.5\times 10^{2}$ & $2.3\times 10^{0}$ \\ \hline
Toeplitz & $512$ & $2.6\times 10^{2}$ & $3.0\times 10^{2}$ & $8.9\times 10^{-1}$ & $6.9\times 10^{2}$ & $2.6\times 10^{2}$ & $2.6\times 10^{0}$ \\ \hline
Toeplitz & $1024$ & $5.2\times 10^{2}$ & $5.9\times 10^{2}$ & $8.8\times 10^{-1}$ & $3.4\times 10^{2}$ & $1.4\times 10^{2}$ & $2.4\times 10^{0}$ \\ \hline
Circulant & $32$ & $1.6\times 10^{1}$ & $1.8\times 10^{1}$ & $8.7\times 10^{-1}$ & $9.3\times 10^{0}$ & $1.0\times 10^{1}$ & $9.2\times 10^{-1}$ \\ \hline
Circulant & $64$ & $3.2\times 10^{1}$ & $3.7\times 10^{1}$ & $8.7\times 10^{-1}$ & $5.8\times 10^{0}$ & $6.8\times 10^{0}$ & $8.6\times 10^{-1}$ \\ \hline
Circulant & $128$ & $6.4\times 10^{1}$ & $7.4\times 10^{1}$ & $8.6\times 10^{-1}$ & $4.9\times 10^{0}$ & $5.7\times 10^{0}$ & $8.5\times 10^{-1}$ \\ \hline
Circulant & $256$ & $1.3\times 10^{2}$ & $1.5\times 10^{2}$ & $8.7\times 10^{-1}$ & $4.7\times 10^{0}$ & $5.6\times 10^{0}$ & $8.4\times 10^{-1}$ \\ \hline
Circulant & $512$ & $2.6\times 10^{2}$ & $3.0\times 10^{2}$ & $8.7\times 10^{-1}$ & $4.5\times 10^{0}$ & $5.4\times 10^{0}$ & $8.3\times 10^{-1}$ \\ \hline
Circulant & $1024$ & $5.1\times 10^{2}$ & $5.9\times 10^{2}$ & $8.7\times 10^{-1}$ & $5.5\times 10^{0}$ & $6.6\times 10^{0}$ & $8.3\times 10^{-1}$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\caption{The condition numbers $\kappa (A)$ of random $n\times n$ matrices $A$}
\label{tab01}
\begin{center}
\begin{tabular}{| c | c | c | c | c | c |}
\hline
\bf{$n$}&\bf{input} & \bf{min} &\bf{max} &\bf{mean} &\bf{std} \\ \hline
$ 32 $ & $ {\rm real} $ & $2.4\times 10^{1}$ & $1.8\times 10^{3}$ & $2.4\times 10^{2}$ & $3.3\times 10^{2}$ \\ \hline
$ 64 $ & $ {\rm real} $ & $4.6\times 10^{1}$ & $1.1\times 10^{4}$ & $5.0\times 10^{2}$ & $1.1\times 10^{3}$ \\ \hline
$ 128 $ & $ {\rm real} $ & $1.0\times 10^{2}$ & $2.7\times 10^{4}$ & $1.1\times 10^{3}$ & $3.0\times 10^{3}$ \\ \hline
$ 256 $ & $ {\rm real} $ & $2.4\times 10^{2}$ & $8.4\times 10^{4}$ & $3.7\times 10^{3}$ & $9.7\times 10^{3}$ \\ \hline
$ 512 $ & $ {\rm real} $ & $3.9\times 10^{2}$ & $7.4\times 10^{5}$ & $1.8\times 10^{4}$ & $8.5\times 10^{4}$ \\ \hline
$ 1024 $ & $ {\rm real} $ & $8.8\times 10^{2}$ & $2.3\times 10^{5}$ & $8.8\times 10^{3}$ & $2.4\times 10^{4}$ \\ \hline
$ 2048 $ & $ {\rm real} $ & $2.1\times 10^{3}$ & $2.0\times 10^{5}$ & $1.8\times 10^{4}$ & $3.2\times 10^{4}$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\caption{The condition numbers $\kappa_1 (A)=||A||_1~||A^{-1}||_1$ of random Toeplitz
$n\times n$ matrices $A$}
\label{tabcondtoep}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{$n$}&\textbf{min}&\textbf{mean}&\textbf{max}&\textbf{std}\\\hline
$256$ & $9.1\times 10^{2}$ & $9.2\times 10^{3}$ & $1.3\times 10^{5}$ & $1.8\times 10^{4}$ \\ \hline
$512$ & $2.3\times 10^{3}$ & $3.0\times 10^{4}$ & $2.4\times 10^{5}$ & $4.9\times 10^{4}$ \\ \hline
$1024$ & $5.6\times 10^{3}$ & $7.0\times 10^{4}$ & $1.8\times 10^{6}$ & $2.0\times 10^{5}$ \\ \hline
$2048$ & $1.7\times 10^{4}$ & $1.8\times 10^{5}$ & $4.2\times 10^{6}$ & $5.4\times 10^{5}$ \\ \hline
$4096$ & $4.3\times 10^{4}$ & $2.7\times 10^{5}$ & $1.9\times 10^{6}$ & $3.4\times 10^{5}$ \\ \hline
$8192$ & $8.8\times 10^{4}$ & $1.2\times 10^{6}$ & $1.3\times 10^{7}$ & $2.2\times 10^{6}$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\caption{The condition numbers $\kappa (A)$ of random circulant $n\times n$ matrices $A$}
\label{tabcondcirc}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{$n$}&\textbf{min}&\textbf{mean}&\textbf{max}&\textbf{std}\\\hline
$256$ & $9.6\times 10^{0}$ & $1.1\times 10^{2}$ & $3.5\times 10^{3}$ & $4.0\times 10^{2}$ \\ \hline
$512$ & $1.4\times 10^{1}$ & $8.5\times 10^{1}$ & $1.1\times 10^{3}$ & $1.3\times 10^{2}$ \\ \hline
$1024$ & $1.9\times 10^{1}$ & $1.0\times 10^{2}$ & $5.9\times 10^{2}$ & $8.6\times 10^{1}$ \\ \hline
$2048$ & $4.2\times 10^{1}$ & $1.4\times 10^{2}$ & $5.7\times 10^{2}$ & $1.0\times 10^{2}$ \\ \hline
$4096$ & $6.0\times 10^{1}$ & $2.6\times 10^{2}$ & $3.5\times 10^{3}$ & $4.2\times 10^{2}$ \\ \hline
$8192$ & $9.5\times 10^{1}$ & $3.0\times 10^{2}$ & $1.5\times 10^{3}$ & $2.5\times 10^{2}$ \\ \hline
$16384$ & $1.2\times 10^{2}$ & $4.2\times 10^{2}$ & $3.6\times 10^{3}$ & $4.5\times 10^{2}$ \\ \hline
$32768$ & $2.3\times 10^{2}$ & $7.5\times 10^{2}$ & $5.6\times 10^{3}$ & $7.1\times 10^{2}$ \\ \hline
$65536$ & $2.4\times 10^{2}$ & $1.0\times 10^{3}$ & $1.2\times 10^{4}$ & $1.3\times 10^{3}$ \\ \hline
$131072$ & $3.9\times 10^{2}$ & $1.4\times 10^{3}$ & $5.5\times 10^{3}$ & $9.0\times 10^{2}$ \\ \hline
$262144$ & $6.3\times 10^{2}$ & $3.7\times 10^{3}$ & $1.1\times 10^{5}$ & $1.1\times 10^{4}$ \\ \hline
$524288$ & $8.0\times 10^{2}$ & $3.2\times 10^{3}$ & $3.1\times 10^{4}$ & $3.7\times 10^{3}$ \\ \hline
$1048576$ & $1.2\times 10^{3}$ & $4.8\times 10^{3}$ & $3.1\times 10^{4}$ & $5.1\times 10^{3}$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Implicit empirical support of the estimates of
Sections \ref{scgrtm} and \ref{scgrcm}}\label{simprel}
The papers \cite{PQa} and \cite{PQZa} describe successful
applications of randomized
circulant and Toeplitz multipliers to some fundamental matrix computations.
These applications were bound to fail if the multipliers were ill conditioned,
and so the success
gives some implicit empirical support to our probabilistic estimates of
Sections \ref{scgrtm} and \ref{scgrcm} and motivates the effort
for proving these estimates.
Namely, it is well known that Gaussian elimination with no pivoting fails numerically
when the input matrix has an ill conditioned leading block,
even if the matrix itself is nonsingular and well conditioned.
In our extensive tests in \cite{PQa} and \cite{PQZa}
we consistently fixed this problem by means of multiplication
by random circulant matrices. This implies
that the random circulant matrices tend to be nonsingular and well conditioned,
for otherwise the products would be singular or ill conditioned.
Likewise in other tests in \cite{PQZa} the column sets of
the products $A^TG$ of an $n\times m$ matrix $A^T$ having a numerical
rank $\rho$ and
random Toeplitz $m\times \rho$ multipliers $G$ consistently
approximated some bases for the singular spaces
associated with the $\rho$ largest singular values of the matrix $A$,
and this was readily extended to computing a rank-$\rho$ approximation
of the matrix $A$,
which is a fundamental task of matrix computations
\cite{HMT11}. Then again one can immediately observe that
these tests would have failed numerically
if the multipliers, and consequently the products, were ill conditioned.
\section{Conclusions}\label{srel}
Estimating the condition numbers of random structured matrices
is a well known challenge (cf. \cite{SST06}).
We deduce such estimates for Gaussian random Toeplitz and circulant
matrices.
The former estimates can be surprising because
the condition numbers
grow exponentially in $n$ as $n\rightarrow \infty$
for some large and important classes
of $n\times n$
Toeplitz matrices \cite{BG05},
whereas we prove the opposite for
Gaussian random Toeplitz matrices.
Our formal estimates are in good accordance
with our numerical tests,
except that in the tests the circulant matrices tended to be even
better conditioned than our formal
study predicts.
The study of the condition number of Hankel matrices
is immediately reduced to the study for Toeplitz matrices
and vice versa. Can
our progress be extended to other important classes
of structured matrices?
$~$
{\bf Acknowledgements:}
Our research has been supported by NSF Grant CCF--1116736 and
PSC CUNY Awards 64512--0042 and 65792--0043.
| {
"timestamp": "2012-12-20T02:01:00",
"yymm": "1212",
"arxiv_id": "1212.4551",
"language": "en",
"url": "https://arxiv.org/abs/1212.4551",
"abstract": "Estimating the condition numbers of random structured matrices is a well known challenge, linked to the design of efficient randomized matrix algorithms. We deduce such estimates for Gaussian random Toeplitz and circulant matrices. The former estimates can be surprising because the condition numbers grow exponentially in n as n grows to infinity for some large and important classes of n-by-n Toeplitz matrices, whereas we prove the opposit for Gaussian random Toeplitz matrices. Our formal estimates are in good accordance with our numerical tests, except that circulant matrices tend to be even better conditioned according to the tests than according to our formal study.",
"subjects": "Numerical Analysis (math.NA); Probability (math.PR)",
    "title": "Condition Numbers of Random Toeplitz and Circulant Matrices"
} |
https://arxiv.org/abs/2007.13155 | Revisiting a sharpened version of Hadamard's determinant inequality | Hadamard's determinant inequality was refined and generalized by Zhang and Yang in [Acta Math. Appl. Sinica 20 (1997) 269-274]. Some special cases of the result were rediscovered recently by Rozanski, Witula and Hetmaniok in [Linear Algebra Appl. 532 (2017) 500-511]. We revisit the result in the case of positive semidefinite matrices, giving a new proof in terms of majorization and a complete description of the conditions for equality in the positive definite case. We also mention a block extension, which makes use of a result of Thompson in the 1960s. |
\section{Introduction}
Perhaps the best known determinantal inequality in mathematical sciences is the Hadamard inequality (e.g., \cite[p. 505]{HJ13}) which says that
if $A=(a_{ij})$ is an $n\times n$ (Hermitian) positive definite matrix, then
\begin{eqnarray}\label{h1} \det A\le a_{11}\cdots a_{nn},\end{eqnarray}
and equality holds if and only if $A$ is diagonal.
Two decades ago, Zhang and Yang obtained an elegant sharpening of the Hadamard inequality. They proved it for a more general class of matrices and included a term involving off-diagonal entries of the matrix.
Let $A$ be an $n\times n$ complex matrix. For a non-empty proper subset $G$ of $\{1,\dots,n\}$, let $A[G]$ denote the principal submatrix of $A$ formed by discarding the $i$th row and column of $A$ for each $i\notin G$. We say $A$ is an $F$-matrix if for all such $G$, $\det(A[G])\ge0$ and $\det(A)\le\det(A[G])\det(A[G^C])$. That is, all principal minors of $A$ are non-negative and they satisfy a Fischer-type inequality. Standard results show that the Hermitian $F$-matrices are exactly the positive semi-definite matrices.
\begin{thm}\label{ZYmain} (Zhang-Yang \cite{ZY97}) If $A=(a_{ij})$ is an $F$-matrix, then $a_{ij}a_{ji}\ge0$ for all $i,j$ and, if $\sigma$ is a non-trivial permutation of $\{1,\dots,n\}$, then
\begin{equation}\label{maxZY}
\det(A)+\Big(\prod_{i=1}^na_{i,\sigma(i)}a_{\sigma(i),i}\Big)^{1/2}\le \prod_{i=1}^na_{ii}.
\end{equation}
\end{thm}
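A quick numerical check of (\ref{maxZY}) for a Hermitian $F$-matrix, i.e. a positive semi-definite matrix, can be sketched as follows (this is an illustration with an arbitrary random matrix and an arbitrary $n$-cycle $\sigma$, not part of the original proof).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
G = rng.standard_normal((n, n))
A = G @ G.T                               # a real positive semidefinite matrix
sigma = np.roll(np.arange(n), 1)          # a non-trivial permutation (an n-cycle)
extra = np.sqrt(np.prod([A[i, sigma[i]] * A[sigma[i], i] for i in range(n)]))
lhs = float(np.linalg.det(A)) + float(extra)
rhs = float(np.prod(np.diag(A)))
ok = lhs <= rhs * (1 + 1e-10)             # tiny tolerance for floating point
```

For a Hermitian $A$ the products $a_{i,\sigma(i)}a_{\sigma(i),i}=|a_{i,\sigma(i)}|^2$ are automatically non-negative, so the square root is well defined.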
Without noticing the work of Zhang and Yang in \cite{ZY97}, which was written in Chinese, Rozanski, Witula and Hetmaniok recently rediscovered some special cases of (\ref{maxZY}) in \cite{RWH2017}. For this reason, we think it worthwhile to bring the nice result of Theorem \ref{ZYmain} to the attention of the linear algebra community.
Besides Theorem \ref{ZYmain}, \cite{ZY97} also contains necessary and sufficient conditions for an $F$-matrix $A$ to satisfy the equation $\det(A)=a_{11}\dots a_{nn}$. The same was done for the equation $\operatorname{permanent}(A)=a_{11}\dots a_{nn}$. However, Zhang and Yang did not give conditions for equality to hold in (\ref{maxZY}). Conditions for equality were included by Rozanski, et al. for the special cases considered in \cite{RWH2017}.
There are multiple known ways to prove the Hadamard inequality (\ref{h1}) in the literature (see, e.g., \cite{Hol07, HJ13, MO82}). We recall that one insightful way of seeing the Hadamard inequality is via Schur's majorization inequality (see Lemma \ref{lem1}). For a quick summary of the intimate connection between majorization and determinant inequalities, we refer to \cite{Lin17}.
Our initial motivation was that since the original Hadamard inequality (\ref{h1}) is immediate from majorization (see, e.g. \cite[p. 44]{And94}, \cite[p. 67]{Zha13}), it would be nice if Theorem \ref{ZYmain} could also be seen from that perspective. In this note, we give a new proof of Theorem \ref{ZYmain} for positive semi-definite matrices using majorization techniques. Our results include necessary and sufficient conditions for equality to hold when the matrix $A$ is positive definite.
Before proceeding, let us fix some notation. For a vector $x\in\mathbb{R}^n$, we denote by $x^{\downarrow}=(x_1^{\downarrow}, \ldots, x_n^{\downarrow})\in\mathbb{R}^n$ the vector with the same components as $x$, but sorted in nonincreasing order. Given $x, y\in \mathbb{R}^n$, we say that $x$ majorizes $y$ (or $y$ is majorized by $x$), written as $x\succ y$, if
$$
\sum_{i=1}^k x_i^{\downarrow} \geq \sum_{i=1}^k y_i^{\downarrow} \quad \text{for } k=1,\dots, n-1
$$
and equality holds at $k=n$.
Three basic facts about majorization are given below. The first is a matrix characterization, the next is Schur's majorization inequality, and the last is a consequence for the elementary symmetric functions. A {\it doubly stochastic matrix} is a square matrix with non-negative entries and all row and column sums equal to 1.
\begin{lemma}\label{lem0} \cite[p. 253]{HJ13} If $x$ and $y$ are real row vectors then $x$ majorizes $y$ if and only if there exists a doubly stochastic matrix $S$ such that $y=xS$.
\end{lemma}
\begin{lemma}\label{lem1} \cite[p. 249]{HJ13} The eigenvalues of a Hermitian matrix majorize its diagonal entries. \end{lemma}
Fix a positive integer $n$ and let $e_k(x)$, $k=1, 2, \ldots, n$, denote the $k$th elementary symmetric function in the $n$ variables $x_1, \ldots, x_n$. See \cite[p.114]{MOA11}. By convention, $e_0(x)=1$.
\begin{lemma}\label{lem2} \cite[p.115]{MOA11} Let $x, y\in [0,\infty)^n$. If $n\ge2$ and $x\succ y$, then $e_k(x)\le e_k(y)$ for $k=0, 1, \ldots, n$. If $k>1$ and $x, y\in (0,\infty)^n$ then equality holds if and only if $x^\downarrow=y^\downarrow$. \end{lemma}
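The majorization order and the Schur concavity of the $e_k$ in Lemma \ref{lem2} can both be checked directly; the following short NumPy sketch (an illustration with hand-picked vectors, not part of the paper) computes the $e_k$ as the coefficients of $\prod_i(1+x_iz)$.

```python
import numpy as np

def majorizes(x, y, tol=1e-12):
    """Return True if x majorizes y (x > y in the majorization order)."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return (np.all(np.cumsum(xs)[:-1] >= np.cumsum(ys)[:-1] - tol)
            and abs(xs.sum() - ys.sum()) <= tol)

def elem_sym(x):
    """Coefficients c with c[k] = e_k(x), via prod_i (1 + x_i z)."""
    c = np.array([1.0])
    for xi in x:
        c = np.convolve(c, [1.0, xi])
    return c

x = np.array([4.0, 1.0, 1.0])
y = np.array([3.0, 2.0, 1.0])
# x majorizes y, so e_k(x) <= e_k(y) for every k, as Lemma lem2 asserts.
```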
\section{A new proof of Theorem \ref{ZYmain} and more}
In this section, $\Lambda$, $V$ and $B$ will be as follows: Let $\Lambda$ be a diagonal matrix with non-negative diagonal entries $\lambda_1,\dots,\lambda _n$. Let $V=(v_{ij})$ be an $n\times n$ matrix whose rows and columns are all unit vectors, that is, all diagonal entries of $V^*V$ and $VV^*$ are equal to $1$. Set $B=(b_{ij})=V^*\Lambda V$.
It is important to point out that, in general, $\prod_{i=1}^n\lambda_i\ne \det(B)$, although they are equal when $V$ is unitary.
\begin{lemma}\label{2.L} Let $P(t)=\prod_{i=1}^n(\lambda_{i}-t)$ and $Q(t)=\prod_{i=1}^n(b_{ii}-t)$. Fix $s<t\le\min(\lambda_1,\dots,\lambda_n)$. Then $P(t)\le Q(t)$ and $P(s)-P(t)\le Q(s)-Q(t)$. If $n\ge3$ and $P(s)-P(t)= Q(s)-Q(t)$ then $(b_{11},\dots,b_{nn})$ is a permutation of $(\lambda_1,\dots,\lambda_n)$.
\end{lemma}
\begin{proof}
The conditions on $V$ show that the matrix $S=(|v_{ij}|^2)$ is doubly stochastic and a calculation shows that $(b_{11}-t,\dots,b_{nn}-t)=(\lambda_1-t,\dots,\lambda _n-t)S$.
By Lemma \ref{lem0} and Lemma \ref{lem2},
$$
e_k(\lambda_1-t,\dots,\lambda _n-t)\le e_k(b_{11}-t,\dots,b_{nn}-t)
$$
for $k=0,\dots,n$. In particular,
$$
P(t)=e_n(\lambda_1-t,\dots,\lambda _n-t)\le e_n(b_{11}-t,\dots,b_{nn}-t)=Q(t).
$$
Also,
\begin{align*}
P(s)-P(t)&=\prod_{i=1}^n(\lambda_i-t+t-s)-\prod_{i=1}^n(\lambda_i-t)\\
&=\sum_{k=0}^{n-1}e_k(\lambda_1-t,\dots,\lambda _n-t)(t-s)^{n-k}\\
&\le\sum_{k=0}^{n-1}e_k(b_{11}-t,\dots,b_{nn}-t)(t-s)^{n-k} \\
&=\prod_{i=1}^n(b_{ii}-t+t-s)- \prod_{i=1}^n(b_{ii}-t)
=Q(s)-Q(t).
\end{align*}
If $n\ge3$ and $P(s)-P(t)= Q(s)-Q(t)$, then the above estimate reduces to equality throughout, which implies $e_2(\lambda_1-t,\dots,\lambda _n-t)=e_2(b_{11}-t,\dots,b_{nn}-t)$. But $e_2$ is strictly Schur concave on all of $\Bbb R^n$. (See \cite[A.4]{MOA11} to prove concavity and then \cite[A.3.a]{MOA11} to prove strict concavity.) Thus $(b_{11}-t,\dots,b_{nn}-t)$ is a permutation of $(\lambda_1-t,\dots,\lambda_n-t)$ and therefore $(b_{11},\dots,b_{nn})$ is a permutation of $(\lambda_1,\dots,\lambda_n)$.
\end{proof}
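The key computation opening the proof, that $S=(|v_{ij}|^2)$ is doubly stochastic and $(b_{11},\dots,b_{nn})=(\lambda_1,\dots,\lambda_n)S$, is easy to verify numerically; the sketch below (illustrative only) takes $V$ unitary, the simplest case satisfying the row and column conditions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V, _ = np.linalg.qr(G)                 # a unitary matrix V
lam = rng.uniform(0.0, 3.0, n)         # non-negative diagonal of Lambda
B = V.conj().T @ np.diag(lam) @ V      # B = V* Lambda V
S = np.abs(V) ** 2                     # S = (|v_ij|^2), doubly stochastic
# diag(B) equals (lambda_1,...,lambda_n) S, row by row of the proof.
```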
\begin{thm} If $\lambda_1,\dots,\lambda_n$ are non-negative, then
\begin{equation}\label{H+}
\prod_{i=1}^n\lambda_i\le\prod_{i=1}^n b_{ii}.
\end{equation}
If $n\ge3$ and $\lambda_1,\dots,\lambda_n$ are strictly positive and distinct then equality holds if and only if $V$ has exactly one non-zero entry in each row and column.
\end{thm}
\begin{proof} Lemma \ref{2.L} shows that $P(0)\le Q(0)$, which is (\ref{H+}). Note that $V$ has exactly one non-zero entry in each row and column if and only if $S=(|v_{ij}|^2)$ is a permutation matrix. In that case, since $(b_{11},\dots,b_{nn})=(\lambda_1,\dots,\lambda _n)S$, (\ref{H+}) holds with equality.
Now suppose $\lambda_1,\dots,\lambda_n$ are strictly positive and distinct. If equality holds in (\ref{H+}), that is, if $e_n(\lambda_1,\dots,\lambda _n)=e_n(b_{11},\dots,b_{nn})$, then $b_{11},\dots,b_{nn}$ are also strictly positive. Therefore Lemma \ref{lem2} shows $(b_{11},\dots,b_{nn})$ is a permutation of $(\lambda_1,\dots,\lambda _n)$. It follows that there are permutation matrices $R$ and $R'$ such that $(\lambda_1^\downarrow,\dots,\lambda _n^\downarrow)=(\lambda_1^\downarrow,\dots,\lambda _n^\downarrow)R'SR$. Clearly, $R'SR=(t_{ij})$ is also a doubly stochastic matrix.
Assume that for some $i$, there exists an $m<i$ such that $t_{im}\ne0$. For this $i$ choose the largest such $m$. Then
$t_{ij}(\lambda_j^\downarrow-\lambda_i^\downarrow)=0$ for $j>m$, $t_{ij}(\lambda_j^\downarrow-\lambda_i^\downarrow)\ge0$ for $j<m$, and $t_{im}(\lambda_m^\downarrow-\lambda_i^\downarrow)>0$. This shows that
$\sum_{j=1}^nt_{ij}(\lambda_j^\downarrow-\lambda_i^\downarrow)>0$, which contradicts $(\lambda_1^\downarrow,\dots,\lambda _n^\downarrow)=(\lambda_1^\downarrow,\dots,\lambda _n^\downarrow)R'SR$. We conclude that $R'SR$ is upper triangular. It is easy to see that the only upper triangular, doubly stochastic matrix is $I$. Thus, $S$ is a permutation matrix.
\end{proof}
The following examples show that the conditions for equality may change in the non-generic cases where the $\lambda_1,\dots,\lambda_n$ are not distinct or include one or more zeros.
\begin{eg} Let $\lambda_1=\lambda_2=1$ and $V=\left(\begin{smallmatrix}i\cos\theta&i\sin\theta&0\\\sin\theta&\cos\theta&0\\0&0&-1\end{smallmatrix}\right)$ for some $\theta$. Then $B=V^*V=\left(\begin{smallmatrix}1&\sin2\theta&0\\\sin2\theta&1&0\\0&0&1\end{smallmatrix}\right)$, but $VV^*=\left(\begin{smallmatrix}1&i\sin2\theta&0\\-i\sin2\theta&1&0\\0&0&1\end{smallmatrix}\right)$. So we have equality in (\ref{H+}) but $V$ does not have only one non-zero entry in each row and column. Significantly, the doubly stochastic matrix $S=\left(\begin{smallmatrix}\cos^2\theta&\sin^2\theta&0\\\sin^2\theta&\cos^2\theta&0\\0&0&1\end{smallmatrix}\right)$ is not uniquely determined; it varies with $\theta$.
\end{eg}
\begin{eg} With $\Lambda$ and $V$ as defined above, let $\Lambda'=\left(\begin{smallmatrix}\Lambda&0\\0&0\end{smallmatrix}\right)$ and $V'=\left(\begin{smallmatrix}V&0\\0&1\end{smallmatrix}\right)$. Then we have equality in (\ref{H+}) because both sides are zero, but $V'$ does not have one non-zero entry in each row and column unless $V$ does.
\end{eg}
The conditions imposed on $V$ above are satisfied by any unitary matrix, but the unitaries are only a small subclass of the possible matrices $V$. The next example gives a large class of matrices $V$ that are not unitary.
\begin{eg}
Suppose $T=(t_{ij})$ is an $n\times n$ Hermitian matrix with spectral norm $\|T\|\le 1$ satisfying $t_{ii}=0$ for all $i$. Since $\|T\|\le1$, $I+T$ is positive semi-definite and therefore has a positive semi-definite square root $V=(I+T)^{1/2}$. Note that $V^*V=VV^*=I+T$, a matrix with ones on the diagonal. If $T$ is not zero the matrix $V$ is not unitary.
\end{eg}
On the other hand, if $V$ is unitary, the conditions on $V$ are satisfied automatically, the product $\lambda_1\cdots\lambda_n$ is the determinant of $B$, and $B$ could be any positive semi-definite matrix.
Let $S_n$ be the group of permutations of $\{1,\dots,n\}$ and let $D_n$ denote the
collection of permutations that have no fixed point. These are the so-called
derangements of $\{1,\dots,n\}$. We also let $e_i$ be the $i$th column of the $n\times n$ identity matrix so that $Ae_i$ extracts the $i$th column of $A$. We remind the reader not to confuse $e_i$ with $e_i(x)$.
\begin{thm}\label{refined} Let $n\ge 3$. If $A=(a_{ij})$ is positive semi-definite and $\tau\in D_n$, then
\begin{equation}\label{maxtau}
\det(A)+\prod_{i=1}^n|a_{i,\tau(i)}|\le \prod_{i=1}^na_{ii}.
\end{equation}
Equality holds if and only if $A$ is diagonal or the two vectors $Ae_i$ and $Ae_{\tau(i)}$ are collinear for each $i$.
\end{thm}
\begin{proof} Choose a unitary $V$ such that $A=V^*\Lambda V$, where $\lambda_1,\dots,\lambda_n$ are the (necessarily non-negative) eigenvalues of $A$. This makes $B=A$. Then set $t=\min(\lambda_1,\dots,\lambda_n)$. Lemma \ref{2.L} shows that $P(0)-P(t)\le Q(0)-Q(t)$. The choice of $t$ ensures that $P(t)=0$. Also $P(0)=\det(A)$ and $Q(0)=\prod_{i=1}^na_{ii}$. To prove (\ref{maxtau}) we use the Cauchy-Schwarz inequality to show that $Q(t)\ge\prod_{i=1}^n|a_{i,\tau(i)}|$ for each $\tau\in D_n$. Fix $\tau\in D_n$ and observe that
\begin{equation}\label{CS}\begin{aligned}
|a_{i,\tau(i)}|^2&=\bigg|\sum_{j=1}^n\bar v_{ji}\lambda_jv_{j,\tau(i)}\bigg|^2\\
&=\bigg|\sum_{j=1}^n\bar v_{ji}(\lambda_j-t)v_{j,\tau(i)}\bigg|^2\\
&\le\sum_{j=1}^n|v_{ji}|^2(\lambda_j-t)\sum_{j=1}^n|v_{j,\tau(i)}|^2(\lambda_j-t)\\
&=(a_{ii}-t)(a_{\tau(i),\tau(i)}-t).
\end{aligned}\end{equation}
Thus,
\begin{equation}\label{aCS}
\prod_{i=1}^n|a_{i,\tau(i)}|
\le\bigg(\prod_{i=1}^n(a_{ii}-t)\prod_{i=1}^n(a_{\tau(i),\tau(i)}-t)\bigg)^{1/2}=\prod_{i=1}^n(a_{ii}-t)=Q(t).
\end{equation}
Next we consider conditions for equality. If $A$ is diagonal it is clear that (\ref{maxtau}) holds with equality. Suppose the two vectors $Ae_i$ and $Ae_{\tau(i)}$ are collinear for each $i$. Then $A$ is singular, $\det(A)=0$, and $t=0$. Fix $i$ and choose $a$ and $b$, not both zero, such that $aAe_i=bAe_{\tau(i)}$. Then for each $j$, $e_j^*VA(ae_i-be_{\tau(i)})=0$. But $AV^*e_j=V^*\Lambda e_j=\lambda_jV^*e_j$ so $\lambda_je_j^*V(ae_i-be_{\tau(i)})=0$. Therefore $av_{ji}=bv_{j,\tau(i)}$ for all $j$ such that $\lambda_j\ne0$. This gives equality in (\ref{CS}). Since this holds for all $i$, we have equality in (\ref{aCS}) as well. Since $\det(A)=0$ we also have equality in (\ref{maxtau}).
Conversely, suppose equality holds in (\ref{maxtau}). The above proof shows that we must have $P(0)-P(t)= Q(0)-Q(t)$ and equality in (\ref{CS}) for all $i$. If $t>0$, Lemma \ref{2.L} shows that $(a_{11},\dots,a_{nn})$ is a permutation of $(\lambda_1,\dots,\lambda_n)$ giving equality in Hadamard's inequality. Therefore $A$ is diagonal. If $t=0$ then equality in (\ref{CS}) implies that for all $i$ there exist constants $a$ and $b$, not both zero, such that $av_{ji}=bv_{j,\tau(i)}$ for all $j$ such that $\lambda_j\ne 0$. Thus, for all $j$, $\lambda_je_j^*V(ae_i-be_{\tau(i)})=0$ and hence, as above, $e_j^*VA(ae_i-be_{\tau(i)})=0$. This holds for all $j$, and $V$ is invertible, so we conclude that $aAe_i=bAe_{\tau(i)}$, that is, $Ae_i$ and $Ae_{\tau(i)}$ are collinear.
\end{proof}
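The inequality (\ref{maxtau}) is easy to spot-check numerically. The sketch below (a NumPy illustration, not part of the original argument) draws a random positive semi-definite matrix and uses a cyclic shift as the derangement $\tau$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
G = rng.standard_normal((n, n))
A = G @ G.T                              # random positive semi-definite matrix

tau = [(i + 1) % n for i in range(n)]    # cyclic shift: a derangement of {0,...,n-1}

lhs = np.linalg.det(A) + np.prod([abs(A[i, tau[i]]) for i in range(n)])
rhs = np.prod(np.diag(A))                # Hadamard bound: product of diagonal entries

# Theorem: det(A) + prod_i |a_{i,tau(i)}| <= prod_i a_{ii}
assert lhs <= rhs * (1 + 1e-9)
```

Equality would require $A$ diagonal or the collinearity condition of the theorem, neither of which holds for a generic random draw.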
\begin{rem} The collinearity condition for equality above may be expressed in terms of matrix rank: For $J\subseteq \{1,\dots,n\}$, let $P_J$ be the orthogonal projection onto $\operatorname{span}\{e_j:j\in J\}$. Let $J^\tau_1,\dots,J^\tau_{n_\tau}$ be the orbits of $\tau\in D_n$. Then $I=\sum_{k=1}^{n_\tau}P_{J_k}$ so $A=\sum_{k=1}^{n_\tau}AP_{J_k}$. The condition that the two vectors $Ae_i$ and $Ae_{\tau(i)}$ are collinear for each $i$ is equivalent to saying that the rank of $AP_{J_k}$ is at most 1 for $k=1,\dots,n_\tau$.
\end{rem}
The next result follows from Lemma \ref{2.L} and the Cauchy-Schwarz estimates of Theorem \ref{refined}. We state it without proof.
\begin{thm} Let $\tau\in D_n$. If for all $i$, the $(i,\tau(i))$ entry of $V^*V$ is zero then
$$
\prod_{i=1}^n\lambda_i+\prod_{i=1}^n|b_{i,\tau(i)}|\le\prod_{i=1}^n b_{ii}.
$$
\end{thm}
Next is our proof of the motivating result: Theorem \ref{ZYmain} for positive semi-definite matrices, including conditions for equality.
Observe that if $n=2$, (\ref{maxtau}) reduces to equality for every $A$.
\begin{cor}\label{refinedcor} \cite{ZY97} Let $A=(a_{ij})$ be a positive semi-definite matrix, and let $\sigma\in S_n$. If $\sigma$ is not the identity permutation, then
\begin{equation}\label{maxsigma}
\det(A)+\prod_{i=1}^n|a_{i,\sigma(i)}|\le \prod_{i=1}^na_{ii}.
\end{equation}
Equality holds if $A$ is diagonal, or if for each $i$ either: $\sigma(i)=i$ and $Ae_i$, $e_i$ are collinear; $\sigma(i)\ne i$ and $\sigma$ is a transposition; or $\sigma(i)\ne i$ and $Ae_i$, $Ae_{\sigma(i)}$ are collinear. If $A$ is positive definite, these conditions are also necessary for equality.
\end{cor}
\begin{proof} Let $F$ be the set of fixed points of $\sigma$, a proper subset of $\{1,\dots,n\}$, and let $G$ be its complement. If $F$ is empty, the result follows from Theorem \ref{refined}. Note that both $A[F]$ and $A[G]$ are positive semi-definite. Fischer's inequality (see \cite[Theorem 7.8.5]{HJ13}), followed by Hadamard's inequality, gives
\begin{equation}\label{F-H}
\det(A)\le\det(A[G])\det(A[F])
\le\det(A[G])\prod_{i\in F}a_{ii}.
\end{equation}
Note that the restriction of $\sigma$ to $G$ is a permutation of $G$ with no fixed point. By Theorem \ref{refined}, we have
\begin{equation}\label{refG}
\det(A[G])+\prod_{i\in G}|a_{i,\sigma(i)}|\le\prod_{i\in G}a_{ii},
\end{equation}
provided $G$ has at least three elements. We can remove that restriction, however, because (\ref{refG}) becomes equality when $G$ has exactly two elements, it is impossible for $G$ to have exactly one element, and we have excluded the case that $G$ is empty.
Combining the last two inequalities gives (\ref{maxsigma}).
If $A$ is diagonal then we clearly have equality in (\ref{maxsigma}). Now suppose that for each $i$ either: $\sigma(i)=i$ and $Ae_i$, $e_i$ are collinear; or $\sigma(i)\ne i$ and, if $\sigma$ is not just a transposition, then $Ae_i$, $Ae_{\sigma(i)}$ are collinear. This implies that $A[F]$ is diagonal, and $A$ has (up to reordering of the standard basis) a block diagonal decomposition with blocks $A[F]$ and $A[G]$. Thus we have equality in (\ref{F-H}). The conditions for equality in Theorem \ref{refined} give equality in (\ref{refG}) when $n\ge3$ and equality is trivial when $n=2$ so we have equality in (\ref{maxsigma}).
Now suppose that $A$ is positive definite and
equality holds in (\ref{maxsigma}).
Since $\det(A)>0$, the Fischer inequality shows that $\det(A[F])>0$ and $\det(A[G])>0$. Therefore we have equality in both (\ref{F-H}) and (\ref{refG}).
Equality in the Fischer inequality from (\ref{F-H}) implies that $A$ has (up to reordering of the standard basis) a block diagonal decomposition with blocks $A[F]$ and $A[G]$ (see \cite[p. 217]{Zha11}). Equality in the Hadamard inequality from (\ref{F-H}) implies that $A[F]$ is diagonal. Together, these show that if $\sigma(i)=i$, then $Ae_i$ and $e_i$ are collinear. Equality in (\ref{refG}) implies, via Theorem \ref{refined}, that if $G$ has at least three elements and $\sigma(i)\ne i$, then $Ae_i$, $Ae_{\sigma(i)}$ are collinear. If $G$ has fewer than three elements then $\sigma$ can only be a transposition. This completes the proof.
\end{proof}
Recall that the Hadamard product ``$\circ$" is the entrywise product of matrices. So the Hadamard inequality may be written as $\det A\le \det( A \circ I)$ for a positive semi-definite $A$. Theorem \ref{refined} enables us to state the following result.
\begin{thm} \label{t4} Let $\tau\in D_n$ and let $P=(p_{ij})$ be the permutation matrix with $p_{ij}=1$ when $j=\tau(i)$ and zero otherwise.
For a positive semi-definite matrix $A=(a_{ij})$, \begin{eqnarray}
\det (A\circ I)\ge \det A+|\det (A\circ P)|.
\end{eqnarray} Equality holds if and only if the matrix $A$ is a diagonal matrix or the two vectors $Ae_i$ and $Ae_{\tau(i)}$ are collinear for each $i$.\end{thm}
\begin{rem}
We expect that Theorem \ref{t4} will stimulate further investigation of Oppenheim-Schur inequalities (see \cite[p. 509]{HJ13}). \end{rem}
In 1961, Thompson \cite{Tho61} published a remarkable determinant inequality.
\begin{thm}\label{Th}If $A=(A_{ij})$ is positive definite with each block $A_{ij}$ square, then
\begin{eqnarray}\label{tho}
\det A\le \det (\det A_{ij}).
\end{eqnarray} Equality holds if and only if $A$ is block diagonal.
\end{thm}
We point out an extension of Theorem \ref{refined} to the block matrix case.
\begin{thm}If $A=(A_{ij})$ is an $n\times n$ block positive definite matrix with each block $A_{ij}$ square, then for any derangement $\tau\in D_n$,
\begin{eqnarray*}
\det A+\prod_{i=1}^{n}|\det A_{i,\tau(i)}|\le \prod_{i=1}^{n}\det A_{ii}.
\end{eqnarray*}
Equality holds if and only if $A$ is block diagonal.
\end{thm}
\begin{proof} By (\ref{tho}), it suffices to work with the $n\times n$ positive definite matrix $(\det A_{ij})$. The conclusion then follows by Theorem \ref{t4}.
\end{proof}
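As with the scalar case, the block extension can be spot-checked numerically. The following sketch (a NumPy illustration, not code from the paper) builds a random block positive definite matrix and verifies the bound.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 3, 2                              # 3 x 3 block structure, each block 2 x 2
G = rng.standard_normal((n * k, n * k))
A = G @ G.T + np.eye(n * k)              # positive definite

def block(i, j):
    """Return the (i, j) block A_{ij} of A."""
    return A[i * k:(i + 1) * k, j * k:(j + 1) * k]

tau = [1, 2, 0]                          # a derangement of {0, 1, 2}
lhs = np.linalg.det(A) + np.prod([abs(np.linalg.det(block(i, tau[i]))) for i in range(n)])
rhs = np.prod([np.linalg.det(block(i, i)) for i in range(n)])

# det A + prod_i |det A_{i,tau(i)}| <= prod_i det A_{ii}
assert lhs <= rhs * (1 + 1e-9)
```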
\section*{Acknowledgments} Both authors are grateful to Professor Xingzhi Zhan for pointing to the early work of Xiao-Dong Zhang and Shangjun Yang on this topic. The authors also acknowledge comments from the referee which helped improve the presentation. The work of M. Lin is supported by the National Natural
Science Foundation of China (Grant No. 11601314). The work of G. Sinnamon is supported by the
Natural Sciences and Engineering Research Council of Canada.
% arXiv:2007.13155 --- Revisiting a sharpened version of Hadamard's determinant inequality
% https://arxiv.org/abs/1811.04846
% Gaussian Quadrature Rule using $\epsilon$-Quasiorthogonality

\begin{abstract}
We introduce a new type of quadrature, known as approximate Gaussian quadrature (AGQ) rules using $\epsilon$-quasiorthogonality, for the approximation of integrals of the form $\int f(x) \, \d \alpha(x)$. The measure $\alpha(\cdot)$ can be arbitrary as long as it possesses finite moments $\mu_n$ for sufficiently large $n$. The weights and nodes associated with the quadrature can be computed in low complexity, and their count is inferior to that required by classical quadratures at fixed accuracy on some families of integrands. Furthermore, we show how AGQ can be used to discretize the Fourier transform with few points in order to obtain short exponential representations of functions.
\end{abstract}

\section{Introduction}
\label{Intro}
In this paper, we present a new kind of quadrature rule for approximating integrals by sums of the form,
\begin{equation}
\label{discrete}
\int f(x) \, \d \alpha(x) \approx \sum_{i=1}^n w_i f(x_i)
\end{equation}
having the following characteristics:
\begin{enumerate}
\item The measure $\alpha( \cdot )$ can be \emph{arbitrary} (positive, signed, complex, ...) as long as it satisfies some weak condition.
\item The nodes and weights associated with the quadrature rule can be obtained in low computational complexity through a simple numerical algorithm.
\item The quadrature is at least as accurate as the Gaussian quadrature rule, and in many cases is significantly more accurate.
\item Low-order rules are able to integrate high-order polynomials with high accuracy.
\end{enumerate}
The scheme presented in the current work uses a strategy similar to classical Gaussian quadrature rules (of which a few examples can be found in Table~\ref{classicalGaussQuad}). An $n$-point Gaussian quadrature rule is designed to integrate polynomials of degree at most $2n-1$ exactly; equivalently, it reproduces the moments
\[ \int x^k \, \d \alpha(x), \qquad 0 \leq k \leq 2n-1, \]
exactly for various weight functions $\frac{\d \alpha}{\d x}$ (see Table~\ref{classicalGaussQuad}).
\begin{table}[htbp]
\caption{Examples of classical Gaussian quadratures}
\begin{center}
\begin{tabular}{ccc} \toprule
Name & Interval & Measure ($\d \alpha / \d x$ ) \\ \midrule
Gauss-Legendre & $[-1,1]$ & $1$ \\
Gauss-Laguerre & $[0, \infty)$ & $e^{-x}$ \\
Gauss-Hermite & $(-\infty, \infty)$ & $e^{-x^2} $\\
Gauss-Jacobi & $(-1,1)$ & $(1-x)^{\alpha}(1+x)^{\beta} \; , \;\; \alpha, \beta > -1 $ \\
Chebyshev-Gauss (1st kind) & $(-1,1)$ & $1/\sqrt{1-x^2}$ \\
Chebyshev-Gauss (2nd kind) & $[-1,1]$ & $\sqrt{1-x^2}$ \\
\bottomrule
\end{tabular}
\end{center}
\label{classicalGaussQuad}
\end{table}
The paper is structured as follows. In Section~\ref{GQ}, a brief overview of classical Gaussian quadratures will be presented. In Section~\ref{sec:agq}, the concept of quasiorthogonal polynomial and approximate Gaussian quadrature will be introduced together with an error analysis. This will be followed in Section~\ref{NS} by numerical results. In the same section, we will discuss representations of functions by short sums of exponentials.
\section{Gaussian quadrature}
\label{GQ}
Gaussian quadratures are schemes used to approximate definite integrals of the form,
\begin{equation*}
\int_a^b f(x) \, \d \alpha(x)
\end{equation*}
by a finite weighted sum of the form,
\begin{equation*}
\sum_{n=0}^N w_n \, f(x_n)
\end{equation*}
where $a, b \in \mathbb{R}$ with $a<b$ (possibly infinite). The coefficients $\{ w_n \}$ are generally referred to as the \emph{weights} of the quadrature, whereas the points $\{ x_n \}$ are referred to as the \emph{nodes}. An $(N+1)$-node Gaussian quadrature can integrate polynomials up to degree $2N+1$ \emph{exactly} and is generally well-suited for the integration of functions that are well approximated by polynomials.
In what follows, we will briefly describe how the nodes and weights of classical Gaussian quadratures can be obtained based on the classical theory of orthogonal polynomials. For this purpose, we shall denote the real and complex numbers by $\mathbb{R}$ and $\mathbb{C}$ respectively. $\alpha( \cdot )$ will represent an arbitrary measure (possibly complex) on $(\mathbb{R}, \mathcal{B})$ or $(\mathbb{C}, \mathcal{B})$ unless otherwise stated. Vectors are represented by lower case letters, e.g., $v$. The $i^{th}$ component of a vector $v$ will be written as $v_i$, and we shall use superscripts of the form $v^{(j)}$ when multiple vectors are under consideration.
We begin by introducing four key objects: the orthogonal polynomials, the Lagrange interpolants, the moments of a measure $\alpha(\cdot)$ and the Hankel matrix associated with such a measure.
\begin{defn}{\bf (Orthogonal polynomial)}
A sequence $\{ p^{(k)} (x) \}_{k=0}^{\infty}$ of polynomials of degree $k$ is said to be a sequence of orthogonal polynomials with respect to a positive measure $\alpha(\cdot)$ if,
\begin{equation*}
\int p^{(k)}(x) p^{(l)} (x) \, \d \alpha(x) =
\left\{
\begin{array}{ll}
0 & \mbox{if } k \not = l \\
c_k & \mbox{if } k = l
\end{array}
\right.
\end{equation*}
If in addition $c_k = 1 \; \forall k \in \mathbb{N}$, then the sequence is called \emph{orthonormal}.
\end{defn}
We shall hereafter assume that all such polynomials are monic, i.e., that they can be written as,
\begin{equation*}
p^{(k)} (x) = x^k + \sum_{n=0}^{k-1} p^{(k)}_n x^n
\end{equation*}
where $\{ p^{(k)}_n \}_{n=0}^{k-1}$ are some (potentially complex) coefficients. We then introduce Lagrange interpolants,
\begin{defn}{\bf (Lagrange interpolant)}
Given a set of $(d+1)$ data points $\{ (x_n, y_n) \}_{n=0}^d$, the Lagrange interpolant is the unique polynomial $L (x)$ of degree $d$ such that,
\begin{equation*}
L (x_n) = y_n , \quad n = 0, \dots, d
\end{equation*}
It can be written explicitly as,
\begin{equation*}
L (x) = \sum_{n=0}^d y_n\, \ell_n (x)
\end{equation*}
where,
\begin{equation*}
\ell_n (x) = \prod_{\substack{{m=0}\\ {m \not = n}}}^d \frac{x-x_m}{x_n-x_m}
\end{equation*}
and $\ell_n (x)$ is referred to as the $n^{th}$ Lagrange basis polynomial.
\end{defn}
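The defining property $\ell_n(x_m)=\delta_{nm}$, and the fact that the interpolant reproduces polynomials of degree $\le d$ exactly, can be illustrated with a short NumPy sketch (ours, for illustration only):

```python
import numpy as np

def lagrange_basis(nodes, n, x):
    """Evaluate the n-th Lagrange basis polynomial ell_n at x."""
    num = np.prod([x - xm for m, xm in enumerate(nodes) if m != n])
    den = np.prod([nodes[n] - xm for m, xm in enumerate(nodes) if m != n])
    return num / den

nodes = np.array([-1.0, 0.0, 0.5, 1.0])
d = len(nodes) - 1

# Defining property: ell_n(x_m) = 1 if n == m, else 0.
for n in range(d + 1):
    for m in range(d + 1):
        expected = 1.0 if n == m else 0.0
        assert abs(lagrange_basis(nodes, n, nodes[m]) - expected) < 1e-12

# The interpolant L(x) = sum_n y_n ell_n(x) reproduces a cubic exactly.
f = lambda t: 2 * t**3 - t + 1
x_test = 0.3
L = sum(f(nodes[n]) * lagrange_basis(nodes, n, x_test) for n in range(d + 1))
assert abs(L - f(x_test)) < 1e-12
```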
Finally we introduce the moments as well as the Hankel matrix associated with a measure $\alpha(\cdot)$,
\begin{defn}{\bf(Moment)}
Given an arbitrary measure $\alpha(\cdot)$ on $(\mathbb{R}, \mathcal{B})$, its $n^{th}$ moment $\mu_n$ is defined by the following Lebesgue integral,
\begin{equation*}
\mu_n = \int x^n \, \d \alpha(x)
\end{equation*}
whenever it exists.
\end{defn}
\begin{defn}{\bf (Hankel matrix)}
An $(N+1) \times (M+1)$ matrix $H$ is called the $(N+1) \times (M+1)$ Hankel matrix associated with the measure $\alpha( \cdot )$ if its entries take the form,
\begin{equation}
\begin{pmatrix}
\mu_0 & \mu_1 & \cdots & \mu_{M} \\
\mu_1 & \mu_2 & \cdots & \mu_{M+1}\\
\vdots &\vdots &\vdots &\vdots \\
\mu_{N} & \mu_{N+1} & \cdots & \mu_{N+M} \end{pmatrix}
\label{eq:hankel}
\end{equation}
i.e., $H_{ij} = \mu_{i+j}$, where $(\mu_0, \mu_1, \ldots, \mu_{N+M} )$ are the first $N+M+1$ moments of $\alpha(\cdot)$, whenever they exist.
\end{defn}
With these quantities we can now present the main results associated with classical Gaussian quadratures,
\begin{thm}{\bf(Gaussian quadrature)}
Consider a positive measure $\alpha(\cdot)$ on $([a,b], \mathcal{B})$ (with $a,b \in \mathbb{R}$ potentially infinite) and a sequence of orthonormal polynomials $\{ p^{(k)} (x) \}_{k=0}^{\infty}$ with respect to $\alpha( \cdot)$. Then, the quadrature rule with nodes $\{ x_n\}_{n=0}^k$ consisting of the zeros of $p^{(k+1)}(x)$ and weights $\{ w_n \}_{n=0}^k$ given by,
\begin{equation*}
w_n = \int \ell_n (x) \, \d \alpha(x)
\end{equation*}
integrates polynomials of degree $\leq 2k+1$ exactly.
\end{thm}
This is a classical result, which can be found in \cite{Meurant} for instance. Explicit expressions for the error incurred in the case of smooth integrands also exist.
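To illustrate the theorem in the Gauss-Legendre case ($\d \alpha / \d x = 1$ on $[-1,1]$), the following sketch (a NumPy illustration, not code from the paper) checks that a $(k+1)$-node rule reproduces every moment up to degree $2k+1$:

```python
import numpy as np

k = 4                                        # k + 1 = 5 nodes
x, w = np.polynomial.legendre.leggauss(k + 1)

# Exactness on all monomials of degree <= 2k + 1 = 9.
for deg in range(2 * k + 2):
    exact = 2.0 / (deg + 1) if deg % 2 == 0 else 0.0   # int_{-1}^{1} x^deg dx
    assert abs(np.sum(w * x**deg) - exact) < 1e-12
```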
To close this section, we introduce a further result characterizing the coefficients of the orthogonal polynomials $\{ p^{(k)}(x) \} $. As we shall see in the next section, this characterization lies at the heart of our scheme,
\begin{lemma}
Consider a positive measure $\alpha(\cdot)$ on $([a,b], \mathcal{B})$ (with $a,b \in \mathbb{R}$ potentially infinity) and a sequence of orthogonal polynomials $\{ p^{(k)} (x) \}_{k=0}^{\infty}$ with respect to $\alpha( \cdot)$. Then, the coefficients $\{ p^{(k+1)}_n \}_{n=0}^k$ of the $(k+1)^{th}$ orthogonal polynomial $p^{(k+1)}(x)$ satisfy the following Hankel system,
\[ Hp =
\begin{pmatrix}
\mu_0 & \mu_1 & \cdots & \mu_{k+1} \\
\mu_1 & \mu_2 & \cdots & \mu_{k+2}\\
\vdots &\vdots & &\vdots \\
\mu_{k} & \mu_{k+1} & \cdots & \mu_{2k+1}
\end{pmatrix}
\begin{pmatrix}
p_0^{(k+1)} \\
p_1^{(k+1)} \\
\vdots \\
p_{k}^{(k+1)} \\
1
\end{pmatrix} = 0\]
where $\{ \mu_n \}$ are the moments of the measure $\alpha ( \cdot ) $, whenever they exist.
\label{AGQ:characterization}
\end{lemma}
\begin{proof}
First write,
\begin{equation*}
p^{(k+1)} (x) = \sum_{n=0}^{k+1} p^{(k+1)}_n x^n
\end{equation*}
Let $0 \leq j \leq k $. Then, from orthogonality we have,
\begin{equation*}
0 = \int p^{(k+1)} (x) \, x^j \, \d \alpha(x) = \sum_{n=0}^{k+1} p^{(k+1)}_n \int x^{n+j}\, \d \alpha(x) = \sum_{n=0}^{k+1} p^{(k+1)}_n \mu_{n+j}
\end{equation*}
Putting all these equations in matrix form provides the desired result.
\end{proof}
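Lemma \ref{AGQ:characterization} can be verified numerically: building the Hankel matrix from the moments of the Lebesgue measure on $[-1,1]$ and solving for the monic polynomial recovers the Gauss-Legendre nodes. The sketch below is a NumPy illustration of ours, not part of the original text.

```python
import numpy as np

k = 3                                        # compute p^{(k+1)}, of degree 4
# Moments of the Lebesgue measure on [-1, 1]: mu_n = 2/(n+1) for even n, 0 otherwise.
mu = np.array([2.0 / (n + 1) if n % 2 == 0 else 0.0 for n in range(2 * k + 2)])

# (k+1) x (k+2) Hankel matrix with H_{ij} = mu_{i+j}.
H = np.array([[mu[i + j] for j in range(k + 2)] for i in range(k + 1)])

# Monic normalization: the leading coefficient is 1; solve for the rest.
coef = np.linalg.solve(H[:, :k + 1], -H[:, k + 1])
p = np.concatenate([coef, [1.0]])            # (p_0, ..., p_k, 1)

# The roots of p^{(k+1)} are the (k+1)-point Gauss-Legendre nodes.
roots = np.sort(np.roots(p[::-1]).real)      # np.roots expects highest degree first
x_gl, _ = np.polynomial.legendre.leggauss(k + 1)
assert np.allclose(roots, np.sort(x_gl), atol=1e-8)
```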
The Hankel matrices associated with positive measures commonly encountered in classical Gaussian quadratures have been the subject of extensive study in the past (as part of the moment problem). In some cases, they can be proved to be invertible, although extremely ill-conditioned (see \cite{Shohat} for details). Less is known about more general measures. In any case, even when the resulting Hankel matrix is invertible, it can be expected to be ill-conditioned. Indeed, it can be shown that for a large class of positive measures, the smallest eigenvalue of the associated $N \times N$ Hankel matrix scales like $\O \left (\frac{ \sqrt{N} }{\sigma^{2N}} \right ) $, where $\sigma$ depends only on the interval considered and is equal to $(1+\sqrt{2})$ for the interval $[-1,1]$ (see \cite{Widom:1966}).
The question we treat in the next section is whether such Hankel matrices arising from arbitrary measures can be used to derive Gaussian-like quadratures, and what this inherent ill-conditioning entails.
\section{Approximate Gaussian quadrature (AGQ)}
\label{sec:agq}
In this section, we describe the concept of approximate Gaussian quadrature. For this purpose, we will need the concept of $\epsilon$-quasiorthogonal polynomial, which we introduce for the first time below. Before doing so however, we first point to the following key observation.
\bigskip
\begin{thm}
Let $H$ be an $N \times M$ matrix with rank $0 < d < M $. Then, there exists $D \le d+1$ and a vector $a \not = 0$ such that
\begin{equation*}
Ha = 0, \quad \text{with $a_i = 0$ for all $i > D$}
\end{equation*}
\label{AGQ:low_rank}
\end{thm}
\begin{proof}
The rank of $H$ is $d$, so the first $d+1$ columns of $H$ are linearly dependent. Let $D$ be the smallest integer such that the first $D$ columns of $H$ are linearly dependent. Then $D \le d+1$ and, by definition, there is $a \neq 0$ such that $Ha = 0$ with $a_i = 0$, $i>D$.
\end{proof}
\bigskip
We also have the following corollary,
\begin{cor}
\label{AGQ:quasiortho_cor}
Assume that the $N \times (N+1)$ Hankel matrix $H$ associated with the measure $\alpha(\cdot)$ exists. If $H$ has rank $d<N$ then there exists a nontrivial polynomial $p(x)$ with degree $(D-1)$ where $D \leq d+1$ such that,
\begin{equation*}
\int p(x) x^j \, \d \alpha(x) = 0
\end{equation*}
for all $j = 0,$ \ldots, $N-1$.
\end{cor}
\begin{proof}
Let
\begin{equation*}
D = \inf \{ 1\leq n \leq N+1 : \mathrm{rank}( H(:, 1:n) ) < n \}
\end{equation*}
where $H(:, 1:n)$ is the matrix containing the first $n$ columns of $H$. By Theorem \ref{AGQ:low_rank}, there exists a vector $a = (a_0, \dots, a_N)^T \not = 0$ such that $Ha = 0$ and $a_n = 0$ for $n \geq D$.\\
Let $p(x)$ be the polynomial with coefficients given by $a$, i.e.
\begin{equation*}
p(x) = \sum_{n=0}^{D-1} a_n x^n
\end{equation*}
Then,
\begin{align*}
\int p(x) x^j \, \d \alpha(x) &= \int \sum_{n=0}^{D-1} a_n x^{n+j} \, \d \alpha(x)\\
&= \sum_{n=0}^{D-1} a_n \mu_{n+j} \\
& = (H a)_j = 0
\end{align*}
since $a$ belongs to the null-space of $H$.
\end{proof}
The consequences of this corollary are far-reaching and constitute the crux of the scheme presented here. Indeed, although we do not generally expect the Hankel matrix $H$ associated with some measure $\alpha( \cdot)$ to be \emph{exactly} low-rank as in the case of Theorem \ref{AGQ:low_rank} (e.g., $H$ has full rank in the case of classical Gaussian quadratures) we can expect that in some cases $H$ will be \emph{approximately} low rank. In other words, given $0 < \epsilon \ll 1 $ we expect,
\begin{equation*}
D \approx \max \{ 1 \leq i \leq N : \sigma_i > \epsilon \, \sigma_1 \}
\end{equation*}
where $\{ \sigma_i \}$ are the singular values of $H$, to be much smaller than $N$, i.e., $D \ll N$. We show for instance in Figure \ref{svd} the first $50$ singular values of the Hankel matrix ($N=250$) associated with the Lebesgue measure in $[-1,1]$. The $y$-axis scales as a logarithm in base $10$, and it is seen that the singular values decay faster than exponentially.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.9]
\begin{axis}[
ylabel={Singular value},
xlabel={Index of singular value},
ymode=log,
width=0.6\textwidth,
height=200pt,
xmin=0,
xmax=50,
ymin=1e-18,
ymax=1e2,
xtick={0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50},
ytick={1e-18, 1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2, 1e0, 1e2}]
\input{SingularValues}
\end{axis}
\end{tikzpicture}%
\end{center}
\caption{$\log_{10}$ of singular values of Hankel matrix ($N = 250$) associated with the Lebesgue measure on $[-1,1]$.}
\label{svd}
\end{figure}
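The decay shown in Figure \ref{svd} is easy to reproduce. The following sketch (ours, using NumPy and a smaller $N$ for speed) builds the Hankel matrix of the Lebesgue measure on $[-1,1]$ and confirms the fast singular-value decay.

```python
import numpy as np

# Moments of the Lebesgue measure on [-1, 1]: mu_n = int_{-1}^{1} x^n dx.
N = 100
mu = np.array([2.0 / (n + 1) if n % 2 == 0 else 0.0 for n in range(2 * N - 1)])
H = np.array([[mu[i + j] for j in range(N)] for i in range(N)])

s = np.linalg.svd(H, compute_uv=False)   # singular values, in descending order
assert s[0] > 1.0
# Super-exponential decay: by index 30 the relative singular values are far
# below any realistic quadrature tolerance (they soon hit the roundoff floor).
assert s[30] < 1e-10 * s[0]
```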
In light of the above discussion, we might expect in these circumstances the existence of a polynomial $p(x)$ of degree $D \approx \max \{ 1 \leq i \leq N : \sigma_i > \epsilon \sigma_1 \}$ such that
\begin{equation*}
\left | \int p(x) x^j \, \d \alpha(x) \right | \lesssim \epsilon
\end{equation*}
for all $0 \leq j \leq N$, and this leads us to the introduction of the concept of $\epsilon$-quasiorthogonal polynomial which we now define,
\begin{defn}
A polynomial $p(x)$ is called $\epsilon$-quasiorthogonal of order $N$ with respect to the measure $\alpha(\cdot)$ and the basis $\{ L_n (x) \}$ if,
\begin{equation*}
\left | \int p(x) \, L_n(x) \, \d \alpha(x) \right | \leq \epsilon
\end{equation*}
for all $n=0,$ \ldots, $N$.
\end{defn}
Importantly, this definition imposes no restriction on the measure $\alpha(\cdot)$, in contrast with orthogonal polynomials, which require the measure to be positive (\cite{Szego}). In this sense, the relation described is not one of orthogonality, for it is not possible to define a nondegenerate inner product unless $\alpha( \cdot )$ is positive. This is why we chose the name \emph{quasi}-orthogonal. We also note that given $\epsilon \geq \sigma_N(H)$ such a polynomial always exists, for it suffices to pick $a$ aligned with the right singular vector associated with the smallest singular value $\sigma_N(H)$.
From a computational standpoint, there exists an efficient scheme to find such polynomials given a measure $\alpha( \cdot)$ and some $\epsilon>0$; this is the subject of Section \ref{AGQ:comp}. For the remainder of this section, we focus on demonstrating how such polynomials can be used to obtain efficient quadratures. As will be shown, the construction of the scheme shares much with that of classical Gaussian quadrature, which is the origin of the name.
We will need the following technical lemma, whose proof is provided in the appendix.
\begin{lemma}
\label{AGQ:tech_lemma}
Let $\alpha(\cdot)$ be an arbitrary measure on $(\mathbb{R}, \mathcal{B})$, and
\begin{equation*}
p(x) =x^{d+1} + \sum_{n=0}^{d} p_n x^n
\end{equation*}
be a monic $\epsilon$-quasiorthogonal polynomial of degree $d+1$ and order $N$ associated with $\alpha(\cdot)$. Further, let,
\begin{equation*}
q(x) = \sum_{n=0}^{N+d} q_n \, x^n
\end{equation*}
be some polynomial of degree $N+d$, and let $ \tilde{q}(x) = \sum_{n=0}^{d} \tilde{q}_n x^n$ be the Lagrange interpolant of $q(x)$ associated with the zeros of $p(x)$. Finally, let $r(x) = \sum_{n=0}^{N} r_n x^n$ be the unique polynomial such that $ q(x) - \tilde{q}(x)= p(x) r(x) $. Then,
\begin{align*}
\sum_{n=0}^N |r_n| \leq \lVert\Gamma^{-1} \bar{q} \rVert_{1}
\end{align*}
where $\bar{q} =\left( q_{d+1}, q_{d+2}, \ldots, q_{N+d}\right )^T $ and $\Gamma$ is the $N \times N$ Toeplitz matrix such that $[ \Gamma ]_{i,j} = p_{j-i}$ if $0\leq j-i \leq d$ and $0$ otherwise.
\label{AGQ:errorlemma}
\end{lemma}
We are now ready to prove our main theorem.
\begin{thm}{\bf(Approximate Gaussian quadrature)}
\label{AGQ:AGQ_thm}
Consider an arbitrary measure $\alpha(\cdot)$ on $(\mathbb{R}, \mathcal{B})$. Let $p(x)$ be a monic $\epsilon$-quasiorthogonal polynomial of degree $d+1$ and order $N$ with respect to $\alpha( \cdot )$, where $0<\epsilon<1$. Then, the quadrature rule with nodes $\{ x_n \}_{n=0}^{d}$ consisting of the zeros of $p(x)$ and weights $\{ w_n \}_{n=0}^{d}$ given by,
\begin{equation}
\label{AGQ:AGQ_thm:weight}
w_n = \int \ell_n (x) \, \d \alpha(x)
\end{equation}
where $\ell_n (x)$ is the $n^{th}$ Lagrange basis polynomial associated with the nodes, integrates polynomials $q(x)$ of degree $\leq N+d$ with an error bounded by,
\begin{equation*}
\left | \int q(x) \, \d \alpha(x) - \sum_{n=0}^d w_n \, q(x_n) \right | \leq \lVert\Gamma^{-1} \bar{q} \rVert_{1} \, \epsilon
\end{equation*}
where $\{ q_n \}_{n=0}^{N+d}$ are the coefficients of $q(x)$, $\bar{q} =\left( q_{d+1} , q_{d+2} , \ldots, q_{N+d}\right )^T $ and $\Gamma$ is the $N \times N$ Toeplitz matrix such that $[ \Gamma ]_{i,j} = p_{j-i}$ if $0\leq j-i \leq d$ and $0$ otherwise.
\end{thm}
\begin{proof}
Let $q(x)$ be a polynomial of degree $ N + d$ and consider the Lagrange interpolant at the nodes $\{ x_n \}$,
\begin{equation*}
\tilde{q} (x) = \sum_{n=0}^d q(x_n) \ell_n (x)
\end{equation*}
Then consider,
\begin{align*}
I = \int \left [ q(x) - \tilde{q}(x) \right ] \, \d \alpha(x)
\end{align*}
The quantity $[q(x) - \tilde{q}(x)]$ is a polynomial of degree at most $(N+d)$ and has zeros located at each of the nodes $\{ x_n \}_{n=0}^d$. Therefore, by the factorization theorem for polynomials we can write,
\begin{equation*}
q(x) - \tilde{q}(x) = \prod_{n=0}^d (x - x_n) \, r(x)
\end{equation*}
where $r(x)$ is a polynomial of degree at most $N$. We further note that $\prod_{n=0}^d (x - x_n)$ is a monic polynomial of degree $d+1$ with zeros at $\{ x_n \}_{n=0}^d$ just as $p(x)$. Since monic polynomials are uniquely characterized by their roots we have,
\begin{equation*}
\prod_{n=0}^d (x - x_n) = p (x)
\end{equation*}
Therefore,
\begin{align*}
|I| &= \left | \int p(x) r(x) \, \d \alpha(x) \right | \leq \sum_{n=0}^N | r_n | \,\left | \int p(x) \, x^n \, \d \alpha(x) \right | \leq \sum_{n=0}^N | r_n | \, \epsilon
\end{align*}
where we used the $\epsilon$-quasiorthogonality of $p(x)$. Finally, thanks to Lemma \ref{AGQ:tech_lemma} we get,
\begin{equation*}
\left | \int q(x) \, \d \alpha(x) - \sum_{n=0}^d w_n \, q(x_n) \right | \leq \lVert \Gamma^{-1} \bar{q} \rVert_{1} \epsilon
\end{equation*}
\end{proof}
Interestingly, the above analysis reveals that an AGQ of order $d$ is in fact \emph{exact} for polynomials of degree $\leq d$.
One advantage of AGQ is that the measure $\alpha( \cdot )$ need not have any specific properties beyond the existence of moments of high enough order. Furthermore, the existence and uniqueness of the solution to the Hankel system is of no importance; in fact, the larger the null-space of $H$, the better.
Both characteristics are in sharp contrast with common wisdom regarding classical Gaussian quadratures. First, the positivity of the measure is key in proving the existence of a sequence of orthogonal polynomials necessary to build a classical quadrature (see \cite{Meurant}, Theorem 2.7). Secondly, the notion of orthogonality is at the heart of modern numerical schemes used to obtain nodes and weights for it gives rise to a three-term recurrence relation that is thoroughly exploited computationally (see \cite{Golub:1969, Meurant}).
\subsection{Computational considerations}
\label{AGQ:comp}
The first computational issue we address here is that of finding an adequate $\epsilon$-quasiorthogonal polynomial of order $N$, given a measure $\alpha(\cdot)$ on $(\mathbb{R}, \mathcal{B})$, some $N \in \mathbb{N}$ and some $\epsilon > 0$. For this purpose, we note that a sufficient condition for a monic polynomial $p(x)$ of degree $(d+1)$ to fall within this category is that it satisfy the following inequality,
\begin{equation*}
\lVert H(N,d) \, \bar{p} + h(d) \rVert_{\infty} \leq \epsilon
\end{equation*}
where $\bar{p} = [p_0, \, p_1 \, , \ldots, p_{d}]^T$, $p(x) = x^{d+1} + \sum_{n=0}^d p_n x^n$, $ h(d) = [\mu_{d+1}, \, \mu_{d+2} \, , \ldots, \mu_{d+N+1}]^T$ and $H(N,d)$ is the $(N+1)\times (d+1)$ Hankel matrix associated with the measure, i.e.
\[ H(N,d) =
\begin{pmatrix}
\mu_0 & \mu_1 & \cdots & \mu_{d} \\
\mu_1 & \mu_2 & \cdots & \mu_{d+1}\\
\vdots &\vdots &\vdots &\vdots \\
\mu_{N} & \mu_{N+1} & \cdots & \mu_{d+N}
\end{pmatrix}
\]
The proof is analogous to that of Corollary \ref{AGQ:quasiortho_cor} and uses the definition of quasiorthogonal polynomials.
This inequality provides a constructive way of finding an $\epsilon$-quasiorthogonal polynomial of small degree, described in Algorithm \ref{poly_alg}; note that we replace the $\lVert \cdot \rVert_{\infty}$ norm by the more computationally friendly $\lVert \cdot \rVert_2$ norm, which is equivalent.
\begin{algorithm2e}[htbp]
Let $(d+1) = \mathrm{rank}(H,\delta)$\\
Solve $ \min_p \lVert H(N,d) \, p + h(d) \rVert_2 $ \\
\While{$\lVert H(N,d) \, p + h(d) \rVert_2 > \epsilon $} {
$d = d+1$ \\
Solve $ \min_p \lVert H(N,d) \, p + h(d) \rVert_2 $
}
\caption{Pseudo-code to determine $\epsilon$-quasiorthogonal polynomial of order $N$ of small degree for desired accuracy $\delta$.}
\label{poly_alg}
\end{algorithm2e}
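A minimal NumPy sketch of Algorithm \ref{poly_alg} is given below. For simplicity it starts from $d=0$ rather than from the numerical rank of $H$; it is an illustration under these simplifications, not the authors' implementation.

```python
import numpy as np

def agq_polynomial(mu, N, eps):
    """Increase d until the least-squares residual ||H(N,d) p + h(d)||_2
    drops below eps; return the monic coefficient vector (p_0, ..., p_d, 1)."""
    d = 0
    while True:
        H = np.array([[mu[i + j] for j in range(d + 1)] for i in range(N + 1)])
        h = np.array([mu[d + 1 + i] for i in range(N + 1)])
        p, *_ = np.linalg.lstsq(H, -h, rcond=None)   # minimum-norm least squares
        if np.linalg.norm(H @ p + h) <= eps:
            return np.concatenate([p, [1.0]])
        d += 1

# Lebesgue measure on [-1, 1]: mu_n = 2/(n+1) for even n, 0 for odd n.
N, eps = 30, 1e-8
mu = np.array([2.0 / (n + 1) if n % 2 == 0 else 0.0 for n in range(2 * N + 2)])
p = agq_polynomial(mu, N, eps)
d = len(p) - 2                            # p has degree d + 1

# p is eps-quasiorthogonal of order N: |int p(x) x^j dalpha(x)| <= eps, j = 0..N.
Hfull = np.array([[mu[i + j] for j in range(d + 2)] for i in range(N + 1)])
assert np.max(np.abs(Hfull @ p)) <= eps
assert d < N                              # small degree relative to the order N
```

In practice the loop terminates at a degree far below $N$, reflecting the fast singular-value decay of the Hankel matrix.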
{\bf Note: } The quadrature obtained from $p(x)$ integrates polynomials of degree $N+d$ with error prescribed by Theorem \ref{AGQ:AGQ_thm}. This error term involves the norm of the inverse of a matrix $\Gamma$ which is upper-triangular and Toeplitz, with diagonal entries all equal to $1$ and remaining entries depending on the coefficients of the polynomial $p(x)$. In order to guarantee that an AGQ integrates polynomials of degree $\leq N+d$ with accuracy $\delta$, say, it is sufficient to set $\epsilon \leq \frac{\delta}{C}$ and constrain $p(x)$ to be such that $\lVert \Gamma^{-1} \rVert_{\infty} \leq C$ for some $C>0$. Given a characterization of the set $\mathcal{S}_C := \{ p(x) : \lVert \Gamma^{-1} \rVert_{\infty} \leq C \}$, one could carry out the steps described in Algorithm \ref{poly_alg} while restricting the solution to $\mathcal{S}_C$, and thus guarantee the accuracy of the AGQ \emph{a priori}. Unfortunately, such a characterization is not readily available, so one is left with the \emph{a posteriori} estimates of Theorem \ref{AGQ:AGQ_thm}. On the other hand, numerical experiments indicate that the product $\lVert \Gamma^{-1} \rVert_{\infty} \, \epsilon$ does decay quickly as a function of the degree of $p(x)$, for $\bar{p}$ the minimum-norm solution of the least-squares problem in Algorithm \ref{poly_alg}. In short, although AGQ in its current state performs well, some improvements are still possible; this constitutes a topic for future research.
Once such a polynomial has been obtained, its roots constitute the nodes of the approximate Gaussian quadrature as per Theorem \ref{AGQ:AGQ_thm}. The cost of solving a thin $(N+1) \times (d+1)$ least-squares problem is $\O( [(N+1) + (d+1)/3] (d+1)^2 ) $ (see \cite{Golub}). Since in general we expect $d \ll N$, the cost is \emph{linear} in $N$. Also, each step of the while loop constitutes a rank-1 update of the system, so $p$ can be recomputed cheaply.
Another computational advantage of the scheme is the availability of a simple analytical formula for the weights. Indeed, from Theorem \ref{AGQ:AGQ_thm} we have,
\begin{equation*}
w_n = \int \ell_n (x) \, \d \alpha(x) = \int \sum_{k=0}^d [\ell_n]_k x^k \, \d \alpha(x) = \sum_{k=0}^d [\ell_n]_k \, \mu_k
\end{equation*}
where $[\ell_n]_k $ is the $k^{th}$ coefficient of the $n^{th}$ Lagrange basis polynomial $ \ell_n (x)$, which can be obtained cheaply from the nodes of the quadrature. We also noticed that it is generally possible to neglect nodes associated with small weights when they are present. This further reduces the cost of the method.
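As a concrete illustration of this weight formula, the following is a hypothetical sketch (ours, not the authors' code): it forms each Lagrange basis polynomial from the nodes and pairs its coefficients with the moments $\mu_k$. Fed the classical Gauss-Legendre nodes and the Lebesgue moments on $[-1,1]$, it recovers the Gauss-Legendre weights.

```python
import numpy as np

def weights_from_nodes(nodes, moments):
    """Quadrature weights w_n = sum_k [l_n]_k mu_k, where [l_n]_k are the
    coefficients of the n-th Lagrange basis polynomial built on the nodes."""
    nodes = np.asarray(nodes, dtype=complex)
    d = len(nodes) - 1
    w = np.empty(len(nodes), dtype=complex)
    for n, xn in enumerate(nodes):
        others = np.delete(nodes, n)
        # Coefficients (highest degree first) of prod_{m != n} (x - x_m) / (x_n - x_m)
        coeffs = np.poly(others) / np.prod(xn - others)
        # Pair the coefficient of x^k with the moment mu_k (ascending order)
        w[n] = np.dot(coeffs[::-1], moments[: d + 1])
    return w
```

For an AGQ one would instead pass the (possibly complex) roots of the quasi-orthogonal polynomial $p(x)$ together with the moments of the measure at hand.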
As a final comment, the accuracy of the scheme depends strongly on the accuracy of the nodes. For this reason, we recommend performing the computations in extended-precision arithmetic. In this paper, we used Maple\textregistered{} to compute the nodes and weights of each approximate quadrature with high precision.
\section{Numerical simulations}
\label{NS}
In this section, we demonstrate the efficiency and versatility of the scheme through a few numerical examples. In Section \ref{NS:classical}, we compare fixed-order approximate Gaussian quadratures (AGQ) with two classical Gaussian quadratures (Gauss-Legendre and Gauss-Chebyshev) on monomials $x^n$ of increasing degree, and show how it quickly becomes advantageous to use an approximate quadrature in those cases. Then, in Section \ref{NS:sing}, we give examples related to functions with an integrable singularity at the origin.
In Section \ref{NS:trig}, we show how the scheme can be applied to monomials on the complex circle, i.e., functions of the form $e^{\i n x}$ with $n \geq 0$. The resulting quadratures are then used in Section \ref{NS:Beylkin} to obtain approximations of functions through short exponential sums, in the spirit of the method of Beylkin \& Monz\'on \cite{Beylkin:2005, Beylkin:2010}.
\subsection{Comparison with classical quadratures}
\label{NS:classical}
In this section, we compare results obtained with the approximate Gaussian quadrature scheme, the Gauss-Legendre $\left ( \d \alpha(x) = \d x \right ) $ quadrature and the Gauss-Chebyshev $\left ( \d \alpha(x) = \frac{1}{\sqrt{1-x^2} }\d x \right ) $ quadrature.
\subsubsection{Integration of monomials}
For this benchmark, we fix the order ($N$ in Section \ref{AGQ:comp}) and study the error in approximating integrals of the form,
\begin{equation*}
\int_{-1}^1 x^n \, \d \alpha (x)
\end{equation*}
through quadratures involving different numbers of nodes ($d$ in Section \ref{AGQ:comp}), where $n$ varies from $0$ to $700$.
Numerical results are shown in Figures \ref{GLcompare} and \ref{GCcompare}. They were obtained using $N=350$. These results need to be interpreted carefully. The choice of $N$ represents in effect the polynomial order that would be required to approximate a given function $f(x)$ to some accuracy $\epsilon$. A numerical quadrature will then be able to approximate the integral of $f(x)$ if it can integrate all monomials of degree less than $N$ with accuracy $\epsilon$. In Figure \ref{GLcompare} for example, we see that the Gauss-Legendre quadrature is exact to machine precision up to $n=39$. However, the error then increases rapidly, reaching $10^{-3}$ near $n=350$. In contrast, although AGQ is not exact for $n \le 39$, the error for all $n \le 350$ remains below $10^{-4}$ with only 20 nodes. As we increase the number of nodes (middle and bottom plots) the gain below $n= 350$ is even more significant.
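The degradation of Gauss-Legendre beyond its exactness degree is easy to reproduce. The sketch below is our own check of this classical behavior (it does not involve AGQ itself): a 20-node rule applied to $\int_{-1}^1 x^n \, \d x$.

```python
import numpy as np

# 20-node Gauss-Legendre rule: exact (up to roundoff) for degree <= 39,
# but increasingly inaccurate on higher-degree monomials.
nodes, weights = np.polynomial.legendre.leggauss(20)

def gl_error(n):
    """Absolute quadrature error on int_{-1}^1 x^n dx."""
    exact = 2.0 / (n + 1) if n % 2 == 0 else 0.0  # odd monomials integrate to 0
    return abs(weights @ nodes**n - exact)
```

Evaluating `gl_error` at $n=38$ gives roundoff-level error, while near $n=350$ the error is several orders of magnitude larger, consistent with the top plot of Figure \ref{GLcompare}.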
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-17,
ymax=0,
xtick={0, 50,100,150,200,250,300,350, 400,450,500,550,600,650,700},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{GaussLegendre20}
\legend{Gauss-Legendre (20 nodes) \\%
Approximate Gaussian Quadrature (20 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-17,
ymax=0,
xtick={0, 50,100,150,200,250,300,350, 400,450,500,550,600,650,700},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{GaussLegendre30}
\legend{
Gauss-Legendre (30 nodes) \\%
Approximate Gaussian Quadrature (30 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-17,
ymax=0,
xtick={0,50,100,150,200,250,300,350, 400,450,500,550,600, 650,700},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{GaussLegendre40}
\legend{
Gauss-Legendre (40 nodes) \\%
Approximate Gaussian Quadrature (40 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\end{center}
\caption{Comparison between the absolute error incurred in the evaluation of the integral $\int_{-1}^1 x^n \, \d x$ through an Approximate Gaussian Quadrature of order $N=350$ (black) and a Gauss-Legendre quadrature (green) for different number of nodes. Top: 20 nodes, Middle: 30 nodes, Bottom: 40 nodes}
\label{GLcompare}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-16,
ymax=0,
xtick={0, 50,100,150,200,250,300,350, 400,450,500,550,600, 650,700},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{GaussChebyshev20}
\legend{
Gauss-Chebyshev (20 nodes) \\%
Approximate Gaussian Quadrature (20 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-16,
ymax=0,
xtick={0, 50,100,150,200,250,300,350, 400,450,500,550,600,650,700},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{GaussChebyshev30}
\legend{
Gauss-Chebyshev (30 nodes) \\%
Approximate Gaussian Quadrature (30 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-16,
ymax=0,
xtick={0,50,100,150,200,250,300,350, 400,450,500,550,600,650,700},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{GaussChebyshev40}
\legend{
Gauss-Chebyshev (40 nodes) \\%
Approximate Gaussian Quadrature (40 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\end{center}
\caption{Comparison between the absolute error incurred in the evaluation of the integral $\int_{-1}^1 x^n \, \frac{1}{\sqrt{1-x^2}} \d x$ through an Approximate Gaussian Quadrature of order $N=350$ (black) and a Gauss-Chebyshev quadrature (green) for different number of nodes. Top: 20 nodes, Middle: 30 nodes, Bottom: 40 nodes}
\label{GCcompare}
\end{figure}
The behavior of AGQ in the top plot around $n \approx 40$, where Gauss-Legendre seems to outperform AGQ, is not significant. Indeed, if a polynomial of order $n \approx 40$ were sufficient to approximate $f$, we would reduce $N$; this would result in an AGQ much more accurate in the range $n \in [0,40]$.
In Figures \ref{bound:Legendre} and \ref{bound:Chebyshev}, we also compare the theoretical bound of Theorem \ref{AGQ:AGQ_thm} with the actual absolute error obtained through a 30-node AGQ, for the Lebesgue and Chebyshev measures respectively. In both cases, the bound provides a reasonable estimate of the behavior of the error.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=350,
ymin=1e-17,
ymax=0,
xtick={0, 50,100,150,200,250,300,350},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{GaussLegendre30_bound}
\legend{
Error bound\\%
Approximate Gaussian Quadrature error (30 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\end{center}
\caption{Comparison between the absolute error incurred in the evaluation of the integral $\int_{-1}^1 x^n \, \d x$ through a $30$-node Approximate Gaussian Quadrature of order $N=350$ and the error bound introduced in Theorem \ref{AGQ:AGQ_thm} (red)}
\label{bound:Legendre}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=350,
ymin=1e-17,
ymax=0,
xtick={0, 50,100,150,200,250,300,350},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{GaussChebyshev30_bound}
\legend{
Error bound\\%
Approximate Gaussian Quadrature error (30 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\end{center}
\caption{Comparison between the absolute error incurred in the evaluation of the integral $\int_{-1}^1 x^n \, \frac{1}{\sqrt{1-x^2}} \d x$ through a $30$-node Approximate Gaussian Quadrature of order $N=350$ and the error bound introduced in Theorem \ref{AGQ:AGQ_thm} (red).}
\label{bound:Chebyshev}
\end{figure}
Finally, it is worth noting that in both cases the nodes associated with the approximate Gaussian quadratures were \emph{real} and the weights were \emph{real and positive}; it is a known fact that this is the case for classical Gaussian quadratures. However, this is by no means obvious for approximate Gaussian quadratures, and we currently have no theory demonstrating that it always holds for real positive measures.
\subsubsection{General integrands}
An important difference between AGQ and classical Gaussian quadratures is that AGQ takes $N$ as a parameter. $N$ represents in effect the order of a polynomial that can approximate $f(x)$ to the desired accuracy. This is function-dependent, and therefore $N$ may need to be adjusted depending on the integrand if one wishes to obtain a near-optimal quadrature.
Generally speaking, AGQ should be able to match or outperform a classical Gaussian quadrature (CGQ) in all cases, since a Gaussian quadrature is a special case of AGQ obtained by choosing $d=N-1$, where $d$ is the degree of the polynomial. Indeed, this is what we observed in our numerical tests: whenever CGQ performs well, no gain is obtained with AGQ. We note that, in this case, the usual numerical techniques for computing Gaussian quadrature nodes should be more effective than the numerical procedure we advocate for AGQ (due to ill-conditioning for too stringent a tolerance, as mentioned in the introduction).
Conversely, when the convergence of CGQ is slow, AGQ provides a significant improvement. This corresponds to situations where expanding $f$ in polynomials requires terms of high degree, so that the accuracy of AGQ on high-order monomials makes a difference. This is illustrated in the examples below.
We used the following integrands to investigate the accuracy of AGQ:
\begin{align*}
\log \left ( 1- \frac{x}{1.05} \right ) &= - \sum_{n=1}^{\infty} \frac{1}{n \, (1.05)^n} \,x^n ,\;\;\; |x| \leq 1 \\
\frac{1}{ 1- \frac{x}{1.05} } &= \sum_{n=0}^{\infty} \frac{1}{(1.05)^n} \,x^n ,\;\;\; |x| \leq 1 \\
e^{-10\,x} &= \sum_{n=0}^{\infty} \frac{(-10)^n}{n!} \,x^n ,\;\;\; 0\leq x \leq 1
\end{align*}
The first two integrands have slowly-decaying coefficients and can be approximated on the interval $[-1,1]$ by a sum containing $\O( \log_{1.05}(1/\epsilon) ) $ terms for an accuracy of $\epsilon$. At machine precision ($\epsilon = 10^{-15}$) this amounts to approximately $700$ terms. The third integrand has very rapidly decaying coefficients, and in this case about $50$ terms are sufficient.
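The term counts above follow from elementary estimates; the following quick sanity check (ours, with machine precision taken as $10^{-15}$) computes both thresholds.

```python
import math

# Slowly decaying coefficients ~ 1.05^{-n}: number of terms needed for
# the tail to drop below 1e-15, i.e. n > log(1e15) / log(1.05).
n_slow = math.ceil(math.log(1e15) / math.log(1.05))

# Rapidly decaying coefficients 10^n / n!: smallest n with 10^n / n! < 1e-15.
# (The generator stops near n ~ 51, well before float overflow.)
n_fast = next(n for n in range(1, 200) if 10.0**n / math.factorial(n) < 1e-15)
```

This gives roughly $700$ terms in the slow case and about $50$ in the fast one, as quoted above.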
For each case, we varied the number of nodes in the quadrature. Then, for AGQ, we selected the integer $N$ that gave the most accurate result. In practice, an algorithm would be required to estimate $N$ numerically, but we do not address this question here. Results are shown in Tables \ref{general:log}--\ref{general:exp}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{cccc} \toprule
Number of nodes & Optimal value for $N$ & AGQ & Gauss-Legendre\\ \midrule
10 & 75 & $2.16 \cdot 10^{-8}$ & $1.39 \cdot 10^{-4}$ \\
15 & 100 & $1.08 \cdot 10^{-8}$ & $3.94 \cdot 10^{-6}$ \\
20 & 150 & $2.05 \cdot 10^{-11}$ & $1.26 \cdot 10^{-7}$ \\
25 & 200 & $3.99 \cdot 10^{-14}$ & $4.31 \cdot 10^{-9}$ \\
30 & 250 & $1.61 \cdot 10^{-15}$ & $1.54 \cdot 10^{-10}$ \\ \bottomrule
\end{tabular}
\end{center}
\caption{Absolute error incurred by an AGQ and a Gauss-Legendre quadrature for the integration of $f(x) = \log \left ( 1- \frac{x}{1.05} \right ) $ over the interval $[-1,1]$ for various number of nodes.}
\label{general:log}
\end{table}%
\begin{table}[htbp]
\begin{center}
\begin{tabular}{cccc} \toprule
Number of nodes & Optimal value for $N$ & AGQ & Gauss-Legendre\\ \midrule
10 & 75 & $5.81 \cdot 10^{-5}$ & $8.15 \cdot 10^{-3}$ \\
15 & 100 & $2.20 \cdot 10^{-6}$ & $3.60 \cdot 10^{-4}$ \\
20 & 150 & $4.26 \cdot 10^{-9}$ & $1.56 \cdot 10^{-5}$ \\
25 & 200 & $1.58 \cdot 10^{-11}$ & $6.76 \cdot 10^{-7}$ \\
30 & 250 & $4.01 \cdot 10^{-13}$ & $2.92 \cdot 10^{-8}$ \\
35 & 300 & $1.77 \cdot 10^{-15}$ & $1.25 \cdot 10^{-9}$ \\ \bottomrule
\end{tabular}
\end{center}
\caption{Absolute error incurred by an AGQ and a Gauss-Legendre quadrature for the integration of $f(x) = \frac{1}{ 1- \frac{x}{1.05} }$ over the interval $[-1,1]$ for various number of nodes.}
\label{general:geo}
\end{table}%
\begin{table}[htbp]
\begin{center}
\begin{tabular}{cccc} \toprule
Number of nodes & Optimal value for $N$ & AGQ & Gauss-Legendre\\ \midrule
5 & 15 & $1.09 \cdot 10^{-6}$ & $8.82 \cdot 10^{-5}$ \\
7 & 7 & $1.29 \cdot 10^{-7}$ & $1.29 \cdot 10^{-7}$ \\
10 & 10 & $1.02 \cdot 10^{-12}$ & $1.02 \cdot 10^{-12}$ \\
12 & 12 & $4.44 \cdot 10^{-16}$ & $4.44 \cdot 10^{-16}$ \\ \bottomrule
\end{tabular}
\end{center}
\caption{Absolute error incurred for $e^{-10\,x} $ over the interval $[0,1]$. In that case, Gauss-Legendre converges very fast and AGQ simply provides a quadrature with the same accuracy. The two methods become essentially identical.}
\label{general:exp}
\end{table}
We observe the superior accuracy of AGQ. The first two cases are challenging for CGQ, and AGQ does significantly better. For the last case, CGQ converges extremely fast; AGQ then simply finds that the optimal choice is CGQ and provides an estimate with the same accuracy.
In summary, $N$ should be adjusted depending on the type of integrand. If the integrand is such that expansions in a polynomial basis possess slowly-decaying coefficients, AGQ will provide significantly greater accuracy. If, on the contrary, a polynomial expansion converges very rapidly, AGQ and CGQ will provide essentially identical (and fast) convergence.
We also stress that AGQ can be constructed for a wide range of measures, whereas CGQ is restricted to positive measures (weight functions) only.
\subsection{Singular functions}
\label{NS:sing}
We now show how AGQ can be used to integrate functions with integrable singularities. For this purpose, we consider integrands of the form $x^n \log(x)$ for $x \in (0,1]$ and $0 \leq n \leq 700$. In this case, the integral of interest takes the form,
\begin{equation*}
\int_{0}^1 x^n \, \log(x) \, \d x
\end{equation*}
This quantity can either be seen as the integration of $x^n \, \log(x)$ with respect to the Lebesgue measure, or as the integration of the monomial $x^n$ with respect to the measure $\d \alpha(x) = \log(x) \, \d x$. Taking the latter point of view, we build AGQs of order $N=350$ with different numbers of nodes and display the absolute error as a function of the degree $n$ and of the number of quadrature points. This is shown in Figure \ref{NS:log}. Note that the bound is not plotted beyond $n=N=350$, as it is no longer valid past this point.
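For this measure the moments are available in closed form: integrating by parts gives $\mu_n = \int_0^1 x^n \log(x) \, \d x = -1/(n+1)^2$. The snippet below is our own numerical check of this formula, using the substitution $t = -\log x$, under which $\mu_n = -\int_0^\infty t \, e^{-(n+1)t} \, \d t$ becomes a smooth integrand.

```python
import numpy as np

def log_moment(n):
    """mu_n = int_0^1 x^n log(x) dx = -1/(n+1)^2 (integration by parts)."""
    return -1.0 / (n + 1) ** 2

def log_moment_numeric(n, t_max=50.0, m=200001):
    """Same moment via t = -log(x): mu_n = -int_0^inf t exp(-(n+1) t) dt,
    evaluated with a composite trapezoid rule on a fine grid."""
    t = np.linspace(0.0, t_max, m)
    f = t * np.exp(-(n + 1) * t)
    h = t[1] - t[0]
    return -h * (f.sum() - 0.5 * (f[0] + f[-1]))
```

These moments are all that is needed to set up the Hankel least-squares system of Algorithm \ref{poly_alg} for this measure.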
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-17,
ymax=0,
xtick={0, 50,100,150,200,250,300,350,400,450,500,550,600,650,700},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{LogMeasure_5}
\legend{
Error bound\\%
Approximate Gaussian Quadrature error (5 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-17,
ymax=0,
xtick={0, 50,100,150,200,250,300,350,400,450,500,550,600,650,700},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{LogMeasure_10}
\legend{
Error bound\\%
Approximate Gaussian Quadrature error (10 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.65]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-17,
ymax=0,
xtick={0, 50,100,150,200,250,300,350,400,450,500,550,600,650,700},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{LogMeasure_15}
\legend{
Error bound\\%
Approximate Gaussian Quadrature error (15 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\end{center}
\caption{Comparison between the absolute error incurred in the evaluation of the integral $\int_{0}^1 x^n \, \log(x) \d x$ through an Approximate Gaussian Quadrature of order $N=350$ (black) and the error bound introduced in Theorem \ref{AGQ:AGQ_thm} (red) for 5, 10 and 15 nodes. \label{NS:log}}
\end{figure}
We note that in this case no comparison with a classical Gaussian quadrature is possible, since no such quadrature exists; this is in fact the situation for all but a few measures.
\subsection{Quadrature for polynomials on the complex circle}
\label{NS:trig}
In this section, we are interested in integrands that take the form of trigonometric monomials, i.e., functions of the form,
\begin{equation*}
f(x) = e^{\i n x}
\end{equation*}
where $n \geq 0$. As their name conveys, such functions are simply the monomials $z^n$ of the complex plane restricted to the boundary of the unit circle, i.e., $z = e^{\i x}$. Thanks to this close relationship with polynomials on the real axis, one can develop approximate Gaussian quadratures for such functions as well. In fact, it suffices to replace the moments $\mu_n$ by the trigonometric moments,
\begin{equation*}
\tau_n = \int (e^{\i x})^n \, \d \alpha(x)
= \int z^n \, \d \alpha(z)
\end{equation*}
in all that has been presented above and similar results follow.
As an example, we built an AGQ of order $N=350$ for trigonometric polynomials with respect to the Lebesgue measure over the interval $[-1,1]$. The absolute error between our approximation and the exact value of the integral,
\begin{equation*}
\int_{-1}^1 e^{\i n x} \, \d x = \frac{e^{\i n} - e^{-\i n}}{\i n}
\end{equation*}
is presented in Figure \ref{NS:trigplot}. There, it is seen that as few as $30$ quadrature points suffice to integrate a complex exponential of frequency $n=500$ with accuracy $\approx 10^{-6}$. We also plotted the theoretical bound of Theorem \ref{AGQ:AGQ_thm}; again, it appears to be a good estimate.
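The reference values in Figure \ref{NS:trigplot} come from the closed form above. As a cross-check of that formula (ours, independent of AGQ), one can compare it against a brute-force trapezoid rule, which at $n = 500$ already needs hundreds of thousands of points to reach high accuracy, which is exactly what makes a 30-node quadrature attractive.

```python
import numpy as np

def exact_trig_integral(n):
    """int_{-1}^1 exp(i n x) dx = (exp(i n) - exp(-i n)) / (i n) = 2 sin(n) / n."""
    return 2.0 * np.sin(n) / n

def trapezoid_trig_integral(n, m=400001):
    """Brute-force composite trapezoid rule on m uniform points, for comparison."""
    x = np.linspace(-1.0, 1.0, m)
    h = x[1] - x[0]
    f = np.exp(1j * n * x)
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))
```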
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.75]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-17,
ymax=0,
xtick={0,50,100,150,200,250,300,350,400,450,500,550,600,650,700,750,800},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{OscillatoryIntegrand_10}
\legend{
Error bound\\%
Approximate Gaussian Quadrature (10 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.75]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-17,
ymax=0,
xtick={0,50,100,150,200,250,300,350,400,450,500,550,600,650,700,750,800},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{OscillatoryIntegrand_20}
\legend{
Error bound\\%
Approximate Gaussian Quadrature (20 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.75]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ylabel={$\log_{10}$ of absolute error},
xlabel={Degree of monomial ($n$)},
ymode=log,
width=0.75\textwidth,
height=175pt,
xmin=0,
xmax=700,
ymin=1e-17,
ymax=0,
xtick={0,50,100,150,200,250,300,350,400,450,500,550,600,650,700,750,800},
ytick={1e-16,1e-14, 1e-12, 1e-10,1e-08, 1e-06, 1e-4, 1e-2}]
\input{OscillatoryIntegrand_30}
\legend{
Error bound\\%
Approximate Gaussian Quadrature (30 nodes) \\%
}
\end{axis}
\end{tikzpicture}%
\end{center}
\caption{Comparison between the absolute error incurred in the evaluation of the integral $\int_{-1}^1 e^{\i n x} \, \d x$ through an Approximate Gaussian Quadrature of order $N=350$ (black) and the error bound introduced in Theorem \ref{AGQ:AGQ_thm} (red) for various number of nodes. Top: 10 nodes, Middle: 20 nodes, Bottom: 30 nodes. \label{NS:trigplot}}
\end{figure}
It is interesting to look at the location of the nodes for such quadratures. An example is displayed in Figure \ref{NS:trignodes}. The nodes are shown in the complex plane and appear to lie along a curve which rapidly moves upward from $-1$, slowly moves across, and rapidly returns to $1$ on the real axis. This does not appear to be a coincidence, given that functions of the form $e^{\i n x}$ decay exponentially and do not oscillate along the positive imaginary axis. The underlying curve could thus be some sort of path of \emph{least oscillation} in an average sense over $0 \leq n \leq N$. At this point, this is a mere qualitative observation, but it might be worth investigating in the future.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{trignodes.png}
\caption{Location in the complex plane of the nodes of a 20-node AGQ for trigonometric polynomials. The nodes appear to lie on a smooth curve with positive imaginary part.}
\label{NS:trignodes}
\end{center}
\end{figure}
\subsection{Approximation of functions through short exponential sums}
\label{NS:Beylkin}
In this section, we are interested in the approximation of functions by a short sum of exponentials. That is, given a function $f(x)$ defined over an interval $[a,b]$, we seek some approximation in the form,
\begin{equation*}
f(x) \approx \sum_{m=0}^d \alpha_m e^{ \beta_m x}
\end{equation*}
for $x \in [a,b]$, and where $d$ should be as small as possible. Such expansions can be viewed as more efficient representations of functions compared to Fourier transforms as they typically require fewer terms. They can form the starting point for various fast algorithms such as the fast multipole method, hierarchical matrices ($\mathcal H$-matrices), etc. Such techniques are particularly desirable when it comes to the solution of integral equations with translation-invariant kernels (see e.g., \cite{Letourneau:2012,Beylkin:2005}). Very powerful techniques based on dynamical systems and recursion ideas were recently introduced by Beylkin \& Monz\'on \cite{Beylkin:2005, Beylkin:2010} in order to approach this problem. As was mentioned earlier, the latter inspired the current work.
We will show how AGQ can be used to derive similar approximations through the discretization of the Fourier transform. The final formulation shares some characteristics with the problem of Beylkin \& Monz\'on that can be stated as follows: given the accuracy $\epsilon > 0$, for a smooth function $f(x)$ find the minimal number of complex weights $w_n$ and nodes $e^{t_m}$ such that,
\begin{equation*}
\left | f(x) - \sum_m w_m e^{t_m x} \right | < \epsilon
\end{equation*}
for $x \in I$, $I$ being some interval in $\mathbb{R}$.
Their scheme is based on an important result regarding Hankel matrices. Consider a Hankel matrix $H$ associated with a sequence $h_k$ where $h_k = f(x_k)$ are uniform samples of $f$. Assume that the null space of $H$ is non-trivial and consider the polynomial whose coefficients are given by a vector in the null space of $H$. The zeros of this polynomial, $\lambda_i$, satisfy the following property (see e.g., \cite{Boley:1998}),
\begin{equation*}
h_k = \sum_{i=1}^r \lambda_i^k \,d_i
\end{equation*}
for some $\{ d_i \}$, where $r$ is at most the number of columns of $H$. With our choice for $h_k$, one obtains,
\begin{equation*}
f(x_k) = \sum_{i=1}^r d_i \, e^{ \log( \lambda_i ) k }
\end{equation*}
which naturally extends to an interpolation formula for $f(\cdot)$.
In \cite{Beylkin:2005, Beylkin:2010}, the authors search for an approximate formula, since in general the matrix $H$ has full rank and therefore no efficient representation reproducing $f(x_k)$ exactly is possible. To achieve this, Beylkin et al.~\cite{Beylkin:2005, Beylkin:2010} show how the $\lambda_i$ can be obtained as the roots of a polynomial whose coefficients are given by the entries of a con-eigenvector $u$, i.e., a vector such that,
\begin{equation*}
H u = \sigma \overline{u}
\end{equation*}
$\sigma$ being real and nonnegative. The error is then on the order of $\sigma$. They also show that the weights satisfy a well-conditioned Vandermonde system.
As will be seen, both our method and theirs involve a Hankel matrix with entries given by the uniform samples of the function to be approximated over the interval considered. However, the current approach avoids the solution of a con-eigenvalue problem altogether and allows for the direct computation of the weights rather than their computation through the solution of a Vandermonde system. Furthermore, since the quasi-orthogonal polynomial obtained through our scheme has small degree, the number of zeros that must be computed is also much smaller. This results in significant computational savings compared to the former method.
The resulting error estimates for the two methods are different. In the case of \cite{Beylkin:2005}, one expects the error to be bounded \emph{uniformly} by an expression on the order of the modulus of the small con-eigenvalue $\sigma$ (Theorem 2, \cite{Beylkin:2005}), and this value can be determined \emph{a priori}. In our case, however, the error is \emph{not} uniform (as can be seen from the numerical examples), and our current error estimate is \emph{a posteriori}.
To begin with, consider a function $f(x) \in \mathcal{L}^2 ( \mathbb{R})$ uniformly sampled at $x_n = a + \frac{n (b-a)}{N}$, $n = 0,\ldots,N-1$, for some $N \in \mathbb{N}$ and $a,b \in \mathbb{R}$, and use the Fourier transform to write,
\begin{equation*}
f(x_n) = \int_{-\infty}^{\infty} e^{2 \pi \i x_n \xi} \hat{f} (\xi) \, \d \lambda( \xi)=\frac{N}{(b-a)}\, \int_{-\infty}^{\infty} e^{2 \pi \i n \zeta} \, e^{2 \pi \i a \zeta} \hat{f} \left (\frac{N}{b-a} \zeta \right ) \, \d \lambda( \zeta)
\end{equation*}
where $\hat{f} (\xi)$ denotes the Fourier transform of $f(x)$, and $\lambda( \cdot )$ is the Lebesgue measure. We note that $$\frac{N}{b-a} \, e^{2 \pi \i a \zeta} \hat{f} \left (\frac{N}{b-a} \zeta \right )$$ can be seen as the Radon-Nikodym derivative of a certain measure $\alpha( \cdot)$ absolutely continuous with respect to the Lebesgue measure (see \cite{Cohn}), i.e.,
\begin{equation*}
\frac{\d \alpha }{ \d \lambda} (\zeta) = \frac{N}{b-a} \, e^{2 \pi \i a \zeta} \hat{f} \left (\frac{N}{b-a} \zeta \right )
\end{equation*}
With this measure we have,
\begin{equation*}
f(x_n) = \int_{-\infty}^{\infty} e^{2 \pi \i n \zeta} \, \d \alpha(\zeta) , \quad n = 0,\ldots,N
\end{equation*}
which is perfectly well-suited for discretization through an approximate Gaussian quadrature as described in the previous section. To find such a quadrature, we first need the trigonometric moments of the measure. These moments turn out to have a very simple form; indeed, their definition shows that,
\begin{equation*}
\tau_n = \int_{\mathbb{T}} e^{\i n \zeta} \, \d \alpha(\zeta) = \int_{\mathbb{T}} e^{\i n \zeta} \, \left [ \frac{N}{b-a} \, e^{2 \pi \i a \zeta} \hat{f} \left (\frac{N}{b-a} \zeta \right ) \right ] \d \lambda(\zeta) = f \left( a + n \frac{(b-a)}{N} \right )
\end{equation*}
At this point, we note that the Hankel matrix arising from such moments is exactly the same as the one described in \cite{Beylkin:2005} as previously mentioned.
Finally, the weights $\{ w_n \}$ can be obtained through Eq.~\eqref{AGQ:AGQ_thm:weight}. In the end, we obtain
\begin{equation*}
f(x_n) \approx \sum_{m=0}^d w_m \, e^{ \i n \zeta_m } , \quad n = 1,\ldots,N
\end{equation*}
with error bounded by the expression provided in Theorem \ref{AGQ:AGQ_thm}. To obtain an approximation to $f(x)$ in all of $[a,b]$, we simply allow $\frac{n}{N}$ to vary continuously so that
\begin{equation*}
\frac{n}{N} = \frac{x-a}{b-a}
\end{equation*}
for $x \in [a,b]$ and write,
\begin{align*}
f(x) &\approx \sum_{m=0}^d \alpha_m \, e^{ \i \beta_m x } \\
\alpha_m &= w_m \, e^{-\i \frac{a}{b-a} N \zeta_m } \\
\beta_m &= \frac{1}{b-a} N \zeta_m
\end{align*}
When $x$ corresponds to a sample, i.e., $x = x_n$ for some $n$, this reduces to the previous expression. However, when $x$ lies between two samples, this last formula should be seen as an interpolation. We do not currently have a complete theory describing the interpolation error; however, it was observed numerically that this error is generally of the same order as that associated with the closest sample, provided the function $f(x)$ is sufficiently oversampled. Numerical examples are provided below.
At this point, we describe an algorithm for the construction of such an approximation. The description can be found in pseudo-code in Algorithm \ref{approx_alg}.
\begin{algorithm2e}[htbp]
\caption{Pseudo-code for the construction of a short exponential sum approximation of a function $f( \cdot )$ on an interval. \label{approx_alg}}
Pick $N \in \mathbb{N}$ sufficiently large (beyond the Nyquist rate)\\
Compute $ \tau_n = f \left( a + n \frac{(b-a)}{N} \right )$\\
Build the Hankel matrix $H_{i,j} = \tau_{i+j}$ for $i,j = 0 .. N$\\
Proceed as described in Algorithm \ref{poly_alg} to find $p(x)$\\
Compute $\{ x_n \}$, the nodes/zeros of $p(x)$ \\
Compute weights $w_n$ following Eq. \eqref{AGQ:AGQ_thm:weight} \\
Build approximation: $ \sum_{n} w_n \, e^{ \i \frac{(x-a)}{(b-a)} N \, \frac{\log(x_n)}{\i} }$
\end{algorithm2e}
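To make the samples-to-Hankel-to-roots-to-weights structure of Algorithm \ref{approx_alg} concrete, here is a minimal Prony-type sketch of our own. It is a simplified stand-in, not the actual algorithm: the number of terms $r$ is fixed in advance and plain least-squares problems replace the degree search of Algorithm \ref{poly_alg}.

```python
import numpy as np

def exp_sum_fit(samples, r):
    """Fit samples f_k, k = 0..N-1, by sum_{m=1}^r w_m z_m^k:
    (1) linear-prediction coefficients from a Hankel least-squares system,
    (2) nodes z_m as roots of the associated polynomial,
    (3) weights w_m from a Vandermonde least-squares system."""
    f = np.asarray(samples, dtype=complex)
    N = len(f)
    # Step 1: sum_{j=0}^{r-1} c_j f_{k+j} = -f_{k+r},  k = 0..N-r-1
    H = np.array([f[k:k + r] for k in range(N - r)])
    c, *_ = np.linalg.lstsq(H, -f[r:], rcond=None)
    # Step 2: roots of z^r + c_{r-1} z^{r-1} + ... + c_0
    z = np.roots(np.concatenate(([1.0], c[::-1])))
    # Step 3: V[k, m] = z_m^k; solve V w ~ f for the weights
    V = np.vander(z, N, increasing=True).T
    w, *_ = np.linalg.lstsq(V, f, rcond=None)
    return z, w
```

On data that truly is a short exponential sum, the recovery is exact up to roundoff; for a general $f$, the least-squares residual loosely plays the role of $\epsilon$ in Theorem \ref{AGQ:AGQ_thm}.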
We now provide a few examples of the representation of oscillatory functions: the Bessel functions of the first kind $J_{\nu} (100 \pi \, x )$ over the interval $[0,1]$, for orders $ \nu \in \{ 0, 25 \}$. Such functions are relevant, for instance, in problems involving the scattering of waves in two dimensions. In both cases, the order of the AGQ is $N=400$ (note that the spectrum of both functions is bounded by $100 \pi \approx 314$) and a $40$-term approximation is obtained using the scheme just introduced. The results are presented in Figures \ref{bessel0} and \ref{bessel25}. Agreement within $10^{-10}$ and $10^{-7}$ absolute error, respectively, is observed. It should also be noted that the number of terms lies well below what would be expected with a standard Fourier series given the nature of the oscillations.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.75]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
width=0.75\textwidth,
height=0.5\textwidth,
xmin=0,
xmax=1,
ymin=-1.,
ymax=1,
xtick={0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1},
ytick={-1,-0.5,0,0.5,1}]
\input{Bessel0}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.75]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ymode=log,
width=0.75\textwidth,
height=0.5\textwidth,
xmin=0,
xmax=1,
ymin=1e-19,
ymax=1e-10,
xtick={0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1},
ytick={1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10}]
\input{Bessel0_error}
\end{axis}
\end{tikzpicture}%
\end{center}
\caption{ $40$-term exponential sum approximation of Bessel function of the first kind of order $0$ ($J_{0} (100\pi x)$) in $[0,1]$ (top) and absolute error (bottom) }
\label{bessel0}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.75]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
width=0.75\textwidth,
height=0.5\textwidth,
xmin=0,
xmax=1,
ymin=-0.25,
ymax=0.25,
xtick={0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1},
ytick={-0.3, -0.2, -0.1, 0., 0.1, 0.2, 0.3}]
\input{Bessel25}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.75]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ymode=log,
width=0.75\textwidth,
height=0.5\textwidth,
xmin=0,
xmax=1,
ymin=1e-17,
ymax=1e-7,
xtick={0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1},
ytick={1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6}]
\input{Bessel25_error}
\end{axis}
\end{tikzpicture}%
\end{center}
\caption{ $40$-term exponential sum approximation of Bessel function of the first kind of order $25$ ($J_{25} (100 \pi x)$) in $[0,1]$ (top) and absolute error (bottom) }
\label{bessel25}
\end{figure}
As a final example, we chose to represent the Dirichlet kernel,
\begin{equation*}
D_N (x) = \sum_{k=-N}^N e^{\i \pi k x} = \frac{\sin \left ( \pi( N + 1/2) \, x \right ) }{\sin \left ( \pi x/2 \right )}
\end{equation*}
over the interval $[-1,1]$. When applied through convolution, the Dirichlet kernel acts as a low-pass filter. In this sense, a short exponential sum approximation can be used to speed up the filtering process.
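As a sanity check on the closed form above (with the convention that the frequencies on $[-1,1]$ are the integer multiples of $\pi$), the sum and the quotient of sines can be compared numerically; this snippet is purely illustrative:

```python
import numpy as np

# Compare sum_{k=-N}^{N} exp(i*pi*k*x) with sin(pi*(N+1/2)*x) / sin(pi*x/2).
def dirichlet_sum(N, x):
    k = np.arange(-N, N + 1)
    return np.exp(1j * np.pi * k * x).sum().real  # imaginary parts cancel in pairs

def dirichlet_closed(N, x):
    return np.sin(np.pi * (N + 0.5) * x) / np.sin(np.pi * x / 2)
```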
We picked $N = 200$. To obtain the approximation, we proceeded as described in \cite{Beylkin:2005} and first approximated
\begin{equation*}
G_{200} (x) = \sum_{k \geq 0 } \frac{\sin(200 \pi (x+k))}{200 \pi (x+k)}
\end{equation*}
by a 40-term exponential sum and then built the Dirichlet kernel through the identity,
\begin{equation*}
D_{200} (x) = G_{200} (x) + G_{200} (1-x)
\end{equation*}
resulting in an 80-term approximation, shown in Figure \ref{dirichlet}. The error is non-uniform, as expected from Theorem \ref{AGQ:AGQ_thm}, but remains below $10^{-7}$ over the whole interval.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.75]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
width=0.75\textwidth,
height=0.5\textwidth,
xmin=0,
xmax=1,
ymin=-0.25,
ymax=0.25,
xtick={0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1},
ytick={-0.25, -0.2, -0.15, -0.1, -0.05, 0., 0.05, 0.1, 0.15, 0.2, 0.25}]
\input{Dirichlet200}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale=0.75]
\pgfplotsset{every axis legend/.append style={at={(1.1,0.4)},anchor=north}}
\begin{axis}[
ymode=log,
width=0.75\textwidth,
height=0.5\textwidth,
xmin=0,
xmax=1,
ymin=1e-12,
ymax=1e-7,
xtick={0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1},
ytick={1e-19, 1e-18, 1e-17, 1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6}]
\input{Dirichlet200_error}
\end{axis}
\end{tikzpicture}%
\end{center}
\caption{ $80$-term exponential sum approximation of the Dirichlet kernel of order $200$ ($D_{200} ( x)$) in $[-1,1]$ (top) and absolute error (bottom) }
\label{dirichlet}
\end{figure}
\section{Conclusion}
We have introduced a new type of quadrature, closely related to Gaussian quadratures, that uses the concept of $\epsilon$-quasiorthogonality to reduce the number of quadrature nodes and weights. Such quadratures have desirable computational properties and can be applied to a much broader family of integrands than that targeted by classical Gaussian quadratures. We have developed the theory for the existence of such quadratures and provided error estimates, together with practical ways of constructing them. We have also carried out various numerical examples displaying the versatility and performance of the method. Finally, we have described how AGQ can be used to approximate functions through short exponential sums and provided further numerical examples of this use.
\section{Acknowledgements}
The authors would like to thank Professor Ying Wu of King Abdullah University of Science and Technology (KAUST) for supporting this research through her grant, as well as the Natural Sciences and Engineering Research Council of Canada (NSERC) for their financial support.
\newpage
\begin{center}
{\Large\bf Invariant tensors and graphs}\\[2mm]
{\tt https://arxiv.org/abs/0801.0418}
\end{center}
\noindent
{\bf Abstract.} We describe a correspondence between ${\rm GL}_n$-invariant tensors and graphs, and show how this correspondence accommodates various types of symmetries and orientations.
\section*{Introduction}
Let $V$ be a finite dimensional vector space over a field ${\mathbf k}$ of
characteristic zero and ${\rm GL}(V)$ the group of invertible linear
endomorphisms of $V$.
The classical (Co)Invariant Tensor Theorem recalled in Section~\ref{s1}
states that the space of
${\rm GL}(V)$-invariant linear maps between tensor products of copies of $V$
is generated by specific `elementary invariant tensors' and that these
elementary tensors are linearly independent if the dimension of $V$ is
big enough.
We will observe that elementary invariant tensors are in one-to-one
correspondence with contraction schemes for indices which are, in turn,
described by graphs. We then show how this translation between
invariant tensors and linear combinations of graphs
accommodates various types of symmetries and orientations.
The above type of description of invariant tensors by graphs was
systematically used by M.~Kontsevich in his seminal
paper~\cite{kontsevich:93}. Graphs representing tensors
appeared also in the work of several other authors, let us
mention at least J.~Conant, A.~Hamilton, A.~Lazarev, J.-L.~Loday, S.~Mahajan,
M.~Mulase, M.~Penkava, K.~Vogtmann, A.~Schwarz and G.~Weingart.
We were, however, not able to find a~suitable reference containing all
details. The need for such a reference appeared in connection with our
paper~\cite{markl:na} that provided a vocabulary between natural
differential operators and graph complexes. Indeed, this note was
originally designed as an appendix to~\cite{markl:na}, but we believe
that it might be of independent interest. It supplies necessary
details to~\cite{markl:na} and its future applications, and also puts
the `abstract tensor calculus' attributed to R.~Penrose onto a solid
footing.
\noindent
{\bf Acknowledgement.} We would like to express our thanks to
J.-L.~Loday and J.~Stasheff for useful comments and remarks
concerning the first draft of this note.
\vskip 1cm
\noindent
{\bf Table of contents:} \ref{s1}.
Invariant Tensor Theorem: A recollection -- page~\pageref{s1}
\hfill\break\noindent
\hphantom{{\bf Table of content:\hskip .5mm}} \ref{s2}.
Graphs appear: An example -- page~\pageref{s2}
\hfill\break\noindent
\hphantom{{\bf Table of content:\hskip .5mm}} \ref{s3}.
The general case -- page~\pageref{s3}
\hfill\break\noindent
\hphantom{{\bf Table of content:\hskip .5mm}} \ref{s4}.
Symmetries occur -- page~\pageref{s4}
\hfill\break\noindent
\hphantom{{\bf Table of content:\hskip .5mm}} \ref{s5}.
A particular case -- page~\pageref{s5}
\section{Invariant Tensor Theorem: A recollection}
\label{s1}
Recall that, for finite-dimensional ${\mathbf k}$-vector spaces $U$ and $W$,
one has canonical isomorphisms
\begin{equation}
\label{conon}
{\mbox {\it Lin\/}}(U,W)^* \cong {\mbox {\it Lin\/}}(W,U),\
{\mbox {\it Lin\/}}(U,W) \cong U^* \ot W
\mbox { and }
(U \ot W)^* \cong U^* \ot W^*,
\end{equation}
where ${\mbox {\it Lin\/}}(-,-)$ denotes the space of ${\mathbf k}$-linear maps, $(-)^*$ the
linear dual and $\ot$ the tensor product over ${\mathbf k}$. The first
isomorphism in~(\ref{conon}) is induced by the non-degenerate pairing
\[
{\mbox {\it Lin\/}}(U,W) \ot {\mbox {\it Lin\/}}(W,U) \to {\mathbf k}
\]
that takes $f \ot g \in {\mbox {\it Lin\/}}(U,W) \ot {\mbox {\it Lin\/}}(W,U)$ to the trace
$\Tr(f \circ g)$ of the composition; the remaining two isomorphisms are
obvious. In this note, by a {\em canonical isomorphism\/} we will usually
mean a composition of isomorphisms of the above types.
Einstein's convention assuming summation
over repeated (multi)indices is used. We will
also assume that the ground field ${\mathbf k}$ is of characteristic zero.
In what follows, $V$ will be an $n$-dimensional ${\mathbf k}$-vector space
and ${\rm GL}(V)$ the group of linear
automorphisms of $V$.
We start by considering
the vector space ${\mbox {\it Lin\/}}(\otexp Vk,\otexp Vl)$ of ${\mathbf k}$-linear
maps $f : \otexp Vk \to \otexp Vl$, $k,l \geq 0$.
Since both $\otexp Vk$ and $\otexp Vl$ are natural
${\rm GL}(V)$-modules, it makes sense to study the subspace
${\mbox {\it Lin\/}}_{{\rm GL}(V)}(\otexp Vk,\otexp Vl) \subset {\mbox {\it Lin\/}}(\otexp Vk,\otexp Vl)$ of
${\rm GL}(V)$-equivariant maps.
Since ${\mbox {\it Lin\/}}_{{\rm GL}(V)}(\otexp Vk,\otexp Vl)=0$ if $k \not= l$ (see, for
instance,~\cite[\S24.3]{kolar-michor-slovak}), the only interesting
case is $k=l$. For a permutation $\sigma \in \Sigma_k$,
define the {\em elementary invariant tensor\/} $t_\sigma \in
{\mbox {\it Lin\/}}(\otexp Vk,\otexp Vk)$ as the map given by
\begin{equation}
\label{jdu_k_doktorovi}
t_\sigma(v_1 \otimes \cdots \otimes v_k) :=
v_{\sigma^{-1}(1)}\otimes \cdots \otimes v_{\sigma^{-1}(k)},\
\mbox { for }
\Rada v1k \in V.
\end{equation}
It is simple to verify that $t_\sigma$ is ${\rm GL}(V)$-equivariant. The
following theorem is a celebrated result of H.~Weyl~\cite{weyl}.
\begin{itt}
\label{itt}
The space ${\mbox {\it Lin\/}}_{{\rm GL}(V)}(\otexp Vk,\otexp Vk)$ is spanned by elementary
invariant tensors $t_\sigma$, $\sigma \in \Sigma_k$. If $\dim(V) \geq
k$, the tensors $\{t_\sigma\}_{\sigma \in \Sigma_k}$ are linearly
independent.
\end{itt}
This form of the Invariant Tensor Theorem is a straightforward translation
of~\cite[Theorem~2.1.4]{fuks} describing
invariant tensors in $\otexp{{V^*}}k \ot \otexp Vk$ and remarks following
this theorem, see also~\cite[Theorem~24.4]{kolar-michor-slovak}.
The Invariant Tensor Theorem can be reformulated into saying that the map
\begin{equation}
\label{preziji_to?}
{\mathcal R}_n : {\mathbf k}[\Sigma_k] \to {\mbox {\it Lin\/}}_{{\rm GL}(V)}(\otexp Vk,\otexp Vk)
\end{equation}
from the group ring of $\Sigma_k$ to the subspace of
${\rm GL}(V)$-equivariant maps, given by ${\mathcal R}_n(\sigma) := t_\sigma$, $\sigma
\in \Sigma_k$, is always an epimorphism and is an isomorphism for $n
\geq k$ (recall that $n$ denotes the dimension of $V$).
The tensors $\{t_\sigma\}_{\sigma \in
\Sigma_k}$ are not linearly independent if $\dim(V) <k$.
For a subset $S \subset\{\rada 1k\}$ such that
$\card(S) > \dim(V)$, denote by $\Sigma_S$ the subgroup of
$\Sigma_k$ consisting of permutations that leave the complement $\{\rada
1k\}\setminus S$ fixed. It is simple to verify that then
\begin{equation}
\label{Pozitri_zpet_do_Prahy}
\sum_{\sigma \in \Sigma_S}{\rm sgn\/}(\sigma) \cdot t_\sigma = 0
\end{equation}
in ${\mbox {\it Lin\/}}_{{\rm GL}(V)}(\otexp Vk,\otexp Vk)$. By~\cite[II.1.3]{fuks},
all relations between the elementary invariant tensors
are induced by the relations of the above type. In other
words, the kernel of the map ${\mathcal R}_n$ in~(\ref{preziji_to?})
is generated by the expressions
\[
\sum_{\sigma \in \Sigma_S}{\rm sgn\/}(\sigma) \cdot \sigma \in
{\mathbf k}[\Sigma_k],
\]
where $S$ and $\Sigma_S$ are as above. Observe that, with the
convention used in~(\ref{jdu_k_doktorovi}) involving the inverses of
$\sigma$ on the right-hand side, ${\mathcal R}_n$ is a ring
homomorphism.
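Indeed, for $\sigma, \tau \in \Sigma_k$, definition~(\ref{jdu_k_doktorovi}) gives
\[
t_\sigma\bigl(t_\tau(v_1 \otimes \cdots \otimes v_k)\bigr)
= t_\sigma\bigl(v_{\tau^{-1}(1)} \otimes \cdots \otimes v_{\tau^{-1}(k)}\bigr)
= v_{\tau^{-1}(\sigma^{-1}(1))} \otimes \cdots \otimes
v_{\tau^{-1}(\sigma^{-1}(k))}
= t_{\sigma\tau}(v_1 \otimes \cdots \otimes v_k),
\]
because $\tau^{-1} \circ \sigma^{-1} = (\sigma\tau)^{-1}$, therefore
${\mathcal R}_n(\sigma)\, {\mathcal R}_n(\tau) = {\mathcal R}_n(\sigma\tau)$.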
\begin{definition}
\label{stab}
By the {\em stable range\/} we mean the situation when $\dim(V)
\geq k$, that is, when the map
${\mathcal R}_n$ in~(\ref{preziji_to?}) is a monomorphism.
\end{definition}
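The stable range and the relation~(\ref{Pozitri_zpet_do_Prahy}) are easy to probe numerically by realizing each $t_\sigma$ as a permutation matrix acting on $(\mathbb{R}^n)^{\otimes k} \cong \mathbb{R}^{n^k}$. The following sketch (our own illustration, not part of the text) checks that the full alternation $\sum_{\sigma \in \Sigma_k} {\rm sgn}(\sigma)\, t_\sigma$ vanishes exactly when $n < k$:

```python
import numpy as np
from itertools import permutations

def t_sigma(sigma, n):
    """Matrix of t_sigma on (R^n)^{tensor k}: the j-th output factor is the
    sigma^{-1}(j)-th input factor, matching the convention of the text."""
    k = len(sigma)
    inv = [0] * k
    for i, s in enumerate(sigma):
        inv[s] = i                                    # inv = sigma^{-1}
    shape = [n] * k
    T = np.zeros((n ** k, n ** k))
    for idx in np.ndindex(*shape):                    # basis tensor e_{i_1} x ... x e_{i_k}
        out = tuple(idx[inv[j]] for j in range(k))
        T[np.ravel_multi_index(out, shape), np.ravel_multi_index(idx, shape)] = 1.0
    return T

def sgn(sigma):
    """Sign of a permutation via its inversion count."""
    return (-1) ** sum(sigma[i] > sigma[j]
                       for i in range(len(sigma)) for j in range(i + 1, len(sigma)))

def alternation(n, k):
    """Sum over Sigma_k of sgn(sigma) * t_sigma; the zero matrix iff n < k."""
    return sum(sgn(s) * t_sigma(s, n) for s in permutations(range(k)))
```

For $n \geq k$ one can likewise verify the linear independence of the $\{t_\sigma\}$ by checking that their flattened matrices span a space of dimension $k!$.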
\section{Graphs appear: An example}
\label{s2}
In this section we analyze an example that illustrates how the Invariant
Tensor Theorem leads to graphs.
We are going to describe invariant tensors in
${\mbox {\it Lin\/}}\left(\otexp V2 \ot {\mbox {\it Lin\/}}(\otexp V2,V),V\right)$. The canonical
identifications~(\ref{conon}) determine a
${\rm GL}(V)$-equivariant isomorphism
\[
\Phi : {\mbox {\it Lin\/}}\left(\otexp V2 \ot {\mbox {\it Lin\/}}(\otexp V2,V),V\right)
\cong {\mbox {\it Lin\/}}(\otexp V3,\otexp V3).
\]
Applying the Invariant Tensor Theorem to ${\mbox {\it Lin\/}}(\otexp V3,\otexp V3)$,
one concludes that the subspace ${\mbox {\it Lin\/}}_{{\rm GL}(V)}(\otexp V2 \ot {\mbox {\it Lin\/}}(\otexp
V2,V),V)$ is spanned by $\Phi^{-1}(t_\sigma)$, $\sigma \in \Sigma_3$,
and that these generators
are linearly independent if $\dim(V) \geq 3$. It is a simple exercise
to calculate the tensors $\Phi^{-1}(t_\sigma)$ explicitly. The results
are shown in the second column of the table in Figure~\ref{table}, in
which $X \ot Y \ot F$ is an element of $\otexp V2 \ot {\mbox {\it Lin\/}}(\otexp
V2,V)$ and $\Tr(-)$ denotes the trace of a linear map $V \to V$.
\begin{figure}
\unitlength.9cm
\begin{picture}(15,14)(-1,-12)
\put(0,-.5){
\put(1,1){\makebox(0,0)[l]{$\Phi^{-1}(t_\sigma)$:}}
\put(8,1.45){\makebox(0,0)[l]{coordinate}}
\put(8,.9){\makebox(0,0)[l]{form:}}
\put(11,1){\makebox(0,0)[l]{graph:}}
}
\thinlines
\put(-2,0){\line(1,0){16}}
\put(-2,0.1){\line(1,0){16}}
\put(0.6,1){\line(0,-1){12.7}}
\put(0.7,1){\line(0,-1){12.7}}
\put(7.75,1){\line(0,-1){12.7}}
\put(10.6,1){\line(0,-1){12.7}}
\put(-2,-1){\makebox(0,0)[l]{$\sigma = {\it identity}$}}
\put(1,-1){\makebox(0,0)[l]{$X\ot Y \ot F \mapsto F(X,Y)$}}
\put(8,-1){\makebox(0,0)[l]{$X^jY^kF^i_{jk}e_i$}}
\put(11.8,-1.5){
\unitlength.5cm
\put(0,2){\makebox(0,0)[cc]{\hskip .5mm${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}
\put(0,1.08){\vector(0,1){.85}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0.3,1.2){\makebox(0,0)[l]{\scriptsize$F$}}
\put(-1,-1){
\put(0,1){{\vector(1,1){.92}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0,0.4){\makebox(0,0)[t]{\scriptsize$X$}}
}
\put(1,-1){
\put(0,1){{\vector(-1,1){.92}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0,.4){\makebox(0,0)[t]{\scriptsize$Y$}}
}}
\put(-2,-3){\makebox(0,0)[l]{$\sigma =$}}
\put(-1,-3){\sigmadva}
\put(1,-3){\makebox(0,0)[l]{$X\ot Y \ot F \mapsto F(Y,X)$}}
\put(8,-3){\makebox(0,0)[l]{$X^jY^kF^i_{kj}e_i$}}
\put(11.8,-3.5){
\unitlength.5cm
\put(0,2){\makebox(0,0)[cc]{\hskip .5mm${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}
\put(0,1.08){\vector(0,1){.85}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0.3,1.2){\makebox(0,0)[l]{\scriptsize$F$}}
\put(-1,-1){
\put(0,1){{\vector(1,1){.92}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0,0.4){\makebox(0,0)[t]{\scriptsize$Y$}}
}
\put(1,-1){
\put(0,1){{\vector(-1,1){.92}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0,.4){\makebox(0,0)[t]{\scriptsize$X$}}
}}
\put(-2,-5){\makebox(0,0)[l]{$\sigma =$}}
\put(-1,-5){\sigmatri}
\put(1,-5){\makebox(0,0)[l]{$X\ot Y \ot F \mapsto Y \otimes \Tr(F(X,-))$}}
\put(8,-5){\makebox(0,0)[l]{$X^jY^iF^k_{jk}e_i$}}
\put(11.1,-5.2){\borelioza YX}
\put(-2,-7){\makebox(0,0)[l]{$\sigma =$}}
\put(-1,-7){\sigmapet}
\put(1,-7){\makebox(0,0)[l]{$X\ot Y \ot F \mapsto Y \otimes \Tr(F(-,X))$}}
\put(8,-7){\makebox(0,0)[l]{$X^jY^iF^k_{kj}e_i$}}
\put(11.1,-7.2){\boreliozaInv YX}
\put(-2,-9){\makebox(0,0)[l]{$\sigma =$}}
\put(-1,-9){\sigmactyri}
\put(1,-9){\makebox(0,0)[l]{$X\ot Y \ot F \mapsto X \otimes \Tr(F(-,Y))$}}
\put(8,-9){\makebox(0,0)[l]{$X^iY^jF^k_{kj}e_i$}}
\put(11.1,-9.2){\boreliozaInv XY}
\put(-2,-11){\makebox(0,0)[l]{$\sigma =$}}
\put(-1,-11){\sigmasest}
\put(1,-11){\makebox(0,0)[l]{$X\ot Y \ot F \mapsto X \otimes \Tr(F(Y,-))$}}
\put(8,-11){\makebox(0,0)[l]{$X^iY^jF^k_{jk}e_i$}}
\put(11.1,-11.2){\borelioza XY}
\put(14.2,-1){\brace}
\put(14.2,-5){\brace}
\put(14.2,-9){\brace}
\end{picture}
\caption{\label{table} Invariant tensors in ${\mbox {\it Lin\/}}(\otexp V2 \ot
{\mbox {\it Lin\/}}(\otexp V2,V),V)$. The meaning of the vertical braces on the right is
explained in Example~\protect\ref{456}.}
\end{figure}
Let us fix a basis $\{\Rada e1n\}$ of $V$ and write $X = X^ae_a$, $Y =
Y^ae_a$ and $F(e_a,e_b) = F^c_{ab} e_c$, for some scalars $X^a, Y^a, F^c_{ab}
\in {\mathbf k}$, $1\leq a,b,c \leq n$. The corresponding
coordinate forms of the elementary
tensors are shown in the
third column of the table. Observe that the expressions in this
column are all possible {\em contractions of indices\/} of the tensors $X$,
$Y$ and~$F$.
The contraction schemes for indices are encoded by the rightmost
column as follows. Given a graph $G$ from this column, decorate its
edges by symbols $i,j,k$. For example, for the graph in the bottom
right corner of the table, choose the decoration
\[
\unitlength.9cm
\begin{picture}(5,2)(-1.5,.7)
\put(0,2){\put(0.03,0){\makebox(0,0)[cc]{${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}}
\put(0,1){\vector(0,1){.935}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0,.7){\makebox(0,0)[t]{\scriptsize$X$}}
\put(.4,.3){
\put(2.09,.85){\makebox(0,0)[cc]{\oval(1.5,1.5)[b]}}
\put(2.09,1.15){\makebox(0,0)[cc]{\oval(1.5,1.5)[t]}}
\put(2.85,1.22){\line(0,1){.3}}
\put(.7,.7){\vector(1,1){.55}}
\put(.7,.7){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(1.35,1.35){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(1.32,1.25){\makebox(0,0)[tc]{\vector(0,1){0}}}
\put(1.2,1.45){\makebox(0,0)[r]{\scriptsize $F$}}
\put(.7,0.4){\makebox(0,0)[t]{\scriptsize $Y$}}
\put(-.5,1.2){\makebox(0,0)[r]{\scriptsize$i$}}
\put(1,1){\makebox(0,0)[lt]{\scriptsize$j$}}
\put(3,1.4){\makebox(0,0)[l]{\scriptsize$k$}}
\put(3.4,1){\makebox(0,0){.}}
}
\end{picture}
\]
To each vertex of this edge-decorated graph we assign the coordinates of
the corresponding tensors with the names of indices determined by
decorations of edges adjacent to this vertex. For example, to the
$F$-vertex we assign $F^k_{jk}$, because its left ingoing edge is
decorated by $j$ and its right ingoing edge, which happens to be the
same as its outgoing edge, is decorated by $k$. The vertex \anchor,
called {\em the anchor\/}, plays a special role.
We assign to it the basis of $V$ indexed by the decoration of its
ingoing edge. We get
\[
\unitlength.9cm
\begin{picture}(5,2)(-2,.7)
\put(0,2){\put(0.03,0){\makebox(0,0)[cc]{${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}}
\put(0,2.3){\put(0.03,0){\makebox(0,0)[b]{$e_i$}}}
\put(0,1){\vector(0,1){.935}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0,.8){\makebox(0,0)[t]{\scriptsize$X^i$}}
\put(.4,.3){
\put(2.09,.85){\makebox(0,0)[cc]{\oval(1.5,1.5)[b]}}
\put(2.09,1.15){\makebox(0,0)[cc]{\oval(1.5,1.5)[t]}}
\put(2.85,1.22){\line(0,1){.3}}
\put(.7,.7){\vector(1,1){.55}}
\put(.7,.7){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(1.35,1.35){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(1.32,1.25){\makebox(0,0)[tc]{\vector(0,1){0}}}
\put(1.1,1.45){\makebox(0,0)[r]{\scriptsize $F^k_{jk}$}}
\put(.7,0.5){\makebox(0,0)[t]{\scriptsize $Y^j$}}
\put(-.5,1.2){\makebox(0,0)[r]{\scriptsize$i$}}
\put(1,1){\makebox(0,0)[lt]{\scriptsize$j$}}
\put(3,1.4){\makebox(0,0)[l]{\scriptsize$k$}}
}
\end{picture}
\]
As the final step we take the product of the factors assigned to
vertices and perform the summation over repeated indices. The result
is
\[
\sum_{1 \leq i,j,k \leq n}X^iY^jF^k_{jk}e_i.
\]
In this formula we made an exception to Einstein's convention and
wrote the summation explicitly to emphasize the idea of the
construction. A formal general definition of this process of
interpreting graphs as contraction schemes is given below.
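This contraction process is exactly what {\tt einsum}-style index notation performs. The following sketch (our own, for $n=3$) evaluates the contraction scheme $X^iY^jF^k_{jk}e_i$ read off the bottom-right graph of the table and confirms that it agrees with the coordinate-free form $X \otimes \Tr(F(Y,-))$:

```python
import numpy as np

# Illustrative check: the contraction scheme X^i Y^j F^k_{jk} e_i from the
# bottom row of the table, realized with einsum for n = 3.
rng = np.random.default_rng(0)
n = 3
X, Y = rng.standard_normal(n), rng.standard_normal(n)
F = rng.standard_normal((n, n, n))          # F[c, a, b] = F^c_{ab}

# Repeated labels within one operand take a diagonal; unmatched labels sum out.
graph_value = np.einsum('i,j,kjk->i', X, Y, F)

# The same invariant without indices: X * Tr(F(Y, -)).
trace_part = sum(F[k, :, k] @ Y for k in range(n))
assert np.allclose(graph_value, X * trace_part)
```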
Let $\wGr_{\rm ex}$ be the vector space spanned by the six graphs in
the last column of the table; the hat indicates that the graphs are
not oriented. The subscript ``ex'' is an abbreviation of
``example,'' and distinguishes this space from other spaces
with similar names used throughout the note. The procedure described
above gives an epimorphism
\begin{equation}
\label{boli_mne_v_krku}
\widehat{\mathrm R}_n : \wGr_{\rm ex} \to {\mbox {\it Lin\/}}_{{\rm GL}(V)}\left(\otexp V2 \ot
{\mbox {\it Lin\/}}(\otexp V2,V),V\right)
\end{equation}
which is an isomorphism if $n \geq 3$. The map $\widehat{\mathrm R}_n$
defined in this way obviously does not depend on the choice of the basis
$\{\Rada e1n\}$ of $V$.
The space $\wGr_{\rm ex}$ can also be defined as the span of all
directed graphs with three unary vertices
\begin{equation}
\label{aaa}
\unitlength .5cm
\begin{picture}(0,1)(0,.2)
\put(-.5,0){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0,0){\makebox(0,0)[bl]{\scriptsize $X$}}
\put(0.9,0.2){\makebox(0,0){,}}
\put(-.5,0){\vector(0,1){1.2}}
\end{picture}
\hskip 3em
\unitlength .5cm
\begin{picture}(0,1)(0,.2)
\put(-.5,0){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0,0){\makebox(0,0)[bl]{\scriptsize $Y$}}
\put(-.5,0){\vector(0,1){1.2}} \hskip 1em
\put(0.9,0.2){\makebox(0,0)[lb]{and}}
\end{picture}
\hskip 4.5em
\raisebox{-1em}{\rule{0pt}{0pt}}
\unitlength .4cm
\begin{picture}(0,1.4)(0,-.3)
\put(-.45,.55){\makebox(0,0)[cc]{${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}
\put(-.5,-.8){\vector(0,1){1.2}}
\put(0.7,-.25){\makebox(0,0){,}}
\end{picture}
\end{equation}
and one ``planar'' binary vertex
\begin{equation}
\label{bbb}
\unitlength.5cm
\begin{picture}(5,1.5)(-2,.4)
\put(0,1.08){\vector(0,1){.85}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0.3,1.2){\makebox(0,0)[l]{\scriptsize$F$}}
\put(-1,-1){
\put(0,1){{\vector(1,1){.92}}}
}
\put(1,-1){
\put(0,1){{\vector(-1,1){.92}}}
}
\end{picture}
\end{equation}
whose planarity means that its inputs are linearly
ordered. In pictures, this order is determined by reading the inputs
from left to right.
\section{The general case}
\label{s3}
Let us generalize the calculations of
Section~\ref{s2} and describe
${\rm GL}(V)$-invariant elements in
\begin{equation}
\label{zabiraji_antibiotika?}
{\mbox {\it Lin\/}}\left({\mbox {\it Lin\/}}(\otexp V{{\hh}_1},\otexp V{p_1}) \ot \cdots \ot {\mbox {\it Lin\/}}(\otexp
V{{\hh}_r},\otexp V{p_r}),{\mbox {\it Lin\/}}(\otexp Vc,\otexp Vd)\right),
\end{equation}
where $r,\Rada p1r,\Rada {\hh}1r,c$ and $d$ are non-negative integers.
The above space is canonically isomorphic to
\[
\otexp{{V^*}}{p_1} \ot \otexp V{{\hh}_1} \ot \cdots \ot
\otexp{{V^*}}{p_r} \ot \otexp V{{\hh}_r} \ot \otexp{{V^*}}{c} \ot \otexp V{d},
\]
which is in turn isomorphic to\label{888}
\begin{equation}
\label{budu?}
\otexp {{V^*}}{(p_1 + \cdots + p_r + c)}
\ot \otexp V{({\hh}_1 + \cdots+ {\hh}_r + d)},
\end{equation}
via the isomorphism that moves all $V^*$-factors to the left, without
changing their relative order. By the last and first isomorphisms
in~(\ref{conon}), the space in~(\ref{budu?}) is isomorphic to
\[
{\mbox {\it Lin\/}}(\otexp V{(p_1 + \cdots + p_r + c)},\otexp V{({\hh}_1 + \cdots+ {\hh}_r + d)}).
\]
We will denote the composite isomorphism
between~(\ref{zabiraji_antibiotika?}) and the space in the above
display by $\Phi$.
Since all the isomorphisms above are ${\rm GL}(V)$-equivariant, $\Phi$ is
equivariant too, and thus the
space~(\ref{zabiraji_antibiotika?}) may contain nontrivial
${\rm GL}(V)$-equivariant maps only if
\begin{equation}
\label{vyleci_mne_to?}
p_1 + \cdots + p_r + c = {\hh}_1 + \cdots + {\hh}_r + d.
\end{equation}
Denote by $\wGr$ the space spanned by all directed graphs with $r+1$
planar vertices
\[
\raisebox{-3.5em}{\rule{0pt}{0pt}}
\unitlength 4mm
\linethickness{0.4pt}
\begin{picture}(20,5.1)(10.5,19.4)
\put(20,20){\vector(1,1){2}}
\put(20,20){\vector(-1,1){2}}
\put(20,20){\vector(-1,2){1}}
\put(18,18){\vector(1,1){1.9}}
\put(22,18){\vector(-1,1){1.9}}
\put(19,18){\vector(1,2){.935}}
\put(20,20){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(19,20){\makebox(0,0)[r]{\scriptsize $F_1$}}
\put(20.5,18){\makebox(0,0)[cc]{$\ldots$}}
\put(20,17){\makebox(0,0)[cc]{%
$\underbrace{\rule{16mm}{0mm}}_{\mbox{\scriptsize ${\hh}_1$ inputs}}$}}
\put(0,40){
\put(20.5,-18){\makebox(0,0)[cc]{$\ldots$}}
\put(20,-17){\makebox(0,0)[cc]{%
$\overbrace{\rule{16mm}{0mm}}^{\mbox{\scriptsize $p_1$ outputs}}$}}}
\end{picture}
\hskip -2.8cm
\raisebox{1.2mm}{$\cdots$}
\hskip -2.2cm
\begin{picture}(20,5.1)(10.5,19.4)
\put(20,20){\vector(1,1){2}}
\put(20,20){\vector(-1,1){2}}
\put(20,20){\vector(-1,2){1}}
\put(18,18){\vector(1,1){1.9}}
\put(22,18){\vector(-1,1){1.9}}
\put(19,18){\vector(1,2){.935}}
\put(20,20){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(19,20){\makebox(0,0)[r]{\scriptsize $F_r$}}
\put(20.5,18){\makebox(0,0)[cc]{$\ldots$}}
\put(20,17){\makebox(0,0)[cc]{%
$\underbrace{\rule{16mm}{0mm}}_{\mbox{\scriptsize ${\hh}_r$ inputs}}$}}
\put(0,40){
\put(20.5,-18){\makebox(0,0)[cc]{$\ldots$}}
\put(20,-17){\makebox(0,0)[cc]{%
$\overbrace{\rule{16mm}{0mm}}^{\mbox{\scriptsize $p_r$ outputs}}$}}}
\end{picture}
\hskip -2.8cm
\raisebox{1.2mm}{\mbox{and}}
\hskip -2.2cm
\begin{picture}(20,5.1)(10.5,19.4)
\put(20,20){\vector(1,1){2}}
\put(20,20){\vector(-1,1){2}}
\put(20,20){\vector(-1,2){1}}
\put(18,18){\vector(1,1){1.8}}
\put(22,18){\vector(-1,1){1.8}}
\put(19,18){\vector(1,2){.9}}
\put(20,20){\makebox(0,0)[cc]{${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}
\put(20.5,18){\makebox(0,0)[cc]{$\ldots$}}
\put(20,17){\makebox(0,0)[cc]{%
$\underbrace{\rule{16mm}{0mm}}_{\mbox{\scriptsize $d$ inputs}}$}}
\put(0,40){
\put(20.5,-18){\makebox(0,0)[cc]{$\ldots$}}
\put(20,-17){\makebox(0,0)[cc]{%
$\overbrace{\rule{16mm}{0mm}}^{\mbox{\scriptsize $c$ outputs}}$}}}
\end{picture} \hskip -3cm , \hskip 2cm
\]
where planarity means that linear orders of the sets of input and
output edges are specified.
Observe that the number of edges of each graph spanning $\wGr$ equals
the common value of the sums in~(\ref{vyleci_mne_to?}).
For each graph $G \in \wGr$ we define a
${\rm GL}(V)$-equivariant map $\widehat{\mathrm R}_n(G)$ in the
space~(\ref{zabiraji_antibiotika?}) as follows.
As in Section~\ref{s2},
choose a basis $(\Rada e1n)$ of $V$ and let
$(\rada{e^1}{e^n})$ be the corresponding dual basis of $V^*$. For
$F_i \in {\mbox {\it Lin\/}}(\otexp V{{\hh}_i},\otexp V{p_i})$, $1 \leq i \leq r$, write
\[
F_i =
{F_i \hskip .2em}^{a_1^i,\ldots,a^i_{p_i}}_{b_1^i,\ldots,b^i_{{\hh}_i}} \
e_{a^i_1} \ot \cdots \ot e_{a^i_{p_i}}
\otimes e^{b^i_1} \ot \cdots \ot e^{b^i_{{\hh}_i}}
\]
with some scalars
${F_i \hskip .2em}^{a_1^i,\ldots,a^i_{p_i}}_{b_1^i,\ldots,b^i_{{\hh}_i}} \in {\mathbf k}$
or, more concisely,
$F_i = {F_i \hskip .2em}^{A^i}_{B^i}\ e_{A^i} \otimes e^{B^i}$,
where $A^i$ abbreviates the multiindex $(a_1^i,\ldots,a^i_{p_i})$,
$B^i$ the multiindex $(b_1^i,\ldots,b^i_{{\hh}_i})$, $e_{A^i} := e_{a^i_1} \ot
\cdots \ot e_{a^i_{p_i}}$, $e^{B^i} := e^{b^i_1} \ot \cdots \ot
e^{b^i_{{\hh}_i}}$ and, as everywhere in this paper, summations over
repeated (multi)indices are assumed.
A {\em labelling\/} of a graph $G \in \wGr$ is a function $\ell :
{\it Edg}(G) \to \{\rada 1n\}$, where ${\it Edg}(G)$ denotes the set of edges of
$G$. Let ${\it Lab}(G)$ be the set of all labellings of $G$. For $\ell \in
{\it Lab}(G)$ and $1 \leq i \leq r$, define $A^i(\ell)$ to be the multiindex
$(a_1^i,\ldots,a^i_{p_i})$ such that $a^i_s$ equals $\ell(e)$, where
$e$ is the edge that starts at the $s$-th output of the vertex $F_i$,
$1 \leq s \leq p_i$. Likewise, put $I(\ell) := (\Rada i1c)$ with $i_t
:= \ell(e)$, where now $e$ is the edge that starts at the $t$-th
output of the \raisebox{-.1em}{${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}-vertex, $1 \leq t \leq c$.
Let $B^i(\ell)$ and $J(\ell)$ have similar obvious meanings, with
`inputs' taken instead of `outputs.' For $F_1 \ot \cdots\ot F_r \in
{\mbox {\it Lin\/}}(\otexp V{{\hh}_1},\otexp V{p_1}) \ot \cdots \ot {\mbox {\it Lin\/}}(\otexp V{{\hh}_r},\otexp
V{p_r})$, finally define
\begin{equation}
\label{beru_antibiotika}
\widehat{\mathrm R}_n(G)(F_1 \ot \cdots\ot F_r) :=
\sum_{\ell \in {\it Lab}(G)}
{F_1 \hskip .2em}^{A^1(\ell)}_{B^1(\ell)} \ot \cdots\ot
{F_r \hskip .2em}^{A^r(\ell)}_{B^r(\ell)}\
e_{J(\ell)} \ot e^{I(\ell)} \in {\mbox {\it Lin\/}}(\otexp Vc,\otexp Vd).
\end{equation}
It is easy to check that $\widehat{\mathrm R}_n(G)$ is a ${\rm GL}(V)$-fixed
element of the space~(\ref{zabiraji_antibiotika?}).
The nature of the summation in~(\ref{beru_antibiotika}) is close
to the {\em state sum model\/} for link invariants,
see~\cite[Section~I.8]{kauffman:KnotsandPhysics}, with states being
the values of labels of the edges of the graph.
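For a first illustration of~(\ref{beru_antibiotika}), take $r=1$,
$p_1 = {\hh}_1 = 1$ and $c=d=0$, and let $G$ be the graph whose single
edge connects the output of the $F$-vertex, $F \in {\mbox {\it Lin\/}}(V,V)$,
to its own input. A labelling of $G$ then amounts to the choice of a
single value $\ell \in \{\rada 1n\}$, and the state sum reduces to the
familiar trace:
\[
\widehat{\mathrm R}_n(G)(F) = \sum_{\ell = 1}^{n}
{F \hskip .2em}^{\ell}_{\ell} = \Tr(F) \in {\mathbf k}.
\]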
\begin{proposition}
\label{zabere_to?}
Let $r,\Rada p1r,\Rada {\hh}1r,c$ and $d$ be non-negative integers. Then
the map
\[
\widehat{\mathrm R}}\def\uR{{\underline{R}}_n :\wGr \to {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{{\hh}_1},\otexp V{p_1})
\ot \cdots \ot {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp
V{{\hh}_r},\otexp V{p_r}),{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp Vc,\otexp Vd)\right)
\]
defined by~(\ref{beru_antibiotika}) is an epimorphism. If $n = \dim(V)
\geq e$, where $e$ is the number of edges of the graphs spanning $\wGr$,
then $\widehat{\mathrm R}}\def\uR{{\underline{R}}_n$ is also an isomorphism.
\end{proposition}
Observe that we do not need to assume~(\ref{vyleci_mne_to?}) in
Proposition~\ref{zabere_to?}. If~(\ref{vyleci_mne_to?}) is not
satisfied, then there are no ${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-invariant elements
in~(\ref{zabiraji_antibiotika?}) and also the space $\wGr$ is trivial,
thus $\widehat{\mathrm R}}\def\uR{{\underline{R}}_n$ is an isomorphism of trivial spaces.
\begin{proof}[Proof of Proposition~\ref{zabere_to?}] By the above
observation, we may assume~(\ref{vyleci_mne_to?}).
Consider the diagram
\begin{equation}
\label{nehoji_se_to}
\raisebox{-1.9cm}{\rule{0pt}{4cm}}
\unitlength 1cm
\linethickness{0.4pt}
\begin{picture}(15,1.2)(-.5,1.5)
\put(0,0){\makebox(0,0){$\wGr$}}
\put(8.6,0){\makebox(0,0){${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{{\hh}_1},\otexp V{p_1})
\ot \cdots \ot {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp
V{{\hh}_r},\otexp V{p_r}),{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp Vc,\otexp Vd)\right)$}}
\put(0,3){\makebox(0,0){${\mathbf k}[\Sigma_k]$}}
\put(8.6,3){\makebox(0,0){${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}(\otexp V{(p_1 + \cdots + p_r +
c)},\otexp V{({\hh}_1 + \cdots+ {\hh}_r + d)})$}}
\put(.6,0){\vector(1,0){1.8}}
\put(.75,3){\vector(1,0){4}}
\put(0,.5){\vector(0,1){2}}
\put(8,.5){\vector(0,1){2}}
\put(8.3,1.5){\makebox(0,0)[l]{$\Phi$}}
\put(0.3,1.5){\makebox(0,0)[l]{$\Psi$}}
\put(7.7,1.5){\makebox(0,0)[r]{$\cong$}}
\put(-0.3,1.5){\makebox(0,0)[r]{$\cong$}}
\put(1.5,.15){\makebox(0,0)[b]{$\widehat{\mathrm R}}\def\uR{{\underline{R}}_n$}}
\put(2.6,3.15){\makebox(0,0)[b]{${\mathcal R}_n$}}
\end{picture}
\end{equation}
in which ${\mathcal R}_n$ is the map~(\ref{preziji_to?}), $\widehat{\mathrm R}}\def\uR{{\underline{R}}_n$ is
defined in~(\ref{beru_antibiotika}) and $\Phi$ is the composition of
canonical isomorphisms and reshufflings of factors described on
page~\pageref{888} above. The map $\Psi$ is defined as follows.
Let us denote, for the purposes of this proof only, by $\OUT(F_i)$ the
linearly ordered
set of outputs of the $F_i$-vertex, $1 \leq i \leq r$,
and by $\OUT(\ctverecek)$ the linearly
ordered set of
outputs of \hskip 2pt \raisebox{-1pt}{${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}.
The set $\OUT := \OUT(F_1) \cup \cdots \cup
\OUT(F_r) \cup \OUT(\ctverecek)$ is linearly ordered by requiring that
\[
\OUT(F_1) < \cdots < \OUT(F_r) < \OUT(\ctverecek)
\]
(we believe that the meaning of this shorthand is obvious). Let $\IN$
be the linearly ordered set of inputs defined in a similar way. The
orders define unique isomorphisms
\begin{equation}
\label{mam_chripku}
\OUT \cong (\rada 1k) \ \mbox { and } \IN \cong (\rada 1k)
\end{equation}
of ordered sets.
Since graphs spanning $\wGr$ are determined by specifying how the
outputs of the vertices are connected to the inputs, there exists a
one-to-one correspondence $G \leftrightarrow \varphi_G$ between graphs
$G \in \wGr$ and isomorphisms $\varphi_G : \OUT \stackrel{\cong}{\to}
\IN$. Given~(\ref{mam_chripku}), such $\varphi_G$ can be interpreted
as an element of the symmetric group $\Sigma_k$. The map $\Psi$ is
then defined by $\Psi(G) := \varphi_G$.
It is simple to verify that the diagram~(\ref{nehoji_se_to}) commutes,
so the proposition follows from the Invariant Tensor Theorem.
\end{proof}
\section{Symmetries occur}
\label{s4}
In the light of diagram~(\ref{nehoji_se_to}),
Proposition~\ref{zabere_to?} may look like just a clumsy reformulation of
the Invariant Tensor Theorem. Graphs become relevant when
symmetries occur.
\begin{example}
\label{456}
Let $\Sym(\otexp V2,V) \subset {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V2,V)$ be the subspace of
symmetric bilinear maps, i.e.~maps satisfying $f(v',v'') =
f(v'',v')$ for $v',v'' \in V$. Let us explain how to use
calculations of Section~\ref{s2} to describe
${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-equivariant maps in ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}\left(\adjust {.4}\otexp V2 \sqot \Sym(\otexp
V2,V),V\right)$.
The right $\Sigma_2$-action on ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V2,V)$ given by permuting
the inputs of bilinear maps is such that the space $\Sym(\otexp V2,V)$
equals the subspace ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V2,V)^{\Sigma_2}$ of $\Sigma_2$-fixed
elements. This right $\Sigma_2$-action induces a left
$\Sigma_2$-action on ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}\left(\adjust {.4}\otexp V2 \sqot {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp
V2,V),V\right)$ which commutes with the ${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-action, therefore it
restricts to a left $\Sigma_2$-action on the subspace
${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}\otexp V2 \sqot {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V2,V),V\right)$ of
${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-equivariant maps.
There is also a left $\Sigma_2$-action on the linear space $\wGr_{\rm ex}$
interchanging the inputs of the $F$-vertices of generating graphs. It
is simple to check that the map~(\ref{boli_mne_v_krku}) of
Section~\ref{s2} is equivariant with respect
to these two $\Sigma_2$-actions, hence it induces the map
\begin{equation}
\label{porad_mi_neni_dobre}
\Sigma_2 \backslash \widehat{\mathrm R}}\def\uR{{\underline{R}}_n :\Sigma_2 \backslash \wGr_{\rm ex}
\to \Sigma_2 \backslash
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}\otexp V2 \ot {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V2,V),V\right)
\end{equation}
of left cosets. Observe that, by a standard duality argument,
\begin{equation}
\label{ttt}
\Sigma_2 \backslash{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}\otexp V2 \ot
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V2,V),V\right)\cong
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}\otexp V2 \ot \Sym(\otexp V2,V),V\right).
\end{equation}
Let us denote $\wGr_{{\rm ex},\bullet} := \Sigma_2 \backslash\wGr_{\rm
ex}$. The bullet $\bullet$ in the subscript signals the presence
of vertices with fully symmetric inputs. By definition, graphs $G',
G'' \in \wGr_{\rm ex}$ are identified in the quotient
$\wGr_{{\rm ex},\bullet}$ if they
differ only by the order of inputs of the $F$-vertex. In
Figure~\ref{table}, this identification is indicated by vertical
braces. We see that $\wGr_{{\rm ex},\bullet}$
is again a space {\em spanned by
graphs,\/} this time with no linear order on the inputs of the
$F$-vertex. So we may {\em define\/} $\wGr_{{\rm ex},\bullet}$ as the space
spanned by directed graphs with vertices~(\ref{aaa}) and one binary
(ordinary, non-planar) vertex~(\ref{bbb}). We conclude by
interpreting~(\ref{porad_mi_neni_dobre}) as the map
\begin{equation}
\label{Phillips}
\widehat{\mathrm R}}\def\uR{{\underline{R}}_n : \wGr_{{\rm ex},\bullet} \to {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}\otexp V2 \ot
\Sym(\otexp V2,V),V\right).
\end{equation}
It follows from the properties of the map~(\ref{boli_mne_v_krku}) and
the characteristic zero assumption that $\widehat{\mathrm R}}\def\uR{{\underline{R}}_n$ is always an
epimorphism and is an isomorphism if $n \geq 3$.
\end{example}
At this point we want to incorporate symmetries into
Proposition~\ref{zabere_to?} by generalizing the pattern used in Example~\ref{456}.
Unfortunately, it turns out that treating the
space~(\ref{zabiraji_antibiotika?}) in full generality leads to a
notational disaster.
To keep the length of formulas within a
reasonable limit, we decided to {\em assume from now on\/} that
$p_1= \cdots = p_r = 1$,
$c=0$ and $d=1$. This means that we will restrict our attention to
maps in
\begin{equation}
\label{red}
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}\left(\adjust {.4}{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{{\hh}_1},V) \ot \cdots \ot {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp
V{{\hh}_r},V),V\right).
\end{equation}
For graphs this assumption implies that the vertices $\Rada F1r$
have precisely one output, and that the anchor
$\hskip .2em\raisebox{-.1em}{{\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}}\hskip -.2em$
has one input and no outputs. The number of inputs of $F_i$ will be
called the {\em arity\/} of $F_i$, $1 \leq i \leq r$.
Condition~(\ref{vyleci_mne_to?}) reduces to
\[
r = {\hh}_1 + \cdots + {\hh}_r + 1
\]
and one also sees that $r$ equals the number of edges of the
generating graphs.
The above generality is sufficient for all applications we have in mind.
A modification to the general case is straightforward but
notationally challenging.
The space ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{\hh},V)$ admits, for each ${\hh} \geq 0$, a
natural right $\Sigma_{\hh}$-action given by permuting inputs of
multilinear maps.
A {\em symmetry\/} of maps in ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{\hh},V)$
will be specified by a subset ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \subset
{\mathbf k}[\Sigma_{\hh}]$. We then denote
\[
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}(\otexp V{\hh},V) := \left\{\adjust {.4} f \in {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{\hh},V)
;\ f {\mathfrak s} = 0 \mbox {
for each } {\mathfrak s} \in {\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}\right\}.
\]
For ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}$ as above and a left $\Sigma_{\hh}$-module $U$, we will abbreviate
by ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \backslash U$ the left coset ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} U \backslash U$.
\begin{example}
\label{exxx}
Let ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} := I_{\hh} \subset {\mathbf k}[\Sigma_\hh]$ be the augmentation ideal.
Then ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{I_{\hh}}(\otexp V{\hh},V)$ is the space of symmetric maps,
\[
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{I_{\hh}}(\otexp V{\hh},V) = \Sym(\otexp V{\hh},V),
\]
therefore the augmentation ideal describes the symmetry of the local
coordinates of vector fields and their derivatives,
see~\cite[Example~3.2]{markl:na}.
We leave as an exercise to describe in this language the spaces of
{\em anti\/}symmetric maps.
\end{example}
\begin{example}
\label{exxy}
Let ${\hh} := v+2$, $v \geq 0$, and let $\nabla \subset {\mathbf k}[\Sigma_{\hh}]$ be
the image of the augmentation ideal $I_v$ of ${\mathbf k}[\Sigma_v]$ in
${\mathbf k}[\Sigma_{\hh}]$ under the map of group rings induced by the inclusion
$\Sigma_v \hookrightarrow \Sigma_v \times \Sigma_2
\hookrightarrow\Sigma_{\hh}$ that interprets permutations of $(\rada 1v)$
as permutations of $(\rada 1v,v+1,v+2)$ keeping the last two elements
fixed. Then ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_\nabla(\otexp V{\hh},V)$ consists of multilinear maps
$\otexp V{(v+2)} \to V$ that are symmetric in the first $v$ inputs,
i.e.~multilinear maps possessing the symmetry of the Christoffel
symbols of linear connections and their derivatives, see
again~\cite[Example~3.2]{markl:na}.
\end{example}
\begin{remark}
\label{patek_v_IHES}
It is clear how to generalize the above notion of symmetry to maps in
the left $\Sigma_p$- right $\Sigma_{\hh}$-module ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{\hh},\otexp
Vp)$ for general $p,{\hh} \geq 0$. A symmetry of these maps
will be specified by
subsets ${\mathfrak{{I}}}} \def\So{{\mathfrak{{O}}} \subset {\mathbf k}[\Sigma_{{\hh}}]$ and $\So \subset {\mathbf k}[\Sigma_{p}]$,
the corresponding subspaces will then be
\[
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\mathfrak{{I}}}} \def\So{{\mathfrak{{O}}}}^{\So}(\otexp {V}{{\hh}},\otexp {V}{p}) :=
\left\{\adjust {.4} f \in {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{\hh},\otexp Vp)
;\ f {\mathfrak s} = 0 = {\mathfrak t} f \mbox {
for each } {\mathfrak s} \in {\mathfrak{{I}}}} \def\So{{\mathfrak{{O}}} \mbox { and } {\mathfrak t} \in \So
\right\}.
\]
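For instance, taking for both subsets the augmentation ideals of
Example~\ref{exxx}, ${\mathfrak{I}} = I_{\hh}$ and ${\mathfrak{O}} = I_{p}$,
the two conditions read $f\sigma = f$ and $\tau f = f$ for all
$\sigma \in \Sigma_{\hh}$ and $\tau \in \Sigma_{p}$, so that
\[
{\mbox {\it Lin\/}}_{I_{\hh}}^{I_{p}}(\otexp V{\hh},\otexp Vp)
\]
is the space of multilinear maps symmetric in all inputs and,
separately, in all outputs.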
\end{remark}
Suppose we are given subsets ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i \subset {\mathbf k}[\Sigma_{{\hh}_i}]$, $1
\leq i \leq r$. Our aim is to describe ${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-invariant elements in
the space
\begin{equation}
\label{reds}
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}\left(\adjust {.4}{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_1}(\otexp V{{\hh}_1},V)
\ot \cdots \ot {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_r}(\otexp
V{{\hh}_r},V),V\right).
\end{equation}
Let
\[
{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} :={\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_1 \cup \cdots \cup {\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_r
\subset {\mathbf k}[\Sigma_{{\hh}_1} \times \cdots \times \Sigma_{{\hh}_r}],
\]
where ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i$ is, for $1 \leq i \leq r$, identified with
its image in ${\mathbf k}[\Sigma_{{\hh}_1} \times \cdots \times \Sigma_{{\hh}_r}]$
under the map induced by the group inclusion $\Sigma_{{\hh}_i} \hookrightarrow
\Sigma_{{\hh}_1} \times \cdots \times \Sigma_{{\hh}_r}$.
As in Example~\ref{456}, we use the fact that, for $1 \leq i \leq r$,
each ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{{\hh}_i},V)$ is a right $\Sigma_{{\hh}_i}$-space, hence
the tensor product ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{{\hh}_1},V) \ot \cdots \ot {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp
V{{\hh}_r},V)$ has a natural right $\Sigma_{{\hh}_1} \times \cdots \times
\Sigma_{{\hh}_r}$-action which induces a left $\Sigma_{{\hh}_1} \times \cdots
\times \Sigma_{{\hh}_r}$-action on the space~(\ref{red}). This action
restricts to the subspace of ${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-equivariant maps.
There is also a left $\Sigma_{{\hh}_1} \times \cdots \times
\Sigma_{{\hh}_r}$-action on the space $\wGr$ given by permuting, in the
obvious manner, the inputs of the vertices $\Rada F1r$ of generating
graphs. The map $\widehat{\mathrm R}}\def\uR{{\underline{R}}_n$ of Proposition~\ref{zabere_to?} is equivariant
with respect to the above two actions and induces the map
\[
{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \backslash \widehat{\mathrm R}}\def\uR{{\underline{R}}_n : {\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \backslash \wGr
\to
{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \backslash
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp V{{\hh}_1},V) \ot \cdots \ot {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}(\otexp
V{{\hh}_r},V),V\right)
\]
of left quotients. Denoting $\wGr_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} := {\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \backslash \wGr$
and realizing that, by duality, the codomain of ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \backslash \widehat{\mathrm R}}\def\uR{{\underline{R}}_n$ is
isomorphic to the subspace of ${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-fixed
elements in~(\ref{reds}), we obtain the map (denoted again $\widehat{\mathrm R}}\def\uR{{\underline{R}}_n$)
\begin{equation}
\label{jeste_jeden_den}
\widehat{\mathrm R}}\def\uR{{\underline{R}}_n : \wGr_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \to {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_1}(\otexp V{{\hh}_1},V)
\ot \cdots \ot {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_r}(\otexp
V{{\hh}_r},V),V\right)
\end{equation}
which is, by Proposition~\ref{zabere_to?},
an epimorphism and is an isomorphism if $\dim(V) \geq r$.
\begin{remark}
\label{jaja}
As in Example~\ref{456}, it turns out that
the quotient $\wGr_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} = {\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \backslash \wGr$ is a
{\em space of graphs\/} though, for general symmetries,
``space of graphs'' means a free wheeled operad on a certain
$\Sigma$-module~\cite{mms}.
In the cases relevant for our paper, we however remain in the realm of
`classical' graphs, as shown in the following example, see also the
proof of Corollary~\ref{boli_mne_za_krkem}.
\end{remark}
\begin{example}
\label{ja}
Suppose that, for some $1 \leq i \leq r$, ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i$ equals the
augmentation ideal $I_{{\hh}_i}$ of ${\mathbf k}[\Sigma_{{\hh}_i}]$ as in
Example~\ref{exxx}.
Then, in the quotient ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \backslash \wGr$, one identifies graphs that
differ by the order of inputs of the vertex $F_i$. In other words,
modding out by ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i \subset {\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}$ erases the order of inputs of
$F_i$, turning $F_i$ into an ordinary (non-planar) vertex. If ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i =
\nabla$ as in Example~\ref{exxy}, one gets a
vertex of arity $v+2$, $v \geq 0$,
whose first $v$ inputs are symmetric.
\end{example}
For applications, we
still need one more level of generalization that will reflect the
antisymmetry of the Chevalley-Eilenberg
complex~\cite[Section~2]{markl:na} in the Lie algebra variables. As
a motivation for our construction, we offer the following continuation
of the calculations in Section~\ref{s2} and Example~\ref{456}.
\begin{example}
\label{zivotosprava}
We will consider the tensor product $V \ot V$ as a left
$\Sigma_2$-module, with the action $\tau(v' \ot v'') := -
(v'' \ot v')$, for $v',v'' \in V$ and the generator $\tau \in
\Sigma_2$. The subspace $(V \ot V)^{\Sigma_2}$ of $\Sigma_2$-fixed
elements is then precisely the second exterior power $\mbox{\Large$\land$}}\def\bp{{\mathbf p}^2 V$. This left
action induces a ${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-equivariant right $\Sigma_2$-action on the
space ${\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}\left(\adjust {.4}\otexp V2 \sqot \Sym(\otexp V2,V),V\right)$ such that
\[
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}\left(\adjust {.4}\otexp V2 \ot \Sym(\otexp V2,V),V\right)/\Sigma_2 \cong
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}\left(\adjust {.4}\mbox{\Large$\land$}}\def\bp{{\mathbf p}^2 V \ot \Sym(\otexp V2,V),V\right).
\]
The above isomorphism restricts to an isomorphism
\begin{equation}
\label{u}
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}\otexp V2 \ot \Sym(\otexp V2,V),V\right)/\Sigma_2 \cong
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}\mbox{\Large$\land$}}\def\bp{{\mathbf p}^2 V \ot \Sym(\otexp V2,V),V\right)
\end{equation}
of the subspaces of ${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-equivariant maps.
Likewise, $\wGr_{{\rm ex},\bullet}$ carries a right $\Sigma_2$-action that
interchanges the labels $X$ and $Y$ of the \black-vertices of graphs
in the last column of Figure~\ref{table} and multiplies the sign of
the corresponding generator by $-1$. The map~(\ref{Phillips}) is
$\Sigma_2$-equivariant, therefore it induces the map
\[
\widehat{\mathrm R}}\def\uR{{\underline{R}}_n/\Sigma_2 : \wGr_{{\rm ex},\bullet} /\Sigma_2 \to
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}\otexp V2 \ot \Sym(\otexp V2,V),V\right)/\Sigma_2.
\]
Let us denote ${\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^2_{{\rm ex},\bullet}
:= \wGr_{{\rm ex},\bullet}/\Sigma_2$ and $\rR^2_n :=
\widehat{\mathrm R}}\def\uR{{\underline{R}}_n/\Sigma_2$. Using~(\ref{u}), one rewrites the above map as an
epimorphism
\[
\rR^2_n : {\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^2_{{\rm ex},\bullet}
\epi {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\adjust {.4}\mbox{\Large$\land$}}\def\bp{{\mathbf p}^2 V \sqot \Sym(\otexp V2,V),V\right)
\]
which is an isomorphism if $n \geq 3$.
The space ${\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^2_{{\rm ex},\bullet}$ is isomorphic to the span of the set of
directed, oriented graphs with one (non-planar) binary vertex $F$, an
anchor \anchor, and two `white' vertices \white. By an {\em
orientation\/} we mean a linear order of white vertices. A graph with
the opposite orientation is identified with the original one taken with the
opposite sign. It is clear that, with ${\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^2_{{\rm ex},\bullet}$
defined in this way, the
map ${\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^2_{{\rm ex},\bullet}
\to \wGr_{{\rm ex},\bullet} /\Sigma_2$ that replaces the first
(in the linear order given by the orientation) white vertex \white\ by
the black vertex \black\ labelled by $X$, and the second white vertex
by the black vertex labelled by $Y$, is an isomorphism.
The symmetry of the inputs of the vertex $F$ implies
the following identities in ${\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^2_{{\rm ex},\bullet}$:
\[
\unitlength.5cm
\begin{picture}(10,2)(0,.5)
\put(0,2){\makebox(0,0)[cc]{\hskip .5mm${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}
\put(0,1.08){\vector(0,1){.85}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0.3,1.2){\makebox(0,0)[l]{\scriptsize$F$}}
\put(-1,-1){
\put(0.15,1.15){{\vector(1,1){.76}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\circ$}}
\put(1,1){\makebox(0,0)[cc]{$<$}}
}
\put(1,-1){
\put(-.15,1.15){{\vector(-1,1){.76}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\circ$}}
}
\put(2.5,1){\makebox(0,0){$=-$}}
\put(5,0){
\put(0,2){\makebox(0,0)[cc]{\hskip .5mm${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}
\put(0,1.08){\vector(0,1){.85}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0.3,1.2){\makebox(0,0)[l]{\scriptsize$F$}}
\put(-1,-1){
\put(0.15,1.15){{\vector(1,1){.76}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\circ$}}
\put(1,1){\makebox(0,0)[cc]{$>$}}
}
\put(1,-1){
\put(-.15,1.15){{\vector(-1,1){.76}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\circ$}}
}
\put(2.5,1){\makebox(0,0){$=-$}}
}
\put(10,0){
\put(0,2){\makebox(0,0)[cc]{\hskip .5mm${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}
\put(0,1.08){\vector(0,1){.85}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0.3,1.2){\makebox(0,0)[l]{\scriptsize$F$}}
\put(-1,-1){
\put(0.15,1.15){{\vector(1,1){.76}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\circ$}}
\put(1,1){\makebox(0,0)[cc]{$<$}}
}
\put(1,-1){
\put(-.15,1.15){{\vector(-1,1){.76}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\circ$}}
}
}
\put(11.5,1){\makebox(0,0){,}}
\end{picture}
\adjust {1.2}
\]
from which one concludes that
\[
\hskip 5cm
\unitlength.5cm
\begin{picture}(10,2)(0,.5)
\put(0,2){\makebox(0,0)[cc]{\hskip .5mm${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}
\put(0,1.08){\vector(0,1){.85}}
\put(0,1){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(0.3,1.2){\makebox(0,0)[l]{\scriptsize$F$}}
\put(-1,-1){
\put(0.15,1.15){{\vector(1,1){.76}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\circ$}}
\put(1,1){\makebox(0,0)[cc]{$<$}}
}
\put(1,-1){
\put(-.15,1.15){{\vector(-1,1){.76}}}
\put(0,1){\makebox(0,0)[cc]{\Large$\circ$}}
}
\put(2.5,1){\makebox(0,0){$=0$.}}
\end{picture}
\adjust {1.2}
\]
Therefore ${\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^2_{{\rm ex},\bullet}$
is in this case one-dimensional, spanned by the
equivalence class of the oriented directed graph
\[
\begin{picture}(7,5)(1,5)
\unitlength.74cm
\put(-1,-1.5){
\put(0,-.1){
\put(0,2){\put(0.03,0){\makebox(0,0)[cc]{${\raisebox {.1em}{\rule{.6em}{.6em}} \hskip .1em}$}}}
\put(0,1.17){\vector(0,1){.75}}
\put(0,1){\makebox(0,0)[cc]{\Large$\circ$}}
}
\put(.4,.2){
\put(2.09,.85){\makebox(0,0)[cc]{\oval(1.5,1.5)[b]}}
\put(2.09,1.15){\makebox(0,0)[cc]{\oval(1.5,1.5)[t]}}
\put(2.85,1.22){\line(0,1){.3}}
\put(.82,.82){\vector(1,1){.48}}
\put(.7,.7){\makebox(0,0)[cc]{\Large$\circ$}}
\put(1.35,1.35){\makebox(0,0)[cc]{\Large$\bullet$}}
\put(1.32,1.25){\makebox(0,0)[tc]{\vector(0,1){0}}}
\put(1,1.45){\makebox(0,0)[r]{\scriptsize $F$}}
\put(0.16,0.7){\makebox(0,0)[cc]{$<$}}
\put(3.4,1){\makebox(0,0)[b]{.}}
}}
\end{picture}
\adjust {2}
\]
In the notation of Figure~\ref{table}, the above graph represents the
map that sends $(X\land Y) \ot F \in \mbox{\Large$\land$}}\def\bp{{\mathbf p}^2 V \ot \Sym(\otexp V2,V)$ into
\[
X \cdot \Tr(F(Y,-)) - Y \cdot \Tr(F(X,-)) \in V.
\]
\end{example}
Let us turn to our final task. We want to describe ${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-invariant
elements in the space
\begin{equation}
\label{piano-nad-hlavou}
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}\left(\mathop{{\rm \EXT}}\displaylimits_{1 \leq i \leq m} \Sym(\otexp V{{\hh}_i},V) \ot
\bigotimes_{m+1 \leq i \leq r} {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i}(\otexp V{{\hh}_i},V),V\right)
\end{equation}
where, as before, $r,\Rada {\hh}1r$ are positive integers, ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i
\subset {\mathbf k}[\Sigma_{{\hh}_i}]$ for $m+1 \leq i \leq r$, and $m$ is
an integer such that $1 \leq m \leq r$. Having in mind the description
of the space of symmetric multilinear maps given in
Example~\ref{exxx}, we extend the definition of ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i$ also to
$1 \leq i \leq m$, by putting ${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i: = I_{\hh_i}$.
The first step is to identify
the exterior power $\mathop{{\LAND}}\displaylimits_{1 \leq i \leq m} \Sym(\otexp V{{\hh}_i},V)$
with the fixed point set of an action of a suitable finite group. This
can be done as follows.
For $1 \leq w \leq m$, let $A(w) \subset \{\rada 1m\}$ be the subset
$A(w) := \{1 \leq i \leq m;\ {\hh}_i = {\hh}_w\}$. Then
\[
\{\rada 1m\} = \textstyle\bigcup_{1 \leq w \leq m} A(w)
\]
is a decomposition of $\{\rada 1m\}$ into not necessarily distinct
subsets. Let $\fA \subset \Sigma_m$ be the subgroup of permutations
of $\{\rada 1m\}$ preserving this decomposition.
The group $\fA$ acts on $\bigotimes _{1 \leq i \leq m}
\Sym(\otexp V{{\hh}_i},V)$ by permuting the corresponding
factors. If we consider this tensor product as a left $\fA$-module
with this permutation action twisted by the signum representation, then
\[
\mathop{{\rm \EXT}}\displaylimits_{1 \leq i \leq m} \Sym(\otexp V{{\hh}_i},V) \cong
\left(\bigotimes_{1 \leq i \leq m} \Sym(\otexp V{{\hh}_i},V)
\right)^\fA.
\]
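For instance, if $m = 2$ and ${\hh}_1 = {\hh}_2$, then $\fA = \Sigma_2$
and, under the sign-twisted action $\tau(f' \ot f'') := -(f'' \ot f')$,
the fixed points are precisely the antisymmetrized tensors,
\[
f' \land f'' \longleftrightarrow \frac 12 \left(f' \ot f'' - f'' \ot
f'\right),\ f',f'' \in \Sym(\otexp V{{\hh}_1},V),
\]
recovering the usual description of the second exterior power, as in
Example~\ref{zivotosprava}.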
The above left $\fA$-action on $\bigotimes _{1 \leq i
\leq m} \Sym(\otexp V{{\hh}_i},V)$ induces a dual ${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-equivariant
right $\fA$-action on the space~(\ref{piano-nad-hlavou}).
There is a right $\fA$-action on the quotient $\wGr_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} = {\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}
\backslash \wGr$ defined as
follows. For a graph $G \in \wGr$ representing an element $[G] \in
\wGr_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}$ and for $\sigma \in \fA$, let
$G^\sigma$ be the graph obtained from $G$ by permuting
the vertices $\Rada F1m$ according to $\sigma$. We then put $[G]\sigma
:= {\rm sgn\/}(\sigma) [G^\sigma]$. Since, by the
definition of $\fA$, $\sigma$ may interchange only vertices with the
same number of inputs and the same symmetry, our definition of
$G^\sigma$ makes sense.
It is simple to see that
the map $\widehat{\mathrm R}}\def\uR{{\underline{R}}_n$ in~(\ref{jeste_jeden_den}) is $\fA$-equivariant,
giving rise to the map
\[
\widehat{\mathrm R}}\def\uR{{\underline{R}}_n / \fA : \wGr_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}/\fA \to
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}({\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_1}(\otexp V{{\hh}_1},V) \ot \cdots \ot
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_r}(\otexp V{{\hh}_r},V),V)/\fA
\]
of right cosets.
The codomain of $\widehat{\mathrm R}}\def\uR{{\underline{R}}_n/ \fA$ is easily seen to be isomorphic to the
subspace of ${{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}$-equivariant elements
in~(\ref{piano-nad-hlavou}). The above calculations are summarized in the
following proposition in which
${\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^m_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} := \wGr_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}/\fA$ and $\rR^m_n:= \widehat{\mathrm R}}\def\uR{{\underline{R}}_n / \fA$.
\begin{proposition}
\label{zabere_to??}
Let $r,\Rada \hh 1r$ be non-negative integers, $1 \leq m \leq r$, and
${\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i \subset {\mathbf k}[\Sigma_{\hh_i}]$ for $m+1 \leq i \leq r$. Then the map
\begin{equation}
\label{zitra_na_kole}
\rR^m_n :{\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^m_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}} \to {\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\rm GL\/}}\def\GL#1#2{{\GLname\/}^{(#1)}_{#2}(V)}\left(\mathop{{\rm \EXT}}\displaylimits_{1 \leq i \leq m}
\Sym(\otexp V{\hh_i},V) \ot
\bigotimes_{m+1 \leq i \leq r}
{\mbox {\it Lin\/}}}\def\Sym{{\mbox {\it Sym\/}}_{{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}_i}(\otexp V{\hh_i},V),V\right)
\end{equation}
constructed above is an epimorphism.
If, moreover, the dimension $n$ of $V$ is greater than
or equal to the number of edges of the graphs spanning
${\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^m_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}$, then $\rR^m_n$ is also an isomorphism.
\end{proposition}
The following result says that the presence of vertices with symmetric
inputs miraculously {\em extends\/} the stability range
(Definition~\ref{stab}). In applications, these vertices will
represent the Lie algebra generators in the Chevalley-Eilenberg
complex.
\begin{proposition}
\label{zitra_odletam_z_IHES_do_Prahy}
Suppose that $\Rada \hh 1m \geq 2$.
If $n \geq e-m$, where $n$ is the dimension of $V$ and $e$
the number of edges of graphs spanning
${\EuScript {G}\rm r}}\def\plGr{{\rm pl\widehat{\EuScript {G}\rm r}}^m_{\mathfrak{{I}}}} \def\udelta{{\underline{\delta}}$, then the map $\rR^m_n$ in Proposition~\ref{zabere_to??}
is an isomorphism.
\end{proposition}
\begin{proof}
Let $G$ be a
graph spanning ${\EuScript{G}{\rm r}}^m_{\mathfrak{I}}$ and $S \subset {\it Edg}(G)$ a subset of edges of $G$
such that $\card(S) > n$. For each permutation $\sigma$ of elements
of $S$, denote by $G_\sigma$ the graph obtained by cutting the edges
belonging to $S$ in the middle and regluing them following the
permutation $\sigma$. The linear combination
\begin{equation}
\label{eeee}
\sum_{\sigma \in \Sigma_S}{\rm sgn\/}(\sigma) \cdot G_\sigma \in
{\EuScript{G}{\rm r}}^m_{\mathfrak{I}}
\end{equation}
is then a graph-ical representation of the expression
in~(\ref{Pozitri_zpet_do_Prahy}), thus the kernel of $\rR^m_n$ is
generated by expressions of this type. Since, by assumption,
$\card(S) \leq n+m$ and $\Rada \hh 1m \geq 2$, the set $S$ must
necessarily contain two input edges of the {\em same\/} symmetric
vertex of $G$. This implies that the sum~(\ref{eeee}) vanishes,
because with each graph $G_\sigma$ it contains the same graph with the
opposite sign. This shows that the kernel of $\rR^m_n$ is trivial.
\end{proof}
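In coordinates, the vanishing used in this proof is the elementary linear-algebra fact that antisymmetrizing a tensor over more index slots than $\dim V$ kills it, since some basis index must repeat in every component. A small numerical illustration in Python (ours, not part of the text), mirroring the cancellation in the sum~(\ref{eeee}):

```python
import itertools
import numpy as np

def antisymmetrize(t):
    """Return the sum over all slot permutations sigma of sgn(sigma) * t^sigma."""
    k = t.ndim
    out = np.zeros_like(t)
    for sigma in itertools.permutations(range(k)):
        # sign of sigma = determinant of the corresponding permutation matrix
        sgn = round(np.linalg.det(np.eye(k)[list(sigma)]))
        out += sgn * np.transpose(t, sigma)
    return out

rng = np.random.default_rng(0)
n = 3
# n + 1 > n index slots: antisymmetrization vanishes identically (pigeonhole)
t = rng.standard_normal((n,) * (n + 1))
assert np.allclose(antisymmetrize(t), 0.0)
# exactly n slots: generically non-zero (proportional to the volume form)
t2 = rng.standard_normal((n,) * n)
assert not np.allclose(antisymmetrize(t2), 0.0)
```

Here $t^\sigma$ denotes the tensor with its index slots permuted by $\sigma$; the first assertion is the coordinate form of the pigeonhole argument above.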
\begin{remark}
\label{po_navratu_z_polska}
By an absolutely straightforward generalization of the above
constructions, one can obtain versions of
Proposition~\ref{zabere_to??} and
Proposition~\ref{zitra_odletam_z_IHES_do_Prahy} describing the space
\begin{equation}
\label{Eli_chce_prijet_v_prosinci.}
{\mbox{\it Lin\/}}_{{\rm GL}(V)}\left(\mathop{{\rm \EXT}}\displaylimits_{1 \leq i \leq m}
\Sym(\otexp V{\hh_i},V) \ot
\bigotimes_{m+1 \leq i \leq r}
{\mbox{\it Lin\/}}_{{\mathfrak{I}}_i}^{\So_i}(\otexp V{\hh_i},\otexp V{p_i}),
{\mbox{\it Lin\/}}_{{\mathfrak{I}}}^{\So}(\otexp V{c},\otexp V{d})\right)
\end{equation}
in terms of a space spanned by graphs. Since the notational aspects of
such a generalization are horrendous, we must leave the details as an
exercise to the reader.
\end{remark}
\section{A particular case}
\label{s5}
We finish this note with a corollary tailored to the needs of~\cite{markl:na}.
For non-negative integers $m$, $b$ and $c$, denote by
${\EuScript{G}{\rm r}}^m_{\bullet(b)\nabla(c)}$ the space spanned by directed, oriented
graphs with
\begin{itemize}
\item[(i)]
$m$ unlabeled `white' vertices with fully symmetric inputs and
arities $\geq 2$,
\item[(ii)]
$b$ `black' labelled vertices with fully
symmetric inputs and arities $\geq 0$,
\item[(iii)]
$c$ labelled $\nabla$-vertices, and
\item[(iv)]
the anchor \anchor.
\end{itemize}
In item~(iii), a $\nabla$-vertex means a vertex with the symmetry
described in Example~\ref{exxy}, see also Example~\ref{ja}.
As in Example~\ref{zivotosprava}, an {\em orientation\/}
is given by
a linear order on the set of white vertices. If $G'$ and $G''$ are graphs
in ${\EuScript{G}{\rm r}}^m_{\bullet(b)\nabla(c)}$ whose orientations differ by an odd number of
transpositions, then we identify $G' = -G''$ in
${\EuScript{G}{\rm r}}^m_{\bullet(b)\nabla(c)}$.
\begin{corollary}
\label{boli_mne_za_krkem}
For all non-negative integers $m$, $b$ and $c$ there exists a natural
epimorphism
\begin{eqnarray*}
\lefteqn{
\rR^m_{\bullet(b)\nabla(c),n} : {\EuScript{G}{\rm r}}^m_{\bullet(b)\nabla(c)} \epi}
\\
&& \bigoplus_{\vec h \in \frH}
{\mbox{\it Lin\/}}_{{\rm GL}(V)} \hskip -.2em
\left(\mathop{{\rm \EXT}}\displaylimits_{1 \leq i \leq m} \hskip -.2em
\Sym(\otexp V{{\hh}_i},V) \ot
\bigotimes_{m+1 \leq i \leq m+b} \hskip -1.2em \Sym(\otexp V{{\hh}_i},V)
\bigotimes_{m+b+1 \leq i \leq m+b+c} \hskip -1.2em
{\mbox{\it Lin\/}}_\Delta(\otexp V{{\hh}_i},V),V\right),
\end{eqnarray*}
with the direct sum taken over the set $\frH$ of all multiindices
$\vec h = (\Rada h1{m+b+c})$ such that
\[
\Rada h1m \geq 2,\
\Rada h{m+1}{m+b} \geq 0\ \mbox { and }\ \Rada h{m+b+1}{m+b+c} \geq 2.
\]
The map $\rR^m_{\bullet(b)\nabla(c),n}$ is an isomorphism if $n = \dim(V) \geq
b+c$.
\end{corollary}
\begin{proof}
The map $\rR^m_{\bullet(b)\nabla(c),n}$ is constructed by assembling
the maps $\rR^m_n$ from Proposition~\ref{zabere_to??} as follows. For
a multiindex $\vec h = (\Rada h1{m+b+c}) \in \frH$ as in the corollary
take, in Proposition~\ref{zabere_to??}, $r:= m+b+c$ and
\[
{\mathfrak{I}}_i = {\mathfrak{I}}_i(\vec h) :=
\cases{\adjust {.4} I_{{\hh}_i}}{for $m+1 \leq i \leq m+b$ and}%
{\rule{0pt}{1.2em}\nabla}{for $m+b+1 \leq i \leq r$,}
\]
see Examples~\ref{exxx}
and~\ref{exxy} for the notation. Let $\rR^m_n(\vec h)$ be the
map~(\ref{zitra_na_kole}) corresponding to the above choices and
$\rR^m_{\bullet(b)\nabla(c),n} :=
\bigoplus_{\vec h \in \frH}\rR^m_n(\vec h)$.
We only need to show that the graph
space ${\EuScript{G}{\rm r}}^m_{\bullet(b),\nabla(c)}$ is isomorphic to the direct
sum of the double quotients
${\EuScript{G}{\rm r}}^m_{{\mathfrak{I}}(\vec h)} = {\mathfrak{I}}(\vec h) \backslash \wGr /\fA$.
As we argued in Example~\ref{ja}, the left quotient
$\wGr_{{\mathfrak{I}}(\vec h)} = {\mathfrak{I}}(\vec h) \backslash \wGr$
is spanned by directed graphs with $r$ labelled
vertices $\Rada F1r$ such that the 1st type vertices $\Rada F1m$
(`white' vertices) have
fully symmetric inputs and arities ${\hh}_1,\ldots, {\hh}_m$, and the remaining
vertices $\Rada F{m+1}r$ are as in items (ii)--(iv) of the definition of
${\EuScript{G}{\rm r}}^m_{\bullet(b)\nabla(c)}$ but with fixed
arities $\rada {h_{m+1}}{h_r}$.
Modding out $\wGr_{{\mathfrak{I}}(\vec h)}$ by $\fA$ identifies graphs
that differ by a relabelling of white vertices of
the same arity, with the sign given by the signum of this relabelling.
This clearly means that the map
\[
{\EuScript{G}{\rm r}}^m_{\bullet(b),\nabla(c)} \to
\bigoplus_{\vec h \in \frH} {\EuScript{G}{\rm r}}^m_{{\mathfrak{I}}(\vec h)} = \bigoplus_{\vec h \in \frH}
\wGr_{{\mathfrak{I}}(\vec h)} / \fA
\]
that assigns label $F_1$ to the first
(in the linear order given by the orientation) white vertex of graphs
generating ${\EuScript{G}{\rm r}}^m_{\bullet(b),\nabla(c)}$,
label $F_2$ to the second white vertex, etc.,
is an isomorphism. By simple combinatorics, graphs spanning
${\EuScript{G}{\rm r}}^m_{\bullet(b),\nabla(c)}$ have precisely $m+b+c$ edges, which
completes the proof of the corollary.
\end{proof}
\begin{remark}
\label{ZASE_mne_boli_v_krku}
Proposition~\ref{zabere_to??} and its Corollary~\ref{boli_mne_za_krkem} were
obtained by applying the double-coset reduction ${\mathfrak{I}} \backslash {-}
/\fA$ and standard duality to the map $\widehat{\mathrm{R}}_n$ of
Proposition~\ref{zabere_to?}.
Backtracking all the constructions involved, one can see that, in
Corollary~\ref{boli_mne_za_krkem}, the invariant
linear map $\rR^m_{\bullet(b)\nabla(c),n}(G)$ corresponding to a graph $G \in
{\EuScript{G}{\rm r}}^m_{\bullet(b)\nabla(c)}$ is given by
the `state sum'~(\ref{beru_antibiotika}) {\em antisymmetrized\/}
in the white vertices.
\end{remark}
| {
"timestamp": "2008-02-28T22:05:38",
"yymm": "0801",
"arxiv_id": "0801.0418",
"language": "en",
"url": "https://arxiv.org/abs/0801.0418",
"abstract": "We describe a correspondence between GL_n-invariant tensors and graphs, and show how this correspondence accomodates various types of symmetries and orientations.",
"subjects": "Representation Theory (math.RT); Algebraic Topology (math.AT)",
"title": "Invariant tensors and graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9884918529698321,
"lm_q2_score": 0.8128673110375458,
"lm_q1q2_score": 0.8035127145061085
} |
https://arxiv.org/abs/1004.2517 | Upper and lower bounds for normal derivatives of spectral clusters of Dirichlet Laplacian | In this paper, we prove the upper and lower bounds for normal derivatives of spectral clusters $u=\chi_{\lambda}^s f$ of Dirichlet Laplacian $\Delta_M$, $$c_s \lambda\|u\|_{L^2(M)} \leq \| \partial_{\nu}u \|_{L^2(\partial M)} \leq C_s \lambda \|u\|_{L^2(M)} $$ where the upper bound is true for any Riemannian manifold, and the lower bound is true for some small $0<s<s_M$, where $s_M$ depends on the manifold only, provided that $M$ has no trapped geodesics (see Theorem \ref{Thm3} for a precise statement), which generalizes the early results for single eigenfunctions by Hassell and Tao. | \section{\bf Introduction}
Let $M$ be a smooth compact Riemannian manifold with boundary $\partial M = Y$. It is well known that minus the Dirichlet Laplacian $-\Delta_M$ on $M$ has discrete spectrum $0 < \lambda_1^2 < \lambda_2^2 \leq \lambda_3^2 \dots \to \infty$. Let $e_j$ be an $L^2$-normalized eigenfunction corresponding to $\lambda_j^2$, and let $\psi_j$ be the normal derivative of $e_j$ at the boundary. In \cite{Ozawa} Ozawa posed the following question:
{\it Do there exist constants
$0< c < C < \infty$, depending on $M$ but not on $j$, such that
\begin{equation}
c \lambda_j \leq \| \psi_j \|_{L^2(Y)} \leq C \lambda_j ?
\label{bounds}
\end{equation}}
Using heat kernel techniques, Ozawa \cite{Ozawa} showed that an averaged version of (\ref{bounds}) holds. More precisely, he showed that
$$
\sum_{\lambda_j < \lambda} \psi_j^2(y) =
\frac{\lambda^{n+2} }{ (4\pi)^{n/2} \Gamma((n/2)+2)}+ o(\lambda^{n+2}),
\quad \forall y \in Y.
$$
This asymptotic formula (after integrating over $Y$) would be implied by (\ref{bounds}) in view of Weyl asymptotics for the $\lambda_j$. In
\cite{HT} Hassell and Tao proved an upper bound of the form $\|\psi_j \|_2 \leq C\lambda_j$ for general manifolds, and a lower bound
$c \lambda_j \leq \|\psi_j \|_2$ provided that $M$ has no trapped geodesics (see Theorem \ref{Thm3} for a precise statement).
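As a toy sanity check (ours, not from \cite{HT} or the present paper), both bounds in (\ref{bounds}) can be verified exactly on the interval $M=[0,\pi]$, where the boundary consists of the two points $\{0,\pi\}$:

```python
import numpy as np

# Dirichlet eigenfunctions of -d^2/dx^2 on M = [0, pi]:
#   e_j(x) = sqrt(2/pi) sin(j x),  lambda_j = j,
# so psi_j = e_j' takes the values below at the boundary points {0, pi}.
for j in range(1, 10):
    psi = np.sqrt(2 / np.pi) * j * np.array([1.0, (-1.0) ** j])
    norm_psi = np.sqrt(np.sum(psi ** 2))   # L^2 norm over the two boundary points
    # ||psi_j||_{L^2(bdry)} = (2/sqrt(pi)) * lambda_j exactly, so both the upper
    # and the lower bound hold here with c = C = 2/sqrt(pi)
    assert abs(norm_psi - (2 / np.sqrt(np.pi)) * j) < 1e-12
```

In one dimension the two bounds coincide; the interesting content of (\ref{bounds}) is of course in higher dimensions.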
Define the {\bf Spectral Cluster} $\chi_{\lambda}^s f$ with spectral band width $s$ by
\begin{eqnarray*}
\chi_{\lambda}^s f &=&\sum_{\lambda_j\in[\lambda,\lambda+s)}e_j(f)
=\int_M[\sum_{\lambda_j\in[\lambda,\lambda+s)}e_j(x)e_j(y)]f(y)dy \label{SpecC} \\
e_j(f)&=& e_j(x)\int_M e_j(y)f(y)dy.
\end{eqnarray*}
The $L^2\to L^p$ estimates and gradient estimates on spectral clusters have been widely studied (see \cite{G}, \cite{SS1}-\cite{Xu2}). In general, estimates for single eigenfunctions may remain true, in some form, for spectral clusters. It is therefore natural to ask: {\it do the upper and lower bounds for normal derivatives of Dirichlet eigenfunctions in \cite{HT} remain true for normal derivatives of spectral clusters $\chi_{\lambda}^s f$?}
The key obstacle to answering this question directly from the estimates of Hassell and Tao \cite{HT} for single eigenfunctions is that $\partial_{\nu}e_i$ and $\partial_{\nu}e_j$ are in general NOT orthogonal in $L^2(\partial M)$ when $\lambda_i\neq \lambda_j$.
In this paper, based on the ideas in \cite{HT} plus some estimates for the extra terms which arise for spectral clusters, we prove the upper bound from (\ref{bounds}), with $e_j$ replaced by $\chi_{\lambda}^s f$, on general manifolds for any $s>0$.
\t[{\bf Upper Bound}]\label{Thm1} Let $M$ be a smooth compact Riemannian manifold with boundary, and let $u=\chi_{\lambda}^s f$ be a spectral cluster. Then for every $s>0$ there exists $C>0$, independent of $\lambda$ and $s$, such that
$$ \|\partial_{\nu} u\|_{L^2(\partial M)}\leq C\sqrt{1+s}\,(\lambda+s) \|u\|_{L^2(M)}.$$
\et
In particular, for the spectral projection $P_{\lambda}(f)=\displaystyle\sum_{\lambda_j\in(0,\lambda]}e_j(f)$, we have the following upper bound for its normal derivative:
\c\label{Cor1} Let $M$ be a smooth compact Riemannian manifold with boundary. Then for the spectral projection $P_{\lambda}(f)$, we have
$$ \|\partial_{\nu}P_{\lambda}(f) \|_{L^2(\partial M)}\leq C\lambda^{3/2} \|P_{\lambda}(f)\|_{L^2(M)}.$$
\ec
The lower bound from (\ref{bounds}), with $e_j$ replaced by $\chi_{\lambda}^s f$, is more subtle. It can only hold when $s$, the width of the spectral cluster, is sufficiently small. This can be seen in the case of the unit disc. If $s>\pi$, we can take a suitable linear combination of two consecutive eigenfunctions with angular dependence $e^{in\theta}$ (these are of the form $e^{in\theta} J_n(\alpha r)$, where $\alpha$ is a zero of the Bessel function $J_n$) and find a function in a ``wide'' spectral cluster with zero normal derivative.
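The disc example can be made concrete numerically. The sketch below (our illustration; it assumes `jn_zeros` and the Bessel derivative `jvp` from `scipy.special`) builds a two-term combination whose normal derivative vanishes identically; since consecutive zeros of $J_n$ with $n\geq 1$ are more than $\pi$ apart, the two frequencies only fit into a cluster of width $s>\pi$:

```python
import numpy as np
from scipy.special import jn_zeros, jvp

n = 1
a1, a2 = jn_zeros(n, 2)            # first two zeros of the Bessel function J_1
assert a2 - a1 > np.pi             # consecutive zeros of J_n (n >= 1) are > pi apart
# For u = (c1 J_n(a1 r) + c2 J_n(a2 r)) e^{i n theta}, the normal derivative at
# r = 1 is (c1 a1 J_n'(a1) + c2 a2 J_n'(a2)) e^{i n theta}; kill it by choosing:
c1, c2 = a2 * jvp(n, a2), -a1 * jvp(n, a1)
assert abs(c1 * a1 * jvp(n, a1) + c2 * a2 * jvp(n, a2)) < 1e-12
assert min(abs(c1), abs(c2)) > 0.1   # the combination is genuinely nontrivial
```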
In order to obtain the lower bound, we first study bounded Euclidean domains. Following an idea of Rellich \cite{R} for single eigenfunctions on bounded Euclidean domains, we obtain the lower bound from (\ref{bounds}), with $e_j$ replaced by $\chi_{\lambda}^s f$, on bounded Euclidean domains for small $s>0$.
\t[{\bf Lower Bound for Euclidean Domains}]\label{Thm2}
Let $M \subset \R^n$ be a bounded Euclidean domain, let $R_M=\max_{x,y\in M}|x-y|$ be the diameter of $M$, and let $u=\chi_{\lambda}^s f$ be a spectral cluster. Then for $0<s<\frac{1}{2R_M}$, there exists $C_s>0$ independent of $\lambda$, such that
$$ \| \partial_{ \nu} u\|_{L^2(\partial M)}\geq C_s\lambda \|u\|_{L^2(M)}.$$
\et
Next we turn to the lower bound on general manifolds. To give a basic picture of our theorem, we recall some simple examples from \cite{HT} for single eigenfunctions: the cylinder (Example 3 in \cite{HT}), the hemisphere (Example 4 in \cite{HT}), and the spherical cylinder (Example 5 in \cite{HT}). In all these examples the upper bound holds, but the lower bound fails. These examples lead one to expect that the failure of the lower bound is related to the presence of geodesics in $M$ which do not reach the boundary. We obtain the lower bound estimates as in \cite{HT}, with $e_j$ replaced by $\chi_{\lambda}^s f$, for small $0<s<s_M$, where $s_M$ depends on the manifold only.
\t[{\bf Lower Bound for Manifolds}]\label{Thm3}
Suppose $M$ has {\bf no trapped geodesics}, i.e., $M$ can be embedded in the interior of a compact manifold with boundary, $N$, of the same dimension, such that every geodesic in $M$ eventually meets the boundary of $N$, and let $u=\chi_{\lambda}^s f$ be a spectral cluster. Then there exists $s_M>0$, which depends on the manifold only, such that for any $0<s<s_M$, there exists $C_s>0$ independent of $\lambda$, such that
$$ \|\partial_{\nu} u\|_{L^2(\partial M)}\geq C_s\lambda \|u\|_{L^2(M)}.$$
\et
We organize our paper as follows: In section 2, we prove a Rellich-type estimate from Green's formula, together with some perturbation estimates to handle the extra terms in the Rellich-type estimate. In section 3, we prove the upper bound for general manifolds, using the estimates from section 2 and following the argument in \cite{HT}. In section 4, we prove the lower bound for Euclidean domains using the fact that the commutator $[-\Delta_M,x\cdot\nabla]=-2\Delta_M$, which suggests the proof of the lower bound in the general case. In section 5, we show the lower bound for the $L^2$ norm of $\partial_{\nu}u$ on an arbitrary compact Riemannian manifold $M$ satisfying the no-trapped-geodesics condition of Theorem \ref{Thm3}, by finding a differential operator $P$ of order $2K-1$ which has a positive commutator with $-\Delta_M$; this relies on a trick due to Morawetz, Ralston and Strauss \cite{MRS}. In the Appendix, we study the $L^2$ estimates for spectral clusters near the boundary, which are needed in the proof of Theorem \ref{Thm3}, following the ideas of section 3 in \cite{HT} for single eigenfunctions.
In what follows we shall use the convention that $C$ denotes a constant that is not necessarily the same at each occurrence.
\section{\bf Rellich-type estimates and Perturbation estimates}
To prove the upper bound, and the lower bound for Euclidean domains, we use the following Lemma which we call a Rellich-type estimate.
\la({\bf Rellich-type estimates}) Let $u=\chi_{\lambda}^s f$ be a spectral cluster of $f$. Then for any differential operator $A$,
\aa
\int_Y \partial_{\nu} u Au d\sigma &= &\int_M <u,[-\Delta,A]u>dg + \int_M <(-\Delta-\lambda^2)u,Au>dg \nn\\
&&-\int_M <u,A(-\Delta-\lambda^2)u>dg.\label{Rellich}
\eaa
\el
{\bf Proof:} The proof is very simple. By Green's Formula, one has
\begin{eqnarray*} \int_Y \partial_{\nu} u Au d\sigma - \int_Y u \partial_{\nu} Au d\sigma= \int_M <-\Delta u,Au>dg -\int_M <u,-\Delta Au>dg.\end{eqnarray*}
Note that $u\equiv 0$ on $Y$, so the left side of the above equality gives the left side of (\ref{Rellich}). Use the fact that
$[-\Delta,A]=[-\Delta-\lambda^2,A]$ to write the right side as
\begin{eqnarray*}
&& \int_M <(-\Delta-\lambda^2)u,Au>dg -\int_M <u,(-\Delta-\lambda^2)Au>dg\\
&=& \int_M <u,[-\Delta,A]u>dg + \int_M <(-\Delta-\lambda^2)u,Au>dg \\
&&-\int_M <u,A(-\Delta-\lambda^2)u>dg.
\end{eqnarray*}\qe
\r If we pick $f=e_j$, the eigenfunction with eigenvalue $\lambda_j^2$, then using the fact that $\Delta e_j+\lambda_j^2 e_j=0$, the above Lemma reduces to Lemma 2.1 in \cite{HT}.
\er
Since (\ref{Rellich}) contains two additional terms compared with Lemma 2.1 in \cite{HT}, we estimate them in the following Lemma.
\la({\bf Perturbation estimates}) Let $u=\chi_{\lambda}^s f$ be a spectral cluster of $f$, and let $A$ be a first-order differential operator. Then
\begin{eqnarray*}
&&||(-\Delta-\lambda^2)u ||_2\leq 2s(\lambda+s) ||u||_2;\\
&& ||A(-\Delta-\lambda^2)u ||_2\leq C_A s(\lambda+s)^2 ||u||_2.
\end{eqnarray*}
\el
{\bf Proof:} For the first inequality, by direct computation, we have:
\begin{eqnarray*}
||(-\Delta-\lambda^2)u ||_2^2&=&\int_M<(-\Delta-\lambda^2)u ,(-\Delta-\lambda^2)u >dg\\
&=&\sum_{\lambda_j\in[\lambda,\lambda+s)}(\lambda_j^2-\lambda^2)^2e_j^2(f)\\
&<&\sum_{\lambda_j\in[\lambda,\lambda+s)}(2s\lambda+s^2)^2e_j^2(f)\\
&<&4s^2(\lambda+s)^2||u||_2^2
\end{eqnarray*}
For the second inequality, since $A$ is a first-order differential operator and $M$ is
compact, we have the pointwise estimate
$$|A f(x)|\leq C_A|\nabla f(x)|,\qquad \forall x\in M,\ \forall f\in C^1(M).$$
With this estimate, by direct computation, we have:
\begin{eqnarray*}
||A(-\Delta-\lambda^2)u ||_2^2&\leq& C_A^2||\nabla (-\Delta-\lambda^2)u ||_2^2\\
&=&C_A^2\int_M<\nabla(-\Delta-\lambda^2)u ,\nabla(-\Delta-\lambda^2)u >dg\\
&=&C_A^2\sum_{\lambda_j\in[\lambda,\lambda+s)}(\lambda_j^2-\lambda^2)^2\lambda_j^2e_j^2(f)\\
&<&C_A^2\sum_{\lambda_j\in[\lambda,\lambda+s)}(2s\lambda+s^2)^2\lambda_j^2e_j^2(f)\\
&<&C_A^24s^2(\lambda+s)^4||u||_2^2
\end{eqnarray*}
\qe
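The first perturbation estimate can be sanity-checked on a model spectrum. The sketch below (our illustration; it assumes the model eigenvalues $\lambda_j=j$, as for $-d^2/dx^2$ on an interval) expands a synthetic cluster in an orthonormal eigenbasis, where the $L^2$ norms reduce to coefficient sums:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, s = 20.0, 3.0
# model spectrum lambda_j = j; the cluster keeps the j with lam <= lambda_j < lam + s
js = np.array([j for j in range(1, 200) if lam <= j < lam + s], dtype=float)
c = rng.standard_normal(js.size)       # coefficients e_j(f) of u = chi_lam^s f
u_norm = np.sqrt(np.sum(c ** 2))       # ||u||_2 in the orthonormal eigenbasis
# ||(-Delta - lam^2) u||_2^2 = sum_j (lambda_j^2 - lam^2)^2 c_j^2
lhs = np.sqrt(np.sum((js ** 2 - lam ** 2) ** 2 * c ** 2))
assert lhs <= 2 * s * (lam + s) * u_norm
```

The bound is crude but uniform in the coefficients, which is all the proofs below require.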
\section{\bf Upper bound for general manifolds}
In this section, we shall prove the upper bound for general manifolds. Here we use geodesic coordinates with respect to the boundary. We can find a small constant $\delta>0$ so that the map $x=(y,r)\in Y\times [0,\delta) \rightarrow M$,
sending $(y,r)$ to the endpoint $x$ of the geodesic of length $r$ which starts at $y\in Y=\partial M$ and is perpendicular to $Y$, is a local diffeomorphism. In these local coordinates $x=(y, r)$, the metric is $ g = dr^2 + h_{ij} dy_i dy_j$ and the Riemannian measure is
\begin{equation}
dg = k^2\, dr\, dy, \quad \mbox{where} \quad k^4 = \det h_{ij},\label{k}
\end{equation}
and the Laplacian can be written as
\begin{eqnarray*}
\Delta_g= \sum_{i,j=1}^n g^{ij}(x)\frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^n b_i(x) \frac{\partial}{\partial x_i},
\end{eqnarray*}
where $(g^{ij}(x))_{1\le i,j \le n}$ is the inverse matrix of $(g_{ij}(x))_{1\le i,j \le n}$, and $g^{nn}=1$, and $g^{nk}=g^{kn}=0$ for $k\neq n$. Also the $b_i(x)$ are $C^{\infty}$ and real valued.
{\bf Proof of Theorem \ref{Thm1}:} For $u=\chi_{\lambda}^s f$, to prove an upper bound for the $L^2$ norm of $\partial_{\nu} u$, we choose an operator $A$ so that the left hand side of (\ref{Rellich}) in Lemma 2.1 is a positive form in $\partial_{\nu} u$. To do this, we choose $A = \chi(r) \partial_r$, where $\chi \in C_c^\infty(\R)$ is identically $1$ for $r$ close to zero, and vanishes for $r \geq \delta$. The left hand side of (\ref{Rellich}) in Lemma 2.1 is then precisely the square of the $L^2$ norm of $\partial_{\nu} u$.
After one integration by parts in the first term of the right hand side of (\ref{Rellich}) in Lemma 2.1, there are first order (vector-valued) differential operators $B_1$, $B_2$ with smooth coefficients such that
$$\int_M <u,[-\Delta,A]u>dg=\int_M <B_1 u, B_2 u>dg. $$
From Lemma 2.2, each term of the right hand side of (\ref{Rellich}) in Lemma 2.1 is dominated by
\begin{eqnarray*}
|\int_M <u,[-\Delta,A]u>dg|&=&|\int_M<B_1 u, B_2 u>dg|\leq C_A||\nabla u||_2^2 \leq C_A(\lambda+s)^2||u||_2^2\\
|\int_M <(-\Delta-\lambda^2)u,Au>dg|&\leq& ||(-\Delta-\lambda^2)u||_2||Au||_2\leq C_As(\lambda+s)^2||u||_2^2\\
|\int_M <u,A(-\Delta-\lambda^2)u>dg|&\leq& ||A(-\Delta-\lambda^2)u||_2||u||_2\leq C_As(\lambda+s)^2||u||_2^2
\end{eqnarray*}
where $C$ and $C_A$ depend on the domain, but not on $\lambda$. This proves the upper bound for any compact Riemannian manifold with boundary.
\qe
If we choose $A = Q^*Q\partial_r$ near the boundary in the above proof, with $Q$ an elliptic differential operator of order $k$ in the $y$ variables, one obtains the following $H^k$ upper bound for spectral clusters $u=\chi_{\lambda}^s f$:
\t\label{Thm-H-k}
\aa
||\partial_{\nu} u||_{H^k(Y)}\leq C\sqrt{1+s}(\lambda+s)^k||u||_2\label{deriv-upper-bounds}
\eaa
for any integer $k$, and hence (by interpolation) for any real $k$.
\et
\section{\bf Lower bound for Euclidean domains}
In this section, we shall prove Theorem \ref{Thm2}, the lower bound for Euclidean domain $M \subset \R^n$.
{\bf Proof of Theorem \ref{Thm2}:}
We choose $A$ so that the first term on the right hand side, rather than the left hand side, of (\ref{Rellich}) in Lemma 2.1 is a positive form. Without loss of generality, assume $M\subset \{x\in \R^n| |x|\leq \frac{R_M}{2}\}$. We choose
\begin{equation}
A = \sum_{i=1}^n x_i \frac{\partial}{\partial x_i}=x\cdot\nabla.\nn
\end{equation}
As is well known in scattering theory, the commutator of this operator with $-\Delta$ (which here is minus the Euclidean Laplacian) is $[-\Delta,A] = -2\Delta$, and for any $g\in C^1(M)$, $|A g(x)|\leq \frac{R_M}{2}|\nabla g(x)|$ for all $x\in M$. Hence, in this case the left side of (\ref{Rellich}) gives us
\begin{equation}
\int\limits_{Y} \frac{\partial u}{\partial \nu} Au \, d\sigma =
\int\limits_{Y} \nu \cdot x \, \big( \frac{\partial u}{\partial \nu} \big)^2 \, d\sigma
\leq C \| \partial_{\nu}u \|_2^2,
\label{lower-Euc}
\end{equation}
and the right side of (\ref{Rellich}) gives us
\begin{eqnarray}
&&\mbox{right side of } (\ref{Rellich}) \nn\\
&\geq& \int_M <u,-2\Delta u>dg -\|(-\Delta-\lambda^2)u\|_2\|Au \|_2-\|u\|_2\|A(-\Delta-\lambda^2)u\|_2\nn\\
&\geq& 2\|\nabla u\|_2^2-\frac{R_M}{2}\Big[\|(-\Delta-\lambda^2)u\|_2\|\nabla u\|_2+\|u\|_2\|\nabla(-\Delta-\lambda^2)u\|_2\Big] \nn\\
&\geq& \Big(2\lambda^2-2R_M s(\lambda+s)^2\Big) \|u\|_2^2,\label{lower-Euc-2}
\end{eqnarray}
which gives the lower bound. The equality in (\ref{lower-Euc}) for single eigenfunctions was proved by Rellich \cite{R}.
\qe
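For a single eigenfunction, the argument above reduces to Rellich's identity $\int_Y \nu\cdot x\,(\partial_\nu u)^2\, d\sigma = 2\lambda^2\|u\|_{L^2(M)}^2$. The following Python sketch (our illustration, not part of the paper) verifies this identity by quadrature for $u=\sin(mx)\sin(ny)$ on the square $(0,\pi)^2$:

```python
import numpy as np

# Eigenfunction u = sin(mx) sin(ny) on the square (0, pi)^2, lambda^2 = m^2 + n^2.
m, n = 3, 2
lam2 = m ** 2 + n ** 2
t = np.linspace(0.0, np.pi, 2001)
h = t[1] - t[0]

# ||u||^2 by a separable Riemann sum (exact here up to rounding)
norm2 = np.sum(np.sin(m * t) ** 2) * np.sum(np.sin(n * t) ** 2) * h * h

# nu . x vanishes on the sides x = 0 and y = 0 and equals pi on x = pi, y = pi
dnu_x = m * np.cos(m * np.pi) * np.sin(n * t)   # du/dnu on the side x = pi
dnu_y = n * np.cos(n * np.pi) * np.sin(m * t)   # du/dnu on the side y = pi
bdry = np.pi * (np.sum(dnu_x ** 2) + np.sum(dnu_y ** 2)) * h

# Rellich identity: int_Y (nu . x)(du/dnu)^2 dsigma = 2 lambda^2 ||u||^2
assert abs(bdry - 2 * lam2 * norm2) < 1e-6
```

Both sides equal $13\pi^2/2$ for this choice of $m$ and $n$; the square is not star-shaped about the origin's interior in any special way, and the identity holds for any bounded Euclidean domain.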
\section{\bf The lower bound on Riemannian manifolds}
To find a lower bound for the $L^2$ norm of $\partial_{\nu}\Big(\chi_{\lambda}^s f\Big)$ on an arbitrary compact Riemannian manifold $M$ satisfying the no-trapped-geodesics condition of the main Theorem, we need to find a differential operator which has a positive commutator with $-\Delta_M$, as we did for domains in Euclidean space. One might wonder whether, on an arbitrary compact Riemannian manifold with no trapped geodesics, one could choose a first order {\it differential} operator $A$ whose commutator with $-\Delta_M$ has a positive symbol. Example 8 in \cite{HT} shows that this is impossible in general.
Firstly, we need a first order pseudodifferential operator $A$ on $N$ which has the required property to leading order, i.e., such that the symbol of $i[-\Delta,A]$ is positive:
\begin{lem}\label{Q-lemma}({\bf Lemma 4.1 in \cite{HT}}) Given any geodesic $\gamma$ in $S^*N$, there is a first order, classical, self-adjoint pseudodifferential operator $Q$ satisfying the transmission condition (see \cite{Ho}, section 18.2), and properly supported on $N$, such that the principal symbol $\sigma(i[-\Delta,Q])$ of $i[-\Delta,Q]$ is nonnegative on $T^*M$, and
\begin{equation}
\sigma(i[-\Delta,Q]) \geq \sigma(-\Delta) = |\xi|^2
\label{comm-cond}
\end{equation}
on a conic neighborhood $U_\gamma$ of $\gamma \cap T^*M$.
\end{lem}
We now use Lemma~\ref{Q-lemma} to construct our operator $A$. For each geodesic $\gamma$ in $S^*N$, we have a conic neighborhood $U_\gamma$ as in the Lemma. By compactness of $S^*M$, a finite number of the $U_\gamma$ cover $S^*M$. Let $A$ be the sum of the corresponding $Q_\gamma$. Then Lemma~\ref{Q-lemma} implies that
\begin{equation}
\sigma(i[-\Delta,A]) \geq |\xi|^2 \quad \mbox{on } T^*M.
\label{A}
\end{equation}
Secondly, we turn the pseudodifferential operator $A$ into a differential operator $P$ of order $2K-1$ with a positive commutator with $-\Delta$, as Hassell and Tao did in Section 5 of \cite{HT} for single eigenfunctions; this relies on a trick due to Morawetz, Ralston and Strauss \cite{MRS}.
Recall some facts about spherical harmonics. Let $\Delta_{S^{n-1}}$ denote the Laplacian on the $(n-1)$-sphere, which has eigenvalues $k(n+k-2)$, $k = 0, 1, 2, \dots$; denote the corresponding eigenspace by $V_k$. We recall that for every $\phi \in V_k$, the function $r^k \phi$ (thought of as a function on $\R^n$ written in polar coordinates) is a homogeneous polynomial of degree $k$ on $\R^n$. We summarize the needed results from Section 5 in \cite{HT} in the following proposition:
\p[Hassell-Tao \cite{HT}]
Since the symbol $a$ of the operator $A$ is odd, there is a spherical harmonics expansion of $a$ restricted to the cosphere bundle of $N$:
$$
a \restriction_{ S^*N}= \sum_{l=0}^\infty \phi_{2l+1}(x, \frac{\xi}{|\xi|}), \quad \phi_k(x, \cdot) \in V_k(S^*_x N).
$$
There is a natural number $K$ such that the operator $A'$ with symbol
$$
a' = \sum_{l=0}^{K-1} \phi_{2l+1}(x, \frac{\xi}{|\xi|})
$$
also has positive commutator with $-\Delta$. Following \cite{MRS}, one can turn $A'$ into a differential operator $P$ of order $2K-1$, by letting
$$
p = \sigma(P) = \sum_{l=0}^{K-1} \phi_{2l+1}(x, \frac{\xi}{|\xi|}) |\xi|^{2K- 1}.
$$
Moreover, the symbol of $i[-\Delta,P]$ satisfies
$$
\sigma(i[-\Delta,P]) = |\xi|^{2K} \big( \sigma(i[-\Delta,P]) |_{|\xi| = 1} \big) \geq c|\xi|^{2K}
\quad \mbox{for some } c > 0.
$$
Applying the G\aa{}rding inequality to $Q = i[-\Delta,P]$, we obtain
\ee
\int_M \langle u, Qu \rangle dg \geq c \| u \|_{H^K(M)}^2 - C \Big( \| u \|_{L^2(M)}^2 + \sum_{k=0}^{K-1} \| \pa_r^k u \| _{H^{K-1/2-k}(Y)}^2 \Big),
\label{Garding}
\eee
where $c$ is a positive constant depending on $P$ and $(M,g)$.
\ep
{\bf Proof of Theorem \ref{Thm3}:}
Take $A=P$ in Lemma 2.1. Then
\aa
&&\mbox{right side of } (\ref{Rellich}) \nn\\
&\geq& \int_M <u,Qu>dg -\|(-\Delta-\lambda^2)u\|_2\|Pu \|_2-\|u\|_2\|P(-\Delta-\lambda^2)u\|_2\nn\\
&\geq& \int_M <u,Qu>dg -C\Big[\|(-\Delta-\lambda^2)u\|_2\|u\|_{H^{2K-1}(M)}\nn\\
&& +\|u\|_2\|(-\Delta-\lambda^2)u\|_{H^{2K-1}(M)}\Big] \nn\\
&\geq& \int_M <u,Qu>dg -Cs\lambda^{2K}\|u\|_2^2\nn\\
&\geq& (c-Cs)\lambda^{2K} \| u \|_2^2 - C \Big( \| u \|_{L^2(M)}^2 + \sum_{k=0}^{K-1} \| \pa_r^k u \| _{H^{K-1/2-k}(Y)}^2 \Big), \label{lower-general}
\eaa
where we use Lemma 2.2 and $\|u\|_{H^k(M)}\leq C\lambda^k\|u\|_2$ for all $k>0$ to estimate the extra terms, and make use of (\ref{Garding}) to obtain the last inequality. Thus, there exists a constant $s_M>0$, which depends on $M$ only, such that the first term in (\ref{lower-general}) is positive when $0<s<s_M$.
Next we consider the left hand side of (\ref{Rellich}). Let us write $u = k^{-1}v$, where $k$ is as in (\ref{k}), so that $v$ satisfies (\ref{v-eqn}) in the Appendix. Then $Pu = \tilde P v$, where $\tilde P = P \circ k^{-1}$ is a differential operator of order $2K-1$. Since $k$ is smooth, we obtain
\begin{equation}
C_s\lambda^{2K}\|u\|_2^2 \leq \|u\|_2^2 + \sum_{k=0}^{K-1} \| \pa_r^k v \| _{H^{K-1/2-k}(Y)}^2
+ \big| \int_{Y} \langle \partial_{\nu}v, \tilde P v \rangle \, d\sigma \big|.
\label{eqqq}\end{equation}
Since $v=0$ on $Y$, and we are interested in $\tilde P v |_Y$, we may assume that $\tilde P = P' \pa_r$, where $P'$ has order $2K-2$. Using (\ref{v-eqn}) in the Appendix, we may replace $\pa_r^2 v$ by $-(\lambda^2 + F)v - \pa_{y_i} (h^{ij} \pa_{y_j} v)+H$ repeatedly, until only $\pa_r \pa_y^\alpha v$ terms remain. Thus we have
\begin{eqnarray*}
\tilde P v |_Y = \sum_{j=0}^{K-1} \lambda^{2j} P_j (\partial_{\nu}v),
\end{eqnarray*}
where $P_j$ is a differential operator on $Y$ of order $2(K-1-j)$, independent of $\lambda$. Hence (\ref{eqqq}) becomes
\begin{equation}
C'_s\lambda^{2K}\|u\|_2^2 \leq \|u\|_2^2 + \sum_{k=0}^{K-1} \lambda^{2k-2} \| \partial_{\nu}u \| _{H^{K-1/2-k}(Y)}^2
+ \sum_{j=0}^{K-1} \lambda^{2j}\big| \int_{Y} \langle \partial_{\nu}v, P_j (\partial_{\nu}v) \rangle \, d\sigma \big|.\nn
\end{equation}
The argument reducing $Pu$ on $Y$ to the displayed sum of boundary operators applied to $\partial_{\nu}v$ is the same as that of Hassell and Tao in Section 5 of \cite{HT} for single eigenfunctions.
Using the upper bound estimate (\ref{deriv-upper-bounds}) for the $H^k$ norms in the sum over $k$, and for all terms in the sum over $j$ with $j < K-1$, we find
\begin{eqnarray*}
C''_s\lambda^{2K}\|u\|_2^2 \leq (1 + \lambda^{2K-1})\|u\|_2^2 + \lambda^{2K-2} \| \partial_{\nu}u \|_2^2
+ \lambda^{2K-1} \|u\|_2\| \partial_{\nu}u\|_2 .
\end{eqnarray*}
which gives
\begin{equation}
\| \partial_{\nu}u \|_2^2+\lambda\|u\|_2\| \partial_{\nu}u\|_2-(C''_s-\lambda^{-1}-\lambda^{-2K} )\lambda^{2}\|u\|_2^2\geq 0.\label{almost}
\end{equation}
Solving the inequality (\ref{almost}), for $\lambda$ large enough we obtain a constant $C_s$, independent of $\lambda$, such that
\begin{eqnarray*}
\| \partial_{\nu}u \|_2\geq C_s\lambda\|u\|_2.
\end{eqnarray*}
This proves the lower bound.
\qe
\r
One may also prove the lower bound by following Section 4 of \cite{HT} for single eigenfunctions almost line by line, though one needs some additional estimates on the inhomogeneous terms such as $H$ in (\ref{v-eqn}), which can be viewed as small perturbations when $s>0$ is small enough. That approach is lengthy and involves many pseudodifferential operator constructions and calculus.
\er
\section{\bf Appendix: Estimates for spectral clusters near the boundary}
Here we study the $L^2$ estimates for spectral clusters near the boundary, which are needed in the proof of Theorem \ref{Thm3}, following some ideas from section 3 in \cite{HT} and its erratum \cite{HT1} for single eigenfunctions, together with the upper bound for $\partial_{\nu}\Big(\chi_{\lambda}^s f\Big)$ from Theorem \ref{Thm1}.
As in section 3, we use the geodesic coordinate system $(y,r)$ near the boundary. Let us denote the boundary of $M$ by $Y$, and write $Y_r$ for the set of points at distance $r$ from the boundary, which is a submanifold for $r \le \delta$. Suppose that $u=\chi_{\lambda}^s f$ is a spectral cluster for the Dirichlet Laplacian. As in Lemma 3.2 of \cite{HT} and its erratum \cite{HT1} for a single eigenfunction, we derive an estimate on the $L^2$ norm of the spectral cluster $u=\chi_{\lambda}^s f$ on $Y_r$, exploiting the fact that $u$ vanishes on the boundary.
\begin{Prop} \label{L7-2} There exists $C > 0$, independent of $\lambda$ and $s$, such that
\begin{equation}
\int_{Y_r} u^2 d\sigma(y) \leq C\sqrt{1+s}(\lambda+s)^2 r^2\|u\|_2^2 \quad \forall\; r \in [0, \frac{\delta}{3}].
\label{bdy-est}\end{equation}
\end{Prop}
It will be convenient to change to the function $v = ku$ (this is equivalent to looking at the Laplacian acting on half-densities). Denote $u_l=e_l(f)$ and $v_l=ku_l$ for $\lambda_l\in[\lambda, \lambda+s)$. From equation (3.2) in \cite{HT}, $v_l$ solves the equation
\begin{equation}
\pa_r^2 v_l + \pa_i (h^{ij} \pa_j v_l) + \lambda_l^2 v_l + F v_l = 0, \quad h^{ij} = (h_{ij})^{-1}\nn
\end{equation}
where
$$
F = - k^{-1}\pa_r^2 k - k^{-1} \pa_i ( h^{ij} \pa_j k )
$$
is a smooth function on $M$. We have
\begin{equation}\label{v-eqn}
\pa_r^2 v + \pa_i (h^{ij} \pa_j v) + \lambda^2 v + F v = \sum_{\lambda_l\in[\lambda, \lambda+s)}(\lambda^2-\lambda_l^2)v_l=H
\end{equation}
As in Section 3 of \cite{HT} for a single eigenfunction (where the nonhomogeneous term $H$ does not appear), we define for the spectral cluster $u$ a sort of `energy' $E(r)$ for each value of $r$:
\begin{equation}
E(r) = \frac1{2} \int_{Y_{r}} \big( v_r^2 + (\lambda^2 + F)v^2 - h^{ij} \pa_i v \pa_j v-Hv \big) dy.
\end{equation}
This is obtained formally from the energy for hyperbolic operators, with $r$ playing the role of a time variable, by switching the sign of the term involving tangential derivatives. As in Lemma 3.1 of \cite{HT}, we have the following estimate for $E(r)$.
\begin{lem}\label{L7-1} For $r \in [0, \delta]$,
\begin{equation}\label{energy-est}
|E(r)| \leq C\sqrt{1+s}(\lambda+s)^2\|u\|_2^2
\end{equation}
where $C$ is independent of $\lambda$ and $s$.
\end{lem}
{\bf Proof of Lemma \ref{L7-1}:}
From the upper bound argument in section 3, we know that $E(0) = \frac1{2} \| \partial_{\nu} u \|_2^2 \leq C\lambda^2\| u \|_2^2$. We compute the derivative of $E(r)$:
\begin{eqnarray*}
\frac{\pa}{\pa r} E(r) = \int_{Y_{r}} \big( \pa_r^2 v \pa_r v +(\lambda^2 + F)v \pa_r v -h^{ij} \pa_i v \pa_r \pa_j v -Hv_r\\
+ \frac{\pa h^{ij}}{\pa r} \pa_i v \pa_j v + \frac{\pa F}{\pa r} v^2 -H_rv\big) dy.
\end{eqnarray*}
Integrating by parts in the third term, using the equation for $v$, and applying Cauchy-Schwarz to the last term, we
obtain
\begin{eqnarray*}
\big| \frac{\pa E}{\pa r} \big|(r) &\leq& C \int_{Y_{r}} \big( v^2 + |\nabla v|^2 +\lambda^2|v|^2+\lambda^{-2}|\nabla H|^2\big) dy \\
&\leq& C \int_{Y_{r}} \big( u^2 + |\nabla u|^2+ \lambda^2|u|^2+\lambda^{-2}|\nabla (H/k)|^2\big) k^2 dy.
\end{eqnarray*}
Thus, for $r_0 \in [0, \delta]$,
\begin{eqnarray*}
E(r_0) &=& E(0) + \int_0^{r_0} \frac{d}{dr} E(r) dr \\
&\leq & C\lambda^2\| u \|_2^2 + \int_M \big( u^2 + |\nabla u|^2+ \lambda^2|u|^2
+\lambda^{-2}|\nabla (H/k)|^2\big) dg\\
&\leq& C\sqrt{1+s}(\lambda+s)^2\|u\|_2^2,
\end{eqnarray*}
where we use Lemma 2.2 to estimate the last term.
\qe
\vspace{4mm}
{\bf Proof of Proposition \ref{L7-2}:}
Here we follow the main idea of the proof of Lemma 3.2 in \cite{HT} with its erratum \cite{HT1} for single eigenfunctions.
Consider the $L^2$ norm on $Y_r$,
$$L(r) = \int_{Y_r} u^2 k^2 dy = \int_{Y_r} v^2 dy.$$
We have
\begin{equation}\label{u-bd}
\int_0^\delta L(r) dr \leq \int_M u^2 \, dg = \|u\|_2^2.
\end{equation}
By direct computation, we have
$$ L'(r_0) = \int_{Y_{r_0}} 2 v v_r\ dy\quad \mathrm{and} \quad L''(r_0) =4 \int_{Y_{r_0}} v_r^2 \ dy \ - 4E(r_0).$$
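For completeness, the identity for $L''$ can be sketched as follows, using (\ref{v-eqn}) and an integration by parts over the closed manifold $Y_{r_0}$ (the measure $dy$ is fixed in these coordinates, so no lower-order terms arise in the differentiation).

```latex
% Differentiate L(r) = \int_{Y_r} v^2 \, dy twice and substitute (v-eqn) for \pa_r^2 v:
\[
L''(r) = \int_{Y_r} \left( 2 v_r^2 + 2 v\,\pa_r^2 v \right) dy
       = \int_{Y_r} \left( 2 v_r^2 - 2 v\,\pa_i (h^{ij} \pa_j v) - 2(\lambda^2 + F) v^2 + 2 H v \right) dy .
\]
% Integrating the second term by parts in y (Y_r has no boundary):
\[
L''(r) = \int_{Y_r} \left( 2 v_r^2 + 2 h^{ij} \pa_i v\, \pa_j v - 2(\lambda^2 + F) v^2 + 2 H v \right) dy
       = 4 \int_{Y_r} v_r^2\, dy - 4 E(r) .
\]
```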
On the other hand, from Cauchy-Schwarz we have
$$ 4 \int_{Y_{r_0}} v_r^2 \ dy \ \geq \ \frac{(\int_{Y_{r_0}} 2 v v_r\ dy)^2}{\int_{Y_{r_0}} v^2\ dy} = \frac{L'(r_0)^2}{L(r_0)}.$$
Thus we have the differential inequality for $L(r)$:
\begin{equation}
L'' \geq \frac{(L')^2}{L} - C\sqrt{1+s}(\lambda+s)^2\|u\|_2^2 ,
\label{diff-ineq}
\end{equation}
for some constant $C$ depending only on the manifold $M$. Define the quantity
$$
B(r) := \frac{L'(r)^2}{L(r)^2} - \frac{2C\sqrt{1+s}(\lambda+s)^2\|u\|_2^2}{L(r)}.
$$
For any $r\in [0, \delta]$ with $L'(r)>0$, from (\ref{diff-ineq}) we have
$$
B'(r) = \frac{2 L' L''}{L^2} - \frac{2 (L')^3}{L^3} + \frac{2 C\sqrt{1+s}(\lambda+s)^2\|u\|_2^2 L'}{L^2} \geq 0.
$$
Hence $B(r)$ is non-decreasing for $r\in [0, \delta]$ with $L'(r)>0$.
\vspace{4mm}
{\bf Claim:} There is $\Lambda>0$ such that for any $\lambda\geq \Lambda$, either $L'(r)\leq 0$ or $B(r)\leq 0$ holds for each $0<r<\delta/3$.
\vspace{4mm}
Define $O_{\lambda}=\{r | L'(r)>0,\; 0<r<\delta \}=\cup (a_n^{\lambda}, b_n^{\lambda})\subset (0,\delta]$. For any $r\in (a_n^{\lambda}, b_n^{\lambda})$ with $b_n^{\lambda}<\delta$, we have
$$
B(r)\leq B(b_n^{\lambda})= - \frac{2C\sqrt{1+s}(\lambda+s)^2\|u\|_2^2}{L(b_n^{\lambda})}\leq 0,
$$
since $L'(b_n^{\lambda})=0$. Hence the {\bf Claim} will be true unless there is an unbounded sequence of $\lambda$ for which $B(r_0)> 0$ at some $r_0$ with $0 < r_0 < \delta/3$, where $r_0\in (a_n^{\lambda}, \delta)$ and $(a_n^{\lambda}, \delta)$ is a subinterval of $O_{\lambda}$. Since $B$ is non-decreasing on this interval, we would then have $B(r) > 0$ for all $r \geq r_0$, so
$$
L'(r)^2 > 2C\sqrt{1+s}(\lambda+s)^2\|u\|_2^2 L(r) \hbox{ for all } r \geq r_0.
$$
In particular $L'(r)$ would be strictly positive for $r \geq r_0$. We rearrange this as
$$
(L(r)^{1/2})' = \frac{1}{2} L'(r) L(r)^{-1/2} > \sqrt{\frac{C}{2}\sqrt{1+s}(\lambda+s)^2\|u\|_2^2} \hbox{ for all } r > r_0.
$$
This would give
$$
L(r) \geq C'\sqrt{1+s}\, \lambda^2(r-r_0)^2\|u\|_2^2 \hbox{ for all } r > r_0, \hbox{ where } r_0<\delta/3.
$$
This would contradict the bound (\ref{u-bd}) for large $\lambda$. Hence {\bf Claim} is true.
\vspace{4mm}
Thus, for $\lambda \geq \Lambda$ (where $\Lambda$ is obtained from {\bf Claim} and depends only on $M$), we must have $B(r) \leq 0$ or $L'(r) \leq 0$ for all $0 < r \leq \delta/3$. In either case
$$
(L(r)^{1/2})' = \frac{1}{2} L'(r) L(r)^{-1/2} \leq \sqrt{C\sqrt{1+s}(\lambda+s)^2\|u\|_2^2} \hbox{ for all } r \leq \frac{\delta}{3}.
$$
Since $L(0) = 0$, this implies (\ref{bdy-est}) for $\lambda \geq \Lambda$.
Next for $\lambda < \Lambda$, since
$$
|u|=\Big|\sum_{\lambda_j\in[\lambda,\lambda+s)}e_j(f)\Big|\leq \Big(\sum_{\lambda_j\in[\lambda,\lambda+s)}e_j^2\Big)^{1/2}\Big(\sum_{\lambda_j\in[\lambda,\lambda+s)}||e_j(f)||_2^2\Big)^{1/2}
=\Big(\sum_{\lambda_j\in[\lambda,\lambda+s)}e_j^2\Big)^{1/2}||u||_2
$$
we have
$$
\int_{Y_r} u^2 d\sigma(y)\leq \int_{Y_r} \sum_{\lambda_j\in[\lambda,\lambda+s)}e_j^2 d\sigma(y)\|u\|_2^2 \leq C\Big(\sum_{\lambda_j\in[\lambda,\lambda+s)}\lambda_j^2\Big) r^2\|u\|_2^2\leq C_{\Lambda} r^2\|u\|_2^2
$$
where we make use of the result of Lemma 3.2 in \cite{HT} with its erratum \cite{HT1} for single eigenfunctions:
$$
\int_{Y_r} e_j^2 d\sigma(y)\leq C\lambda_j^2 r^2, \quad \lambda_j\leq \Lambda+s.
$$
\qe
\vspace{5mm}
{\bf Acknowledgement:} The author would like to thank Professor Andrew Hassell for pointing out a mistake in the first version of this paper and for helpful suggestions.
\bibliographystyle{abbrv}
| {
"timestamp": "2011-06-20T02:00:26",
"yymm": "1004",
"arxiv_id": "1004.2517",
"language": "en",
"url": "https://arxiv.org/abs/1004.2517",
"abstract": "In this paper, we prove the upper and lower bounds for normal derivatives of spectral clusters $u=\\chi_{\\lambda}^s f$ of Dirichlet Laplacian $\\Delta_M$, $$c_s \\lambda\\|u\\|_{L^2(M)} \\leq \\| \\partial_{\\nu}u \\|_{L^2(\\partial M)} \\leq C_s \\lambda \\|u\\|_{L^2(M)} $$ where the upper bound is true for any Riemannian manifold, and the lower bound is true for some small $0<s<s_M$, where $s_M$ depends on the manifold only, provided that $M$ has no trapped geodesics (see Theorem \\ref{Thm3} for a precise statement), which generalizes the early results for single eigenfunctions by Hassell and Tao.",
"subjects": "Analysis of PDEs (math.AP); Spectral Theory (math.SP)",
"title": "Upper and lower bounds for normal derivatives of spectral clusters of Dirichlet Laplacian",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9912886152849366,
"lm_q2_score": 0.8104788995148791,
"lm_q1q2_score": 0.8034185060177638
} |
https://arxiv.org/abs/1305.2661 | On Improved Bounds on Bounded Degree Spanning Trees for Points in Arbitrary Dimension | Given points in Euclidean space of arbitrary dimension, we prove that there exists a spanning tree having no vertices of degree greater than 3 with weight at most 1.559 times the weight of the minimum spanning tree. We also prove that there is a set of points such that no spanning tree of maximal degree 3 exists that has this ratio be less than 1.447. Our central result is based on the proof of the following claim:Given $n$ points in Euclidean space with one special point $V$, there exists a Hamiltonian path with an endpoint at $V$ that is at most 1.559 times longer than the sum of the distances of the points to $V$.These proofs also lead to a way to find the tree in linear time given the minimal spanning tree. | \section{Abstract}
Given points in Euclidean space of arbitrary dimension, we prove that there exists a spanning tree having no vertices of degree greater than 3 with weight at most 1.559 times the weight of the minimum spanning tree. We also prove that there is a set of points such that no spanning tree of maximal degree 3 exists that has this ratio be less than 1.447. Our central result is based on the proof of the following claim:
Given $n$ points in Euclidean space with one special point $v$, there exists a Hamiltonian path with an endpoint at $v$ that is at most 1.559 times longer than the sum of the distances of the points to $v$.
These proofs also lead to a way to find the tree in linear time given the minimal spanning tree.
\section{Introduction}
\label{sec:problem}
The minimum spanning tree (MST) problem in graphs is perhaps one of the most basic problems in graph algorithms. An MST is a spanning tree with minimal sum of edge weights. Efficient algorithms for finding an MST are well known.
One variant on the MST problem is the bounded degree MST problem, which consists of finding a spanning tree satisfying given upper bounds on the degree of each vertex and with minimal sum of edge weights subject to these degree bounds.
In general, this problem is NP-hard \cite{NPhard}, so no efficient algorithm exists. However, there are certain achievable results. For undirected graphs, Singh and Lau \cite{SinghLau} found a polynomial time algorithm to generate a spanning tree with total weight no more than that of the bounded degree MST and with each vertex having degree at most one greater than that vertex's bound. If the graph is undirected and satisfies the triangle inequality, Fekete and others \cite{network} bound the ratio of the total weight of the bounded-degree MST to that of any given tree, with a polynomial-time algorithm for generating a spanning tree satisfying the degree constraints and this ratio bound.
The Euclidean case, with vertices being points in Euclidean space and edge weights being Euclidean distances, also has a rich history. We denote (following Chan in \cite{1.633}) by $\tau_k^d$ the supremum, over all sets of points in $d$-dimensional Euclidean space, of the ratio of the weight of the bounded degree MST with all degrees at most $k$ to the weight of the MST with no restrictions on degrees ($\tau_k^\infty$ is the supremum of $\tau_k^d$ over all $d$). For $k=2$, the bounded-degree MST problem becomes the Traveling Salesman Problem and $\tau_2^d=2$ \cite{network}, thus making $k=3$ the first unsolved case.
Papadimitriou and Vazirani \cite{NPhard} showed that finding the degree-3 MST is NP-hard. Khuller, Raghavachari, and Young \cite{1.67} showed that $1.104\approx (\sqrt{2}+3)/4 \leq \tau_3^2 \leq 1.5$ and $1.035< \tau_4^2 \leq 1.25$. Chan \cite{1.633} improved the upper bounds to 1.402 and 1.143, respectively.
Jothi and Raghavachari \cite{degree4} showed that $\tau_4^2 \leq (2+\sqrt{2})/3\approx 1.1381$. $\tau_5^2=1$ since there is always an MST with maximal degree 5 or less \cite{degree5}.
These same papers also studied the problem in higher dimensions. Khuller, Raghavachari, and Young \cite{1.67} gave an upper bound on $\tau_3^\infty$ of $5/3\approx 1.667$, which Chan \cite{1.633} improved to $2\sqrt{6}/3\approx 1.633$. Both followed the same approach, proving these bounds on a certain ratio, which we will call $r$: the maximum ratio between the length of the shortest path through a collection of points starting at a special point and the weight of the star centered at that point. It is conjectured that $r$ is actually 1.5.
Khuller, Raghavachari, and Young \cite{1.67} showed that $\tau_3^\infty \leq r$. This is achieved in linear time as follows:
\begin{enumerate}
\item root the original tree
\item treating the root as $v$, find a Hamiltonian path with ratio at most $r$ through its children.
\item repeat recursively on each child.
\end{enumerate}
Each vertex then has at most 3 neighbors: two as a child and one as a parent.
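The three steps above can be sketched in code. The following Python sketch is ours, not from the papers: it uses a brute-force Hamiltonian-path subroutine for illustration (feasible only for small fan-out; the constructions in \cite{1.67,1.633} and in this paper avoid brute force while guaranteeing the ratio), and the names `degree3_tree` and `best_path_through` are hypothetical.

```python
import itertools
import math


def best_path_through(v, pts, coords):
    """Brute-force shortest Hamiltonian path starting at v through pts.
    Returns the visiting order of pts (illustrative only)."""
    best_len, best_order = float("inf"), list(pts)
    for perm in itertools.permutations(pts):
        cur, length = v, 0.0
        for p in perm:
            length += math.dist(coords[cur], coords[p])
            cur = p
        if length < best_len:
            best_len, best_order = length, list(perm)
    return best_order


def degree3_tree(tree, root, coords):
    """Convert a rooted spanning tree (dict: node -> list of children) into
    a spanning tree of maximum degree 3: replace each parent-children star
    by a path through the children starting at the parent, then recurse."""
    edges = []
    stack = [root]
    while stack:
        v = stack.pop()
        kids = tree.get(v, [])
        if kids:
            order = best_path_through(v, kids, coords)
            cur = v
            for k in order:  # path v -> k_1 -> k_2 -> ...
                edges.append((cur, k))
                cur = k
            stack.extend(kids)
    return edges
```

Each node receives at most two path edges among its siblings and one edge as the start of its own children's path, so the degree bound 3 is immediate.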
We improve previous upper bounds on $r$, and thus $\tau_3^\infty$, to 1.559. The proof leads to a linear time algorithm for generating the path and thus the bounded degree tree. Our approach is based on Chan's, but we weigh paths differently and select the number of points to remove when performing the induction based on the distances of points to $v$.
We also find, by construction, a non-trivial lower bound of about 1.447 on $\tau_3^\infty$.
In Section \ref{sec:lemmas}, we define $r$ more carefully, refer to a useful result, and discuss how we will use it. In Section \ref{sec:upperbound}, we improve the upper bound on $r$ to 1.559, and in Section \ref{sec:lowerbound} we improve the lower bound on $\tau_3^\infty$ to 1.447.
\section{Preliminaries}
\label{sec:lemmas}
$r$ is properly defined as follows:
Given a point $v$ and $m$ points $a_1,a_2,\ldots,a_m$ in a Euclidean space of arbitrary finite dimension, let $\displaystyle S=\sum_{i=1}^m d(v,a_i)$ and let $L$ be the length of the shortest possible path that starts at $v$ and goes through the other points in some order (it does not go back to $v$). Then $r$ is the supremum of the possible values of $L/S$ over all arrangements of points in any number of dimensions. The ratio $L/S=1.5$ is achieved for $m=2$ in one dimension by the points $v=0,a_1=1,a_2=-1$.
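As a sanity check, the extremal one-dimensional example can be verified numerically. The snippet below is illustrative (the function name `ratio` is ours); it brute-forces $L$ over all orderings.

```python
import itertools
import math


def ratio(v, points):
    """L/S for center v and the given points: L is the shortest path
    starting at v through all points; S is the sum of distances from v."""
    S = sum(math.dist(v, p) for p in points)
    L = min(
        sum(math.dist(order[i], order[i + 1]) for i in range(len(order) - 1))
        for perm in itertools.permutations(points)
        for order in [(v,) + perm]
    )
    return L / S


# The 1-D example attaining 1.5: v = 0, a_1 = 1, a_2 = -1.
r_example = ratio((0.0,), [(1.0,), (-1.0,)])
```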
We use the results of Young \cite{distsum} multiple times in order to bound certain sums of distances.
That paper deals with the maximum of weighted sums (with weights $w_{i,j}$) of distances between $n$ points in $(n-1)$-dimensional Euclidean space, given that each point $a_i$ is no further than some distance $l_i$ from the origin.
\begin{equation}\label{eq: Young}
\max \left(\sum_{1\leq i<j\leq n} w_{i,j} d(a_i,a_j) \right)=\min \left(\sqrt{\sum_{1\leq i<j\leq n} \frac{w_{i,j}^2}{x_ix_j}}\sqrt{\sum_{i=1}^n l^2_ix_i}\sqrt{\sum_{i=1}^n x_i}\right)
\end{equation}
where the maximum is taken over all arrangements of points and the minimum is taken over all nonnegative $x_i$.
Furthermore, Young specifies a relationship between the optimal arrangement and the values of $x_i$ where equality is achieved. Thus one can iteratively approximate the optimal arrangement using the same method as in \cite{experimental}, and then calculate $x_i$ values from it.
Whenever \eqref{eq: Young} is used to give an upper bound on some weighted sum of distances, the values for $x_i$ used are given in Appendix \ref{sec: xis}.
\section{Main proof of upper bound on $r$}
\label{sec:upperbound}
Let $r^*=1.559$. We will prove that $L\leq r^*S$ (with $L$ and $S$ as defined in Section \ref{sec:lemmas}), thus showing that $r\leq 1.559$.
We will prove this by strong induction on the number of points. Given $m$ vectors $a_1, a_2,\ldots,a_m$ with norms $d_1 \geq d_2 \geq d_3 \geq \ldots \geq d_m > 0$, respectively, we will try to induct by removing $a_1,\ldots,a_n$ for various values of $n$. We will try to traverse the other points, ending at $a_{n+1}$ or $a_{n+2}$. We will then add in the removed points, projected onto a sphere, and look at the average length of a path traversing them and ending at $a_1$ or $a_2$. We will then move them out in stages, seeing how this average path length changes at each stage, in order to bound the final average path length in terms of the values $d_k$. Since the average is an upper bound on the minimum, this gives us a linear inequality on the $d_k$ which is a sufficient condition for the inductive step to work. We then use linear programming to show that one of these inequalities is satisfied and thus that induction is possible. For the algorithm, we will then follow the induction to split the points up into blocks, choose the starting and ending vertex for one block at a time, using brute force to find the shortest path that goes through all the block's points.
We start by defining $a_k=0$ and $d_k=0$ for all $k>m$. Introducing these new points does not affect the distance sum or the traversing path length, as the traversing path can go to them first.
We will prove the following claim:
\begin{claim}\label{claim}
There exist two paths $P_1$ and $P_2$ ending at $a_1$ and $a_2$, respectively, such that the average of the lengths of these paths is at most $r^*S$.
\end{claim}
This clearly implies that $L\leq r^*S$.
We will proceed by strong induction on $m$. To induct, remove $a_1$ through $a_n$ (where $n\geq 3$ may vary), use the inductive hypothesis to find two paths $P_1$ and $P_2$ through the other $m-n$ points, ending at $a_{n+1}$ and $a_{n+2}$, respectively. We will then try to find four paths $Q_{11}, Q_{12}, Q_{21}, Q_{22}$ with path $Q_{ij}$ going from $a_{n+i}$ to $a_j$ and going through all points $a_1, \ldots, a_n$, so that the average length of these four paths is at most $r^*(\sum_{i=1}^n d_i)$.
We will assume that this is impossible, generate a set of conditions on the values $d_i$, and then prove that one of these conditions must be violated.
\subsection{Given $n \geq 3$}
\label{subsec:given}
In this section, we will assume $n\geq 3$ to be a given value. We will select it in Section \ref{sec: recombine}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.5]{LandSn.png}
\caption{The thick segments contribute to $L(a_n, \ldots, a_1)$; the dotted segments contribute to $S_n$.}
\label{fig:thickseg}
\end{figure}
Let $\displaystyle S_n=\sum_{i=1}^n d_i$.
Let $L(u_{n+1}, \ldots, u_1)$ be the shortest length of a path $u_{s_{n+1}}, \ldots, u_{s_1}$ where $s$ is a permutation of $1, \ldots, n+1$ such that $s_{n+1}=n+1$ and $s_1\in\{1,2\}$.
Let $\overline{L(u_{n+1}, \ldots, u_1)}$ be the average length over all such paths $u_{s_{n+1}}, \ldots, u_{s_1}$. Then
\begin{equation} \label{eq: L()}
\begin{split}
\overline{L(u_{n+1}, \ldots, u_1)}&=\frac{1}{n-1}d(u_1,u_2)+\frac{1}{2(n-1)}d(u_1,u_{n+1})+\frac{1}{2(n-1)}d(u_2,u_{n+1})\\
+&\sum_{i=3}^n \frac{3}{2(n-1)}d(u_1,u_i)+\sum_{i=3}^n \frac{3}{2(n-1)}d(u_2,u_i)\\
+&\sum_{i=3}^n \frac{1}{n-1}d(u_i,u_{n+1})+\sum_{3\leq i<j\leq n} \frac{2}{n-1}d(u_i,u_j)
\end{split}
\end{equation}
We wish to find upper bounds on $\overline{L(a_{n+1}, \ldots, a_1)}$ and $\overline{L(a_{n+2},a_n,a_{n-1}, \ldots, a_1)}$. For $1\leq~i\leq~n$, let
\begin{equation} \label{eq: defineDnik}
\begin{split}
D_{n,i,1}&=\overline{L \left( a_{n+1}, \ldots, a_{i+1},a_i, \frac{d_i}{d_{i-1}}a_{i-1}, \ldots, \frac{d_i}{d_1}a_1 \right)} \\
&- \overline{L \left( a_{n+1}, \ldots, a_{i+1}, \frac{d_{i+1}}{d_{i}}a_i, \frac{d_{i+1}}{d_{i-1}}a_{i-1}, \ldots, \frac{d_{i+1}}{d_1}a_1 \right)}.
\end{split}
\end{equation}
and let
\begin{equation} \label{eq: defineDnk}
D_{n,1}=\overline{L \left( a_{n+1},\frac{d_{n+1}}{d_n}a_n, \ldots, \frac{d_{n+1}}{d_1}a_1 \right)}.
\end{equation}
$D_{n,i,2}$ and $D_{n,2}$ are defined identically, except $a_{n+1}$ and $d_{n+1}$ are replaced with $a_{n+2}$ and $d_{n+2}$.
For $k=1$ or $k=2$,
\begin{equation} \label{eq: decompose}
\overline{L(a_{n+k},a_n,a_{n-1},\ldots,a_1)}=D_{n,k}+\sum_{i=1}^n D_{n,i,k}
\end{equation}
Intuitively, we are setting all points at distance $d_{n+k}$, then moving out $n$ points to distance $d_n$, then moving out $n-1$ points, and so on.
We will now find values $B_{n,i}$ and $B_n$ independent of the arrangement of $a_1,a_2,\ldots$ satisfying
\begin{equation}\label{eq: UgeqD}
\begin{split}
B_{n,i}(d_i-d_{i+1}) &\geq D_{n,i,1}\\
B_n d_n &\geq D_{n,1}
\end{split}
\end{equation}
The corresponding equations (substituting $a_{n+2}$ for $a_{n+1}$ and $d_{n+2}$ for $d_{n+1}$) will then hold for $D_{n,i,2}$ and $D_{n,2}$.
\subsubsection{$B_n$}
Define $g(n)$ as the maximum value of
\begin{align*}
d(u_1,u_2)+\frac{1}{2}d(u_1,u_{n+1})+\frac{1}{2}d(u_2,u_{n+1})+\sum_{j=3}^n\frac{3}{2}d(u_1,u_j)+\\
+\sum_{j=3}^n\frac{3}{2}d(u_2,u_j)+\sum_{j=3}^n d(u_j,u_{n+1})+\sum_{3\leq j<k\leq n} 2d(u_j,u_k)
\end{align*}
over unit vectors $u_1,\ldots,u_{n+1}$.
We use equation \eqref{eq: Young} to obtain upper bounds on $g(n)$, which we then use to find numerical values for $B_n$.
Substituting in $\eqref{eq: defineDnk}$ and $\eqref{eq: L()}$, we get that
\[D_{n,1}\leq d_n\frac{g(n)}{n-1}\]
and similarly for $D_{n,2}$. Thus we can set
\begin{equation} \label{eq:Bn}
B_n=\frac{g(n)}{n-1}
\end{equation}
\subsubsection{$B_{n,i}$}
For $i<j<k$,
\begin{equation} \label{eq: bothstay}
d(a_j,a_k)-d(a_j,a_k)=0.
\end{equation}
For $j\leq i< k$
\begin{equation} \label{eq: onemoves}
d \left( \frac{d_i}{d_j}a_j,a_k \right)-d \left(\frac{d_{i+1}}{d_j}a_j,a_k \right)\leq d_i-d_{i+1}.
\end{equation}
For $k<j\leq i$
\begin{equation} \label{eq: bothmove}
d \left( \frac{d_i}{d_j}a_j,\frac{d_i}{d_k}a_k \right)-d \left( \frac{d_{i+1}}{d_j}a_j,\frac{d_{i+1}}{d_k}a_k \right)\leq (d_i-d_{i+1}) d\left( \frac{a_j}{d_j},\frac{a_k}{d_k} \right).
\end{equation}
Define also $f(i)$ as the maximum value of
\begin{align*}
d(u_1,u_2)+\sum_{j=3}^i\frac{3}{2}d(u_1,u_j)+\sum_{j=3}^i\frac{3}{2}d(u_2,u_j) +\sum_{3\leq j<k\leq i}2d(u_j,u_k)
\end{align*}
over unit vectors $u_1,\ldots,u_i$.
We use equation \eqref{eq: Young} to obtain upper bounds on $f(i)$, which we then use to find numerical values for $B_{n,i}$.
For $i>2$, substituting $\eqref{eq: bothstay}$, $\eqref{eq: onemoves}$, $\eqref{eq: bothmove}$, and $\eqref{eq: L()}$ into $\eqref{eq: defineDnik}$, we get that
\begin{align*}
D_{n,i,1} &\leq (d_i-d_{i+1})\frac{f(i)}{n-1}+ (d_i-d_{i+1})\left(\frac{1+3(n-i)+(i-2)+2(n-i)(i-2)}{n-1} \right)\\
D_{n,i,1} &\leq (d_i-d_{i+1})\left(\frac{f(i)}{n-1}+2\frac{(n-i)(i-1)}{n-1}+1 \right)
\end{align*}
and similarly for $D_{n,i,2}$. So we set
\begin{equation} \label{eq:Bni}
B_{n,i}=\frac{f(i)}{n-1}+2\frac{(n-i)(i-1)}{n-1}+1
\end{equation}
For $i=2$, the same substitution gives us $B_{n,2}=3$. For $i=1$, the same substitution gives us $B_{n,1}=1.5$.
If there do not exist four paths $Q_{11}, Q_{12}, Q_{21}, Q_{22}$, then the average length of a path is too great, namely
\begin{align*}
\frac{1}{2} \left( \overline{L(a_{n+1}, \ldots, a_1)}+\overline{L(a_{n+2},a_n,a_{n-1}, \ldots, a_1)} \right) &> r^*S_n\\
\frac{d_{n+1}+d_{n+2}}{2}B_n + \left(d_n-\frac{d_{n+1}+d_{n+2}}{2}\right)B_{n,n}+\sum_{i=1}^{n-1} \left(d_i-d_{i+1}\right)B_{n,i} &> r^*\sum_{i=1}^{n} d_i
\end{align*}
\subsection{$n=3$}
If $d_4 \leq 0.541d_3$, then, by \eqref{eq: Young},
\[\overline{L \left( a_4, a_3, \frac{d_3}{d_2}a_2, \frac{d_3}{d_1}a_1 \right)}\leq 4.677d_3=3r^*d_3.\]
Then, since $B_{3,2}=3<2r^*$ and $B_{3,1}=1.5<r^*$ as in the last section,
\[\overline{L(a_4, a_3, a_2, a_1)}\leq r^*(d_1+d_2+d_3).\]
Similarly,
\[\overline{L(a_5, a_3, a_2, a_1)}\leq r^*(d_1+d_2+d_3)\]
so the induction works. Thus for $n=3$ we have the constraint
$d_4>0.541d_3$, which is stronger than the one obtained for $n=3$ in the previous section.
\section{Choosing $n$} \label{sec: recombine}
We obtained linear constraints for various values of $n\leq 10$. These, together with the constraints $d_i\geq d_{i+1}$, make a linear program (given in Appendix \ref{sec: lp}), which is unsatisfiable. Thus one of the constraints must not hold, so the induction works for some $n$.
\section{Algorithm}
We repeatedly use the inductive step to obtain a sequence of indices $0=n_0<n_1<n_2<\cdots$. At stage $j$, we remove $n_j-n_{j-1}$ points.
The intermediate ending points are then of the form $a_{n_j+k_j}$, where each $k_j$ is 1 or 2.
Since we are only using $n\leq 10$, we can find all the paths $Q_{11}, Q_{12}, Q_{21}, Q_{22}$ by brute force in linear time.
Now, for both possible values of $k_1$, we find which value of $k_0$ gives the shorter path. Then, for both possible values of $k_2$, we find which value of $k_1$ will make the total path after $a_{n_2+k_2}$ shorter. We repeat until we get to some $n_j>m$, at which point we have two paths and choose the shorter one. This whole algorithm is linear.
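The recombination step just described is a standard two-state dynamic program. The sketch below is schematic and ours: `costs[j][a][b]` is a hypothetical table of block-path lengths, entering block $j$ with endpoint choice $a$ and leaving with choice $b$ (both in $\{0,1\}$, standing for $k=1$ or $k=2$); the actual lengths come from the brute-forced paths $Q_{ij}$.

```python
def shortest_block_chain(costs):
    """Generic two-state DP over the block decomposition: pick, for each
    block, the endpoint choices that minimize the total path length.
    Runs in O(number of blocks), matching the linear-time claim."""
    best = [0.0, 0.0]  # best total length ending with choice 0 or 1
    for table in costs:
        best = [min(best[a] + table[a][b] for a in (0, 1)) for b in (0, 1)]
    return min(best)
```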
\section{Lower bound on degree-3 tree ratios}
\label{sec:lowerbound}
Denote by $\sigma$ the sum of edge weights of the minimal spanning tree and by $\sigma_3$ the sum of edge weights of a minimal degree 3 tree. Denote by $(x_1,x_2,\ldots,x_n)$ the coordinates of a point in $n$ dimensions.
In six dimensions, let $O$ be the origin and let $v_1,v_2,\ldots,v_7$ be the vertices of a regular simplex with center $O$ and circumradius $\sqrt{6}$. Let the coordinates of $v_i$ be $(v_{i,1},v_{i,2},v_{i,3},v_{i,4},v_{i,5},v_{i,6})$. Note that $d(v_i,v_j)=\sqrt{2\cdot 7/6}\cdot\sqrt{6}=\sqrt{14}$.
Now, given natural $N$ and $0<\alpha < 1$, take the following tree in $7N$ dimensions:
\begin{enumerate}
\item The origin, $O$, is the root.
\item Its $N$ children are $p_1,p_2,\ldots,p_N$. $p_i$ has coordinates 0 except $x_{7i}=1-\alpha$.
\item Each $p_i$ has seven children, $q_{i,1},q_{i,2},\ldots,q_{i,7}$. The coordinates of $q_{i,j}$ are all 0 except $x_{7i}=1$ and, for $k$ from 1 to 6, $x_{7i-k}=v_{j,k}$.
\end{enumerate}
Then $q_{i,1},q_{i,2},\ldots,q_{i,7}$ form a simplex with center distance $\alpha$ from $p_i$ and with each vertex distance $\sqrt{6}$ from the center.
It is easy to check that
\begin{align*}
d(p_i,p_h)&=\sqrt{2}(1-\alpha) \text{ for } i\neq h\\
d(q_{i,j},q_{i,k})&=\sqrt{14}=d(q_{i,j},q_{h,k}) \text{ for } j\neq k,h\neq i\\
d(p_i,q_{i,j})&=\sqrt{6+\alpha^2}\\
\sigma&=N(1-\alpha+7\sqrt{6+\alpha^2}).
\end{align*}
Then we can pick
\[\alpha=-1-\sqrt{7}+\sqrt{4+4\sqrt{7}},\]
which gives us $d(q_{i,j},q_{h,k})+d(p_i,p_h)=2d(p_i,q_{i,j})$.
Then we can define a function $c$ on the vertices so that $c(O)=0$, $c(p_i)=d(p_i,p_h)/2$, and $c(q_{i,j})=d(q_{i,j},q_{h,k})/2$.
In that case, the length of any edge $AB$ is at least $c(A)+c(B)$, so $c$ can be thought of as a half-edge length.
Then, since there are $8N+1$ vertices, there are $8N$ edges, so there is a total of $16N$ edge endpoints. At most 3 of them contribute 0 to $\sigma_3$, at most $3N$ contribute $(1-\alpha)/\sqrt{2}$, and the remainder contribute $\sqrt{14}/2$. Thus
\[\sigma_3\geq 3N\left(\frac{1}{2}\sqrt{2}(1-\alpha)\right)+(13N-3)\left(\frac{1}{2}\sqrt{14}\right)\]
\[\frac{\sigma_3}{\sigma}=\frac{3N\left(\frac{1}{2}\sqrt{2}(1-\alpha)\right)+(13N-3)\left(\frac{1}{2}\sqrt{14}\right)}{N\left(1-\alpha+7\sqrt{6+\alpha^2}\right)}\]
\[\lim_{N \to \infty}\frac{\sigma_3}{\sigma}=\frac{3\left(\frac{1}{2}\sqrt{2}(1-\alpha)\right)+13\left(\frac{1}{2}\sqrt{14}\right)}{1-\alpha+7\sqrt{6+\alpha^2}}\approx 1.4473\]
Thus $\tau_3^\infty \geq 1.447$.
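The choice of $\alpha$ and the limiting ratio can be checked numerically; the snippet below (ours, purely for verification of this section's arithmetic) confirms that $\alpha$ equalizes $d(q_{i,j},q_{h,k})+d(p_i,p_h)$ with $2\,d(p_i,q_{i,j})$ and that the limit is about 1.4473.

```python
import math

sqrt7 = math.sqrt(7)
alpha = -1 - sqrt7 + math.sqrt(4 + 4 * sqrt7)

# alpha equalizes d(q_{ij}, q_{hk}) + d(p_i, p_h) and 2 d(p_i, q_{ij}):
lhs = math.sqrt(14) + math.sqrt(2) * (1 - alpha)
rhs = 2 * math.sqrt(6 + alpha**2)

# Limit of sigma_3 / sigma as N -> infinity:
num = 3 * (0.5 * math.sqrt(2) * (1 - alpha)) + 13 * (0.5 * math.sqrt(14))
den = 1 - alpha + 7 * math.sqrt(6 + alpha**2)
limit = num / den
```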
\section{Acknowledgements}
I thank Samir Khuller for suggesting that I work on this problem and Timothy Chan for improved notation and organization.
| {
"timestamp": "2014-01-07T02:06:28",
"yymm": "1305",
"arxiv_id": "1305.2661",
"language": "en",
"url": "https://arxiv.org/abs/1305.2661",
"abstract": "Given points in Euclidean space of arbitrary dimension, we prove that there exists a spanning tree having no vertices of degree greater than 3 with weight at most 1.559 times the weight of the minimum spanning tree. We also prove that there is a set of points such that no spanning tree of maximal degree 3 exists that has this ratio be less than 1.447. Our central result is based on the proof of the following claim:Given $n$ points in Euclidean space with one special point $V$, there exists a Hamiltonian path with an endpoint at $V$ that is at most 1.559 times longer than the sum of the distances of the points to $V$.These proofs also lead to a way to find the tree in linear time given the minimal spanning tree.",
"subjects": "Computational Geometry (cs.CG); Combinatorics (math.CO)",
"title": "On Improved Bounds on Bounded Degree Spanning Trees for Points in Arbitrary Dimension",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9854964173268185,
"lm_q2_score": 0.8152324938410784,
"lm_q1q2_score": 0.8034087019687904
} |
https://arxiv.org/abs/1507.05952 | Optimal Testing for Properties of Distributions | Given samples from an unknown distribution $p$, is it possible to distinguish whether $p$ belongs to some class of distributions $\mathcal{C}$ versus $p$ being far from every distribution in $\mathcal{C}$? This fundamental question has received tremendous attention in statistics, focusing primarily on asymptotic analysis, and more recently in information theory and theoretical computer science, where the emphasis has been on small sample size and computational complexity. Nevertheless, even for basic properties of distributions such as monotonicity, log-concavity, unimodality, independence, and monotone-hazard rate, the optimal sample complexity is unknown.We provide a general approach via which we obtain sample-optimal and computationally efficient testers for all these distribution families. At the core of our approach is an algorithm which solves the following problem: Given samples from an unknown distribution $p$, and a known distribution $q$, are $p$ and $q$ close in $\chi^2$-distance, or far in total variation distance?The optimality of our testers is established by providing matching lower bounds with respect to both $n$ and $\varepsilon$. Finally, a necessary building block for our testers and an important byproduct of our work are the first known computationally efficient proper learners for discrete log-concave and monotone hazard rate distributions. |
\section{Experimental Evaluation}
In this section, we evaluate the efficacy of our proposed monotonicity tester for $d=1$, and compare it to the estimator proposed in~\cite{BatuKR04}, which we will refer to as the BKR tester henceforth.
We note that even for $d=1$, our algorithm is seen to outperform the BKR tester in almost all cases.
Both our algorithm and BKR require a threshold for the test statistic.
It follows from our results that our estimator's threshold should be at least the standard deviation of the statistic when the distribution is monotone -- this is around $\sqrt{2n}$.
Also, our threshold should be smaller than the expectation of the statistic when the distribution is not in the class.
In particular, we use the value $2m\varepsilon^2+ \sqrt{2n}$ as the threshold for our test statistic.
The statistic of BKR imitates an $\ell_2$ test by introducing a discrete statistic that counts the number of collisions within nearby elements.
The argument is that monotone distributions will be locally uniform, and therefore the number of such local collisions should be small.
They also compare their algorithm against a threshold proportional to $\log^2 n/\varepsilon$; however, the constant of proportionality is not specified in their paper. We considered a few Zipf distributions, which are monotone, and perturbed the element probabilities to generate distributions that are $\varepsilon$-far in total variation distance from any monotone distribution. We then optimized their threshold for the best possible performance over a range of $n$ and $\varepsilon$; in particular, we arrived at a constant of 0.6 for their statistic. Note that one might instead use uniform distributions, or yet another class of distributions, to choose the threshold.
As is expected, when we run experiments on the class that was used to determine the threshold, the performance of BKR is comparable with our approach for moderate values of $\varepsilon$.
For very small $\varepsilon$, once again their performance degrades, since the number of samples required for concentration of their statistic is proportional to $\varepsilon^{-6}$.
However, for experiments on other monotone classes of distributions (i.e., classes which were not used to train the threshold), our algorithm performed significantly better in almost all cases.
In particular, we also consider a test between the uniform distribution and perturbations thereof (which are guaranteed to be $\varepsilon$-far from monotone).
We give two illustrative plots in the appendix. The number of samples are plotted in logarithmic scale. Both these experiments are run for $n=50,000$. In the uniform case, we take the total variation bound to be $0.05$ and for the Zipf distribution we take it to be $0.07$. As we see for the uniform distribution, our algorithm has accuracy 0.9 with 30000 samples, compared to their accuracy of 0.55. For the Zipf plot, their algorithm is comparable, since we chose the threshold to work the best for this class.
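The monotone instances used here are easy to reproduce. The sketch below generates a Zipf distribution and a pairwise perturbation of it; the exact perturbation scheme used in the experiments is not specified in the text, so the one shown is our assumption.

```python
import numpy as np

def zipf_pmf(n, a=1.0):
    # Zipf distribution over [n]: p_i proportional to 1/i^a (monotone decreasing)
    w = 1.0 / np.arange(1, n + 1) ** a
    return w / w.sum()

def pairwise_perturb(p, delta, seed=0):
    # shift delta mass within each adjacent pair, in a random direction;
    # this preserves the total mass but (for suitable delta < min p_i)
    # breaks monotonicity
    rng = np.random.default_rng(seed)
    q = p.copy()
    for i in range(0, len(p) - 1, 2):
        s = rng.choice([-1, 1])
        q[i] -= s * delta
        q[i + 1] += s * delta
    return q
```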
\section{Learning product distributions in $\chi^2$ distance}
\label{sec:independence-appendix}
In this section we prove Lemma~\ref{lem:learning-product}. The proof
is analogous to the proof for learning monotone distributions, and
hinges on the following result of~\cite{KamathOPS15}.
Given $m$ samples from a distribution $p$ over $n$ elements, the add-1
(Laplace) estimator outputs a distribution $q$ satisfying:
\[
\expectation{\chi^2(p,q) } \le \frac{n}{m+1}.
\]
Now, suppose $p$ is a product distribution over
${\cal X}=[n_1]\times\cdots\times[n_d]$. We simply perform the add-1
estimation over each coordinate independently, giving a distribution
$q^1\times\cdots \times q^d$. Since $p$ is a product distribution, the
estimates in different coordinates are independent. Therefore, the
previous result, combined with Lemma~\ref{lem:prod-chi}, implies
\begin{align}
\expectation{\chi^2(p,q) } & =
\prod_{l=1}^{d}\left(1+ \expectation{\chi^2(p^l,q^l) }\right)-1\nonumber\\
& \le \prod_{l=1}^{d}\left(1+ \frac{n_l}{m+1}\right)-1\nonumber\\
&\le \exp\left(\frac{\sum_l {n_l}}{m+1}\right)-1,\label{eqn:haha}
\end{align}
where~\eqref{eqn:haha} follows from $e^x\ge1+x$.
Using $e^x\le 1+2x$ for $0\le x\le 1$, we have
\begin{align}
\expectation{\chi^2(p,q) } & \leq 2\frac{\sum_l {n_l}}{m+1},
\end{align}
when $m\geq\sum_l n_l$. An application of Markov's inequality then shows that $m = \Omega\left(\left(\sum_l n_l\right)/\varepsilon^2\right)$ samples suffice, proving Lemma~\ref{lem:learning-product}.
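The coordinate-wise estimator used in this proof can be sketched as follows; the function names are ours, and this is a minimal illustration rather than the paper's implementation.

```python
import numpy as np

def add1_estimator(samples, n):
    # add-1 (Laplace) estimator over [n]: q_i = (N_i + 1) / (m + n)
    counts = np.bincount(np.asarray(samples), minlength=n)
    return (counts + 1) / (len(samples) + n)

def product_add1_estimator(samples, dims):
    # samples: array of shape (m, d); estimate each marginal independently,
    # and take the product of the coordinate-wise estimates as q
    samples = np.asarray(samples)
    return [add1_estimator(samples[:, l], n_l) for l, n_l in enumerate(dims)]
```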
\section{Testing Independence of Random Variables}
\label{sec:independence}
Let
${\cal X}=[n_1]\times\ldots\times[n_d]$, and let $\Pi_d$ be the class
of all product distributions over ${\cal X}$.
We first bound the $\chi^2$-distance between product distributions in
terms of the individual coordinates.
\begin{lemma}
\label{lem:prod-chi}
Let $p = p^1\times p^2\times\cdots\times p^d$ and
$q = q^1\times q^2\times\cdots\times q^d$ be two distributions in $\Pi_d$.
Then
\[
\chisqpq{p}{q} = \prod_{\ell=1}^d(1+\chisqpq{p^\ell}{q^\ell})-1.
\]
\end{lemma}
\begin{proof}
By the definition of $\chi^2$-distance
\begin{align}
\chisqpq{p}{q} &= \sum_{{\bf i}\in{\cal X}} \frac{{\dP}_{{\bf i}}^2}{{\dQ}_{{\bf i}}}-1\\
& =
\prod_{\ell=1}^d\left[\sum_{i \in[n_\ell]}\frac{\left(p^{\ell}_{i}\right)^2}{q^{\ell}_{i}}\right]-1\\
& =
\prod_{\ell=1}^d\left(1+\chisqpq{p^\ell}{q^\ell}\right)-1.
\end{align}
\end{proof}
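The identity of Lemma~\ref{lem:prod-chi} is straightforward to verify numerically; the sketch below (function names are ours) checks it on a small two-coordinate example.

```python
import numpy as np

def chi2_dist(p, q):
    # chi^2(p, q) = sum_i p_i^2 / q_i - 1
    return float(np.sum(np.asarray(p) ** 2 / np.asarray(q)) - 1.0)

def product_chi2_identity(ps, qs):
    # right-hand side of the lemma: prod_l (1 + chi^2(p^l, q^l)) - 1
    rhs = 1.0
    for p_l, q_l in zip(ps, qs):
        rhs *= 1.0 + chi2_dist(p_l, q_l)
    return rhs - 1.0
```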
Along the lines of learning monotone distributions in $\chi^2$
distance we obtain the following result, proved in Section~\ref{sec:independence-appendix}.
\begin{lemma}
\label{lem:learning-product}
There is an algorithm that takes
\[
O\left(\sum_{\ell=1}^d \frac{n_\ell}{\varepsilon^2}\right)
\]
samples from a distribution $p$ in $\Pi_d$ and outputs a distribution
$q\in\Pi_d$ such that with probability at least $5/6$,
\[
\chisqpq{p}{q}\le O(\varepsilon^2).
\]
\end{lemma}
This fits precisely in our framework of robust $\chi^2$-$\ell_1$
testing. In particular, applying Theorem~\ref{thm:chisq-test}, we obtain the following result.
\begin{theorem}
\label{thm:independence}
For any $d\ge1$, there exists an algorithm for testing independence of
random variables over $[n_1]\times\ldots[n_d]$ with sample and time complexity
\[
O\left(\frac{(\prod_{\ell = 1}^dn_\ell)^{1/2} + \sum_{\ell = 1}^d n_\ell}{\varepsilon^2}\right).
\]
\end{theorem}
The following corollaries are immediate.
\begin{corollary}
Suppose $\prod_{\ell = 1}^d n_\ell^{1/2} \geq \sum_{\ell =1}^d n_\ell$.
Then there exists an algorithm for testing independence over $[n_1] \times \dots \times [n_d]$ with sample complexity $\Theta((\prod_{\ell = 1}^d n_\ell)^{1/2}/\varepsilon^2)$.
\end{corollary}
In particular,
\begin{corollary}
There exists an algorithm for testing if two distributions over $[n]$ are independent with sample complexity $\Theta(n/\varepsilon^2)$.
\end{corollary}
\section{Details of the Lower Bounds}
\label{sec:lb-appendix}
In this section, for the class of distributions $\mathcal{Q}$ described in the discussion of lower bounds in Section~\ref{sec:lower-bounds} and a class of interest $\mathcal{C}$, we show that $d_{\mathrm {TV}}(\mathcal{C},\mathcal{Q}) \geq \varepsilon$, thus implying a lower bound of $\Omega(\sqrt{n}/\varepsilon^2)$ for testing $\mathcal{C}$.
\subsection{Monotone distributions}
We first consider $d=1$ and prove that for appropriately chosen $c$, any monotone distribution over $[n]$ is $\varepsilon$-far from all distributions in $\mathcal{Q}$.
Consider any $q \in \mathcal{Q}$.
For this distribution, we say that $i\in[n]$ is a \emph{raise-point} if $\dQ_i<\dQ_{i+1}$. Let $R_{q}$ be the set of raise-points of $q$. For $q\in\mathcal{Q}$,~\eqref{eqn:fcl} implies that at least one of every four consecutive integers in $[n]$ is a raise-point, and therefore $|R_q|\ge n/4$. Moreover, note that if $i$ is a raise-point, then $i+1$ is not. For any monotone (decreasing) distribution $p$, $\dP_i\ge\dP_{i+1}$. For any raise-point $i\in R_q$, by the triangle inequality,
\begin{align}
\label{eqn:triangle-mon}
|\dP_i-\dQ_i|+|\dP_{i+1}-\dQ_{i+1}|\ge |\dP_i-\dP_{i+1}+\dQ_{i+1}-\dQ_i|\ge \dQ_{i+1}-\dQ_i = \frac{2c\varepsilon}{n}.
\end{align}
Summing over the set $R_q$, we obtain $d_{\mathrm {TV}}(p,q)\ge \frac12|R_q|\cdot \frac{2c\varepsilon}{n}\ge {c\varepsilon}/{4}$. Therefore, if $c\ge4$, then $d_{\mathrm {TV}}(\mathcal{M}_{n},q)\ge\varepsilon$. This proves the lower bound for $d=1$.
This argument can be extended to $[n]^d$. Consider the following class of distributions on $[n]^d$. For each point ${\bf i}=(i_1, \ldots, i_d)\in[n]^d$, where $i_1$ is even, generate a random $z\in\{-1,1\}$, and assign
to ${\bf i}$ a probability of $(1+z c\varepsilon)/n^{d}$. Let ${\bf e}_1=(1,0,\ldots,0)$. As in the case $d=1$, assign a probability of $(1-z c\varepsilon)/n^{d}$ to the point ${\bf i}+{\bf e}_1 = (i_1+1,i_2, \ldots, i_d)$. This class consists of $2^{n^{d}/2}$ distributions, and Paninski's argument extends to give a lower bound of $\Omega(n^{d/2}/\varepsilon^2)$ samples to distinguish this class from the uniform distribution over $[n]^d$. It remains to show that all these distributions are $\varepsilon$-far from $\mathcal{M}_n^d$. Call a point ${\bf i}$ a raise-point if $p_{\bf i}<p_{{\bf i}+{\bf e}_1}$. For any ${\bf i}$, one of the points ${\bf i}$, ${\bf i}+{\bf e}_1$, ${\bf i}+2{\bf e}_1$, ${\bf i}+3{\bf e}_1$ is a raise-point, and the number of raise-points is at least $n^{d}/4$. Invoking the triangle inequality (as in~\eqref{eqn:triangle-mon}) over the raise-points in the first dimension shows that any monotone distribution over $[n]^d$ is at distance at least $\frac{c\varepsilon}4$ from any distribution in this class. Choosing $c=4$ yields a bound of $\varepsilon$.
\subsection{Testing Product Distributions}
Our construction for testing independence is similar to that of the previous section. We sketch the construction of a class of distributions on ${\cal X}=[n_1]\times\cdots\times[n_d]$; note that $|{\cal X}| = n_1\cdot n_2\cdots n_d$. To each element of ${\cal X}$ assign a value $(1\pm c\varepsilon)$, and for each such assignment normalize the values so that they add to 1, giving rise to a distribution. This yields a class of $2^{|{\cal X}|}$ distributions. The key step is to show that a \emph{large} fraction of these distributions are far from being a product distribution; this follows since a product distribution has far fewer degrees of freedom than the number of possible assignments. The second step is to apply Paninski's argument, now over this larger set of distributions, to show that distinguishing the constructed class from the uniform distribution over ${\cal X}$ (which is a product distribution) requires $\Omega(\sqrt{|{\cal X}|}/\varepsilon^2)$ samples.
\subsection{Log-concave and Unimodal distributions}
We will show that any log-concave or unimodal distribution is $\varepsilon$-far from all distributions in $\mathcal{Q}$.
Since $\mathcal{LCD}_n \subset \mathcal{U}_n$, it will suffice to show this for every unimodal distribution.
Consider any unimodal distribution $p$, with mode $\ell$. Then, $p$ is monotone non-decreasing over the interval $[\ell]$ and non-increasing over $\{\ell+1, \ldots, n\}$. By the argument for monotone distributions, the total variation distance between $p$ and any distribution $q$ over elements greater than $\ell$ is at least $\frac{n-\ell-1}{n}\frac{c\varepsilon}{4}$, and over elements less than $\ell$ is at least $\frac{\ell-1}{n}\frac{c\varepsilon}{4}$. Summing these two gives the desired bound.
\subsection{Monotone Hazard distributions}
We will show that any monotone hazard rate distribution is $\varepsilon$-far from all distributions in $\mathcal{Q}$.
Let $p$ be any monotone-hazard distribution. Any distribution $q\in\mathcal{Q}$ has mass at least $1/2$ over the interval $I=[n/4,3n/4]$. Therefore, by Lemma~\ref{lem:mhr-str}, for any $i\in I$, $\dP_{i+1}\left(1+\frac{\dP_i}{1/4}\right)\ge \dP_i$.
As noted before, at least $n/8$ of the raise-points are in $I$.
For any $i\in I\cap R_q$, we have $\dQ_i=(1+c\varepsilon)/n$ and $\dQ_{i+1}=(1-c\varepsilon)/n$. Define
\begin{align}
d_i = |\dP_i-\dQ_i|+|\dP_{i+1}-\dQ_{i+1}|.
\end{align}
If $\dP_i\ge(1+2c\varepsilon)/n$ or $\dP_i\le1/n$, then the first term, and therefore $d_i$, is at least $c\varepsilon/n$.
If $\dP_i\in(1/n, (1+2c\varepsilon)/n)$, then for $n>5/(c\varepsilon)$
\[
\dP_{i+1}\ge \frac{1}{n}\cdot \frac{1}{1+\frac4{n}}\ge \frac{1-c\varepsilon/2}{n}.
\]
Therefore the second term of $d_i$ is at least $c\varepsilon/2n$. Since there are at least $n/8$ raise-points in $I$,
\begin{align}
d_{\mathrm {TV}}(p,q)\ge \frac12\frac{n}{8}\cdot \frac{c\varepsilon}{2n}\ge \frac{c\varepsilon}{16}.
\end{align}
Thus any MHR distribution is $\varepsilon$-far from $\mathcal{Q}$ for $c\ge16$.
\section{Details on Testing Log-Concavity}
\label{sec:LCD}
It will suffice to prove Lemma~\ref{lem:lcd-learn}.
\begin{prevproof}{Lemma}{lem:lcd-learn}
We first draw samples from $p$ and obtain an $O(1/\varepsilon^{3/2})$-piecewise constant
distribution $f$ by appropriately flattening the empirical
distribution. The proof is now in two parts.
In the first part, we show that if $p \in \mathcal{LCD}_n$ then
$f$ will be close to $p$ in $\chi^2$ distance over its effective
support.
The second part involves proper learning of $p$. We
will use a linear program on $f$ to find a distribution $q\in\mathcal{LCD}_n$.
This distribution is such that if $p\in\mathcal{LCD}_n$, then $\chi^2(p,q)$ is
small, and otherwise the algorithm will either output some $q \in \mathcal{LCD}_n$ (with
no other relevant guarantees) or \textsc{Reject}\xspace.
We first construct $f$. Let $\hat p$ be the empirical distribution
obtained by sampling $O(1/\varepsilon^5)$ samples from $p$.
By Lemma~\ref{lem:dkw}, with probability at least $5/6$, $d_{\mathrm K}(p,\hat p) \leq \varepsilon^{5/2}/10$.
In particular, note that $|p_i - \hat p_i| \leq \varepsilon^{5/2}/10$.
Condition on this event in the remainder of the proof.
Let $a$ be the minimum $i$ such that $p_i \geq \varepsilon^{3/2}/5$, and let $b$ be
the maximum $i$ satisfying the same condition.
Let $M = \{a, \dots, b\}$ or $\emptyset$ if $a$ and $b$ are undefined.
By the guarantee provided by the DKW inequality, $p_i \geq \varepsilon^{3/2}/10$ for all $i \in M$.
Furthermore, $\hat p_i \in p_i \pm \varepsilon^{5/2}/10 \subseteq (1 \pm \varepsilon) \cdot p_i$.
For each $i \in M$, let $f_i = \hat p_i$.
We note that $|M| = O(1/\varepsilon)$, so this contributes $O(1/\varepsilon)$ constant pieces to $f$.
We now divide the rest of the domain into $t$ intervals, all but constantly many of measure $\Theta(\varepsilon^{3/2})$ (under $p$).
This is done via the following iterative procedure.
As a base case, set $r_0 = 0$.
Define $I_j$ as $[l_j, r_j]$, where $l_j = r_{j-1} + 1$ and $r_j$ is the largest element of $[n]$ such that $\hat p(I_j) \leq 9\varepsilon^{3/2}/10$.
The exception is if $I_j$ would intersect $M$ -- in this case, we ``skip'' $M$: set $r_j = a-1$ and $l_{j+1} = b+1$.
If such a $j$ exists, denote it by $j^*$.
We note that $p(I_j) \leq \hat p(I_j) + \varepsilon^{5/2}/10 \leq \varepsilon^{3/2}$.
Furthermore, for all $j$ except $j^*$ and $t$, $r_j + 1 \not \in M$,
so $p(I_j) \geq 9\varepsilon^{3/2}/10 - \varepsilon^{3/2}/5 - \varepsilon^{5/2}/10 \geq 3\varepsilon^{3/2}/5$.
Observe that this lower bound implies that $t \leq \frac{2}{\varepsilon^{3/2}}$ for $\varepsilon$ sufficiently small.
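The iterative interval construction can be sketched as follows; \texttt{partition\_by\_mass} is a hypothetical name, and the sketch omits the special handling that skips the heavy set $M$.

```python
def partition_by_mass(p_hat, tau):
    # greedily form consecutive intervals over [0, n), closing an interval
    # as soon as adding the next element would push its empirical mass
    # above tau (the "skip M" step from the text is omitted here)
    intervals, start, mass = [], 0, 0.0
    for i, pi in enumerate(p_hat):
        if mass + pi > tau and i > start:
            intervals.append((start, i - 1))
            start, mass = i, 0.0
        mass += pi
    intervals.append((start, len(p_hat) - 1))
    return intervals
```

On a uniform empirical distribution, this produces intervals of equal mass just below the budget $\tau$.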
\paragraph{Part 1.}
For this part of the algorithm, we only care about the guarantees when
$p \in \mathcal{LCD}_n$, so we assume this is the case.
For the domain $[n] \setminus M$, we let $f$ be the flattening of
$\hat p$ over the intervals $I_1, \dots I_t$.
To analyze $f$, we need a structural property of log-concave
distributions due to Chan, Diakonikolas, Servedio, and Sun
\cite{ChanDSS13a}. This essentially states that a log-concave
distribution cannot have a sudden increase in probability.
\begin{lemma}[Lemma 4.1 in \cite{ChanDSS13a}]
\label{lem:lcd-flat}
Let $p$ be a distribution over $[n]$ that is non-decreasing and
log-concave on $[1,x] \subseteq [n]$.
Let $I = [x,y]$ be an interval of mass $P(I) = \tau$, and suppose that
the interval $J = [1,x-1]$ has mass $p(J) = \sigma > 0$. Then
$$p(y)/p(x) \leq 1 + \tau/\sigma.$$
\end{lemma}
Recall that any log-concave distribution is unimodal, and suppose the
mode of $p$ is at $i_0$.
We will first focus on the intervals $I_1, \dots, I_{t_L}$ which lie entirely to the left of $i_0$ and $M$.
We will refer to $I_j$ as $L_j$ for all $j \leq t_L$.
Note that $p$ is non-decreasing over these intervals.
The next steps to the analysis are as follows.
First we show that the flattening of $p$ over $L_j$ is a multiplicative $(1 + O(1/j))$ estimate for each $p_i \in L_j$.
Then, we show that flattening the empirical distribution $\hat p$ over $L_j$ is a multiplicative $(1 + O(1/j))$ estimate of $p(i)$ for each $i \in L_j$.
Finally, we exclude a small number of intervals (those corresponding to $O(\varepsilon)$ mass at the left and right side of the domain, as well as $j^*$) in order to get the $\chi^2$ approximation we desire on an effective support.
\begin{itemize}
\item First, recall that $p(L_j) \leq \varepsilon^{3/2}$ for all $j$.
Also, letting $J_j = [1, r_{j-1}]$, we have that $p(J_j) \geq (j-1)\cdot 3\varepsilon^{3/2}/5$.
Thus by Lemma \ref{lem:lcd-flat}, $p(r_j) \leq p(l_j) (1 + 2/(j-1))$.
Since the distribution is non-decreasing in $L_j$, the flattening $\bar p$ of $p$ is such that $\bar p(i) \in p(i) (1 \pm \frac{2}{j-1})$ for all $i \in L_j$.
\item We have that $p(L_j) \geq 3\varepsilon^{3/2}/5$, and $\hat p(L_j) \in p(L_j) \pm \varepsilon^{5/2}/10$, so $\hat p(L_j) \in p(L_j) \cdot (1 \pm \frac{\varepsilon}{6})$, and hence
$\hat p(i) \in \bar p(i) \cdot (1 \pm \frac{\varepsilon}{6})$ for all $i \in L_j$.
Combining with the previous point, we have that
$$\hat p(i) \in p(i) \cdot \left(1 \pm \left(\frac{2\varepsilon}{3(j-1)} + \frac{\varepsilon}{6} + \frac{2}{j-1}\right)\right) \in
p(i) \cdot \left(1 \pm \frac{11}{3(j-1)}\right).$$
\end{itemize}
A symmetric statement holds for the intervals that lie entirely to
the right of $i_0$ and $M$.
We will refer to $I_j$ as $R_{t-j}$ for all $j > t_L$.
To summarize, we have the following guarantees for the distribution $f$:
\begin{itemize}
\item For all $i \in M$, $f(i) \in p(i) \cdot (1 \pm \varepsilon)$;
\item For all $i \in L_j$ (except $L_1$ and $L_{j^*}$), $f(i) \in p(i) \cdot \left(1 \pm \frac{22}{3j}\right)$;
\item For all $i \in R_j$ (except $R_1$), $f(i) \in p(i) \cdot \left(1 \pm \frac{22}{3j}\right)$;
\end{itemize}
Note that, in particular, we have multiplicative estimates for all intervals, except those in $L_1$, $L_{j^*}$, $R_1$ and the interval containing $i_0$.
Let $S$ be the set of all intervals except $L_{j^*}$, the interval containing $i_0$, and $L_j$ and $R_j$ for $j \leq 1/\sqrt{\varepsilon}$.
Then, since each interval has probability mass at most $O(\varepsilon^{3/2})$ and we are excluding $O(1/\sqrt{\varepsilon})$ intervals,
$p(S)>1-O(\varepsilon)$.
We now compute the $\chi^2$-distance induced by this approximation for elements in $S$.
For an element $i \in L_j \cap S$, we have
$$\frac{(f(i) - p(i))^2}{p(i)} \leq \frac{60p(i)}{j^2}.$$
Summing over all $i \in L_j \cap S$ gives
$$\frac{60\varepsilon^{3/2}}{j^2}$$
since the probability mass of $L_j$ is at most $\varepsilon^{3/2}$.
Summing this over all $L_j$ for $j \geq 1/\sqrt{\varepsilon}$ and $j \neq j^*$ gives
\begin{align*}
60\varepsilon^{3/2} \sum_{j = 1/\sqrt{\varepsilon}}^{2/\varepsilon^{3/2}} \frac{1}{j^2} &\leq 60\varepsilon^{3/2} \int_{1/\sqrt{\varepsilon}}^{\infty} \frac{1}{x^2} dx \\
&= 60\varepsilon^{3/2}(\sqrt{\varepsilon}) \\
&= O(\varepsilon^2)
\end{align*}
as desired.
\paragraph{Part 2.}
To obtain a distribution $q\in\mathcal{LCD}_n$, we write a linear program.
We will work in the log domain, so our variables will be $Q_i$,
representing $\log q(i)$ for $i \in [n]$.
We will use $F_i = \log f(i)$ as parameters in our LP.
There is no objective function; we simply search for a feasible point.
Our constraints will be
$$Q_{i-1} + Q_{i+1} \leq 2 Q_{i} \ \ \forall i \in [n-1]$$
$$Q_{i} \leq 0 \ \ \forall i \in [n]$$
$$ \log (1 - \varepsilon) \leq Q_i - F_i \leq \log (1 + \varepsilon) \ \text{for}\ i \in M$$
$$ \log \left(1 - \frac{22}{3j}\right) \leq Q_i - F_i \leq \log \left(1 + \frac{22}{3j}\right) \ \text{for}\ i \in L_j, j \geq 1/\sqrt{\varepsilon} \ \text{and}\ j \neq j^*$$
$$ \log \left(1 - \frac{22}{3j}\right) \leq Q_i - F_i \leq \log \left(1 + \frac{22}{3j}\right) \ \text{for}\ i \in R_j, j \geq 1/\sqrt{\varepsilon}$$
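The feasibility program can be set up directly with an off-the-shelf LP solver. The sketch below assumes SciPy's \texttt{linprog}, encoding log-concavity as $Q_{i-1} - 2Q_i + Q_{i+1} \le 0$ and the closeness constraints as per-variable bounds; the function name and the exact bound handling are ours.

```python
import numpy as np
from scipy.optimize import linprog

def fit_log_concave(F, lo, hi):
    # Feasibility LP in the log domain: variables Q_i stand for log q(i).
    #   Q_{i-1} - 2 Q_i + Q_{i+1} <= 0        (log-concavity)
    #   F_i + lo_i <= Q_i <= min(F_i + hi_i, 0) (closeness to f, and Q_i <= 0)
    n = len(F)
    A = np.zeros((n - 2, n))
    for i in range(1, n - 1):
        A[i - 1, [i - 1, i, i + 1]] = [1.0, -2.0, 1.0]
    bounds = [(F[i] + lo[i], min(F[i] + hi[i], 0.0)) for i in range(n)]
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=np.zeros(n - 2),
                  bounds=bounds, method="highs")
    return res.x if res.status == 0 else None  # None plays the role of Reject
```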
If the linear program finds a feasible point, then we obtain a $q \in \mathcal{LCD}_n$.
Furthermore, if $p \in \mathcal{LCD}_n$, then (after a rescaling of $\varepsilon$) summing the error over all intervals in the LP implies that $\chi^2(p_S,q_S) \leq \frac{\varepsilon^2}{500}$ for a known set $S$ with $p(S) \geq 1 - O(\varepsilon)$, as desired.
If $M \neq \emptyset$, this algorithm works as described.
The issue is that if $M = \emptyset$, we do not know where the $L$ intervals end and the $R$ intervals begin.
In this case, we run $O(1/\varepsilon)$ LPs, using each interval as the one containing $i_0$, and thus acting as the barrier between the $L$ intervals (to its left) and the $R$ intervals (to its right).
If $p$ truly was log-concave, then one of these guesses will be correct and the corresponding LP will find a feasible point.
\end{prevproof}
\section{Lower Bounds}
\label{sec:lower-bounds}
We now prove sharp lower bounds for the classes of distributions we
consider. We show that the example studied by
Paninski~\cite{Paninski08} to prove lower bounds on testing uniformity
can be used to prove lower bounds for the classes we consider.
Paninski considers a class $\mathcal{Q}$ consisting of $2^{n/2}$ distributions
defined as follows. Without loss of generality assume that $n$ is even. For each of
the $2^{n/2}$ vectors $z_0z_1\ldots
z_{n/2-1}\in\{-1,1\}^{n/2}$, define a distribution $q\in\mathcal{Q}$ over
$[n]$ as follows.
\begin{align}
\label{eqn:fcl}
q_{i} =\begin{cases}
\frac{(1+z_\ell c\varepsilon)}{n} & \text{ for } i = 2\ell+1\\
\frac{(1-z_{\ell}c\varepsilon)}{n}& \text{ for } i=2\ell.\\
\end{cases}
\end{align}
Each distribution in $\mathcal{Q}$ has a total variation distance $c\varepsilon/2$
from $U_n$, the uniform distribution over
$[n]$. By choosing $c$ to be an appropriate constant, Paninski~\cite{Paninski08} showed that no algorithm using fewer than $\sqrt{n}/\varepsilon^2$ samples can distinguish a distribution picked uniformly at random from $\mathcal{Q}$ from $U_n$ with probability at least $2/3$.
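A member of $\mathcal{Q}$ can be generated as follows (0-indexed; a minimal sketch with names of our choosing); note that $d_{\mathrm {TV}}(q, U_n) = c\varepsilon/2$ for every choice of signs.

```python
import numpy as np

def paninski_distribution(n, c_eps, seed=0):
    # pair up consecutive elements and perturb each pair by +- c*eps/n;
    # n is assumed even, c_eps = c * eps
    z = np.random.default_rng(seed).choice([-1, 1], size=n // 2)
    q = np.empty(n)
    q[0::2] = (1 + z * c_eps) / n  # i = 2l + 1 in the paper's indexing
    q[1::2] = (1 - z * c_eps) / n  # i = 2l + 2
    return q
```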
Suppose $\mathcal{C}$ is a class of distributions such that
\begin{itemize}
\item
The uniform distribution $U_n$ is in $\mathcal{C}$,
\item
For appropriately chosen $c$, $d_{\mathrm {TV}}(\mathcal{C}, \mathcal{Q})\ge\varepsilon$,
\end{itemize}
then testing $\mathcal{C}$ is not easier than distinguishing $U_n$ from $\mathcal{Q}$.
Invoking~\cite{Paninski08} immediately implies that testing the class
$\mathcal{C}$ requires $\Omega(\sqrt{n}/\varepsilon^2)$ samples.
The lower bounds for all the one dimensional distributions will follow
directly from this construction, and for testing monotonicity in
higher dimensions, we extend this construction to $d\ge1$,
appropriately. These arguments are proved in
Section~\ref{sec:lb-appendix}, leading to the following lower bounds for testing these classes:
\begin{theorem}$ $
\label{thm:lbs}
\begin{itemize}
\item
For any $d\ge1$, any algorithm for testing monotonicity over
$[n]^d$ requires $\Omega(n^{d/2}/\varepsilon^2)$ samples.
\item
For $d\ge1$, any algorithm for testing independence over
$[n_1]\times\cdots\times[n_d]$ requires
$\Omega\left(\frac{(n_1\cdot n_2\ldots\cdot
n_d)^{1/2}}{\varepsilon^2}\right)$ samples.
\item
Any algorithm for testing unimodality, log-concavity, or monotone
hazard rate over $[n]$ requires $\Omega(\sqrt{n}/\varepsilon^2)$ samples.
\end{itemize}
\end{theorem}
\section{Introduction} \label{sec:introduction}
The quintessential scientific question is whether an unknown object
has some property, i.e. whether a model from a specific class fits
the object's observed behavior. If the unknown object is
a probability distribution, $p$, to which we have sample access,
we are typically asked to distinguish whether $p$ belongs to some class
$\mathcal{C}$ or whether it is sufficiently far from it.
This question has received tremendous attention in the field of statistics (see, e.g.,~\cite{Fisher25,lehmann2006testing}), where
test statistics for important properties such as the ones we
consider here have been proposed. Nevertheless, the
emphasis has been on asymptotic
analysis, characterizing the rates of convergence of test statistics under null
hypotheses, as the number of samples tends to infinity. In contrast,
we wish to study the following problem in the small sample regime:
\vspace{2ex}
\begin{center}
\smallskip \framebox{
\begin{minipage}{13.5cm} $\Pi(\mathcal{C},\varepsilon)$: Given a family of distributions $\mathcal{C}$, some
$\varepsilon>0$, and sample access to an unknown distribution $p$ over a
discrete support, how many samples are required to distinguish
between $p \in \mathcal{C}$ versus $d_{\mathrm {TV}}(p,\mathcal{C})>\varepsilon$?
\end{minipage}
}
\end{center}
\vspace{2ex}
The problem has been studied intensely in the literature on property
testing and sublinear algorithms \cite{Goldreich98, Fischer01, Rubinfeld06, Ron08,canonne2015survey}, where the emphasis has been on
characterizing the optimal tradeoff between $p$'s support size and
the accuracy $\varepsilon$ in the number of samples. Several results have
been obtained, roughly clustering into three groups, where (i) $\mathcal{C}$
is the class of monotone distributions over $[n]$, or more generally a
poset~\cite{BatuKR04,Bhattacharyya11}; (ii) $\mathcal{C}$ is the
class of independent, or $k$-wise independent distributions over a
hypergrid~\cite{batu2001testing,alon2007testing}; and
(iii) $\mathcal{C}$ contains a single-distribution $q$, and the problem
becomes that of testing whether $p$ equals $q$ or is far from
it~\cite{batu2001testing,Paninski08, valiant2014automatic}.
With respect to (iii), \cite{valiant2014automatic} exactly characterizes the number of samples required to test identity to each distribution $q$, providing a single tester matching this bound simultaneously for all $q$. Nevertheless, this tester and its precursors are not applicable to the composite identity testing problem that we consider. If our class $\mathcal{C}$ were finite, we could test against each element in the class, albeit this would not necessarily be sample optimal. If our class $\mathcal{C}$ were a continuum, we would need \emph{tolerant} identity testers, which tend to be more expensive in terms of sample complexity \cite{ValiantV11}, and result in substantially suboptimal testers for the classes we consider. Or we could use approaches related to the generalized likelihood ratio test, but their behavior is not well-understood in our regime, and optimizing likelihood over our classes becomes computationally intense.
\paragraph{Our Contributions} In this paper, we obtain sample-optimal and computationally efficient testers for $\Pi(\mathcal{C},\varepsilon)$ for the most fundamental shape restrictions to a distribution.
Our contributions are the following:
\begin{enumerate}
\item
For a known distribution $q$ over $[n]$, and given samples from an unknown
distribution $p$, we show that distinguishing the cases: $(a)$ whether the $\chi^2$-distance
between $p$ and $q$ is at most $\varepsilon^2/2$, versus $(b)$ the $\ell_1$ distance
between $p$ and $q$ is at least $\varepsilon$, requires $\Theta(\sqrt
n/\varepsilon^2)$ samples.
As a corollary, we provide a simpler argument to
show that identity testing requires $\Theta(\sqrt n/\varepsilon^2)$ samples (previously shown in \cite{valiant2014automatic}).
\item
For the class $\mathcal{C}=\mathcal{M}_n^d$ of monotone distributions over
$[n]^d$ we require an optimal $\Theta\left({n^{d/2} \over
\varepsilon^2}\right)$ number of samples, where prior work requires
$\Omega\left({\sqrt{n} \log n \over \varepsilon^4}\right)$ samples for $d=1$
and $\tilde{\Omega}\left(n^{d-{1\over 2}} {\rm poly}\left({1 \over
\varepsilon}\right)\right)$ for $d>1$~\cite{BatuKR04,Bhattacharyya11}. Our
results improve the exponent of $n$ with respect to $d$, shave all
logarithmic factors in $n$, and improve the exponent of $\varepsilon$ by at
least a factor of $2$.
\begin{enumerate}
\item A useful building block and interesting byproduct of our
analysis is extending Birg\'e's oblivious decomposition for
single-dimensional monotone distributions~\cite{Birge87} to monotone
distributions in $d\ge1$, and to the stronger notion of $\chi^2$-distance. See Section~\ref{sec:hypergrid}.
\item Moreover, we show that $O(\log^d n)$ samples suffice to learn a monotone
distribution over $[n]^d$ in $\chi^2$-distance. See
Lemma~\ref{lem:fin-learn-mon} for the precise statement.
\end{enumerate}
\item
For the class $\mathcal{C} = \Pi_d$ of product distributions over $[n_1]\times\cdots\times[n_d]$,
our algorithm requires
$O\left(\left((\prod_\ell n_\ell)^{1/2} + \sum_\ell
n_\ell\right)/\varepsilon^2\right)$ samples.
We note that a product distribution is one where all marginals are independent, so this is equivalent to testing whether a collection of random variables is mutually independent.
When the
$n_\ell$'s are large, the first term dominates, and the sample
complexity is $O\left(\left(\prod_\ell n_\ell\right)^{1/2}/\varepsilon^2\right)$.
In particular, when $d$ is a constant and all $n_\ell$'s are equal to $n$, we achieve the optimal sample complexity of $\Theta(n^{d/2}/\varepsilon^2)$.
To the best of our knowledge, this is the first result for $d \geq 3$; when $d = 2$, this improves the previously known complexity of $O\left(\frac{n}{\varepsilon^6}{\rm
polylog}(n/\varepsilon)\right)$ \cite{batu2001testing,levi2013testing}, significantly improving the dependence on $\varepsilon$ and shaving all logarithmic factors.
\item For the classes $\mathcal{C}=\mathcal{LCD}_n$, $\mathcal{C}=\mathcal{MHR}_n$ and $\mathcal{C}=\mathcal{U}_n$ of log-concave, monotone-hazard-rate and unimodal distributions over $[n]$, we require an optimal $\Theta\left({\sqrt{n} \over \varepsilon^2}\right)$ number of samples. Our testers for $\mathcal{LCD}_n$ and $\mathcal{MHR}_n$ are to our knowledge the first for these classes for the low sample regime we are studying---see~\cite{hall2005testing} and its references for statistics literature on the asymptotic regime. Our tester for $\mathcal{U}_n$ improves the dependence of the sample complexity on $\varepsilon$ by at least a factor of $2$ in the exponent, and shaves all logarithmic factors in $n$, compared to testers based on testing monotonicity.
\begin{enumerate}
\item A useful building block and important byproduct of our analysis
are the first computationally efficient algorithms for properly
learning log-concave and monotone-hazard-rate distributions, to
within $\varepsilon$ in total variation distance, from ${\rm
poly}(1/\varepsilon)$ samples, independent of the domain size
$n$. See Corollaries~\ref{cor:learning LCD}
and~\ref{cor:learning MHR}. Again, to our knowledge these are the first
computationally efficient algorithms in the low
sample regime. The works~\cite{AcharyaDLS15, ChanDSS13b} provide algorithms for density
estimation, which are non-proper, i.e. will approximate an unknown
distribution from these classes with a distribution that does not
belong to these classes. On the other hand, the statistics
literature focuses on maximum-likelihood estimation in the
asymptotic regime---see e.g. \cite{cule2010theoretical} and its
references.
\end{enumerate}
\item For all the above classes we obtain matching lower bounds,
showing that the sample complexity of our testers is optimal with
respect to $n$, $\varepsilon$ and when applicable $d$. See
Section~\ref{sec:lower-bounds}. Our lower bounds are based on
extending Paninski's lower bound for testing uniformity~\cite{Paninski08}.
\end{enumerate}
At the heart of our tester lies a novel use of the $\chi^2$
statistic. Naturally, the $\chi^2$ and its related $\ell_2$ statistic
have been used in several of the afore-cited results. We propose a
new use of the $\chi^2$ statistic enabling our optimal sample
complexity. The essence of our approach is to first draw a small
number of samples (independent of $n$ for log-concave and
monotone-hazard-rate distributions and only logarithmic in $n$ for
monotone and unimodal distributions) to approximate the unknown
distribution $p$ in $\chi^2$ distance. If $p \in \mathcal{C}$, our learner
is required to output a distribution $q$ that is $O(\varepsilon)$-close
to $\mathcal{C}$ in total variation and $O(\varepsilon^2)$-close to $p$ in
$\chi^2$ distance. Then some analysis reduces our testing problem
to distinguishing the following cases:
\begin{itemize}
\item
$p$ and $q$ are $O(\varepsilon^2)$-close in $\chi^2$ distance; this case
corresponds to $p \in \mathcal{C}$.
\item
$p$ and $q$ are $\Omega(\varepsilon)$-far in total variation distance; this
case corresponds to $d_{\mathrm {TV}}(p,\mathcal{C})>\varepsilon$.
\end{itemize}
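These two cases are indeed mutually exclusive for small $\varepsilon$, by a standard Cauchy--Schwarz step, included here for intuition:

```latex
d_{\mathrm {TV}}(p,q)
  = \frac12\sum_i |p_i - q_i|
  = \frac12\sum_i \frac{|p_i - q_i|}{\sqrt{q_i}}\cdot\sqrt{q_i}
  \le \frac12\left(\sum_i \frac{(p_i - q_i)^2}{q_i}\right)^{1/2}
  = \frac12\sqrt{\chi^2(p,q)},
```

so $\chi^2(p,q) = O(\varepsilon^2)$ forces $d_{\mathrm {TV}}(p,q) = O(\varepsilon)$, and the two cases cannot overlap once the constants are chosen appropriately.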
We draw a comparison with \emph{robust identity testing}, in which one
must distinguish whether $p$ and $q$ are $c_1\varepsilon$-close or
$c_2\varepsilon$-far in total variation distance, for constants $c_2> c_1 >
0$.
In \cite{ValiantV11}, Valiant and Valiant show that $\Omega(n/\log n)$
samples are required for this problem -- a nearly-linear sample
complexity, which may be prohibitively large in many settings.
In comparison, the problem we study tests for $\chi^2$ closeness
rather than total variation closeness: a relaxation of the previous
problem.
However, our tester demonstrates that this relaxation allows us to
achieve a substantially sublinear complexity of $O(\sqrt{n}/\varepsilon^2)$.
On the other hand, this relaxation is still tight enough to be useful,
as demonstrated by our application in obtaining sample-optimal testers.
We note that while the $\chi^2$ statistic is prevalent in statistical
hypothesis testing, providing optimal error
exponents in the large-sample regime, to the best of our knowledge,
in the small-sample regime, \emph{modified versions} of the $\chi^2$
statistic have only recently been used for \emph{closeness testing}
in~\cite{AcharyaDJOPS12, ChanDVV13} and for testing uniformity of
monotone distributions in~\cite{AcharyaJOS13}. In
particular,~\cite{AcharyaDJOPS12} design an unbiased statistic for
estimating the $\chi^2$ distance between two \emph{unknown}
distributions.
In Section~\ref{sec:testing}, we show that a version of the $\chi^2$
statistic, appropriately excluding certain elements of the support,
is sufficiently well-concentrated to distinguish between the above
cases. Moreover, the sample complexity of our algorithm is optimal for
most classes.
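To make the statistic concrete, the following sketch (our illustrative Python, not the exact statistic of the paper; the function name, the uniform hypothesis, and the sampling setup are ours) computes the standard unbiased $\chi^2$-type statistic over a restricted support, whose expectation is proportional to the $\chi^2$ distance between the unknown $p$ and the hypothesis $q$ on that support:

```python
import random
from collections import Counter

def chi_sq_statistic(samples, q, support):
    # Unbiased chi^2-type statistic: its expectation is proportional to
    # the chi^2 distance between the unknown p and the hypothesis q,
    # restricted to `support`.
    m = len(samples)
    counts = Counter(samples)
    z = 0.0
    for i in support:  # elements outside `support` are excluded
        n_i = counts.get(i, 0)
        z += ((n_i - m * q[i]) ** 2 - n_i) / (m * q[i])
    return z

random.seed(0)
n = 50
q = [1.0 / n] * n
close = [random.randrange(n) for _ in range(20000)]     # p = q (uniform)
far = [random.randrange(n // 2) for _ in range(20000)]  # p far from q
z_close = chi_sq_statistic(close, q, range(n))
z_far = chi_sq_statistic(far, q, range(n))
print(z_close < z_far)
```

Restricting the sum to \texttt{support} corresponds to the exclusion of certain support elements described above; the exact thresholds are specified in Section~\ref{sec:testing}.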
Our base tester is combined with the afore-mentioned extension of
Birg\'e's decomposition theorem to test monotone distributions in
Section~\ref{sec:monotone} (see Theorem~\ref{thm:monotone-final}
and Corollary~\ref{cor:high-d}), and is also used to test independence
of distributions in Section~\ref{sec:independence} (see
Theorem~\ref{thm:independence}).
Naturally, there are several bells and whistles that we need to add to
the above skeleton to accommodate all classes of distributions that we
are considering. For log-concave and monotone-hazard distributions, we
are unable to obtain a cheap (in terms of samples) learner that
$\chi^2$-approximates the unknown distribution $p$ throughout its
support. Still, we can identify a subset of the support where the
$\chi^2$-approximation is tight and which captures almost all the
probability mass of $p$. We extend our tester to accommodate
excluding subsets of the support from the $\chi^2$-approximation. See
Theorems~\ref{thm:lcd-main} and~\ref{thm:mhr-main} in
Sections~\ref{sec:LCD-main} and \ref{sec:MHR-main}.
For unimodal distributions, we are even unable to identify a large
enough subset of the support where the $\chi^2$ approximation is
guaranteed to be tight. But we can show that there exists a light
enough piece of the support (in terms of probability mass under $p$)
that we can exclude to make the $\chi^2$ approximation tight. Given
that we only use Chebyshev's inequality to prove the concentration of
the test statistic, it would seem that our lack of knowledge of the
piece to exclude would involve a union bound and a corresponding
increase in the required number of samples. We avoid this through a
careful application of Kolmogorov's max inequality in our setting. See
Theorem~\ref{thm:unimodality} of Section~\ref{sec:unimodal-main}.
\paragraph{Related Work.} We have discussed the work most closely
related to the problems studied in this paper in the previous section,
alongside our contributions.
We cannot do full justice here to the long line of work on shape
restrictions of probability distributions in probabilistic modeling
and testing.
It suffices to say that the classes of distributions that
we study are fundamental, and have motivated an extensive literature on their
learning and testing~\cite{BBBB:72}. In recent years, there has
been a line of work on shape-restricted statistics, pioneered by Jon Wellner
and others: \cite{JW:09,BW10sn} study estimation of monotone and
$k$-monotone densities, and \cite{BJR11,SumardW14} study estimation of
log-concave distributions.
As we have mentioned, statistics has focused on the asymptotic
regime as the number of samples tends to infinity. Instead we are
considering the low sample regime and are more stringent about the
behavior of our testers, requiring $2$-sided guarantees. We want to
accept if the unknown distribution is in our class of interest, and
also reject if it is far from the class. For this problem, as
discussed above, there are few results when $\mathcal{C}$ is a whole class of
distributions. More closely related to our paper is the line of
work~\cite{BatuKR04, ACS10, Bhattacharyya11} on monotonicity testing, albeit with
sub-optimal sample complexity.
Testing independence of random variables has a long history in
statistics~\cite{rao1981analysis, agresti2011categorical}. The theoretical
computer science community has also considered
the problem of testing independence of two random
variables~\cite{batu2001testing, levi2013testing}.
While our results sharpen the case where the variables are over domains of equal size,
they demonstrate an interesting asymmetric upper bound when this is not the case.
More recently, Acharya and Daskalakis provide optimal testers for the
family of Poisson Binomial Distributions~\cite{AcharyaD15}.
Finally,
contemporaneous work of Canonne et al.~\cite{CanonneDGR15a,CanonneDGR15b} provides
a generic algorithm and lower bounds for the single-dimensional families of distributions considered here.
We note that their algorithm has a sample complexity which is suboptimal in both $n$ and $\varepsilon$, while our algorithms are optimal.
Their algorithm also extends to mixtures of these classes, though some of these extensions are not computationally efficient.
They also provide a framework for proving lower bounds, giving the optimal bounds for many classes when $\varepsilon$ is sufficiently large with respect to $1/n$.
In comparison, we provide these lower bounds unconditionally by modifying Paninski's construction~\cite{Paninski08} to suit the classes we consider.
\input{preliminaries}
\input{overview}
\input{testing}
\input{monotone}
\input{unimodal-main}
\input{independence}
\input{otherclasses}
\input{lower-bounds}
\section*{Acknowledgements}
The authors thank Cl\'ement Canonne and Jerry Li; the former for several useful comments and suggestions on previous drafts of this work, and both for helpful discussions and thoughts regarding independence testing.
\bibliographystyle{alpha}
\section{Details on MHR testing}
\label{sec:MHR}
\begin{prevproof}{Lemma}{lem:mhr-learn}
As with log-concave distributions, our method for MHR distributions can be split into two parts.
In the first step, if $p \in \mathcal{MHR}_n$, we obtain a distribution $q$ which is $O(\varepsilon^2)$-close to $p$ in $\chi^2$ distance on a set $\mathcal{A}$ of intervals such that $p(\mathcal{A}) \geq 1 - O(\varepsilon)$.
$q$ will achieve this by being a multiplicative $(1 + O(\varepsilon))$ approximation for each element within these intervals.
This step is very similar to the decomposition used for unimodal distributions (described in Section~\ref{sec:unimodal-appendix}), so we sketch the argument and highlight the key differences.
The second step will be to find a feasible point in a linear program.
If $p \in \mathcal{MHR}_n$, there should always be a feasible point, indicating that $q$ is close to a distribution in $\mathcal{MHR}_n$ (leveraging the particular guarantees for our algorithm for generating $q$).
If $d_{\mathrm {TV}}(p,\mathcal{MHR}_n) \geq \varepsilon$, there may or may not be a feasible point, but when there is, it should imply the existence of a distribution $p^* \in \mathcal{MHR}_n$ such that $d_{\mathrm {TV}}(q,p^*) \leq \varepsilon/2$.
The analysis will rely on the following lemma from \cite{ChanDSS13a}, which roughly states that an MHR distribution is ``almost'' non-decreasing.
\begin{lemma}[Lemma 5.1 in \cite{ChanDSS13a}]
\label{lem:mhr-str}
Let $p$ be an MHR distribution over $[n]$.
Let $I = [a,b] \subset [n]$ be an interval, and $R = [b+1,n]$ be the elements to the right of $I$.
Let $\eta = p(I)/p(R)$. Then $p(b+1) \geq \frac{1}{1 + \eta}p(a)$.
\end{lemma}
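As a sanity check, the lemma can be verified numerically on a truncated geometric distribution, which is MHR since its hazard rate $(1-r)/(1-r^{\,n-i+1})$ is non-decreasing in $i$ (the parameters and interval below are illustrative choices of ours):

```python
# Check Lemma 5.1 on a truncated geometric distribution, which is MHR.
n, r = 100, 0.95
w = [r ** i for i in range(n)]
total = sum(w)
p = [x / total for x in w]               # p over [n], 0-indexed

a, b = 10, 40                            # interval I = [a, b]
eta = sum(p[a:b + 1]) / sum(p[b + 1:])   # eta = p(I) / p(R)
print(p[b + 1] >= p[a] / (1 + eta))      # the lemma's guarantee
```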
\paragraph{Part 1.}
As with unimodal distributions, we start by taking $O(\frac{b \log b}{\varepsilon^2})$ samples, with the goal of partitioning the domain into intervals of mass $\Theta(1/b)$.
First, we will ignore the left and rightmost intervals of mass $\Theta(\varepsilon)$.
For all ``heavy'' elements with mass $\geq \Theta(1/b)$, we consider them as singletons.
We note that Lemma~\ref{lem:mhr-str} implies that there will be at most $O(1/\varepsilon)$ contiguous intervals of such elements.
The rest of the domain is greedily divided (from left to right) into intervals of mass $\Theta(1/b)$, cutting an interval short if we reach one of the heavy elements.
This will result in the guarantee that all but potentially $O(1/\varepsilon)$ intervals have $\Theta(1/b)$ mass.
Next, similar to unimodal distributions, considering the flattened distribution, we discard all intervals for which the per-element probability is not within a $(1 \pm O(\varepsilon))$ multiplicative factor of the same value for both neighboring intervals.
The claim is that all remaining intervals will have the property that the per-element probability is within a $(1 \pm O(\varepsilon))$ multiplicative factor of the true probability.
This is implied by Lemma~\ref{lem:mhr-str}.
If there were a point in an interval which was above this range, the distribution must decrease slowly, and the next interval would have a much larger per-element weight, thus leading to the removal of this interval.
A similar argument forbids us from missing an interval which contains a point that lies outside this range.
Relying on the fact that truncating the left and rightmost intervals eliminates elements with low probability mass, similar to the unimodal case, one can show that we will remove at most $\log(n/\varepsilon)/\varepsilon$ intervals, and thus at most $\log(n/\varepsilon)/(b\varepsilon)$ probability mass.
Choosing $b = \Omega(\log(n/\varepsilon)/\varepsilon^2)$ limits this to $O(\varepsilon)$, as desired.
At this point, if $p$ is indeed MHR, the multiplicative estimates guarantee that the result is $O(\varepsilon^2)$-close in $\chi^2$-distance among the remaining intervals.
\paragraph{Part 2.}
We note that an equivalent condition for a distribution $f$ being MHR is concavity of $\log(1 - F)$, where $F$ is the CDF of $f$.
Therefore, our approach for this part will be similar to the approach used for log-concave distributions.
Given the output distribution $q$ from the previous part of this algorithm, our goal will be to check if there exists an MHR distribution $f$ which is $O(\varepsilon)$-close to $q$.
We will run a linear program with variables $\mathfrak{f}_i = \log(1 - F_i)$.
First, we ensure that $f$ is a distribution.
This can be done with the following constraints:
\begin{alignat*}{3}
&\mathfrak{f}_i &&\leq 0 \hphantom{space}&&\forall i \in [n] \\
&\mathfrak{f}_i &&\geq \mathfrak{f}_{i+1} &&\forall i \in [n-1] \\
&\mathfrak{f}_n &&= -\infty
\end{alignat*}
To ensure that $f$ is MHR, we use the following constraint:
\begin{alignat*}{3}
&\mathfrak{f}_{i-1} + \mathfrak{f}_{i+1} &&\leq 2\mathfrak{f}_i \hphantom{space}&&\forall i \in [2,n-1]
\end{alignat*}
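As a quick consistency check, for a genuinely MHR distribution the vector $\mathfrak{f}_i = \log(1-F_i)$ satisfies all of these shape constraints; the sketch below (Python, ours; the truncated geometric distribution is an illustrative choice) verifies this:

```python
import math

# For an MHR distribution, f_i = log(1 - F_i) satisfies the LP's shape
# constraints; check them on a truncated geometric distribution.
n, r = 50, 0.9
w = [r ** i for i in range(n)]
total = sum(w)
p = [x / total for x in w]

F, c = [], 0.0
for pi in p:
    c += pi
    F.append(c)
f = [math.log(1.0 - Fi) for Fi in F[:-1]]    # drop f_n = -infinity

nonpos   = all(fi <= 0 for fi in f)                          # f_i <= 0
monotone = all(f[i] >= f[i + 1] for i in range(len(f) - 1))  # f_i >= f_{i+1}
concave  = all(f[i - 1] + f[i + 1] <= 2 * f[i] + 1e-9        # concavity
               for i in range(1, len(f) - 1))
print(nonpos, monotone, concave)
```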
Now, ideally, we would like to ensure $f$ and $q$ are $\varepsilon$-close in total variation distance by ensuring they are pointwise within a multiplicative $(1 \pm \varepsilon)$ factor of each other:
$$(1 - \varepsilon) \leq f_i/q_i \leq (1 + \varepsilon)$$
We note that this is a stronger condition than $f$ and $q$ being $\varepsilon$-close, but if $p \in \mathcal{MHR}_n$, the guarantees of the previous step would imply the existence of such an $f$.
We have a separate treatment for the identified singletons (i.e., those with probability $\geq 1/b$) and the remainder of the support.
For each element $q_i$ identified to have $\geq 1/b$ mass, we add two constraints:
$$\log((1-\varepsilon/2b)(1 - Q_i)) \leq \mathfrak{f}_i \leq \log((1+\varepsilon/2b)(1 - Q_i))$$
$$\log((1-\varepsilon/2b)(1 - Q_{i-1})) \leq \mathfrak{f}_{i-1} \leq \log((1+\varepsilon/2b)(1 - Q_{i-1}))$$
If we satisfy these constraints, it implies that
$$q_i - \varepsilon/b \leq f_{i} \leq q_i + \varepsilon/b.$$
Since $q_i \geq 1/b$, this implies $$(1 - \varepsilon)q_i \leq f_i \leq (1 + \varepsilon)q_i$$
as desired.
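In more detail: exponentiating the constraints gives $(1\pm\varepsilon/2b)$-multiplicative control of $1-F_{i-1}$ and $1-F_i$, and since $f_i = (1 - F_{i-1}) - (1 - F_i)$,
\begin{align*}
f_i \;\leq\; \left(1+\tfrac{\varepsilon}{2b}\right)(1 - Q_{i-1}) - \left(1-\tfrac{\varepsilon}{2b}\right)(1 - Q_i)
\;=\; q_i + \tfrac{\varepsilon}{2b}\big((1 - Q_{i-1}) + (1 - Q_i)\big)
\;\leq\; q_i + \tfrac{\varepsilon}{b},
\end{align*}
with the matching lower bound following symmetrically.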
Now, the remaining elements each have $\leq 1/b$ mass.
For each such element $q_i$, we create a constraint
$$(1 - O(\varepsilon))\frac{q_i}{1 - Q_{i-1}} \leq \mathfrak{f}_{i-1} - \mathfrak{f}_i \leq (1 + O(\varepsilon))\frac{q_i}{1 - Q_{i-1}} $$
Note that the middle term is
$$-\log\left(\frac{1 - F_i}{1 - F_{i-1}}\right) = -\log\left(1 - \frac{f_i}{1 - F_{i-1}}\right) \in \frac{f_i}{1 - F_{i-1}}\left(1 \pm 2\varepsilon\right),$$
where the containment uses the Taylor expansion of the logarithm and the facts that $f_i \leq 1/b$ and $1 - F_{i-1} \geq \varepsilon$ (since during the previous part, we ignored the rightmost $O(\varepsilon)$ probability mass).
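(The containment follows from the elementary bounds $x \leq -\log(1-x) \leq \frac{x}{1-x} \leq x(1+2x)$, valid for $0 \leq x \leq \frac12$; here $x = f_i/(1-F_{i-1}) \leq 1/(b\varepsilon) \leq \varepsilon$ for the chosen $b$.)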
If we satisfy the desired constraints, it implies that
$$f_i \in \frac{1}{\left(1 \pm 2\varepsilon\right)}\frac{1 - F_{i-1}}{1 - Q_{i-1}}(1 \mp O(\varepsilon))q_i.$$
Since we are taking $\Omega(1/\varepsilon^4)$ samples and $1 - F_{i-1} \geq \Omega(\varepsilon)$, Lemma~\ref{lem:dkw} implies that $f_i$ is indeed a multiplicative $(1 \pm \varepsilon)$ approximation for these points as well.
We note that all points which do not fall into these two cases make up a total of $O(\varepsilon)$ probability mass.
Therefore, $f$ may be arbitrary at these points and only incur $O(\varepsilon)$ cost in total variation distance.
If we find a feasible point for this linear program, it implies the existence of an MHR distribution within $O(\varepsilon)$ total variation distance.
In this case, we continue to the testing portion of the algorithm.
Furthermore, if $p \in \mathcal{MHR}_n$, our method for generating $q$ certifies that such a distribution exists, and we continue on to the testing portion of the algorithm.
\end{prevproof}
\section{Details on Testing Monotonicity}
\label{sec:monotone-appendix}
In this section, we prove the lemmas necessary for our monotonicity testing result.
\subsection{A Structural Result for Monotone Distributions on the Hypergrid}
\label{sec:hypergrid}
Birg\'e~\cite{Birge87} showed that any monotone
distribution can be approximated to total variation distance $\varepsilon$ by an
$O(\log(n)/\varepsilon)$-piecewise constant distribution. Moreover, the
intervals over which the output is constant are
independent of the distribution $p$. This result was strengthened
to the Kullback-Leibler divergence by~\cite{AcharyaJOS14a} to study the compression of
monotone distributions: they upper bound the KL divergence by the
$\chi^2$ distance and then bound the $\chi^2$ distance.
We extend this result to $[n]^d$.
We divide $[n]^d$ into $b^d$ rectangles
as follows.
Let $\{I_1, \dots, I_b\}$ be a partition of
$[n]$ into consecutive intervals defined as:
\begin{align*}
|I_j| =
\begin{cases}
1 & \text{ for } 1\le j\le\frac b2, \\
\lfloor 2(1+\gamma)^{j-b/2} \rfloor & \text{ for }\frac b2< j\le b.
\end{cases}
\end{align*}
For ${\bf j}=(j_1,\ldots, j_d)\in[b]^d$, let $I_{\bf j} := I_{j_1}\times
I_{j_2}\times\ldots\times I_{j_d}$.
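In one dimension, the partition can be sketched as follows (Python, ours; the handling of the final interval, clipped or extended so that the intervals exactly cover $[n]$, is our choice of boundary convention). The $d$-dimensional decomposition is then the $d$-fold product of this partition.

```python
import math

def birge_partition(n, b, gamma):
    # Oblivious partition of [n]: the first b/2 intervals are singletons,
    # the rest grow geometrically with ratio (1 + gamma); the final
    # interval is clipped (or extended) so the partition covers [n].
    lengths = [1] * (b // 2)
    lengths += [math.floor(2 * (1 + gamma) ** (j - b // 2))
                for j in range(b // 2 + 1, b + 1)]
    intervals, start = [], 1
    for length in lengths:
        if start > n:
            break
        end = min(start + length - 1, n)
        intervals.append((start, end))
        start = end + 1
    if start <= n:                   # lengths fell short: extend the last
        intervals[-1] = (intervals[-1][0], n)
    return intervals

n, b = 1000, 100
gamma = 2 * math.log(n) / b
iv = birge_partition(n, b, gamma)
print(iv[0], iv[-1][1])   # first interval is a singleton; last ends at n
```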
The $\chi^2$ distance between $p$ and $\bar{\dP}$
can be bounded as
\begin{align*}
\chi^2(p,\bar{\dP})
=& \left[\sum_{{\bf j}\in[b]^d}\sum_{{\bf i}\in
I_{{\bf j}}}\frac{{\dP}_{{\bf i}}^2}{{\dPbar}_{{\bf i}}}\right]-1\\
\le& \left[\sum_{{\bf j}\in[b]^d}{\dP}_{{\bf j}}^+|I_{\bf j}|\right]-1.
\end{align*}
For ${\bf j}=(j_1,\ldots, j_d)\in\mathcal{S}_{\rm large}$, let ${\bf j}^*=(j_1^*,\ldots,
j_d^*)$ be defined by
\begin{align*}
j_i^* =\begin{cases}
j_i& \text{ if } j_i\le b/2+1\\
j_i-1& \text{ otherwise}.
\end{cases}
\end{align*}
We bound the expression above as follows.
Let $T \subseteq [d]$ be any subset of the coordinates, of size
$\ell$. Let $\bar T$ be the set of all ${\bf j}$ that satisfy $j_i=b/2+1$ for $i\in
T$. In other words, over the dimensions determined by $T$, the value
of the index is equal to $b/2+1$. The map ${\bf j}\rightarrow{\bf j}^*$ restricted to
$\bar T$ is one-to-one, and since at most $d-\ell$ of the coordinates drop,
\begin{align*}
|I_{{\bf j}}| \le |I_{{\bf j}^*}|\cdot(1+\gamma)^{d-\ell}.
\end{align*}
Since there are $\ell$ coordinates that do not change,
and each corresponding interval has length at most $2(1+\gamma)$, we obtain
\begin{align*}
\sum_{{\bf j}\in\bar T}p_{\bf j} \le&
\ \sum_{{\bf j}\in\bar T}p_{{\bf j}^*}^-\cdot|I_{{\bf j}}|\cdot
(2(1+\gamma))^{\ell}\cdot(1+\gamma)^{d-\ell}\\
=&\ \sum_{{\bf j}\in\bar T}p_{{\bf j}^*}^-\cdot|I_{{\bf j}^*}| \cdot 2^{\ell} (1+\gamma)^{d}.
\end{align*}
Since the mapping is one-to-one, the probability of observing an element in
$\bar T$ is the probability of observing $b/2+1$ in $\ell$
coordinates, which is at most $(2/(b+2))^\ell$ under any monotone
distribution. Therefore,
\begin{align*}
\sum_{{\bf j}\in\bar T}p_{\bf j} \le&\left(\frac2{b+2}\right)^\ell \cdot 2^{\ell} (1+\gamma)^{d}.
\end{align*}
For any $\ell$ there are ${d\choose \ell}$ choices for $T$.
Therefore,
\begin{align}
\chi^2(p,\bar{\dP})\le\ & \sum_{\ell=0}^d {d\choose \ell}
\left(\frac4{b+2}\right)^\ell (1+\gamma)^{d}-1\nonumber\\
=&\ (1+\gamma)^d\left(1+\frac4{b+2}\right)^d-1\nonumber\\
=& \ \left(1+\gamma+\frac4{b+2}+\frac{4\gamma}{b+2}\right)^d-1\nonumber
\end{align}
Recall that $\gamma=2\log (n)/b>1/b$; this implies that the expression above
is at most $(1+2\gamma)^d-1$, which proves Lemma~\ref{lem:mon-str}.
\subsection{Monotone Learning}
Our algorithm requires a distribution $q$ satisfying the properties discussed earlier. We learn a monotone distribution from samples as follows.
Before proving this result, we prove a general result for $\chi^2$ learning of arbitrary discrete distributions, adapting the result from~\cite{KamathOPS15}.
For a distribution $p$, and a partition of the domain into $b$ intervals $I_1, \ldots, I_b$, let $\bar{\dP}_i= p(I_i)/|I_i|$ be the flattening of $p$ over these intervals. We saw that for monotone distributions there exists a partition of the domain such that $\bar{\dP}$ is \emph{close} to the underlying distribution in $\chi^2$ distance.
Suppose we are given $m$ samples from a distribution $p$ and a partition $I_1, \ldots, I_b$. Let $m_j$ be the number of samples that fall in $I_j$. For $i\in I_j$, let
\[
\dQ_i := \frac{1}{|I_j|}\frac{m_j+1}{m+b}.
\]
Let $S_j=\sum_{i\in I_j}\dP_i^2$. The expected $\chi^2$ distance between $p$ and $q$ can be bounded as follows.
\begin{align}
\expectation{\chi^2(p,q) } =& \left[\sum_{j=1}^b \sum_{i\in I_j} \sum_{\ell=0}^{m}{m\choose \ell}(p(I_j))^{\ell}(1-p(I_j))^{m-\ell}\frac{\dP_i^2}{(\ell+1)/(|I_j|(m+b))}\right]-1\nonumber\\
= &\left[\frac{m+b}{m+1}\sum_{j=1}^b \frac{S_j}{\bar{\dP}(I_j)/|I_j|} \left(\sum_{\ell=0}^{m}{m+1\choose \ell+1}(p(I_j))^{\ell+1}(1-p(I_j))^{m-\ell}\right)\right]-1 \nonumber\\
= & \left[\frac{m+b}{m+1}\sum_{j=1}^b \frac{S_j}{\bar{\dP}(I_j)/|I_j|} \left(1-(1-p(I_j))^{m+1}\right)\right]-1\nonumber\\
\le & \left[\frac{m+b}{m+1}\sum_{j=1}^b \frac{S_j}{\bar{\dP}(I_j)/|I_j|}\right]-1\nonumber\\
= & \left[\frac{m+b}{m+1}\left(\chi^2(p,\bar{\dP})+1\right)\right]-1\nonumber\\
\le & \frac{m+b}{m+1}\cdot\chi^2(p,\bar{\dP})+\frac{b}{m+1}\label{eqn:chi-learn}.
\end{align}
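The estimator itself is straightforward to implement; the following sketch (illustrative Python, ours; the partition and sample size are arbitrary) confirms that the add-one masses $(m_j+1)/(m+b)$ always sum to one, so the output is a valid distribution:

```python
import random

def laplace_flattened_estimate(samples, intervals):
    # Add-one (Laplace) estimator over a fixed partition: q is constant on
    # each interval I_j, with per-element mass (m_j + 1)/(|I_j| (m + b)).
    m, b = len(samples), len(intervals)
    q = {}
    for lo, hi in intervals:
        m_j = sum(1 for x in samples if lo <= x <= hi)
        for i in range(lo, hi + 1):
            q[i] = (m_j + 1) / ((hi - lo + 1) * (m + b))
    return q

random.seed(1)
intervals = [(1, 5), (6, 10), (11, 20)]
samples = [random.randint(1, 20) for _ in range(5000)]
q = laplace_flattened_estimate(samples, intervals)
print(abs(sum(q.values()) - 1.0) < 1e-9)   # q is a valid distribution
```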
Suppose $\gamma = O(\log (n)/b)$ and $b=\Theta(d\cdot\log (n)/\varepsilon^2)$, chosen sufficiently large. Then, by Lemma~\ref{lem:mon-str},
\begin{align}
\chi^2(p,\bar{\dP})\le \varepsilon^2.
\end{align}
Combining this with~\eqref{eqn:chi-learn} gives Lemma~\ref{lem:fin-learn-mon}.
\section{Testing Monotonicity}
\label{sec:monotone}
As an application of our testing framework, we will demonstrate how to
test for monotonicity. Let $d\geq1$, and ${\bf i}=(i_1, \ldots,
i_d),{\bf j}=(j_1, \ldots, j_d)\in[n]^d$. We say
${\bf i}\succcurlyeq{\bf j}$ if $i_l\ge j_l$ for $l=1,\ldots,d$.
\begin{definition}
A distribution $p$ over $[n]^d$
is monotone (decreasing) if for all ${\bf i}\succcurlyeq{\bf j}$, $p_{{\bf i}}\le
p_{{\bf j}}$.
\end{definition}
Our main result of this section is as follows:
\begin{theorem}
\label{thm:monotone-final}
For any $d \geq 1$, there exists an algorithm for testing monotonicity over $[n]^d$ with sample complexity
$$O\left(\frac{n^{d/2}}{\varepsilon^2}+\left(\frac{d\log n}{\varepsilon^2}\right)^d\cdot \frac1{\varepsilon^2}\right)$$
and time complexity $O\left(\frac{n^{d/2}}{\varepsilon^2} + \poly(\log n, 1/\varepsilon)^d \right)$.
\end{theorem}
In particular, this implies the following optimal algorithms for
monotonicity testing for all $d \geq 1$:
\begin{corollary}
\label{cor:high-d}
Fix any $d \geq 1$, and suppose $\varepsilon>\frac{\sqrt{d\log n}}{n^{1/4}}$.
Then there exists an algorithm for testing monotonicity over $[n]^d$
with sample complexity $O\left(n^{d/2}/\varepsilon^2\right)$.
\end{corollary}
Our analysis starts with a structural lemma about monotone distributions.
In \cite{Birge87}, Birg\'e showed that any monotone distribution $p$
over $[n]$ can be \emph{obliviously} decomposed into $O(\log(n)/\varepsilon)$
intervals, such that the flattening $\bar p$ (recall
Definition~\ref{def:flattening}) of $p$ over these intervals is
$\varepsilon$-close to $p$ in total variation distance.
\cite{AcharyaJOS14a} extend this result, giving a bound between the
$\chi^2$-distance of $p$ and $\bar p$.
We strengthen these results by extending them to monotone distributions over $[n]^d$.
In particular, we partition the domain $[n]^d$ of $p$ into $O(
(d\log(n)/\varepsilon^2)^d)$ rectangles, and compare it with $\bar p$, the
flattening over these rectangles.
\begin{lemma}
\label{lem:mon-str}
Let $d\ge1$. There is an oblivious decomposition of $[n]^d$ into
$O((d\log(n)/\varepsilon^2)^d)$ rectangles such that for any monotone
distribution $p$ over $[n]^d$, its flattening $\bar p$ over these
rectangles satisfies $\chi^2(p,\bar p) \leq \varepsilon^2$.
\end{lemma}
This effectively reduces the support size to logarithmic in $n$.
At this point, we can apply the Laplace estimator (along the lines
of~\cite{KamathOPS15}) and learn a $q$ such
that if $p$ was monotone, then $q$ will be $O(\varepsilon^2)$-close in
$\chi^2$-distance.
\begin{lemma}
\label{lem:fin-learn-mon}
Let $d\ge1$, and $p$ be a monotone distribution over $[n]^d$.
There is an algorithm which outputs a distribution $q$ such that $\expectation{\chi^2(p,q)}\le \frac{\varepsilon^2}{500}$.
The time and sample complexity are both $O((d\log (n)/\varepsilon^2)^d/\varepsilon^2)$.
\end{lemma}
The final step before we apply our $\chi^2$-tester is to compute the
distance between $q$ and $\mathcal{M}_n^d$.
This subroutine is similar to the one introduced by~\cite{BatuKR04}.
The key idea is to write a linear program, which searches for any distribution $f$ which is close to $q$ in total variation distance.
We note that the desired properties of $f$ (i.e., monotonicity, normalization, and $\varepsilon$-closeness to $q$) are easy to enforce as linear constraints.
If we find that such an $f$ exists, we will apply our $\chi^2$-test to $q$.
If not, we output $\textsc{Reject}\xspace$, as this is sufficient evidence to conclude that $p \not \in \mathcal{M}_n^d$.
Note that the linear program operates over the oblivious decomposition
used in our structural result, so the complexity is polynomial in
$(d\log(n)/\varepsilon)^d$, rather than the naive $n^d$.
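This distance-computation subroutine can be illustrated concretely; the following sketch (ours, assuming the scipy library is available, and working over $[n]$ for simplicity rather than the decomposed domain) computes the total variation distance from an explicit $q$ to the monotone non-increasing distributions via a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def tv_to_monotone(q):
    # TV distance from an explicit distribution q over [n] to the class of
    # monotone non-increasing distributions.  Variables x = [f, t];
    # minimize (1/2) sum(t) s.t. t_i >= |f_i - q_i|, f non-increasing,
    # f >= 0 (default bounds), sum(f) = 1.
    n = len(q)
    c = np.concatenate([np.zeros(n), 0.5 * np.ones(n)])
    A_ub, b_ub = [], []
    for i in range(n):
        row = np.zeros(2 * n); row[i], row[n + i] = 1.0, -1.0
        A_ub.append(row); b_ub.append(q[i])       #  f_i - t_i <= q_i
        row = np.zeros(2 * n); row[i], row[n + i] = -1.0, -1.0
        A_ub.append(row); b_ub.append(-q[i])      # -f_i - t_i <= -q_i
    for i in range(n - 1):
        row = np.zeros(2 * n); row[i + 1], row[i] = 1.0, -1.0
        A_ub.append(row); b_ub.append(0.0)        #  f_{i+1} <= f_i
    A_eq = np.ones((1, 2 * n)); A_eq[0, n:] = 0.0 # sum(f) = 1
    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], method="highs")
    return res.fun

print(round(tv_to_monotone([0.4, 0.3, 0.2, 0.1]), 6))  # already monotone
print(round(tv_to_monotone([0.1, 0.2, 0.3, 0.4]), 6))
```

Comparing the returned optimum against the threshold $\varepsilon$ gives the accept/reject decision of this step.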
At this point, we have precisely the guarantees needed to apply Theorem~\ref{thm:chisq-test}, directly implying Theorem~\ref{thm:monotone-final}.
Proof of the lemmas in this section are provided in Section~\ref{sec:monotone-appendix}.
\section{Testing Log-Concavity}
\label{sec:LCD-main}
In this section we describe our results for testing log-concavity of
distributions. Our main result is as follows:
\begin{theorem}
\label{thm:lcd-main}
There exists an algorithm for testing log-concavity over $[n]$ with sample complexity
$$O\left(\frac{\sqrt{n}}{\varepsilon^2}+\frac{1}{\varepsilon^5}\right)$$
and time complexity $\poly(n,1/\varepsilon)$.
\end{theorem}
In particular, this implies the following optimal tester for this class:
\begin{corollary}
Suppose $\varepsilon > 1/n^{1/5}$.
Then there exists an algorithm for testing log-concavity over $[n]$ with sample complexity $O\left(\sqrt{n}/\varepsilon^2\right)$.
\end{corollary}
Our algorithm will fit into the structure of our general framework.
We first perform a very particular type of learning algorithm, whose
guarantees are summarized in the following lemma:
\begin{lemma}
\label{lem:lcd-learn}
Given $\varepsilon > 0$ and sample access to a distribution $p$, there exists an algorithm with the following guarantees:
\begin{itemize}
\item If $p \in \mathcal{LCD}_n$, the algorithm outputs a distribution $q \in \mathcal{LCD}_n$ and an $O(\varepsilon)$-effective support $S$ of $p$ such that $\chi^2(p_S,q_S) \leq \frac{\varepsilon^2}{500}$ with probability at least $5/6$;
\item If $d_{\mathrm {TV}}(p,\mathcal{LCD}_n) \geq \varepsilon$, the algorithm either outputs a
distribution $q \in \mathcal{LCD}_n$ or \textsc{Reject}\xspace.
\end{itemize}
The sample complexity is $O(1/\varepsilon^5)$ and the time complexity is $\poly(n,1/\varepsilon)$.
\end{lemma}
We note that, as a corollary, one immediately obtains an $O(1/\varepsilon^5)$-sample proper learning algorithm for log-concave distributions.
The result is immediate from the first item of Lemma~\ref{lem:lcd-learn} and Proposition~\ref{prop:distance-relations}.
We can actually do a bit better -- in the proof of Lemma~\ref{lem:lcd-learn}, we partition $[n]$ into intervals of probability mass $\Theta(\varepsilon^{3/2})$.
If one instead partitions into intervals of probability mass $\Theta(\varepsilon/\log(1/\varepsilon))$ and works directly with total variation distance instead of $\chi^2$ distance, one can show that $\tilde O(1/\varepsilon^4)$ samples suffice.
\begin{corollary}
\label{cor:learning LCD}
Given $\varepsilon > 0$ and sample access to a distribution $p \in \mathcal{LCD}_n$, there exists an algorithm which outputs a distribution $q \in \mathcal{LCD}_n$ such that $d_{\mathrm {TV}}(p,q) \leq \varepsilon$.
The sample complexity is $\tilde O(1/\varepsilon^4)$ and the time complexity is $\poly(n,1/\varepsilon)$.
\end{corollary}
Then, given the guarantees of Lemma \ref{lem:lcd-learn},
Theorem~\ref{thm:lcd-main} follows from
Theorem~\ref{thm:chisq-test}\footnote{To be more precise, we require
the modification of Theorem~\ref{thm:chisq-test} which is described in
Section~\ref{sec:testing}, in order to handle the case where the
$\chi^2$-distance guarantees only hold for a known effective
support.}. The details of these results are presented in Section~\ref{sec:LCD}.
\section{Testing for Monotone Hazard Rate}
\label{sec:MHR-main}
In this section, we obtain our main result for testing for monotone hazard rate:
\begin{theorem}
\label{thm:mhr-main}
There exists an algorithm for testing monotone hazard rate over $[n]$ with sample complexity
$$O\left(\frac{\sqrt{n}}{\varepsilon^2}+\frac{\log(n/\varepsilon)}{\varepsilon^4}\right)$$
and time complexity $\poly(n,1/\varepsilon)$.
\end{theorem}
This implies the following optimal tester for the class:
\begin{corollary}
Suppose $\varepsilon > \sqrt{\log(n/\varepsilon)}/n^{1/4}$.
Then there exists an algorithm for testing monotone hazard rate over $[n]$ with sample complexity $O\left(\sqrt{n}/\varepsilon^2\right)$.
\end{corollary}
We follow the same framework as before, first applying a $\chi^2$-learner with the following guarantees:
\begin{lemma}
\label{lem:mhr-learn}
Given $\varepsilon > 0$ and sample access to a distribution $p$, there exists an algorithm with the following guarantees:
\begin{itemize}
\item If $p \in \mathcal{MHR}_n$, the algorithm outputs a distribution $q \in \mathcal{MHR}_n$ and an $O(\varepsilon)$-effective support $S$ of $p$ such that $\chi^2(p_S,q_S) \leq \frac{\varepsilon^2}{500}$ with probability at least $5/6$;
\item If $d_{\mathrm {TV}}(p,\mathcal{MHR}_n) \geq \varepsilon$, the algorithm either outputs a distribution $q \in \mathcal{MHR}_n$ and a set $S \subseteq [n]$ or \textsc{Reject}\xspace.
\end{itemize}
The sample complexity is $O(\log(n/\varepsilon)/\varepsilon^4)$ and the time complexity is $\poly(n,1/\varepsilon)$.
\end{lemma}
As with log-concave distributions, this implies the following proper learning result:
\begin{corollary}
\label{cor:learning MHR}
Given $\varepsilon > 0$ and sample access to a distribution $p \in \mathcal{MHR}_n$, there exists an algorithm which outputs a distribution $q \in \mathcal{MHR}_n$ such that $d_{\mathrm {TV}}(p,q) \leq \varepsilon$.
The sample complexity is $O(\log(n/\varepsilon)/\varepsilon^4)$ and the time complexity is $\poly(n,1/\varepsilon)$.
\end{corollary}
Again, combining the learning guarantees of Lemma \ref{lem:mhr-learn}
with the appropriate variant of Theorem~\ref{thm:chisq-test}, we
obtain Theorem~\ref{thm:mhr-main}. The details of the argument and
proofs are presented in Section~\ref{sec:MHR}.
\section{Overview}
Our algorithm for testing a distribution $p$ can be decomposed into three steps.
\paragraph{Near-proper learning in $\chi^2$-distance.}
Our first step requires a learning algorithm with very specific guarantees.
In proper learning, we are given sample access to a distribution $p \in \mathcal{C}$, where $\mathcal{C}$ is some class of distributions, and we wish to output $q \in \mathcal{C}$ such that $p$ and $q$ are close in total variation distance.
In our setting, given sample access to $p \in \mathcal{C}$, we wish to output $q$ such that $q$ is \emph{close} to $\mathcal{C}$ in total variation distance, and $p$ and $q$ are close in $\chi^2$-distance on an effective support\footnote{We also require the algorithm to output a description of an effective support for which this property holds.
This requirement can be slightly relaxed, as we show in our results for testing unimodality.} of $p$.
From an information theoretic standpoint, this problem is harder than proper learning, since $\chi^2$-distance is more restrictive than total variation distance.
Nonetheless, this problem can be shown to have comparable sample complexity to proper learning for the structured classes we consider in this paper.
\paragraph{Computation of distance to class.}
The next step is to see if the hypothesis $q$ is close to the class $\mathcal{C}$ or not.
Since we have an explicit description of $q$, this step requires no further samples from $p$; it is purely computational.
If we find that $q$ is far from the class $\mathcal{C}$, then it must be that $p \not \in \mathcal{C}$, as otherwise the guarantees from the previous step would imply that $q$ is close to $\mathcal{C}$; in this case, we can terminate the algorithm at this point.
\paragraph{$\chi^2$-testing.}
At this point, the previous two steps guarantee that our distribution $q$ is such that:
\begin{itemize}
\item If $p \in \mathcal{C}$, then $p$ and $q$ are close in $\chi^2$ distance on a (known) effective support of $p$;
\item If $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$, then $p$ and $q$ are far in total variation distance.
\end{itemize}
We can distinguish between these two cases using $O(\sqrt{n}/\varepsilon^2)$ samples with a simple statistical $\chi^2$-test, that we describe in Section~\ref{sec:testing}.
\smallskip Using the above three-step approach, our tester, as described in the next section, can directly test monotonicity, log-concavity, and monotone hazard rate. With an extra trick, using Kolmogorov's max inequality, it can also test unimodality.
\section{Probability distances}
We use the following probability distances in our paper.
\begin{definition}
The \emph{total variation distance} between distributions
$p$ and $q$ is defined as
$$d_{\mathrm {TV}}(p,q) {\hfill\blksquare}\medskip \sup_{A} |p(A) - q(A)| = \frac12\|p - q\|_1.$$
\end{definition}
For a subset of the domain, the total variation distance is defined as
half of the $\ell_1$ distance restricted to the subset.
\begin{definition} \label{def:chisq}
The \emph{$\chi^2$-distance} between $p$ and $q$ over $[n]$ is defined by
$$ \chi^2(p,q) {\hfill\blksquare}\medskip \sum_{i \in [n]}\frac{(p_i-q_i)^2}{q_i} = \left[\sum_{i \in [n]}\frac{p_i^2}{q_i}\right]-1.$$
\end{definition}
\begin{definition}
The \emph{Kolmogorov distance} between two probability measures
$p$ and $q$ over an ordered set (e.g., $\mathbb{R}$) with cumulative
distribution functions (CDFs) $F_p$ and $F_q$ is defined as
$$d_{\mathrm K}(p,q) := \sup_{x \in \mathbb{R}} |F_p(x) - F_q(x)|.$$
\end{definition}
\section{Preliminaries}
Our paper is primarily concerned with testing against classes of distributions, defined formally as follows:
\begin{definition}
Given $\varepsilon \in (0,1]$ and sample access to a distribution $p$, an algorithm is said to \emph{test} a class $\mathcal{C}$ if it has the following guarantees:
\begin{itemize}
\item If $p \in \mathcal{C}$, the algorithm outputs \textsc{Accept}\xspace with probability at least $2/3$;
\item If $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$, the algorithm outputs \textsc{Reject}\xspace with probability at least $2/3$.
\end{itemize}
\end{definition}
The Dvoretzky-Kiefer-Wolfowitz (DKW) inequality gives a generic algorithm for learning any distribution with respect to the Kolmogorov distance~\cite{DvoretzkyKW56}.
\begin{lemma}[see~\cite{DvoretzkyKW56,Massart90}]
\label{lem:dkw}
Suppose we have $n$ \textit{i.i.d.} samples $X_1, \dots, X_n$ from a distribution with CDF $F$.
Let $F_n(x) := \frac{1}{n}\sum_{i=1}^n \mathbf{1}_{\{X_i \leq x\}}$ be the empirical CDF.
Then $\Pr[d_{\mathrm K}(F,F_n) \geq \varepsilon] \leq 2e^{-2n\varepsilon^2}$.
In particular, if $n = \Omega((1/\varepsilon^2) \cdot \log(1/\delta))$, then $\Pr[d_{\mathrm K}(F,F_n) \geq \varepsilon] \leq \delta$.
\end{lemma}
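As a quick illustration, the empirical CDF and the DKW sample bound can be sketched as follows (a minimal sketch; the helper names are ours, not from the paper):

```python
import math

def empirical_cdf(samples, x):
    """F_n(x): fraction of samples that are <= x."""
    return sum(1 for s in samples if s <= x) / len(samples)

def kolmogorov_distance(samples, true_cdf, domain):
    """d_K between the empirical CDF of `samples` and `true_cdf`,
    evaluated over a finite ordered `domain`."""
    return max(abs(empirical_cdf(samples, x) - true_cdf(x)) for x in domain)

def dkw_sample_bound(eps, delta):
    """Number of samples sufficient for d_K(F, F_n) <= eps with
    probability >= 1 - delta, from 2 e^{-2 n eps^2} <= delta."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

# Toy check: 4 samples against the uniform distribution over {1,2,3,4}.
samples = [1, 1, 2, 4]
d = kolmogorov_distance(samples, lambda x: x / 4, [1, 2, 3, 4])  # d = 0.25
```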
We note the following useful relationships between these distances~\cite{GibbsS02}:
\begin{proposition}
\label{prop:distance-relations}
$d_{\mathrm K}(p,q)^2 \leq d_{\mathrm {TV}}(p,q)^2 \leq \frac14 \chi^2(p,q)$.
\end{proposition}
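These relationships are easy to check numerically on a small example (an illustrative sketch; the function names are ours):

```python
def tv_distance(p, q):
    """Total variation distance: half the l1 distance."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def chi_squared(p, q):
    """chi^2-distance of p from q (q must have full support)."""
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))

def kolmogorov(p, q):
    """Kolmogorov distance over the ordered domain [n]."""
    d, Fp, Fq = 0.0, 0.0, 0.0
    for pi, qi in zip(p, q):
        Fp, Fq = Fp + pi, Fq + qi
        d = max(d, abs(Fp - Fq))
    return d

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
# The chain d_K^2 <= d_TV^2 <= chi^2 / 4 from the proposition:
assert kolmogorov(p, q) ** 2 <= tv_distance(p, q) ** 2 <= chi_squared(p, q) / 4
```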
In this paper, we will consider the following classes of distributions:
\begin{itemize}
\item Monotone distributions over $[n]^d$ (denoted by $\mathcal{M}_n^d$), for which $i \preceq j$ (coordinate-wise) implies $f_i \geq f_j$\footnote{This definition describes monotone non-increasing distributions. By symmetry, identical results hold for monotone non-decreasing distributions.};
\item Unimodal distributions over $[n]$ (denoted by $\mathcal{U}_n$), for which there exists an $i^*$ such that $f_i$ is non-decreasing for $i \leq i^*$ and non-increasing for $i \geq i^*$;
\item Log-concave distributions over $[n]$ (denoted by $\mathcal{LCD}_n$), the
sub-class of unimodal distributions for which $f_{i-1}f_{i+1} \leq f_i^2$;
\item Monotone hazard rate (MHR) distributions over $[n]$ (denoted by $\mathcal{MHR}_n$), for which $i < j$ implies $\frac{f_i}{1 - F_i} \leq \frac{f_j}{1 - F_j}$.
\end{itemize}
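For concreteness, the one-dimensional membership conditions above can be sketched as simple predicates on a pmf given as a list $f_1, \ldots, f_n$ (our own illustrative helpers, following the definitions literally):

```python
def is_monotone(f):
    """Monotone non-increasing over [n]."""
    return all(f[i] >= f[i + 1] for i in range(len(f) - 1))

def is_unimodal(f):
    """Non-decreasing up to some mode i*, non-increasing afterwards."""
    i = 0
    while i + 1 < len(f) and f[i] <= f[i + 1]:
        i += 1
    return all(f[j] >= f[j + 1] for j in range(i, len(f) - 1))

def is_log_concave(f):
    """Unimodal with f[i-1] * f[i+1] <= f[i]^2."""
    return is_unimodal(f) and all(
        f[i - 1] * f[i + 1] <= f[i] ** 2 for i in range(1, len(f) - 1))

def is_mhr(f):
    """Hazard rate f_i / (1 - F_i) non-decreasing, with F the CDF;
    indices where 1 - F_i = 0 are skipped."""
    rates, F = [], 0.0
    for fi in f:
        F += fi
        if 1 - F > 1e-12:
            rates.append(fi / (1 - F))
    return all(rates[i] <= rates[i + 1] for i in range(len(rates) - 1))
```

For example, a (truncated) geometric distribution has constant hazard rate and therefore lies in $\mathcal{MHR}_n$.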
\begin{definition}
An \emph{$\eta$-effective support} of a distribution $p$ is any set $S$ such that $p(S) \geq 1 - \eta$.
\end{definition}
The \emph{flattening} of a function $f$ over a subset $S$ is the function $\bar{f}$ with
$\bar{f}_i= f(S)/|S|$ for all $i \in S$, where $f(S) = \sum_{i \in S} f_i$.
\begin{definition}
\label{def:flattening}
Let $p$ be a distribution, and let $I_1, \ldots, I_k$ be a partition of
its domain into intervals. The flattening of $p$ with respect to $I_1,
\ldots, I_k$ is the distribution $\bar p$ obtained by flattening $p$
over each of the intervals $I_1, \ldots, I_k$.
\end{definition}
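A short sketch of both operations (our own illustrative code; a distribution is a list of probabilities, and a partition is a list of index lists):

```python
def flatten_over(f, S):
    """Flattening of f over the index set S: spread f(S) uniformly on S."""
    avg = sum(f[i] for i in S) / len(S)
    out = list(f)
    for i in S:
        out[i] = avg
    return out

def flatten_partition(p, intervals):
    """Flattening of p with respect to a partition of the domain into
    intervals, each given as a list of indices."""
    out = list(p)
    for I in intervals:
        out = flatten_over(out, I)
    return out

p = [0.4, 0.2, 0.3, 0.1]
bar_p = flatten_partition(p, [[0, 1], [2, 3]])
# bar_p is approximately [0.3, 0.3, 0.2, 0.2]; total mass is preserved.
```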
\paragraph{Poisson Sampling.}
Throughout this paper, we use the standard Poissonization
approach.
Instead of drawing exactly $m$ samples from a distribution $p$, we first draw $m' \sim \mathrm{Poisson}(m)$, and then draw $m'$ samples from $p$.
As a result, the numbers of occurrences of the different elements in the support of $p$ are mutually independent, which greatly simplifies the analysis.
In particular, the number of times we observe domain element $i$ is distributed as $\mathrm{Poisson}(mp_i)$, independently for each $i$.
Since $\mathrm{Poisson}(m)$ is tightly concentrated around $m$, this additional flexibility comes at only a sub-constant cost in the sample complexity,
with an additive increase in the error probability that is inversely exponential in $m$.
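A sketch of this sampling procedure (our own illustrative code, not from the paper; Knuth's Poisson sampler is adequate for moderate $m$):

```python
import math
import random

def poisson_sample(rng, lam):
    """Draw from Poisson(lam) via Knuth's method (fine for moderate lam)."""
    threshold = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

def poissonized_counts(rng, p, m):
    """Draw m' ~ Poisson(m), then m' i.i.d. samples from p over [n];
    return the histogram (N_1, ..., N_n). Under Poissonization each
    N_i ~ Poisson(m * p_i), independently across i."""
    counts = [0] * len(p)
    for _ in range(poisson_sample(rng, m)):
        u, F = rng.random(), 0.0
        for i, pi in enumerate(p):
            F += pi
            if u <= F:
                counts[i] += 1
                break
    return counts

rng = random.Random(0)
counts = poissonized_counts(rng, [0.5, 0.3, 0.2], 100)
# sum(counts) concentrates around m = 100.
```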
\section{Introduction} \label{sec:introduction}
The quintessential scientific question is whether an unknown object
has some property, i.e. whether a model from a specific class fits
the object's observed behavior. If the unknown object is
a probability distribution, $p$, to which we have sample access,
we are typically called to distinguish from samples whether $p$ belongs to some class
$\mathcal{C}$ or whether it is sufficiently far from it.
This question has received tremendous attention in Statistics, where
test statistics for important properties such as the ones we
consider here have been proposed. Nevertheless, the
emphasis has been on asymptotic
analysis, characterizing rates of convergence of test statistics under null
hypotheses, as the number of samples tends to infinity. In contrast,
we study the small-sample regime, focusing on the following
problem:
\vspace{2ex}
\begin{center}
\smallskip \framebox{
\begin{minipage}{13.5cm} $\Pi(\mathcal{C},\epsilon)$: Given a family of distributions $\mathcal{C}$, some
$\epsilon>0$, and sample access to an unknown distribution $p$ over some
discrete support, how many samples are required to distinguish
between $p \in \mathcal{C}$ versus $d_{\mathrm {TV}}(p,\mathcal{C})>\epsilon$?
\end{minipage}
}
\end{center}
\vspace{2ex}
The problem has been studied intensely in the literature on Property
Testing and Sublinear Algorithms, where the emphasis has been on
characterizing the optimal tradeoff between $p$'s support size and
the accuracy $\epsilon$ in the number of samples. Several results have
been obtained, roughly clustering into three groups, where (i) $\mathcal{C}$
is the class of monotone distributions over $[n]$, or more generally a
poset---see e.g.~\cite{BatuKR04,Bhattacharyya11}; (ii) $\mathcal{C}$ is the
class of independent, or $k$-wise independent distributions over a
hypergrid---see, e.g., \cite{batu2001testing,alon2007testing}; and
(iii) $\mathcal{C}$ contains a single-distribution $q$, and the problem
becomes that of testing whether $p$ equals $q$ or is far from
it---see, e.g.~\cite{batu2001testing,Paninski08, valiant2014automatic}.
With respect to (iii), \cite{valiant2014automatic} characterize exactly the number of samples required to test identity to each distribution $q$, providing a single tester matching this bound simultaneously for all $q$. Nevertheless, this tester and its precursors are not applicable to the composite identity testing problem that we consider. If our class $\mathcal{C}$ were finite, we could test against each element in the class, albeit this would not necessarily be sample optimal. If our class $\mathcal{C}$ is a continuum, though, we need tolerant identity testers, which tend to be more expensive in terms of the number of samples and result in substantially suboptimal testers for the classes we consider. We could also use approaches related to the generalized likelihood ratio test, but its behavior is not well understood in our regime, and optimizing the likelihood over our classes is computationally intensive.
Our problem falls in the general framework of property
testing~\cite{Goldreich98, Fischer01, Rubinfeld06, Ron08,canonne2015survey}, where the
objective is to decide whether an object has a certain property, or is
\emph{far} from having the property. The objects in our case are
distributions, and the properties are membership in \emph{simple}
classes of distributions $\mathcal{C}$. The objective of property testing is
to solve such problems with as few samples as possible, and as fast
(computationally) as possible. This is different from the extensively
studied (see, e.g.,~\cite{Fisher25,lehmann2006testing}) classic
problem of (composite) hypothesis testing in statistics, where the
number of samples is taken to infinity and error exponents and
consistency are studied.
In this paper, we obtain sample-optimal and computationally efficient testers for $\Pi(\mathcal{C},\epsilon)$ for the most basic shape restrictions to a distribution. Our contributions are the following:
\begin{enumerate}
\item
For a known distribution $q$ over $[n]$ and samples from an unknown
distribution $p$, we show that $O(\sqrt{n}/\varepsilon^2)$ samples suffice
to distinguish the case $(a)$ that the $\chi^2$ distance between $p$ and $q$
is at most $\varepsilon^2/2$ from the case $(b)$ that the $\ell_1$ distance
between $p$ and $q$ is at least $\varepsilon$. As a corollary, we provide
simpler arguments showing that identity testing requires $\Theta(\sqrt n/\varepsilon^2)$ samples.
\item For the class $\mathcal{C}=\mathcal{M}_n^d$ of monotone distributions over
$[n]^d$ we require an optimal $\Theta\left({n^{d/2} \over
\varepsilon^2}\right)$ number of samples, where prior work requires
$\Omega\left({\sqrt{n} \log n \over \varepsilon^4}\right)$ samples for $d=1$
and $\tilde{\Omega}\left(n^{d-{1\over 2}} {\rm poly}({1 \over
\varepsilon})\right)$ for $d>1$~\cite{BatuKR04,Bhattacharyya11}. Our
results improve the exponent of $n$ with respect to $d$, shave all
logarithmic factors in $n$, and improve the exponent of $\varepsilon$ by at
least a factor of $2$.
\begin{enumerate}
\item A useful building block and interesting byproduct of our
analysis is extending Birg\'e's oblivious decomposition for
single-dimensional monotone distributions~\cite{Birge87} to monotone
distributions in $d\ge1$ dimensions, and to the stronger notion of $\chi^2$
distance. See Section~\ref{sec:hypergrid}.
\item Moreover, we show that $O\left((\log n)^{d}\right)$ samples suffice to learn a monotone
distribution over $[n]^d$ in $\chi^2$ distance. See
Lemma~\ref{lem:fin-learn-mon} for the precise statement.
\end{enumerate}
\item For the classes $\mathcal{C}=\mathcal{LCD}_n$, $\mathcal{C}=\mathcal{MHR}_n$ and $\mathcal{C}=\mathcal{U}_n$ of log-concave, monotone-hazard-rate and unimodal distributions over $[n]$, we require an optimal $\Theta\left({\sqrt{n} \over \varepsilon^2}\right)$ number of samples. Our testers for $\mathcal{LCD}_n$ and $\mathcal{MHR}_n$ are, to our knowledge, the first for these classes in the low sample regime we are studying---see~\cite{hall2005testing} and its references for the Statistics literature on the asymptotic regime. Our tester for $\mathcal{U}_n$ improves the dependence of the sample complexity on $\varepsilon$ by at least a factor of $2$ in the exponent, and shaves all logarithmic factors in $n$, compared to testers based on testing monotonicity.
\begin{enumerate}
\item A useful building block and important byproduct of our analysis are the first computationally efficient algorithms for properly learning log-concave and monotone-hazard-rate distributions, to within $\epsilon$ in total variation distance, from ${\rm poly}(1/\epsilon)$ samples, independent of the domain size $n$. See Corollaries~\ref{cor:learning LCD} and~\ref{cor:learning MHR}. Again, these are the first computationally efficient algorithms to our knowledge in the low sample regime. \cite{ChanDSS13b} provide algorithms for density estimation, which are non-proper, i.e., they approximate an unknown distribution from these classes with a distribution that does not belong to these classes. On the other hand, the Statistics literature focuses on maximum-likelihood estimation in the asymptotic regime---see, e.g., \cite{cule2010theoretical} and its references.
\end{enumerate}
\item For all the above classes we obtain matching lower bounds, showing that the sample complexity of our testers is optimal with respect to $n$, $\varepsilon$ and when applicable $d$. See Section~\ref{sec:lower-bounds}. Our lower bounds are based on extending Paninski's lower bound~\cite{Paninski08}.
\end{enumerate}
At the heart of our tester lies a novel use of the $\chi^2$ statistic. Naturally, the $\chi^2$ statistic and the related $\ell_2$ statistic have been used in several of the afore-cited results. We propose a new use of the $\chi^2$ statistic that enables our optimal sample complexity. The essence of our approach is to first draw a small number of samples (independent of $n$ for log-concave and monotone-hazard-rate distributions, and only logarithmic in $n$ for monotone and unimodal distributions) to approximate the unknown distribution $p$ in $\chi^2$ distance. If $p \in \mathcal{C}$, our learner is required to output a distribution $q$ that is $O(\epsilon)$-close to $\mathcal{C}$ in total variation and $O(\epsilon^2)$-close to $p$ in $\chi^2$ distance. A short analysis then reduces our testing problem to telling apart the following cases:
\begin{itemize}
\item $p$ and $q$ are $O(\epsilon^2)$-close in $\chi^2$ distance; this case corresponds to $p \in \mathcal{C}$.
\item $p$ and $q$ are $\Omega(\epsilon)$-far in total variation distance; this case corresponds to $d_{\mathrm {TV}}(p,\mathcal{C})>\varepsilon$.
\end{itemize}
In Section~\ref{sec:testing}, we show that a version of the $\chi^2$
statistic, appropriately excluding certain elements of the support,
is sufficiently well-concentrated to distinguish between the above
cases. Moreover, the sample complexity of our algorithm is optimal for
most classes.
Our tester, combined with the afore-mentioned extension of
Birg\'e's decomposition theorem is used in Section~\ref{sec:monotone}
to test monotone distributions. See Theorem~\ref{thm:monotone-final}
and Corollary~\ref{cor:high-d}.
Naturally, there are several bells and whistles that we need to add to the above skeleton to accommodate all classes of distributions that we are considering. For log-concave and monotone-hazard distributions, we are unable to obtain a cheap (in terms of samples) learner that $\chi^2$-approximates the unknown distribution $p$ throughout its support. Still, we can identify a subset of the support where the $\chi^2$-approximation is tight and which captures almost all the probability mass of $p$. We extend our tester to accommodate excluding subsets of the support from the $\chi^2$-approximation. See Theorems~\ref{thm:lcd-main} and~\ref{thm:mhr-main} in Sections~\ref{sec:LCD} and \ref{sec:MHR}, which are in the appendix due to lack of space. Some discussion is provided in Section~\ref{sec:blurb}.
For unimodal distributions, we are even unable to identify a large enough subset of the support where the $\chi^2$ approximation is guaranteed to be tight. But we can show that there exists a light enough piece of the support (in terms of probability mass under $p$) that we can exclude to make the $\chi^2$ approximation tight. Given that we only use Chebyshev's inequality to prove the concentration of the test statistic, it would seem that our lack of knowledge of the piece to exclude would involve a union bound and a corresponding increase in the required number of samples. We avoid this through a nice application of Kolmogorov's max inequality in our setting. See Theorem~\ref{thm:unimodality} of Section~\ref{sec:unimodal}, which is in the appendix due to lack of space. Some discussion is provided in Section~\ref{sec:blurb}.
\paragraph{Related Work.} We cannot do justice to the role of shape
restrictions of probability distributions in probabilistic modeling
and testing. It suffices to say that the classes of distributions that
we study are fundamental, motivating extensive literature on their
learning and testing~\cite{BBBB:72}. In recent times, there has
been work on shape-restricted statistics, pioneered by Jon Wellner
and others~\cite{JW:09,BW10sn, BJR11,SumardW14}.
Due to the sheer volume of literature in statistics in this field, we
will restrict ourselves to those already referenced.
As we have mentioned, Statistics has focused on the asymptotic
regime as the number of samples tends to infinity. Instead we are
considering the low sample regime and are more stringent about the
behavior of our testers, requiring $2$-sided guarantees. We want to
accept if the unknown distribution is in our class of interest, and
also reject if it is far from the class. For this problem, as
discussed above, there are few results when $\mathcal{C}$ is a whole class of
distributions. Closer related to our paper is the line of
papers~\cite{BatuKR04, ACS10, Bhattacharyya11} for monotonicity testing, albeit these
papers have sub-optimal sample complexity as discussed above. More
recently, Acharya and Daskalakis provide optimal testers for the
family of Poisson Binomial Distributions~\cite{AcharyaD15}. Finally,
contemporaneous work of Canonne et al.~\cite{CanonneDGR15} provides
testers for the single-dimensional families of distributions
considered here, albeit their sample complexity is suboptimal in both
$n$ and $\epsilon$.
\input{preliminaries}
\input{overview}
\input{testing}
\input{monotone}
\input{unimodal-main}
\input{otherclasses}
\input{lower-bounds}
\input{experiments}
\bibliographystyle{IEEEtran}
\section{Moments of the Chi-Squared Statistic}
\label{sec:chisq-moments}
We analyze the mean and variance of the statistic
$$ Z = \sum_{i \in \mathcal{A}} \frac{(X_i - mq_i)^2 - X_i}{mq_i},$$
where each $X_i$ is independently distributed according to $\mathrm{Poisson}(mp_i)$.
We start with the mean:
\begin{align*}
\expectation{Z} &= \sum_{i \in \mathcal{A}} \expectation{\frac{(X_i - mq_i)^2 - X_i}{mq_i}} \nonumber \\
&= \sum_{i \in \mathcal{A}} \frac{\expectation{X_i^2} -
2mq_i\expectation{X_i} + m^2q_i^2 - \expectation{X_i}}{mq_i} \nonumber \\
&= \sum_{i \in \mathcal{A}} \frac{m^2p_i^2 + m p_i - 2m^2q_ip_i + m^2q_i^2 - m p_i}{mq_i} \nonumber \\
&= m \sum_{i \in \mathcal{A}} \frac{(p_i - q_i)^2}{q_i} \nonumber\\
&= m \cdot \chi^2(p_\mathcal{A},q_\mathcal{A}) \nonumber
\end{align*}
Next, we analyze the variance.
Let $\lambda_i = {\bf E}{X_i}=m p_i$ and $\lambda_i' = m q_i$.
\begin{align}
\Var{Z} &= \sum_{i \in \mathcal{A}}\frac{1}{\lambda_i'^2}\Var{(X_i - \lambda_i)^2 + 2(X_i - \lambda_i)(\lambda_i - \lambda_i') - (X_i - \lambda_i)} \nonumber \\
&= \sum_{i \in \mathcal{A}}\frac{1}{\lambda_i'^2}\Var{(X_i - \lambda_i)^2 + (X_i - \lambda_i)(2\lambda_i -2\lambda_i' - 1) } \nonumber\\
&= \sum_{i \in \mathcal{A}}\frac{1}{\lambda_i'^2}{\bf E}{(X_i - \lambda_i)^4 + 2(X_i - \lambda_i)^3(2\lambda_i -2\lambda_i' - 1) + (X_i - \lambda_i)^2(2\lambda_i -2\lambda_i' - 1)^2 - \lambda_i^2} \nonumber\\
&= \sum_{i \in \mathcal{A}}\frac{1}{\lambda_i'^2}[3\lambda_i^2 + \lambda_i + 2\lambda_i(2\lambda_i - 2\lambda_i' - 1) + \lambda_i(2\lambda_i - 2\lambda_i' - 1)^2 - \lambda_i^2] \nonumber\\
&= \sum_{i \in \mathcal{A}}\frac{1}{\lambda_i'^2}[2\lambda_i^2 + \lambda_i + 4\lambda_i(\lambda_i - \lambda_i') - 2\lambda_i + \lambda_i(4(\lambda_i - \lambda_i')^2 -4(\lambda_i - \lambda_i') + 1)] \nonumber\\
&= \sum_{i \in \mathcal{A}}\frac{1}{\lambda_i'^2}[2\lambda_i^2 + 4\lambda_i(\lambda_i - \lambda_i')^2 ] \nonumber\\
&= \sum_{i \in \mathcal{A}}\left[2 \frac{p_i^2}{q_i^2}+4m\cdot\frac{p_i\cdot(p_i-q_i)^2}{q_i^2}\right]
\end{align}
The third equality uses that $X_i$ has mean $\lambda_i$, and the fourth equality substitutes the central moments of the Poisson distribution.
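For reference, the central moments of a $\mathrm{Poisson}(\lambda)$ random variable $X$ that enter this substitution are
\[
{\bf E}{(X - \lambda)^2} = \lambda, \qquad {\bf E}{(X - \lambda)^3} = \lambda, \qquad {\bf E}{(X - \lambda)^4} = 3\lambda^2 + \lambda.
\]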
\section{Analysis of our $\chi^2$-Test Statistic}
\label{sec:chisq-analysis}
We first prove the key lemmas in the analysis of our $\chi^2$-test.
\begin{prevproof}{Lemma}{lem:means}
The former case follows directly from (\ref{eqn:mean}) and \ref{prp:in-chisq}.
We turn to the latter case.
Recall that $\mathcal{A}= \{i:q_i \geq \varepsilon/50n\}$, and thus $q(\bar \mathcal{A}) \leq \varepsilon/50$.
We first show that $d_{\mathrm {TV}}(p_{\mathcal{A}}, q_{\mathcal{A}})\geq \frac{6\varepsilon}{25}$, where $p_{\mathcal{A}}, q_{\mathcal{A}}$ are
defined as above and in our slight abuse of notation we use $d_{\mathrm {TV}}(p_{\mathcal{A}}, q_{\mathcal{A}})$ for non-probability vectors to denote $\frac12\|p_{\mathcal{A}} - q_{\mathcal{A}}\|_1.$
Partitioning the support into $\mathcal{A}$ and $\compl{\mathcal{A}}$, we have
\begin{align}
d_{\mathrm {TV}}(p,q)=d_{\mathrm {TV}}(p_{\mathcal{A}}, q_{\mathcal{A}})+d_{\mathrm {TV}}(p_{\compl{\mathcal{A}}}, q_{\compl{\mathcal{A}}}).\label{eqn:tv-decomp}
\end{align}
We consider the following cases separately:
\begin{itemize}
\item
{\bf $p(\compl{\mathcal{A}})\le \varepsilon/2$:} In this case,
\begin{align}
d_{\mathrm {TV}}(p_{\compl{\mathcal{A}}}, q_{\compl{\mathcal{A}}}) = \frac12 \sum_{i \in \compl{\mathcal{A}}} |p_i - q_i| \leq
\frac12 (p(\compl{\mathcal{A}})+q(\compl{\mathcal{A}})) \le \frac{1}{2}\left(\frac{\varepsilon}{2} + \frac{\varepsilon}{50}\right) = \frac{13\varepsilon}{50}.\nonumber
\end{align}
Plugging this in~\eqref{eqn:tv-decomp}, and using the fact that $d_{\mathrm {TV}}(p,q) \geq \varepsilon$
shows that $d_{\mathrm {TV}}(p_{\mathcal{A}}, q_{\mathcal{A}}) \geq \frac{6\varepsilon}{25}$.
\item
{\bf $p(\compl{\mathcal{A}})> \varepsilon/2$:} In this case, by the reverse triangle inequality,
\begin{align}
d_{\mathrm {TV}}(p_{\mathcal{A}}, q_{\mathcal{A}}) \geq \frac12 (q(\mathcal{A})-p(\mathcal{A})) \geq \frac12 ((1 - \varepsilon/50) - (1 - \varepsilon/2)) = \frac{6\varepsilon}{25} .\nonumber
\end{align}
\end{itemize}
By the Cauchy-Schwarz inequality,
\begin{align}
\chi^2(p_{\mathcal{A}}, q_{\mathcal{A}}) &\ge 4\frac{d_{\mathrm {TV}}(p_{\mathcal{A}},q_{\mathcal{A}})^2}{q(\mathcal{A})}\nonumber\\
&\geq \frac{\varepsilon^2}{5}. \nonumber
\end{align}
We conclude by recalling~\eqref{eqn:mean}.
\end{prevproof}
\begin{prevproof}{Lemma}{lem:vars}
We bound the terms of (\ref{eqn:variance}) separately, starting with the first.
\begin{align}
2\sum_{i \in \mathcal{A}} \frac{p_i^2}{q_i^2} &= 2\sum_{i \in \mathcal{A}} \left(\frac{(p_i - q_i)^2}{q_i^2} + \frac{2p_iq_i - q_i^2}{q_i^2}\right) \nonumber \\
&= 2\sum_{i \in \mathcal{A}} \left(\frac{(p_i - q_i)^2}{q_i^2} + \frac{2q_i(p_i - q_i) + q_i^2}{q_i^2}\right) \nonumber\\
&\leq 2n + 2\sum_{i \in \mathcal{A}} \left(\frac{(p_i - q_i)^2}{q_i^2} + 2\frac{(p_i - q_i)}{q_i}\right) \nonumber\\
&\leq 4n + 4\sum_{i \in \mathcal{A}} \frac{(p_i - q_i)^2}{q_i^2} \nonumber\\
&\leq 4n + \frac{200n}{\varepsilon} \sum_{i \in \mathcal{A}} \frac{(p_i - q_i)^2}{q_i}\nonumber\\
&= 4n + \frac{200n}{\varepsilon}\frac{E[Z]}{m} \nonumber\\
&\leq 4n + \frac{1}{100}\sqrt{n} E[Z]\label{eq:first-var-term-in}
\end{align}
The second inequality is the AM-GM inequality, the third inequality uses that $q_i \geq \frac{\varepsilon}{50n}$ for all $i \in \mathcal{A}$, the last equality uses \eqref{eqn:mean}, and the final inequality substitutes a value $m \geq 20000\frac{\sqrt{n}}{\varepsilon^2}$.
The second term can be similarly bounded:
\begin{align*}
4m \sum_{i \in \mathcal{A}} \frac{p_i(p_i - q_i)^2}{q_i^2} &\leq 4m \left(\sum_{i \in \mathcal{A}} \frac{p_i^2}{q_i^2}\right)^{1/2}\left(\sum_{i \in \mathcal{A}} \frac{(p_i - q_i)^4}{q_i^2}\right)^{1/2} \\
&\leq 4m \left(4n + \frac{1}{100}\sqrt{n} E[Z] \right)^{1/2}\left(\sum_{i \in \mathcal{A}} \frac{(p_i - q_i)^4}{q_i^2}\right)^{1/2} \\
&\leq 4m \left(2\sqrt{n} + \frac{1}{10}n^{1/4} E[Z]^{1/2}\right)\left(\sum_{i \in \mathcal{A}} \frac{(p_i - q_i)^2}{q_i}\right) \\
&= \left(8\sqrt{n} + \frac{2}{5}n^{1/4} E[Z]^{1/2}\right)E[Z] \\
\end{align*}
The first inequality is Cauchy-Schwarz, the second inequality uses (\ref{eq:first-var-term-in}), the third inequality uses the monotonicity of the $\ell_p$ norms, and the equality uses~\eqref{eqn:mean}.
Combining the two terms, we get
$$\Var{Z} \leq 4n + 9\sqrt{n} {\bf E}{Z} + \frac{2}{5}n^{1/4} {\bf E}{Z}^{3/2} .$$
We now consider the two cases in the statement of our lemma.
\begin{itemize}
\item
When $p \in \mathcal{C}$, we know from Lemma~\ref{lem:means} that ${\bf E}{Z} \leq \frac{1}{500} m\varepsilon^2$. Combined with a choice of $m \geq 20000 \frac{\sqrt{n}}{\varepsilon^2}$ and the above expression for the variance, this gives:
$$\Var{Z} \leq \frac{4}{20000^2}m^2\varepsilon^4 + \frac{9}{20000 \cdot 500}m^2\varepsilon^4 + \frac{\sqrt{10}}{12500000}m^2\varepsilon^4 \leq \frac{1}{500000}m^2\varepsilon^4.$$
\item When $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$, Lemma~\ref{lem:means} and $m \geq 20000\frac{\sqrt{n}}{\varepsilon^2}$ give:
$${\bf E}{Z} \geq \frac{1}{5}m\varepsilon^2 \geq 4000\sqrt{n}.$$
Combining this with our expression for variance we get:
$$\Var{Z} \leq \frac{4}{4000^2}{\bf E}{Z}^2 + \frac{9}{4000}{\bf E}{Z}^2 + \frac{2}{5\sqrt{4000}}{\bf E}{Z}^2 \leq \frac{1}{100}{\bf E}{Z}^2.$$
\end{itemize}
\end{prevproof}
\section{A Robust $\chi^2$-$\ell_1$ Identity Test} \label{sec:testing}
{
Our main result in this section is Theorem~\ref{thm:chisq-test}.
As an immediate corollary, we obtain the following result on testing
whether an unknown distribution is close in $\chi^2$ distance or far in
$\ell_1$ distance to a known distribution:
\begin{theorem}
\label{thm:rob-iden}
For a known distribution $q$, there exists an algorithm with sample
complexity
\[
O(\sqrt n/\varepsilon^2)
\]
that distinguishes between the cases
\begin{itemize}
\item
$\chi^2(p,q)<\varepsilon^2/10$\ \ \ \ \emph{versus}
\item
$\|p-q\|_1>\varepsilon$
\end{itemize}
with probability at least $5/6$.
\end{theorem}
This theorem follows from our main result of this section, stated
next, slightly more generally for classes of distributions.
}
\begin{theorem}
\label{thm:chisq-test}
Suppose we are given $\varepsilon \in (0,1]$, a class of probability distributions $\mathcal{C}$, sample access to a distribution $p$ over $[n]$, and an explicit description of a distribution $q$ with the following properties:
\begin{enumerate}[label=\textbf{Property \arabic*.},ref=Property \arabic*,align=left]
\item $d_{\mathrm {TV}}(q,\mathcal{C}) \leq \frac{\varepsilon}{2}$.\label{prp:q-tv}
\item If $p \in \mathcal{C}$, then $\chi^2(p,q) \leq \frac{\varepsilon^2}{500}$. \label{prp:in-chisq}
\end{enumerate}
Then there exists an algorithm with the following guarantees:
\begin{itemize}
\item If $p \in \mathcal{C}$, the algorithm outputs \textsc{Accept}\xspace with probability at least $2/3$;
\item If $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$, the algorithm outputs \textsc{Reject}\xspace with probability at least $2/3$.
\end{itemize}
The time and sample complexity of this algorithm are $O\left(\frac{\sqrt{n}}{\varepsilon^2}\right)$.
\end{theorem}
\begin{remark}
\label{rmk}
As stated in Theorem~\ref{thm:chisq-test}, \ref{prp:in-chisq} requires that $q$ is $O(\varepsilon^2)$-close in $\chi^2$-distance to $p$ over its entire domain.
For the class of monotone distributions, we are able to efficiently obtain such a $q$, which immediately implies sample-optimal learning algorithms for this class.
However, for some classes, we cannot learn a $q$ with such strong guarantees, and we must consider modifications to our base testing algorithm.
For example, for log-concave and monotone hazard rate distributions, we can obtain a distribution $q$ and a set $S$ with the following guarantees:
\begin{itemize}
\item If $p \in \mathcal{C}$, then $\chi^2(p_S,q_S) \leq O(\varepsilon^2)$ and $p(S) \geq 1 - O(\varepsilon)$;
\item If $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$, then $d_{\mathrm {TV}}(p,q) \geq \varepsilon/2$.
\end{itemize}
In this scenario, the tester will simply pretend the support of $p$ and $q$ is $S$, ignoring any samples and support elements in $[n] \setminus S$.
Analysis of this tester is extremely similar to what we present below.
In particular, we can still show that the statistic $Z$ will be separated in the two cases.
When $p \in \mathcal{C}$, excluding $[n] \setminus S$ will only reduce $Z$.
On the other hand, when $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$, since $p(S) \geq 1 - O(\varepsilon)$, $p$ and $q$ must still be far on the remaining support, and we can show that $Z$ is still sufficiently large.
Therefore, a small modification allows us to handle this case with the same sample complexity of $O(\sqrt{n}/\varepsilon^2)$.
A further modification can handle even weaker learning guarantees.
We could handle the previous case because the tester ``knows what we don't know'' -- it can explicitly ignore the support over which we do not have a $\chi^2$-closeness guarantee.
A more difficult case is when there may be a low measure interval hidden in our effective support, over which $p$ and $q$ have a large $\chi^2$-distance.
While we may have insufficient samples to reliably identify this interval, it may still have a large effect on our statistic.
A naive solution would be to consider a tester which tries all possible ``guesses'' for this ``bad'' interval, but a union bound would incur an extra logarithmic factor in the sample complexity.
We manage to avoid this cost through a careful analysis involving Kolmogorov's max inequality, maintaining the $O(\sqrt{n}/\varepsilon^2)$ sample complexity even in this more difficult case.
More precisely, we can handle cases where we can obtain a distribution $q$ and a set of intervals $S = \{I_1,\dots, I_b\}$ with the following guarantees:
\begin{itemize}
\item If $p \in \mathcal{C}$, then $p(S) \geq 1 - O(\varepsilon)$, $p(I_j) = \Theta(p(S)/b)$ for all $j \in [b]$, and there exists a set $T \subseteq [b]$ such that $|T| \geq b - t$ (for $t = O(1)$) and $\chi^2(p_R,q_R) \leq O(\varepsilon^2)$, where $R = \cup_T I_j$;
\item If $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$, then $d_{\mathrm {TV}}(p,q) \geq \varepsilon/2$.
\end{itemize}
This allows us to additionally test against the class of unimodal distributions.
The tester requires that an effective support is divided into several intervals of roughly equal measure.
It computes our statistic over each of these intervals, and we let our statistic $Z$ be the sum of all but the largest $t$ of these values.
In the case when $p \in \mathcal{C}$, $Z$ will only become smaller by performing this operation.
We use Kolmogorov's maximal inequality to show that $Z$ remains large when $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$.
More details on this tester are provided in Section~\ref{sec:unimodal-appendix}.
\end{remark}
\begin{algorithm}[h]
\caption{Chi-squared testing algorithm}\label{alg:testing}
\begin{algorithmic}[1]
\State \textbf{Input:} $\varepsilon$; an explicit distribution $q$; (Poisson) $m$ samples from a distribution $p$, where $N_i$ denotes the number of occurrences of the $i$th domain element.
\State $\mathcal{A} \leftarrow \{i:q_i \geq \varepsilon/50n\}$
\State $Z \leftarrow \sum_{i \in \mathcal{A}} \frac{(N_i - mq_i)^2 - N_i}{mq_i}$
\If {$Z \leq m\varepsilon^2/10$}
\State \Return \textsc{Accept}\xspace
\Else
\State \Return \textsc{Reject}\xspace
\EndIf
\end{algorithmic}
\end{algorithm}
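For concreteness, the statistic and threshold of Algorithm~\ref{alg:testing} can be transcribed directly (a sketch; the counts $N_i$ are assumed to come from Poissonized sampling, and the names are ours):

```python
def chi_squared_test(counts, q, m, eps):
    """Algorithm 1: compute the truncated chi^2 statistic Z over the
    heavy elements A = {i : q_i >= eps/(50n)} and threshold it at
    m * eps^2 / 10. counts[i] is N_i, the number of occurrences of
    element i among Poisson(m) samples from p; q is the explicit
    hypothesis distribution."""
    n = len(q)
    A = [i for i in range(n) if q[i] >= eps / (50 * n)]
    Z = sum(((counts[i] - m * q[i]) ** 2 - counts[i]) / (m * q[i]) for i in A)
    return "Accept" if Z <= m * eps ** 2 / 10 else "Reject"

# Counts matching q exactly give a slightly negative Z (Accept);
# grossly mismatched counts drive Z far above the threshold (Reject).
q = [0.25, 0.25, 0.25, 0.25]
r1 = chi_squared_test([100, 100, 100, 100], q, 400, 0.5)  # "Accept"
r2 = chi_squared_test([400, 0, 0, 0], q, 400, 0.5)        # "Reject"
```

Note that subtracting $N_i$ in the numerator makes each term an unbiased estimator of $m(p_i-q_i)^2/q_i$, which is why exact agreement yields a small negative value rather than zero.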
\begin{prevproof}{Theorem}{thm:chisq-test}
Theorem \ref{thm:chisq-test} is proven by analyzing Algorithm \ref{alg:testing}.
As shown in Section~\ref{sec:chisq-moments}, $Z$ has the following mean and variance:
\begin{equation}
{\bf E}{Z} = m \cdot \sum_{i \in \mathcal{A}} \frac{(p_i - q_i)^2}{q_i} = m \cdot \chi^2(p_\mathcal{A},q_\mathcal{A}) \label{eqn:mean}
\end{equation}
\begin{equation}
\Var{Z} = \sum_{i \in \mathcal{A}}\left[2 \frac{p_i^2}{q_i^2}+4m\cdot\frac{p_i\cdot(p_i-q_i)^2}{q_i^2}\right] \label{eqn:variance}
\end{equation}
where by $p_\mathcal{A}$ and $q_\mathcal{A}$ we denote respectively the vectors $p$ and $q$ restricted to the coordinates in $\mathcal{A}$, and we slightly abuse notation when we write $\chi^2(p_\mathcal{A},q_\mathcal{A})$, as these do not then correspond to probability distributions.
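For intuition on \eqref{eqn:mean}: under Poissonization each $N_i \sim \mathrm{Poisson}(mp_i)$ independently, so for every $i\in\mathcal{A}$,

```latex
\begin{align*}
{\bf E}\left[(N_i - mq_i)^2 - N_i\right]
&= \mathrm{Var}(N_i) + \left({\bf E}\left[N_i\right] - mq_i\right)^2 - {\bf E}\left[N_i\right] \\
&= mp_i + m^2 (p_i - q_i)^2 - mp_i
 = m^2 (p_i - q_i)^2 ,
\end{align*}
```

and dividing by $mq_i$ and summing over $i \in \mathcal{A}$ recovers \eqref{eqn:mean}.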
Lemma~\ref{lem:means} demonstrates the separation in the means of the statistic $Z$ in the
two cases of interest, $i.e.,$ $p \in \mathcal{C}$ versus $d_{\mathrm {TV}}(p,\mathcal{C}) \geq
\varepsilon$, and Lemma~\ref{lem:vars} shows the separation in the variances in the
two cases. These two results are proved in
Section~\ref{sec:chisq-analysis}.
\begin{lemma}
\label{lem:means}
If $p \in \mathcal{C}$, then ${\bf E}{Z} \leq \frac{1}{500}m\varepsilon^2$.
If $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$, then ${\bf E}{Z} \geq \frac{1}{5}m\varepsilon^2$.
\end{lemma}
\begin{lemma}
\label{lem:vars}
If $p \in \mathcal{C}$, then $\Var{Z} \leq \frac{1}{500000}m^2\varepsilon^4$.
If $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$, then $\Var{Z} \leq \frac{1}{100}E[Z]^2$.
\end{lemma}
Assuming Lemmas~\ref{lem:means} and~\ref{lem:vars},
Theorem~\ref{thm:chisq-test} is now a simple application of
Chebyshev's inequality.
When $p \in \mathcal{C}$, we have that
$${\bf E}{Z} + \sqrt{3}\Var{Z}^{1/2} \leq \left(\frac{1}{500} + \sqrt{3}\left(\frac{1}{500000}\right)^{1/2}\right)m\varepsilon^2 \leq \frac{1}{200}m\varepsilon^2.$$
Thus, Chebyshev's inequality gives
$$\Pr\left[Z \geq m\varepsilon^2/10\right] \leq \Pr\left[Z \geq m\varepsilon^2/200\right] \leq \Pr\left[Z - {\bf E}{Z} \geq \sqrt{3}\Var{Z}^{1/2}\right] \leq \frac13.$$
The case for $d_{\mathrm {TV}}(p,\mathcal{C}) \geq \varepsilon$ is similar.
Here,
$${\bf E}{Z} - \sqrt{3}\Var{Z}^{1/2} \geq \left(1 - \sqrt{3}\left(\frac{1}{100}\right)^{1/2}\right)E[Z] \geq 3m\varepsilon^2/20.$$
Therefore,
\[\Pr\left[Z \leq m\varepsilon^2/10\right] \leq \Pr\left[Z \leq 3m\varepsilon^2/20\right] \leq \Pr\left[Z - {\bf E}{Z} \leq - \sqrt{3}\Var{Z}^{1/2}\right] \leq \frac13.\qedhere\]
\end{prevproof}
\section{$t$-modal Distributions}
\label{sec:t-modal}
For a distribution $p$ over $[n]$, $i\in[n]$ is said to be a \emph{mode} if $p(i)-p(i-1)$ and $p(i+1)-p(i)$ have different signs. Distributions with at most $t$ modes are called $t$-modal distributions. Monotone distributions are $0$-modal, and unimodal distributions are $1$-modal. These classes of distributions are extremely useful for modeling mixture distributions; for example, it is well known that a mixture of $t$ Gaussians is $t$-modal.
Recently, it was shown (personal communication~\cite{CanonneDGR15}) that testing $t$-modal distributions is possible with $\tilde{O}(t\sqrt{n}/\varepsilon^4)$ samples. The time complexity is not mentioned explicitly, but we believe a polynomial-time implementation exists.
Note that any $t$-modal distribution can be written as a mixture of at most $t+1$ unimodal distributions; indeed, our results hold for this more general class of mixtures. Since log-concave and monotone hazard rate distributions are unimodal, our results also apply to them.
\subsection{Structural Results}
We observed that for monotone distributions there is an oblivious decomposition of the domain such that the flattening of the underlying distribution over these intervals has small $\chi^2$ distance. We now extend this argument to $t$-modal distributions. Consider any distribution with at most $t$ modes, and let $b$ be a parameter to be chosen later.
Consider a partition of $[n]$ into $L = O(b)$ intervals $I_1,\ldots,I_L$ such that:
\begin{enumerate}
\item
For each element $i$ with probability at least $1/b$, there is an $I_\ell=\{i\}$.
\item
There are at most two intervals with $p(I)\le 1/b$.
\item
Every other interval $I$ satisfies $p(I)\in[\frac1b,\frac2b]$.
\end{enumerate}
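Such a partition can be built greedily when $p$ is known explicitly; the sketch below (illustrative only -- the algorithm works from samples) makes every heavy element a singleton and cuts each light run as soon as its mass reaches $1/b$, so non-singleton intervals receive mass in $[1/b,2/b)$ except for the leftover runs before a heavy element or at the right end:

```python
def greedy_partition(p, b):
    """Greedily partition the indices of p into intervals: elements with
    p_i >= 1/b become singletons; other runs are cut once their mass
    reaches 1/b, so each such interval has mass in [1/b, 2/b)."""
    intervals, start, mass = [], 0, 0.0
    for i, pi in enumerate(p):
        if pi >= 1.0 / b:                       # heavy element: singleton
            if mass > 0:                        # close the pending light run
                intervals.append((start, i - 1))
                mass = 0.0
            intervals.append((i, i))
            start = i + 1
        else:
            mass += pi
            if mass >= 1.0 / b:                 # light run reached 1/b: cut
                intervals.append((start, i))
                start, mass = i + 1, 0.0
    if mass > 0:                                # leftover light run
        intervals.append((start, len(p) - 1))
    return intervals
```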
Let $\bar{\dP}$ be the flattening of $p$ over the intervals satisfying these properties.
We first prove our decomposition result for unimodal distributions and then describe the appropriate extension to mixtures and more modes.
Given any decomposition as above, we remove the intervals $I_j$ for which $p(I_j)/|I_j|<\frac{\varepsilon}{50n}$. This operation essentially removes the elements of tiny probability.
Our key lemma is as follows.
\begin{lemma}
Let $C>2$. For a unimodal distribution over $[n]$, at most $\frac{4\log(50n/\varepsilon)}{C\varepsilon}$ of the intervals $I_j$ satisfy $\frac{\dP_j^+}{\dP_j^-}\ge 1+\varepsilon/C$.
\end{lemma}
\begin{proof}
Suppose for contradiction that there are more than $\frac{4\log(50n/\varepsilon)}{C\varepsilon}$ such intervals. Then at least $j>\frac{2\log(50n/\varepsilon)}{C\varepsilon}$ of them lie on one side of the mode. Since $p$ is monotone on that side, the ratio of the largest to the smallest probability there is at least $(1+\varepsilon/C)^{j}\ge 50n/\varepsilon$, contradicting the fact that we removed all elements of probability below $\varepsilon/(50n)$.
\end{proof}
Let $R=\frac{4\log(50n/\varepsilon)}{C\varepsilon}$ denote this bound on the number of intervals with large ratio.
For $A\in [L]$, let $I_A$ denote the subset of intervals with indices in $A$. The following result is now immediate from the previous lemma.
\begin{lemma}
For any unimodal distribution $p$ over $[n]$, there is a subset of intervals $I_A$, such that
\begin{itemize}
\item
$p(I_A)>1-\varepsilon/50-2R/b$, and
\item
$\chi^2_{{I_A}}(p,\bar{\dP})\le \varepsilon^2/C^2$.
\end{itemize}
\end{lemma}
Consider the following test.
\begin{enumerate}
\item
Take $O(b\log b)$ samples to obtain the intervals satisfying the condition above.
\item
Use $O(b/\varepsilon^2)$ samples to obtain a distribution $q$ that is flat over the
intervals of interest.
\item
If $q$ is at least $\varepsilon/2$-far from the class, output \textsc{Reject}\xspace.
\item
Remove all singletons and intervals with $q(I)/|I|<\varepsilon/(50n)$.
\item
Let $\mathcal{I}_{\rm rem}$ be the set of remaining intervals.
\item
Let $Z_j$ be the $\chi^2$ statistic over the $j$th interval, computed
with $O(\sqrt{n}/\varepsilon^2)$ samples.
\item
Remove the $R$ largest $Z_j$'s.
\item
If $\sum_{j}Z_j>m\varepsilon^2/10$, output \textsc{Reject}\xspace.
\item
Output \textsc{Accept}\xspace.
\end{enumerate}
We now prove the accuracy of the algorithm.
\begin{claim}
If the output $q$ is at most $\varepsilon/2$-far from $\mathcal{C}$, and $p$ is
at least $\varepsilon$-far from $\mathcal{C}$, then $p$ is at least $\varepsilon/5$-far
from $q$ over the intervals in $\mathcal{I}_{\rm rem}$.
\end{claim}
\begin{proof}
Over the singletons, the largest error introduced is at most $\varepsilon/4$,
since we take $O(b/\varepsilon^2)$ samples.
\end{proof}
\section{Details on testing Unimodality}
\label{sec:unimodal}
\label{sec:unimodal-appendix}
Recall that to circumvent Birg\'e's decomposition, we want to decompose the interval into disjoint intervals such that the probability of each interval is about $O(1/b)$, where $b$ is a parameter, specified later. In particular we consider a decomposition of $[n]$ with the following properties:
\begin{enumerate}
\item
For each element $i$ with probability at least $1/b$, there is an $I_\ell=\{i\}$.
\item
There are at most two intervals with $p(I)\le 1/{2b}$.
\item
Every other interval $I$ satisfies $p(I)\in\left[\frac1{2b},\frac2b\right]$.
\end{enumerate}
Let $I_1, \ldots, I_L$ denote the partition of $[n]$ corresponding to these intervals. Note that $L= O(b)$.
\begin{claim}
There is an algorithm that takes $O(b\log b)$ samples and outputs $I_1, \ldots, I_L$ satisfying the properties above.
\end{claim}
The first step in our algorithm is to estimate the \emph{total probability} within each of these intervals.
In particular,
\begin{lemma}
There is an algorithm that takes $m'=O(b\log b/\varepsilon^2)$ samples from a distribution $p$, and with probability at least $9/10$ outputs a distribution $\bar{\dQ}$ that is constant on each $I_j$. Moreover, for any $j$ such that $p(I_j)>1/2b$,
$\bar{\dQ}(I_j)\in(1\pm\varepsilon)p(I_j)$.
\end{lemma}
\begin{proof}
Consider any interval $I_j$ with $p(I_j)\ge 1/2b$. The number of
samples $N_{I_j}$ that fall in that interval is distributed
$\mathrm{Binomial}(m', p(I_j))$. Then by the Chernoff bound, for $m'>12 b\log
b/\varepsilon^2$,
\begin{align}
\probof{|N_{I_j}-m'p(I_j)|>\varepsilon m'p(I_j) }\le& 2\exp\left(-\varepsilon^2m'p(I_j)/2\right)\\
\le & \frac1{b^2},
\end{align}
where the last inequality uses the fact that $p(I_j)\ge 1/2b$.
\end{proof}
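A minimal sketch of this estimation step, assuming the intervals are given as inclusive index pairs (the function name is ours):

```python
from collections import Counter

def empirical_flattening(samples, intervals, n):
    """Estimate p(I_j) by the empirical frequency of I_j among the samples,
    then spread that mass uniformly over I_j; returns the flattened
    estimate q-bar as a length-n list."""
    m = len(samples)
    counts = Counter(samples)
    qbar = [0.0] * n
    for lo, hi in intervals:
        mass = sum(counts[i] for i in range(lo, hi + 1)) / m
        width = hi - lo + 1
        for i in range(lo, hi + 1):
            qbar[i] = mass / width              # constant on the interval
    return qbar
```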
The next step is to estimate the distance of $q$ from $\mathcal{U}_n$. This is possible by a simple dynamic program, similar to the one used for monotonicity.
If the estimated distance is more than $\varepsilon/2$, we output \textsc{Reject}\xspace.
Our next step is to remove certain intervals. This will be to ensure that when the underlying distribution is unimodal, we are able to estimate the distribution \emph{multiplicatively} over the remaining intervals.
In particular, we do the following preprocessing step:
\begin{itemize}
\item
$A= \emptyset$.
\item
For interval $I_j$,
\begin{itemize}
\item
If \begin{align}
q(I_j) &\notin\left((1-\varepsilon)\cdot q(I_{j+1}),
(1+\varepsilon)\cdot q(I_{j+1})\right)\ \ \text{ OR }\\
q(I_j) &\notin\left((1-\varepsilon)\cdot q(I_{j-1}), (1+\varepsilon)\cdot q(I_{j-1})\right),
\end{align}
add $I_j$ to $A$.
\end{itemize}
\item
Add the (at most 2) intervals with mass at most $1/2b$ to $A$.
\item
Add all intervals $I_j$ with $q(I_j)/|I_j|<\varepsilon/(50n)$ to $A$.
\end{itemize}
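The pruning step above can be sketched as follows, taking the per-interval masses $q(I_j)$ and widths $|I_j|$ as inputs (a sketch under the stated conditions; the names are ours):

```python
def removal_set(masses, widths, eps, b, n):
    """Build A: intervals whose mass is not within (1 +/- eps) of a
    neighbour's mass, intervals of mass at most 1/(2b), and intervals of
    average density below eps/(50 n)."""
    L = len(masses)
    A = set()
    for j in range(L):
        for k in (j - 1, j + 1):                # compare with both neighbours
            if 0 <= k < L and not ((1 - eps) * masses[k] < masses[j] < (1 + eps) * masses[k]):
                A.add(j)
        if masses[j] <= 1.0 / (2 * b):          # light interval
            A.add(j)
        if masses[j] / widths[j] < eps / (50 * n):   # tiny average density
            A.add(j)
    return A
```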
If the distribution is unimodal, we can prove the following about the set of intervals ${A^c}$.
\begin{lemma}
If $p$ is unimodal then,
\begin{itemize}
\item
$p(I_{A^c})\ge 1-\varepsilon/25-1/b - O\left(\log n/(\varepsilon b)\right).$
\item
Except for \emph{at most one} interval in $A^c$, every interval $I_j$ satisfies
\[
\frac{\dP_j^+}{\dP_j^-}\le (1+\varepsilon).
\]
\end{itemize}
\end{lemma}
If this holds, then the $\chi^2$ distance between $p$ and $q$ restricted to $A^c$ is at most $\varepsilon^2$.
This lemma follows from the following result.
\begin{lemma}
Let $C>2$. For a unimodal distribution over $[n]$, at most
$\frac{4\log(50n/\varepsilon)}{C\varepsilon}$ of the intervals $I_j$ satisfy $\frac{\dP_j^+}{\dP_j^-}\ge 1+\varepsilon/C$.
\end{lemma}
\begin{proof}
Suppose for contradiction that there are more than $\frac{4\log(50n/\varepsilon)}{C\varepsilon}$ such intervals. Then at least $j>\frac{2\log(50n/\varepsilon)}{C\varepsilon}$ of them lie on one side of the mode. Since $p$ is monotone on that side, the ratio of the largest to the smallest probability there is at least $(1+\varepsilon/C)^{j}\ge 50n/\varepsilon$, contradicting the fact that we removed all elements of probability below $\varepsilon/(50n)$.
\end{proof}
We have one additional pre-processing step: we compute $q(A^c)$, and if it is smaller than $1-\varepsilon/25$, we output \textsc{Reject}\xspace.
Suppose there are $L'$ intervals in $A^c$. Then, for all but at most one of these intervals, the $\chi^2$ distance between $p$ and $q$ is at most $\varepsilon^2$ when $p$ is unimodal, while the total variation distance between $p$ and $q$ over $A^c$ is at least $\varepsilon/2$ when $p$ is far from unimodal. We propose the following simple modification to account for the one interval that might introduce a high $\chi^2$ distance in spite of having small total variation distance. If we knew this interval, we could simply remove it and proceed. Since we do not know where it lies, we do the following.
\begin{enumerate}
\item
Let $Z_j$ be the $\chi^2$ statistic over the $j$th interval in $A^c$, computed
with $O(\sqrt{n}/\varepsilon^2)$ samples.
\item
Let $Z_l$ be the largest among all $Z_j$'s.
\item
If $\sum_{j, j\ne l}Z_j>m\varepsilon^2/10$, output \textsc{Reject}\xspace.
\item
Output \textsc{Accept}\xspace.
\end{enumerate}
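The final decision rule then reads (a sketch; the per-interval $Z_j$ values are assumed precomputed):

```python
def robust_decision(Zs, m, eps):
    """Sum the per-interval chi-squared statistics, discard the single
    largest one (the substitute for not knowing the one bad interval),
    and compare with the m*eps^2/10 threshold."""
    if not Zs:
        return "accept"
    trimmed = sum(Zs) - max(Zs)
    return "reject" if trimmed > m * eps ** 2 / 10 else "accept"
```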
Removing the largest $\chi^2$ statistic is our substitute for not knowing which interval is problematic.
We now prove the correctness of this algorithm.
\paragraph{Case 1 $p\in UM_n$:} We concentrate only on the final step. The $\chi^2$ statistics over all but one interval are at most $c\cdot m\varepsilon^2$, and the variance is bounded as before. Since we remove the largest statistic, the expected value of the new statistic is \emph{dominated} by that over these intervals. Therefore, the algorithm outputs \textsc{Accept}\xspace with at least the same probability as if we had removed the spurious interval.
\paragraph{Case 2 $p\notin UM_n$:}
This is the hard case to prove for unimodal distributions. We know that the $\chi^2$ statistic is large in this case, and we therefore have to prove that it remains large even after removing the largest test statistic $Z_l$.
We invoke Kolmogorov's Maximal Inequality to this end.
\begin{lemma}[Kolmogorov's Maximal Inequality]
For independent zero-mean random variables $X_1,\ldots, X_L$ with finite variance, let $S_\ell=X_1+\cdots+X_\ell$. Then for any $\lambda>0$,
\begin{align}
\probof{\max_{1\le\ell\le L}\left|S_\ell\right|\ge\lambda}\le \frac{1}{\lambda^2}\cdot Var\left(S_L\right).
\end{align}
\end{lemma}
As a corollary, since $X_\ell = S_\ell - S_{\ell-1}$ and hence $|X_\ell|\le|S_\ell|+|S_{\ell-1}|$, it follows that $\probof{\max_{\ell}|X_\ell|>2\lambda}\le \frac{1}{\lambda^2}\cdot Var\left(S_L\right)$.
In our setting, we let $X_\ell=Z_\ell-\expectation{Z_\ell}$. Then, similarly to the earlier computations, and using the fact that each interval has small mass, the variance of $S_L$ is at most $\left(\sum_\ell \expectation{Z_\ell}\right)^2/100$. Taking $\lambda=\Theta(m\varepsilon^2)$ in the corollary, removing the largest $Z_\ell$ changes the statistic by at most $O(\lambda)$, so the statistic does not fall below the $m\varepsilon^2/10$ threshold.
This completes the proof of Theorem~\ref{thm:unimodality}.
\section{Testing Unimodality}
\label{sec:unimodal-main}
One striking feature of Birg\'e's result is that the decomposition of the domain is oblivious to the samples, and therefore to the unknown distribution. However, such an oblivious decomposition will not work for unimodal distributions, since the mode is unknown. If we knew where the mode of the unknown distribution lies, the problem would decompose into monotonicity testing over two intervals. Therefore, in principle, one can modify the monotonicity testing algorithm by iterating over all $n$ possible modes. Indeed, by applying a union bound, it then follows that
\begin{theorem}[Follows from the monotone tester]
For $\varepsilon>1/n^{1/4}$, there exists an algorithm for testing unimodality over $[n]$ with sample complexity $O\left(\frac{\sqrt{n}}{\varepsilon^2}\log n\right)$.
\end{theorem}
However, this is unsatisfactory, since our lower bound (and, as we will demonstrate, the true complexity of this problem) is $\sqrt{n}/\varepsilon^2$.
We overcome the logarithmic barrier introduced by the union bound, by
employing a non-oblivious decomposition of the domain, and using
Kolmogorov's max-inequality.
Our main result for testing unimodality is the following theorem,
which is proved in Section~\ref{sec:unimodal-appendix}.
\begin{theorem}
\label{thm:unimodality}
Suppose $\varepsilon>n^{-1/4}$.
Then there exists an algorithm for testing unimodality over $[n]$ with
sample complexity $O(\sqrt{n}/\varepsilon^2)$.
\end{theorem}
\title{A Weighted Ostrowski Type Inequality for $L_{1}\left[ a,b\right]$ and Applications}
% https://arxiv.org/abs/1401.7007
\begin{abstract}
The aim of this paper is to obtain some generalized weighted Ostrowski inequalities for differentiable mappings. Some well known inequalities can be derived as special cases of the inequalities obtained here. In addition, perturbed mid-point and perturbed trapezoid inequalities are also obtained. The inequalities obtained here have direct applications in Numerical Integration, Probability Theory, Information Theory and Integral Operator Theory. Some of these applications are discussed.
\end{abstract}
\section{Introduction}
Inequalities appear in most domains of Mathematics and have applications in
numerical integration, probability theory, information theory and integral
operator theory. Inequalities as a field came into prominence with the
publication of the book by Hardy, Littlewood and P\'{o}lya \cite{6} in 1934.
In 1938, Ostrowski \cite{8} discovered a useful inequality, now known as
the Ostrowski inequality. In many practical investigations it is necessary
to bound one quantity by another, and the classical Ostrowski inequality is
very useful for this purpose. Beckenbach and Bellman \cite{2} and
Mitrinovi\'{c} \cite{12} highlighted the importance of inequalities in their
respective publications.\bigskip
More recently, new inequalities of Ostrowski type were presented by Dragomir
and Wang \cite{5} in 1997 and by Dragomir and Rassias \cite{4} in 2002. The
weighted version of the Ostrowski inequality was first presented in 1983 by
Pe\v{c}ari\'{c} and Savi\'{c} \cite{9}. In 2003, Roumeliotis \cite{11}
improved the weighted version of Ostrowski-Gr\"{u}ss type inequalities. In
\cite{10} and \cite{7}, Qayyum and Hussain discussed the weighted version of
Ostrowski-Gr\"{u}ss type inequalities. The tool used in this paper is the
weighted Peano kernel approach, which is the classical and extensively used
approach in developing Ostrowski integral inequalities. The results
presented in this paper are very general in nature. The inequalities proved
by Dragomir et al.\ \cite{5}, Barnett et al.\ \cite{1} and Cerone et al.\
\cite{3} are special cases of the inequalities developed here.
Ostrowski \cite{8} proved the following classical integral inequality, which is stated
here without proof.
\begin{theorem}
Let $f:\left[ a,b\right] \rightarrow \mathbb{R}$ be continuous on $\left[ a,b\right]$ and differentiable on $\left( a,b\right)$, whose derivative $f^{\prime }:\left( a,b\right) \rightarrow \mathbb{R}$ is bounded on $\left( a,b\right)$, i.e. $\left\Vert f^{\prime }\right\Vert _{\infty }=\sup_{t\in \left[ a,b\right] }\left\vert f^{\prime }\left( t\right) \right\vert <\infty$. Then
\begin{equation}
\left\vert f(x)-\frac{1}{b-a}\int_{a}^{b}f(t)dt\right\vert \leq \left[ \frac{1}{4}+\frac{\left( x-\frac{a+b}{2}\right) ^{2}}{\left( b-a\right) ^{2}}\right] \left( b-a\right) \left\Vert f^{\prime }\right\Vert _{\infty }
\tag{1.1}
\end{equation}
for all $x\in \left[ a,b\right] $. The constant $\frac{1}{4}$ is sharp in
the sense that it cannot be replaced by a smaller one.
\end{theorem}
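The inequality is easy to check numerically; the helper below (our illustration, not part of the paper) evaluates both sides of $(1.1)$ for a function with known integral and derivative bound:

```python
def ostrowski_sides(f, fprime_sup, a, b, x, integral):
    """Return (lhs, rhs) of the Ostrowski inequality (1.1):
    |f(x) - (1/(b-a)) * int_a^b f| <= [1/4 + (x-(a+b)/2)^2/(b-a)^2] * (b-a) * sup|f'|."""
    lhs = abs(f(x) - integral / (b - a))
    rhs = (0.25 + (x - (a + b) / 2) ** 2 / (b - a) ** 2) * (b - a) * fprime_sup
    return lhs, rhs
```

For $f(t)=t^{2}$ on $[0,1]$ (so $\int_{0}^{1}f=1/3$ and $\left\Vert f^{\prime }\right\Vert _{\infty }=2$) the bound holds at every $x$.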
Dragomir and Wang \cite{5} proved $\left( 1.1\right)$ for $f^{\prime }\in L_{1}\left[ a,b\right]$, as follows:
\begin{theorem}
Let $f:I\subseteq \mathbb{R}\rightarrow \mathbb{R}$ be a differentiable mapping on $I^{\circ }$ and let $a,b\in I^{\circ }$ with $a<b$. If $f^{\prime }\in L_{1}\left[ a,b\right] $, then the following inequality holds
\begin{equation}
\left\vert f(x)-\frac{1}{b-a}\int_{a}^{b}f(t)dt\right\vert \leq \left[ \frac{1}{2}+\frac{\left\vert x-\frac{a+b}{2}\right\vert }{b-a}\right] \left\Vert f^{\prime }\right\Vert _{1} \tag{1.2}
\end{equation}
for all $x\in \left[ a,b\right] $.
\end{theorem}
They also pointed out some applications of $(1.2)$ in numerical integration
as well as for special means.
Barnett et al.\ \cite{1} proved an inequality of Ostrowski type for twice
differentiable mappings in terms of the $\left\Vert \cdot \right\Vert
_{1}$ norm of the second derivative $f^{\prime \prime }$ and applied it in
numerical integration and for some special means.
The following inequality of Ostrowski type for twice differentiable mappings holds \cite{3}.
\begin{theorem}
Let $f:\left[ a,b\right] \rightarrow \mathbb{R}$ be continuous on $\left[ a,b\right]$ and twice differentiable in $\left( a,b\right)$, with $f^{\prime \prime }\in L_{1}\left( a,b\right)$. Then the inequality
\begin{eqnarray}
&&\left\vert f(x)-\frac{1}{b-a}\int_{a}^{b}f(t)dt-\left( x-\frac{a+b}{2}\right) f^{\prime }\left( x\right) \right\vert \label{1.3} \\
&\leq &\frac{1}{2\left( b-a\right) }\left( \left\vert x-\frac{a+b}{2}\right\vert +\frac{1}{2}\left( b-a\right) \right) ^{2}\left\Vert f^{\prime \prime }\right\Vert _{1} \notag
\end{eqnarray}
holds for all $x\in \left[ a,b\right] $.
\end{theorem}
J. Roumeliotis \cite{4} presented product inequalities and weighted
quadrature. The weighted inequality in Lebesgue spaces involving the derivative of the function was also obtained, given by
\begin{eqnarray}
&&\left\vert \frac{1}{b-a}\int_{a}^{b}w\left( t\right)
f(t)dt-m\left( a,b\right) f\left( x\right) \right\vert \notag \\
&\leq &\frac{1}{2}\left[ m\left( a,b\right) +\left\vert m\left( a,x\right)
-m\left( x,b\right) \right\vert \right] \left\Vert f^{\prime \prime
}\right\Vert _{1} \label{1.4}
\end{eqnarray}
Motivated and inspired by the work of the above mentioned mathematicians,
we establish a new inequality using a weight function, which is more general
than those developed in the works cited above. Some other interesting
inequalities are also presented as special cases. Finally, we present
applications for some special means and in numerical integration.
\section{Main Results}
In order to prove our main result we first give the following essential
definition.
We assume the weight function (or density) $w:(a,b)\longrightarrow
\lbrack 0,\infty )$ to be non-negative and integrable over its entire domain, with
\begin{equation*}
\int_{a}^{b}w(t)dt<\infty .
\end{equation*}
The domain of $w$ may be finite or infinite, and $w$ may vanish at the
boundary points. We denote the moment
\begin{equation*}
m(a,b)=\int_{a}^{b}w(t)dt.
\end{equation*}
We now give our main result.
\begin{theorem}
Let $f:[a,b]\rightarrow
\mathbb{R}
$ be continuous on $[a,b]$ and differentiable on $(a,b)$,
and satisfy the condition $\theta \leq f^{\prime }(x)\leq \Phi $ for all
$x\in (a,b)$. Then we have the inequality
\begin{eqnarray}
&&\left\vert f(x)-\frac{1}{m(a,b)}w(x)\left( b-a\right) \left( x-\frac{a+b}{2}\right) f^{\prime }(x)-\frac{1}{m(a,b)}\int_{a}^{b}f(t)w(t)dt\right\vert \notag \\
&\leq &\frac{1}{2m^{2}(a,b)}w(x)\left( \frac{1}{2}\left( b-a\right) ^{2}+2\left( x-\frac{a+b}{2}\right) ^{2}\right) \notag \\
&&\times \left( \frac{1}{2}\left( b-a\right) +\left\vert x-\frac{a+b}{2}\right\vert \right) \left\Vert f^{\prime \prime }\right\Vert _{w,1}
\label{2.1}
\end{eqnarray}
for all $x\in \left[ a,b\right] $.
\end{theorem}
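As a numerical sanity check (our addition, not the authors'), the two sides of $(2.1)$ can be compared for a concrete instance; the choices $f(t)=t^{2}$, $w(t)=1/\sqrt{t}$ on $[1,4]$ and $x=2$ below are purely illustrative assumptions.

```python
import math

def quad(g, a, b, n=100000):
    """Midpoint-rule quadrature."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

a, b, x = 1.0, 4.0, 2.0
f = lambda t: t * t              # f'(t) = 2t, f''(t) = 2
w = lambda t: 1.0 / math.sqrt(t)

m = quad(w, a, b)                                 # m(a,b)
norm_w1 = quad(lambda t: abs(w(t) * 2.0), a, b)   # ||f''||_{w,1}
A = (a + b) / 2

lhs = abs(f(x) - w(x) * (b - a) * (x - A) * (2 * x) / m
          - quad(lambda t: f(t) * w(t), a, b) / m)
rhs = (w(x) / (2 * m * m)
       * (0.5 * (b - a) ** 2 + 2 * (x - A) ** 2)
       * (0.5 * (b - a) + abs(x - A))
       * norm_w1)
print(lhs, rhs)  # the inequality (2.1) requires lhs <= rhs
```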
\begin{proof}
Let us define the mapping $P(\cdot ,\cdot ):[a,b]^{2}\longrightarrow
\mathbb{R}
$ given by
\begin{equation*}
P(x,t)=\left\{
\begin{array}{lll}
\int_{a}^{t}w(u)du & \text{if} & t\in \lbrack a,x], \\
\int_{b}^{t}w(u)du & \text{if} & t\in (x,b].
\end{array}
\right.
\end{equation*}
Integrating by parts, we have
\begin{equation}
\int_{a}^{b}P(x,t)f^{\prime }(t)dt=f(x)m(a,b)-\int_{a}^{b}f(t)w(t)dt.
\tag{2.2}
\end{equation}
Applying the identity $(2.2)$ to $f^{\prime }(\cdot )$, we get
\begin{equation*}
f^{\prime }(t)=\frac{1}{m(a,b)}\int_{a}^{b}P(t,s)f^{\prime \prime }(s)ds+\frac{1}{m(a,b)}\int_{a}^{b}f^{\prime }(s)w(s)ds.
\end{equation*}
Substituting $f^{\prime }(t)$ into the right-hand side of $\left(
2.2\right) $, we have
\begin{eqnarray}
f(x) &=&\frac{1}{m^{2}(a,b)}\int_{a}^{b}\int_{a}^{b}P(x,t)P(t,s)f^{\prime \prime }(s)\,ds\,dt \notag \\
&&+\frac{1}{m^{2}(a,b)}\int_{a}^{b}P(x,t)\,dt\int_{a}^{b}f^{\prime }(s)w(s)\,ds+\frac{1}{m(a,b)}\int_{a}^{b}f(t)w(t)dt. \label{2.3}
\end{eqnarray}
Note that
\begin{equation*}
\int_{a}^{b}P(x,t)dt=w(x)\left( b-a\right) \left( x-\frac{a+b}{2}\right)
\end{equation*}
and
\begin{equation*}
\int_{a}^{b}f^{\prime }(s)w(s)ds=f^{\prime }(x)m\left( a,b\right) .
\end{equation*}
From $\left( 2.3\right) $ we therefore obtain
\begin{eqnarray}
f(x) &=&\frac{1}{m(a,b)}w(x)\left( b-a\right) \left( x-\frac{a+b}{2}\right) f^{\prime }(x)+\frac{1}{m(a,b)}\int_{a}^{b}f(t)w(t)dt \notag \\
&&+\frac{1}{m^{2}(a,b)}\int_{a}^{b}\int_{a}^{b}P(x,t)P(t,s)f^{\prime \prime }(s)\,ds\,dt. \label{2.4}
\end{eqnarray}
Now,
\begin{equation*}
\int_{a}^{b}\left\vert P(t,s)\right\vert ds=\frac{1}{2}w(t)\left[
\left( t-a\right) ^{2}+\left( t-b\right) ^{2}\right] ,
\end{equation*}
\begin{eqnarray*}
&&\int_{a}^{b}\left\vert P(x,t)\right\vert \left[ \int_{a}^{b}\left\vert P(t,s)\right\vert \left\vert f^{\prime \prime }(s)\right\vert ds\right] dt \\
&\leq &\frac{1}{2}w(x)\left( \left( x-a\right) ^{2}+\left( b-x\right) ^{2}\right) \max \left\{ x-a,b-x\right\} \left\Vert f^{\prime \prime }\right\Vert _{w,1}.
\end{eqnarray*}
From $\left( 2.4\right) $, we have
\begin{eqnarray}
&&\left\vert f(x)-\frac{1}{m(a,b)}w(x)\left( b-a\right) \left( x-\frac{a+b}{2}\right) f^{\prime }(x)-\frac{1}{m(a,b)}\int_{a}^{b}f(t)w(t)dt\right\vert \notag \\
&\leq &\frac{1}{2m^{2}(a,b)}w(x)\left( \left( x-a\right) ^{2}+\left( b-x\right) ^{2}\right) \max \left\{ x-a,b-x\right\} \left\Vert f^{\prime \prime }\right\Vert _{w,1}. \label{2.5}
\end{eqnarray}
Using
\begin{equation*}
\max \left\{ x-a,b-x\right\} =\frac{1}{2}\left( b-a\right) +\left\vert x-\frac{a+b}{2}\right\vert
\end{equation*}
together with $\left( x-a\right) ^{2}+\left( b-x\right) ^{2}=\frac{1}{2}\left( b-a\right) ^{2}+2\left( x-\frac{a+b}{2}\right) ^{2}$ in $\left( 2.5\right) $, we get our desired result.
\end{proof}
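The Montgomery-type identity $(2.2)$ at the heart of the proof can also be verified numerically. The sketch below is our own illustration, with the assumed test data $w(t)=1/\sqrt{t}$ and $f(t)=t^{2}$ on $[1,4]$, using the closed form $\int_{a}^{t}w(u)du=2(\sqrt{t}-\sqrt{a})$ for the kernel $P$.

```python
import math

def quad(g, a, b, n=100000):
    """Midpoint-rule quadrature."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

a, b, x = 1.0, 4.0, 2.0
w = lambda t: 1.0 / math.sqrt(t)
f = lambda t: t * t
fp = lambda t: 2 * t

def P(x, t):
    """Peano kernel from the proof: int_a^t w for t <= x, int_b^t w for t > x."""
    if t <= x:
        return 2 * (math.sqrt(t) - math.sqrt(a))
    return 2 * (math.sqrt(t) - math.sqrt(b))

m = quad(w, a, b)
lhs = quad(lambda t: P(x, t) * fp(t), a, b)
rhs = f(x) * m - quad(lambda t: f(t) * w(t), a, b)
print(lhs, rhs)  # the two sides of (2.2) should agree
```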
\begin{remark}
For $w\left( t\right) =1$, the inequality $\left( 2.1\right) $
gives
\begin{eqnarray}
&&\left\vert f(x)-\left( x-\frac{a+b}{2}\right) f^{\prime }(x)-\frac{1}{b-a}\int\limits_{a}^{b}f(t)dt\right\vert \notag \\
&\leq &\frac{1}{2\left( b-a\right) ^{2}}\left( \frac{1}{2}\left( b-a\right) ^{2}+2\left( x-\frac{a+b}{2}\right) ^{2}\right) \notag \\
&&\times \left( \frac{1}{2}\left( b-a\right) +\left\vert x-\frac{a+b}{2}\right\vert \right) \left\Vert f^{\prime \prime }\right\Vert _{1},
\label{2.6}
\end{eqnarray}
which is similar to Barnett's result proved in \cite{1}.
\end{remark}
\begin{corollary}
Under the assumptions of Theorem $4$ and choosing $x=\frac{a+b}{2}$, we
have the perturbed midpoint inequality
\begin{eqnarray}
&&\left\vert f\left( \frac{a+b}{2}\right) -\frac{1}{m(a,b)}\int_{a}^{b}f(t)w(t)dt\right\vert \notag \\
&\leq &\frac{1}{8m^{2}(a,b)}w\left( \frac{a+b}{2}\right) \left( b-a\right) ^{3}\left\Vert f^{\prime \prime }\right\Vert _{w,1}. \label{2.7}
\end{eqnarray}
\end{corollary}
\begin{proof}
This follows from inequality $\left( 2.1\right) $ with $x=\frac{a+b}{2}$.
\end{proof}
\begin{corollary}
Under the assumptions of Theorem $4$, we have the perturbed trapezoidal
inequality
\begin{eqnarray}
&&\left\vert \frac{f(a)+f(b)}{2}-\frac{1}{m(a,b)}\int_{a}^{b}f(t)w(t)dt+\frac{1}{m(a,b)}\frac{\left( b-a\right) ^{2}}{4}\left( w(a)f^{\prime }(a)-w(b)f^{\prime }(b)\right) \right\vert \notag \\
&\leq &\frac{1}{4m^{2}(a,b)}\left( b-a\right) ^{3}\left[ w(a)+w(b)\right] \left\Vert f^{\prime \prime }\right\Vert _{w,1}. \label{2.8}
\end{eqnarray}
\end{corollary}
\begin{proof}
Putting $x=a$ and $x=b$ in $\left( 2.1\right) $, summing the resulting
inequalities, using the triangle inequality and dividing by $2$, we get the
required inequality.
\end{proof}
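Purely as a numerical illustration (our addition, under the reconstructed sign of the derivative term in $(2.8)$), the trapezoidal bound can be checked for the assumed test data $f(t)=t^{2}$, $w(t)=1/\sqrt{t}$ on $[1,4]$:

```python
import math

def quad(g, a, b, n=100000):
    """Midpoint-rule quadrature."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 1.0, 4.0
f = lambda t: t * t
fp = lambda t: 2 * t
w = lambda t: 1.0 / math.sqrt(t)

m = quad(w, a, b)
norm_w1 = quad(lambda t: abs(w(t) * 2.0), a, b)  # ||f''||_{w,1}

lhs = abs((f(a) + f(b)) / 2
          - quad(lambda t: f(t) * w(t), a, b) / m
          + (b - a) ** 2 / (4 * m) * (w(a) * fp(a) - w(b) * fp(b)))
rhs = (b - a) ** 3 * (w(a) + w(b)) * norm_w1 / (4 * m * m)
print(lhs, rhs)  # (2.8) requires lhs <= rhs
```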
\begin{remark}
The result given in $\left( 2.8\right) $ is different from the comparable
results available in \cite{4}.
\end{remark}
\begin{remark}
The best estimate from the inequality $\left( 2.1\right) $ is obtained at
$x=\frac{a+b}{2}$, which yields the inequality $\left( 2.7\right) $.
This shows that the midpoint estimate is better than the trapezoid estimate.
\section{\textbf{Applications for some special means}}
We may now apply inequality $\left( 2.1\right) $ to deduce some
inequalities for special means by the use of particular mappings as follows;
here $A=A(a,b)=\frac{a+b}{2}$ denotes the arithmetic mean, $I=I(a,b)$ the
identric mean, and $L_{p}=L_{p}(a,b)$ the generalized logarithmic mean.
\begin{remark}
Consider $f(x)=\sqrt{x}\ln x$, $x\in \left[ a,b\right] \subset \left(
0,\infty \right) $, and
\begin{equation*}
w(x)=\frac{1}{\sqrt{x}}.
\end{equation*}
The inequality $\left( 2.1\right) $ then gives
\begin{eqnarray}
&&\left\vert
\begin{array}{c}
\sqrt{x}\ln x-\frac{1}{2\left( \sqrt{b}-\sqrt{a}\right) x}\left( b-a\right) \left( x-A\right) \left( 1+\frac{1}{2}\ln x\right) \\
-\frac{1}{2\left( \sqrt{b}-\sqrt{a}\right) }\left( b-a\right) \ln I\left( a,b\right)
\end{array}
\right\vert \notag \\
&\leq &\frac{1}{8\left( \sqrt{b}-\sqrt{a}\right) ^{2}}\frac{1}{\sqrt{x}}\left( \frac{1}{2}\left( b-a\right) ^{2}+2\left( x-A\right) ^{2}\right) \notag \\
&&\times \left( \frac{1}{2}\left( b-a\right) +\left\vert x-A\right\vert \right) \frac{(b-a)}{4ab}\left( 1-\frac{\ln b^{a}-\ln a^{b}}{b-a}\right) .
\label{3.1}
\end{eqnarray}
Choosing $x=A$ in $\left( 3.1\right) $, we get
\begin{eqnarray}
&&\left\vert \sqrt{A}\ln A-\frac{1}{2\left( \sqrt{b}-\sqrt{a}\right) }\left( b-a\right) \ln I\left( a,b\right) \right\vert \notag \\
&\leq &\frac{1}{128ab\left( \sqrt{b}-\sqrt{a}\right) ^{2}}\frac{1}{\sqrt{A}}\left( b-a\right) ^{4}\left( 1-\frac{\ln b^{a}-\ln a^{b}}{b-a}\right) .
\label{3.2}
\end{eqnarray}
\end{remark}
\begin{remark}
Consider $f(x)=\frac{1}{x}\sqrt{x}$, $x\in \lbrack a,b]\subset \lbrack
1,\infty )$, and
\begin{equation*}
w(x)=\frac{1}{\sqrt{x}}.
\end{equation*}
The inequality $\left( 2.1\right) $ then gives
\begin{eqnarray}
&&\left\vert
\begin{array}{c}
\frac{1}{x}\sqrt{x}+\frac{1}{4\left( \sqrt{b}-\sqrt{a}\right) }\frac{1}{x^{2}}\left( b-a\right) \left( x-A\right) \\
-\frac{1}{2\left( \sqrt{b}-\sqrt{a}\right) }\left( b-a\right) L_{-1}^{-1}
\end{array}
\right\vert \notag \\
&\leq &\frac{1}{8\left( \sqrt{b}-\sqrt{a}\right) ^{2}}\frac{1}{\sqrt{x}}\left( \frac{1}{2}\left( b-a\right) ^{2}+2\left( x-A\right) ^{2}\right) \notag \\
&&\times \frac{3}{8}\left( \frac{1}{2}\left( b-a\right) +\left\vert x-A\right\vert \right) \left( \frac{b^{2}-a^{2}}{a^{2}b^{2}}\right) .
\label{3.3}
\end{eqnarray}
Choosing $x=A$ in $\left( 3.3\right) $, we get
\begin{eqnarray}
&&\left\vert \frac{1}{A}\sqrt{A}-\frac{1}{2\left( \sqrt{b}-\sqrt{a}\right) }\left( b-a\right) L_{-1}^{-1}\right\vert \notag \\
&\leq &\frac{3}{256\left( \sqrt{b}-\sqrt{a}\right) ^{2}}\frac{1}{\sqrt{A}}\left( b-a\right) ^{3}\left( \frac{b^{2}-a^{2}}{a^{2}b^{2}}\right) .
\label{3.4}
\end{eqnarray}
\end{remark}
\begin{remark}
Consider $f(x)=x^{p}\sqrt{x}$, $f:\left( 0,\infty \right) \rightarrow
\mathbb{R}$, $x\in \left[ a,b\right] $ with $a<b$, where $p\in \mathbb{R}\setminus \left\{ -1,0\right\} $,
and
\begin{equation*}
w(x)=\frac{1}{\sqrt{x}}.
\end{equation*}
The inequality $\left( 2.1\right) $ then gives
\begin{eqnarray}
&&\left\vert
\begin{array}{c}
x^{p}\sqrt{x}-\frac{1}{2\left( \sqrt{b}-\sqrt{a}\right) }\frac{1}{x}\left( b-a\right) \left( x-A\right) \left( p+\frac{1}{2}\right) x^{p} \\
-\frac{1}{2\left( \sqrt{b}-\sqrt{a}\right) }\left( b-a\right) L_{p}^{p}
\end{array}
\right\vert \notag \\
&\leq &\frac{1}{8\left( \sqrt{b}-\sqrt{a}\right) ^{2}}\frac{1}{\sqrt{x}}\left( \frac{1}{2}\left( b-a\right) ^{2}+2\left( x-A\right) ^{2}\right) \notag \\
&&\times \left( \frac{1}{2}\left( b-a\right) +\left\vert x-A\right\vert \right) \left( \frac{p^{2}-\frac{1}{4}}{p-1}\right) \left( b^{p-1}-a^{p-1}\right) . \label{3.5}
\end{eqnarray}
Choosing $x=A$ in $\left( 3.5\right) $, we get
\begin{eqnarray}
&&\left\vert A^{p}\sqrt{A}-\frac{1}{2\left( \sqrt{b}-\sqrt{a}\right) }\left( b-a\right) L_{p}^{p}\right\vert \notag \\
&\leq &\frac{1}{32\left( \sqrt{b}-\sqrt{a}\right) ^{2}}\frac{1}{\sqrt{A}}\left( b-a\right) ^{3}\left( \frac{p^{2}-\frac{1}{4}}{p-1}\right) \left( b^{p-1}-a^{p-1}\right) . \label{3.6}
\end{eqnarray}
\end{remark}
\section{\textbf{An Application to Numerical Integration}}
Let $I_{n}:a=x_{0}<x_{1}<x_{2}<\cdots <x_{n-1}<x_{n}=b$ be a division of the
interval $[a,b]$, let $h_{i}=x_{i+1}-x_{i}$, and let $\xi =\left( \xi _{0},\xi _{1},\dots ,\xi
_{n-1}\right) $ be a sequence of intermediate points with $\xi _{i}\in \left[
x_{i},x_{i+1}\right] $ $\left( i=0,1,\dots ,n-1\right) $. Consider the
perturbed Riemann sum defined by
\begin{equation}
A\left( f,I_{n},w,\xi \right) =\sum_{i=0}^{n-1}m\left( x_{i},x_{i+1}\right) f\left( \xi _{i}\right) -\sum_{i=0}^{n-1}w(\xi _{i})h_{i}\left( \xi _{i}-\frac{x_{i}+x_{i+1}}{2}\right) f^{\prime }\left( \xi _{i}\right) . \tag{4.1}
\end{equation}
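A direct implementation of the perturbed Riemann sum $(4.1)$ is sketched below (our illustration only; the choices $f(t)=t^{2}$, $w(t)=1/\sqrt{t}$ on $[1,4]$ with left endpoints as intermediate points $\xi _{i}$ are assumptions, made so that the derivative correction is active):

```python
import math

def quad(g, a, b, n=20000):
    """Midpoint-rule quadrature."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda t: t * t
fp = lambda t: 2 * t
w = lambda t: 1.0 / math.sqrt(t)
a, b, N = 1.0, 4.0, 20

xs = [a + i * (b - a) / N for i in range(N + 1)]
xi = xs[:-1]  # left endpoints as intermediate points

# Perturbed Riemann sum (4.1): weighted sum plus derivative correction.
A = sum(quad(w, xs[i], xs[i + 1]) * f(xi[i]) for i in range(N)) \
    - sum(w(xi[i]) * (xs[i + 1] - xs[i])
          * (xi[i] - (xs[i] + xs[i + 1]) / 2) * fp(xi[i]) for i in range(N))

exact = quad(lambda t: f(t) * w(t), a, b)
print(A, exact)  # A approximates the weighted integral
```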
\begin{theorem}
Let $f:[a,b]\rightarrow
\mathbb{R}
$ be continuous on $[a,b]$ and differentiable on $(a,b)$,
such that $f^{\prime }:\left( a,b\right) \rightarrow
\mathbb{R}
$ is bounded on $\left( a,b\right) $ and satisfies $\gamma \leq f^{\prime }(x)\leq \Gamma $ for all $x\in \left( a,b\right) $,
and such that $f^{\prime \prime }:(a,b)\longrightarrow
\mathbb{R}
$ belongs to $\mathbf{L}_{1}(a,b)$, i.e.,
\begin{equation*}
\left\Vert f^{\prime \prime }\right\Vert
_{w,1}:=\int\limits_{a}^{b}\left\vert w(t)f^{\prime \prime }(t)\right\vert dt<\infty .
\end{equation*}
Then we have
\begin{equation}
\int\limits_{a}^{b}f(t)w(t)dt=A\left( f,I_{n},w,\xi \right) +R\left( f,I_{n},w,\xi \right) , \tag{4.2}
\end{equation}
where the remainder $R$ satisfies the estimate
\begin{eqnarray}
&&\left\vert R\left( f,I_{n},w,\xi \right) \right\vert \notag \\
&\leq &\sum_{i=0}^{n-1}\frac{\left\Vert f^{\prime \prime }\right\Vert _{w,1}}{2m\left( x_{i},x_{i+1}\right) }w(\xi _{i})\left( \frac{1}{2}h_{i}^{2}+2\left( \xi _{i}-\frac{x_{i}+x_{i+1}}{2}\right) ^{2}\right) \notag \\
&&\times \left( \frac{1}{2}h_{i}+\left\vert \xi _{i}-\frac{x_{i}+x_{i+1}}{2}\right\vert \right) \label{4.3}
\end{eqnarray}
for any choice $\xi $ of the intermediate points.
\end{theorem}
\begin{proof}
Apply Theorem $4$ on the interval $[x_{i},x_{i+1}]$ with $\xi _{i}\in \lbrack
x_{i},x_{i+1}]$, where $h_{i}=x_{i+1}-x_{i}$ $\left( i=0,1,\dots ,n-1\right) $, to get
\begin{eqnarray*}
&&\left\vert m(x_{i},x_{i+1})f(\xi _{i})-\int_{x_{i}}^{x_{i+1}}f(t)w(t)dt-\left( \xi _{i}-\frac{x_{i}+x_{i+1}}{2}\right) w(\xi _{i})\left( x_{i+1}-x_{i}\right) f^{\prime }\left( \xi _{i}\right) \right\vert \\
&\leq &\frac{\left\Vert f^{\prime \prime }\right\Vert _{w,1}}{2m\left( x_{i},x_{i+1}\right) }w(\xi _{i})\left( \frac{1}{2}\left( x_{i+1}-x_{i}\right) ^{2}+2\left( \xi _{i}-\frac{x_{i}+x_{i+1}}{2}\right) ^{2}\right) \\
&&\times \left( \frac{1}{2}\left( x_{i+1}-x_{i}\right) +\left\vert \xi _{i}-\frac{x_{i}+x_{i+1}}{2}\right\vert \right)
\end{eqnarray*}
for all $\xi _{i}\in \lbrack x_{i},x_{i+1}]$ and $i\in \left\{ 0,1,\dots ,n-1\right\} $. Summing these inequalities over $i$ from $0$
to $n-1$ and using the generalized triangle inequality, we get the desired
estimate.
\end{proof}
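To close the section, the remainder bound $(4.3)$ can be exercised numerically. The following is our own illustration under the same assumed test data ($f(t)=t^{2}$, $w(t)=1/\sqrt{t}$ on $[1,4]$), with midpoint intermediate points so that the derivative term of $(4.1)$ vanishes:

```python
import math

def quad(g, a, b, n=20000):
    """Midpoint-rule quadrature."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda t: t * t
w = lambda t: 1.0 / math.sqrt(t)
a, b, N = 1.0, 4.0, 10

xs = [a + i * (b - a) / N for i in range(N + 1)]
xi = [(xs[i] + xs[i + 1]) / 2 for i in range(N)]  # midpoints
m = [quad(w, xs[i], xs[i + 1]) for i in range(N)]
norm_w1 = quad(lambda t: abs(w(t) * 2.0), a, b)   # ||f''||_{w,1}

A = sum(m[i] * f(xi[i]) for i in range(N))        # sum (4.1), derivative term = 0
err = abs(quad(lambda t: f(t) * w(t), a, b) - A)  # |R(f, I_n, w, xi)|

bound = sum(norm_w1 / (2 * m[i]) * w(xi[i])
            * (0.5 * (xs[i + 1] - xs[i]) ** 2
               + 2 * (xi[i] - (xs[i] + xs[i + 1]) / 2) ** 2)
            * (0.5 * (xs[i + 1] - xs[i])
               + abs(xi[i] - (xs[i] + xs[i + 1]) / 2))
            for i in range(N))
print(err, bound)  # the estimate (4.3) requires err <= bound
```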
\section{Conclusions}
We established a weighted Ostrowski type inequality for bounded differentiable
mappings which generalizes the inequalities developed and discussed
in \cite{1}, \cite{3}, \cite{5} and \cite{8}. Perturbed midpoint and trapezoid
inequalities are obtained, and some closely related new results are also given.
The inequality is applied to some special means and to numerical integration,
showing its applicability in obtaining direct relationships between these
means. These generalized inequalities will also be useful for researchers
working in approximation theory, applied mathematics, probability theory,
stochastic analysis and numerical analysis, both in engineering and in
practical applications.
| {
"timestamp": "2014-01-29T02:00:15",
"yymm": "1401",
"arxiv_id": "1401.7007",
"language": "en",
"url": "https://arxiv.org/abs/1401.7007",
"abstract": "The aim of this paper is to obtain some generalized weighted Ostrowski inequalities for differentiable mappings. Some well known inequalities can be derived as special cases of the inequalities obtained here. In addition, perturbed mid-point inequality and perturbed trapezoid inequality are also obtained. The inequalities obtained here have direct applications in Numerical Integration, Probability Theory, Information Theory and Integral Operator Theory. Some of these applications are discussed.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "A Weighted Ostrowski Type inequality for L$_{1}\\left[ a,b\\right] $ and applications"
} |
https://arxiv.org/abs/2104.08424 | Periodicity of quantum walks defined by mixed paths and mixed cycles | In this paper, we determine periodicity of quantum walks defined by mixed paths and mixed cycles. By the spectral mapping theorem of quantum walks, consideration of periodicity is reduced to eigenvalue analysis of $\eta$-Hermitian adjacency matrices. First, we investigate coefficients of the characteristic polynomials of $\eta$-Hermitian adjacency matrices. We show that the characteristic polynomials of mixed trees and their underlying graphs are same. We also define $n+1$ types of mixed cycles and show that every mixed cycle is switching equivalent to one of them. We use these results to discuss periodicity. We show that the mixed paths are periodic for any $\eta$. In addition, we provide a necessary and sufficient condition for a mixed cycle to be periodic and determine their periods. | \section{Introduction}
Quantum walks are quantum analogues of classical random walks \cite{AAKV, ADZ, Gu}.
In the last two decades,
a great deal of research on quantum walks has been carried out,
and they have strong connections with various fields.
In quantum information,
quantum walk models can be seen as a generalization of Grover's search algorithm \cite{Gr, P}.
An important fact in mathematics is the spectral mapping theorem of quantum walks \cite{HKSS2014, KSY}.
The spectral mapping theorems reduce eigenvalue analysis of time evolution operators
to eigenvalue analysis of other self-adjoint operators.
They bring quantum walks into close connection with functional analysis \cite{SS}
and spectral graph theory \cite{KST}.
In the study of periodicity of discrete-time quantum walk,
field theory and algebraic number theory have also been leveraged \cite{KKKS, SMA}.
Studies of perfect state transfer in continuous-time quantum walks have also
been carried out using algebraic graph theory and algebraic combinatorics.
We refer to Godsil's survey \cite{G2012}.
Recent studies on state transfer are in \cite{BMW, LW, LLZZ, MDDT, WL, Z}.
\subsection{Related works to periodicity}
The topic discussed in this paper is periodicity of discrete-time quantum walks.
The works of \cite{HKSS2017, KoST} triggered studies of periodicity on various graphs.
For example,
the studies done on Grover walks can be summarized in Table~\ref{80}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|}
\hline
Graphs & Ref. \\
\hline
\hline
Complete graphs, complete bipartite graphs, SRGs & \cite{HKSS2017} \\ \hline
Generalized Bethe trees & \cite{KSTY2018} \\ \hline
Distance regular graphs & \cite{Y2019} \\ \hline
Cycle (3-state) & \cite{KKKS} \\ \hline
Complete graphs with self loops & \cite{IMT} \\ \hline
\end{tabular}
\caption{Prior works on periodicity of Grover walks on undirected graphs} \label{80}
\end{table}
In other models,
periodicity of Fourier walks has been considered by Saito \cite{S},
and periodicity of staggered walks has been studied in \cite{KSTY2019}.
Recently, periodicity of quantum walks with generalized Grover coins has been considered
by Sarkar et al.~\cite{SMA}.
\subsection{Main Results}
In this paper, we study periodicity of mixed graphs.
There are three main theorems.
See later sections for more detailed terms and definitions.
First, we generalize the result in \cite{AGNN} related to classification of mixed cycles to $\eta$-Hermitian adjacency matrices.
In \cite{AGNN},
Akbari et al.~provided several typical switching functions and classified mixed cycles into three types.
We give similar considerations in $\eta$-Hermitian adjacency matrices.
Among the four types of switching defined in \cite{AGNN},
one switching cannot be used in $\eta$-Hermitian adjacency matrices.
Due to this,
we show that there are at most $n+1$ switching equivalence classes of mixed cycles
in $\eta$-Hermitian adjacency matrices.
The claim is as follows:
\begin{thm} \label{Main1}
{\it
Let $G = (V, \MC{A})$ be a mixed cycle of length $n$.
Then, there exists $j \in \{0,1,\dots, n\}$ such that $G$ and $C_n^j$ are $H_{\eta}$-cospectral.
Moreover, we have
\[ \det H_{\eta}(C_n^j) = (-1)^{n+1} 2 \cos (\eta j) + (-1)^{\lfloor \frac{n}{2} \rfloor} (1 + (-1)^n). \]
}
\end{thm}
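Theorem~\ref{Main1} can be probed numerically. The sketch below is our own illustration, and it assumes that $C_n^j$ may be modeled as a mixed $n$-cycle with $j$ directed arcs and $n-j$ digons, so that the total phase around the cycle is $e^{i\eta j}$ (the precise definition of $C_n^j$ appears in Section~\ref{104}); it compares $\det H_{\eta}$ for $n=5$, $j=2$, $\eta=\pi/3$ with the stated formula.

```python
import cmath, math

def det(M):
    """Determinant of a complex matrix via Gaussian elimination with pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1 + 0j
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        if abs(M[p][k]) < 1e-13:
            return 0j
        if p != k:
            M[k], M[p] = M[p], M[k]
            d = -d
        d *= M[k][k]
        for r in range(k + 1, n):
            fac = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= fac * M[k][c]
    return d

n, j, eta = 5, 2, math.pi / 3
H = [[0j] * n for _ in range(n)]
for k in range(n):
    wk = cmath.exp(1j * eta) if k < j else 1 + 0j  # j directed arcs, rest digons
    H[k][(k + 1) % n] = wk
    H[(k + 1) % n][k] = wk.conjugate()

predicted = ((-1) ** (n + 1) * 2 * math.cos(eta * j)
             + (-1) ** (n // 2) * (1 + (-1) ** n))
print(det(H).real, predicted)  # both should be -1 here
```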
The second main theorem is to determine periodicity of mixed paths.
Using the model defined in \cite{KST},
we study periodicity of quantum walks defined by mixed graphs.
This model is defined by both a mixed graph and a real number $\eta \in [0, 2\pi)$.
The claim is as follows:
\begin{thm} \label{Main2}
{\it
Let $G = (V, \MC{A})$ be a mixed path on $n$ vertices equipped with an $\eta$-function $\theta$.
Then $G$ is periodic for any $\eta \in [0, 2\pi)$, and the period is $2(n-1)$.
}
\end{thm}
The third main theorem is to determine periodicity of mixed cycles.
Since periodicity is determined by eigenvalues,
it is sufficient, by the first main theorem, to consider only the $n+1$ types of mixed cycles.
The claim is as follows:
\begin{thm} \label{Main3}
{\it
Let $G = (V, \MC{A})$ be a mixed cycle on $n$ vertices equipped with an $\eta$-function $\theta$.
Then,
$G$ is periodic if and only if $\eta \in \MB{Q}\pi$.
In addition,
we suppose that $\eta \in \MB{Q}\pi$ and the mixed cycle $G$ is $H_{\eta}$-cospectral with $C_n^j$.
Let $\eta = \frac{p}{q}\pi$, where $p$ and $q$ are coprime.
Then, the period $\tau$ of $G$ is the following:
\begin{equation} \label{65m}
\tau =
\begin{cases}
\frac{2qn}{(j, 2q)} \quad &\text{if $p$ is odd,} \\
\frac{qn}{(j, q)} \quad &\text{if $p$ is even.} \\
\end{cases}
\end{equation}
}
\end{thm}
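The period formula (\ref{65m}) is simple modular arithmetic; the following small helper (our illustration, not from the paper) evaluates it, writing $(j,q)$ for $\gcd(j,q)$:

```python
from math import gcd

def period(n, j, p, q):
    """Period of a mixed n-cycle H_eta-cospectral with C_n^j,
    for eta = (p/q)*pi in lowest terms, following the case split above."""
    if p % 2 == 1:
        return 2 * q * n // gcd(j, 2 * q)
    return q * n // gcd(j, q)

print(period(6, 0, 1, 2), period(5, 2, 1, 3))
```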
This paper is organized as follows.
In Section~\ref{102},
we prepare terminologies on spectral graph theory.
The definitions of mixed graphs and their $\eta$-Hermitian adjacency matrices are provided.
In Section~\ref{11},
coefficients of the characteristic polynomials of $\eta$-Hermitian adjacency matrices are discussed.
We focus on permutations that contribute to values of determinants.
Relationship between the characteristic polynomials of a mixed graph and its underlying graph is clarified.
In Section~\ref{104},
we carry out classification of mixed cycles by $\eta$-Hermitian adjacency matrices.
We introduce $n+1$ types of mixed cycles
and show that every mixed cycle is switching equivalent to one of them.
On the other hand, we show that the $n+1$ types of mixed cycles have different eigenvalues except for a finite number of $\eta$.
In Section~\ref{105},
we prepare quantum walks defined by mixed graphs.
We define periodicity of mixed graphs and provide some characterizations of it.
In Section~\ref{106},
we discuss periodicity of mixed paths.
We observe the action of the time evolution matrices on the unit vectors and provide a visual proof.
In Section~\ref{107},
we discuss periodicity of mixed cycles.
It is easy to provide a necessary and sufficient condition for a mixed cycle to be periodic,
but determination of the period is a bit complicated.
\section{Preliminaries on spectral graph theory} \label{102}
Let $\Gamma =(V, E)$ be a finite simple and connected graph with the vertex set $V$ and the edge set $E$.
For $x \in V$,
the set of neighbors of $x$ is denoted by $N(x)$.
Define $\MC{A} = \MC{A}(\Gamma)=\{ (x, y), (y, x) \mid xy \in E(\Gamma) \}$,
which is the set of the symmetric arcs of $\Gamma$.
The origin and terminus vertices of $a=(x, y) \in \MC{A}$ are denoted by $o(a), t(a)$, respectively.
We express $a^{-1}$ as the inverse arc of $a$.
A {\it mixed graph} $G$ consists of a finite set $V(G)$ of vertices
together with a subset $\MC{A}(G) \subset V(G) \times V(G) \setminus \{ (x,x) \mid x \in V(G) \}$
of ordered pairs called {\it arcs}.
Let $G$ be a mixed graph.
Define $\MC{A}^{-1}(G) = \{ a^{-1} \mid a \in \MC{A}(G) \}$
and $\MC{A}^{\pm}(G) = \MC{A}(G) \cup \MC{A}^{-1}(G)$.
If there is no danger of confusion, we write $\MC{A}(G)$ as $\MC{A}$ simply.
If $(x,y) \in \MC{A} \cap \MC{A}^{-1}$,
we say that the unordered pair $\{x,y\}$ is a {\it digon} of $G$.
For a vertex $x \in V(G)$,
define $\deg_{G} x = \deg_{G^{\pm}} x$.
A mixed graph $G$ is $k$-regular if $\deg_{G}x = k$ for any vertex $x \in V(G)$.
The graph $G^{\pm} = (V(G), \MC{A}^{\pm})$ is
so-called the {\it underlying graph} of a mixed graph $G$,
and this is regarded as an undirected graph depending on context.
On the other hand,
we equate an undirected graph with a mixed graph
by considering undirected edges $xy$ as bidirectional arcs $(x,y), (y,x)$.
Throughout this paper, we assume that mixed graphs are weakly connected,
i.e.,
we assume that $G^{\pm}$ is connected.
Let $G = (V, \MC{A})$ be a mixed graph.
For $\eta \in [0, 2\pi)$,
the {\it $\eta$-Hermitian adjacency matrix} $H_{\eta} = H_{\eta}(G) \in \MB{C}^{V \times V}$ is defined by
\[
(H_{\eta})_{x,y} = \begin{cases}
1 \qquad &\text{if $(x,y) \in \MC{A} \cap \MC{A}^{-1}$,} \\
e^{\eta i} \qquad &\text{if $(x,y) \in \MC{A} \setminus \MC{A}^{-1}$,} \\
e^{-\eta i} \qquad &\text{if $(x,y) \in \MC{A}^{-1} \setminus \MC{A}$,} \\
0 \qquad &\text{otherwise.}
\end{cases}
\]
When $\eta = \frac{\pi}{2}$,
the matrix $H_{\frac{\pi}{2}}$ is nothing but the Hermitian adjacency matrix.
This is introduced by Guo--Mohar \cite{GM} and Li--Liu \cite{LL}, independently.
When $\eta = \frac{\pi}{3}$,
the matrix $H_{\frac{\pi}{3}}$ is called the Hermitian adjacency matrix of the second kind.
This is introduced by Mohar \cite{M}.
We refer to \cite{AAS, GS, LY} as recent studies on Hermitian adjacency matrices.
Note that $H_{\eta}(G^{\pm})$ coincides with the ordinary adjacency matrix of $G^{\pm}$.
Define the {\it degree matrix} $D = D(G) \in \MB{C}^{V \times V}$ by $D_{x,y} = (\deg_{G}x)\delta_{x,y}$
for vertices $x,y \in V(G)$,
where $\delta_{x,y}$ is the Kronecker delta symbol.
For $\eta \in [0, 2\pi)$,
the {\it normalized $\eta$-Hermitian adjacency matrix} $\tilde{H}_{\eta}$ is defined by
\[ \tilde{H}_{\eta} = D^{-\frac12} H_{\eta} D^{-\frac12}. \]
Note that if a mixed graph $G$ is $k$-regular, we have $\tilde{H}_{\eta} = \frac{1}{k} H_{\eta}$.
Let $G$ be a mixed graph.
The list of the eigenvalues of $H_{\eta}(G)$ together with their multiplicities,
denoted by $\Spec(H_{\eta}(G))$, is called {\it $H_{\eta}$-spectrum} of $G$.
The same applies to $\tilde{H}_{\eta}$.
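The definition of $H_{\eta}$ translates directly into code. The following sketch (our illustration, not from the paper) builds $H_{\eta}$ from an arc set and checks that it is Hermitian:

```python
import cmath, math

def h_eta(n, arcs, eta):
    """eta-Hermitian adjacency matrix of a mixed graph on vertices 0..n-1.

    A digon {x, y} is listed as both (x, y) and (y, x) in `arcs`.
    """
    A = set(arcs)
    H = [[0j] * n for _ in range(n)]
    for (x, y) in A:
        if (y, x) in A:
            H[x][y] = 1 + 0j               # digon
        else:
            H[x][y] = cmath.exp(1j * eta)  # arc in A \ A^{-1}
            H[y][x] = cmath.exp(-1j * eta)
    return H

# Mixed path on 3 vertices: a digon {0,1} and a single arc (1,2).
H = h_eta(3, [(0, 1), (1, 0), (1, 2)], math.pi / 2)
hermitian = all(abs(H[x][y] - H[y][x].conjugate()) < 1e-12
                for x in range(3) for y in range(3))
print(hermitian)  # True; H[1][2] = e^{i*pi/2} = i
```

With $\eta = \pi/2$ this reproduces the Hermitian adjacency matrix of Guo--Mohar and Li--Liu mentioned above.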
\section{Permutations and characteristic polynomials} \label{11}
Let $\Gamma$ be an undirected graph.
A mixed graph $G$ is said to be a {\it mixed $\Gamma$} if $G^{\pm}$ is isomorphic to $\Gamma$.
Similarly, we say that $G$ is a {\it mixed tree} if $G^{\pm}$ is a tree.
Let $G = (V, \MC{A})$ be a mixed graph, and let $x_1,x_2, \dots, x_l \in V$.
We say that a sequence $C = (x_1,x_2, \dots, x_l)$ is an {\it $l$-cycle} in $G$
if $C$ is an $l$-cycle in $G^{\pm}$.
The {\it girth} of $G$ is defined by the girth of $ G^{\pm}$.
Note that if the girth of $G$ is $s+1$, then it has no $l$-cycle for $l \in \{1,2, \dots, s\}$.
Let $G = (V,\MC{A})$ be a mixed graph.
Put $X = \lambda I - H_{\eta}(G)$.
Then
\[ \det X = \sum_{\sigma \in \Sym(V)} \sgn(\sigma) \prod_{x \in V} X_{x, \sigma(x)}, \]
where $\Sym(V)$ is the set of all permutations of $V$.
Let
\[
\MC{P}(X) := \left\{ \sigma \in \Sym(V) \, \middle | \, \prod_{x \in V} X_{x, \sigma(x)} \neq 0 \right\}.
\]
In addition, we define
\[
\MC{P}_m(X) := \Big\{ \sigma \in \MC{P}(X) \, \Big | \, |\{ x \in V \mid \sigma(x) \neq x \} | = m \Big\}
\]
for $m \in \MB{N}$.
Any permutation $\sigma \in \Sym(V)$ can be expressed as the product of disjoint cyclic permutations,
say $\sigma = \sigma_1 \sigma_2 \cdots \sigma_l$ for some $l \in \MB{N}$.
We call each $\sigma_i$ a {\it factor} of $\sigma$.
\begin{lem} \label{00}
{\it
Let $G = (V, \MC{A})$ be a mixed graph with girth $s+1$,
and let $X = \lambda I - H_{\eta}(G)$.
For $l \in \{1,2, \dots, s\}$,
all factors of $\sigma \in \MC{P}_{l}(X)$ are transpositions.
}
\end{lem}
\begin{proof}
Let $H_{\eta} = H_{\eta}(G)$.
Suppose $\sigma \in \MC{P}_{l}(X)$ has a factor $\sigma_i$ of length $t > 2$.
Display as $\sigma_i = (x_1x_2 \cdots x_t)$.
Then $(H_{\eta})_{x_1, x_2} (H_{\eta})_{x_2, x_3} \cdots (H_{\eta})_{x_t, x_1} \neq 0$.
However, $G$ has no $t$-cycles since $t \leq l \leq s$.
Thus, at least one of $(H_{\eta})_{x_1, x_2}, (H_{\eta})_{x_2, x_3}, \dots, (H_{\eta})_{x_t, x_1}$ is $0$.
This is a contradiction.
\end{proof}
On the other hand,
for distinct vertices $x,y$ in a mixed graph $G$,
we have
\begin{equation} \label{01}
(H_{\eta})_{x,y} (H_{\eta})_{y,x} = |(H_{\eta})_{x,y}|^2 =
\begin{cases}
1 \qquad &\text{if $x$ is adjacent to $y$ in $G^{\pm}$,} \\
0 \qquad &\text{otherwise}
\end{cases}
\end{equation}
since $H_{\eta}$ is Hermitian.
Remarking this, we have the following.
\begin{pro} \label{10}
{\it
Let $G = (V, \MC{A})$ be a mixed graph with girth $s+1$.
Let
\begin{align*}
\det (\lambda I - H_{\eta}(G)) &= \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1} \lambda + a_n, \\
\det (\lambda I - H_{\eta}(G^{\pm})) &= \lambda^n + b_1 \lambda^{n-1} + \cdots + b_{n-1} \lambda + b_n,
\end{align*}
where $n = |V|$.
Then we have $a_l = b_l$ for any $l \in \{1,2, \dots, s\}$.
}
\end{pro}
\begin{proof}
Put $X = \lambda I - H_{\eta}(G)$ and $Y = \lambda I - H_{\eta}(G^{\pm})$.
For $x, y \in V$, $X_{x,y} \neq 0$ if and only if $Y_{x,y} \neq 0$,
so $\MC{P}(X) = \MC{P}(Y)$.
In particular, $\MC{P}_{l}(X) = \MC{P}_{l}(Y)$.
If $\MC{P}_{l}(X) = \emptyset$, we have $a_l = b_l = 0$.
We consider $\MC{P}_{l}(X) \neq \emptyset$.
Let $\sigma \in \MC{P}_{l}(X)$.
By Lemma~\ref{00},
all factors of $\sigma$ are transpositions.
Display as $\sigma = (x_1 y_1)(x_2 y_2) \cdots (x_t y_t)$,
where $t = l / 2$.
By (\ref{01}),
\begin{align*}
\sgn(\sigma) \prod_{x \in V} X_{x, \sigma(x)}
&= \lambda^{n-2t} (H_{\eta}(G))_{x_1, y_1} (H_{\eta}(G))_{y_1, x_1} \cdots (H_{\eta}(G))_{x_t, y_t} (H_{\eta}(G))_{y_t, x_t} \\
&= \lambda^{n-2t} \cdot 1 \\
&= \lambda^{n-2t} (H_{\eta}(G^{\pm}))_{x_1, y_1} (H_{\eta}(G^{\pm}))_{y_1, x_1} \cdots (H_{\eta}(G^{\pm}))_{x_t, y_t} (H_{\eta}(G^{\pm}))_{y_t, x_t} \\
&= \sgn(\sigma) \prod_{x \in V} Y_{x, \sigma(x)}.
\end{align*}
Therefore,
\[ a_l
= \sum_{\sigma \in \MC{P}_{l}(X)} \sgn(\sigma) \prod_{x \in V} X_{x, \sigma(x)}
= \sum_{\sigma \in \MC{P}_{l}(Y)} \sgn(\sigma) \prod_{x \in V} Y_{x, \sigma(x)}
= b_l. \]
\end{proof}
\begin{cor} \label{60}
{\it
Let $G$ be a mixed tree.
Then
\[ \det (\lambda I - H_{\eta}(G)) = \det (\lambda I - H_{\eta}(G^{\pm})), \]
i.e.,
$G$ and $G^{\pm}$ are $H_{\eta}$-cospectral.
}
\end{cor}
\begin{proof}
Since $G$ is a mixed tree, the girth is $\infty$.
By Proposition~\ref{10},
all coefficients of both characteristic polynomials are equal.
\end{proof}
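Corollary~\ref{60} lends itself to a quick numerical experiment (our own, with an arbitrarily chosen mixed path and a generic $\eta$): since both characteristic polynomials are monic of degree $n$, it is enough to compare them at a few sample points.

```python
import cmath

def det(M):
    """Determinant of a complex matrix via Gaussian elimination with pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1 + 0j
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        if abs(M[p][k]) < 1e-13:
            return 0j
        if p != k:
            M[k], M[p] = M[p], M[k]
            d = -d
        d *= M[k][k]
        for r in range(k + 1, n):
            fac = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= fac * M[k][c]
    return d

def h_eta(n, arcs, eta):
    """eta-Hermitian adjacency matrix; digons are listed in both directions."""
    A, H = set(arcs), [[0j] * n for _ in range(n)]
    for (x, y) in A:
        if (y, x) in A:
            H[x][y] = 1 + 0j
        else:
            H[x][y] = cmath.exp(1j * eta)
            H[y][x] = cmath.exp(-1j * eta)
    return H

def chi(M, lam):
    """Characteristic polynomial det(lam*I - M) evaluated at lam."""
    n = len(M)
    return det([[(lam if r == c else 0) - M[r][c] for c in range(n)]
                for r in range(n)])

eta = 1.1                                   # a generic value of eta
arcs = [(0, 1), (2, 1), (2, 3), (3, 2)]     # a mixed path on 4 vertices
under = arcs + [(y, x) for (x, y) in arcs]  # underlying graph: all digons
H, Au = h_eta(4, arcs, eta), h_eta(4, under, eta)
diff = max(abs(chi(H, lam) - chi(Au, lam)) for lam in (0.0, 0.7, 1.3, 2.2, 3.1))
print(diff)  # numerically zero, as the corollary predicts
```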
For distinct vertices $x,y$ in a mixed graph $G$,
we also have
\[
(\tilde{H}_{\eta})_{x,y} (\tilde{H}_{\eta})_{y,x} = |(\tilde{H}_{\eta})_{x,y}|^2 =
\begin{cases}
\frac{1}{\deg x \deg y} \qquad &\text{if $x$ is adjacent to $y$ in $G^{\pm}$,} \\
0 \qquad &\text{otherwise.}
\end{cases}
\]
Therefore, the analogue of Proposition~\ref{10} holds for $\tilde{H}_{\eta}$.
\begin{pro} \label{12}
{\it
Let $G = (V, \MC{A})$ be a mixed graph with girth $s+1$.
Let
\begin{align*}
\det (\lambda I - \tilde{H}_{\eta}(G)) &= \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1} \lambda + a_n, \\
\det (\lambda I - \tilde{H}_{\eta}(G^{\pm})) &= \lambda^n + b_1 \lambda^{n-1} + \cdots + b_{n-1} \lambda + b_n,
\end{align*}
where $n = |V|$.
Then we have $a_l = b_l$ for any $l \in \{1,2, \dots, s\}$.
In particular,
if $G$ is a mixed tree, then
\[ \det (\lambda I - \tilde{H}_{\eta}(G)) = \det (\lambda I - \tilde{H}_{\eta}(G^{\pm})), \]
i.e.,
$G$ and $G^{\pm}$ are $\tilde{H}_{\eta}$-cospectral.
}
\end{pro}
\section{Classification of mixed cycles by $H_{\eta}$-spectra} \label{104}
Let $G$ and $G'$ be mixed graphs.
We say that $G$ and $G'$ are {\it $H_{\eta}$-cospectral} if they have the same $H_{\eta}$-spectrum.
The relation that two mixed graphs are $H_{\eta}$-cospectral is an equivalence relation.
We call its equivalence class {\it $H_{\eta}$-cospectral class}.
In this section,
we determine the equivalence classes in the mixed cycles.
Let $G = (V, \MC{A})$ be a mixed graph.
A function $\alpha : V \to \{ 1, e^{\pm i \eta} \}$ is called a {\it switching function}.
For a switching function $\alpha$,
we define the matrix $D(\alpha) \in \MB{C}^{V \times V}$ by $D(\alpha)_{x,y} = \alpha(x) \delta_{x,y}$.
For a suitable choice of the switching function $\alpha$,
the matrix $D(\alpha) H_{\eta}(G) D(\alpha)^*$ is the $\eta$-Hermitian adjacency matrix of another mixed graph $G'$.
Then, we say that {\it $G'$ is obtained by switching with respect to $\alpha$ from $G$}.
Clearly,
$G$ and $G'$ are $H_{\eta}$-cospectral.
Note that if a mixed graph $G'$ is obtained by switching with respect to $\alpha$ from a mixed graph $G$,
we also say that $G$ and $G'$ are {\it switching equivalent}.
Recent studies related to switching equivalence of mixed graphs are in \cite{KB, WY}.
\subsection{Switching functions}
In \cite{AGNN},
Akbari et al.~defined the four typical switching functions
and determined the $H_{\frac{\pi}{2}}$-cospectral classes in the mixed cycles.
We generalize their result to general $\eta \in [0, 2\pi)$.
Let $G = (V, \MC{A})$ be a mixed cycle.
First,
we define the three typical switching functions as follows:
\begin{itemize}
\item[Sw.2$'$.] For a vertex $x \in V$, define the switching function
\[ \alpha(v) = \begin{cases}
e^{i \eta} \qquad &\text{if $v = x$,} \\
1 \qquad &\text{otherwise}.
\end{cases} \]
Let $N_{G^{\pm}}(x) = \{v_1, v_2\}$.
If $(v_1, x), (v_2, x) \in \MC{A} \setminus \MC{A}^{-1}$,
then we have the mixed graph $(V, \MC{A} \cup \{ (v_1, x)^{-1}, (v_2, x)^{-1} \})$ by switching.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
[scale = 0.5,
fat/.style={circle,fill=black, inner sep = 2mm},
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (s1) at (-2,2) [label = below:$v_1$] {};
\node[slim] (s2) at (0,3) [label = above:$x$] {};
\node[slim] (s3) at (2,2) [label = below:$v_2$] {};
\draw[line width = 1pt][->] (s1) to (s2);
\draw[line width = 1pt][->] (s3) to (s2);
\end{tikzpicture} \raisebox{10mm}{$\xrightarrow{\text{Sw.2$'$ on $x$}}$}
\begin{tikzpicture}
[scale = 0.5,
fat/.style={circle,fill=black, inner sep = 2mm},
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (s1) at (-2,2) [label = below:$v_1$] {};
\node[slim] (s2) at (0,3) [label = above:$x$] {};
\node[slim] (s3) at (2,2) [label = below:$v_2$] {};
\draw[line width = 1pt] (s1) -- (s2) -- (s3);
\end{tikzpicture}
\caption{Sw.2$'$ on the vertex $x$} \label{sw2}
\end{center}
\end{figure}
\item[Sw.3$'$.] For a vertex $x \in V$, define the switching function
\[ \alpha(v) = \begin{cases}
e^{-i \eta} \qquad &\text{if $v = x$,} \\
1 \qquad &\text{otherwise}.
\end{cases} \]
Let $N_{G^{\pm}}(x) = \{v_1, v_2\}$.
If $(x, v_1), (x, v_2) \in \MC{A} \setminus \MC{A}^{-1}$,
then switching yields the mixed graph $(V, \MC{A} \cup \{ (x, v_1)^{-1}, (x, v_2)^{-1} \})$.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
[scale = 0.5,
fat/.style={circle,fill=black, inner sep = 2mm},
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (s1) at (-2,1) [label = below:$v_1$] {};
\node[slim] (s2) at (0,2) [label = above:$x$] {};
\node[slim] (s3) at (2,1) [label = below:$v_2$] {};
\draw[line width = 1pt][->] (s2) to (s1);
\draw[line width = 1pt][->] (s2) to (s3);
\end{tikzpicture} \raisebox{10mm}{$\xrightarrow{\text{Sw.3$'$ on $x$}}$}
\begin{tikzpicture}
[scale = 0.5,
fat/.style={circle,fill=black, inner sep = 2mm},
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (s1) at (-2,1) [label = below:$v_1$] {};
\node[slim] (s2) at (0,2) [label = above:$x$] {};
\node[slim] (s3) at (2,1) [label = below:$v_2$] {};
\draw[line width = 1pt] (s1) -- (s2) -- (s3);
\end{tikzpicture}
\caption{Sw.3$'$ on the vertex $x$} \label{sw3}
\end{center}
\end{figure}
\item[Sw.4$'$.] For a vertex $x \in V$, define the switching function
\[ \alpha(v) = \begin{cases}
e^{i \eta} \qquad &\text{if $v = x$,} \\
1 \qquad &\text{otherwise}.
\end{cases} \]
Let $N_{G^{\pm}}(x) = \{v_1, v_2\}$.
If $(v_1, x) \in \MC{A} \setminus \MC{A}^{-1}$ and $(x, v_2) \in \MC{A} \cap \MC{A}^{-1}$,
then switching yields the mixed graph $(V, (\MC{A} \setminus \{(v_2, x)\}) \cup \{ (v_1, x)^{-1} \})$.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
[scale = 0.5,
fat/.style={circle,fill=black, inner sep = 2mm},
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (s1) at (-2,1) [label = below:$v_1$] {};
\node[slim] (s2) at (0,2) [label = above:$x$] {};
\node[slim] (s3) at (2,1) [label = below:$v_2$] {};
\draw[line width = 1pt][->] (s1) to (s2);
\draw[line width = 1pt] (s2) -- (s3);
\end{tikzpicture} \raisebox{10mm}{$\xrightarrow{\text{Sw.4$'$ on $x$}}$}
\begin{tikzpicture}
[scale = 0.5,
fat/.style={circle,fill=black, inner sep = 2mm},
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (s1) at (-2,1) [label = below:$v_1$] {};
\node[slim] (s2) at (0,2) [label = above:$x$] {};
\node[slim] (s3) at (2,1) [label = below:$v_2$] {};
\draw[line width = 1pt] (s1) -- (s2);
\draw[line width = 1pt][->] (s2) to (s3);
\end{tikzpicture}
\caption{Sw.4$'$ on the vertex $x$} \label{sw4}
\end{center}
\end{figure}
\end{itemize}
See also Figures~\ref{sw2}, \ref{sw3} and \ref{sw4}.
The above switching functions are named following \cite{AGNN}.
The absence of ``Sw.1$'$'' is due to the generalization of $\eta$.
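A minimal numerical sketch (our addition; the value of $\eta$ is arbitrary) of Sw.2$'$ on the three-vertex configuration of Figure~\ref{sw2}: the two arcs into $x$ become undirected edges.

```python
import numpy as np

eta = 1.1                             # arbitrary parameter in [0, 2*pi)
w = np.exp(1j * eta)

# vertices v1 = 0, x = 1, v2 = 2; arcs (v1, x) and (v2, x)
H = np.zeros((3, 3), dtype=complex)
H[0, 1], H[1, 0] = w, w.conjugate()   # arc v1 -> x
H[2, 1], H[1, 2] = w, w.conjugate()   # arc v2 -> x

# Sw.2': alpha(x) = e^{i eta}, alpha = 1 elsewhere
alpha = np.array([1, w, 1])
D = np.diag(alpha)
H_switched = D @ H @ D.conjugate().T

# the result is the adjacency matrix of the undirected path v1 -- x -- v2
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
assert np.allclose(H_switched, P3)
```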
\subsection{The $n+1$ types of mixed cycles}
Let $n \in \MB{N}$ and let $j \in \{0,1, \dots, n\}$.
We define the {\it mixed cycle $C_n^{j}$ of type $j$} by
\begin{align*}
V(C_n^{j}) &= \{ x_1, x_2, \dots, x_n \}, \\
\MC{A}(C_n^{j}) &= \{ (x_1, x_2), \dots, (x_j, x_{j+1}) \} \cup \{ (x_{j+1}, x_{j+2}), \dots, (x_n, x_{n+1}) \}^{\pm},
\end{align*}
where we set $x_{n+1} = x_1$.
We provide the mixed cycles $C_8^3$ and $C_8^8$ in Figure~\ref{31a} as examples.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
[scale = 0.5,
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (v1) at (0,2) [label = above:$x_1$] {};
\node[slim] (v2) at (1.414,1.414) [label =above right:$x_2$] {};
\node[slim] (v3) at (2,0) [label = right:$x_3$] {};
\node[slim] (v4) at (1.414,-1.414) [label =below right:$x_4$] {};
\node[slim] (v5) at (0,-2) [label = below:$x_5$] {};
\node[slim] (v6) at (-1.414,-1.414) [label = below left:$x_6$] {};
\node[slim] (v7) at (-2,0) [label = left:$x_7$] {};
\node[slim] (v8) at (-1.414,1.414) [label = above left:$x_8$] {};
\draw[line width = 1pt][->] (v2) to (v3);
\draw[line width = 1pt] (v4) to (v5);
\draw[line width = 1pt] (v6) to (v5);
\draw[line width = 1pt] (v7) to (v6);
\draw[line width = 1pt] (v7) to (v8);
\draw[line width = 1pt] (v8) to (v1);
\draw[line width = 1pt][->] (v3) to (v4);
\draw[line width = 1pt][->] (v1) to (v2);
\end{tikzpicture}
$\qquad$
\begin{tikzpicture}
[scale = 0.5,
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (v1) at (0,2) [label = above:$x_1$] {};
\node[slim] (v2) at (1.414,1.414) [label =above right:$x_2$] {};
\node[slim] (v3) at (2,0) [label = right:$x_3$] {};
\node[slim] (v4) at (1.414,-1.414) [label =below right:$x_4$] {};
\node[slim] (v5) at (0,-2) [label = below:$x_5$] {};
\node[slim] (v6) at (-1.414,-1.414) [label = below left:$x_6$] {};
\node[slim] (v7) at (-2,0) [label = left:$x_7$] {};
\node[slim] (v8) at (-1.414,1.414) [label = above left:$x_8$] {};
\draw[line width = 1pt][->] (v2) to (v3);
\draw[line width = 1pt][->] (v4) to (v5);
\draw[line width = 1pt][->] (v5) to (v6);
\draw[line width = 1pt][->] (v6) to (v7);
\draw[line width = 1pt][->] (v7) to (v8);
\draw[line width = 1pt][->] (v8) to (v1);
\draw[line width = 1pt][->] (v3) to (v4);
\draw[line width = 1pt][->] (v1) to (v2);
\end{tikzpicture}
\caption{The mixed cycles $C_8^3$ and $C_8^8$} \label{31a}
\end{center}
\end{figure}
Note that $C_n^0$ is the undirected cycle of length $n$.
We will show that any mixed cycle is $H_{\eta}$-cospectral with a mixed cycle of some type,
and that mixed cycles of different types have different spectra except for a finite number of $\eta$.
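The matrix $H_{\eta}(C_n^j)$ is easy to assemble directly from the definition; the following helper (our illustration, not from the original) does so and checks two immediate consequences.

```python
import numpy as np

def H_cycle(n, j, eta):
    """eta-Hermitian adjacency matrix of the mixed cycle C_n^j of type j.

    The pairs (x_1,x_2), ..., (x_j,x_{j+1}) are directed arcs; the
    remaining n - j pairs are undirected edges.  Vertices are 0-based.
    """
    H = np.zeros((n, n), dtype=complex)
    w = np.exp(1j * eta)
    for k in range(n):
        u, v = k, (k + 1) % n
        if k < j:                      # directed arc
            H[u, v], H[v, u] = w, w.conjugate()
        else:                          # undirected edge
            H[u, v] = H[v, u] = 1
    return H

# C_n^0 is the undirected cycle, so it does not depend on eta at all
assert np.allclose(H_cycle(5, 0, 0.3), H_cycle(5, 0, 2.1))

# H is Hermitian, hence its spectrum is real
spec = np.linalg.eigvalsh(H_cycle(8, 3, 0.3))
assert np.allclose(spec.imag, 0)
```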
\begin{lem} \label{02}
{\it
Let $G = (V, \MC{A})$ be a mixed cycle of length $n$,
and let $x, v_1, v_2, \dots, v_l, y \in V$.
Suppose that $(x,v_1) \in \MC{A} \setminus \MC{A}^{-1}$,
that $(v_1, v_2), (v_2, v_3), \dots, (v_{l-1}, v_l) \in \MC{A} \cap \MC{A}^{-1}$,
and that $(v_l, y) \in \MC{A}^{-1} \setminus \MC{A}$.
Then, the mixed cycle $(V, \MC{A} \cup \{ (x,v_1)^{-1}, (v_l, y) \} )$ is obtained from $G$ by switching.
}
\end{lem}
\begin{proof}
We proceed by induction on $l$.
For $l = 1$,
applying Sw.2$'$ with respect to $v_1$ yields the statement.
Suppose that the statement holds in the case of $l - 1$.
Applying Sw.4$'$ with respect to $v_1$,
the switched mixed cycle is in the situation of the case $l-1$,
so the statement follows from the induction hypothesis.
\end{proof}
\begin{pro} \label{03}
{\it
Let $G = (V, \MC{A})$ be a mixed cycle of length $n$.
Then, there exists $j \in \{0,1,\dots, n\}$ such that $G$ and $C_n^j$ are $H_{\eta}$-cospectral.
}
\end{pro}
\begin{proof}
By Lemma~\ref{02},
we have a switched mixed cycle $G'$ such that
the directions of all arcs are aligned clockwise or anticlockwise.
Applying Sw.4$'$ many times, arcs in the graph are replaced consecutively.
This graph is nothing but $C_n^j$ for some $j \in \{0,1,\dots, n\}$.
\end{proof}
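As an illustration (ours), Proposition~\ref{03} can be checked numerically: a randomly chosen mixed cycle is cospectral with $C_n^j$ for at least one $j$.

```python
import numpy as np

eta = 0.7
n = 8
w = np.exp(1j * eta)
rng = np.random.default_rng(1)

# a random mixed cycle: each pair {x_k, x_{k+1}} becomes a forward arc,
# a backward arc, or an undirected edge
H = np.zeros((n, n), dtype=complex)
for k in range(n):
    u, v = k, (k + 1) % n
    entry = rng.choice(np.array([w, w.conjugate(), 1.0 + 0j]))
    H[u, v], H[v, u] = entry, np.conjugate(entry)

def H_type(j):
    """eta-Hermitian adjacency matrix of C_n^j."""
    M = np.zeros((n, n), dtype=complex)
    for k in range(n):
        u, v = k, (k + 1) % n
        M[u, v] = w if k < j else 1
        M[v, u] = np.conjugate(M[u, v])
    return M

spec = np.linalg.eigvalsh(H)
matches = [j for j in range(n + 1)
           if np.allclose(spec, np.linalg.eigvalsh(H_type(j)))]
assert matches    # the proposition guarantees at least one cospectral type
```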
Figure~\ref{30} illustrates how switching transforms a mixed cycle into the mixed cycle of type $3$.
Note that Sw.3$'$ is not strictly necessary;
however, it can be used to obtain $C_n^j$ with fewer switchings in some cases.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
[scale = 0.5,
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (v1) at (0,2) [label = above:$v_1$] {};
\node[slim] (v2) at (1.414,1.414) [label =above right:$v_2$] {};
\node[slim] (v3) at (2,0) [label = right:$v_3$] {};
\node[slim] (v4) at (1.414,-1.414) [label =below right:$v_4$] {};
\node[slim] (v5) at (0,-2) [label = below:$v_5$] {};
\node[slim] (v6) at (-1.414,-1.414) [label = below left:$v_6$] {};
\node[slim] (v7) at (-2,0) [label = left:$v_7$] {};
\node[slim] (v8) at (-1.414,1.414) [label = above left:$v_8$] {};
\draw[line width = 1pt][->] (v2) to (v3);
\draw[line width = 1pt][->] (v4) to (v5);
\draw[line width = 1pt] (v6) to (v5);
\draw[line width = 1pt][->] (v7) to (v6);
\draw[line width = 1pt] (v7) to (v8);
\draw[line width = 1pt][->] (v8) to (v1);
\draw[line width = 1pt][->] (v3) to (v4);
\draw[line width = 1pt] (v1) -- (v2);
\end{tikzpicture} \raisebox{15mm}{$\xrightarrow[\text{on $v_6$}]{\text{using Sw.4$'$}}$}
\begin{tikzpicture}
[scale = 0.5,
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (v1) at (0,2) [label = above:$v_1$] {};
\node[slim] (v2) at (1.414,1.414) [label =above right:$v_2$] {};
\node[slim] (v3) at (2,0) [label = right:$v_3$] {};
\node[slim] (v4) at (1.414,-1.414) [label =below right:$v_4$] {};
\node[slim] (v5) at (0,-2) [label = below:$v_5$] {};
\node[slim] (v6) at (-1.414,-1.414) [label = below left:$v_6$] {};
\node[slim] (v7) at (-2,0) [label = left:$v_7$] {};
\node[slim] (v8) at (-1.414,1.414) [label = above left:$v_8$] {};
\draw[line width = 1pt][->] (v2) to (v3);
\draw[line width = 1pt][->] (v3) to (v4);
\draw[line width = 1pt] (v7) to (v6);
\draw[line width = 1pt] (v7) to (v8);
\draw[line width = 1pt][->] (v8) to (v1);
\draw[line width = 1pt] (v1) -- (v2);
\draw[line width = 1pt][->] (v4) to (v5);
\draw[line width = 1pt][->] (v6) to (v5);
\end{tikzpicture}\\
\raisebox{15mm}{$\xrightarrow[\text{on $v_5$}]{\text{using Sw.2$'$}}$}
\begin{tikzpicture}
[scale = 0.5,
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (v1) at (0,2) [label = above:$v_1$] {};
\node[slim] (v2) at (1.414,1.414) [label =above right:$v_2$] {};
\node[slim] (v3) at (2,0) [label = right:$v_3$] {};
\node[slim] (v4) at (1.414,-1.414) [label =below right:$v_4$] {};
\node[slim] (v5) at (0,-2) [label = below:$v_5$] {};
\node[slim] (v6) at (-1.414,-1.414) [label = below left:$v_6$] {};
\node[slim] (v7) at (-2,0) [label = left:$v_7$] {};
\node[slim] (v8) at (-1.414,1.414) [label = above left:$v_8$] {};
\draw[line width = 1pt][->] (v2) to (v3);
\draw[line width = 1pt][->] (v3) to (v4);
\draw[line width = 1pt][->] (v8) to (v1);
\draw[line width = 1pt] (v1) -- (v2);
\draw[line width = 1pt] (v4) -- (v5) -- (v6) -- (v7) -- (v8);
\end{tikzpicture}
\raisebox{15mm}{$\xrightarrow[\text{on $v_1$}]{\text{using Sw.4$'$}}$}
\begin{tikzpicture}
[scale = 0.5,
slim/.style={circle,fill=black, inner sep = 0.8mm},
]
\node[slim] (v1) at (0,2) [label = above:$v_1$] {};
\node[slim] (v2) at (1.414,1.414) [label =above right:$v_2$] {};
\node[slim] (v3) at (2,0) [label = right:$v_3$] {};
\node[slim] (v4) at (1.414,-1.414) [label =below right:$v_4$] {};
\node[slim] (v5) at (0,-2) [label = below:$v_5$] {};
\node[slim] (v6) at (-1.414,-1.414) [label = below left:$v_6$] {};
\node[slim] (v7) at (-2,0) [label = left:$v_7$] {};
\node[slim] (v8) at (-1.414,1.414) [label = above left:$v_8$] {};
\draw[line width = 1pt][->] (v2) to (v3);
\draw[line width = 1pt][->] (v3) to (v4);
\draw[line width = 1pt][->] (v1) to (v2);
\draw[line width = 1pt] (v4) -- (v5) -- (v6) -- (v7) -- (v8) -- (v1);
\end{tikzpicture}
\caption{Switching a mixed cycle} \label{30}
\end{center}
\end{figure}
\subsection{Determinants of $H_{\eta}(C_n^j)$}
Next, we show that different types of mixed cycles have different spectra.
We focus on the constant term of the characteristic polynomial of $H_{\eta}(C_n^j)$.
Let $P_n$ be the undirected path graph on $n$ vertices.
\begin{lem} \label{50}
{\it
We have
\[ \det H_{\eta}(P_n) = (-1)^{\lfloor \frac{n}{2} \rfloor} \frac{1+(-1)^n}{2}. \]
}
\end{lem}
\begin{proof}
We will show that
\[ \det H_{\eta}(P_n) = \begin{cases}
1 \qquad &\text{if $n \equiv 0 \pmod 4$,} \\
-1 \qquad &\text{if $n \equiv 2 \pmod 4$,} \\
0 \qquad &\text{if $n \equiv 1,3 \pmod 4$.} \\
\end{cases} \]
The determinant is
\[
\det(H_{\eta}(P_{n})) =
\begin{vmatrix}
0 & 1 & 0 & \cdots & \cdots & \cdots & 0 \\
1 & 0 & 1 & 0 & \cdots & \cdots & 0 \\
0 & 1 & 0 & 1 & 0 & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & & \vdots \\
\vdots & & \ddots & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & 0 & 1 & 0 & 1 \\
0 & \cdots & \cdots & \cdots & 0 & 1 & 0
\end{vmatrix}.
\]
We first apply the cofactor expansion along the first row,
and we then apply it again along the first column.
We have $\det(H_{\eta}(P_{n})) = - \det(H_{\eta}(P_{n-2}))$.
Since $\det H_{\eta}(P_1) = 0$ and $\det H_{\eta}(P_2) = -1$,
we have the statement.
\end{proof}
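Lemma~\ref{50} is easy to confirm numerically (our check); note that the matrix $H_{\eta}(P_n)$ does not involve $\eta$ at all.

```python
import numpy as np

def det_path(n):
    """Determinant of the adjacency matrix of the undirected path P_n."""
    H = np.zeros((n, n))
    for k in range(n - 1):
        H[k, k + 1] = H[k + 1, k] = 1
    return np.linalg.det(H)

# compare with the closed formula of the lemma for small n
for n in range(1, 13):
    predicted = (-1) ** (n // 2) * (1 + (-1) ** n) / 2
    assert abs(det_path(n) - predicted) < 1e-9
```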
\begin{pro} \label{35}
{\it
We have
\[ \det H_{\eta}(C_n^j) = (-1)^{n+1} 2 \cos (\eta j) + (-1)^{\lfloor \frac{n}{2} \rfloor} (1 + (-1)^n). \]
}
\end{pro}
\begin{proof}
We calculate $\det H_{\eta}(C_{n}^{j})$,
which is
\[
\begin{vmatrix}
0 & e^{\eta i} & 0 & \cdots & \cdots & \cdots & \cdots & 0 & 1 \\
e^{-\eta i} & 0 & e^{\eta i} & 0 & \cdots & \cdots & \cdots & \cdots & 0 \\
0 & \ddots & \ddots & \ddots & & & & & \vdots \\
\vdots & & e^{-\eta i} & 0 & e^{\eta i} & & & & \vdots \\
\vdots & & & e^{-\eta i} & 0 & 1 & & & \vdots \\
\vdots & & & & 1 & 0 & 1 & & \vdots \\
\vdots & & & & & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & \cdots & \cdots & 0 & 1 & 0 & 1 \\
1 & 0 & \cdots & \cdots & \cdots & \cdots & 0 & 1 & 0
\end{vmatrix}.
\]
By the cofactor expansion along the first row,
we have $\det H_{\eta}(C_n^j) = -e^{\eta i}D_1 + (-1)^{n+1}D_2$, where
\[ D_1 =
\begin{vmatrix}
e^{-\eta i} & e^{\eta i} & 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 & 0 \\
0 & 0 & e^{\eta i} & 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 \\
\vdots & e^{-\eta i} & 0 & e^{\eta i} & & & & & & \vdots \\
\vdots & & \ddots & \ddots & \ddots & & & & & \vdots \\
\vdots & & & e^{-\eta i} & 0 & e^{\eta i} & & & & \vdots \\
\vdots & & & & e^{-\eta i} & 0 & 1 & & & \vdots \\
\vdots & & & & & 1 & 0 & 1 & & \vdots \\
\vdots & & & & & & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 & 1 & 0 & 1 \\
1 & 0 & \cdots & \cdots & \cdots & \cdots & \cdots & 0 & 1 & 0
\end{vmatrix},
\]
and
\[ D_2 =
\begin{vmatrix}
e^{-\eta i} & 0 & e^{\eta i} & 0 & \cdots & \cdots & \cdots & \cdots & 0 & 0 \\
0 & e^{-\eta i} & 0 & e^{\eta i} & \cdots & \cdots & \cdots & \cdots & \cdots & 0 \\
\vdots & & e^{-\eta i} & 0 & e^{\eta i} & & & & & \vdots \\
\vdots & & & \ddots & \ddots & \ddots & & & & \vdots \\
\vdots & & & & e^{-\eta i} & 0 & e^{\eta i} & & & \vdots \\
\vdots & & & & & e^{-\eta i} & 0 & 1 & & \vdots \\
\vdots & & & & & & 1 & 0 & \ddots & \vdots \\
\vdots & & & & & & & \ddots & \ddots & 1 \\
0 & \cdots & \cdots & \cdots & \cdots & \cdots & & 0 & 1 & 0 \\
1 & 0 & \cdots & \cdots & \cdots & \cdots & \cdots & & 0 & 1
\end{vmatrix}.
\]
We apply the cofactor expansion along the first column to $D_1$.
By Corollary~\ref{60}, we have
\begin{align*}
D_1 &= (-1)^{1+1} e^{-\eta i} \det H_{\eta}(P_{n-2})+ (-1)^{(n-1)+1} e^{(j-1)\eta i} \\
&= e^{-\eta i} \det H_{\eta}(P_{n-2}) + (-1)^{n} e^{(j-1) \eta i}.
\end{align*}
Applying the cofactor expansion along the first column to $D_2$,
we have
\begin{align*}
D_2 &= (-1)^{1+1}e^{-\eta i} \cdot e^{-(j-1)\eta i} + (-1)^{(n-1) + 1} \det H_{\eta}(P_{n-2}) \\
&= e^{-j \eta i} + (-1)^{n} \det H_{\eta}(P_{n-2}).
\end{align*}
Therefore,
\begin{align*}
\det H_{\eta}(C_n^j) &= -e^{\eta i}D_1 + (-1)^{n+1}D_2 \\
&= (-1)^{n+1} 2 \cos (\eta j) + (-2) \det H_{\eta}(P_{n-2}) \\
&= (-1)^{n+1} 2 \cos (\eta j) + (-2) \cdot (-1)^{\lfloor \frac{n-2}{2} \rfloor} \frac{1+(-1)^{n-2}}{2} \tag{by Lemma~\ref{50}} \\
&= (-1)^{n+1} 2 \cos (\eta j) + (-1)^{\lfloor \frac{n}{2} \rfloor} (1 + (-1)^n).
\end{align*}
\end{proof}
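A numerical confirmation (ours) of the determinant formula of Proposition~\ref{35}, with an arbitrary $\eta$:

```python
import numpy as np

def det_mixed_cycle(n, j, eta):
    """Determinant of H_eta(C_n^j), computed directly from the matrix."""
    w = np.exp(1j * eta)
    H = np.zeros((n, n), dtype=complex)
    for k in range(n):
        u, v = k, (k + 1) % n
        H[u, v] = w if k < j else 1
        H[v, u] = np.conjugate(H[u, v])
    return np.linalg.det(H)

eta = 0.7
for n in range(3, 9):
    for j in range(n + 1):
        predicted = ((-1) ** (n + 1) * 2 * np.cos(eta * j)
                     + (-1) ** (n // 2) * (1 + (-1) ** n))
        assert abs(det_mixed_cycle(n, j, eta) - predicted) < 1e-8
```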
Propositions~\ref{03} and~\ref{35} together yield Theorem~\ref{Main1},
which is our first main theorem.
In addition,
Proposition~\ref{35} yields that, except for a finite number of $\eta$,
\[ \det (\lambda I - H_{\eta}(C_n^j)) \neq \det (\lambda I - H_{\eta}(C_n^k)) \]
for $j \neq k$.
We now elaborate on the phrase ``except for a finite number of $\eta$''.
For example, we consider $\eta = \frac{\pi}{2}$ and the mixed cycles of length 4.
We have
\begin{align*}
\det H_{\frac{\pi}{2}}(C_4^j) &= 2 -2\cos \frac{j \pi}{2} \\
&= \begin{cases}
0 \qquad &\text{if $j \in \{0, 4\}$,} \\
2 \qquad &\text{if $j \in \{1, 3\}$,} \\
4 \qquad &\text{if $j=2$.}
\end{cases}
\end{align*}
As this example shows,
when $\eta \in \MB{Q}\pi$ and the denominator of $\eta / \pi$ is small relative to a given $n$,
the determinants can coincide for mixed cycles of different types.
For a given $n$, there are only finitely many such $\eta$.
This is why only three types of mixed cycles appeared in the study of \cite{AGNN}.
\subsection{Characteristic polynomials of $H_{\eta}(C_n^j)$}
On the other hand,
if the determinants are the same, then the characteristic polynomials of the mixed cycles are in fact equal.
We now determine the characteristic polynomial of $C_n^j$.
An {\it elementary subgraph} of an undirected graph $\Gamma$ is
a subgraph of $\Gamma$ such that every component is either $K_2$ or an undirected cycle.
Let $\MC{H}(\Gamma)$ denote the set of all the elementary subgraphs of $\Gamma$,
and let $\MC{H}_l(\Gamma)$ denote the set of all the elementary subgraphs of $\Gamma$ on $l$ vertices.
The {\it rank} and {\it corank} of an undirected graph $\Gamma = (V, E)$ are, respectively,
$r(\Gamma) = |V| - c$ and $s(\Gamma) = |E| - |V| + c$,
where $c$ is the number of components of $\Gamma$.
Let $C_n$ be the undirected cycle graph on $n$ vertices.
\begin{pro}
{\it
We have
\[ \det (\lambda I - H_{\eta}(C_n^j)) = \sum_{k = 0}^{\lfloor \frac{n-1}{2} \rfloor}
(-1)^k \frac{n}{n-k} \binom{n-k}{k} \lambda^{n-2k} - 2 \cos (\eta j) + (-1)^{\lfloor \frac{n}{2} \rfloor} (1 + (-1)^n). \]
}
\end{pro}
\begin{proof}
Let
\begin{align*}
\det (\lambda I - H_{\eta}(C_n^j)) &= \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1} \lambda + a_n, \\
\det (\lambda I - H_{\eta}(C_n)) &= \lambda^n + b_1 \lambda^{n-1} + \cdots + b_{n-1} \lambda + b_n.
\end{align*}
Fix $l \in \{1, \dots, n-1 \}$.
Since the girth of $C_n^j$ is $n$,
we have $a_l = b_l$ by Proposition~\ref{10}.
Also,
\begin{equation} \label{35a}
\MC{H}_l(C_n) =
\begin{cases}
\{ \Gamma' \in \MC{H}(C_n) \mid \Gamma' \simeq \tfrac{l}{2} K_2 \} \quad &\text{if $l$ is even,} \\
\emptyset \quad &\text{if $l$ is odd,}
\end{cases}
\end{equation}
where $\Gamma' \simeq \tfrac{l}{2} K_2$ denotes that the graph $\Gamma'$ is isomorphic to
the disjoint union of the $\frac{l}{2}$ complete graphs $K_2$.
Thus, $b_l = 0$ if $l$ is odd.
In addition, if $l$ is even,
\begin{align*}
b_l &= \sum_{\Gamma' \in \MC{H}_l(C_n)} (-1)^{r(\Gamma')} 2^{s(\Gamma')} \tag{by Proposition~7.1 in \cite{B}} \\
&= \sum_{\Gamma' \in \MC{H}_l (C_n)} (-1)^{\frac{l}{2}} \tag{by (\ref{35a})} \\
&= (-1)^{\frac{l}{2}} \cdot |\{ \text{$\tfrac{l}{2}$-matchings in $C_n$} \}| \\
&= (-1)^{\frac{l}{2}} \binom{n-\frac{l}{2}}{\frac{l}{2}}. \tag{by an exercise on p.~14 of \cite{G}}
\end{align*}
By Proposition~\ref{35}, the characteristic polynomial $\det (\lambda I - H_{\eta}(C_n^j))$ is
\begin{align*}
& \, \sum_{k = 0}^{\lfloor \frac{n-1}{2} \rfloor} (-1)^k \frac{n}{n-k} \binom{n-k}{k} \lambda^{n-2k}
+ (-1)^n \{ (-1)^{n+1} 2 \cos (\eta j) + (-1)^{\lfloor \frac{n}{2} \rfloor} (1 + (-1)^n) \} \\
=& \sum_{k = 0}^{\lfloor \frac{n-1}{2} \rfloor} (-1)^k \frac{n}{n-k} \binom{n-k}{k} \lambda^{n-2k}
-2 \cos (\eta j) + (-1)^{\lfloor \frac{n}{2} \rfloor + n} (1 + (-1)^n) \\
=& \sum_{k = 0}^{\lfloor \frac{n-1}{2} \rfloor} (-1)^k \frac{n}{n-k} \binom{n-k}{k} \lambda^{n-2k}
-2 \cos (\eta j) + (-1)^{\lfloor \frac{n}{2} \rfloor} (1 + (-1)^n).
\end{align*}
We note that $(-1)^{\lfloor \frac{n}{2} \rfloor + n}$ and $(-1)^{\lfloor \frac{n}{2} \rfloor}$ differ when $n$ is odd,
but $(-1)^{\lfloor \frac{n}{2} \rfloor + n} (1 + (-1)^n) = (-1)^{\lfloor \frac{n}{2} \rfloor} (1 + (-1)^n)$ holds in the last step,
since the factor $1 + (-1)^n$ vanishes for odd $n$.
We have the statement.
\end{proof}
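A numerical confirmation (ours) of the proposition, comparing the closed formula with the characteristic polynomial computed by NumPy:

```python
import numpy as np
from math import comb

def charpoly(n, j, eta):
    """Coefficients of det(lambda*I - H_eta(C_n^j)), leading coefficient
    first, computed numerically from the matrix."""
    w = np.exp(1j * eta)
    H = np.zeros((n, n), dtype=complex)
    for k in range(n):
        u, v = k, (k + 1) % n
        H[u, v] = w if k < j else 1
        H[v, u] = np.conjugate(H[u, v])
    return np.real(np.poly(H))   # np.poly builds it from the eigenvalues

def charpoly_formula(n, j, eta):
    """The closed formula of the proposition, as a coefficient vector."""
    c = np.zeros(n + 1)
    for k in range((n - 1) // 2 + 1):
        c[2 * k] = (-1) ** k * n / (n - k) * comb(n - k, k)  # lambda^{n-2k}
    c[n] += -2 * np.cos(eta * j) + (-1) ** (n // 2) * (1 + (-1) ** n)
    return c

eta = 0.7
for n in range(3, 9):
    for j in range(n + 1):
        assert np.allclose(charpoly(n, j, eta),
                           charpoly_formula(n, j, eta), atol=1e-8)
```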
\subsection{Mixed graphs as $\MB{T}$-gain graphs}
In this subsection, we briefly touch on gain graphs.
$\eta$-Hermitian adjacency matrices can also be viewed as adjacency matrices of $\MB{T}$-gain graphs.
We refer the reader to \cite{MKS, SK},
whose notation and terminology we follow.
The $\eta$-Hermitian adjacency matrix of a mixed graph $G = (V, \MC{A})$
is the adjacency matrix of the $\MB{T}$-gain graph $\Phi = (G^{\pm}, \MB{T}, \varphi)$
defined by the $\MB{T}$-gain $\varphi : \MC{A}^{\pm} \to \MB{T}$ such that
\[
\varphi(a) =
\begin{cases}
1 \qquad &\text{if $a \in \MC{A} \cap \MC{A}^{-1}$,} \\
e^{\eta i} \qquad &\text{if $a \in \MC{A} \setminus \MC{A}^{-1}$,} \\
e^{-\eta i} \qquad &\text{if $a \in \MC{A}^{-1} \setminus \MC{A}$}
\end{cases}
\]
In \cite{GH, MKS},
the authors discuss the determinants of the adjacency matrices of $\MB{T}$-gain graphs.
In addition, the coefficients of the characteristic polynomials can also be obtained by using principal minors.
The discussion in this section can also be carried out in terms of $\MB{T}$-gain graphs.
\section{Preliminaries on quantum walks defined by mixed graphs} \label{105}
Let $\eta \in [0, 2\pi)$,
and let $G = (V, \MC{A})$ be a mixed graph.
The {\it $\eta$-function} $\theta : \MC{A}^{\pm} \to \MB{R}$ of a mixed graph $G$ is defined by
\[
\theta(a) = \begin{cases}
\eta \qquad &\text{if $a \in \MC{A} \setminus \MC{A}^{-1}$,} \\
-\eta \qquad &\text{if $a \in \MC{A}^{-1} \setminus \MC{A}$,} \\
0 \qquad &\text{if $a \in \MC{A} \cap \MC{A}^{-1}$.}
\end{cases}
\]
Note that $\theta(a^{-1}) = -\theta(a)$ for any $a \in \MC{A}^{\pm}$.
\subsection{Several matrices on quantum walks defined by mixed graphs}
In \cite{KST},
the authors introduced a quantum walk defined by a mixed graph.
Let $G = (V, \MC{A})$ be a mixed graph equipped with an $\eta$-function $\theta$.
We define several matrices (operators) on quantum walks.
The {\it boundary operator} $K = K(G) \in \MB{C}^{V \times \MC{A}^{\pm}}$ is defined by
\[ K_{x,a} = \frac{1}{\sqrt{\deg x}} \delta_{x,t(a)}. \]
The {\it coin operator} $C = C(G) \in \MB{C}^{\MC{A}^{\pm} \times \MC{A}^{\pm}}$ is defined by
$C = 2K^*K-I$.
The {\it shift operator} $S_{\theta} = S_{\theta}(G) \in \MB{C}^{\MC{A}^{\pm} \times \MC{A}^{\pm}}$
is defined by $(S_{\theta})_{ab} = e^{\theta(b)i}\delta_{a,b^{-1}}$.
Define the {\it time evolution matrix} $U_{\theta} = U_{\theta}(G) \in \MB{C}^{\MC{A}^{\pm} \times \MC{A}^{\pm}}$ by $U_{\theta} = S_{\theta} C$.
\begin{lem} \label{22}
{\it
Let $G = (V, \MC{A})$ be a mixed graph equipped with an $\eta$-function $\theta$.
We have
\[ (U_{\theta})_{a,b} = e^{-\theta(a) i} \marukakko{
\frac{2}{\deg_{G} t(b)} \delta_{o(a), t(b)} - \delta_{a, b^{-1}}
} \]
for any $a,b \in \MC{A}^{\pm}$.
}
\end{lem}
\begin{proof}
Indeed,
\begin{align*}
(U_{\theta})_{a,b} &= (2 S_{\theta} K^* K - S_{\theta})_{a,b} \\
&= 2(S_{\theta} K^* K)_{a,b} - (S_{\theta})_{a,b} \\
&= 2 \sum_{z \in \MC{A}^{\pm}} \sum_{x \in V} (S_{\theta})_{a,z} (K^*)_{z, x} K_{x,b} - e^{\theta(b)i} \delta_{a, b^{-1}} \\
&= 2 \sum_{z \in \MC{A}^{\pm}} \sum_{x \in V} e^{\theta(z)i} \frac{1}{\sqrt{\deg x}} \frac{1}{\sqrt{\deg x}} \delta_{a,z^{-1}} \delta_{x, t(z)} \delta_{x, t(b)} - e^{\theta(a^{-1})i} \delta_{a, b^{-1}} \\
&= 2 \sum_{x \in V} e^{\theta(a^{-1})i} \frac{1}{\deg x} \delta_{x, t(a^{-1})} \delta_{x, t(b)} - e^{\theta(a^{-1})i} \delta_{a, b^{-1}} \\
&= \frac{2 e^{\theta(a^{-1})i}}{\deg_{G} t(b)} \delta_{o(a), t(b)} - e^{\theta(a^{-1})i} \delta_{a, b^{-1}} \\
&= e^{- \theta(a) i} \marukakko{ \frac{2}{\deg_{G} t(b)} \delta_{o(a), t(b)} - \delta_{a, b^{-1}} }.
\end{align*}
\end{proof}
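The entrywise formula of Lemma~\ref{22} can be checked against the defining product $U_{\theta} = S_{\theta} C$ on a small example (ours; the mixed graph below has a digon $\{0,1\}$ and arcs $1 \to 3$, $3 \to 2$, $2 \to 1$):

```python
import numpy as np

eta = 0.7
A = {(0, 1), (1, 0), (1, 3), (3, 2), (2, 1)}       # arc set of the mixed graph
Apm = A | {(v, u) for (u, v) in A}                 # A union A^{-1}
arcs = sorted(Apm)
V = sorted({u for a in arcs for u in a})
deg = {x: sum(1 for (u, v) in arcs if v == x) for x in V}   # degree in G^{+-}

def theta(a):
    """eta-function: 0 on digons, eta on arcs of A, -eta on their inverses."""
    u, v = a
    if a in A and (v, u) in A:
        return 0.0
    return eta if a in A else -eta

# boundary, coin, shift, and time evolution operators
K = np.array([[1 / np.sqrt(deg[x]) if x == v else 0.0
               for (u, v) in arcs] for x in V])
C = 2 * K.conj().T @ K - np.eye(len(arcs))
S = np.array([[np.exp(1j * theta(b)) if a == (b[1], b[0]) else 0.0
               for b in arcs] for a in arcs])
U = S @ C

# compare with the entrywise formula of the lemma
for i, a in enumerate(arcs):
    for k, b in enumerate(arcs):
        val = np.exp(-1j * theta(a)) * (
            2 / deg[b[1]] * (a[0] == b[1]) - (a == (b[1], b[0])))
        assert abs(U[i, k] - val) < 1e-12
```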
The following important theorem links quantum walks and spectral graph theory;
we cite it from \cite{KST}.
In \cite{HKSS2014}, Higuchi et al.\ proved a similar claim for more general models.
\begin{thm}[\cite{KST}] \label{21}
{\it
Let $G = (V, \MC{A})$ be a mixed graph equipped with an $\eta$-function $\theta$,
and let $U_{\theta}$ be the time evolution matrix.
Then we have
\[ \Spec(U_{\theta}) = \{ e^{\pm i \cos^{-1} (\lambda)} \mid \lambda \in \Spec(\tilde{H}_{\eta}(G)) \} \cup \{ 1 \}^{M_1} \cup \{-1\}^{M_{-1}}, \]
where
\begin{align*}
M_{1} &= \frac{1}{2}|\MC{A}^{\pm}| - |V| + \dim \ker ( \tilde{H}_{\eta}(G) - I), \\
M_{-1} &= \frac{1}{2}|\MC{A}^{\pm}| - |V| + \dim \ker ( \tilde{H}_{\eta}(G) + I).
\end{align*}
}
\end{thm}
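A numerical sketch (ours) of Theorem~\ref{21} on the $2$-regular mixed cycle $C_4^1$, assuming that $\tilde{H}_{\eta}(G)$ denotes the degree-normalized matrix $D^{-1/2} H_{\eta}(G) D^{-1/2}$ (so $\tilde{H}_{\eta} = H_{\eta}/2$ here); we verify the weaker consequence that the real parts of the eigenvalues of $U_{\theta}$ lie in $\Spec(\tilde{H}_{\eta}) \cup \{\pm 1\}$, and conversely.

```python
import numpy as np

eta = 0.7
n = 4                                    # mixed cycle C_4^1, which is 2-regular
w = np.exp(1j * eta)
H = np.zeros((n, n), dtype=complex)
for k in range(n):
    u, v = k, (k + 1) % n
    H[u, v] = w if k < 1 else 1
    H[v, u] = np.conjugate(H[u, v])
H_tilde = H / 2                          # D^{-1/2} H D^{-1/2} for a 2-regular graph

# time evolution matrix via the entrywise formula of Lemma 22 (deg = 2)
arcs = [(u, v) for u in range(n) for v in range(n) if H[u, v] != 0]
def theta(a):
    return float(np.angle(H[a]))         # theta(a) is eta, -eta or 0
U = np.array([[np.exp(-1j * theta(a)) *
               ((2 / 2) * (a[0] == b[1]) - (a == (b[1], b[0])))
               for b in arcs] for a in arcs])

mu = np.linalg.eigvals(U)
lam = np.linalg.eigvalsh(H_tilde)

# every eigenvalue of U has modulus 1 and real part in Spec(H_tilde) or {1,-1}
assert np.allclose(np.abs(mu), 1)
for m_ in mu:
    assert min(abs(m_.real - t) for t in list(lam) + [1.0, -1.0]) < 1e-8
# conversely, each eigenvalue of H_tilde is realized as such a real part
for t in lam:
    assert min(abs(m_.real - t) for m_ in mu) < 1e-8
```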
The operators (matrices) used in our quantum walks are summarized in Table~\ref{1000},
where $G = (V, \MC{A})$ is a mixed graph equipped with an $\eta$-function $\theta$.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Notation & Name & Indices of rows and columns & Definition \\
\hline
\hline
$K$ & Boundary & $V \times \MC{A}^{\pm}$ & $K_{x,a} = \frac{1}{ \sqrt{\deg x} } \delta_{x, t(a)}$ \\
\hline
$C$ & Coin & $\MC{A}^{\pm} \times \MC{A}^{\pm}$ &$ C = 2K^*K - I$ \\
\hline
$S_{\theta}$ & Shift & $\MC{A}^{\pm} \times \MC{A}^{\pm}$ & $(S_{\theta})_{ab} = e^{\theta(b)i}\delta_{a,b^{-1}}$ \\
\hline
$U_{\theta}$ & Time evolution & $\MC{A}^{\pm} \times \MC{A}^{\pm}$ & $U_{\theta} = S_{\theta} C$ \\
\hline
\end{tabular}
\caption{The operators (matrices) used in our quantum walk} \label{1000}
\end{table}
\subsection{Necessary and sufficient conditions on periodicity}
Let $U_{\theta}$ be a time evolution matrix of a mixed graph $G$ equipped with an $\eta$-function $\theta$.
We say that $G$ is {\it periodic} if there exists $\tau \in \MB{N}$ such that $U_{\theta}^{\tau} = I$.
When the mixed graph $G$ is periodic,
the {\it period} is defined by $\min \{ \tau \in \MB{N} \mid U_{\theta}^{\tau} = I \}$.
\begin{lem} \label{13}
{\it
Let $U_{\theta}$ be a time evolution matrix of a mixed graph $G$ equipped with an $\eta$-function $\theta$.
Then, we have
\begin{equation} \label{70}
\{ \tau \in \MB{N} \mid U_{\theta}^{\tau} = I \} =
\{ \tau \in \MB{N} \mid \lambda^{\tau} = 1 \text{ \emph{for any} $\lambda \in \Spec(U_{\theta})$} \}.
\end{equation}
In particular, $G$ is periodic if and only if
there exists $\tau \in \MB{N}$ such that $\lambda^{\tau} = 1$ holds
for any eigenvalue $\lambda$ of $U_{\theta}$.
}
\end{lem}
\begin{proof}
Since $U_{\theta}$ is unitary,
there exists a unitary matrix $Q$ such that
\[ Q^{*} U_{\theta} Q = \diag (\lambda_1, \cdots, \lambda_{2m}), \]
where $m$ is the number of edges of $G^{\pm}$.
Thus for $\tau \in \MB{N}$,
\[ Q^{*} U_{\theta}^{\tau} Q = \diag (\lambda_1^{\tau}, \cdots, \lambda_{2m}^{\tau}). \]
This implies~(\ref{70}).
\end{proof}
Define $\BM{e}^{(a)} \in \MB{C}^{\MC{A}^{\pm}}$ by $(\BM{e}^{(a)})_z = \delta_{a,z}$.
Let $\MC{E}_{\MC{A}^{\pm}} = \{ \BM{e}^{(a)} \mid a \in \MC{A}^{\pm} \}$.
This is the canonical basis of $\MB{C}^{\MC{A}^{\pm}}$.
\begin{lem} \label{55}
{\it
Let $U_{\theta}$ be a time evolution matrix of a mixed graph $G$ equipped with an $\eta$-function $\theta$.
Then, we have
\[ \{ \tau \in \MB{N} \mid U_{\theta}^{\tau} = I \} =
\{ \tau \in \MB{N} \mid U_{\theta}^{\tau}\BM{e}^{(a)} = \BM{e}^{(a)}
\text{ \emph{for any} $\BM{e}^{(a)} \in \MC{E}_{\MC{A}^{\pm}}$} \}.\]
In particular,
$G$ is periodic if and only if
there exists $\tau \in \MB{N}$ such that $U_{\theta}^{\tau}\BM{e}^{(a)} = \BM{e}^{(a)}$ holds
for any vector $\BM{e}^{(a)} \in \MC{E}_{\MC{A}^{\pm}}$.
}
\end{lem}
\begin{proof}
It is clear that the left-hand side is included in the right-hand side.
We show the reverse inclusion.
Suppose $U_{\theta}^{\tau}\BM{e}^{(a)} = \BM{e}^{(a)}$ for any vector $\BM{e}^{(a)} \in \MC{E}_{\MC{A}^{\pm}}$ and for some $\tau \in \MB{N}$.
Number the arc set $\MC{A}^{\pm}$ as $a_1, a_2, \dots, a_{2m}$,
where $m$ is the number of edges of $G^{\pm}$.
Let $P = [ \BM{e}^{(a_1)} \, \BM{e}^{(a_2)} \, \cdots \, \BM{e}^{(a_{2m})} ]$.
Then, we have $U_{\theta}^{\tau} P = P$.
Since $P$ is invertible, $U_{\theta}^{\tau} = I$ holds.
\end{proof}
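As a small worked example (ours) of periodicity, consider the undirected cycle $C_4$, where the $\eta$-function vanishes identically; by Theorem~\ref{21} the spectrum of its time evolution matrix lies in $\{\pm 1, \pm i\}$, and one checks that the period is $4$.

```python
import numpy as np

# walk on the undirected cycle C_4 (theta identically 0, all degrees 2)
n = 4
arcs = ([(k, (k + 1) % n) for k in range(n)]
        + [((k + 1) % n, k) for k in range(n)])

# time evolution matrix via the entrywise formula of Lemma 22:
# U_{a,b} = (2/2) * delta_{o(a), t(b)} - delta_{a, b^{-1}}
U = np.array([[1.0 * (a[0] == b[1]) - (a == (b[1], b[0]))
               for b in arcs] for a in arcs])

# the walk is periodic, and the period is 4
period = next(t for t in range(1, 5)
              if np.allclose(np.linalg.matrix_power(U, t), np.eye(len(arcs))))
assert period == 4
```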
\section{Periodicity of mixed paths} \label{106}
In this section,
we discuss the periodicity of mixed paths.
As a preparation,
we introduce notation for expressing the dynamics of quantum walks.
Let $G = (V, \MC{A})$ be a mixed graph equipped with an $\eta$-function $\theta$.
We write the components of a vector $\Psi \in \MB{C}^{\MC{A}^{\pm}}$
on the arcs of the graph as in Figure~\ref{48}.
If a component of $\Psi$ is $0$,
we omit the corresponding arc itself.
If a component of $\Psi$ is $1$,
we may omit the value on the corresponding arc.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[u] (1) at (-1.2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (3, 0) {};
\node[u] (4) at (-1.2, 0.68) {};
\node[u] (5) at (-1.2, -0.68) {};
\node[u] (7) at (4.2, 0) {};
\node[u] (8) at (4.2, 0.68) {};
\node[u] (9) at (4.2, -0.68) {};
\node[u] (12) at (3.3, 0.2) {};
\node[u] (13) at (5.7, 0.2) {};
\draw (0,-0.5) node{$x$};
\draw (3,-0.5) node{$y$};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (2) to (3);
\draw[-] (5) to (2);
\node[u] (10) at (0.3, 0.2) {};
\node[u] (11) at (2.7, 0.2) {};
\draw[draw= blue,->] (10) to (11);
\node[u] (20) at (0.3, -0.2) {};
\node[u] (21) at (2.7, -0.2) {};
\draw[draw= blue,->] (21) to (20);
\draw[-] (3) to (7);
\draw[-] (3) to (8);
\draw[-] (3) to (9);
\draw (1.5,0.6) node[blue]{$\Psi_{(x,y)}$};
\draw (1.5,-0.6) node[blue]{$\Psi_{(y,x)}$};
\end{tikzpicture}
\caption{The components of a vector written on the arcs of the graph} \label{48}
\end{center}
\end{figure}
We provide an example.
Let $G = (V, \MC{A})$ be the mixed graph in Figure~\ref{a01}.
We consider a general $\eta \in [0, 2\pi)$ and the $\eta$-function $\theta$.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}
[scale = 0.8,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm}]
\node[v] (1) at (-3, 0) {};
\draw (-3,0.5) node {$v_1$};
\node[v] (2) at (0, 0) {};
\draw (0,0.5) node {$v_2$};
\node[v] (3) at (3, 1.7) {};
\draw (3.5,1.7) node {$v_3$};
\node[v] (4) at (3, -1.7) {};
\draw (3.5,-1.7) node {$v_4$};
\draw[->] (1) to [bend left = 15] (2);
\draw[->] (2) to [bend left = 15] (1);
\draw[->] (2) to (4);
\draw[->] (4) to (3);
\draw[->] (3) to (2);
\end{tikzpicture}
\caption{Mixed graph $G$} \label{a01}
\end{center}
\end{figure}
We focus on the vector $\BM{e}^{((v_3, v_2))} \in \MC{E}_{\MC{A}^{\pm}}$.
The actions of the coin operator $C$ and the shift operator $S_{\theta}$ are shown in Figures~\ref{40} and~\ref{41}.
Since $U_{\theta} = S_{\theta} C$,
the action of the time evolution matrix $U_{\theta}$ is as in Figure~\ref{42}.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-3, 0) {};
\draw (-3,0.5) node {$v_1$};
\node[v] (2) at (0, 0) {};
\draw (0,0.5) node {$v_2$};
\node[v] (3) at (3, 1.7) {};
\draw (3.5,1.7) node {$v_3$};
\node[v] (4) at (3, -1.7) {};
\draw (3.5,-1.7) node {$v_4$};
\node[u] (5) at (0.3, 0.37) {};
\node[u] (6) at (2.7, 1.73) {};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (4) to (3);
\draw[-] (3) to (2);
\draw[draw= blue,->] (6) to (5);
\end{tikzpicture}
\raisebox{13mm}{$\quad \overset{C}{\mapsto} \quad$}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-3, 0) {};
\draw (-3,0.5) node {$v_1$};
\node[v] (2) at (0, 0) {};
\draw (0,0.5) node {$v_2$};
\node[v] (3) at (3, 1.7) {};
\draw (3.5,1.7) node {$v_3$};
\node[v] (4) at (3, -1.7) {};
\draw (3.5,-1.7) node {$v_4$};
\node[u] (5) at (0.3, 0.37) {};
\node[u] (6) at (2.7, 1.73) {};
\node[u] (7) at (-2.7, 0.2) {};
\node[u] (8) at (-0.3, 0.2) {};
\node[u] (9) at (0.3, -0.37) {};
\node[u] (10) at (2.7, -1.73) {};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (4) to (3);
\draw[-] (3) to (2);
\draw[draw= blue,->] (6) to (5);
\draw[draw= blue,->] (7) to (8);
\draw[draw= blue,->] (10) to (9);
\draw (-1.5,0.75) node[blue] {$\tfrac{2}{3}$};
\draw (1.1,1.5) node[blue] {$-\tfrac{1}{3}$};
\draw (1.2,-1.5) node[blue] {$\tfrac{2}{3}$};
\end{tikzpicture}
\caption{Action of $C$} \label{40}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-3, 0) {};
\draw (-3,0.5) node {$v_1$};
\node[v] (2) at (0, 0) {};
\draw (0,0.5) node {$v_2$};
\node[v] (3) at (3, 1.7) {};
\draw (3.5,1.7) node {$v_3$};
\node[v] (4) at (3, -1.7) {};
\draw (3.5,-1.7) node {$v_4$};
\node[u] (5) at (0.3, 0.37) {};
\node[u] (6) at (2.7, 1.73) {};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (4) to (3);
\draw[-] (3) to (2);
\draw[draw= blue,->] (6) to (5);
\end{tikzpicture}
\raisebox{13mm}{$\quad \overset{S_{\theta}}{\mapsto} \quad$}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-3, 0) {};
\draw (-3,0.5) node {$v_1$};
\node[v] (2) at (0, 0) {};
\draw (0,0.5) node {$v_2$};
\node[v] (3) at (3, 1.7) {};
\draw (3.5,1.7) node {$v_3$};
\node[v] (4) at (3, -1.7) {};
\draw (3.5,-1.7) node {$v_4$};
\node[u] (5) at (0.3, 0.37) {};
\node[u] (6) at (2.7, 1.73) {};
\draw (1.3,1.25) node[blue]{$e^{i\eta }$};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (4) to (3);
\draw[-] (3) to (2);
\draw[draw= blue,->] (5) to (6);
\end{tikzpicture}
\caption{Action of $S_{\theta}$} \label{41}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-3, 0) {};
\draw (-3,0.5) node {$v_1$};
\node[v] (2) at (0, 0) {};
\draw (0,0.5) node {$v_2$};
\node[v] (3) at (3, 1.7) {};
\draw (3.5,1.7) node {$v_3$};
\node[v] (4) at (3, -1.7) {};
\draw (3.5,-1.7) node {$v_4$};
\node[u] (5) at (0.3, 0.37) {};
\node[u] (6) at (2.7, 1.73) {};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (4) to (3);
\draw[-] (3) to (2);
\draw[draw= blue,->] (6) to (5);
\end{tikzpicture}
\raisebox{13mm}{$\quad \overset{U_{\theta}}{\mapsto} \quad$}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-3, 0) {};
\draw (-3,0.5) node {$v_1$};
\node[v] (2) at (0, 0) {};
\draw (0,0.5) node {$v_2$};
\node[v] (3) at (3, 1.7) {};
\draw (3.5,1.7) node {$v_3$};
\node[v] (4) at (3, -1.7) {};
\draw (3.5,-1.7) node {$v_4$};
\node[u] (5) at (0.3, 0.37) {};
\node[u] (6) at (2.7, 1.73) {};
\node[u] (7) at (-2.7, 0.2) {};
\node[u] (8) at (-0.3, 0.2) {};
\node[u] (9) at (0.3, -0.37) {};
\node[u] (10) at (2.7, -1.73) {};
\draw (1.1,1.4) node[blue]{$-\tfrac{1}{3}e^{i\eta}$};
\draw (-1.5,0.75) node[blue]{$\tfrac{2}{3}$};
\draw (1.1,-1.5) node[blue]{$\tfrac{2}{3} e^{-i\eta}$};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (4) to (3);
\draw[-] (3) to (2);
\draw[draw= blue,->] (5) to (6);
\draw[draw= blue,->] (8) to (7);
\draw[draw= blue,->] (9) to (10);
\end{tikzpicture}
\caption{Action of $U_{\theta}$} \label{42}
\end{center}
\end{figure}
\begin{lem} \label{25}
{\it
Let $G = (V, \MC{A})$ be a mixed graph equipped with an $\eta$-function $\theta$,
and let $a \in \MC{A}^{\pm}$.
We have the following:
\begin{enumerate}[(1)]
\item If $\deg t(a) = 1$, then $U_{\theta} \BM{e}^{(a)} = e^{\theta(a) i}\BM{e}^{(a^{-1})}$; and
\item If $\deg t(a) = 2$, then $U_{\theta} \BM{e}^{(a)} = e^{-\theta(b)i}\BM{e}^{(b)}$,
where $b$ is the arc in $\{ z \in \MC{A}^{\pm} \mid o(z) = t(a) \} \setminus \{a^{-1}\}$.
\end{enumerate}
}
\end{lem}
\begin{proof}
Let $U_{\theta} = U_{\theta}(G)$.
First, we have
\begin{align*}
(U_{\theta} \BM{e}^{(a)})_{z}
&= \sum_{w \in \MC{A}^{\pm}} (U_{\theta})_{z,w} (\BM{e}^{(a)})_{w} \\
&= (U_{\theta})_{z,a} \\
&= e^{-\theta(z) i} \marukakko{ \frac{2}{\deg_{G} t(a)} \delta_{o(z), t(a)} - \delta_{z, a^{-1}} }. \tag{by Lemma~\ref{22}}
\end{align*}
Consider the case of $\deg t(a) = 1$.
Then $o(z) = t(a)$ if and only if $z = a^{-1}$.
Thus,
\[ (U_{\theta} \BM{e}^{(a)})_{z} = e^{-\theta(a^{-1}) i} ( 2 \delta_{z, a^{-1}} - \delta_{z, a^{-1}} )
= e^{\theta(a) i} \delta_{z, a^{-1}}.
\]
We have $U_{\theta} \BM{e}^{(a)} = e^{\theta(a) i}\BM{e}^{(a^{-1})}$.
We next consider the case of $\deg t(a) = 2$.
Then,
\begin{align*}
(U_{\theta} \BM{e}^{(a)})_{z} &= e^{-\theta(z) i} (\delta_{o(z), t(a)} - \delta_{z, a^{-1}}) \\
&= \begin{cases}
0 \quad &\text{if $z=a^{-1}$,} \\
e^{-\theta(b) i} \quad &\text{if $z = b$,} \\
0 \quad &\text{otherwise.}
\end{cases}
\end{align*}
Therefore, we have $U_{\theta} \BM{e}^{(a)} = e^{-\theta(b)i}\BM{e}^{(b)}$.
\end{proof}
The above lemma is illustrated in Figures~\ref{43} and~\ref{44}.
We now discuss the periodicity of mixed paths.
The following is our second main theorem.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[u] (1) at (-1.2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (3, 0) {};
\node[u] (4) at (-1.2, 0.68) {};
\node[u] (5) at (-1.2, -0.68) {};
\node[u] (6) at (0.3, 0.2) {};
\node[u] (7) at (2.7, 0.2) {};
\draw (0,-0.5) node{$x$};
\draw (3,-0.5) node{$y$};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (2) to (3);
\draw[-] (5) to (2);
\draw[draw= blue,->] (6) to (7);
\end{tikzpicture}
\raisebox{6mm}{$\quad \overset{U_{\theta}}{\mapsto} \quad$}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[u] (1) at (-1.2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (3, 0) {};
\node[u] (4) at (-1.2, 0.68) {};
\node[u] (5) at (-1.2, -0.68) {};
\node[u] (6) at (0.3, 0.2) {};
\node[u] (7) at (2.7, 0.2) {};
\draw (1.5,0.7) node[blue]{$e^{i\theta((x,y)) }$};
\draw (0,-0.5) node{$x$};
\draw (3,-0.5) node{$y$};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (2) to (3);
\draw[-] (5) to (2);
\draw[draw= blue,->] (7) to (6);
\end{tikzpicture}
\caption{Illustration of Lemma~\ref{25} (1)} \label{43}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[u] (1) at (-1.2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (3, 0) {};
\node[u] (4) at (-1.2, 0.68) {};
\node[u] (5) at (-1.2, -0.68) {};
\node[v] (6) at (6, 0) {};
\node[u] (7) at (7.2, 0) {};
\node[u] (8) at (7.2, 0.68) {};
\node[u] (9) at (7.2, -0.68) {};
\node[u] (10) at (0.3, 0.2) {};
\node[u] (11) at (2.7, 0.2) {};
\node[u] (12) at (3.3, 0.2) {};
\node[u] (13) at (5.7, 0.2) {};
\draw (0,-0.5) node{$x$};
\draw (3,-0.5) node{$y$};
\draw (6,-0.5) node{$z$};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (2) to (3);
\draw[-] (5) to (2);
\draw[draw= blue,->] (10) to (11);
\draw[-] (3) to (6);
\draw[-] (6) to (7);
\draw[-] (6) to (8);
\draw[-] (6) to (9);
\end{tikzpicture}
\raisebox{6mm}{$\quad \overset{U_{\theta}}{\mapsto} \quad$}
\begin{tikzpicture}
[scale = 0.7,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[u] (1) at (-1.2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (3, 0) {};
\node[u] (4) at (-1.2, 0.68) {};
\node[u] (5) at (-1.2, -0.68) {};
\node[v] (6) at (6, 0) {};
\node[u] (7) at (7.2, 0) {};
\node[u] (8) at (7.2, 0.68) {};
\node[u] (9) at (7.2, -0.68) {};
\node[u] (10) at (0.3, 0.2) {};
\node[u] (11) at (2.7, 0.2) {};
\node[u] (12) at (3.3, 0.2) {};
\node[u] (13) at (5.7, 0.2) {};
\draw (4.5,0.7) node[blue]{$e^{-i\theta((y,z)) }$};
\draw (0,-0.5) node{$x$};
\draw (3,-0.5) node{$y$};
\draw (6,-0.5) node{$z$};
\draw (1) to (2);
\draw[-] (2) to (4);
\draw[-] (2) to (3);
\draw[-] (5) to (2);
\draw[draw= blue,->] (12) to (13);
\draw[-] (3) to (6);
\draw[-] (6) to (7);
\draw[-] (6) to (8);
\draw[-] (6) to (9);
\end{tikzpicture}
\caption{Illustration of Lemma~\ref{25} (2)} \label{44}
\end{center}
\end{figure}
\begin{thm}
{\it
Let $G = (V, \MC{A})$ be a mixed path on $n$ vertices equipped with an $\eta$-function $\theta$.
Then $G$ is periodic for any $\eta \in [0, 2\pi)$, and the period is $2(n-1)$.
}
\end{thm}
\begin{proof}
By Proposition~\ref{12},
$G$ and $G^{\pm}$ are $\tilde{H}_{\eta}$-cospectral since $G$ is a mixed tree.
By Theorem~\ref{21},
$U_{\theta}(G)$ and $U_{\theta}(G^{\pm})$ have the same eigenvalues.
From Lemma~\ref{13},
periodicity of $G$ and its period are determined by the eigenvalues of the time evolution matrix.
Thus, it is sufficient to discuss only periodicity of $G^{\pm}$,
which is the undirected path graph $P_n$ on $n$ vertices.
By Lemma~\ref{25}, we have
$U_{\theta}(P_n)^{2(n-1)} \BM{e}^{(a)} = \BM{e}^{(a)}$
for any vector $\BM{e}^{(a)} \in \MC{E}_{\MC{A}^{\pm}}$.
The dynamics is as follows:
\begin{align*}
\begin{tikzpicture}
[scale = 0.8,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (2, 0) {};
\node[u] (4) at (2.5, 0) {};
\draw (2.75,0) node{$\dots$};
\node[u] (5) at (3, 0) {};
\node[v] (6) at (3.5, 0) {};
\node[v] (7) at (5.5, 0) {};
\node[u] (8) at (-1.8, 0.2) {};
\node[u] (9) at (-0.2, 0.2) {};
\draw (1) to (2);
\draw[-] (2) to (3);
\draw[-] (3) to (4);
\draw[-] (5) to (6);
\draw[-] (6) to (7);
\draw[draw= blue,->] (8) to (9);
\end{tikzpicture}
& \raisebox{1mm}{$\quad \overset{U_{\theta}}{\mapsto} \quad$}
\begin{tikzpicture}
[scale = 0.8,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (2, 0) {};
\node[u] (4) at (2.5, 0) {};
\draw (2.75,0) node{$\dots$};
\node[u] (5) at (3, 0) {};
\node[v] (6) at (3.5, 0) {};
\node[v] (7) at (5.5, 0) {};
\node[u] (8) at (0.2, 0.2) {};
\node[u] (9) at (1.8, 0.2) {};
\draw (1) to (2);
\draw[-] (2) to (3);
\draw[-] (3) to (4);
\draw[-] (5) to (6);
\draw[-] (6) to (7);
\draw[draw= blue,->] (8) to (9);
\end{tikzpicture} \\
& \raisebox{1mm}{$\quad \overset{U_{\theta}}{\mapsto} \quad \cdots$} \\
& \quad \hspace{2mm} \vdots \\
& \raisebox{1mm}{$\quad \overset{U_{\theta}}{\mapsto} \quad$}
\begin{tikzpicture}
[scale = 0.8,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (2, 0) {};
\node[u] (4) at (2.5, 0) {};
\draw (2.75,0) node{$\dots$};
\node[u] (5) at (3, 0) {};
\node[v] (6) at (3.5, 0) {};
\node[v] (7) at (5.5, 0) {};
\node[u] (8) at (3.7, 0.2) {};
\node[u] (9) at (5.3, 0.2) {};
\draw (1) to (2);
\draw[-] (2) to (3);
\draw[-] (3) to (4);
\draw[-] (5) to (6);
\draw[-] (6) to (7);
\draw[draw= blue,->] (8) to (9);
\end{tikzpicture} \\
&\raisebox{1mm}{$\quad \overset{U_{\theta}}{\mapsto} \quad$}
\begin{tikzpicture}
[scale = 0.8,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (2, 0) {};
\node[u] (4) at (2.5, 0) {};
\draw (2.75,0) node{$\dots$};
\node[u] (5) at (3, 0) {};
\node[v] (6) at (3.5, 0) {};
\node[v] (7) at (5.5, 0) {};
\node[u] (8) at (5.3, -0.2) {};
\node[u] (9) at (3.7, -0.2) {};
\draw (1) to (2);
\draw[-] (2) to (3);
\draw[-] (3) to (4);
\draw[-] (5) to (6);
\draw[-] (6) to (7);
\draw[draw= blue,->] (8) to (9);
\end{tikzpicture} \\
& \raisebox{1mm}{$\quad \overset{U_{\theta}}{\mapsto} \quad \cdots$} \\
& \quad \hspace{2mm} \vdots \\
& \raisebox{1mm}{$\quad \overset{U_{\theta}}{\mapsto} \quad$}
\begin{tikzpicture}
[scale = 0.8,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (2, 0) {};
\node[u] (4) at (2.5, 0) {};
\draw (2.75,0) node{$\dots$};
\node[u] (5) at (3, 0) {};
\node[v] (6) at (3.5, 0) {};
\node[v] (7) at (5.5, 0) {};
\node[u] (8) at (-1.8, -0.2) {};
\node[u] (9) at (-0.2, -0.2) {};
\draw (1) to (2);
\draw[-] (2) to (3);
\draw[-] (3) to (4);
\draw[-] (5) to (6);
\draw[-] (6) to (7);
\draw[draw= blue,->] (9) to (8);
\end{tikzpicture} \\
& \raisebox{1mm}{$\quad \overset{U_{\theta}}{\mapsto} \quad$}
\begin{tikzpicture}
[scale = 0.8,
line width = 0.8pt,
v/.style = {circle, fill = black, inner sep = 0.8mm},u/.style = {circle, fill = white, inner sep = 0.1mm}]
\node[v] (1) at (-2, 0) {};
\node[v] (2) at (0, 0) {};
\node[v] (3) at (2, 0) {};
\node[u] (4) at (2.5, 0) {};
\draw (2.75,0) node{$\dots$};
\node[u] (5) at (3, 0) {};
\node[v] (6) at (3.5, 0) {};
\node[v] (7) at (5.5, 0) {};
\node[u] (8) at (-1.8, 0.2) {};
\node[u] (9) at (-0.2, 0.2) {};
\draw (1) to (2);
\draw[-] (2) to (3);
\draw[-] (3) to (4);
\draw[-] (5) to (6);
\draw[-] (6) to (7);
\draw[draw= blue,->] (8) to (9);
\end{tikzpicture}
\end{align*}
By Lemma~\ref{55},
we see that $G$ is periodic with period $2(n-1)$.
\end{proof}
Note that the periodicity of undirected paths has actually been studied
in \cite{KSTY2018} by eigenvalue analysis.
In this paper, we have proven the same fact in a different way.
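As a numerical illustration (our own sketch, not part of the original argument): on a path, Lemma~\ref{25} sends a single basis state to a single basis state, so the dynamics in the proof can be tracked as a pair (phase, arc). The sketch below uses arbitrary antisymmetric arc phases with $\theta(a^{-1}) = -\theta(a)$ (slightly more general than a fixed $\eta$-function; the helper name `path_step` is ours) and confirms the return after $2(n-1)$ steps.

```python
import cmath
import math
import random

def path_step(n, theta):
    """One application of U_theta to a basis state on the path P_n, per Lemma 25.

    A state is a pair (phase, arc) with arc = (v, w), |v - w| = 1, and
    theta[(w, v)] == -theta[(v, w)] (an antisymmetric phase assignment)."""
    def step(phase, arc):
        v, w = arc
        if w == 0 or w == n - 1:          # deg t(a) = 1: reflect, pick up e^{i*theta(a)}
            return phase * cmath.exp(1j * theta[arc]), (w, v)
        b = (w, 2 * w - v)                # deg t(a) = 2: continue straight, e^{-i*theta(b)}
        return phase * cmath.exp(-1j * theta[b]), b
    return step

random.seed(0)
n = 7
theta = {}
for v in range(n - 1):                    # random phases on each edge's two arcs
    t = random.uniform(0, 2 * math.pi)
    theta[(v, v + 1)], theta[(v + 1, v)] = t, -t

step = path_step(n, theta)
phase, arc = 1.0, (2, 3)                  # start from the basis state e^((v_2, v_3))
for _ in range(2 * (n - 1)):
    phase, arc = step(phase, arc)
assert arc == (2, 3) and abs(phase - 1.0) < 1e-9   # back to the start after 2(n-1) steps
```

The phases accumulated on the round trip cancel pairwise, which is why the period does not depend on $\theta$.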
\section{Periodicity of mixed cycles} \label{107}
Finally, we discuss the periodicity of mixed cycles.
The strategy is similar to that for mixed paths, but the discussion is more complicated.
Recall that
a mixed cycle $G$ on $n$ vertices is $H_{\eta}$-cospectral with $C_n^j$ for some $j \in \{0,1,\dots, n\}$ by Proposition~\ref{03}.
For $m_1, m_2 \in \MB{Z}$,
the greatest common divisor of $m_1$ and $m_2$ is denoted by $(m_1, m_2)$.
Note that $(0, m_1) = m_1$.
The following is our third main theorem.
\begin{thm}
{\it
Let $G = (V, \MC{A})$ be a mixed cycle on $n$ vertices equipped with an $\eta$-function $\theta$.
Then,
$G$ is periodic if and only if $\eta \in \MB{Q}\pi$.
In addition,
we suppose that $\eta \in \MB{Q}\pi$ and the mixed cycle $G$ is $H_{\eta}$-cospectral with $C_n^j$.
Let $\eta = \frac{p}{q}\pi$, where $p$ and $q$ are coprime.
Then, the period $\tau$ of $G$ is the following:
\begin{equation} \label{65}
\tau =
\begin{cases}
\frac{2qn}{(j, 2q)} \quad &\text{if $p$ is odd,} \\
\frac{qn}{(j, q)} \quad &\text{if $p$ is even.} \\
\end{cases}
\end{equation}
}
\end{thm}
\begin{proof}
By Proposition~\ref{03},
$G$ is $H_{\eta}$-cospectral with $C_n^j$ for some $j \in \{0,1,\dots, n\}$.
Since $G$ and $C_n^j$ are 2-regular,
they are also $\tilde{H}_{\eta}$-cospectral.
By Theorem~\ref{21},
$U_{\theta}(G)$ and $U_{\theta}(C_n^j)$ have the same eigenvalues.
From Lemma~\ref{13},
periodicity is determined by the eigenvalues of the time evolution matrices,
so it is sufficient to discuss only periodicity of $C_n^j$ for $j \in \{0,1,\dots, n\}$.
Let $U_{\theta} = U_{\theta}(C_n^j)$,
and let $a$ be an arbitrary arc of $C_n^j$.
By Lemma~\ref{25}, we have
\begin{equation} \label{26}
U_{\theta}^{n} \BM{e}^{(a)} = e^{\pm j \eta i} \BM{e}^{(a)}.
\end{equation}
If $\eta \not\in \MB{Q}\pi$,
then $j l \eta \not\in 2\pi \MB{Z}$ for any $l \in \MB{N}$,
so the mixed graph is not periodic.
On the other hand,
we suppose $\eta \in \MB{Q}\pi$, say $\eta = \frac{p}{q}\pi$.
Then,
\[ U_{\theta}^{2qn} \BM{e}^{(a)} = e^{\pm 2qj \cdot \frac{p}{q} \pi i} \BM{e}^{(a)}
= e^{\pm 2pj \pi i} \BM{e}^{(a)} = \BM{e}^{(a)}. \]
Thus, the mixed graph is periodic.
Next, we determine the period $\tau$.
Let $\eta \in \MB{Q}\pi$ and let $\eta = \frac{p}{q}\pi$, where $p$ and $q$ are coprime.
By Lemma~\ref{25},
the vector $U_{\theta}^k \BM{e}^{(a)}$ is a complex multiple of $\BM{e}^{(a)}$
if and only if $k$ is a multiple of $n$.
Thus, the period $\tau$ is a multiple of $n$,
namely, $\tau = ln$ for some $l \in \MB{N}$.
Then,
\[ \BM{e}^{(a)} = U_{\theta}^{ln} \BM{e}^{(a)} = e^{\pm l j \cdot \frac{p}{q} \pi i} \BM{e}^{(a)}, \]
so $\frac{pjl}{q} \pi \in 2\pi\MB{Z}$, i.e., $pjl \in 2q \MB{Z}$.
We would like to find $\min \{ l \in \MB{N} \mid pjl \in 2q \MB{Z} \}$.
If $j = 0$, we have $\min \{ l \in \MB{N} \mid pjl \in 2q \MB{Z} \} = 1$,
so the period is $n$.
This satisfies (\ref{65}).
We assume that $j > 0$ in the discussion below.
First, we consider the case where $p$ is odd.
We will show that
\[ \min \{ l \in \MB{N} \mid pjl \in 2q \MB{Z} \} = \frac{2q}{(j,2q)}. \]
Since $p$ is odd and $(p,q) = 1$, we have
\begin{equation} \label{27}
(p,2q) = 1.
\end{equation}
Let $d = (j,2q)$.
There exists $j' \in \MB{N}$ such that
\begin{equation} \label{28}
j = j' d
\end{equation}
and
\begin{equation} \label{29}
(j', 2q) = 1.
\end{equation}
Therefore, we have
\begin{align*}
\min \{ l \in \MB{N} \mid pjl \in 2q \MB{Z} \}
&= \min \{ l \in \MB{N} \mid jl \in 2q \MB{Z}\} \tag{by (\ref{27})} \\
&= \min \{ l \in \MB{N} \mid j' d l \in 2q \MB{Z}\} \tag{by (\ref{28})} \\
&= \min \{ l \in \MB{N} \mid d l \in 2q \MB{Z} \} \tag{by (\ref{29})} \\
&= \frac{2q}{d}. \tag{since $d$ is a divisor of $2q$}
\end{align*}
Next, we consider the case where $p$ is even.
We will show that
\begin{equation}
\min \{ l \in \MB{N} \mid pjl \in 2q \MB{Z} \} = \frac{q}{(j,q)}. \label{66}
\end{equation}
If $p=0$,
then $q=1$ since $p$ and $q$ are coprime.
We have $\min \{ l \in \MB{N} \mid pjl \in 2q \MB{Z} \} = 1$ and $\frac{q}{(j,q)} = 1$.
Equality~(\ref{66}) is satisfied in this case.
We assume that $p > 0$.
Since $p$ is even, we have
\begin{equation} \label{31}
p = 2p'
\end{equation}
for some $p' \in \MB{N}$.
Since $p$ and $q$ are coprime, we have
\begin{equation} \label{32}
(p',q) = 1.
\end{equation}
Let $d = (j,q)$.
There exists $j' \in \MB{N}$ such that
\begin{equation} \label{33}
j = j' d
\end{equation}
and
\begin{equation} \label{34}
(j', q) = 1.
\end{equation}
Therefore, we have
\begin{align*}
\min \{ l \in \MB{N} \mid pjl \in 2q \MB{Z} \}
&= \min \{ l \in \MB{N} \mid p' j l \in q \MB{Z}\} \tag{by (\ref{31})} \\
&= \min \{ l \in \MB{N} \mid jl \in q \MB{Z}\} \tag{by (\ref{32})} \\
&= \min \{ l \in \MB{N} \mid j' d l \in q \MB{Z}\} \tag{by (\ref{33})} \\
&= \min \{ l \in \MB{N} \mid d l \in q \MB{Z} \} \tag{by (\ref{34})} \\
&= \frac{q}{d}. \tag{since $d$ is a divisor of $q$}
\end{align*}
We have the statement.
\end{proof}
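The case analysis above can be cross-checked numerically. In the sketch below (our own illustration; the helper `period_by_search` mirrors the proof's reduction $\tau = n \cdot \min\{l \mid pjl \in 2q\MB{Z}\}$), a direct search is compared with the closed form (\ref{65}) over a range of parameters. Note that Python's `math.gcd` follows the same convention $(0, m) = m$ used above, so the case $j = 0$ needs no special handling.

```python
from math import gcd

def period_by_search(n, j, p, q):
    """n * min{ l >= 1 : p*j*l lies in 2q*Z }, mirroring the proof's reduction."""
    l = 1
    while (p * j * l) % (2 * q) != 0:
        l += 1
    return n * l

def period_by_formula(n, j, p, q):
    """The closed form (65); gcd(0, m) = m matches the paper's convention (0, m) = m."""
    if p % 2 == 1:
        return 2 * q * n // gcd(j, 2 * q)
    return q * n // gcd(j, q)

# Compare the two over small cycles, all shifts j, and several coprime (p, q):
for n in range(3, 9):
    for j in range(0, n + 1):
        for p, q in [(0, 1), (1, 1), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6)]:
            assert gcd(p, q) == 1
            assert period_by_search(n, j, p, q) == period_by_formula(n, j, p, q)
```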
\section*{Acknowledgements}
This paper is based on the graduation theses of the second and third authors.
We would like to thank their advisor, Professor Norio Konno,
for his fruitful comments and helpful advice.
% Source: https://arxiv.org/abs/0907.3772
\title{The Maximum Wiener Index of Trees with Given Degree Sequences}
\begin{abstract}
The Wiener index of a connected graph is the sum of the topological distances between all pairs of vertices. Since Wang gave an incorrect result on the maximum Wiener index for a given tree degree sequence, in this paper we investigate the maximum Wiener index of trees with given degree sequences and the extremal trees which attain the maximum value.
\end{abstract}
\section{Introduction}
The Wiener index of a molecular graph, introduced by Wiener
\cite{wiener1947} in 1947,
is one of the oldest and most widely used topological indices in
quantitative structure-property relationships. In the mathematical
literature, the Wiener index seems to have been first studied by
Entringer et al. \cite{entringer1976}. For more information and background,
readers may refer to a recent and very
comprehensive survey \cite{dobrynin2001}, a book on the Wiener index
dedicated to Harry Wiener \cite{rouvray2002}, and the references therein.
Throughout this paper, all graphs are finite, simple and undirected.
Let $G= (V,~E)$ be a simple connected graph with
vertex set $V(G)=\{v_1,\cdots, v_n\}$ and edge set $E(G)$. Denote
by $d_G(v_i)$ (or $d(v_i)$ for short) the {\it degree} of vertex
$v_i$. The {\it distance} between vertices $v_i$ and $v_j$ is the
minimum number of edges of a path connecting $v_i$ and $v_j$, and is
denoted by $d_G(v_i, v_j)$ (or $d(v_i,v_j)$ for short). The {\it Wiener index}
of a connected graph $G$ is defined as
\begin{equation}\label{weiner-def}
W(G)=\sum_{\{v_i, v_j\}\subseteq V(G)}d(v_i, v_j).
\end{equation}
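For a quick illustration (not from the paper), the definition above can be evaluated directly by breadth-first search. The sketch below also recovers the classical extremal values $W(P_n)=\binom{n+1}{3}$ for the path and $W(K_{1,n-1})=(n-1)^2$ for the star.

```python
from collections import deque
from itertools import combinations

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs,
    computed by BFS from every vertex (adj: dict vertex -> list of neighbours)."""
    def dists(s):
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d
    return sum(dists(u)[v] for u, v in combinations(adj, 2))

n = 8
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
star = {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}
assert wiener_index(path) == n * (n * n - 1) // 6   # = C(n+1, 3)
assert wiener_index(star) == (n - 1) ** 2
```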
A {\it tree} is a connected and acyclic graph. A {\it caterpillar}
is a tree in which a single path (called the {\it spine}) is incident to
(or contains) every edge. For
other terminology and notation, we follow \cite{bondy1976}.
Entringer et al. \cite{entringer1976} proved that
the path $P_n$ and the star $K_{1, n-1}$ have
the maximum and minimum Wiener indices, respectively, in the set
consisting of all trees of order $n$. Dankelmann
\cite{dankelmann1994} obtained the all extremal graphs in the set of
all connected graphs with given the order and the matching number
which attained the maximum Wiener value. Moreover, Fischermann et
al. \cite{fischermann2002} and Jelen et al. \cite{jelen2003}
independently determined all trees which have the minimum Wiener
indices among all trees of order $n$ and maximum degree $\Delta$.
A nonincreasing
sequence of nonnegative integers
$\pi=(d_1,d_2,\cdots, d_{n})$ is called {\it graphic} if there
exists a simple graph having $\pi$ as its vertex degree sequence.
Hence it is natural to
consider the following problem.
\begin{problem}\label{problem}
Let $\pi=(d_1, \cdots, d_n)$ be a graphic degree sequence and
$${\mathcal{G}}_{\pi}=\{G: {\rm \ the\ degree \ sequence\ of} \ G\ {\rm is} \ \pi\}.$$
Find the upper (lower) bounds for the Wiener index of
all graphs $G$ in ${\mathcal G}_{ \pi}$ and characterize all
extremal graphs which attain the upper (lower) bounds.
\end{problem}
Moreover, we call a graph {\it maximum (minimum) optimal} if it
maximizes (minimizes) the Wiener index in $\mathcal{G_{\pi}}$.
Recently, using different techniques, Wang \cite{wang2008} and
Zhang et al.~\cite{zhang2008} independently characterized the tree
that minimizes the Wiener index among trees with a given degree
sequence. Moreover, they proved that the minimum optimal tree for
a given tree degree sequence $\pi$ is unique.
On the other hand,
Wang \cite{wang2008} also ``{\it proved}'' a characterization of the unique
maximum optimal tree that maximizes the Wiener index among trees with a
given degree sequence. The result can be stated as follows:
\begin{theorem}\cite{wang2008}\label{wang}
Given the degree sequence and the number of vertices, the greedy
caterpillar maximizes the Wiener index, where the greedy caterpillar
with degree sequence $(d_1,\cdots, d_n)$ ($ d_1\ge d_2\ge \cdots \ge
d_k\ge 2>d_{k+1}=1$) is formed by attaching pending edges to a path
$v_1, v_2, \cdots, v_k$ of length $k-1$ such that
$$d(v_1)\ge d(v_k)\ge d(v_2)\ge d(v_{k-1})\ge \cdots\ge
d(v_{\lceil\frac{k+1}{2}\rceil}). $$
\end{theorem}
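For concreteness (our own illustrative helper, not from \cite{wang2008}), the greedy arrangement of spine degrees in Theorem~\ref{wang} can be produced by sorting the degrees in decreasing order and dealing them alternately to the two ends of the spine:

```python
def greedy_spine(spine_degrees):
    """Arrange spine degrees so that d(v_1) >= d(v_k) >= d(v_2) >= d(v_{k-1}) >= ...:
    sort in decreasing order, then deal alternately to the left and right ends."""
    ds = sorted(spine_degrees, reverse=True)
    left = ds[0::2]                   # d(v_1), d(v_2), ...
    right = ds[1::2]                  # d(v_k), d(v_{k-1}), ...
    return left + right[::-1]

assert greedy_spine([13, 5, 5, 5, 4, 3]) == [13, 5, 4, 3, 5, 5]
```

For the degree sequence of Example~\ref{example} below, this yields the spine degrees $(13,5,4,3,5,5)$ of the greedy caterpillar $T_2$.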
Unfortunately, this result is not correct. For example:
\begin{example}\label{example}
Let $\pi=(13, 5, 5, 5, 4, 3, 1, \cdots, 1)$ be a degree sequence of
tree with $31$ vertices. Let $T_1$ and $T_2$ be two trees with
degree sequences $\pi$ (see Fig.1).
\setlength{\unitlength}{0.1in}
\begin{picture}(60,15)
\put(3,5){\circle{0.5}} \put(3.25,5){\line(1,0){10}}
\put(13.5,5){\circle{0.5}}\put(13.75,5){\line(1,0){10}}
\put(24,5){\circle{0.5}} \put(24.25,5){\line(1,0){10}}
\put(34.5,5){\circle{0.5}} \put(34.75,5){\line(1,0){10}}
\put(45,5){\circle{0.5}}
\put(45.25,5){\line(1,0){10}}
\put(55.5,5){\circle{0.5}}
\put(2.8,5.2){\line(-1,2){3.5}} \put(3.2,5.2){\line(1,2){3.5}}
\put(13.5,5.25){\line(0,1){7}}
\put(23.8,5.2){\line(-1,2){3.5}} \put(24.2,5.2){\line(1,2){3.5}}
\put(34.3,5.2){\line(-1,2){3.5}} \put(34.7,5.2){\line(1,2){3.5}}
\put(44.8,5.2){\line(-1,2){3.5}} \put(45.2,5.2){\line(1,2){3.5}}
\put(55.3,5.2){\line(-1,2){3.5}} \put(55.7,5.2){\line(1,2){3.5}}
\put(-0.8,12.5){\circle{0.5}} \put(6.87,12.5){\circle{0.5}}
\put(2.2,12){$\cdots$}\put(2.2,13.3){$12$} \put(2.5,3.3){$v_{1}$}
\put(13.5,12.5){\circle{0.5}} \put(13,3.3){$v_{2}$}
\put(20.2,12.5){\circle{0.5}} \put(27.8,12.5){\circle{0.5}}
\put(23.2,13.3){$2$} \put(23.2, 3.3){$v_{3}$}
\put(30.61,12.5){\circle{0.5}} \put(38.3,12.5){\circle{0.5}}
\put(33.7,12){$\cdots$} \put(34.8,13.3){$3$} \put(34, 3.3){$v_{4}$}
\put(41.3,12.5){\circle{0.5}} \put(48.8,12.5){\circle{0.5}}
\put(44,12){$\cdots$} \put(44.8,13.3){$3$}\put(44, 3.3){$v_{5}$}
\put(51.7,12.5){\circle{0.5}}\put(59.2,12.5){\circle{0.5}}
\put(54.5,12){$\cdots$}\put(55,13.3){$4$}\put(55, 3.3){$v_{6}$}
\put(28, 1){$T_1$}
\end{picture}
\setlength{\unitlength}{0.1in}
\begin{picture}(60,15)
\put(3,5){\circle{0.5}} \put(3.25,5){\line(1,0){10}}
\put(13.5,5){\circle{0.5}}\put(13.75,5){\line(1,0){10}}
\put(24,5){\circle{0.5}} \put(24.25,5){\line(1,0){10}}
\put(34.5,5){\circle{0.5}} \put(34.75,5){\line(1,0){10}}
\put(45,5){\circle{0.5}}
\put(45.25,5){\line(1,0){10}}
\put(55.5,5){\circle{0.5}}
\put(2.8,5.2){\line(-1,2){3.5}} \put(3.2,5.2){\line(1,2){3.5}}
\put(13.3,5.2){\line(-1,2){3.5}} \put(13.7,5.2){\line(1,2){3.5}}
\put(23.8,5.2){\line(-1,2){3.5}} \put(24.2,5.2){\line(1,2){3.5}}
\put(34.5,5.25){\line(0,1){7}}
\put(44.8,5.2){\line(-1,2){3.5}} \put(45.2,5.2){\line(1,2){3.5}}
\put(55.3,5.2){\line(-1,2){3.5}} \put(55.7,5.2){\line(1,2){3.5}}
\put(-0.8,12.5){\circle{0.5}} \put(6.87,12.5){\circle{0.5}}
\put(2.2,12){$\cdots$}\put(2.2,13.3){$12$} \put(2.5,3.3){$v_{1}$}
\put(9.7,12.5){\circle{0.5}} \put(17.2,12.5){\circle{0.5}}
\put(12.6,12){$\cdots$} \put(13.5,13.3){$3$}
\put(13,3.3){$v_{2}$}
\put(20.2,12.5){\circle{0.5}} \put(27.8,12.5){\circle{0.5}}
\put(23.2,13.3){$2$} \put(23.2, 3.3){$v_{3}$}
\put(34.5,12.5){\circle{0.5}}
\put(34, 3.3){$v_{4}$}
\put(41.3,12.5){\circle{0.5}} \put(48.8,12.5){\circle{0.5}}
\put(44,12){$\cdots$} \put(44.8,13.3){$3$}\put(44, 3.3){$v_{5}$}
\put(51.7,12.5){\circle{0.5}}\put(59.2,12.5){\circle{0.5}}
\put(54.5,12){$\cdots$}\put(55,13.3){$4$}\put(55, 3.3){$v_{6}$}
\put(20, 1){$T_2$} \put(25,-1){\bf Figure 1 $T_1$ and $T_2$}
\end{picture}
\end{example}
Clearly, $T_2$ is a greedy caterpillar and $T_1$ is not a greedy
caterpillar. Moreover, they have the same degree sequences $\pi$. By
calculation, it is easy to see that
$$W(T_2)=1770< W(T_1)=1786.$$
Hence this example illustrates that Theorem~\ref{wang} in
\cite{wang2008} is not correct.
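The comparison can be checked with the classical edge-cut formula $W(T)=\sum_{e} n_1(e)n_2(e)$ for trees. The sketch below (our own verification script) encodes each caterpillar by the values $y_i = d(v_i)-1$ along its spine, read off from Figure 1.

```python
def caterpillar_wiener(y):
    """Wiener index of the caterpillar with spine vertices v_1..v_k and
    d(v_i) = y_i + 1, via the edge-cut formula W(T) = sum over edges of n1*n2."""
    n = sum(y) + 2                        # total number of vertices
    k = len(y)
    w = (n - 1) * (n - k)                 # each pendant edge splits off one vertex
    left = 0
    for i in range(k - 1):                # spine edge between v_{i+1} and v_{i+2}
        left += y[i]
        w += (left + 1) * (n - left - 1)  # left component has sum(y[:i+1]) + 1 vertices
    return w

# Spine data of T_1 and T_2 from Figure 1 (degree sequence (13,5,5,5,4,3,1,...,1)):
W_T1 = caterpillar_wiener([12, 2, 3, 4, 4, 4])
W_T2 = caterpillar_wiener([12, 4, 3, 2, 4, 4])  # the greedy caterpillar
assert W_T2 < W_T1                        # the greedy caterpillar is not optimal
assert W_T1 - W_T2 == 16
```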
Motivated by Problem \ref{problem} and Example \ref{example}, we
try to investigate the extremal trees which attain the maximum
Wiener index among all trees with given degree sequences. The
problem seems to be difficult, because the extremal tree depends on
the values of the entries of the degree sequence.
The rest of the paper is organized as follows. In Section 2,
we discuss some properties of the extremal tree with the maximum Wiener
index and give an upper bound in terms of degree sequences.
In Section
3, the extremal trees with the maximum Wiener index among trees with
a given degree sequence $(d_1, \cdots, d_n)$, where $d_1\ge \cdots \ge
d_k\ge 2>d_{k+1}=1$ and $k\le 6$, are characterized. Moreover, the
maximum optimal trees are not unique in general.
\section{Properties of extremal trees with the maximum Wiener index }
Let $\mathcal{T_{\pi}}$ be the set of all trees with degree
sequences $\pi=(d_1, d_2, \cdots, d_n)$ with $d_1\ge
d_2\ge\cdots\ge d_n$. Shi in \cite{shi1993} proved that a maximum
optimal tree must be a caterpillar.
\begin{lemma}\cite{shi1993}\label{shi}
Let $T^*$ be a maximum optimal tree in $\mathcal{T_{\pi}}$. Then
$T^*$ is a caterpillar.
\end{lemma}
From Lemma~\ref{shi}, we only need to consider all caterpillars with
a degree sequence $\pi$. In order to study the structure of the
maximum optimal trees, we present a formula for Wiener index of any
caterpillar.
\begin{lemma}\label{formula}
Let $T$ be a caterpillar of order $n$ with the degree sequence
$\pi=(d(v_1), \cdots,$ $ d(v_k), d(v_{k+1}),\cdots,d(v_n))$(see
Figure 2).
\setlength{\unitlength}{0.1in}
\begin{picture}(60,20)
\put(10,5){\circle{0.5}} \put(10.25,5){\line(1,0){10}}
\put(20.5,5){\circle{0.5}}\put(20.75,5){\line(1,0){5}}
\put(26.5,4.55){$\cdot$}\put(27.5,4.55){$\cdot$}\put(28.5,4.55){$\cdot$}
\put(30,5){\circle{0.5}}
\put(31.5,4.55){$\cdot$}\put(32.5,4.55){$\cdot$}\put(33.5,4.55){$\cdot$}
\put(35.25,5){\line(1,0){5}}
\put(40.5,5){\circle{0.5}} \put(40.75,5){\line(1,0){10}}
\put(51,5){\circle{0.5}}
\put(9.8,5.2){\line(-1,2){3.5}} \put(10.2,5.2){\line(1,2){3.5}}
\put(20.3,5.2){\line(-1,2){3.5}} \put(20.7,5.2){\line(1,2){3.5}}
\put(29.8,5.2){\line(-1,2){3.5}} \put(30.2,5.2){\line(1,2){3.5}}
\put(40.3,5.2){\line(-1,2){3.5}} \put(40.7,5.2){\line(1,2){3.5}}
\put(50.8,5.2){\line(-1,2){3.5}} \put(51.2,5.2){\line(1,2){3.5}}
\put(6.2,12.5){\circle{0.5}} \put(13.8,12.5){\circle{0.5}}
\put(16.65,12.5){\circle{0.5}} \put(24.25,12.5){\circle{0.5}}
\put(26.23,12.5){\circle{0.5}} \put(33.7,12.5){\circle{0.5}}
\put(36.7,12.5){\circle{0.5}} \put(44.23,12.5){\circle{0.5}}
\put(47.2,12.5){\circle{0.5}} \put(54.7,12.5){\circle{0.5}}
\put(9,12){$\cdots$}\put(19,12){$\cdots$}\put(29,12){$\cdots$}
\put(39,12){$\cdots$}\put(50,12){$\cdots$}
\put(9,13.3){$y_1$}\put(18.2,13.3){$y_{2}-1$}
\put(28.5,13.3){$y_{i}-1$} \put(37.5,13.3){$y_{k-1}-1$}
\put(50,13.3){$y_{k}$}
\put(9.5,3.3){$v_{1}$}
\put(20,3.3){$v_{2}$} \put(30, 3.3){$v_{i}$} \put(40,3.3){$v_{k-1}$}
\put(50.2,3.3){$v_{k}$}
\put(25,1){\bf Figure 2}\put(34, 1){$T$}
\end{picture}
If $d(v_i)=y_i+1\ge 2$ for $i=1, \cdots, k$ and
$d(v_{k+1})=\cdots=d(v_n)=1$, then
\begin{equation}\label{weiner-f}
W(T)=(n-1)^2+F(y_1, \cdots, y_k),
\end{equation}
where
\begin{equation}\label{fx}
F(y_1, \cdots,
y_k)=\sum_{i=1}^{k-1}(\sum_{j=1}^iy_j)(\sum_{j=i+1}^ky_j).
\end{equation}
\end{lemma}
\begin{proof}
It is well known \cite{hosoya1971} that formula
(\ref{weiner-def}) can be rewritten as
$$W(T)=\sum_{e}n_1(e)n_2(e),$$
where $e=(u, v)$ is an edge of $T$, and $n_1(e)$ (resp. $n_2(e)$) is
the number of vertices of the component of $T-e$ containing $u$
(resp. $v$). For $e_i=(v_i, v_{i+1})\in E(T),$ the numbers of
vertices of the two components of $T-e_i$ are
$\sum_{j=1}^id(v_j)-(i-1)$ and $\sum_{j=i+1}^kd(v_j)-(k-i-1)$ for
$i=1, \cdots, k-1,$ respectively. Hence
\begin{eqnarray*}
W(T)&=&\sum_{e\in E(T)}n_1(e)n_2(e)\\
&=&\sum_{e {\rm {\ is \ pendent\ edge}}}n_1(e)n_2(e)+
\sum_{ e {\rm \ is \ not \ pendent\ edge}}n_1(e)n_2(e)\\
&=&
(n-1)(n-k)+\sum_{i=1}^{k-1}(\sum_{j=1}^id(v_j)-(i-1))(\sum_{j=i+1}^kd(v_j)-(k-i-1))
\\
&=&(n-1)(n-k)+\sum_{i=1}^{k-1}(1+\sum_{j=1}^iy_j)(1+\sum_{j=i+1}^ky_j)\\
&=&(n-1)(n-k)+(k-1)(1+\sum_{j=1}^ky_j)+\sum_{i=1}^{k-1}(\sum_{j=1}^iy_j)(\sum_{j=i+1}^ky_j)\\
&=&(n-1)^2+F(y_1, \cdots, y_k),
\end{eqnarray*}
where the last equality is due to
$\sum_{j=1}^ky_j=\sum_{j=1}^kd(v_j)-k=2(n-1)-(n-k)-k=n-2$. This
completes the proof.
\end{proof}
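As a sanity check of the lemma (our own illustration), the sketch below builds random caterpillars $T(y_1,\cdots,y_k)$, evaluates the Wiener index by breadth-first search, and compares it with $(n-1)^2+F(y_1,\cdots,y_k)$.

```python
import random
from collections import deque
from itertools import combinations

def formula_wiener(y):
    """(n-1)^2 + F(y_1,...,y_k) as in the lemma."""
    n = sum(y) + 2
    F = sum(sum(y[:i]) * sum(y[i:]) for i in range(1, len(y)))
    return (n - 1) ** 2 + F

def build_caterpillar(y):
    """Adjacency lists of T(y_1,...,y_k): y_1 pendants at v_1, y_i - 1 pendants
    at interior spine vertices, y_k pendants at v_k (spine vertices are 0..k-1)."""
    k = len(y)
    adj = {i: [] for i in range(k)}
    for i in range(k - 1):
        adj[i].append(i + 1)
        adj[i + 1].append(i)
    nxt = k
    for i, yi in enumerate(y):
        for _ in range(yi if i in (0, k - 1) else yi - 1):
            adj[nxt] = [i]
            adj[i].append(nxt)
            nxt += 1
    return adj

def bfs_wiener(adj):
    """Direct evaluation of the Wiener index by BFS from every vertex."""
    def dists(s):
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d
    return sum(dists(u)[v] for u, v in combinations(adj, 2))

random.seed(1)
for _ in range(20):
    y = [random.randint(1, 4) for _ in range(random.randint(2, 6))]
    assert bfs_wiener(build_caterpillar(y)) == formula_wiener(y)
```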
{\bf Remark} In the sequel, the caterpillar $T$ in
Lemma~\ref{formula} is denoted by $T(y_1,\cdots, y_k)$. The degree
sequence of $T(y_1,\cdots, y_k)$ is then $(y_1+1, \cdots, y_k+1, 1,
\cdots, 1)$. The following theorem gives a characterization of a
maximum optimal tree.
\begin{theorem}\label{optimal-equal}
Let $\pi=(d_1, \cdots, d_n)$ with $d_1\ge\cdots \ge d_k\ge 2\ge
d_{k+1}=\cdots=d_n=1$. Then $T$ is a maximum optimal tree in
${\mathcal{T}}_{\pi}$ if and only if $T$ is a caterpillar $T(x_1,
\cdots, x_k)$ and $(x_1, \cdots, x_k)$ satisfies
\begin{equation}
F(x_1, \cdots , x_k)=\max\{F(y_1, \cdots,
y_k)=\sum_{i=1}^{k-1}(\sum_{j=1}^iy_j)(\sum_{j=i+1}^ky_j):\ y_1\ge
y_k \},
\end{equation}
where $(y_1, \cdots, y_k)$ is any permutation of $(d_1-1, \cdots,
d_k-1)$.
\end{theorem}
\begin{proof} Necessity. Since $T$ is a maximum optimal tree in ${\mathcal{T}}_{\pi}$, by
Lemma~\ref{shi}, $T$ must be a caterpillar and can be denoted by
$T(z_1, \cdots, z_k)$, where $(z_1, \cdots, z_k)$ is a permutation
of $(d_1-1, \cdots, d_k-1)$. Moreover, by Lemma~\ref{formula}, we
have
$$W(T(z_1, \cdots, z_k))=(n-1)^2+F(z_1, \cdots, z_k).$$
For any permutation $(y_1, \cdots, y_k)$ of $(d_1-1, \cdots, d_k-1)$
with $y_1\ge y_k$, there exists a caterpillar $T_1$ with the degree
sequence $\pi$ such that
$$W(T_1)=(n-1)^2+F(y_1, \cdots, y_k).$$
Because $T(z_1, \cdots, z_k)$ is a maximum optimal tree in
${\mathcal{T}}_{\pi}$, we have
$$F(y_1, \cdots, y_k)=W(T_1)-(n-1)^2\le W(T(z_1, \cdots, z_k))-(n-1)^2=F(z_1, \cdots,
z_k).$$
Sufficiency. If $T$ is a caterpillar $T(x_1, \cdots, x_k)$ and
$(x_1, \cdots, x_k)$ satisfies
\begin{equation}
F(x_1, \cdots , x_k)=\max\{F(y_1, \cdots,
y_k)=\sum_{i=1}^{k-1}(\sum_{j=1}^iy_j)(\sum_{j=i+1}^ky_j): y_1\ge
y_k \},
\end{equation}
where the maximum is taken over all permutations $(y_1, \cdots,
y_k)$
of $(d_1-1, \cdots, d_k-1)$. Let $T_1$ be any tree with the degree
sequence $\pi$. By Lemma~\ref{shi}, there exists a caterpillar $T_2$
with the degree sequence $\pi$ such that $W(T_1)\le W(T_2)$. Then
$T_2$ must be $T(y_1, \cdots, y_k)$, where $(y_1, \cdots, y_k)$ is
a permutation of $(d_1-1, \cdots, d_k-1)$. Hence
$$W(T_1)\le W(T_2)= (n-1)^2+F(y_1, \cdots, y_k)\le (n-1)^2+F(x_1, \cdots,
x_k)=W(T(x_1, \cdots, x_k)).$$ Therefore $T(x_1, \cdots, x_k)$ is a
maximum optimal tree. This completes the proof.
\end{proof}
Now we can present an upper bound for the Wiener index of any tree
with given degree sequence $\pi$ in terms of degree sequences.
\begin{theorem}\label{upperbound}
Let $T$ be a tree with a given degree sequence $\pi=(d_1,\cdots,
d_n)$, where $d_1\ge \cdots\ge d_k>d_{k+1}=\cdots=d_n=1$. Then
\begin{equation}
W(T)\le (n-1)^2+\frac{k(k-1)}{4}\sum_{i=1}^k(d_i-1)^2
\end{equation}
with equality if and only if $k=2$ and $d_1=d_2$.
\end{theorem}
\begin{proof}
Let $T(x_1, \cdots, x_k)$ be a
caterpillar and
$(x_1, \cdots, x_k)$ satisfy
\begin{equation}
F(x_1, \cdots , x_k)=\max\{F(y_1, \cdots,
y_k)=\sum_{i=1}^{k-1}(\sum_{j=1}^iy_j)(\sum_{j=i+1}^ky_j): y_1\ge
y_k \},
\end{equation}
where $(y_1, \cdots, y_k)$ is any permutation of $(d_1-1, \cdots,
d_k-1)$. By Theorem~\ref{optimal-equal}, $W(T)\le W(T(x_1, \cdots,
x_k))$. Clearly,
\begin{eqnarray*} F(x_1,
\cdots, x_k)=\sum_{i=1}^{k-1}(\sum_{j=1}^ix_j)(\sum_{j=i+1}^kx_j)
=\frac{1}{2}(x_1, \cdots, x_k)C(x_1,\cdots, x_k)^T,
\end{eqnarray*}
where $$ C=\left(\begin{array}{cccccc} 0 &1&
2 &\cdots & k-2& k-1\\
1 & 0& 1& \cdots & k-3& k-2\\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
k-1& k-2 & k-3&\cdots & 1 &0
\end{array}\right).$$
By the Perron--Frobenius theorem (see, for example, \cite{horn1985}), the
largest eigenvalue $\lambda_1(C)$ of $C$ is at most
$\frac{k(k-1)}{2}$, with equality if and only if $k=2$. Hence, by the
Rayleigh quotient,
$$(x_1, \cdots, x_k)C(x_1,\cdots, x_k)^T\le
\lambda_1(C)\sum_{i=1}^kx_i^2$$ with equality if and only if $(x_1,
\cdots, x_k)^T$ is an eigenvector of $C$ corresponding to the
eigenvalue $\lambda_1(C)$. Therefore,
$$F(x_1,
\cdots, x_k)\le \frac{k(k-1)}{4}\sum_{i=1}^kx_i^2$$ with equality if
and only if $k=2$ and $x_1=x_2$. Hence
$$ W(T)\le (n-1)^2+\frac{k(k-1)}{4}\sum_{i=1}^k{x_i}^2\le
(n-1)^2+\frac{k(k-1)}{4}\sum_{i=1}^k(d_i-1)^2$$ with equality if and
only if $k=2$ and $d_1=d_2$, since $(d(v_1), \cdots, d(v_k))$ is a
permutation of $(d_1, \cdots, d_k)$. This completes the proof.
\end{proof}
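The bound of the theorem can also be checked numerically for small degree sequences. The sketch below is our own (the helper names are not from the paper); it compares the maximum of $F$ over all orderings of $(d_1-1,\cdots,d_k-1)$ with $\frac{k(k-1)}{4}\sum_{i=1}^k(d_i-1)^2$.

```python
from itertools import permutations

def F(ys):
    """F(y_1,...,y_k) = sum_i (y_1+...+y_i)(y_{i+1}+...+y_k)."""
    k = len(ys)
    return sum(sum(ys[:i]) * sum(ys[i:]) for i in range(1, k))

def max_F(ws):
    """Maximum of F over all orderings; F is reversal-invariant, so the
    restriction y_1 >= y_k in the theorem does not change the maximum."""
    return max(F(p) for p in permutations(ws))

for ds in [(4, 3, 2), (5, 5, 3, 2), (3, 3)]:
    ws = [d - 1 for d in ds]
    k = len(ws)
    bound = k * (k - 1) / 4 * sum(w * w for w in ws)
    assert max_F(ws) <= bound
    if k == 2 and ds[0] == ds[1]:
        assert max_F(ws) == bound        # the equality case k = 2, d_1 = d_2
```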
\begin{lemma}\label{function}
Let $w_1\ge w_2\ge\cdots\ge w_k\ge 1$ be positive integers with
$k\ge 5$. Let
$$F(z_1, \cdots, z_k)=\max\{F(y_1, \cdots, y_k)=\sum_{i=1}^{k-1}(\sum_{j=1}^iy_j)(\sum_{j=i+1}^ky_j): y_1\ge y_k\},$$
where $(y_1, \cdots, y_k)$ is any permutation of $(w_1, \cdots,
w_k)$. Then there exists an integer $t$ with $2\le t\le k-2$ such that the
following holds:
\begin{equation}\label{z1-zt}
z_1+\cdots+ z_{t-2}\le z_{t+1}+\cdots+z_k \end{equation} and
\begin{equation}\label{zt-zk}
z_1+\cdots+z_{t-1}>z_{t+2}+\cdots+z_k.
\end{equation}
Further, if inequality (\ref{z1-zt}) is strict, then
\begin{equation}\label{z1-zt-ztrict}
z_1\ge z_2\ge\cdots\ge z_t, \quad \quad z_t\le z_{t+1}\le \cdots\le
z_k.
\end{equation}
If (\ref{z1-zt}) holds with equality, then
\begin{equation}\label{lemma-z1-zt-1}
z_1\ge z_2\ge\cdots\ge z_t, \quad \quad z_t\le z_{t+1}\le \cdots\le
z_k
\end{equation}
or
\begin{equation}\label{lemma-z1-zt-2}
z_1\ge z_2\ge\cdots\ge z_{t-1}, \quad \quad z_{t-1}\le z_{t}\le
\cdots\le z_k.
\end{equation}
\end{lemma}
\begin{proof}
Let $$f(p)=\sum_{i=1}^{p-2}z_i-\sum_{i=p+1}^kz_i,\ \ \ 2\le p\le
k-1.$$ Clearly $f(2)<0$, $f(k-1)>0$ and
$$f(2)\le f(3)\le\cdots\le f(k-1).$$
Hence there exists an integer $t$ with $2\le t\le k-2$ such that $f(t)\le 0$ and
$f(t+1)>0$. In other words, inequalities (\ref{z1-zt}) and
(\ref{zt-zk}) hold. By the definition of $F(z_1, \cdots, z_k),$ we
have for $1\le i\le k-1$,
\begin{eqnarray*}
0&\le& F(z_1, \cdots,z_{i-1}, z_i, z_{i+1}, \cdots, z_k)-F(z_1,
\cdots, z_{i-1}, z_{i+1}, z_i, \cdots, z_k)\\
&=&(z_{i+1}-z_i)(\sum_{j=1}^{i-1}z_j-\sum_{j=i+2}^kz_j) .
\end{eqnarray*}
But for $1\le i\le t-2$, by (\ref{z1-zt}), we have
$\sum_{j=1}^{i-1}z_j<\sum_{j=i+2}^kz_j$. Hence $z_1\ge \cdots\ge
z_{t-1}$. On the other hand, for $t\le i\le k-1$, by (\ref{zt-zk}),
we have $\sum_{j=1}^{i-1}z_j>\sum_{j=i+2}^kz_j$. Therefore $z_t\le
z_{t+1}\le\cdots\le z_k$.
If (\ref{z1-zt}) is strict, then $(z_1+\cdots+z_{t-2})-(z_{t+1}+\cdots+z_k)<0$,
which implies
$z_{t-1}\ge z_t$. So (\ref{z1-zt-ztrict}) holds.
If (\ref{z1-zt}) becomes equality, i.e.,
$z_1+\cdots+z_{t-2}=z_{t+1}+\cdots+z_k$, then it is easy to see that
(\ref{lemma-z1-zt-1}) or (\ref{lemma-z1-zt-2}) holds. This
completes the proof.
\end{proof}
\begin{corollary}\label{k=6fun}
Let $w_1\ge w_2\ge\cdots\ge w_6\ge 1$ be positive integers. Let
$$F(z_1, \cdots, z_6)=\max\{F(y_1, \cdots, y_6)=
\sum_{i=1}^{5}(\sum_{j=1}^iy_j)(\sum_{j=i+1}^6y_j):\ \ y_1\ge
y_6\},$$ where $(y_1, \cdots, y_6)$ is any permutation of $(w_1,
\cdots, w_6)$. Then $(z_1, \cdots, z_6)$ is equal to one of the
following five tuples: $(w_1, w_6, w_5, w_4, w_3, w_2)$, $(w_1, w_5, w_6,
w_4, w_3, w_2)$, $(w_1, w_4, w_6, w_5, w_3, w_2)$, $(w_1, w_4, w_5,
w_6, w_3, w_2)$ and $(w_1, w_3, w_6, w_5, w_4, w_2)$.
\end{corollary}
\begin{proof}
By Lemma~\ref{function}, there are just three cases.

{\bf Case 1} $t=2$. Then by Lemma~\ref{function}, $z_1\ge z_2$ and
$z_2\le z_3\le z_4\le z_5\le z_6$. Hence $(z_1, \cdots, z_6)$ must
be $(w_1, w_6, w_5, w_4, w_3, w_2)$.

{\bf Case 2} $t=3$. Then $z_1\le z_4+ z_5+z_6$ and
$z_1+z_2>z_5+z_6.$ Moreover, $z_1\ge z_2\ge z_3$ and $ z_3\le
z_4\le z_5\le z_6$; or $z_1\ge z_2$ and $z_2\le z_3\le z_4\le z_5\le
z_6$. Therefore $(z_1, \cdots, z_6)$ must be one of $(w_1, w_6,
w_5, w_4, w_3, w_2)$, $(w_1, w_5, w_6, w_4, w_3, w_2)$, $(w_1, w_4,
w_6, w_5, w_3, w_2)$ and $(w_1, w_3, w_6, w_5, w_4, w_2)$.

{\bf Case 3} $t=4$. Then $z_1+z_2\le z_5+z_6$. Moreover, $z_1\ge
z_2\ge z_3$ and $ z_3\le z_4\le z_5\le z_6$; or $z_1\ge z_2\ge
z_3\ge z_4$ and $z_4\le z_5\le z_6$. Therefore, $(z_1, \cdots, z_6)$
must be one of $(w_1, w_4, w_6, w_5, w_3, w_2)$, $(w_1, w_5,
w_6, w_4, w_3, w_2)$ and $(w_1, w_4, w_5, w_6, w_3, w_2)$. This
completes the proof.
\end{proof}
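Since $k=6$ is small, the corollary can be confirmed by exhaustive search over all $6!$ orderings. The Python sketch below (the names \verb|F| and \verb|candidates| are ours) does this for a few sample weight tuples.

```python
from itertools import permutations

def F(ys):
    """F(y_1,...,y_k) = sum_i (y_1+...+y_i)(y_{i+1}+...+y_k)."""
    k = len(ys)
    return sum(sum(ys[:i]) * sum(ys[i:]) for i in range(1, k))

def candidates(w):
    """The five orderings singled out by the corollary."""
    w1, w2, w3, w4, w5, w6 = w
    return [(w1, w6, w5, w4, w3, w2), (w1, w5, w6, w4, w3, w2),
            (w1, w4, w6, w5, w3, w2), (w1, w4, w5, w6, w3, w2),
            (w1, w3, w6, w5, w4, w2)]

for w in [(7, 5, 4, 3, 2, 1), (9, 2, 2, 2, 1, 1), (3, 3, 3, 3, 3, 3)]:
    best = max(F(p) for p in permutations(w) if p[0] >= p[5])
    assert best == max(F(c) for c in candidates(w))   # maximizer is among the five
```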
\begin{theorem}\label{char} Let $\pi=(d_1, \cdots, d_n)$ be a tree
degree sequence with $d_1\ge d_2\ge\cdots\ge d_k\ge 2$,
$d_{k+1}=\cdots=d_n=1$ and $k\ge 5$. If a caterpillar $T(x_1,
\cdots, x_k)$ is a maximum optimal tree in ${\mathcal{T}}_{\pi}$,
with $F(x_1, \cdots, x_k)$ as in equation (\ref{weiner-f}), then there
exists an integer $t$ with $2\le t\le k-2$ such that either
$$\sum_{i=1}^{t-2}x_i\le\sum_{i=t+1}^kx_i,\quad
\sum_{i=1}^{t-1}x_i>\sum_{i=t+2}^kx_i,\quad x_1\ge x_2\ge\cdots\ge
x_{t-1}\ge x_t, \quad x_{t}\le x_{t+1}\le \cdots\le x_k;$$ or
$$\sum_{i=1}^{t-2}x_i=\sum_{i=t+1}^kx_i,\quad
\sum_{i=1}^{t-1}x_i>\sum_{i=t+2}^kx_i, \quad x_1\ge x_2\ge\cdots\ge
x_{t-1}\ge x_t, \quad x_{t}\le x_{t+1}\le \cdots\le x_k;$$
or
$$\sum_{i=1}^{t-2}x_i=\sum_{i=t+1}^kx_i,\quad
\sum_{i=1}^{t-1}x_i>\sum_{i=t+2}^kx_i, \quad x_1\ge x_2\ge\cdots\ge
x_{t-1}, \quad x_{t-1}\le x_{t}\le \cdots\le x_k.$$
\end{theorem}
\begin{proof} The assertion follows from Theorem~\ref{optimal-equal}
and Lemma~\ref{function}.
\end{proof}
\section{The maximum optimal tree with many leaves}
In this section, for a given degree sequence $\pi=(d_1, \cdots,
d_n)$ with at least $n-6$ leaves, we determine the maximum optimal
trees, i.e., the trees attaining the maximum Wiener index in
${\mathcal{T}}_{\pi}$. Moreover, the maximum optimal tree may not be
unique.
\begin{theorem}\label{k=2--4}
Let $\pi=(d_1, \cdots,d_k, \cdots, d_n)$ be a tree degree sequence with
$n-k$ leaves for $2\le k\le 4.$ Then the maximum optimal tree in ${\mathcal T}_{ \pi}$ is
the greedy caterpillar.
In other words,
if $k=2$, then $W(T)=(n-1)^2+(d_1-1)(d_2-1)$, for $T\in {
\mathcal{T}}_{\pi}.$
If $k=3$, then for any $T\in {\mathcal{T}}_{\pi},$
$$W(T)\le (n-1)^2+(d_1-1)(d_2+d_3-2)+(d_1+d_2-2)(d_3-1)$$
with equality if and only if $T$ is the caterpillar $T(d_1-1, d_3-1,
d_2-1).$
If $k=4,$ then for any $T\in {\mathcal{T}}_{\pi},$
$$W(T)\le (n-1)^2+(d_1-1)(d_2+d_3+d_4-3)+(d_1+d_2-2)(d_3+d_4-2)+(d_1+d_2+d_3-3)(d_4-1)$$
with equality if and only if $T$ is the caterpillar $T(d_1-1, d_4-1, d_3-1,
d_2-1)$.
\end{theorem}
\begin{proof} If $k=2$, the assertion is obvious. If $k=3$, it is easy to see
that $F(d_1-1, d_2-1, d_3-1)\le F(d_1-1, d_3-1, d_2-1). $ By
Theorem~\ref{optimal-equal}, the assertion holds.
If $k=4$, then by Theorem~\ref{optimal-equal}, let $T$ be a
caterpillar $T(x_1, x_2, x_3, x_4)$ and
$$F(x_1, x_2, x_3,
x_4)=\max\{F(y_1, y_2, y_3, y_4): y_1\ge y_4\},$$
where $(y_1, y_2,
y_3, y_4)$ is any permutation of $(d_1-1, d_2-1, d_3-1, d_4-1)$.
Because
$$F(x_1, x_2, x_3, x_4)-F(x_2, x_1, x_3, x_4)=(x_1-x_2)(x_3+x_4)\ge
0$$ and $$ F(x_1, x_2, x_3, x_4)-F(x_1, x_2, x_4,
x_3)=(x_4-x_3)(x_1+x_2)\ge 0,$$ we have $x_1\ge x_2$ and $x_4\ge
x_3$. So $(x_1, x_2, x_3, x_4)=(d_1-1, d_4-1, d_3-1, d_2-1)$. This
completes the proof.
\end{proof}
\begin{theorem}\label{k=5}
Let $\pi=(d_1, \cdots,d_k, \cdots, d_n)$ be a tree degree sequence with
$n-5$ leaves.
(1). If $d_1> d_2+d_3$, then the unique maximum optimal tree in
${\mathcal T}_{ \pi}$ is the
caterpillar $T(d_1-1, d_5-1, d_4-1, d_3-1, d_2-1)$.
(2). If $d_1=d_2+d_3$, then there are exactly two maximum
optimal trees in ${\mathcal T}_{ \pi}$: one tree is the
caterpillar $T(d_1-1, d_5-1, d_4-1, d_3-1, d_2-1)$;
the other tree is the caterpillar $T(d_1-1, d_4-1, d_5-1,
d_3-1, d_2-1)$.
(3). If $d_1< d_2+d_3$, then the unique maximum optimal tree in
${\mathcal T}_{ \pi}$ is the caterpillar $T(d_1-1, d_4-1, d_5-1, d_3-1, d_2-1)$.
\end{theorem}
\begin{proof} By Theorem~\ref{optimal-equal}, let $T(x_1, x_2, x_3, x_4, x_5)$
be a maximum optimal tree in ${\mathcal{T}}_{\pi}$. If
$d_1>d_2+d_3$, then by Theorem~\ref{char}, it is easy to see that
$t=2$, and $x_1\ge x_2$ and $x_2\le x_3\le x_4\le x_5$. Hence $(x_1,
x_2, x_3, x_4, x_5)=(d_1-1, d_5-1, d_4-1, d_3-1, d_2-1)$.
If $d_1<d_2+d_3$, then by Theorem~\ref{char}, it is easy to see that
$x_1\ge x_2\ge x_3$ and $x_3\le x_4\le x_5$. Hence $(x_1, x_2, x_3,
x_4, x_5)=(d_1-1, d_4-1, d_5-1, d_3-1, d_2-1)$ or $(d_1-1, d_3-1,
d_5-1, d_4-1, d_2-1)$. But $W(T(d_1-1, d_4-1, d_5-1, d_3-1,
d_2-1))-W(T(d_1-1, d_3-1, d_5-1, d_4-1,
d_2-1))=2(d_1-d_2)(d_3-d_4)\ge 0$ with equality if and only if
$d_1=d_2$ or $d_3=d_4$. Hence the assertion (3) holds.
If $d_1=d_2+d_3$, then by Theorem~\ref{char}, it is easy to see
that either $x_1\ge x_2$ and $x_2\le x_3\le x_4\le x_5$; or $x_1\ge
x_2\ge x_3$ and $x_3\le x_4\le x_5$. Hence $(x_1, x_2, x_3, x_4,
x_5)=(d_1-1, d_5-1, d_4-1, d_3-1, d_2-1)$ or $(d_1-1, d_4-1, d_5-1,
d_3-1, d_2-1)$. Moreover, $F(d_1-1, d_5-1, d_4-1, d_3-1, d_2-1)=F
(d_1-1, d_4-1, d_5-1, d_3-1, d_2-1)$. Hence (2) holds.
\end{proof}
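The case analysis of the theorem can be cross-checked by brute force over all $5!$ orderings. In the sketch below (our own names) \verb|predicted| returns the ordering claimed by the theorem: cases (1) and (2) for $d_1\ge d_2+d_3$, case (3) otherwise.

```python
from itertools import permutations

def F(ys):
    """F(y_1,...,y_k) = sum_i (y_1+...+y_i)(y_{i+1}+...+y_k)."""
    k = len(ys)
    return sum(sum(ys[:i]) * sum(ys[i:]) for i in range(1, k))

def predicted(ds):
    """Ordering (x_1,...,x_5) claimed by the theorem; in case (2) the two
    optimal caterpillars have equal Wiener index, so either one works here."""
    d1, d2, d3, d4, d5 = ds
    if d1 >= d2 + d3:                                  # cases (1) and (2)
        return (d1 - 1, d5 - 1, d4 - 1, d3 - 1, d2 - 1)
    return (d1 - 1, d4 - 1, d5 - 1, d3 - 1, d2 - 1)    # case (3)

for ds in [(6, 2, 2, 2, 2), (5, 3, 2, 2, 2), (4, 4, 3, 3, 2)]:
    ws = [d - 1 for d in ds]
    best = max(F(p) for p in permutations(ws) if p[0] >= p[4])
    assert F(predicted(ds)) == best
```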
\begin{lemma}\label{6diff}
Let $w_1\ge w_2\ge\cdots\ge w_6\ge 1$ be positive integers and
$$ F(y_1, \cdots,
y_6)=\sum_{i=1}^{5}(\sum_{j=1}^iy_j)(\sum_{j=i+1}^6y_j).$$ Then
\begin{equation}\label{k61}
F(w_1, w_6, w_5, w_4, w_3, w_2)-F(w_1, w_5, w_6, w_4, w_3,
w_2)=(w_1-w_2-w_3-w_4)(w_5-w_6),
\end{equation}
\begin{equation}\label{k62}
F(w_1, w_5, w_6, w_4, w_3, w_2)-F(w_1, w_4, w_6, w_5, w_3,
w_2)=2(w_1-w_2-w_3)(w_4-w_5),
\end{equation}
\begin{equation}\label{k63}
F(w_1, w_4, w_6, w_5, w_3, w_2)-F(w_1, w_4, w_5, w_6, w_3,
w_2)=(w_1+w_4-w_2-w_3)(w_5-w_6),
\end{equation}
\begin{equation}\label{k64}
F(w_1, w_4, w_5, w_6, w_3, w_2)-F(w_1, w_3, w_6, w_5, w_4,
w_2)=(3w_3-3w_4-w_5+w_6)(w_1-w_2).
\end{equation}
\end{lemma}
\begin{proof}
Each identity follows by a direct calculation.
\end{proof}
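The four identities are polynomial, so they can be verified mechanically; the sketch below (names are ours) checks them over a small grid of integer tuples, and the ordering $w_1\ge\cdots\ge w_6$ is not even needed for them to hold.

```python
from itertools import product

def F(ys):
    """F(y_1,...,y_k) = sum_i (y_1+...+y_i)(y_{i+1}+...+y_k)."""
    k = len(ys)
    return sum(sum(ys[:i]) * sum(ys[i:]) for i in range(1, k))

def check(w):
    """Verify the four differences (k61)-(k64) for one tuple w = (w_1,...,w_6)."""
    w1, w2, w3, w4, w5, w6 = w
    assert F((w1, w6, w5, w4, w3, w2)) - F((w1, w5, w6, w4, w3, w2)) \
        == (w1 - w2 - w3 - w4) * (w5 - w6)
    assert F((w1, w5, w6, w4, w3, w2)) - F((w1, w4, w6, w5, w3, w2)) \
        == 2 * (w1 - w2 - w3) * (w4 - w5)
    assert F((w1, w4, w6, w5, w3, w2)) - F((w1, w4, w5, w6, w3, w2)) \
        == (w1 + w4 - w2 - w3) * (w5 - w6)
    assert F((w1, w4, w5, w6, w3, w2)) - F((w1, w3, w6, w5, w4, w2)) \
        == (3 * w3 - 3 * w4 - w5 + w6) * (w1 - w2)

for w in product(range(1, 4), repeat=6):              # 3^6 = 729 tuples
    check(w)
```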
\begin{theorem}\label{k=6}
Let $\pi=(d_1, \cdots,d_6, \cdots, d_n)$ be a tree degree sequence with
$n-6$ leaves, i.e., $d_1\ge\cdots\ge d_6\ge 2$ and $d_7=\cdots=d_n=1$.
(1). If $d_1>d_2+d_3+d_4-2$, then
there is only one maximum optimal tree $T(d_1-1, d_6-1, d_5-1,
d_4-1, d_3-1, d_2-1)$ in ${\mathcal T}_{ \pi}$.
(2). If $d_1=d_2+d_3+d_4-2$, then
there are exactly two maximum optimal trees in ${\mathcal T}_{
\pi}$: one maximum optimal tree is $T(d_1-1, d_6-1, d_5-1,
d_4-1, d_3-1, d_2-1)$; the other maximum optimal tree is $T(d_1-1,
d_5-1, d_6-1, d_4-1, d_3-1, d_2-1)$.
(3). If $d_2+d_3-1<d_1<d_2+d_3+d_4-2$, then there is only one maximum
optimal tree $T(d_1-1, d_5-1, d_6-1, d_4-1, d_3-1, d_2-1)$ in
${\mathcal T}_{ \pi}$.
(4). If $d_2+d_3-1=d_1$, then there are exactly two maximum optimal
trees in ${\mathcal T}_{
\pi}$: one maximum optimal tree is $T(d_1-1, d_5-1, d_6-1,
d_4-1, d_3-1, d_2-1)$; the other maximum optimal tree is $T(d_1-1,
d_4-1, d_6-1, d_5-1, d_3-1, d_2-1)$.
(5). If $\max\{d_2+d_3-d_4,\
d_2+\frac{1}{3}(d_5-d_6)\}<d_1<d_2+d_3-1$, then there is only one
maximum optimal tree $T(d_1-1, d_4-1, d_6-1, d_5-1, d_3-1, d_2-1)$
in ${\mathcal T}_{ \pi}$.
(6). If $d_1=d_2+d_3-d_4> d_2+\frac{1}{3}(d_5-d_6)$, then there are
exactly two maximum optimal trees in ${\mathcal T}_{
\pi}$: one maximum optimal tree is $T(d_1-1, d_4-1, d_6-1,
d_5-1, d_3-1, d_2-1)$; the other maximum optimal tree is $T(d_1-1,
d_4-1, d_5-1, d_6-1, d_3-1, d_2-1)$.
(7). If $d_1= d_2+\frac{1}{3}(d_5-d_6)>d_2+d_3-d_4$, then there are
exactly two maximum optimal trees in ${\mathcal T}_{
\pi}$: one maximum optimal tree is $T(d_1-1, d_4-1, d_6-1,
d_5-1, d_3-1, d_2-1)$; the other maximum optimal tree is $T(d_1-1,
d_3-1, d_6-1, d_5-1, d_4-1, d_2-1)$.
(8). If $d_1=d_2+d_3-d_4= d_2+\frac{1}{3}(d_5-d_6)$, then there are
exactly three maximum optimal trees in ${\mathcal T}_{
\pi}$: they are $T(d_1-1, d_4-1, d_6-1,
d_5-1, d_3-1, d_2-1)$; $T(d_1-1, d_4-1, d_5-1, d_6-1, d_3-1, d_2-1)$
and $T(d_1-1, d_3-1, d_6-1, d_5-1, d_4-1, d_2-1)$.
(9). If $d_2+\frac{1}{3}(d_5-d_6)\le d_1<d_2+d_3-d_4$, or $d_1\le d_2+\frac{1}{3}(d_5-d_6)<
d_2+d_3-d_4$,
then there is only one
maximum optimal tree $T(d_1-1, d_4-1, d_5-1, d_6-1, d_3-1, d_2-1)$
in ${\mathcal T}_{ \pi}$.
(10). If $d_2+d_3-d_4\le d_1<d_2+\frac{1}{3}(d_5-d_6)$; or $d_1\le
d_2+d_3-d_4< d_2+\frac{1}{3}(d_5-d_6)$,
then there is only one
maximum optimal tree $T(d_1-1, d_3-1, d_6-1, d_5-1, d_4-1, d_2-1)$
in ${\mathcal T}_{ \pi}$.
(11). If $d_1< d_2+\frac{1}{3}(d_5-d_6)=
d_2+d_3-d_4$,
then there are exactly two maximum optimal trees in ${\mathcal T}_{
\pi}$: one maximum optimal tree is $T(d_1-1, d_3-1, d_6-1,
d_5-1, d_4-1, d_2-1)$; the other maximum optimal tree is $T(d_1-1,
d_4-1, d_5-1, d_6-1, d_3-1, d_2-1)$.
\end{theorem}
\begin{proof}
The proof is deferred to the appendix since it is technical.
\end{proof}
{\bf Remark}. From Theorem~\ref{k=6}, we can see that the maximum
optimal trees depend on the values of all components of the tree
degree sequence and may not be unique, while the minimum optimal tree
is unique for a given tree degree sequence. Moreover,
Theorem~\ref{k=6} suggests that it is difficult to characterize all
the maximum optimal trees for a given tree degree sequence.
\frenchspacing
% --- End of ``The Maximum Wiener Index of Trees with Given Degree Sequences'' (arXiv:0907.3772) ---
% --- ``Neighborhoods of univalent functions'' (arXiv:0910.5456) ---
% Abstract: The main result shows a small perturbation of a univalent function is
% again a univalent function, hence a univalent function has a neighborhood
% consisting entirely of univalent functions. For the particular choice of a linear
% function in the hypothesis of the main theorem, we obtain a corollary which is
% equivalent to the classical Noshiro-Warschawski-Wolff univalence criterion. We
% also present an application of the main result in terms of Taylor series, and we
% show that the hypothesis of our main result is sharp.
\section{Introduction}
We denote by $U_{r}=\left\{ z\in \mathbb{C}:\left\vert z\right\vert
<r\right\} $ the open disk of radius $r>0$ centered at the origin and we let
$U=U_{1}$. The class of functions $f:D\rightarrow \mathbb{C}$ analytic in
the domain $D$ will be denoted by $\mathcal{A}\left( D\right) $.
It is known that if $f:D\rightarrow \mathbb{C}$ is a univalent map in a
domain $D$, then $f^{\prime }\neq 0$ in $D$. The non-vanishing of the
derivative of an analytic function (local univalence) is not in general
sufficient to ensure the univalence of the function, as can be seen by
considering for example the exponential function $f\left( z\right) =e^{z}$
defined in the upper half-plane.
The classical Noshiro-Warschawski-Wolff univalence criterion gives a partial
converse of the above result, as follows:
\begin{theorem}
If $f:D\rightarrow \mathbb{C}$ is analytic in the convex domain $D$
and
\begin{equation*}
\func{Re}f^{\prime }\left( z\right) >0,\qquad z\in D,
\end{equation*}%
then $f$ is univalent in $D$.
\end{theorem}
In the present paper we introduce the constant $K\left( f,D\right) $
associated with a function $f:D\rightarrow \mathbb{C}$ analytic in a
domain $D$, which is a measure of the ``degree of univalence'' of $f$
(see Proposition \ref{characterization of univalence} and the remark
following it).
Using the constant $K\left( f,D\right) $ thus introduced, in Theorem
\ref{main theorem} we obtain a sufficient condition for univalence,
which shows that a small perturbation of a univalent function is
again univalent. As a theoretical consequence of this result, it
follows that a univalent function has a neighborhood consisting
entirely of univalent functions (see Remark \ref{Nbds of univalent
functions}).
Theorem \ref{main theorem} is sharp, in the sense that we cannot
replace the upper bound appearing in the hypothesis of this theorem
by a larger one, as shown in Example \ref{Exemplul 1}.
For the particular choice of a linear function in Theorem \ref{main
theorem}, we obtain a simple sufficient condition for univalence
(Corollary \ref{corollary of main theorem 2}), which is shown to be
equivalent to the Noshiro-Warschawski-Wolff univalence criterion.
The main result in Theorem \ref{main theorem} can be viewed
therefore as a generalization of this classical result, in which the
linear function is replaced by a general univalent function.
The paper concludes with another application of the main result in
the case of analytic functions defined in the unit disk. Thus, in
Theorem \ref{Application to Taylor series} and the corollary
following it, we obtain sufficient conditions for the univalence of
an analytic function defined in the unit disk in terms of the
coefficients of its Taylor series representation, which might be of
independent interest.
\section{Main results}
Given a function $f:D\rightarrow \mathbb{C}$ analytic in the domain $D$ we
introduce the constant $K\left( f,D\right) $ defined as follows:
\begin{equation}
K\left( f,D\right) =\inf_{\substack{ a,b\in D \\ a\neq b}}\left\vert \frac{%
f\left( a\right) -f\left( b\right) }{a-b}\right\vert
\end{equation}
Note that it follows immediately from the definition that if the function $f$
is not univalent in $D$, then $K\left( f,D\right) =0$. The constant $K\left(
f,D\right) $ characterizes the univalence of the function $f$ in $D$ in the
following sense:
\begin{proposition}
\label{characterization of univalence}Let $f:D\rightarrow
\mathbb{C}$ be an analytic function in the domain $D$. If $K\left(
f,D\right) >0$ then $f$ is univalent in $D$.
Conversely, if $f$ is univalent in $D$ and $\Omega \subset \overline{\Omega }%
\subset D$ is a domain compactly contained in $D$, then $K\left( f,\Omega
\right) >0$.
\end{proposition}
\begin{proof}
The first statement follows from the inequality
\begin{equation*}
\left\vert f\left( a\right) -f\left( b\right) \right\vert \geq \left\vert
a-b\right\vert K\left( f,D\right) >0,
\end{equation*}%
for any distinct points $a,b\in D$.
To prove the converse, note that%
\begin{equation*}
K\left( f,\Omega \right) \geq \inf_{a,b\in \overline{\Omega }}\left\vert
F\left( a,b\right) \right\vert ,
\end{equation*}%
where $F:\overline{\Omega }\times \overline{\Omega }\rightarrow \mathbb{C}$
is the function defined by%
\begin{equation*}
F\left( a,b\right) =\left\{
\begin{array}{l}
\frac{f\left( a\right) -f\left( b\right) }{a-b},\qquad a\neq b \\
f^{\prime }\left( a\right) ,\qquad \quad a=b%
\end{array}%
\right. .
\end{equation*}
Note that since $f:D\rightarrow \mathbb{C}$ is analytic, $F$ is continuous
on the closed set $\overline{\Omega }\times \overline{\Omega }$, and
therefore $F$ attains its minimum modulus on this set:%
\begin{equation*}
K\left( f,\Omega \right) \geq \inf_{a,b\in \overline{\Omega }}\left\vert
F\left( a,b\right) \right\vert =\left\vert F\left( \alpha ,\beta \right)
\right\vert ,
\end{equation*}%
for some $\alpha ,\beta \in \overline{\Omega }$.
If $\alpha \neq \beta $, then $\left\vert F\left( \alpha ,\beta \right)
\right\vert =\left\vert \frac{f\left( \alpha \right) -f\left( \beta \right)
}{\alpha -\beta }\right\vert >0$ since $\alpha ,\beta \in \overline{\Omega }%
\subset D$ and $f$ is univalent in $D$, and if $\alpha =\beta $ then $%
\left\vert F\left( \alpha ,\alpha \right) \right\vert =\left\vert f^{\prime
}\left( \alpha \right) \right\vert >0$, again by the univalence of $f$ in $D$%
. It follows that in all cases we have%
\begin{equation*}
K\left( f,\Omega \right) \geq \left\vert F\left( \alpha ,\beta \right)
\right\vert >0,
\end{equation*}%
concluding the proof.
\end{proof}
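Although $K(f,D)$ is defined as an infimum over infinitely many pairs, it is easy to estimate on a finite sample. The following Python sketch is a rough numerical illustration of ours (the names \verb|grid| and \verb|K_estimate| are not from the paper); the sampled minimum over-estimates the true infimum, but it already distinguishes the univalent from the non-univalent examples.

```python
def grid(radius, n):
    """Uniform grid of sample points in the closed disk of the given radius."""
    step = 2 * radius / (n - 1)
    pts = []
    for i in range(n):
        for j in range(n):
            z = complex(-radius + i * step, -radius + j * step)
            if abs(z) <= radius:
                pts.append(z)
    return pts

def K_estimate(f, radius=0.95, n=13):
    """Minimum difference quotient |f(a)-f(b)|/|a-b| over distinct grid
    points; an upper estimate of the true infimum K(f, U_radius)."""
    pts = grid(radius, n)
    return min(abs(f(a) - f(b)) / abs(a - b)
               for a in pts for b in pts if a != b)

# For the linear map g(z) = c z the infimum is exactly |c|:
assert abs(K_estimate(lambda z: 2 * z) - 2.0) < 1e-9
# For the non-univalent map z -> z^2 the sampled infimum is already 0
# (take b = -a), illustrating that K(f, D) = 0 when f is not univalent:
assert K_estimate(lambda z: z * z) < 1e-9
# A univalent perturbation of the identity keeps the estimate positive:
assert K_estimate(lambda z: z + z * z / 4) > 0.4
```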
\begin{remark}
\label{K might be 0}Note that the converse in the above proposition may not
hold for $\Omega =D$ without the additional hypothesis, as shown in the
example below.
In order to have the equivalence%
\begin{equation*}
f\text{ univalent in }D\Longleftrightarrow \text{ }K\left( f,D\right) >0,
\end{equation*}%
one needs additional hypotheses, which guarantee the existence of a
continuous extension of $f,f^{\prime }$ to $\overline{D}$, such that
$f$ is injective on $\overline{D}$ and $f^{\prime }\neq 0$ in
$\overline{D}$.
For example, in the case $D=U$, if the boundary of the image domain $f\left(
U\right) $ is a Jordan curve of class $C^{1,\alpha }$ $\left( 0<\alpha
<1\right) $, by Carath\'{e}odory's theorem the function $f$ has a continuous
injective extension to $\overline{D}$, and also, by the Kellogg-Warschawski
theorem, the function $f^{\prime }$ has a continuous extension to $\overline{D}
$, with $f^{\prime }\neq 0$ in $\overline{D}$ (see for example \cite%
{Pommerenke}, p. 24 and pp. 48 -- 49). Following the proof above with $%
\Omega $ replaced by $U$, we obtain $K\left( f,U\right) >0$, and therefore
in this case we have
\begin{equation*}
f\text{ univalent in }U\Longleftrightarrow \text{ }K\left( f,U\right) >0.
\end{equation*}
\end{remark}
\begin{example}
Let $D=U-\left[ 0,1\right] $ be the unit disk with a slit along the positive
real axis. Since $D$ is simply connected, there exists a conformal map $%
f:U\rightarrow D$ between the unit disk $U$ and $D$ (see Figure \ref{Figura
1} below). The map $f$ has a continuous extension to $\overline{U}$, and
without loss of generality we may assume that there exists $\theta \in
(0,2\pi )$ such that $f\left( e^{i\theta }\right) =f\left( e^{-i\theta
}\right) \in \left( 0,1\right) $.
The function $f$ is univalent in $U$, but $K\left( f,U\right) =0$ since
\begin{equation*}
K\left( f,U\right) \leq \lim_{\substack{ a\rightarrow e^{i\theta } \\ %
b\rightarrow e^{-i\theta }}}\left\vert \frac{f\left( a\right) -f\left(
b\right) }{a-b}\right\vert =\left\vert \frac{f\left( e^{i\theta }\right)
-f\left( e^{-i\theta }\right) }{e^{i\theta }-e^{-i\theta }}\right\vert =0.
\end{equation*}
\end{example}
\begin{figure}[thb]
\begin{center}
\includegraphics[scale=0.6]{fig1}
\caption{An example of a univalent function in $U$ for which
$K\left( f,U\right) =0$.}
\label{Figura 1}
\end{center}
\end{figure}
The main result is contained in the following:
\begin{theorem}
\label{main theorem}Let $f:D\rightarrow \mathbb{C}$ be a non-constant
analytic function in the convex domain $D$. If there exists an analytic
function $g:D\rightarrow \mathbb{C}$ univalent in $D$ such that%
\begin{equation}
\left\vert f^{\prime }\left( z\right) -g^{\prime }\left( z\right)
\right\vert \leq K\left( g,D\right) ,\qquad z\in D,
\label{sufficient condition for univalency}
\end{equation}%
then the function $f$ is also univalent in $D$.
\end{theorem}
\begin{proof}
Assuming that $f$ is not univalent in $D$, there exist distinct points $%
z_{1,2}\in D$ such that $f\left( z_{1}\right) =f\left( z_{2}\right) $.
Integrating the derivative of $f-g$ along the line segment $\left[
z_{1},z_{2}\right] \subset D$ and using the hypothesis (\ref{sufficient
condition for univalency}) we obtain%
\begin{eqnarray*}
\left\vert g\left( z_{2}\right) -g\left( z_{1}\right) \right\vert
&=&\left\vert \left( f\left( z_{2}\right) -g\left( z_{2}\right) \right)
-\left( f\left( z_{1}\right) -g\left( z_{1}\right) \right) \right\vert \\
&=&\left\vert \int_{\left[ z_{1},z_{2}\right] }\left( f^{\prime }\left( z\right)
-g^{\prime }\left( z\right) \right) dz\right\vert \\
&\leq &\int_{\left[ z_{1},z_{2}\right] }\left\vert f^{\prime }\left(
z\right) -g^{\prime }\left( z\right) \right\vert \left\vert dz\right\vert \\
&\leq &\int_{\left[ z_{1},z_{2}\right] }K\left( g,D\right) \left\vert
dz\right\vert \\
&=&K\left( g,D\right) \left\vert z_{1}-z_{2}\right\vert .
\end{eqnarray*}
Since the points $z_{1,2}$ are assumed to be distinct, from the definition
of the constant $K\left( g,D\right) $ we obtain equivalently%
\begin{equation}
\left\vert \frac{g\left( z_{2}\right) -g\left( z_{1}\right) }{z_{2}-z_{1}}%
\right\vert \leq K\left( g,D\right) =\inf_{\substack{ a,b\in D \\ a\neq b}}%
\left\vert \frac{g\left( a\right) -g\left( b\right) }{a-b}\right\vert \leq
\left\vert \frac{g\left( z_{2}\right) -g\left( z_{1}\right) }{z_{2}-z_{1}}%
\right\vert ,
\end{equation}%
and therefore%
\begin{equation}
K\left( g,D\right) =\inf_{\substack{ a,b\in D \\ a\neq b}}\left\vert \frac{%
g\left( a\right) -g\left( b\right) }{a-b}\right\vert =\left\vert \frac{%
g\left( z_{2}\right) -g\left( z_{1}\right) }{z_{2}-z_{1}}\right\vert .
\label{minimum attained}
\end{equation}
Consider now the auxiliary function $G:D-\left\{ z_{2}\right\} \rightarrow
\mathbb{C}$ defined by
\begin{equation}
G\left( z\right) =\frac{g\left( z\right) -g\left( z_{2}\right) }{z-z_{2}}%
,\qquad z\in D-\left\{ z_{2}\right\} ,
\end{equation}%
and note that since $g$ is analytic in $D$, $G$ is also analytic in $%
D-\left\{ z_{2}\right\} $ and moreover the limit
\begin{equation}
\lim_{z\rightarrow z_{2}}G\left( z\right) =\lim_{z\rightarrow z_{2}}\frac{%
g\left( z\right) -g\left( z_{2}\right) }{z-z_{2}}=g^{\prime }\left(
z_{2}\right)
\end{equation}%
exists and it is finite. The function $G$ can be therefore extended by
continuity to an analytic function in $D$, denoted also by $G$.
Since%
\begin{equation*}
\inf_{z\in D}\left\vert G\left( z\right) \right\vert =\inf_{\substack{ z\in D
\\ z\neq z_{2}}}\left\vert G\left( z\right) \right\vert =\inf_{\substack{ %
z\in D \\ z\neq z_{2}}}\left\vert \frac{g\left( z\right) -g\left(
z_{2}\right) }{z-z_{2}}\right\vert \geq \inf_{\substack{ a,b\in D \\
a\neq b }}\left\vert \frac{g\left( a\right) -g\left( b\right)
}{a-b}\right\vert =K\left( g,D\right) ,
\end{equation*}%
combining with (\ref{minimum attained}) we obtain that
\begin{equation*}
\inf_{z\in D}\left\vert G\left( z\right) \right\vert \geq K\left( g,D\right)
=\left\vert \frac{g\left( z_{2}\right) -g\left( z_{1}\right) }{z_{2}-z_{1}}%
\right\vert =\left\vert G\left( z_{1}\right) \right\vert \geq \inf_{z\in
D}\left\vert G\left( z\right) \right\vert ,
\end{equation*}%
which shows that the minimum value of the modulus of $G$ in $D$ is attained at $%
z_{1}$:%
\begin{equation*}
\inf_{z\in D}\left\vert G\left( z\right) \right\vert =\left\vert G\left(
z_{1}\right) \right\vert .
\end{equation*}
However, since the function $g$ is univalent in $D$, from the definition of $%
G$ it follows that $G\left( z\right) \neq 0$ for any $z\in D-\left\{
z_{2}\right\} $, and also $G\left( z_{2}\right) =g^{\prime }\left(
z_{2}\right) \neq 0$, and therefore the function $G$ does not vanish in $D$.
Applying the maximum modulus principle to the analytic function $1/G$ it
follows that $\left\vert G\right\vert $ must be constant in $D$, and
therefore $G$ is constant in $D$.
It follows that
\begin{equation}
g\left( z\right) =g\left( z_{2}\right) +c\left( z-z_{2}\right) ,\qquad z\in
D, \label{g must be linear}
\end{equation}%
for a certain constant $c\in \mathbb{C}$ (from the definition of $G$ it can
be seen that the constant $c$ can be written in the form $c=g^{\prime
}\left( z_{2}\right) e^{i\theta }$, for some $\theta \in \mathbb{R}$).
The relation (\ref{g must be linear}) shows that $g$ is a linear function,
and therefore the constant $K\left( g,D\right) $ becomes in this case%
\begin{eqnarray*}
K\left( g,D\right) &=&\inf_{\substack{ a,b\in D \\ a\neq b}}\left\vert
\frac{g\left( a\right) -g\left( b\right) }{a-b}\right\vert \\
&=&\inf_{\substack{ a,b\in D \\ a\neq b}}\left\vert \frac{\left( g\left(
z_{2}\right) +c\left( a-z_{2}\right) \right) -\left( g\left( z_{2}\right)
+c\left( b-z_{2}\right) \right) }{a-b}\right\vert \\
&=&\inf_{\substack{ a,b\in D \\ a\neq b}}\left\vert \frac{c\left( a-b\right)
}{a-b}\right\vert \\
&=&\left\vert c\right\vert .
\end{eqnarray*}
The hypothesis (\ref{sufficient condition for univalency}) of the theorem
can be written therefore as follows%
\begin{equation*}
\left\vert f^{\prime }\left( z\right) -c\right\vert \leq \left\vert
c\right\vert ,\qquad z\in D,
\end{equation*}%
which shows that either $f$ is linear in $D$ (and thus univalent, since $f$
is assumed to be non-constant in $D$), or the following strict inequality
holds%
\begin{equation*}
\left\vert f^{\prime }\left( z\right) -c\right\vert <\left\vert c\right\vert
,\qquad z\in D.
\end{equation*}
Repeating the proof above with $g\left( z\right) \equiv cz$ we obtain%
\begin{eqnarray*}
\left\vert cz_{2}-cz_{1}\right\vert &=&\left\vert \left( f\left(
z_{2}\right) -cz_{2}\right) -\left( f\left( z_{1}\right)
-cz_{1}\right) \right\vert \\
&=&\left\vert \int_{\left[ z_{1},z_{2}\right] }\left( f^{\prime }\left(
z\right) -c\right) dz\right\vert \\
& \leq & \int_{\left[ z_{1},z_{2}\right] }\left\vert f^{\prime
}\left( z\right) -c\right\vert \left\vert dz\right\vert
\\
&<& \left\vert c\right\vert \left\vert z_{2}-z_{1}\right\vert ,
\end{eqnarray*}%
a contradiction.
The contradiction obtained shows that the function $f$ is univalent in $D$,
concluding the proof of the theorem.
\end{proof}
In the particular case $D=U$, from the previous theorem we obtain
immediately the following sufficient criterion for univalence in the unit
disk:
\begin{theorem}
\label{main theorem 2}Let $f:U\rightarrow \mathbb{C}$ be a non-constant
analytic function in the unit disk. If there exists an analytic function $%
g:U\rightarrow \mathbb{C}$ univalent in $U$ such that%
\begin{equation}
\left\vert f^{\prime }\left( z\right) -g^{\prime }\left( z\right)
\right\vert \leq K\left( g,U\right) ,\qquad z\in U,
\label{sufficient condition for univalency 2}
\end{equation}%
then the function $f$ is also univalent in $U$.
\end{theorem}
As a corollary of Theorem \ref{main theorem} we obtain immediately
the following:
\begin{corollary}
\label{corollary of main theorem 2}If $f:D\rightarrow \mathbb{C}$ is
non-constant and analytic in the convex domain $D$ and there exists $c>0$ such that%
\begin{equation}
\left\vert f^{\prime }\left( z\right) -c\right\vert \leq c,\qquad
z\in D, \label{Noshiro-Warschawski-Wolff type condition}
\end{equation}%
then $f$ is univalent in $D$.
\end{corollary}
\begin{proof}
Considering the univalent function $g:D\rightarrow \mathbb{C}$
defined by $g\left( z\right) =cz$, we have $g^{\prime }\left(
z\right) =c$
for $z\in D$ and%
\begin{equation*}
K\left( g,D\right) =\inf_{\substack{ a,b\in D \\ a\neq b}}\left\vert
\frac{g\left( a\right) -g\left( b\right) }{a-b}\right\vert =\inf_{\substack{ %
a,b\in D \\ a\neq b}}\left\vert \frac{ca-cb}{a-b}\right\vert =c,
\end{equation*}%
and therefore the claim follows from Theorem \ref{main theorem}
above.
\end{proof}
\begin{remark}
Let us note that the previous corollary can also be obtained as a direct
consequence of the classical Noshiro-Warschawski-Wolff univalence criterion,
since the hypothesis (\ref{Noshiro-Warschawski-Wolff type condition})
implies the hypothesis%
\begin{equation}
\func{Re}f^{\prime }\left( z\right) >0,\qquad z\in D.
\label{Noshiro-Warschawski-Wolff hypothesis}
\end{equation}%
of this theorem (the fact that the above inequality is a strict
inequality follows from the maximum principle, the function $f$
being assumed to be non-constant in $D$).
Conversely, the Noshiro-Warschawski-Wolff univalence criterion
follows from the previous corollary. To see this, note that in order
to prove the univalence of $f$, it suffices to prove the univalence
of $f$ in $D_{r}=r D$, for an arbitrarily fixed $r\in (0,1)$.
If the condition (\ref{Noshiro-Warschawski-Wolff hypothesis}) holds,
there exists $c>0$ such that $$f^{\prime }\left( D_{r}\right)
\subset \left\{ w\in
\mathbb{C}:\left\vert w-c\right\vert <c\right\} ,$$ or equivalently%
\begin{equation*}
\left\vert f^{\prime }\left( z\right) -c \right\vert <c,\qquad z\in
D_{r}.
\end{equation*}%
Applying Corollary \ref{corollary of main theorem 2} to the
restriction of $f$ to $D_r$, it follows that the function $f$ is
univalent in $D_{r}.$ Since $r\in \left( 0,1\right) $ was
arbitrarily fixed, it follows that $f$ is univalent in $U$,
concluding the proof of the claim.
\end{remark}
The remark above shows that Corollary \ref{corollary of main theorem
2} and the Noshiro-Warschawski-Wolff univalence criterion are
equivalent, and therefore Theorem \ref{main theorem} is a
generalization of it. The Noshiro-Warschawski-Wolff univalence
criterion can be viewed as a particular case of the main Theorem
\ref{main theorem}, corresponding to the choice of a linear function
$g$.
\begin{remark}
\label{Nbds of univalent functions}Fixing an arbitrary univalent function $%
g:U\rightarrow \mathbb{C}$ for which $K\left( g,U\right) \neq 0$ (see Remark %
\ref{K might be 0} above), Theorem \ref{main theorem 2} shows that a whole
neighborhood $V\left( g\right) =\left\{ f\in \mathcal{A}:\left\vert
\left\vert f^{\prime }-g^{\prime }\right\vert \right\vert \leq K\left(
g,U\right) \right\} $ of $g$ consists entirely of univalent functions in $U$
($\left\vert \left\vert \cdot \right\vert \right\vert $ denotes here the
supremum norm in the space $\mathcal{A}_{0}=\left\{ f\in \mathcal{A}:f\left(
0\right) =0\right\} $ of normalized analytic functions). Loosely stated,
Theorem \ref{main theorem 2} shows that a univalent function has a
neighborhood consisting entirely of univalent functions.
\end{remark}
The hypotheses of Theorem \ref{main theorem} and Theorem \ref{main theorem 2}
are sharp, in the sense that we cannot replace the right side of the
inequalities (\ref{sufficient condition for univalency}), respectively (\ref%
{sufficient condition for univalency 2}), by larger constants, as can be
seen from the following example.
\begin{example}
\label{Exemplul 1}Consider the function $f:U\rightarrow \mathbb{C}$ defined
by $f\left( z\right) =z+az^{2}$, $z\in U$, where $a\in \mathbb{C}$ is a
parameter.
Using Theorem \ref{main theorem 2} above with $g\left( z\right) \equiv z$,
for which $K\left( g,U\right) =1$, we obtain that the function $f$ is
univalent in $U$ if
\begin{equation*}
\left\vert 2az\right\vert \le 1,\qquad z\in U,
\end{equation*}%
that is if $\left\vert 2a\right\vert \leq 1$.
This result is sharp, since the function $f$ is univalent if and only if $\left\vert
a\right\vert \leq \frac{1}{2}$, as can be checked by direct computation.
\end{example}
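The sharpness claim in Example \ref{Exemplul 1} can also be checked numerically: $f(z_{1})=f(z_{2})$ with $z_{1}\neq z_{2}$ forces $z_{1}+z_{2}=-1/a$, and for $\left\vert a\right\vert >1/2$ the midpoint $-1/(2a)$ lies in $U$, so a collision occurs inside the disk. A minimal sketch (the particular values $a=0.6$ and the offset $0.1$ are illustrative choices):

```python
# f(z) = z + a z^2 satisfies f(z1) = f(z2) iff (z1 - z2)(1 + a(z1 + z2)) = 0,
# i.e. z1 + z2 = -1/a for distinct z1, z2.  For |a| > 1/2 the midpoint
# -1/(2a) lies in the unit disk, so such a collision occurs inside U.
a = 0.6                      # |a| > 1/2: univalence fails
midpoint = -1 / (2 * a)
t = 0.1                      # illustrative offset keeping both points in U
z1, z2 = midpoint + t, midpoint - t
f = lambda z: z + a * z * z
assert abs(z1) < 1 and abs(z2) < 1
collision = abs(f(z1) - f(z2))
print(collision)             # essentially zero: f is not injective in U
```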
The univalence of the function $f$ in the previous example can also be
obtained by using the Noshiro-Warschawski-Wolff univalence criterion (for $%
\left\vert a\right\vert \leq 1/2$ we have $\func{Re}f^{\prime }\left(
z\right) >0$ for any $z\in U$). The next example shows that we may still use
Theorem \ref{main theorem 2} also in situations when the
Noshiro-Warschawski-Wolff univalence criterion cannot be applied:
\begin{example}
\label{Exemplul 2}Consider the linear fractional map $g:U\rightarrow \mathbb{C}$
defined by $g\left( z\right) =\frac{z}{1-z}$. The function $g$ is univalent
in $U$ and we have%
\begin{equation*}
K\left( g,U\right) =\inf_{\substack{ a,b\in U \\ a\neq b}}\left\vert \frac{%
g\left( a\right) -g\left( b\right) }{a-b}\right\vert =\inf_{\substack{ %
a,b\in U \\ a\neq b}}\left\vert \frac{\frac{a}{1-a}-\frac{b}{1-b}}{a-b}%
\right\vert =\inf_{\substack{ a,b\in U \\ a\neq b}}\frac{1}{\left\vert
1-a\right\vert \left\vert 1-b\right\vert }=1.
\end{equation*}
The function $f:U\rightarrow \mathbb{C}$ defined by $f\left( z\right) =\frac{%
z^{2}}{1-z}$ is analytic in $U$ and satisfies%
\begin{equation*}
\left\vert f^{\prime }\left( z\right) -g^{\prime }\left( z\right)
\right\vert =1\leq K\left( g,U\right) ,\qquad z\in U,
\end{equation*}%
and therefore by Theorem \ref{main theorem 2} it follows that $f$ is
univalent in the unit disk.
The univalence of $f$ does not follow however by the
Noshiro-Warschawski-Wolff univalence criterion since
$\func{Re}f^{\prime }\left( z\right) $ takes (arbitrarily small)
negative values for $z\in U$ sufficiently close to $1$.
\end{example}
As another application of Theorem \ref{main theorem 2}, in the next result
we show that by slightly perturbing the coefficients of the Taylor series of a
univalent function, the resulting function is also univalent. More
precisely, we have the following:
\begin{theorem}
\label{Application to Taylor series} Let $g:U\rightarrow \mathbb{C}$
be an analytic univalent function with
Taylor series representation%
\begin{equation}
g\left( z\right) =\sum_{n=0}^{\infty }b_{n}z^{n},\qquad z\in U\text{.}
\label{Taylor series for g}
\end{equation}
If the coefficients $a_{0},a_{1},\ldots \in \mathbb{C}$ satisfy the
inequality
\begin{equation}
\sum_{n=1}^{\infty }n\left\vert a_{n}-b_{n}\right\vert <K\left( g,U\right)
\label{hypothesis on coefficients of Taylor series}
\end{equation}%
then the function $f:U\rightarrow \mathbb{C}$ defined by
\begin{equation}
f\left( z\right) =\sum_{n=0}^{\infty }a_{n}z^{n},\qquad z\in U,
\label{Taylor series for f}
\end{equation}%
is analytic and univalent in $U$.
\end{theorem}
\begin{proof}
Since $g$ is univalent in $U$, the radius of convergence of the Taylor
series (\ref{Taylor series for g}) is at least $1$, hence
\begin{equation*}
\limsup_{n\rightarrow \infty }\sqrt[n]{\left\vert b_{n}\right\vert }\leq 1.
\end{equation*}%
The hypothesis (\ref{hypothesis on coefficients of Taylor series}) gives $%
\left\vert a_{n}-b_{n}\right\vert <\frac{K\left( g,U\right) }{n}$ for $n\geq
1$, and since $\limsup_{n\rightarrow \infty }\sqrt[n]{x_{n}+y_{n}}\leq \max
\left\{ \limsup_{n\rightarrow \infty }\sqrt[n]{x_{n}},\limsup_{n\rightarrow
\infty }\sqrt[n]{y_{n}}\right\} $ for non-negative sequences $\left(
x_{n}\right) $ and $\left( y_{n}\right) $, we obtain%
\begin{equation*}
\limsup_{n\rightarrow \infty }\sqrt[n]{\left\vert a_{n}\right\vert }\leq
\limsup_{n\rightarrow \infty }\sqrt[n]{\left\vert b_{n}\right\vert +\frac{%
K\left( g,U\right) }{n}}\leq 1,
\end{equation*}%
and therefore the radius of convergence of the series in (\ref{Taylor series
for f}) is at least $1$, thus the function $f$ is well defined by (\ref%
{Taylor series for f}) and it is analytic in $U$.
Since
\begin{eqnarray*}
\left\vert f^{\prime }\left( z\right) -g^{\prime }\left( z\right)
\right\vert &=&\left\vert \sum_{n=0}^{\infty
}na_{n}z^{n-1}-\sum_{n=0}^{\infty }nb_{n}z^{n-1}\right\vert \\
&\leq &\sum_{n=1}^{\infty }n\left\vert a_{n}-b_{n}\right\vert \left\vert
z\right\vert ^{n-1} \\
&\leq &\sum_{n=1}^{\infty }n\left\vert a_{n}-b_{n}\right\vert \\
&<&K\left( g,U\right) ,
\end{eqnarray*}%
for any $z\in U$, it follows by Theorem \ref{main theorem 2} that $f$ is
univalent in $U$, concluding the proof.
\end{proof}
Using a comparison with the generalized harmonic series $\sum_{n=1}^{\infty
}1/n^{p}$, from the above theorem we obtain the following:
\begin{corollary}
Let $g:U\rightarrow \mathbb{C}$ be an analytic univalent function with
Taylor series representation%
\begin{equation}
g\left( z\right) =\sum_{n=0}^{\infty }b_{n}z^{n},\qquad z\in U\text{.}
\end{equation}
If the coefficients $a_{0},a_{1},\ldots \in \mathbb{C}$ satisfy the
inequality
\begin{equation}
\left\vert a_{n}-b_{n}\right\vert <\frac{K\left( g,U\right) }{\zeta \left(
p\right) n^{p+1}},\qquad n=1,2,\ldots ,
\end{equation}%
for some $p>1$ ($\zeta $ denotes the Riemann zeta function), then the
function $f:U\rightarrow \mathbb{C}$ defined by
\begin{equation}
f\left( z\right) =\sum_{n=0}^{\infty }a_{n}z^{n},\qquad z\in U,
\end{equation}%
is analytic and univalent in $U$.
\end{corollary}
\begin{example}
Considering the function $g\left( z\right) =\frac{z}{1-z}=\sum_{n=1}^{\infty
}z^{n}$ defined in Example \ref{Exemplul 2}, which is analytic and univalent
in $U$ and has $K\left( g,U\right) =1$, from the previous theorem it follows
that the function $f:U\rightarrow \mathbb{C}$ defined by $f\left( z\right)
=\sum_{n=0}^{\infty }a_{n}z^{n}$ is analytic and univalent in $U$ if the
coefficients $a_{n}$ satisfy the inequality%
\begin{equation*}
\sum_{n=1}^{\infty }n\left\vert a_{n}-1\right\vert <1.
\end{equation*}
Using for example the fact that $\zeta \left( 2\right) =\frac{\pi ^{2}}{6}$,
from the previous corollary (with $p=2$) it follows that the function $f$
is also analytic and univalent in $U$ if the coefficients $a_{n}$ satisfy
the inequality%
\begin{equation*}
\left\vert a_{n}-1\right\vert <\frac{6}{\pi ^{2}n^{3}}\approx \frac{0.608}{%
n^{3}},\qquad n=1,2,\ldots ,
\end{equation*}%
since then $\sum_{n=1}^{\infty }n\left\vert a_{n}-1\right\vert <\frac{6}{\pi
^{2}}\zeta \left( 2\right) =1$.
\end{example}
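Hypothesis (\ref{hypothesis on coefficients of Taylor series}) can be checked numerically for perturbations of the form $a_{n}=1+c/n^{3}$ of $g\left( z\right) =z/(1-z)$: since $\sum_{n\geq 1}n\cdot c/n^{3}=c\,\pi ^{2}/6$, the hypothesis holds (with $K\left( g,U\right) =1$ as computed in Example \ref{Exemplul 2}) precisely when $c<6/\pi ^{2}\approx 0.608$. A quick sketch:

```python
import math

# Partial sums of sum_{n>=1} n * |a_n - b_n| with a_n = 1 + c/n^3, b_n = 1,
# i.e. c * sum_{n>=1} 1/n^2, which converges to c * pi^2 / 6.
def perturbation_sum(c, terms=200000):
    return c * sum(1.0 / (n * n) for n in range(1, terms + 1))

c_ok = 6 / math.pi**2 * 0.99   # slightly below the critical constant
c_bad = 1.645                  # zeta(2) in the numerator: hypothesis fails
print(perturbation_sum(c_ok))  # stays below 1 = K(g, U)
print(perturbation_sum(c_bad)) # exceeds 1 already at the first term
```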
% Source: ``Neighborhoods of univalent functions'', arXiv:0910.5456 (math.CV).
\title{Exponential convexifying of polynomials}
% arXiv:1812.04874
\begin{abstract}
Let $X\subset\mathbb{R}^n$ be a convex closed and semialgebraic set and let $f$ be a polynomial positive on $X$. We prove that there exists an exponent $N\geq 1$, such that for any $\xi\in\mathbb{R}^n$ the function $\varphi_N(x)=e^{N|x-\xi|^2}f(x)$ is strongly convex on $X$. When $X$ is unbounded we have to assume also that the leading form of $f$ is positive in $\mathbb{R}^n\setminus\{0\}$. We obtain strong convexity of $\varPhi_N(x)=e^{e^{N|x|^2}}f(x)$ on possibly unbounded $X$, provided $N$ is sufficiently large, assuming only that $f$ is positive on $X$. We apply these results for searching critical points of polynomials on convex closed semialgebraic sets.
\end{abstract}
\section{Introduction}
In \cite{KS} we considered several questions concerning convexification of a polynomial $f$
which is positive on a closed convex set $X\subset \mathbb R^n$.
One of the main results in \cite{KS} is the following \cite[Theorem 5.1]{KS}: \emph{if $X$ is a compact set then there exists a positive integer $N$ such that the function
\begin{equation}\label{conv1}
\phi_N(x)=(1+|x|^2)^Nf(x)
\end{equation}
is strongly convex on $X$}. Moreover, explicit estimates for the exponent $N$ were given in \cite{KS}. They depend on the diameter of $X$, the size of coefficients of the polynomial $f$ and on the minimum of $f$ on $X$.
In fact a stronger version of \eqref{conv1} was given in \cite{KS}; \emph{there exists an integer $N$, which can be explicitly estimated, such that the polynomials
\begin{equation*}
\phi_{N,\xi}=(1+|x-\xi|^2)^Nf(x),\quad \xi\in X,
\end{equation*}
are strongly convex on $X$}. The fact that $N$ can be chosen independent of $\xi$ was crucial for a construction
of an algorithm which for a given polynomial $f$, positive in the convex compact semialgebraic set $X$, produces a sequence $a_\nu\in X$ starting from an arbitrary point $a_0\in X$, defined by induction: $a_\nu=\operatorname{argmin}_{X}\phi_{N,a_{\nu-1}}$, i.e., $a_{\nu}\in X$ is the unique point of $X$ at which $\phi_{N,a_{\nu-1}}$ has a global minimum on $X$. The sequence $a_\nu$ converges to a lower critical point of $f$ on $X$ (see \cite[Theorem 7.5]{KS}).
In the case of a non-compact closed convex set $X$ the results mentioned above require the additional assumption that the \emph{leading form} $f_d$ of $f$
satisfies
\begin{equation}\label{eqpositivityfd}
f_d(x)>0\quad\hbox{for }x\in\mathbb R^n\setminus\{0\}.
\end{equation}
Under this assumption we have the following: \emph{if a polynomial $f$ is positive on $X$, then for any $R>0$ there exists $N_0$ such that for each $\xi\in X$ with $|\xi|\leq R$ and each
$N>N_0$ the polynomial $\phi_{N,\xi}$ is strongly convex on $X$}.
The assumption \eqref{eqpositivityfd} is necessary for local convexity of $\phi_{N,\xi}$ in a neighborhood of infinity, see \cite[Proposition 6.3]{KS}. However, this assumption is not sufficient to obtain convexity of the polynomials $\phi_{N,\xi}$ for some fixed $N>0$ independent of $\xi\in X$.
For instance,
no such uniform $N$ exists for the polynomial $f(x)=1+x^2$, cf. \cite[Example 4.5]{KS}.
The main goal of this paper is to study convexification of polynomial functions by exponential factors of the form
$e^{N|x-\xi|^2}$ or by double exponential factors of the form $e^{e^{N|x-\xi|^{2}}}$.
Surprisingly, they play distinct roles.
We set
$$
\varphi _{N,\xi}(x):=e^{N|x-\xi|^2}f(x)
$$
and prove the following (see Theorem \ref{uwypuklanie na zwartym} and Corollary \ref{uwypuklanie na zwartymcor2}):
\emph{if a polynomial $f$ is positive on a compact and convex set $X\subset \mathbb R^n$,
then there exists an effectively computable number $N_0$ such that for any $N> N_0$ and $\xi\in \mathbb R^n$ the function $\varphi _{N,\xi}(x)$
is strongly convex on $X$.}
If $X$ is not compact, we obtain the above assertions under the assumption
\eqref{eqpositivityfd}, see Theorem \ref{convexonconvexsetnewvar}.
In general the assumption \eqref{eqpositivityfd}
cannot be omitted, as we show in Example \ref{exacounter}.
Surprisingly, convexification in the noncompact case without assumption \eqref{eqpositivityfd} is possible using double exponential factors. Namely, in Theorems \ref{tedoubleexp} and \ref{convexonconvexsetnewxdoublevarsemi} we prove that:
\emph{if $X\subset \mathbb R^n$ is a convex and closed semialgebraic set and $f$ is a polynomial positive on $X$, then for any $R>0$ there exists an effectively computable number $N_0$ such that for any $N> N_0$ and any $\xi\in \mathbb R^n$, $|\xi|\leq R$, the function
$$
\varPhi_{N,\xi}(x):=e^{e^{N|x-\xi|^{2}}}f(x)
$$
is strongly convex on $X$.}
In the case when $X$ is a convex and closed set, but not necessarily semialgebraic, the result still holds (Theorems \ref{tedoubleexp2} and \ref{convexonconvexsetnewxdouble}) under the additional assumption
\begin{equation}\label{eqestmonX}
\inf\{f(x): x\in X\}\ge m>0.
\end{equation}
In the above theorems one can replace $\varPhi_{N,\xi}$ by the function
$$
\Phi_{N,\xi}(x):=e^{Ne^{|x-\xi|^{2}}}f(x).
$$
It turns out that convexification of polynomials using the exponential function is somehow more natural and powerful than convexification by factors of the form
$(1+|x-\xi|^2)^N$ done in \cite{KS}. In particular, it applies also to the noncompact case, and the explicit formulae for the exponent $N$ are nicer.
We believe that the results mentioned above could also be of interest for the study of o-minimal structures expanded by the exponential function. They fit particularly into the structure
$\mathbb R_{\exp}$, i.e., the expansion of the semialgebraic sets by the exponential function. The remarkable fact that $\mathbb R_{\exp}$ is indeed an o-minimal structure
was established by A. Wilkie \cite{W}.
It would be interesting to explain
the different power of the exponential and the double exponential factors for convexification.
The main difficulty in determining explicitly a number $N$ such that the function $\varphi_N$ is strongly convex on a convex compact set $X$ comes from an effective estimation of the number $m$ in \eqref{eqestmonX}
and the number $R=\max\{|x|:x\in X\}$. Using results of G.~Jeronimo, D.~Perrucci and E.~Tsigaridas \cite{JPT} we show in Theorems \ref{uwypuklaniecalkowitychna zwartym} and \ref{uwypuklanie na zwartym} that this is feasible when $X$ is a compact semialgebraic set described by polynomial inequalities with integer coefficients and $f$ is also a polynomial with integer coefficients (see Theorem \ref{uwypuklanie na zwartym calk}).
As an application to optimization we propose an algorithm which produces, starting from an arbitrary point $a_0\in X$,
a sequence $a_\nu \in X$ which tends to a lower critical point of a polynomial $f$ restricted to
$X$ or to infinity. We assume that
$X\subset \mathbb R^n$ is a closed convex semialgebraic set and $f$ a polynomial which is bounded from below on $X$. Then by adding to $f$ an appropriate constant we may assume that $f\ge m> 0$ on $X$. If $X$ is unbounded we assume also condition \eqref{eqpositivityfd}.
Hence by the above mentioned theorems we obtain strong convexity of $\varphi_{N,\xi}(x)=e^{N|x-\xi|^2}f(x)$ for $\xi\in X$ and $N$ sufficiently large.
Let us choose any $a_0\in X$ and set by induction:
$a_\nu=\operatorname{argmin}_{X}\varphi_{N,a_{\nu-1}}$. Then we prove that the sequence $a_\nu $ tends to a lower critical point of a polynomial $f$ restricted to
$X$ or to infinity. Note that computing $a_\nu$, that is minimizing $\varphi_{N,a_{\nu-1}}$ on $X$, is usually easier since the function is convex.
This type of algorithm, based on convexification, is called sometimes {\it proximal}, see for instance \cite{Bolte}.
Observe that computing the critical point of $\varphi_{N,a_{\nu-1}}$ involves only algebraic equations.
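A toy one-dimensional sketch of this proximal scheme follows; the polynomial $f(t)=t^2-t+1$, the set $X=[0,2]$, the exponent $N=26$ and the brute-force grid minimization are all illustrative choices, not part of the paper's algorithm:

```python
import math

# Toy instance (all choices illustrative):
# f(t) = t^2 - t + 1 is positive on X = [0, 2] with min f = 3/4 at t = 1/2,
# the unique critical point of f in X.
f = lambda t: t * t - t + 1.0

# Admissible constants: m = 3/4, D = D_1(f, 2) = 5, so
# N(m, D) = D/(2m) + D^2/(2m^2) ~ 25.56; take N = 26.
N = 26.0

grid = [i / 2000 for i in range(4001)]      # grid on X = [0, 2]

def prox_step(a):
    # a_nu = argmin over X of phi_{N,a}(t) = exp(N (t - a)^2) f(t);
    # minimizing log phi has the same argmin and avoids overflow.
    return min(grid, key=lambda t: N * (t - a) ** 2 + math.log(f(t)))

a = 2.0                                     # arbitrary starting point a_0
for _ in range(300):
    a_next = prox_step(a)
    if a_next == a:                         # argmin stabilized on the grid
        break
    a = a_next
print(a)   # close to the critical point t = 1/2
```

With $N$ above the threshold of Theorem \ref{uwypuklanie na zwartym} each subproblem is strongly convex, so the argmin is well defined and the iterates contract towards the critical point $t=1/2$.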
The paper is organized as follows. In Section \ref{one variable} we prove that the function $\varphi _N$ in one variable is strongly convex on a closed interval $I\subset \mathbb R$, provided $f(x)>m$ for $x\in I$ and some $m>0$ and $N\in\mathbb{R}$ is sufficiently large. We also estimate from above the number $N$. In Section \ref{Sect3} we consider this problem in the several variables case on a compact convex set $X$ (see Theorem \ref{uwypuklanie na zwartym}). In Sections \ref{Sect44} and \ref{secdoubleexp} we consider the case when the set $X$ is not compact.
\section{Convexifying polynomials}\label{one variable}
\subsection{Convexifying $C^2$-functions in one variable}
In this section we prove that if $f$ is a function of class $C^2$ positive on a closed interval $I\subset \mathbb R$ (not necessarily compact), then for $N$ large enough the function $t\mapsto e^{Nt^2}f(t)$ is strongly convex on $I$.
Let $f: \mathbb R\rightarrow \mathbb R$ be a $C^2$ function. For any $N\in \mathbb R $ and $p,q\in\mathbb R$ we define the function:
$$
\varphi _{N,p,q}(t):=e^{N(t^2+pt+q)}f(t), \ \ t\in \mathbb R.
$$
For positive numbers $m,D$ we put
$$
\mathcal{N}(m,D):=\frac{D}{2m}+\frac{D^2}{2m^2}.
$$
\begin{lemat}\label{lemat dla funkcji2}
Let $f$ be a function of class $C^2$, which is positive on a closed interval $I\subset \mathbb R$. Let
$m,D\in\mathbb{R}$ be such that
\begin{equation}\label{minimum f}
0<m\leq \inf\{f(t): t \in I\},
\end{equation}
and
\begin{equation}\label{ograniczenie pochodnych}
|f'(t)|\leq D, \quad |f''(t)|\leq D \quad \hbox{for} \quad t \in I.
\end{equation}
Assume that
$p^2\leq 4q$ and
\begin{equation}\label{nierownosc dla N2}
N>\mathcal{N}(m,D).
\end{equation}
Then
$$\varphi _{N,p,q}'' (t)\geq -\frac{D^2}{m}-D+2Nm>0$$ for $t \in I$, and thus $\varphi _{N,p,q}$ is strongly convex on $I$.
\end{lemat}
\begin{proof}
By definition of $\varphi_{N,p,q}$ we have
\begin{equation*}\label{eqwaznewprzykladzie}
\varphi_{N,p,q}''(t)=e^{N(t^2+pt+q)}[N^2(2t+p)^2f(t)+2N(2t+p)f'(t)+2Nf(t)+f''(t)].
\end{equation*}
Hence, from the assumptions we obtain
\begin{equation}\label{oszacowanie}
\varphi_{N,p,q}''(t)\geq e^{N(t^2+pt+q)}[N^2(2t+p)^2m-2N|2t+p|D+2Nm-D]
\end{equation}
for $t\in I$. Note that the function
$$
\mathbb{R} \ni \lambda \mapsto N^2m\lambda^2-2N D\lambda+2Nm-D$$
attains its minimum, equal to $-\frac{D^2}{m}-D+2Nm$, at the point $\lambda =\frac{D}{Nm}$. Thus for
$
N>\mathcal{N}(m,D)
$
we have $$N^2m\lambda^2-2ND|\lambda|+2Nm-D>0$$ for any $\lambda\in \mathbb R$.
Therefore
$$
\varphi_{N,p,q}''(t)\geq -\frac{D^2}{m}-D+2Nm>0\quad \hbox{for }t\in I,
$$
which implies that $\varphi_{N,p,q}$ is strongly convex on $I$.
\end{proof}
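The bound in Lemma \ref{lemat dla funkcji2} can be checked numerically. In the sketch below the instance is an arbitrary illustrative choice: $f(t)=t^4-2t^2+2$ on $I=[-1,1]$, for which $m=1$ and $D=8$ are admissible, so $\mathcal N(m,D)=36$:

```python
import math

# Illustrative instance: f(t) = (t^2 - 1)^2 + 1 >= 1 on I = [-1, 1],
# with |f'(t)| <= 8 and |f''(t)| <= 8 on I, so m = 1, D = 8 are admissible.
f   = lambda t: t**4 - 2 * t**2 + 2
fp  = lambda t: 4 * t**3 - 4 * t
fpp = lambda t: 12 * t**2 - 4

m, D = 1.0, 8.0
N = D / (2 * m) + D**2 / (2 * m**2) + 1          # N(m, D) + 1 = 37

def phi_pp(t, p, q):
    # second derivative of phi_{N,p,q}(t) = exp(N(t^2 + p t + q)) f(t)
    bracket = (N**2 * (2*t + p)**2 * f(t) + 2*N * (2*t + p) * fp(t)
               + 2*N * f(t) + fpp(t))
    return math.exp(N * (t*t + p*t + q)) * bracket

pairs = [(0.0, 0.0), (1.0, 1.0), (-2.0, 3.0)]     # each satisfies p^2 <= 4q
grid = [-1 + 2 * i / 400 for i in range(401)]
worst = min(phi_pp(t, p, q) for t in grid for p, q in pairs)
print(worst)   # positive, as the lemma guarantees
```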
From Lemma \ref{lemat dla funkcji2} we immediately obtain
\begin{cor}\label{cor lemat dla funkcji2}
Let $f$ be a function of class $C^2$ on a closed interval $I\subset \mathbb R$. Let
$m,D\in\mathbb{R}$ be such that
\begin{equation}
m< \inf\{f(t): t \in I\},
\end{equation}
and
\begin{equation}\label{ograniczenie pochodnychdwa}
|f'(t)|\leq D, \quad |f''(t)|\leq D, \quad \hbox{for} \quad t \in I.
\end{equation}
Then for any $\xi\in\mathbb R$ and any $N\geq 1$ the function
$$
\psi_{N,\xi}(t)=e^{N(t-\xi)^2}[f(t)-m+D],\quad t\in I
$$
is strongly convex on $I$. In particular the function
$$
\varphi(t)=e^{(t-\xi)^2}[f(t)-m+D],\quad t\in I,
$$
is strongly convex on $I$.
\end{cor}
\subsection{Convexifying polynomials in several variables}\label{Sect3}
We will show that the function $\varphi _N$ in $n$ variables is strongly convex on a compact convex set $X\subset\mathbb{R}^n$, provided $f$ is a polynomial positive on $X$ and $N$ is sufficiently large.
Let
$f\in \mathbb R [x]$ be a real polynomial in $x=(x_1, \ldots , x_n)$ of the form
\begin{equation}\label{funkcja f}
f=\sum_{j=0}^d\sum_{|\nu |=j}a_{\nu }x^\nu ,
\end{equation}
where $a_\nu \in \mathbb R,$ $x^\nu =x^{\nu_{1}}_1\cdots x^{\nu_{n}}_n$ and $|\nu |=\nu _1 + \cdots + \nu _n$ for $\nu =(\nu _1, \cdots , \nu _n)\in \mathbb N^n$ (we assume that $0\in \mathbb N$).
For $R>0$ we denote
$$
D_n (f,R):=\max \bigg\{1,\ \sum_{j=1}^d\sum_{|\nu |=j}j|a_ \nu |R^{j-1},\ \sum_{j=1}^d\sum_{|\nu |=j}j(j-1)|a_ \nu |R^{j-2}\bigg\}.
$$
\begin{tw}\label{uwypuklanie na zwartym}
Let $f\in \mathbb R [x]$ be a polynomial which is positive on a compact and convex set $X\subset \mathbb R^n$. Let $R=\max\{|x|: x \in X\}$ and
$$0<m\leq \min\{f(x): x\in X\}.$$
Then for any $\xi\in \mathbb R^n$, any $D \geq D_n (f,R)$ and any real $N> \mathcal{N}(m,D)$ the function $\varphi _{N,\xi}(x):=e^{N|x-\xi|^{2}}f(x)$ is strongly convex on $X$.
\end{tw}
\begin{proof}
Let
\begin{equation}\label{eqdefA}
A:=\{(\alpha,\beta) \in \mathbb R^n \times \mathbb R^n: \langle \alpha, \beta \rangle=0,\; |\beta|=1\},
\end{equation}
where $\langle \cdot,\cdot \rangle$ denotes the standard scalar product on $\mathbb R^n.$ Set
$$\gamma _{\alpha, \beta}(t):=\beta t + \alpha, \ \ t\in \mathbb R.$$
Clearly, the family of all curves $\gamma _{\alpha, \beta}$, where $(\alpha,\beta)\in A$ describes all affine lines in $\mathbb R^n.$
Denote by $B\subset A$ the set of all $(\alpha,\beta)\in A$ such that the line parametrized by $\gamma _{\alpha, \beta}$ intersects the set $X$. Then $B$ is a compact set and $$B\subset \{(\alpha ,\beta )\in A: |\alpha |\leq R\}.$$
We will prove that for any $(\alpha ,\beta )\in B$ and $N > \mathcal{N}(m,D)$ the function $\varphi_{N,\xi} \circ \gamma _{ \alpha , \beta }$ is strongly convex on
$$I_{\alpha ,\beta }:=\{t\in \mathbb R: \gamma_{\alpha ,\beta }(t) \in X\}.$$
Since $X$ is a compact and convex set, $I_{\alpha ,\beta }$ is a closed interval or a single point.
It is obvious that for $(\alpha,\beta) \in B$ the set $\{t\in \mathbb R: |\gamma _{\alpha ,\beta }(t)|\leq R\}$ is an interval which contains the point $0$, or it is equal to $\{0\}$. Denote this interval by $[-R_{\alpha ,\beta },R_{\alpha ,\beta }]$ (with the convention $[0,0]=\{0\}$). Then
$$I_{\alpha ,\beta }\subset [-R_{\alpha ,\beta },R_{\alpha ,\beta }]\subset [-R,R].$$
Let $f$ be of the form (\ref{funkcja f}). Then for $t\in I_{\alpha, \beta}$ we have
$|\gamma _{\alpha ,\beta } (t)| \leq R $.
Let us fix $(\alpha, \beta) \in B$.
Then
$$
|(f\circ \gamma _{\alpha ,\beta })'(t)|\leq \sum_{j=1}^d\sum_{|\nu |=j}j|a_ \nu |R^{j-1}
$$
and
$$
|(f\circ \gamma _{\alpha ,\beta })''(t)|\leq \sum_{j=1}^d\sum_{|\nu |=j}j(j-1)|a_ \nu |R^{j-2}.
$$
Consequently,
$$
|(f\circ \gamma _{\alpha ,\beta })'(t)|\leq D, \ \ \ |(f\circ \gamma _{\alpha ,\beta })''(t)|\leq D \ \ \ \hbox{for} \ \ \ t\in I_{\alpha, \beta}.$$
Take any $\xi\in\mathbb R^n$; then
\begin{multline*}
|\gamma _{\alpha ,\beta }(t)-\xi|^2=\langle \beta t+\alpha-\xi ,\beta t+\alpha-\xi \rangle \\
= t^2-2\langle \beta,\xi\rangle t+|\alpha| ^2-2\langle \alpha,\xi\rangle+|\xi|^2,
\end{multline*}%
so for $p=-2\langle \beta,\xi\rangle$ and $q=|\alpha| ^2-2\langle \alpha,\xi\rangle+|\xi|^2$ we have $p^2\leq 4q$ (indeed $q=|\alpha-\xi|^2$, and since $\langle \alpha,\beta\rangle =0$ the Cauchy--Schwarz inequality gives $\langle \beta,\xi\rangle^2=\langle \beta,\xi-\alpha\rangle^2\leq |\alpha-\xi|^2$) and
$$
\varphi _{N,\xi} \circ \gamma _{\alpha, \beta}(t)=e^{N(t^2+pt+q)}f(\gamma _{\alpha, \beta}(t)).
$$
So, by Lemma \ref{lemat dla funkcji2} we get that $\left(\varphi _{N,\xi} \circ \gamma _{\alpha, \beta}\right)''(t)\geq -\frac{D^2}{m}-D+2Nm>0$ for $t\in I_{\alpha,\beta}$, and hence $\varphi_{N,\xi}$ is strongly convex on $X$,
provided $N>\mathcal{N}(m,D)$.
\end{proof}
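Theorem \ref{uwypuklanie na zwartym} can be tested numerically. In the sketch below everything is an illustrative choice: $f(x,y)=x^2+y^2+1$ on the closed unit disk, for which $R=1$, $m=1$, $D_2(f,R)=4$, hence $\mathcal N(m,D)=10$; we take $N=11$ and an arbitrary shift $\xi$:

```python
import math

# Illustrative instance: f(x, y) = x^2 + y^2 + 1 on the closed unit disk,
# so R = 1, min f = 1 = m, D_2(f, R) = 4 and N(m, D) = 10; take N = 11.
N = 11.0
xi = (0.5, -0.3)                     # arbitrary shift xi in R^2

def hessian(x, y):
    # Hessian of phi(x) = exp(N |x - xi|^2) f(x), from the product rule:
    # d_i d_j phi = E [4 N^2 u_i u_j f + 2N u_i d_j f + 2N u_j d_i f
    #               + 2N delta_ij f + d_i d_j f],   u = x - xi, E = e^{N|u|^2}.
    f = x * x + y * y + 1.0
    df = (2 * x, 2 * y)
    u = (x - xi[0], y - xi[1])
    E = math.exp(N * (u[0] ** 2 + u[1] ** 2))
    H = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            H[i][j] = E * (4*N*N*u[i]*u[j]*f + 2*N*u[i]*df[j] + 2*N*u[j]*df[i]
                           + (2*N*f if i == j else 0.0)
                           + (2.0 if i == j else 0.0))   # Hess f = 2 I
    return H

def min_eig(H):
    # smallest eigenvalue of a symmetric 2x2 matrix
    tr = H[0][0] + H[1][1]
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return (tr - math.sqrt(max(0.0, tr * tr - 4 * det))) / 2

pts = [(r * math.cos(t), r * math.sin(t))
       for r in (0.0, 0.3, 0.6, 0.9, 1.0)
       for t in [k * math.pi / 8 for k in range(16)]]
worst = min(min_eig(hessian(x, y)) for x, y in pts)
print(worst)   # positive: the Hessian is positive definite on the disk
```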
From Theorem \ref{uwypuklanie na zwartym} we obtain the following corollary.
\begin{cor}\label{uwypuklanie na zwartymcor2}
Let $f\in \mathbb R [x]$ and let $X\subset \mathbb R^n$ be a compact and convex set. Let $R=\max\{|x|: x \in X\}$ and let $m\in \mathbb{R}$ be a constant such that
$$
m\leq \min\{f(x): x\in X\}.
$$
Then for any $D> D_n (f,R)$ and any $\xi\in\mathbb R^n$, the function
$$
\varphi_\xi(x):=e^{|x-\xi|^2}[f(x)-m+D], \quad x\in \mathbb{R}^n
$$
is strongly convex on $X$.
\end{cor}
By a similar argument as in the proof of Theorem \ref{uwypuklanie na zwartym}, we obtain the following fact.
\begin{rem}\label{uwypuklanie na zwartymrem}
Let $f:\mathbb{R}^n\to \mathbb{R}$ be a function of class $C^2$ and let $X\subset \mathbb R^n$ be a compact and convex set. Assume that $m,D\in \mathbb{R}$ are numbers satisfying
$$
m < \min\{f(x): x\in X\}
$$
and the first and second directional derivatives of $f$ in directions of vectors of length $1$, are bounded by $D$ on $X$. Then the function
$$
\varphi_\xi(x)=e^{|x-\xi|^2}[f(x)-m+D], \quad x\in \mathbb{R}^n, \quad \xi\in \mathbb{R}^n
$$
is strongly convex on $X$.
\end{rem}
\subsection{Convexifying polynomials with integer coefficients}\label{SectINTEGER}
For actual applications of Theorem \ref{uwypuklanie na zwartym} it is important to compute the number $\mathcal{N}(m,D)$
for a given convex semialgebraic set $X $
and a polynomial $f$ which is positive on $X$. Hence
the main difficulty is to compute (or rather estimate) $m=\min\{f(x):x\in X\}$ and $R=\max\{|x|:x\in X\}$. This is actually possible if we suppose that
$f$ has integer coefficients and $X$ is described by equations and inequalities with integer coefficients.
More precisely, let $X\subset \mathbb R^n$, $n\ge 2$, be a compact semialgebraic set of the form
\begin{multline}\label{formXc}
X=\{x\in\mathbb{R}^n:g_1(x)=0,\ldots,g_l(x)=0,g_{l+1}(x)\geq 0,
\ldots,\\
g_k(x)\geq 0\},
\end{multline}
where $g_1,\ldots,g_k\in \mathbb{Z}[x]$.
Under the above notations
G.~Jeronimo, D.~Perrucci and E.~Tsigaridas proved in \cite{JPT} the following.
\begin{tw}\label{uwypuklaniecalkowitychna zwartym}
Let $f, g_1,\ldots,g_k\in \mathbb{Z}[x]$ be polynomials with degrees bounded by an even integer $d$ and coefficients of absolute values at most $H$, and let $\tilde H=\max\{H,2n+2k\}$. If $f(x)>0$ for $x\in X$ and $X$ of the form \eqref{formXc} is compact, then
$$
\min\{f(x):x\in X\}\geq \left(2^{4-\frac{n}{2}}\tilde H d^n\right)^{-n2^nd^n}.
$$
\end{tw}
For a positive real number $H$ and positive integers $d,n,k$ we put
$$
\frak{b}(n,d,H,k)=\left(2^{4-\frac{n}{2}}\max\{H,2n+2k\}d^n\right)^{-n2^nd^n}.
$$
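For a feeling of the orders of magnitude involved, $\frak{b}$ can be evaluated directly; already for small parameters the bound is astronomically small (the values $n=2$, $d=2$, $H=10$, $k=1$ below are illustrative):

```python
def b(n, d, H, k):
    # b(n, d, H, k) = (2^(4 - n/2) * max(H, 2n + 2k) * d^n)^(-n * 2^n * d^n)
    base = 2.0 ** (4 - n / 2) * max(H, 2 * n + 2 * k) * d ** n
    return base ** (-n * 2 ** n * d ** n)

val = b(2, 2, 10, 1)       # base = 8 * 10 * 4 = 320, exponent = -32
print(val)                 # positive but astronomically small (~1e-80)
```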
From Theorems \ref{uwypuklaniecalkowitychna zwartym} and \ref{uwypuklanie na zwartym} we immediately obtain
\begin{tw}\label{uwypuklanie na zwartym calk}
Let $X\subset \mathbb R^n$ be a compact and convex semialgebraic set of the form \eqref{formXc} and let $f, g_1,\ldots,g_k\in \mathbb{Z}[x]$ be polynomials with degrees bounded by an even integer $d$ and coefficients of absolute values at most $H$.
Set
$$
R=\sqrt{\big[\frak{b}(n+1,\max\{d,4\},H,k+2)\big]^{-1}-1},\quad m=\frak{b}(n,d,H,k).
$$
Then
\begin{equation}\label{boundR}
\max\{|x|:x\in X\}\leq R.
\end{equation}
Moreover, if $f(x)>0$ for $x\in X$, then for any $D \geq D_n \left(f,R\right)$, $N> \mathcal{N}\left(m,D\right)$ and for any $\xi\in\mathbb R^n$
the function $$\varphi _{N,\xi}(x):=e^{N|x-\xi|^{2}}f(x)$$ is strongly convex on $X$.
\end{tw}
\begin{proof}
By Theorem \ref{uwypuklaniecalkowitychna zwartym} we have $0<m\leq \min\{f(x):x\in X\}$.
Let
$$
Y=\{(x,y)\in\mathbb R^n\times\mathbb R:x\in X,\,(1+|x|^2)y^2-1=0,\,y\ge 0\},
$$
and let $h(x,y)=y^2$. Then $Y\subset \mathbb R^{n+1}$ is a compact semialgebraic set defined by $k+2$ polynomial equations and inequalities of degrees bounded by $\max\{d,4\}$. Moreover, the absolute values of coefficients of those polynomials and $h$ are bounded by $H$. Then, by Theorem \ref{uwypuklaniecalkowitychna zwartym},
$$
\min\{h(x,y):(x,y)\in Y\}\geq \frak{b}(n+1,\max\{d,4\},H,k+2)
$$
and consequently we obtain \eqref{boundR}. Summing up, Theorem \ref{uwypuklanie na zwartym} gives the assertion.
\end{proof}
\section{Convexifying polynomials on non-compact sets}\label{Sect44}
In this section we will show that the function $\varphi _N(x)=e^{N|x|^2}f(x)$ in $n$ variables is strongly convex on a closed and convex set $X\subset\mathbb{R}^n$ (not necessarily compact), provided the polynomial $f$ takes values larger than a certain number $m>0$, the leading form of $f$ takes only positive values on $\mathbb R^n\setminus\{0\}$, and $N$ is sufficiently large.
\subsection{Convexifying polynomials in one variable}
For a polynomial $f\in\mathbb R[t]$ of the form $f(t)=a_0t^d+a_1t^{d-1}+\cdots+a_d$, $a_0,\ldots,a_d\in\mathbb R$, $a_0\ne 0$, we put
$$
K(f):=2\max_{1\leq i\leq d}\left|\frac{a_i}{a_0}\right|^{1/i}.
$$
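The number $K(f)$ is a Fujiwara-type root bound: every (real or complex) root of $f$ has absolute value at most $K(f)$. A quick numerical sanity check (illustration only; the sample polynomial is ours, not from the text):

```python
import numpy as np

# Sample polynomial f(t) = t^3 - 6t^2 + 11t - 6, whose roots are 1, 2 and 3.
a = [1.0, -6.0, 11.0, -6.0]          # coefficients a_0, ..., a_d
d = len(a) - 1

# K(f) = 2 * max_{1 <= i <= d} |a_i / a_0|^{1/i}
K = 2 * max(abs(a[i] / a[0]) ** (1.0 / i) for i in range(1, d + 1))

# Every root of f should have absolute value at most K(f).
largest_root = max(abs(r) for r in np.roots(a))
```

Here $K(f)=12$, comfortably above the largest root $3$.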
\begin{lemat}\label{K(g)<K(f)var}
Let $f\in \mathbb{R}[t]$ be a polynomial of degree $d>0$ which is positive on a closed interval $I\subset \mathbb R$ (not necessarily compact). Let $m\in\mathbb R$ be a positive number such that
$$
\inf\{f(t):t\in I\}\geq m.
$$
Let $g_N\in\mathbb R[t]$ be a polynomial of the form
$$
g_N=2Nf^2-(f')^2+ff'',
$$
and let $\Theta_N\in\mathbb{R}[t,\xi]$ be a polynomial of the form
\begin{equation}\label{thetaNformvar}
\Theta _N(t,\xi):=4N^2(t-\xi)^2f(t)+4N(t-\xi)f'(t)+2Nf(t)+f''(t)
\end{equation}
for $N\in \mathbb{R}$ and $N \geq 1$. Then
for $N\geq \mathcal{N}(m,D)$, where $D\geq D_1(f,R)$ and $R\geq \max\{K(f),K(g_1)\}$, we have
$$
\Theta_N(t,\xi)>0\quad\hbox{for }(t,\xi)\in I\times \mathbb R.
$$
\end{lemat}
\begin{proof}
Consider the following quadratic function in $x$
$$
4N^2x^2f(t)+4Nxf'(t)+2Nf(t)+f''(t).
$$
Then its discriminant is of the form $\Delta(t)=-16N^2g_N(t)$.
Take $R\geq \max\{K(f),K(g_1)\}$, $D\geq D_1(f,R)$ and $N>\mathcal{N}(m,D)$. Then we have
$$
g_N(t)\geq 2Nf^2(t)-D^2-f(t)D\geq f^2(t)\left(2N-\tfrac{D^2}{m^2}-\tfrac{D}{m}\right)>0
$$
for $t\in I$, $|t|\leq R$. On the other hand $g_N(t)\geq g_1(t)>0$ for $t\in I$, $|t|\geq R$. So $\Delta(t)<0$ for $t\in I$ and we deduce the assertion.
\end{proof}
\begin{tw}\label{thm3unboundedvar}
Let $f \in \mathbb R[t] $ be a polynomial of degree $d>0$ and let $I\subset \mathbb{R}$ be a closed interval (not necessarily compact). Assume that there exists $m\in \mathbb R$ such that
$$
0<m\leq \inf\{f(t): t\in I\}.
$$
Let $R>\max\{K(f),K(2f^2-(f')^2+ff'')\}$ and $D\geq D_1(f,R)$.
Then for any $N\in \mathbb{R}$, $N\geq \mathcal{N}(m,D)$, and any $\xi\in\mathbb R$ the function
$$
\varphi_{N,\xi}(t)=e^{N(t-\xi)^2}f(t)
$$
is strongly convex on $I$.
\end{tw}
\begin{proof}
It suffices to observe that $\varphi''_{N,\xi}(t)=e^{N|t-\xi|^2}\Theta_N(t,\xi)$ and apply Lemma \ref{K(g)<K(f)var}.
\end{proof}
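As a numerical sanity check of Lemma \ref{K(g)<K(f)var} and the theorem above (illustration only; the sample polynomial, exponent and grids are ours), one can verify that $\Theta_N(t,\xi)$ from \eqref{thetaNformvar} is positive for $f(t)=t^2+1$, which is positive on $I=\mathbb R$, already for $N=1$:

```python
import numpy as np

# Sample data: f(t) = t^2 + 1, so f' = 2t, f'' = 2, and N = 1.
N = 1.0
f   = lambda t: t**2 + 1
fp  = lambda t: 2*t
fpp = lambda t: 2.0

def theta(t, xi):
    """Theta_N(t, xi) = 4N^2 (t-xi)^2 f + 4N (t-xi) f' + 2N f + f''."""
    return 4*N**2*(t - xi)**2*f(t) + 4*N*(t - xi)*fp(t) + 2*N*f(t) + fpp(t)

# Positivity on a grid of points t and centers xi.
ts  = np.linspace(-10, 10, 401)
xis = np.linspace(-10, 10, 81)
min_theta = min(theta(t, xi) for t in ts for xi in xis)
```

(For this $f$ one has $g_1(t)=2t^4+2t^2+4>0$ for all $t$, so $\Delta(t)=-16N^2g_N(t)<0$ and $\Theta_1>0$ everywhere, which the grid confirms.)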
\subsection{Convexifying polynomials in several variables}
\begin{tw}\label{convexonconvexsetnewvar}
Let $X\subset \mathbb R^n$ be a convex closed set. Assume that
$f$ is a polynomial of degree $d>0$ which is positive on~$X$,
\begin{equation}\label{assumptioncompactvar}
f_d^{-1}(0) = \{0\}
\end{equation}
and there exists $m>0$ such that
\begin{equation}\label{estfcompact1var}
\inf \{f(x):x\in X\} \ge m.
\end{equation}
Then there exists $N_0\in\mathbb N$ such that for any integer $N\geq N_0$ and any $\xi\in\mathbb R^n$
the function $\varphi_{N,\xi}(x)=e^{N|x-\xi|^2}f(x)$ is strongly convex on $X$.
\end{tw}
\begin{proof}
Take any line of the form $\gamma_{\alpha,\beta}(t)=\beta t +\alpha$, where $\alpha,\beta\in\mathbb R^n$, $|\beta|=1$ and $\langle \alpha,\beta\rangle=0$. Then
$$
(\varphi_{N,\xi}\circ \gamma_{\alpha,\beta})(t)=e^{N|\xi|^2}e^{N(t^2+|\alpha|^2-2\langle \beta,\xi\rangle t-2\langle\alpha,\xi\rangle)}f(\gamma_{\alpha,\beta}(t)).
$$
Then
\begin{equation*}\label{eqbis}
\begin{split}
(\varphi_{N,\xi}\circ \gamma_{\alpha,\beta})''(t)=&e^{N|\xi|^2}e^{N(t^2+|\alpha|^2-2\langle \beta,\xi\rangle t-2\langle\alpha,\xi\rangle)}[4N^2(f\circ\gamma_{\alpha,\beta})(t)y^2\\
&+4N(f\circ\gamma_{\alpha,\beta})'(t)y+ 2N(f\circ\gamma_{\alpha,\beta})(t)\\
&+(f\circ\gamma_{\alpha,\beta})''(t)],
\end{split}
\end{equation*}
where $y=t-\langle\beta,\xi\rangle$. Consider the function in the square bracket as a quadratic function in $y$. Then its discriminant is of the form
$$
\Delta=-16N^2[2N(f\circ\gamma_{\alpha,\beta})^2(t)+(f\circ\gamma_{\alpha,\beta})(t)(f\circ\gamma_{\alpha,\beta})''(t)-((f\circ\gamma_{\alpha,\beta})'(t))^2].
$$
Note that $(f\circ\gamma_{\alpha,\beta})'(t)$ and $(f\circ\gamma_{\alpha,\beta})''(t)$ are the first and the second directional derivatives of $f$ at $\gamma_{\alpha,\beta}(t)$ in the direction $\beta$ and $|\beta|=1$.
Observe that there exists $N_0$ such that for any $N\geq N_0$ we have $\Delta<0$. Indeed, it suffices to prove that for any $x\in X$ and any $\beta \in\mathbb R^n$ with $|\beta|=1$ we have
\begin{equation}\label{eqvar1}
2Nf(x)^2+f(x)\partial^2_\beta f(x)-(\partial_\beta f(x))^2>0.
\end{equation}
If $f_d(x)<0$ for $x\in\mathbb R^n\setminus \{0\}$, then the set $X$ is compact and the inequality follows from the assumption that $f(x)\geq m$ for $x\in X$. Indeed, let $D\geq \max \{|\partial _\beta f(x)|,|\partial^2_\beta f(x)|\}$ for $x\in X$, $|\beta|=1$. Since $X$ is compact, we get
$$
2Nf(x)^2+f(x)\partial^2_\beta f(x)-(\partial_\beta f(x))^2\ge 2Nm^2-mD-D^2>0
$$
for $N>\mathcal{N}(m,D)$. This gives \eqref{eqvar1}.
Consider the case when $f_d(x)>0$ for $x\in\mathbb R^n\setminus\{0\}$, and let
$$
f_{d*}=\inf \{f_d(x):x\in S_n\},
$$
where $S_n$ is the unit sphere in $\mathbb{R}^n$, i.e., $S_n=\{x\in\mathbb{R}^n:|x|=1\}$.
Let $f$ be a polynomial of the form \eqref{funkcja f}. We set
$$
\|f\|:=\sum_{|\nu|\leq d}|a_\nu|.
$$
Then $\|f\|\geq \|f_d\|\geq f_{d*}$. If $f_{d*}>0$ then we set
$$
\mathbb{K}(f):= \frac{2\|f\|}{f_{d*}}
$$
and
$$
m(f):=f_{d*}-\sum_{j=0}^{d-1}\mathbb{K}(f)^{j-d}\sum_{|\nu|=j}|a_\nu|.
$$
In the further part of the proof we will need the following lemma.
\begin{lemat}\label{lemmamestvar}
If $d=\deg f>0$ and $f_{d*}>0$, then $m(f)>0$ and $f(x)\geq m(f)|x|^d$ for any $x\in \mathbb{R}^n$ such that $|x|\geq \mathbb{K}(f)$.
\end{lemat}
\begin{proof}
Put
$$
h(t):=f_{d*}t^d-\sum_{j=0}^{d-1}\left(\sum_{|\nu|=j}|a_\nu|\right)t^{j}.
$$
Since $\frac{\|f\|}{f_{d*}}\geq 1$, then
$$
K(h)=2\max_{1\leq i\leq d}\left|\frac{\sum_{|\nu|=d-i}|a_\nu|}{f_{d*}}\right|^{{1}/{i}}< 2\max_{1\leq i\leq d}\left|\frac{\|f\|}{f_{d*}}\right|^{{1}/{i}}=\mathbb{K}(f),
$$
and since $h'(t)>0$ for $t >K(h)$, then $h(|x|)\geq h(\mathbb{K}(f))>0$ for $|x|\geq \mathbb{K}(f)$. Moreover, $m(f)\mathbb{K}(f)^d=h(\mathbb{K}(f))$, so $m(f)>0$. On the other hand
$$
m(f)|x|^d\leq \left(f_{d*}-\sum_{j=0}^{d-1}|x|^{j-d}\sum_{|\nu|=j}|a_\nu|\right)|x|^d=h(|x|)\leq f(x)
$$
for $|x|\geq \mathbb{K}(f)$. This gives the assertion of Lemma \ref{lemmamestvar}.
\end{proof}
Take $R\geq \mathbb{K}(f)$ and $D\geq D_n(f,R)$; then for $N\geq \mathcal{N}(m,D)$ we have
$$
2Nf(x)^2+f(x)\partial^2_\beta f(x)-(\partial_\beta f(x))^2\geq 2Nm^2-mD-D^2>0
$$
for $|x|\leq R$.
For $|x|\geq R$, we have
\begin{multline*}
2Nf(x)^2+f(x)\partial^2_\beta f(x)-(\partial_\beta f(x))^2\\
\geq 2Nm^2(f)|x|^{2d}-m(f)|x|^dD_n(f,|x|)-D_n^2(f,|x|).
\end{multline*}
Since for $|x|\geq 1$,
$$
D_n(f,|x|)\leq D_n(f,1)|x|^{d-1},
$$
then for $|x|\geq R$ and $|\beta|=1$ we have
\begin{multline*}
2Nf(x)^2+f(x)\partial^2_\beta f(x)-(\partial_\beta f(x))^2\\
\geq 2Nm^2(f)|x|^{2d}-m(f)D_n(f,1)|x|^{2d-1}-D_n^2(f,1)|x|^{2d-2}\\
\geq |x|^{2d}[2Nm^2(f)-m(f)D_n(f,1)-D_n^2(f,1)]>0
\end{multline*}
for $N>\mathcal{N}(m(f),D_n(f,1))$. This gives \eqref{eqvar1}. Moreover, there exists $\epsilon>0$ such that
$$
2Nf(x)^2+f(x)\partial^2_\beta f(x)-(\partial_\beta f(x))^2>\epsilon
$$
for any $x\in X$ and $|\beta|=1$.
From \eqref{eqvar1} and the above it follows that $\Delta<-16N^2\epsilon$ for any $\alpha,\beta$ and $t$ such that $\gamma_{\alpha,\beta}(t)\in X$. Since $f(x)\geq m$ for $x\in X$, we get $(\varphi_{N,\xi}\circ \gamma_{\alpha,\beta})''(t)\geq \tfrac{\epsilon}{m}$ if $\gamma_{\alpha,\beta}(t)\in X$. This gives the assertion.
\end{proof}
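A numerical illustration of inequality \eqref{eqvar1}, the key step of the proof above (the sample polynomial, exponent and grids are ours): for $f(x,y)=x^4+y^4+1$, whose leading form $x^4+y^4$ is positive off the origin, the inequality already holds with $N=4$ at every grid point and direction:

```python
import numpy as np

# Sample data: f(x,y) = x^4 + y^4 + 1 on X = R^2, N = 4.
N = 4.0
f   = lambda x, y: x**4 + y**4 + 1
fx  = lambda x, y: 4*x**3
fy  = lambda x, y: 4*y**3
fxx = lambda x, y: 12*x**2
fyy = lambda x, y: 12*y**2          # the mixed derivative f_xy vanishes here

pts    = np.linspace(-5, 5, 61)
angles = np.linspace(0, np.pi, 37)  # unit directions beta = (cos, sin)
worst  = np.inf
for x in pts:
    for y in pts:
        for th in angles:
            b1, b2 = np.cos(th), np.sin(th)
            d1 = fx(x, y)*b1 + fy(x, y)*b2          # first directional derivative
            d2 = fxx(x, y)*b1**2 + fyy(x, y)*b2**2  # second directional derivative
            worst = min(worst, 2*N*f(x, y)**2 + f(x, y)*d2 - d1**2)
```

The minimum over the grid stays strictly positive, as the proof predicts.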
By an analogous argument as for Theorem \ref{convexonconvexsetnewvar}, and keeping the notation of its proof, we obtain the following corollary.
\begin{cor}\label{uwypuklanie na wypuklymcor2}
Let $f\in \mathbb R [x]$ be a polynomial of degree $d$ and let $X\subset \mathbb R^n$ be a convex and closed set.
Under the assumptions of Theorem \ref{convexonconvexsetnewvar} and with the notation of its proof,
for any $R\geq \mathbb{K}(f)$, $D\geq D_n(f,R)$ and
$N>\max\{\mathcal{N}(m,D),\mathcal{N}(m(f),D_n(f,1))\}$, and any $\xi\in\mathbb R^n$
the function $\varphi_{N,\xi}(x)=e^{N|x-\xi|^2}f(x)$ is strongly convex on $X$. In particular
the function
$$
\varphi_\xi(x):=e^{|x-\xi|^2}[f(x)-m+D]
$$
is strongly convex on $X$.
\end{cor}
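The quantities $\mathbb{K}(f)$ and $m(f)$ are easy to evaluate in examples. A one-variable sanity check of Lemma \ref{lemmamestvar} (the sample polynomial and grid are ours):

```python
import numpy as np

# One-variable sample: f(t) = t^2 + 2t + 2, coefficients a_2 = 1, a_1 = 2, a_0 = 2.
# Its leading form is f_2(t) = t^2, so f_{2*} = min{ t^2 : |t| = 1 } = 1.
f      = lambda t: t**2 + 2*t + 2
d      = 2
f_star = 1.0
norm_f = 1 + 2 + 2                     # ||f|| = sum of |a_nu|
KK     = 2 * norm_f / f_star           # \mathbb{K}(f) = 10
m_f    = f_star - (KK**(0 - d) * 2 + KK**(1 - d) * 2)   # m(f) = 0.78

# Lemma: f(t) >= m(f) |t|^d whenever |t| >= \mathbb{K}(f).
ts  = np.concatenate([np.linspace(-100, -KK, 2000), np.linspace(KK, 100, 2000)])
gap = np.min(f(ts) - m_f * np.abs(ts)**d)
```

The smallest gap on the grid, $f(-10)-0.78\cdot 100=4$, is attained at $t=-\mathbb{K}(f)$.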
The assumption \eqref{assumptioncompactvar}
that $f_d(x)\ne 0$ for $x\ne 0$ in Theorem \ref{convexonconvexsetnewvar}
cannot be omitted, as the following example shows.
\begin{exa}\label{exacounter}
Let $f\in \mathbb R[x,y,z]$ be a polynomial of the form
$$
f(x,y,z)=(y^2+z^2+1)\left[(x-1)^2(x+1)^2 +(yz+1)^2+y^2\right].
$$
Since $(y^2+z^2+1)\left[(yz+1)^2+y^2\right]\ge \frac{1}{2}$ for $(y,z)\in\mathbb R^2$, we easily see that
$$
f(x,y,z)\ge \tfrac{1}{2}\quad \hbox{for }(x,y,z)\in\mathbb R^3.
$$
Note that $\deg f=6$, and the leading form $f_6(x,y,z)=(y^2+z^2)(x^4+y^2z^2)$ has nontrivial zeroes.
Now take any $N\in \mathbb R$ and $\varphi_N(x,y,z)=e^{N(x^2+y^2+z^2)}f(x,y,z)$.
Then for $\xi\ne 0$ we have
$$
\varphi_N(0,\xi^{-1},-\xi)=e^{N(\xi^{-2}+\xi^2)}(\xi^{-2}+\xi^2+1)(1+\xi^{-2})
$$
and
$$
\varphi_N(-1,\xi^{-1},-\xi)=\varphi_N(1,\xi^{-1},-\xi)=e^{N(\xi^{-2}+\xi^2)}(\xi^{-2}+\xi^2+1)e^N\xi^{-2}.
$$
Hence for sufficiently large $\xi$,
$$
\varphi_N(-1,\xi^{-1},-\xi)=\varphi_N(1,\xi^{-1},-\xi)<\varphi_N(0,\xi^{-1},-\xi),
$$
therefore $\varphi_N$ cannot be a convex function.
\end{exa}
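The failure of convexity in Example \ref{exacounter} can be checked numerically (illustration only; the choices $N=1$ and $\xi=10$ are ours): the value of $\varphi_N$ at the midpoint $x=0$ of the segment joining $(\pm 1,\xi^{-1},-\xi)$ exceeds the average of the endpoint values.

```python
import math

# The polynomial of Example (exacounter).
def f(x, y, z):
    return (y**2 + z**2 + 1) * ((x - 1)**2 * (x + 1)**2 + (y*z + 1)**2 + y**2)

def phi(N, x, y, z):
    return math.exp(N * (x**2 + y**2 + z**2)) * f(x, y, z)

N, xi = 1.0, 10.0
y, z = 1.0 / xi, -xi

mid  = phi(N, 0.0, y, z)                        # midpoint of the segment
ends = (phi(N, -1.0, y, z), phi(N, 1.0, y, z))  # endpoints
```

Indeed `mid` is roughly $37$ times larger than each endpoint value, so $\varphi_N$ is not convex along this segment.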
The assumption \eqref{assumptioncompactvar} in Theorem \ref{convexonconvexsetnewvar} cannot be replaced by the condition $\lim_{|x|\to\infty} f(x)=\infty$.
Indeed, consider a modification of the previous example of the form
$$
f_k(x,y,z)=(y^2+z^2+1)^k\left[(x-1)^2(x+1)^2 +(yz+1)^2+y^2\right],
$$
where $k\geq 2$. Then $\lim_{|(x,y,z)|\to\infty} f_k(x,y,z)=\infty$, and the function $\varphi_N(x,y,z)=e^{N(x^2+y^2+z^2)}f_k(x,y,z)$ is not convex for any $N\in\mathbb R$ by the previous argument.
It turns out that the use of a double exponential function leads to convexity of an appropriate function on $X$.
We show this in the next section.
\section{Double exponential convexifying polynomials}\label{secdoubleexp}
In this section we show, without assuming that the leading form of a polynomial $f\in\mathbb R[x]$ in $n$ variables has only positive values, that the function $\varPhi _N(x)=e^{e^{N|x|^2}}f(x)$ is strongly convex on a closed and convex semialgebraic set $X\subset\mathbb{R}^n$ (not necessarily compact), provided the polynomial $f$ takes positive values on $X$ and $N$ is sufficiently large.
\begin{tw}\label{tedoubleexp}
Let $X\subset \mathbb R^n$ be a closed and convex semialgebraic set, and let $f\in\mathbb R[x]$ be a polynomial which has only positive values on $X$. Then there exists $N_0\in\mathbb R$ such that for any $N\geq N_0$ the function $\varPhi _N(x)=e^{e^{N|x|^2}}f(x)$ is strongly convex on $X$.
\end{tw}
\begin{proof}
Let $f$ be of the form \eqref{funkcja f}, $d=\deg f$. Then $|f(x)|$ and the first and second directional derivatives of $f$ at $x\in X$ in directions of vectors of length $1$ are bounded in absolute value by
$\tilde D(1+|x|^d)$, where
$$
\tilde D:=|a_0|+\sum_{|\nu|=1}|a_\nu|+\sum_{j=2}^d\sum_{|\nu |=j}j(j-1)|a_ \nu |.
$$
Take an affine line in $\mathbb R^n$ of the form
$$
\gamma _{\alpha, \beta}(t):=\beta t + \alpha, \ \ t\in \mathbb{R},
$$
where $(\alpha,\beta)\in A$ and the set $A$ is defined in \eqref{eqdefA}. Then $|\beta|=1$, $\langle \alpha,\beta\rangle=0$, and $|\gamma_{\alpha,\beta}(t)|^2=t^2+|\alpha|^2$.
Let us write the second derivative of $\varPhi_N\circ \gamma_{\alpha,\beta}$ in the form
$$
(\varPhi_N\circ\gamma_{\alpha,\beta})''(t)=e^{e^{N(t^2+|\alpha|^2)}}(a(t) t^2 +b(t)t +c(t)),
$$
where
\begin{equation*}
\begin{split}
a(t) & = 4N^2(e^{N(t^2+|\alpha|^2)}+e^{2N(t^2+|\alpha|^2)})f\circ\gamma_{\alpha,\beta}(t),
\\
b(t) & = 4Ne^{N(t^2+|\alpha|^2)}(f\circ\gamma_{\alpha,\beta})'(t),
\\
c(t) & = 2Ne^{N(t^2+|\alpha|^2)}f\circ\gamma_{\alpha,\beta}(t)
+(f\circ\gamma_{\alpha,\beta})''(t)
\end{split}
\end{equation*}
The discriminant of the polynomial
$
P_t(\lambda) = a(t)\lambda^2 + b(t)\lambda+c(t)
$
is of the form
\begin{equation*}
\begin{split}
\Delta=16N^2e^{2N(t^2+|\alpha|^2)}
&\left[\left((f\circ\gamma_{\alpha,\beta})'(t)\right)^2\right.
\\
&-f\circ\gamma_{\alpha,\beta}(t)(f\circ\gamma_{\alpha,\beta})''(t)\left(1- e^{-N(t^2+|\alpha|^2)}\right)\\
&\left.-2N(f\circ\gamma_{\alpha,\beta})^2(t)\left(1+ e^{N(t^2+|\alpha|^2)}\right)\right].
\end{split}
\end{equation*}
So, by the choice of the
number $\tilde D$, we have
\begin{equation*}
\begin{split}
\Delta\leq 32N^2e^{2N(t^2+|\alpha|^2)}&\left[ \tilde D^2\left(1+|\gamma_{\alpha,\beta}(t)|^d\right)^2\right.\\
&\;\; \left.-N(f\circ\gamma_{\alpha,\beta})^2(t)\left(1+ e^{N|\gamma_{\alpha,\beta}(t)|^2}\right)
\right].
\end{split}
\end{equation*}
Since the set $X$ is semialgebraic and $f^{-1}(0)\cap X=\emptyset$, by the H\"ormander--{\L}ojasiewicz inequality (see e.g.\ \cite[Corollary 2.4]{KS0}) there exist $C,K,\mathcal{L}>0$, where $K,\mathcal{L}\in \mathbb{Z}$, $K\geq d$, depend on $d$ and the {\it complexity} of $X$ (i.e., the degrees and the number of the polynomials describing $X$), such that
$$
f(x)\geq C\left({1+|x|^K}\right)^{-\mathcal{L}}\quad \hbox{for }x\in X.
$$
Moreover, the numbers $K,\mathcal{L}$ are effectively computable.
By the above,
\begin{equation*}
\begin{split}
\Delta\leq 32N^2e^{2N(t^2+|\alpha|^2)}\left({1+|\gamma_{\alpha,\beta}(t)|^K}\right)^{-\mathcal{L}}&\left[ \tilde D^2\left(2+|\gamma_{\alpha,\beta}(t)|^K\right)^{\mathcal{L}+2}\right.
\\
&\left.\;\; -NC^2
\left(1+ e^{N|\gamma_{\alpha,\beta}(t)|^2}\right)\right].
\end{split}
\end{equation*}
If $N$ is large enough, then for any $x\in\mathbb R^n$ we have $$\tilde D^2(2+|x|^K)^{\mathcal{L}+2}<NC^2(1+e^{N|x|^2}).$$
Therefore $\Delta<0$, so
$P_t(\lambda) >0$ for any $\lambda \in \mathbb R$.
Consequently
$$
(\varPhi_N\circ\gamma_{\alpha,\beta})''(t)=e^{e^{N(t^2+|\alpha|^2)}}P_t(t) >0,
$$
for $t\in \mathbb R$. Note that $\lim_{|t|\to \infty}(\varPhi_N\circ \gamma_{\alpha,\beta})''(t)=+\infty$, hence there exists $\mu>0$ such that $(\varPhi_N\circ\gamma_{\alpha,\beta})''(t)\geq \mu $ for $t\in\mathbb R$. Moreover, the number $\mu$ can be chosen independent of $\gamma_{\alpha,\beta}$. This gives the assertion.
\end{proof}
\begin{rem}\label{remeffectcomputed}
The number $N_0$ in Theorem \ref{tedoubleexp} can be effectively computed, provided we can estimate the constant $C$. More precisely, under notations in the proof, if $k > (\mathcal{L}+2)K$, then for $|x|\geq 1$ we have
$$
NC^2\left(1+e^{N|x|^2}\right)\geq NC^2+\sum_{j=0}^k\frac{C^2N^{j+1}}{j!}|x|^{2j}>\tilde D^2\left(2+|x|^K\right)^{\mathcal{L}+2}
$$
for
$$
N>k!\max_{i=0,\ldots,\mathcal{L}+2}\left({\tilde D^2C^{-2}2^{\mathcal{L}+2-i}\binom{\mathcal{L}+2}{i}}\right).
$$
If additionally $N\geq \tilde D^2 C^{-2}3^{\mathcal{L}+2}$, then the above inequality holds for any
$x\in\mathbb R^n$.
\end{rem}
\begin{rem}\label{remdoubleexp}
We cannot omit the assumption in Theorem \ref{tedoubleexp} that the set $X$ is semialgebraic. For instance if $f(x,y)=-y^2+y$ and $X=\{(x,y)\in\mathbb R^2:e^{-e^x}\leq y\leq \tfrac{1}{2},\; x\geq 0\}$, then $f(x,y)>0$ on $X$, but the function $\varPhi_N(x,y)$ is not convex on $X$ for any $N\in \mathbb R$.
\end{rem}
Assuming that $f(x)\geq m$ on $X$, for some $m>0$, we can omit the assumption in Theorem \ref{tedoubleexp} on semialgebraicity of $X$. More precisely, by a similar argument as in the proof of Theorem \ref{tedoubleexp} we obtain
\begin{tw}\label{tedoubleexp2}
Let $X\subset \mathbb R^n$ be a closed and convex set, and let $f\in\mathbb R[x]$ be a polynomial such that $f(x)\geq m$ for $x\in X$ and some $m>0$. Then there exists $N_0\in\mathbb R$ such that for any $N\geq N_0$ the function $\varPhi _N(x)=e^{e^{N|x|^2}}f(x)$ is strongly convex on $X$.
\end{tw}
\begin{rem}\label{tedoubleexp2bis}
By a similar argument as for the proof of Theorem \ref{tedoubleexp2} we obtain that the assertion of this theorem holds not only for the function $\varPhi_{N}$ but also for the function $\Phi_{N}(x)=e^{Ne^{|x|^2}}f(x)$. More precisely, we have:
Let $X\subset \mathbb R^n$ be a closed and convex set, and let $f\in\mathbb R[x]$ be a polynomial such that $f(x)\geq m$ for $x\in X$ and some $m>0$. Then there exists $N_0\in\mathbb R$ such that for any $N\geq N_0$ the function $\Phi _{N}$
is strongly convex on $X$.
\end{rem}
A similar argument as for Theorem \ref{tedoubleexp} gives the following two theorems.
\begin{tw}\label{convexonconvexsetnewxdoublevarsemi}
Let $X\subset \mathbb R^n$ be a convex closed semialgebraic set and let $r>0$. If $f$ is a polynomial such that
\begin{equation}\label{estfcompact1xdoublevarsemi}
f(x)>0\quad\hbox{for }x\in X,
\end{equation}
then there exists $N_0\in\mathbb N$ such that for any integer $N\geq N_0$ and any $\xi\in\mathbb R^n$, $|\xi|\leq r$,
the function $\varPhi_{N,\xi}(x)=e^{e^{N|x-\xi|^2}}f(x)$ is strongly convex on $X$.
Moreover, there exists $\alpha\in\mathbb R$ such that the function
$$
\varPhi_\xi(x)=e^{e^{|x-\xi|^2}}[f(x)+\alpha], \quad x\in \mathbb{R}^n
$$
is strongly convex on $X$, provided $ \xi\in \mathbb{R}^n$, $|\xi|\leq r$.
\end{tw}
\begin{tw}\label{convexonconvexsetnewxdouble}
Let $X\subset \mathbb R^n$ be a convex closed set and let $r>0$. Assume that
$f$ is a polynomial of degree $d>0$ and that there exists $m\in\mathbb R$ such that
\begin{equation}\label{estfcompact1xdouble}
0<m < \inf \{f(x):x\in X\}.
\end{equation}
Then there exists $N_0\in\mathbb N$ such that for any integer $N\geq N_0$ and any $\xi\in\mathbb R^n$, $|\xi|\leq r$
the function $\varPhi_{N,\xi}(x)=e^{e^{N|x-\xi|^2}}f(x)$ is strongly convex on $X$.
Moreover, there exists $\alpha\in\mathbb R$ such that the function
$$
\varPhi_\xi(x)=e^{e^{|x-\xi|^2}}[f(x)+\alpha], \quad x\in \mathbb{R}^n
$$
is strongly convex on $X$, provided $ \xi\in \mathbb{R}^n$, $|\xi|\leq r$.
\end{tw}
It is impossible to obtain $N$ in the above theorem such that the function $\varPhi_{N,\xi}$ is convex for any $\xi\in X$, as the following example shows.
\begin{exa}\label{exacounter2}
Let $f\in \mathbb R[x,y,z]$ be a polynomial of the form
$$
f(x,y,z)=\left[(yz+1)^2+y^2\right]\left[\left(xz^2-1\right)^2\left(xz^2+1\right)^2 +y^2+z^2+1\right].
$$
Analogously as in Example \ref{exacounter}
we see that
$$
f(x,y,z)\ge \tfrac{1}{2}\quad \hbox{for }(x,y,z)\in\mathbb R^3,
$$
and the leading form $f_{16}(x,y,z)=y^2z^{10}x^4$ has nontrivial zeroes.
Now take any $N\in \mathbb R$ and $\varPhi_N(x,y,z)=e^{e^{N(x^2+y^2+z^2)}}f(x,y,z)$.
Then for $\xi=(0,t^{-1},-t)$, $t>0$ we have
$$
\frac{\partial^2 \varPhi_{N,\xi}}{\partial x^2}(\xi)=e\left(2Nf(\xi)+\frac{\partial^2 f}{\partial x^2}(\xi)\right).
$$
Since
$$
f(\xi)=2t^{-2}+t^{-4}+1
$$
and
$$
\frac{\partial^2 f}{\partial x^2}(\xi)=-4t^2,
$$
then
we easily see that $\frac{\partial^2 \varPhi_{N,\xi}}{\partial x^2}(\xi)<0$ for sufficiently large $t$. So $\varPhi_{N,\xi}$ cannot be a convex function.
\end{exa}
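Again this can be verified numerically (illustration only; $t=10$ and $N=1$ are our choices), using the closed forms of $f(\xi)$ and $\frac{\partial^2 f}{\partial x^2}(\xi)$ computed above:

```python
import math

# The polynomial of Example (exacounter2).
def f(x, y, z):
    return ((y*z + 1)**2 + y**2) * ((x*z**2 - 1)**2 * (x*z**2 + 1)**2 + y**2 + z**2 + 1)

t, N = 10.0, 1.0
xi = (0.0, 1.0 / t, -t)

# Cross-check f(xi) against the closed form 2 t^{-2} + t^{-4} + 1.
f_xi = f(*xi)
closed_form = 2 * t**-2 + t**-4 + 1

# d^2 Phi_{N,xi} / dx^2 at xi equals e * (2 N f(xi) + f_xx(xi)), with f_xx(xi) = -4 t^2.
second_deriv = math.e * (2 * N * closed_form - 4 * t**2)
```

Here `second_deriv` is about $-1082$, confirming that $\varPhi_{N,\xi}$ is not convex at $\xi$.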
\begin{rem}\label{remtripleexpmotnotimproves}
It is worth noting that the use of the triple exponential convexifying function $\phi_N(x)=e^{e^{e^{N|x|^2}}}f(x)$ of a polynomial $f$ does not improve the situation: the function
$\phi_{N,\xi}(x)=e^{e^{e^{N|x-\xi|^2}}}f(x)$ still need not be convex for every $\xi\in X$.
\end{rem}
\section{Algorithm for searching lower critical points}\label{Algorithm0}
\subsection{Searching lower critical points in a compact set}
In this part we give an algorithm which produces, starting from an arbitrary point, a sequence of points converging to a lower critical point of a polynomial on a convex compact semialgebraic set. A similar algorithm was proposed in \cite{KS}.
Let $X\subset \mathbb R^n$ be a closed set and let $f$ be a function of class $C^1$ in a neighborhood $U\subset \mathbb R^n$ of $X$.
We denote the set of lower critical points of the function $f$ on the set $X$ by $\Sigma _X f.$ It is obvious that the set of ordinary critical points $\Sigma f$ of the function $f$ is contained in the set $\Sigma _X f.$
Our algorithm for approximating lower critical points of $f$ is based on iterating the computation of the smallest value of the strongly convex function $\varphi _{\xi }$ on the convex and compact set $X$. More precisely, let
$$
R\ge
\max\{|x|:x\in X\}.
$$
Take any polynomial $f\in\mathbb{R}[x]$ of the form \eqref{funkcja f}. Let
$$
m=-\sum_{j=0}^dR^j\sum_{|\nu |=j}|a_{\nu }|,
$$
and let
$$
D>D_n (f,2R).
$$
Then we have
$$
f(x)-m+D\geq D \quad\hbox{for }x\in X,
$$
and from Corollary \ref{uwypuklanie na zwartymcor2}, we have that for any $\xi\in X$, the function
$$
\varphi_\xi(x)=e^{|x-\xi|^2}[f(x)-m+D],\quad x\in \mathbb{R}^n
$$
is $\mu$-strongly convex on $X$ for some $\mu>0$. Since we are looking for lower critical points of $ f $, without loss of generality we may assume that $-m+D=0$; therefore
$$
\varphi_\xi(x)=e^{|x-\xi|^2}f(x),\quad x\in \mathbb{R}^n
$$
is a $\mu$-strongly convex function on $X$ for any $\xi\in X$.
Any strictly convex function $\varphi$ defined on a compact and convex set $X$ attains its minimal value on $X$ at a unique point, denoted by $\operatorname{argmin}_X \varphi$. Therefore, choosing an arbitrary point $a_0 \in X$, we can define by induction a sequence $a_\nu \in X$, $\nu\in\mathbb{N}$, in the following way
\begin{equation}\label{eqanu}
a_\nu :=\operatorname{argmin}_X \varphi_{a_{\nu -1}}\quad \hbox{for }\nu \geq 1.
\end{equation}
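A minimal one-variable sketch of the iteration \eqref{eqanu} (the toy polynomial, the interval and the grid-search minimizer are ours; an exact implementation would instead solve the polynomial systems of Remark \ref{effectivemin}):

```python
import numpy as np

# Toy data: X = [-2, 2] and f(t) = t^4 - t^2 + 1, which is positive on X.
f    = lambda t: t**4 - t**2 + 1
grid = np.linspace(-2.0, 2.0, 40001)
fg   = f(grid)

a = 2.0                                   # starting point a_0
for _ in range(60):
    phi = np.exp((grid - a)**2) * fg      # phi_{a_{nu-1}} on the grid
    a   = grid[np.argmin(phi)]            # a_nu = argmin_X phi_{a_{nu-1}}
```

Starting from $a_0=2$, the sequence decreases $f$ at every step and approaches the critical point $t=1/\sqrt 2$ of $f$.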
\begin{tw}\label{approxi}
Let $X\subset \mathbb R^n$ be a compact convex semialgebraic set and $f:\mathbb R ^n \rightarrow \mathbb R$ a positive polynomial on $X$. Let $a_\nu $ be a sequence defined as $a_ \nu :=\operatorname{argmin}_X \varphi_{a_{\nu -1}}$ with $a_0 \in X.$ Then the limit $$
a_*=\lim_{\nu \rightarrow \infty } a_ \nu
$$
exists and $a_* \in \Sigma _X f.$
\end{tw}
The proof of Theorem \ref{approxi} follows word for word
the proof of Theorem 6.5 in \cite{KS}, where one should use the following three lemmas instead of the corresponding lemmas in \cite{KS}.
\begin{lemat}
For any $\nu \in \mathbb N$, we have
$$|a_{\nu +1}-a_{\nu }|=\operatorname{dist}(a_\nu , f^{-1}(f(a_{\nu +1})) \cap X).$$
\end{lemat}
\begin{lemat}\label{lemma44x}
For any $\nu \in \mathbb N$ we have
$$f(a_{\nu +1})\leq \frac{f(a_\nu )-\frac{\mu }{2}|a_{\nu +1}-a_\nu |^2}{e^{|a_{\nu +1}-a_\nu |^2}}.$$
In particular the sequence $f(a_\nu )$ is decreasing.
\end{lemat}
\begin{proof}
Since $\varphi _\xi $ is strongly convex, the definition of $a_{\nu +1}$ implies that the function
$$
[0,1] \ni t\mapsto \varphi _{a_\nu }(a_\nu +t(a_{\nu +1}-a_\nu ))
$$
is non-increasing, so $\langle a_{\nu +1}-a_{\nu }, \nabla \varphi _{a_{\nu }}(a_{\nu +1 })\rangle \leq 0$.
Again, since $\varphi _{a_\nu }$ is $\mu$-strongly convex, we get
$$
f({a_\nu })\geq f(a_{\nu +1})e^{|a_{\nu }-a_{\nu +1}|^2}+\frac{\mu}{2}|a_{\nu }-a_{\nu +1} |^2.$$
This gives the assertion.
\end{proof}
We can also adapt the following lemma (\cite[Lemma 6.3]{KS}).
\begin{lemat}
Let $f:[0,\eta ]\rightarrow \mathbb R$ be a $C^1$ function such that $0<f\leq C$ and $f'\leq -\eta $ on $[0,\eta ]$ for some $C\geq \frac{1}{2}$ and $\eta >0.$
Assume that $\varphi (x)=e^{x^2}f(x)$ is strictly convex on $[0,\eta ].$ Then $b_1:=\operatorname{argmin}_{[0,\eta ]}\varphi \geq \frac{\eta }{2C}.$ Hence $f(0)-f(b_1)\geq \frac{\eta ^2}{2C}.$
\end{lemat}
\begin{rem}\label{effectivemin}
The function $\varphi_{a_{\nu-1}}$ is defined by using the function $\exp$.
However, to determine the minimum value of this function on a compact convex semialgebraic set $X$ it is enough to solve only polynomial equations and inequalities. More precisely, the set $X$ is the union of a finite collection of basic semialgebraic sets, so we may assume that
$$
a_\nu\in X=\{x\in\mathbb{R}^n:g_1(x)\geq 0,\ldots,g_k(x)\geq 0\},
$$
where $g_1,\ldots,g_k\in\mathbb{R}[x]$. Then
$$
X=\{x\in\mathbb{R}^n:g_1(x)e^{|x-a_{\nu-1}|^2}\geq 0,\ldots,g_k(x)e^{|x-a_{\nu-1}|^2}\geq 0\}.
$$
Therefore, when applying the Lagrange multiplier or the Karush--Kuhn--Tucker theorem to compute the point $a_\nu$, it is enough to solve a system of polynomial equations and inequalities.
\end{rem}
\subsection{Searching lower critical points in an unbounded set}
Let $X\subset \mathbb R^n$ be a convex and closed semialgebraic set. Let $f\in \mathbb R[x]$ be a polynomial of degree $d>0$ of the form \eqref{funkcja f} and let $f_d$ be the leading form of $f$. Assume that $f_{d*} >0$.
Then by Theorem \ref{convexonconvexsetnewvar}, we may effectively compute a real number $N\geq 1$ such that the function $\varphi_{N,\xi}(x)=e^{N|x-\xi|^2}f(x)$ for $\xi\in\mathbb R^n$ is strongly convex on $X$. Moreover, $\varphi_{N,\xi}(x)\geq f(x)\geq m(f)|x|^d$ for $x\in X$, $|x|\geq \mathbb{K}(f)$, so we have
\begin{equation*}\label{factlim}
\lim_{x\in X,\,|x|\to\infty} \varphi_{N,\xi}(x)=+\infty.
\end{equation*}
Then we may uniquely determine the sequence
\begin{equation}\label{eqanu2}
a_\nu :=\operatorname{argmin}_{X} \varphi_{a_{\nu -1}}\quad \hbox{for }\nu \geq 1.
\end{equation}
An analogous argument to that for Theorem \ref{approxi} gives the following theorem.
\begin{tw}\label{approxi2}
Let $a_\nu $ be the sequence defined by \eqref{eqanu2}.
Then the limit $$
a_*=\lim_{\nu \rightarrow \infty } a_ \nu
$$
exists and $a_* \in \Sigma _X f.$
\end{tw}
\begin{rem}\label{generalcase}
If $X\subset \mathbb R^n$ is a closed and convex semialgebraic set and a polynomial $f\in\mathbb R[x]$ is positive on $X$ and it is proper on $X$ (i.e., $\lim_{x\in X,\,|x|\to \infty}f(x)=+\infty$), then by Theorem \ref{convexonconvexsetnewxdoublevarsemi} one can repeat the argument from Theorem \ref{approxi} and obtain a sequence $a_\nu\in X$ such that $\lim_{\nu\to\infty}a_\nu=a_*\in\Sigma_Xf$.
If we assume only that $f(x)>0$ on $X$,
then the sequence $a_\nu$ can tend to infinity. Moreover, in the construction of $a_\nu$ we have to change $N$ step by step.
\end{rem}
| {
"timestamp": "2018-12-13T02:09:26",
"yymm": "1812",
"arxiv_id": "1812.04874",
"language": "en",
"url": "https://arxiv.org/abs/1812.04874",
"abstract": "Let $X\\subset\\mathbb{R}^n$ be a convex closed and semialgebraic set and let $f$ be a polynomial positive on $X$. We prove that there exists an exponent $N\\geq 1$, such that for any $\\xi\\in\\mathbb{R}^n$ the function $\\varphi_N(x)=e^{N|x-\\xi|^2}f(x)$ is strongly convex on $X$. When $X$ is unbounded we have to assume also that the leading form of $f$ is positive in $\\mathbb{R}^n\\setminus\\{0\\}$. We obtain strong convexity of $\\varPhi_N(x)=e^{e^{N|x|^2}}f(x)$ on possibly unbounded $X$, provided $N$ is sufficiently large, assuming only that $f$ is positive on $X$. We apply these results for searching critical points of polynomials on convex closed semialgebraic sets.",
"subjects": "Algebraic Geometry (math.AG); Classical Analysis and ODEs (math.CA)",
"title": "Exponential convexifying of polynomials"
} |
https://arxiv.org/abs/1710.05342 | Three problems on exponential bases | We consider three special and significant cases of the following problem. Let D be a (possibly unbounded) set of finite Lebesgue measure in R^d. Find conditions on D for which the standard exponential basis on the unit cube of R^d is a frame, a Riesz sequence, or a Riesz basis on L^2(D). | \section{Introduction}
\label{sec:intro}
We are interested in the following problem.
Let $D\subset\mathbb R^d$ be a set of Lebesgue measure $|D|<\infty$.
Let $E( \Z^d)=\{e^{2\pi i n\cdot x}\}_{n\in\Z^d}$ be the standard exponential basis for the unit cube $ \cal Q_d =[-\frac 12, \frac 12]^d$.
Can $E(\Z^d)$ be a frame, a Riesz sequence, or a Riesz basis for $L^2(D)$?
We recall definitions and general facts about frames, Riesz sequences and Riesz bases in Section 2.
Our investigation was motivated by the following problems:
\begin{itemize}
\item[{\bf P1.}] {\it (The broken interval)}. Let $ J= [0,\alpha)\cup [\alpha+r, L+r)$, with $0<\alpha<\, L$ and $r>0$. For which values of the parameters is the set $E(\Z)$ a Riesz basis, a Riesz sequence or a frame in $L^2(J)?$
\end{itemize}
It is easy to verify that $E(\Z)$ is a frame on $J$ when $L+r\leq 1$, and it is a Riesz sequence when either $\alpha\ge 1$ or $L-\alpha\ge 1$ (see also Lemma \ref{L-dil-basis}). It is proved in \cite{Laba} that $E(\Z)$ is an orthonormal basis for $J$ if and only if the measure of $J$ is $L=1$ and the ``gap'' $ r$ is a non-negative integer.
\begin{itemize}
\item[{\bf P2.}] {\it (The rotated square)}. Let $Q_h= [-\frac h2, \frac h2]\times [-\frac h2, \frac h2]$ be a square with side $h>0$. For $\theta\in [0, 2\pi)$, we let $\rho_\theta:\mathbb R^2\to\mathbb R^2$ be the rotation $\rho_\theta(x,y)= (x\cos\theta-y\sin\theta, \ x\sin\theta+y\cos\theta)$.
For which values of $\theta $ is $E(\Z^2)$ a Riesz basis, a Riesz sequence or a frame on $ \rho_\theta( Q_h)$?
\end{itemize}
%
The solution to this problem, too, is trivial only for certain values of the parameters (for example, when $\theta$ is an integer multiple of $\frac \pi 2$).
\medskip
The next problem was kindly suggested by Chun Kit Lai.
\begin{itemize}
\item[{\bf P3.}] {\it (The translated parallelepiped)}. Let $P \subset \mathbb R^d$ be a parallelepiped with sides parallel to the vectors $ v_1$, ...,\, $ v_d\in\mathbb R^d$. Find conditions on these vectors for which the set $E(\Z^d)$ is a Riesz basis, a Riesz sequence, or a frame in $L^2(P)$.
\end{itemize}
We recall that a {\it lattice} is the image of $\Z^d$ by a linear invertible transformation \newline $B:\mathbb R^d\to\mathbb R^d$
and we observe that
Problem 3 is equivalent to the following: {\it for which lattices $\Lambda=B\Z^d$ is the set $E(B\Z^d)=\{e^{2\pi i Bn\cdot x}\}_{n\in\Z^d}$ a Riesz basis, or a Riesz sequence, or a frame in $L^2(\cal Q_d)$? }
Problem 3 is related to certain optimization problems on lattices that have deep applications in computer sciences and in cryptography. See Section 7.1 for details and references.
\medskip
We first prove necessary and sufficient conditions for which $E(\Z^d)$ is a Riesz sequence or a frame on a given domain $D\subset \mathbb R^d$ and then we completely solve Problems 1, 2 and 3.
We let \begin{equation}\label{e-Phi} \Phi(x)=\sum_{m\in\Z^d} \chi_D(x+m), \end{equation} where $\chi_D$ denotes the characteristic function of $D$. Note that $\Phi(x)$ only takes non-negative integer values. Our first result is the following
\begin{theorem}
\label{T-Riesz }
$E(\Z^d)$ is a Riesz sequence in $L^2(D)$
if and only if there exist constants $0<A\leq B <\infty$ for which $A\leq \Phi(x) \leq B$ for a.e. $x\in \cal Q_d $.
\end{theorem}
That is, we prove that $E(\Z^d)$ is a Riesz sequence in $L^2(D)$ if and only if the integer translates of $D$ (i.e., the sets
$D+n=\{x+n,\, x\in D\}$, with $n\in\Z^d$) cover $\mathbb R^d $ with uniformly bounded multiplicity, up to a set of measure zero.
\medskip
It is interesting to compare Theorem \ref{T-Riesz } with results in \cite{AAC, K, GL}.
In these papers the authors consider domains that {\it multi-tile } $\mathbb R^d$, i.e. bounded measurable sets $S \subset\mathbb R^d$ for which there exist a set of translations $\Lambda $ and an integer $h>0$ such that $\sum_{\lambda\in\Lambda} \chi_{S+\lambda}(x)\equiv h$ a.e.; if $h=1$, we say that $S$ {\it tiles} $\mathbb R^d$.
It is proved in \cite[Theorem 1]{GL} and in \cite[Theorem 1]{K} that bounded domains that multi-tile $\mathbb R^d$ with a lattice of translation have an exponential basis; in the recent \cite[Theorem 4.4] {AAC} the converse of \cite[Theorem 1]{K} is proved.
If $\Phi$ is as in \eqref{e-Phi} and $\Phi(x)\equiv k$ a.e., then $D$ multi-tiles $\mathbb R^d$ with the lattice of translations $\Z^d$. By Theorem \ref{T-Riesz }, $E(\Z^d)$ is a Riesz sequence on $L^2(D)$; when $D$ is bounded, it is shown in \cite[Theorem 1]{K} that $E(\Z^d)$ can be completed to an exponential basis for $L^2(D)$, but when $D$ is not bounded an example in \cite{AAC} shows that this may not be possible.
\medskip
Next, we investigate conditions for which $E(\Z^d)$ is a frame on $D $.
The following result is proved in \cite[Lemma 2.10]{GLi}. See also the recent \cite[Theorem 2]{BHM}.
\begin{theorem} \label{T-frame}
$E(\Z^d)$ is a frame on $L^2( D)$ if and only if for every $ m, s\in\Z^d$, with $ m\ne s$,
we have that \begin{equation}\label{e-ass-fr}
|(D+ m)\cap (D+ s)|=0.
\end{equation}
\end{theorem}
In other words,
$E(\Z^d)$ is a frame in $L^2(D)$ if and only if the integer translates of $D$ only overlap on sets of measure zero. Equivalently, $E(\Z^d)$ is a frame on $L^2( D)$ if and only if $ \Phi(x) \leq 1$ for a.e. $x\in\mathbb R^d$.
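Theorem \ref{T-frame} also lends itself to a direct numerical test. The sketch below (our own example; the sample sets and the truncation are our choices) checks the overlap condition \eqref{e-ass-fr} for finite unions of half-open intervals in $\mathbb R$:

```python
# Sketch (our own example): test the overlap condition of Theorem 2 for
# D a finite union of half-open intervals [a, b) in R.

def overlap_length(i1, i2):
    """Length of the intersection of two intervals [a, b) and [c, d)."""
    (a, b), (c, d) = i1, i2
    return max(0.0, min(b, d) - max(a, c))

def translates_essentially_disjoint(intervals, m_range=range(-6, 7)):
    """True iff |(D + m) cap D| = 0 for every integer m != 0, which by
    Theorem 2 is equivalent to E(Z) being a frame on L^2(D)."""
    return all(
        overlap_length((a + m, b + m), (c, d)) == 0.0
        for m in m_range if m != 0
        for (a, b) in intervals
        for (c, d) in intervals
    )

print(translates_essentially_disjoint([(0.0, 0.3), (1.5, 1.9)]))  # True
print(translates_essentially_disjoint([(0.0, 0.6), (1.5, 1.9)]))  # False
```

In the first example the integer translates of $D$ are pairwise disjoint, so $E(\Z)$ is a frame; the second fails because $[1.5,1.9)-1$ meets $[0,0.6)$.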
\medskip
We finally prove the following
\begin{theorem}\label{T-basis} Assume that $|D|=1$. The following are equivalent
in $L^2(D)$:
\begin{itemize}\item [a)]
$E(\Z^d)$ is a frame;
\item [b)] $E(\Z^d)$ is complete;
\item [c)] the integer translates of $D$ tile $\mathbb R^d$;
\item[d)] $E(\Z^d)$ is an orthonormal Riesz basis;
\item[e)] $E(\Z^d)$ is a Riesz sequence.
\end{itemize}
\end{theorem}
We recall that a set $\{w_i\}_{i\in I}$ is {\it complete} in a Hilbert space $ (H ,\ \l \ , \ \r_H)$ if and only if $\l u, w_i\r_H=0$ for every $i\in I$ implies $u=0$. A frame is complete but the converse is not necessarily true.
B. Fuglede proved in \cite{F} that if $\Lambda=A\Z^d$ is a lattice in $\mathbb R^d$, the set $E(A\Z^d)$ is an orthogonal exponential basis in $L^2(D)$ if and only if $\{D +\mu\}_{\mu\in( A^t)^{-1} \Z^d }$ tiles $\mathbb R^d$. Here, $(A^t)^{-1}$ denotes the inverse of the transpose of $A$.
Thus, the equivalence of c) and d) in Theorem \ref{T-basis} is a special case of Fuglede's theorem.
The connections between tiling and exponential bases are deep and interesting and have been intensely investigated. We refer the reader to the introduction and to the references cited in \cite{BHM}. See also \cite{K2}.
\medskip
This paper is organized as follows:
in Section 2 we present preliminary definitions and known results. We prove Theorems \ref{T-Riesz }, \ref{T-frame} and \ref{T-basis} in Sections 3 and 4. We solve Problems 1, 2 and 3 in Sections 5, 6 and 7.
\medskip \noindent
{\it Acknowledgments.} The first author wishes to thank C.K. Lai for bringing Problem 3 to our attention and E. Hernandez for stimulating discussions that helped us improve the quality of this paper. We also wish to thank the anonymous referees for their thorough reading of our manuscript and for their valuable suggestions.
\section{Preliminaries and notation}
We denote with $x\cdot y= x_1y_1+\,...\,+ x_dy_d$ the inner product of $x=(x_1, ...,\, x_d)$, $y=(y_1, ...,y_d)\in\mathbb R^d$.
We let $\|f\|_2=\left(\int_{\mathbb R^d} |f(x)|^2 dx\right)^{\frac 12}$ be the standard norm in $L^2(\mathbb R^d)$; we let ${\bf c}=\{c_j\}_{j\in\Z^d}$ and we denote with $\|{\bf c}\|_{\ell^2}=(\sum_{j\in\Z^d } |c_j|^2)^{\frac 12}$ the standard norm in $\ell^2(\Z^d)$. We denote with $\l f ,\, g \r_{2}=\int_{\mathbb R^d} f(x)\bar g(x)dx$ the inner product in $L^2(\mathbb R^d)$. When there is no ambiguity we will use the same notation also for the inner product in $\ell^2(\Z^d)$.
The Fourier transform of a function $f\in L^2(\mathbb R^d)\cap L^1(\mathbb R^d)$ is $\hat f(x)=\int_{\mathbb R^d} f(t) e^{-2\pi i x\cdot t} dt. $
We will often say that a family of sets $\{D_\lambda\}_{\lambda\in\Lambda}$ {\it covers} $\mathbb R^d$ with the understanding that $\mathbb R^d-\cup_{\lambda\in\Lambda} D_\lambda $ may be a nonempty set of measure zero.
We use the notation $\tau_w$ to denote the translation operator $g \to g (\cdot+w)$.
\subsection{Frames and Riesz bases}
We have used the excellent textbooks \cite{Heil} and \cite{Cr} for most of the definitions and preliminary results presented in this section.
Let $H$ be a separable Hilbert space with inner product $\langle\ ,\ \rangle $ and norm $||\ ||=\sqrt{\l \ , \ \r} $.
%
A sequence of vectors ${\mathcal V}= \{v_j\}_{j\in\Z} \subset H $ is a
{\it frame} if
there exist constants $0< A, \ B<\infty$ such that the following inequality holds for every $w\in H$.
\begin{equation}\label{e2-frame}
A||w||^2\leq \sum_{j\in\Z} |\l w, v_j\r |^2\leq B ||w||^2.
\end{equation}
%
We say that ${\cal V}$ is a {\it tight frame } if $A=B$ and is a {\it Parseval frame} if $A=B=1$.
The left inequality in \eqref{e2-frame} implies that ${\cal V}$ is complete in $H$ but it may not be linearly independent. A {\it Riesz basis} is a linearly independent frame.
An equivalent definition of Riesz basis is the following: the set
${\mathcal V}$ is a {\it Riesz sequence} if there exist constants $0<A\leq B <\infty$ such that, for every finite set of coefficients $ \{a_j\}_{j\in J}\subset\mathbb C $,
we have that
\begin{equation}\label{e2- Riesz-sequence}
A \sum_{j\in J} |a_j|^2 \leq \left\Vert \sum_{j\in J} a_j v_j \right\Vert^2 \leq B \sum_{j\in J} |a_j|^2,
\end{equation}
and it is a {\it Riesz basis} if it also satisfies \eqref{e2-frame}.
If ${\mathcal V}$ is a Riesz basis, the constants $A$ and $B$ in \eqref{e2-frame} and \eqref{e2- Riesz-sequence} are the same (see \cite[Proposition 3.5.5]{Cr}).
An orthonormal basis is a Riesz basis; we can write $w=\sum_{j\in\Z} \l v_j, \, w\r v_j$ for every $w\in H$ and this representation formula yields the following important identities: for every $ w,\ z\in H$,
\begin{equation}\label{e-Planch}
||w||^2= \sum_{n\in\Z } |\l v_n,\, w\r|^2, \quad \l w, z\r= \sum_{n\in\Z } \l v_n,\, w\r\overline{\l v_n, z\r}.
\end{equation}
The following useful proposition can be found in \cite[Prop. 3.2.8]{Cr}.
\begin{proposition}\label{prop-C} A sequence of unit vectors in $H$ is a Parseval frame if
and only if it is an orthonormal Riesz basis.
\end{proposition}
\medskip
Let $D\subset \mathbb R^d$ be a measurable set, with $|D|<\infty$.
An {\it exponential basis} of $L^2(D)$ is a Riesz basis made of functions in the form of $ e^{2\pi i x\cdot \lambda }$, where $ \lambda \in\mathbb R^d$.
Exponential bases are important in the applications because they allow one to represent functions in $L^2(D)$ in a stable manner, with coefficients that are easy to calculate.
The following lemma is easy to prove (see e.g.\ \cite[Prop. 2.1]{DK}).
\begin{lemma}\label{L-dil-basis} Let $D_1\subset D\subset D_2$ be measurable sets of $\mathbb R^d$, with $|D_2|<\infty$.
%
Let ${\cal V}=\{ e^{2\pi i x\cdot \lambda_n }\}_{n\in\Z}$ be a Riesz basis of $L^2(D)$ with frame constants $0<A\leq B<\infty$; then, ${\cal V}$ is a Riesz sequence on $L^2(D_2)$ and a frame on $L^2(D_1)$
with the same frame constants.
\end{lemma}
\subsection{The Beurling density}
In \cite{B, B1} A. Beurling characterized sampling sets by means of their density.
For $h > 0$ and $x \in\mathbb R^d$, we let $Q _h(x)$ denote the closed cube centered at $x$ with
side length $h$.
Let $\Lambda=\{\lambda_j\}_{j\in\Z}\subset \mathbb R^d$ be {\it uniformly discrete}, i.e., we assume that
$|\lambda_j-\lambda_k|\ge \delta>0$
whenever $\lambda_j\ne\lambda_k$. Following \cite{CDH} we denote with
\begin{align*} {\mathcal D}^+(\Lambda) &= \limsup_{h\to \infty} \frac{\sup_{x\in\mathbb R^d} |\Lambda\cap Q_h(x)|}{h^d}\\
{\mathcal D}^-(\Lambda) &= \liminf_{h\to \infty} \frac{\inf_{x\in\mathbb R^d} |\Lambda\cap Q_h(x)|}{h^d}
\end{align*}
the upper and lower density of $\Lambda$.
If ${\mathcal D}^-(\Lambda)={\mathcal D}^+(\Lambda)$ we say that $\Lambda$ has {\it uniform Beurling density} ${\mathcal D}(\Lambda)$.
Theorem \ref{T-density-exp} below is a generalization of theorems of Landau and Beurling \cite{B, Landau} in dimension $d\ge 1$. See also \cite{NO} and \cite[Sect. 2]{S}.
\begin{theorem}\label{T-density-exp}
If $E(\Lambda)=\{e^{2\pi i \lambda_j\cdot x}\}_{j\in\Z}$ is a frame in $L^2(D)$, then ${\mathcal D}^-(\Lambda)\ge |D|$.
If $E(\Lambda)$ is a Riesz sequence in $L^2(D)$, then ${\mathcal D}^+(\Lambda)\leq |D|$.
\end{theorem}
Thus, a necessary condition for $E(\Lambda)$ to be a Riesz basis in $L^2(D)$ is that ${\mathcal D} (\Lambda)= |D|$. In the special case of $\Lambda=\Z^d$ we have the following
\begin{corollary}\label{C-density}
If $E(\Z^d)$ is a frame in $L^2(D)$ then $|D|\leq 1$; if $E(\Z^d)$ is a Riesz sequence in $L^2(D)$ then $|D|\ge 1$.
\end{corollary}
\subsection{Shift invariant spaces}
We let
$$
V^2(\varphi):=\overline{\Span \{\tau_k\varphi \}_{k\in\Z^d } }
$$
where $\varphi\in L^{2}(\mathbb R^d)$ and ``bar'' denotes the closure in $L^2(\mathbb R^d)$.
The space $ V^2(\varphi)$ is {\it shift-invariant}, i.e. if $f\in V^2(\varphi)$ then also $\tau_m f\in V^2(\varphi)$ for every $m\in\Z^d$.
Shift-invariant spaces of functions appear naturally in signal theory and in other branches of applied sciences.
Following \cite{Aldroubi}, \cite{Christiansen}, \cite{CCS}
we say that the translates $\{ \tau_k\varphi \}_{k\in\Z^d}$ form a Riesz basis in $V^2 ( \varphi ) $ if there exist constants $0<A,\ B<\infty$ such that, for every finite set of coefficients ${\bf d}= \{d_j\} \subset\mathbb C $, we have that
\begin{equation}\label{E-p-basis-2}
A \|{\bf d}\|_{\ell^2}^2 \leq \| \sum_j d_j \tau_j\varphi \|_{2}^2 \leq B\|{\bf d}\|_{\ell^2}^2.
\end{equation}
If \eqref{E-p-basis-2} holds, then $
V^2(\varphi )= \left\{ f= \sum_{k\in\Z^d}d_k \tau_k\varphi, \ {\bf d} \in \ell^2\ \right\},
$
and the sequence $\{d_k \}_{k\in\Z^d}$ is uniquely determined by $f$.
\medskip
The following theorem is well known: see e.g.\ \cite{JM} or \cite[Prop. 1.1]{AS}.
\begin{theorem} \label{T-basis2}
The set $\{ \tau_m\varphi \} _{m\in\Z^d}$ is a Riesz basis in $V^2(\varphi)$ with frame constants $0<A\leq B<\infty$ if and only if
\begin{equation}\label{e1}
A= \inf_{y\in \cal Q_d}\sum_{m \in\Z^d} |\hat \varphi(y+m)|^2 \leq \sup_{y\in \cal Q_d}\sum_{m \in\Z^d} |\hat \varphi(y+m)|^2 = B.
\end{equation}
\end{theorem}
\section{ Proof of Theorem \ref{T-Riesz }}
Let $\ell^2_0(\Z^d)\subset \ell^2(\Z^d)$ be the set of sequences ${\bf a}=(a_n)_{n\in\Z^d}$ such that $a_n=0 $ whenever $|n|\ge N$, with $N=N({\bf a})\ge 0$.
Let
$ S({\bf a}) = \sum_{n\in\Z^d} a_n e^{2\pi i n\cdot x}$.
Recall that $E(\Z^d)$ is a Riesz sequence in $L^2(D)$ if and only if there exist constants $0<A\leq B<\infty$ such that
\begin{equation}\label{ineq-S}
A\|{\bf a}\|_2^2\leq \|S({\bf a}) \|_{L^2(D)} ^2\leq B\|{\bf a}\|_2 ^2
\end{equation}
for every ${\bf a}\in \ell^2_0(\Z^d).$
We compute:
\begin{align}\nonumber
\|S({\bf a}) \|_{L^2(D)} ^2= & \int_D \left|\sum_{n\in\Z^d} a_n e^{2\pi i n\cdot x}\right|^2 dx= \int_D\left(\sum_{n, m\in\Z^d} a_n \overline{a_m}\, e^{2\pi i (n-m)\cdot x}\right)dx
\\\label{Sa}
=& \sum_{n, m\in\Z^d} a_n \overline{a_m}\, \int_D e^{2\pi i (n-m)\cdot x} dx
= \sum_{n, m\in\Z^d} a_n \overline{a_m} \widehat{\chi_D}(n-m).
\end{align}
Let $T_D$ be the operator, initially defined in $\ell^2_0(\Z^d)$, as:
\begin{equation}\label{e-TD} T_D({\bf a})_m= \sum_{n \in\Z^d} a_n \widehat{\chi_D}(n-m), \ m\in\Z^d.
\end{equation}
The calculation above shows that
$
\|S({\bf a}) \|_{L^2(D)} ^2=\l T_D({\bf a}), \ {\bf a}\r_2,
$
where $\l\, , \,\r_2 $ denotes the inner product in $\ell^2(\Z^d)$. We can easily verify that $T_D$ is self-adjoint and, in view of \eqref{Sa}, that
$ \l T_D({\bf a}),\ {\bf a}\r_2\ge 0$ for every ${\bf a}\in\ell^2_0(\Z^d)$; thus, \eqref{ineq-S} holds if and only if
\begin{equation}\label{e-1}
A\|{\bf a}\|_2^2\leq \l T_D({\bf a}), \ {\bf a}\r_2 \leq B\|{\bf a}\|_2^2,\quad {\bf a}\in \ell^2_0(\Z^d).
\end{equation}
%
To prove \eqref{e-1} we need the following
\begin{lemma}\label{L-Haase}
Assume that $\displaystyle ||T_D||_{\ell^2\to\ell^2}= \sup_{||{\bf a}||_2=1}|| T_D ({\bf a})||_2 <\infty$.
The inequality below holds for every ${\bf a}\in \ell^2_0(\Z^d)$ such that $||{\bf a} ||_2=1$.
\begin{equation}\label{e2}
\displaystyle \frac{ ||T_D({\bf a})||_2^2}{ ||T_D||_{\ell^2\to\ell^2}}\leq \l T_D({\bf a}),\, {\bf a}\r_2 \leq ||T_D||_{\ell^2\to\ell^2}
.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{T-Riesz }]
Let $\Phi(x) $ be as in \eqref{e-Phi}. We show that if there exist constants $0<A' \leq B' <\infty$ such that $A' \leq \Phi(x)\leq B' $ a.e. in $\cal Q_d$, then $E(\Z^d)$ is a Riesz sequence in $L^2(D)$.
Since $\Phi(x)=\sum_{m \in\Z^d} \chi_D(x+m) =\sum_{m \in\Z^d} |\chi_D(x+m)|^2 $,
by Theorem \ref{T-basis2} the set $\{ \tau_m\hat \chi_D \} _{m\in\Z^d}$ is a Riesz basis of $V^2(\hat \chi_D )$ with frame constants $0<A' ,\ B' <\infty$.
In view of \eqref{E-p-basis-2} and \eqref{e-TD}, the inequality
\begin{equation}\label{e-prime}
A' \|{\bf a}\|_2^2\leq \|T_D({\bf a})\|_2^2\leq B' \|{\bf a}\|_2^2
\end{equation}
holds for every ${\bf a}\in \ell^2_0(\Z^d)$. By Lemma \ref{L-Haase}, we have \eqref{e-1}, as required.
If $E(\Z^d)$ is a Riesz sequence on $D$, we argue as in the proof of \cite[Theorem 3.1]{selvan}. Using Plancherel's identity and the Poisson summation formula, from \eqref{e-TD} we obtain
\begin{align}\nonumber
\|T_D({\bf a}) \|_{2} ^2 &= \sum_{m\in\Z^d} {\Big \vert} \sum_{n\in\Z^d} a_n\widehat{\chi_D}(n-m) {\Big \vert} ^2 = \int_{\cal Q_d} {\Big\vert} \sum_{m\in\Z^d} \sum_{n\in\Z^d} a_n\widehat{\chi_D}(n-m) e^{2\pi i x\cdot m} {\Big\vert}^2dx
\\ \nonumber & =\int_{\cal Q_d} {\Big\vert} \sum_{n\in\Z^d} a_n e^{2\pi i x\cdot n} \sum_{k\in\Z^d} \widehat{\chi_D}(-k) e^{2\pi i x\cdot k} {\Big\vert}^2dx
\\ \label{new e}& =
\int_{\cal Q_d} \, {\Big\vert}\sum_{n\in\Z^d} a_n e^{2\pi i n\cdot x}{\Big\vert}^2\, |\Phi(x)|^2 dx,
\end{align}
where we have set $k=m-n$ and used the Poisson summation formula and the identity $\widehat{\chi_D}(-k)=\overline{\widehat{\chi_D}(k)}$ to write $\sum_{k\in\Z^d} \widehat{\chi_D}(-k) e^{2\pi i x\cdot k}=\overline{\Phi(x)}=\Phi(x)$.
By assumption, $\l T_D({\bf a}),\,{\bf a}\r_2\leq B\|{\bf a}\|_2^2$ for every ${\bf a}\in\ell^2_0(\Z^d)$; since $T_D$ is positive and self-adjoint, $T_D$ is bounded on $\ell^2(\Z^d)$, and so the integral in \eqref{new e} is finite and $\Phi(x) <\infty$ a.e.
To show that $\Phi(x) >0$ a.e. we argue by
contradiction: suppose that there exists a set $\Omega$ of positive measure where $\Phi(x)\equiv 0$. We can assume that $\Omega\subset \cal Q_d$. Since $E(\Z^d)$ is an orthonormal basis of $L^2(\cal Q_d)$, we can write $\chi_{\Omega}(x)=\sum_{n\in\Z^d} b_ne^{2\pi i n\cdot x}$, with ${\bf b}\in \ell^2(\Z^d)$ and ${\bf b}\ne 0$. By \eqref{new e}, $\|T_D({\bf b})\|_2^2=\int_{\cal Q_d} |\Phi(x)|^2 \, \left|\sum_{n\in\Z^d} b_n e^{2\pi i n\cdot x}\right|^2dx =\int_{\Omega} |\Phi(x)|^2\, dx=0$; thus $\|S({\bf b})\|_{L^2(D)}^2=\l T_D({\bf b}),\,{\bf b}\r_2=0$, which contradicts \eqref{ineq-S}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{L-Haase}]
The right inequality in \eqref{e2} is \cite[Theorem 13.8]{Haase} so we only need to prove the left inequality. Let $\alpha= \sup_{||{\bf a}||_2=1}|\l T_D({\bf a}),\, {\bf a}\r|$ and $U= \alpha I-T_D$, where $I$ is the identity operator in $\ell^2(\Z^d)$. It is easy to verify that $U$ is positive and that
\begin{equation}\label{e-id3}
T_D\,U\,T_D+\,U\,T_D\,U\,=\alpha^2T_D-\alpha T_D^2.
\end{equation}
The operators $T_D\,U\,T_D$ and $\,U\,T_D\,U\,$ are positive too; indeed, for every ${\bf a}\in \ell^2$, we have that $\l T_D\,U\,T_D {\bf a}, \, {\bf a}\r_2= \l \,U\,(T_D{\bf a}), \ T_D{\bf a}\r_2\ge 0$ and $\l \,U\,T_D\,U\,{\bf a} , \, {\bf a}\r_2= \l T_D(\,U\,{\bf a}), \ \,U\, {\bf a}\r_2 \ge 0$ because $T_D$ and $\,U\,$ are both positive. By \eqref{e-id3}, also the operator $\alpha T_D- T_D^2$ is positive. For every ${\bf a}\in \ell^2$ with $||{\bf a}||_2=1$, we have that
$$
\l (\alpha T_D- T_D^2) {\bf a}, \ {\bf a}\r_2= \alpha \l T_D{\bf a}, \ {\bf a}\r_2- \l T_D^2{\bf a}, \ {\bf a}\r_2
= \alpha \l T_D{\bf a}, \ {\bf a}\r_2-||T_D{\bf a}||_2^2\ge 0
$$
and the left inequality in \eqref{e2} is proved.
\end{proof}
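Since Lemma \ref{L-Haase} is a purely operator-theoretic statement, it can be sanity-checked in finite dimensions; in the sketch below (our own check) a random positive semidefinite matrix stands in for $T_D$:

```python
import numpy as np

# Finite-dimensional sanity check (our own sketch) of the Lemma: for a
# positive semidefinite self-adjoint matrix T and a unit vector a,
#     ||T a||^2 / ||T||  <=  <T a, a>  <=  ||T||.

rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
T = M @ M.T                           # symmetric positive semidefinite
op_norm = np.linalg.norm(T, 2)        # operator norm = largest eigenvalue

for _ in range(100):
    a = rng.standard_normal(n)
    a /= np.linalg.norm(a)            # normalize: ||a|| = 1
    quad = a @ T @ a                  # the quadratic form <T a, a>
    assert np.linalg.norm(T @ a) ** 2 / op_norm <= quad + 1e-10
    assert quad <= op_norm + 1e-10
print("Lemma inequality verified on 100 random unit vectors")
```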
\medskip
\noindent
{\it Remark.} Since $|S({\bf a})(x)|^2$ is $\Z^d$-periodic, we also have the identity $\|S({\bf a})\|_{L^2(D)}^2=\int_{\cal Q_d} |S({\bf a})(x)|^2\,\Phi(x)\, dx$, from which it follows that the best constants $A$ and $B$ in \eqref{ineq-S} are the essential minimum and maximum of $\Phi(x)$ on the unit cube $\cal Q_d$.
Thus,
$A $ and $B $ are integers.
When $|D|=1$, Theorem \ref{T-basis} shows that $E(\Z^d)$ is a Riesz sequence if and only if the integer translates of $D$ tile $\mathbb R^d$, and so $A =B =1$. In general, since $\int_{\cal Q_d}\Phi(x)\,dx=|D|$, we have $A \leq |D|$ and $B \ge |D|$.
\section{Proof of Theorem \ref{T-frame} }
\medskip
Let $D\subset \mathbb R^d$ be measurable, with $|D|\leq 1$. By Lemma \ref{L-dil-basis},
the theorem is trivial when $D\subset \cal Q_d $, so we assume that $ D- \cal Q_d $ has positive measure. Let $D_1$, ...,\, $D_N ,\,...$ be a (possibly infinite) family of disjoint sets of positive measure such that $D-\cal Q_d= \cup_{j} D_j$. We can choose the $D_j$ in such a way that, for certain vectors $v_1$, ..., $v_N,\, ... \in \Z^d$, we have that $D_j+v_j\subset \cal Q_d. $ Let $D_0= D\cap \cal Q_d $ and $v_0=0$.
%
We prove the following
\begin{lemma} \label{L-frame2}
$E(\Z^d)$ is a frame for $L^2( D)$ if and only if, for every $ v \in\Z^d$ and every $k\ne j$,
\begin{equation}\label{e-assumptions-frame}|( v +D_j)\cap D_k |= 0. \end{equation}
\end{lemma}
It is easy to verify that \eqref{e-assumptions-frame} is equivalent to \eqref{e-ass-fr}, and so Theorem \ref{T-frame} is equivalent to Lemma \ref{L-frame2}.
\begin{proof}
Assume that $|(D_1 + v) \cap D_0|>0 $ for some $v\in\Z^d$ (the proof is similar in the other cases). We can assume without loss of generality that $D_1 + v \subset \cal Q_d $ (see Figure 1); otherwise we let $D_1= D_1' \cup D_1''$, with $D'_1+v\subset \cal Q_d$ and we replace $D_1$ with $D_1'$. We show that $E(\Z^d)$ is not a frame on $L^2(D)$.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
%
\fill[gray!20] (-1.5,0)-- (-1.5, 3)-- ( -.5,3)--(0, 4)--(.5,3)--(1.5,3)--(1.5, 0)--( .7, 0)-- (.7, .7)--(-.7, .7)-- (-.7, 0)--(-1.5,0);
\fill[gray!20] ( -.5,3)--(0, 4)--(.5,3)-- (-.5,3);
\draw[black]( -.5,0)--(0, 1)--(.5,0)-- (-.5,0);
\fill[gray!60] ( -.5,0)--(0, 1)--(.5,0)-- (-.5,0);
\draw[ black] (-1.5,0)-- (-1.5, 3)-- ( -.5,3)--(0, 4)--(.5,3)--(1.5,3)--(1.5, 0)--( .7, 0)-- (.7, .7)--(-.7, .7)-- (-.7, 0)--(-1.5,0);;
\draw[thick, black, dashed] (-1.5,0)--(-1.5,3)--(1.5, 3)--(1.5,0)--(-1.5, 0);
\draw [-> ] (0,3)-- (0,1.5);
\draw (.2 ,2.2) node [black ] {$ v$};
\draw (-.5 ,1.5) node [black ] {$D_0$};
\draw (0 ,3.3) node [black ] { \small{$D_1$}};
\draw (0 , .2) node [black ] { \tiny{$ D_1\!+\!v$}};
\draw (2 , 1.5) node [black ] {$\cal Q_d $};
\end{tikzpicture}
\caption{ }
\end{center}
\end{figure}
Every $f\in L^2(D)$ can be written as $f = f_0 + f_1$ where $f_0 = f \chi_{D-D_1}$ and $f_1 = f \chi_{D_1}$.
%
Recall that $\tau_w g(x)= g (x+w)$.
It follows that
\begin{align}\nonumber
|\inner{e^{2\pi i n\cdot x}}{f}_{L^2(D)}|^2
&= |\inner{e^{2\pi i n\cdot x}}{f_0}_{L^2(D-D_1)} + \inner{e^{2\pi i n\cdot x}}{f_1}_{L^2(D_1)}|^2 \\\label{1}
&= |\inner{e^{2\pi i n\cdot x }}{f_0}_{L^2(D-D_1)} + \inner{e^{2\pi i n\cdot (x-v)}}{\tau_{- v}f_1 }_{L^2( D_1+v)}|^2 \\\nonumber
&= |\inner{e^{2\pi i n\cdot x}}{f_0}_{L^2(\cal Q_d )} + \inner{e^{2\pi i n\cdot x}}{\tau_{-v}{f_1}}_{L^2(\cal Q_d )}|^2 \\\nonumber
&= |\inner{e^{2\pi i n\cdot x}}{f_0}_{L^2(\cal Q_d )}|^2 + |\inner{e^{2\pi i n\cdot x}}{\tau_{-v}{f_1}}_{L^2(\cal Q_d )}|^2 \\\nonumber & \quad + 2\Real{\left(\inner{e^{2\pi i n\cdot x}}{f_0}_{L^2(\cal Q_d )} \conjugate{\inner{e^{2\pi i n\cdot x}}{\tau_{-v}{f_1}}_{L^2(\cal Q_d )}}\right)}.
\end{align}
We have used the change of variables $x\to x-v$ in the second inner product in \eqref{1} and the fact that $e^{2\pi i n\cdot v}=1$. Since $E(\Z^d)$ is an orthonormal basis in $\cal Q_d $, the identities \eqref{e-Planch} in Section 2 and the calculation above yield
%
\begin{equation}\label{e-id1}
%
\begin{split}
\sum_{n \in \Z^d} |\inner{e^{2\pi i n\cdot x}}{f}_{L^2(D)}|^2
%
&= \norm{f_0}_{L^2(\cal Q_d )}^2 + \norm{ \tau_{-v} f_1 }_{L^2(\cal Q_d )}^2 \\ & + 2\Real{ \inner{f_0}{\tau_{-v}{f_1}}_{L^2(\cal Q_d )} }.
\end{split}
\end{equation}
If we let $B= (D-D_1)\cap (D_1 +v)$
we can choose $ f=f_1+f_0$, with $ f_1(x)= \chi_{B}(x+v)$ and $f_0(x) =- \chi_B(x) $; from \eqref{e-id1} it readily follows that
%
$\sum_{n \in \Z^d} |\inner{e^{2\pi i n\cdot x}}{f}_{L^2(D)}|^2 =0$, which contradicts
\eqref{e2-frame}.
\medskip
We now assume that $|(w+D_j)\cap D_k|=0$ for every $k\ne j$ and every $w\in\Z^d$; we prove that $E(\Z^d)$ is a tight frame in $L^2(D)$.
We assume for simplicity that $ D_1+v \subset \cal Q_d $ for some $v \in\Z^d$. Let $f=f_0+f_1$ be as in the first part of the proof. By assumption, $|(D_1+v) \cap (D-D_1)| =0$, and so \eqref{e-id1} yields
\begin{align*}\sum_{n \in \Z^d} |\inner{e^{2\pi i n\cdot x}}{f }_{L^2(D)}|^2& = \norm{f_0}_{L^2(\cal Q_d )}^2 + \norm{ \tau_{-v} f_1 }_{L^2(\cal Q_d )}^2
\\ &=\norm{f \chi_{D-D_1}}_{L^2(\cal Q_d )}^2 + \norm{ f \chi_{D_1 +v}}_{L^2(\cal Q_d )}^2
= \|f\|_{L^2(D)}^2.
\end{align*}
Thus, $E(\Z^d)$ is a tight frame in $L^2(D)$ as required.
\end{proof}
\medskip
The proof of Theorem \ref{T-frame} shows that if \eqref{e-assumptions-frame} is not satisfied, we can produce a function $f\in L^2(D)$ for which $\l f,\, e^{2\pi i x\cdot n}\r_{L^2(D)}=0$ for every $n\in\Z^d$, and so $E(\Z^d)$ is not complete. This observation proves the following:
\begin{corollary}\label{C-complete}
$E(\Z^d)$ is complete in $L^2(D)$ if and only if the integer translates of $D$ intersect on sets of measure $0$.
\end{corollary}
\begin{proof}[Proof of Theorem \ref{T-basis}]
We have proved that a) $\iff$ b); we show that $b)\iff c)$.
By Corollary \ref{C-complete},
$E(\Z^d)$ is complete in $L^2(D)$ if and only if the integer translates of $D$ overlap only on sets of measure zero. Thus, c) $\Rightarrow$ b). Let us prove that b) $\Rightarrow$ c); let $D_0,\, D_1$, ...,\, $D_N ,\,...$ and $v_0, \,v_1, \, ...,\, \, v_N,\, ...$ be as in the proof of Lemma \ref{L-frame2}.
Since $|(D_j+v_j)\cap (D_k+v_k)|=0 $ when $k\ne j$, and
\begin{equation}\label{e-Q-D}
1= |\cal Q_d|=|D|= |D_0| + \sum_j |D_j+v_j| \end{equation}
necessarily $ \cup_j (D_j+v_j) =\cal Q_d$ and the integer translates of $D$ tile $\mathbb R^d$.
By Fuglede's theorem, c) $\iff$ d). Clearly d) $\Rightarrow$ e); to finish the proof of the theorem we show that e) $\Rightarrow$ c). By Theorem \ref{T-Riesz }, the integer translates of $D$ cover $\mathbb R^d$; thus, $ \cup_j (D_j+v_j) =\cal Q_d$ and from \eqref{e-Q-D} it follows that the $D_j+v_j$'s can only intersect on sets of measure zero. Thus, the integer translates of $D$ can only intersect on sets of measure zero and c) is proved.
\end{proof}
\section{The broken interval}
In this section we solve the first problem stated in the introduction. We let $J= [0,\alpha)\cup [\alpha+r, L+r)\subset \mathbb R$, with $0<\alpha<L$ and $r>0$.
By Lemma \ref{L-frame2} and Theorem \ref{T-Riesz }, $E(\Z)$ is a frame on $L^2(J)$ if and only if the integer translates of $J$ do not overlap in $[0,1]$ and it is a Riesz sequence if and only if the integer translates of $J$ cover $\mathbb R$.
\medskip
Let $[r]$ be the integer part of $r$, i.e., the largest integer $n \leq r$; let $\{r\}= r - [r]$ be the fractional part of $r$.
We prove the following
\begin{theorem}\label{T-riesz-interval}
a) $E(\Z)$ is a frame on $J$ if and only if $L + \{r\} \leq 1$.
\\
b) $E(\Z)$ is a Riesz sequence on $J $ if and only if one of the following is true: \begin{itemize}
\item[i)]
$\alpha \geq 1$ or $L - \alpha \geq 1$
\item[ ii)]
$\{r\} = 0$ and $L \geq 1$
\item[ iii)] $1 \leq L < 2$ and $L + \{r\} \geq 2$.
\end{itemize}
\medskip
\end{theorem}
%
To prove b) we will need the following
\begin{lemma}\label{L-frac-part}
The integer translates of $J$ cover $\mathbb R$ if and only if the integer translates of $J' = [0,\alpha) \cup [\alpha + \{r\}, L + \{r\})$ cover $\mathbb R$.
\end{lemma}
\begin{proof}
If the integer translates of $J$ cover $\mathbb R$ then for $ x\in\mathbb R$, there is an integer $m$ such that either $x \in (m, \alpha +m)$ or $x \in (\alpha + r + m,\ L + r + m)$. If $x \in (m, \alpha + m)$ then clearly $x\in J'+m$ as well. If $x \in (\alpha + r + m,\ L + r + m)$, then $x \in (\alpha + \{r\} + [r] + m,\ L + \{r\} + [r] + m)$, i.e. $x$ is in the translation of $J'$ by $[r]+m$. The converse is similar.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T-riesz-interval}]
By Theorem \ref{T-frame} and Lemma \ref{L-frac-part}, $E(\Z)$ is a frame on $J$ if and only if the integer translates of $[0,\alpha)\cup[\alpha + \{r\}, L + \{r\})$ do not intersect in $[0,1]$, that is, if and only if $(0,\alpha) \cap (\alpha + \{r\} - 1, L + \{r\} - 1) = \varnothing$. This is equivalent to having either $\alpha \leq \alpha + \{r\} -1$, which is impossible, or $L + \{r\} \leq 1.$ That proves part a).
\medskip
Let us prove part b). By Lemma \ref{L-frac-part} we can assume that $r = \{r\}$, i.e. that $0 \leq r < 1$.
By Theorem \ref{T-Riesz }, $E(\Z)$ is a Riesz sequence on J if and only if the integer translates of J cover $\mathbb R$.
If one of the connected components $[0,\alpha)$ or $[\alpha + r, L + r)$ covers $\mathbb R$ by integer translations, we have that either $\alpha \geq 1$ or $L - \alpha \geq 1$, and i) is proved.
If neither component covers $\mathbb R$ by integer translations, i.e. if both $\alpha < 1$ and $L - \alpha < 1$, we can consider two sub-cases:
\begin{itemize}\item If $r = 0$, we have that $J=[0, L)$, and the integer translates of $J$ cover $\mathbb R$ if and only if $L \geq 1$; that proves ii).
\item Suppose next that $r > 0$.
The integer translates of $J$ cover $\mathbb R$ if and only if the ``gap'' $(\alpha, \alpha + r)$ is covered by integer translates of $J$. This is possible if and only if
$ (1, \alpha +1) \cup (\alpha + r - 1, L + r -1)\supset (\alpha, \alpha + r) $ (see Figure 2).
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
%
\draw[ black, ->] (-.5 ,0 )-- ( 5 , 0 ) ;
\draw[ black, thick] (0,0)-- ( 1.5, 0);
\draw[ black, thick] ( 2.5,0)-- ( 3.5, 0 );
\draw[ black, thick] (1,1)-- ( 2.5, 1);
\draw[ black, thick] ( 3.5,1)-- ( 4.5, 1);
\draw ( 0,-.2 ) node [black ] {{\tiny $ 0 $}};
\draw ( .2, .2 ) node [black ] {{\tiny $ J $}};
\draw ( 1.5, -.2) node [black ] {{\tiny $ \alpha $}};
\draw (2.5, -.2 ) node [black ] {{\tiny $ r+\alpha $}};
\draw (3.5,-.2 ) node [black ] {{\tiny $ r+L $}};
\draw (5,1 ) node [black ] {{\tiny{\bf $J+1 $}}};
\draw [black,fill] (1,1 ) circle [radius=0.06];
\draw [black,fill] (2.5,1 ) circle [radius=0.06];
\draw [black,fill] (3.5,1 ) circle [radius=0.06];
\draw [black,fill] (4.5,1 ) circle [radius=0.06];
\draw[black, dashed] (1,1)--(1,0);
\draw [black,fill] (0, 0) circle [radius=0.06];
\draw [black,fill] (1.5, 0) circle [radius=0.06];
\draw [black,fill] (2.5, 0) circle [radius=0.06];
\draw [black,fill] (3.5, 0) circle [radius=0.06];
\draw [black,fill] (1, 0) circle [radius=0.06];
\draw (1,-.2 ) node [black ] {{\tiny $ 1 $}};
\end{tikzpicture}
\caption{ }
\end{center}
\end{figure}
We have $(1, \alpha + 1) \cap (\alpha, \alpha + r) = (1, \alpha + r)$, because $\alpha , r < 1$. Thus, the integer translates of $J$ cover $\mathbb R$ if and only if
$(\alpha + r -1, L + r - 1) \cap (\alpha, \alpha + r) \supset (\alpha, 1)$. This is equivalent to the conditions
%
\begin{center}
$ \begin{cases} r - 1 \leq 0 \\ L + r -1 \geq 1 \\ \alpha + r \geq 1 \end{cases} \Longleftrightarrow \begin{cases} r \leq 1 \\ L + r \geq 2 \\ \alpha + r \geq 1 \end{cases}.$
\end{center}
%
Since $\alpha < 1$ and $L - \alpha < 1$ by assumption, and recalling that $r < 1$, we can see at once that the condition $L + r \geq 2$ implies $\alpha + r \geq 1$. Indeed, if $\alpha + r < 1$, then $$L + r = L - \alpha + \alpha + r < L - \alpha + 1 < 2.$$ Thus, the integer translates of $J$ cover $\mathbb R$ if and only if $L + r \geq 2$, and we have iii). The theorem is proved.
\end{itemize}
%
\end{proof}
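The closed-form criteria of Theorem \ref{T-riesz-interval} can be confronted with a brute-force computation. In the sketch below (our own check; the parameter triples are chosen by us, away from boundary cases where a finite grid is unreliable), part a) is compared with the grid condition $\Phi\leq 1$ and part b) with the grid condition $\Phi\geq 1$:

```python
import math

# Brute-force check (our own sketch) of the broken-interval criteria for
# J = [0, alpha) u [alpha + r, L + r); the parameters below are our choice.

def in_J(x, alpha, L, r):
    return (0 <= x < alpha) or (alpha + r <= x < L + r)

def phi_values(alpha, L, r, n=1000, m_range=range(-12, 13)):
    """Covering function Phi on grid midpoints of one period [0, 1)."""
    return [
        sum(in_J((k + 0.5) / n + m, alpha, L, r) for m in m_range)
        for k in range(n)
    ]

def riesz_b(alpha, L, r):          # closed-form criterion of part b)
    fr = r - math.floor(r)
    return (alpha >= 1 or L - alpha >= 1) or (fr == 0 and L >= 1) \
        or (1 <= L < 2 and L + fr >= 2)

def frame_a(alpha, L, r):          # closed-form criterion of part a)
    return L + (r - math.floor(r)) <= 1

# (alpha, L, r) triples chosen away from boundary cases
for params in [(0.5, 1.2, 0.3), (0.5, 1.2, 0.9), (0.3, 0.55, 1.4),
               (1.3, 1.8, 0.7), (0.4, 1.6, 0.5)]:
    values = phi_values(*params)
    assert (min(values) >= 1) == riesz_b(*params)   # translates cover R
    assert (max(values) <= 1) == frame_a(*params)   # translates don't overlap
print("criteria match the brute-force grid test")
```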
\section{The rotated square}
Let $ Q_h= Q_h (0)=[-\frac{h}{2}, \frac{h}{2}] \times [-\frac{h}{2}, \frac{h}{2}]$ be the square in $\mathbb R^2$ centered at the origin with sides of length $h$. Let $A_\theta=\begin{bmatrix}
\cos{(\theta)} & \sin{(\theta)} \\
-\sin{(\theta)} & \cos{(\theta)}
\end{bmatrix}$ be the matrix of a rotation by an angle $\theta$, and let
$
Q_{h,\theta} =A_{\theta}Q_h(0)
$
be the square obtained from the rotation of $Q_h(0)$. The following theorem offers a complete solution to Problem 2:
\begin{theorem}
a) $E(\Z^2)$ is a Riesz sequence on $L^2(Q_{h,\theta})$ if and only if
$ h\ge \sin(\theta)+\cos(\theta) $.
\\
b) $E(\Z^2)$ is a frame on $L^2(Q_{h,\theta})$ if and only if $h\leq \frac{1}{\sin(\theta)+\cos(\theta)}$.
\end{theorem}
\begin{proof}
We first prove Part a).
Let $P_1= (\frac 12, \frac 12)$, $P_2= (-\frac 12, \frac 12)$, $P_3= (-\frac 12, -\frac 12)$ and $P_4= (\frac 12, -\frac 12)$ be the vertices of $\cal Q_2$. We first find conditions on $h$ and $\theta$ for which the points $P_1$, ..., $P_4$ lie on the sides of $Q_{h,\theta}$.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
%
\draw[ black, dashed] (-1.5 ,-1.5 )-- ( 1.5 , -1.5 )-- (1.5 , 1.5 )--(-1.5 , 1.5 )--(-1.5 , -1.5 );
\fill[gray!10](-2.4,0)-- ( 0, -2.4)-- ( 2.4,0)-- (0, 2.4 )--(-2.4,0);
\draw[ black] (-2.4,0)-- ( 0, -2.4)-- ( 2.4,0)-- (0, 2.4 )--(-2.4,0);
\draw[ black, dashed] (-1.5 ,-1.5 )-- ( 1.5 , -1.5 )-- (1.5 , 1.5 )--(-1.5 , 1.5 )--(-1.5 , -1.5 );
\draw[ black] (-3 ,0)-- ( 0, -3 )-- ( 3 ,0)-- (0, 3 )--(-3 ,0);
\draw (0 ,0) node [black ] { $ Q_{h,\theta}$};
%
\draw (1.8, -1.8 ) node [black ] {{\small $ P_4 $}};
\draw (1.8, 1.8 ) node [black ] {{\small $ P_1 $}};
\draw (-1.8, 1.8 ) node [black ] {{\small $ P_2 $}};
\draw (-1.8, -1.8 ) node [black ] {{\small $ P_3 $}};
\draw (-3.3,0 ) node [black ] {{\small $ Q_3 $}};
\draw (3.3,0 ) node [black ] {{\small $ Q_1 $}};
\draw (0, -3.3 ) node [black ] {{\small $ Q_4 $}};
\draw (0,3.3) node [black ] {{\small $ Q_2 $}};
\draw (2.5,2 ) node [black ] {{ $Q_{l(\theta), \theta}$}};
\fill[gray!60] ( 1.5 , 1.5 )-- ( 1.1 , 1.5 )-- (1.1 , 1.3 )--( 1.5 , 1.3)--( 1.5 , 1.5 );
\draw[ black, thick] ( 1.5 , 1.5 )-- ( 1.1 , 1.5 )-- (1.1 , 1.3 )--( 1.5 , 1.3)--( 1.5 , 1.5 );
\fill[gray!60] ( -1.5 , -1.5 )-- (- 1.1 , -1.5 )-- (-1.1 , -1.3 )--(- 1.5 , -1.3)--( -1.5 , -1.5 );
\draw[ black, thick] (- 1.5 ,- 1.5 )-- ( -1.1 , - 1.5 )-- (-1.1 , -1.3 )--( -1.5 , -1.3)--( -1.5 , -1.5 );
\fill[gray!60] ( 1.5 , -1.5 )-- ( 1.1 , -1.5 )-- ( 1.1 , -1.3 )--( 1.5 , -1.3)--( 1.5 , -1.5 );
\draw[ black, thick] ( 1.5 ,- 1.5 )-- ( 1.1 , - 1.5 )-- ( 1.1 , -1.3 )--( 1.5 , -1.3)--( 1.5 , -1.5 );
\fill[gray!60] ( -1.5 , 1.5 )-- (- 1.1 , 1.5 )-- (-1.1 , 1.3 )--(- 1.5 , 1.3)--( -1.5 , 1.5 );
\draw[ black, thick] (- 1.5 , 1.5 )-- ( -1.1 , 1.5 )-- (-1.1 , 1.3 )--( -1.5 , 1.3)--( -1.5 , 1.5 );
\draw [black,fill] (1.5, 1.5) circle [radius=0.06];
\draw [black,fill] (1.5, -1.5) circle [radius=0.06];
\draw [black,fill] (-1.5, 1.5) circle [radius=0.06];
\draw [black,fill] (-1.5,- 1.5) circle [radius=0.06];
\end{tikzpicture}
\caption{ }
\end{center}
\end{figure}
Let $\ell_{1}: y-\frac 12=-\frac 1{\tan(\theta)}(x-\frac 12)$,
$ \ell_{2}: y-\frac 12=\tan(\theta)(x+\frac 12)$ and $\ell_{3}: y+\frac 12=-\frac 1{\tan(\theta)}(x+\frac 12)
$
be the equations of the sides of $Q_{h,\theta}$ that contain the points $P_1$, $P_2$ and $P_3$, resp.
It is easy to verify that $\ell_2$ intersects $\ell_1$ and $\ell_3$ at the points
$
Q_2= \left( \frac 12\cos(2\theta), \ \frac 12(1+\sin(2\theta))\right)$ and $
Q_3= \left( -\frac 12(1+\sin(2\theta)) , \ \frac 12\cos(2\theta)\right)
$,
and that the length of the segment that joins $Q_2$ and $Q_3$ equals $ l(\theta) =\sqrt{1+\sin(2\theta)}=\sin(\theta)+\cos(\theta)$.
Thus, when $h\ge l(\theta)$ the square $Q_{h, \theta}$ contains $\cal Q_2$; its integer translates cover $\mathbb R^2$ and, by Theorem \ref{T-Riesz }, the set $E(\Z^2)$ is a Riesz sequence on $L^2(Q_{h, \theta})$.
We show that when $h < l(\theta)$ the integer translates of $Q_{h, \theta}$ do not cover the plane anymore. Indeed, if $h < l(\theta)$, the four vertices of $\cal Q_2$ are outside the square $Q_{h, \theta}$ and have positive distance from its boundary. We can find a small rectangle $R$ with sides parallel to the sides of $\cal Q_2$ for which $ R+P_j \subset \cal Q_2-Q_{h, \theta}$ for every $j$ (see Figure 3). The integer translates of $Q_{h,\theta}$ cannot cover the rectangles $R+P_j$ and so the condition of Theorem \ref{T-Riesz } is not verified.
\medskip
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
%
\draw[ black, dashed] (-2, 1)--(3,1)--(3,6)--(-2, 6)--(-2, 1);
\draw (3.5,5.2) node [black ] {{\small $ \cal Q_2 $}};
\draw[black] (1,7)--(4,3)--(0,0)--(-3,4)--(1,7);
\fill[gray!60](.5,6)--(.5, 6.3)--(.2, 6.3)--(.2, 6)--(.5, 6);
\draw[black ] (.5,6)--(.5, 6.3)--(.2, 6.3)--(.2, 6)--(.5, 6);
%
\draw (.1,6.5) node [black ] {{\small ${ R } $}};
\fill[gray!30](1,6)--(-2,4)--(0,1)--( 3,3)--(1,6);
\draw[black] (1,6)--(-2,4)--(0,1)--( 3,3)--(1,6);
\draw (1, 6.4) node [black ] {{\small $ Q_1 $}};
\draw (-2.4, 4) node [black ] {{\small $ Q_2 $}};
\draw (0, .6 ) node [black ] {{\small $ Q_3 $}};
\draw (3.4, 3) node [black ] {{\small $ Q_4 $}};
\draw (0,3) node [black ] {{ $Q_{s(\theta), \theta}$}};
\draw [black,fill] (1,6) circle [radius=0.06];
\draw [black,fill] (-2,4) circle [radius=0.06];
\draw [black,fill] (0,1)circle [radius=0.06];
\draw [black,fill] (3,3)circle [radius=0.06];
\end{tikzpicture}
\caption{ }
\end{center}
\end{figure}
Let us prove part b):
the vertices $Q_1,\dots,Q_4$ of $Q_{h,\theta}$ lie on the sides of ${\cal Q}_2$ if and only if there exists $0<t\leq 1$ for which $Q_1=(t-\frac 12,\, \frac 12)$, $Q_2= (-\frac 12,\, t-\frac 12)$, $Q_3= ( \frac 12 -t,\, -\frac 12)$ and $Q_4= ( \frac 12, \, \frac 12 -t)$.
If we let $\tan(\theta)$ be the slope of the line that joins $Q_3$ and $Q_4$, we can see at once that
$ \tan(\theta)= \frac {1-t}{t}$; thus, $ t=\frac{1}{1+\tan(\theta)}= \frac{\cos(\theta)}{\sin(\theta)+\cos(\theta)}.$
The length of the segment $[Q_3Q_4]$ is then:
$$
s(\theta)= \sqrt{ t^2+(1-t )^2}= \frac{1}{\sin(\theta)+\cos(\theta)} $$
and $E(\Z^2)$ is a frame on $ Q_{h,\theta}$ whenever $h\leq s(\theta)$.
Let us show that $E(\Z^2)$ is not a frame on $Q_{h,\theta}$ whenever $h>s(\theta)$. Indeed, if $h>s(\theta)$, the set $Q_{h,\theta}\setminus{\cal Q}_2$ has positive measure; we can find a small rectangle $R$ with sides parallel to the sides of ${\cal Q}_2$ and vectors $n_1$, ..., $n_4\in\Z^2$ such that $R+n_j\subset Q_{h,\theta}\setminus{\cal Q}_2$ (see Figure 3). Thus, the integer translates of $Q_{h,\theta}$ overlap on the rectangles $R+n_j$ and, by Theorem \ref{T-frame}, $E(\Z^2)$ is not a frame on $L^2(Q_{h,\theta})$.
\end{proof}
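The formula $s(\theta)=1/(\sin\theta+\cos\theta)$ obtained in the proof can be checked numerically from the vertex coordinates $Q_3=(\frac12-t,-\frac12)$, $Q_4=(\frac12,\frac12-t)$ (a standalone sketch; function names are ours):

```python
import math

def side_length(theta):
    # t = cos(theta) / (sin(theta) + cos(theta)), from tan(theta) = (1 - t)/t
    t = math.cos(theta) / (math.sin(theta) + math.cos(theta))
    # vertices Q_3 and Q_4 on the boundary of the unit square centered at 0
    Q3 = (0.5 - t, -0.5)
    Q4 = (0.5, 0.5 - t)
    # length of the segment [Q_3 Q_4] = sqrt(t^2 + (1 - t)^2)
    return math.hypot(Q4[0] - Q3[0], Q4[1] - Q3[1])

theta = 0.3
print(abs(side_length(theta) - 1.0 / (math.sin(theta) + math.cos(theta))))  # ~0
```

The difference is zero up to rounding, confirming $\sqrt{t^2+(1-t)^2}=1/(\sin\theta+\cos\theta)$ for this parametrization.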
\section{The translated parallelepiped} In this section we solve Problem 3.
Let $P\subset \mathbb R^d$ be a parallelepiped with sides parallel to vectors $ v_1$, ...,\, $ v_d$.
We let $A=\{a_{i,j}\}_{1\leq i,j\leq d}$ be the matrix whose columns are $ v_1$, ...,\, $ v_d$; we let $A^{-1}=\{b_{i,j}\}_{1\leq i,j\leq d}$.
We prove the following
\begin{theorem}\label{T-frame-parall}
a) The set $E(\Z^d)$ is a frame on $L^2(P)$ if and only if $\det(A) \leq 1$ and $\max_{{1\leq i, j,k\leq d}\atop{j\ne k}}\{|a_{i,j}|+ |a_{i,k}|\}\leq 1$.
b) The set $E(\Z^d)$ is a Riesz sequence on $L^2(P)$ if and only if $\det(A ) \ge1$ and $\max_{1\leq i, j\leq d} |b_{i,j}| \leq 1$.
\end{theorem}
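The two conditions of Theorem \ref{T-frame-parall} are easy to evaluate for a concrete matrix. The following sketch (function name ours; we assume $\det(A)>0$, as in the proof where $|P|=\det(A)$) does so for $2\times2$ matrices:

```python
def frame_riesz_conditions(A):
    """Evaluate the two conditions of the theorem for a 2x2 matrix A
    given as a list of rows; the columns of A are the side vectors v_1, v_2.
    Assumes det(A) > 0."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    # max over rows i of |a_{i,1}| + |a_{i,2}| (the only pair j != k when d = 2)
    pair_max = max(abs(A[i][0]) + abs(A[i][1]) for i in range(2))
    # entries of A^{-1} = (1/det) * [[a22, -a12], [-a21, a11]]
    inv_max = max(abs(x) / abs(det)
                  for x in (A[1][1], A[0][1], A[1][0], A[0][0]))
    frame = det <= 1 and pair_max <= 1   # condition of part a)
    riesz = det >= 1 and inv_max <= 1    # condition of part b)
    return frame, riesz

# Unit cube: both conditions hold (E(Z^2) is an orthonormal basis).
print(frame_riesz_conditions([[1, 0], [0, 1]]))  # → (True, True)
```

For a shear such as $A=\begin{pmatrix}1&1/2\\0&1\end{pmatrix}$ the frame condition fails while the Riesz-sequence condition holds.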
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
%
\draw[ black, ->] (-1, 0)--(7,0);
\draw[ black, ->] (0,-1)--(0, 5);
\draw[black] (0,0)--(2,2)--(3.5,2)--(1.5,0)--(0,0);
\draw[black] (1.8 ,1.5)--(3.8 ,3.5)--(5.3 ,3.5)--(3.3 ,1.5)--(1.8 ,1.5);
\draw(1, -.3) node[black] {$v_1$};
\draw(1.8, -.3 ) node[black] {$1$};
\draw (1.1 ,.5) node [black ] {{\small $ P $}};
\draw (6 ,2.8) node [black ] {{\small $ P +(1,1) $}};
\draw[black, dashed] (1.8, 1.5)-- (1.8,0);
\draw[black, dashed] (1.8, 1.5)-- (0,1.5);
\draw(-.2, 1.5 ) node[black] {$1$};
\draw(1.7, 2 ) node[black] {$v_2$};
\end{tikzpicture}
\caption{The parallelepiped $P$ spanned by $v_1$ and $v_2$, and its translate $P+(1,1)$.}
\end{center}
\end{figure}
\begin{proof}
Observe that if $|P|= \det(A) > 1$, the set $E(\Z^d)$ cannot be a frame on $L^2(P)$ so in part a) we assume $\det(A) \leq 1$. Similarly, for part b) we assume that $\det(A) \ge 1$.
\medskip
We prove part a) by induction on the dimension $d$.
By Theorem \ref{T-frame}, the set $E(\Z^d)$ is a frame on $L^2(P)$ if and only if the integer translates of $P$ overlap only on sets of zero measure.
In dimension $d=2$, we let $ v_1=(a_{1,1}, a_{2,1})$ and $ v_ 2=(a_{1,2}, a_{2,2})$ be the vectors that are parallel to the sides of $P$. When the components of $v_1$ and $v_2$ are non-negative, we can easily verify that $P$ overlaps with $P+(1,1)$ on a set of positive measure if and only if the sums of the projections of $v_1$ and $v_2$ on the $x_1$ and $x_2$ axes both exceed $1$ (see Figure 4), that is, if and only if $ a_{1,1} + a_{1,2} > 1$ and $a_{2,1} + a_{2,2} > 1$. Conversely, when $a_{1,1}+a_{1,2}\le 1$ and $a_{2,1}+a_{2,2}\le 1$, no pair of integer translates of $P$ intersect on a set of positive measure.
For general $v_1$ and $v_2$ we can similarly verify that the integer translates of $P$ do not overlap on sets of positive measure if and only if $ | a_{1,1} |+ |a_{1,2}| \le 1$ and $|a_{2,1}| + |a_{2,2} |\le 1$.
We now assume that part a) of the theorem is valid in dimension $d\ge 2$. We prove that it is valid also in dimension $d+1$.
Let $P$ be a parallelepiped in $\mathbb R^{d+1}$.
The integer translates of $P$ overlap on sets of positive measure in $\mathbb R^{d+1}$ if and only if the integer translates of the faces of $P$ overlap on sets of positive measure in $\mathbb R^{d}$.
Let $P_h$ be the face of $P$ spanned by the vectors $ v_1$, ..., $ v_{h-1}, \ v_{h+1}$, ...,\, $ v_{d+1}$.
Let $ e_1=(1, 0,\dots,0)$, ..., $ e_{d+1} =(0,\dots,0,1)$ be the standard orthonormal basis in $\mathbb R^{d+1}$ and let $H_j$ be the orthogonal complement of $e_j$. Clearly, the integer translates of $P_h$ overlap if and only if the integer translates of the orthogonal projections of $P_h$ on the $H_j$'s overlap.
The projection of $P_h$ on $H_k$ is a parallelepiped in $\mathbb R^d$ spanned by the vectors
$ w_1$, ..., $ w_{h-1}, \ w_{h+1}$, ...,\, $ w_{d+1}$ where $ w_j$ is the projection of $v_j$ on $H_k$, i.e., it is the vector $ v_j$ with the $k$-th component removed.
By assumption, $\max_{{1 \leq i, j, k\leq d+1}\atop{j\ne k,\ j,k\ne h}}\{|a_{i,j }| + |a_{i,k }| \}\leq 1$.
This inequality is valid for every face of $P$ and for every projection, and so we have that $\max_{{1 \leq i,j, k\leq d+1 }\atop{k\ne j}}\{|a_{i,k}|+ |a_{i,j}|\}\leq 1$ as required.
\medskip
We now prove part b). By Theorem \ref{T-Riesz }, the integer translates of $P$ must cover $\mathbb R^d$.
Since $P$ is the image of the unit cube $[0,1]^d$ via the linear transformation $A(x)= A x$, we can write $P=A([0,1]^d)$. Thus, $E(\Z^d)$ is a Riesz sequence in $L^2(P)$ if and only if
$ \bigcup_{n\in\Z^d} (A([0,1]^d) + n) =\mathbb R^d $, or:
$$
A^{-1}\left(\bigcup_{n\in\Z^d} (A([0,1]^d) + n)\right)= \bigcup_{n\in\Z^d} ( [0,1]^d + A^{-1} n)=\mathbb R^d.$$
The translates of the unit cube $[0,1]^d$ cover $\mathbb R^d$ if and only if the absolute values of the components of the vectors $A^{-1}e_k$ are all $\leq 1$, i.e., if and only if $\max_{1\leq i,j\leq d}|b_{i,j}|\leq 1$.
\end{proof}
\subsection{The shortest vector problem}
Let $A:\mathbb R^d\to\mathbb R^d$ be linear and invertible; consider the parallelepiped $P=A(Q)$, where $Q = [0,1]^d.$ The sides of $P$ are parallel to the columns of the matrix that represents $A$.
By Corollary \ref{C-complete}, the set $E(\Z^d)$ is complete in $L^2(A(Q))$ if and only if the integer translates of $A(Q)$ do not intersect.
%
The integer translates of $A(Q)$ intersect if and only if there are $x,\ y \in Q $ such that $Ax = Ay + n$ for some nonzero $n \in \Z^d$. Equivalently, the translates of $A(Q)$ intersect if and only if there exist $x, y \in Q$ and a nonzero $n \in \Z^d$ such that $A^{-1}n = x-y$, i.e., if and only if there exists a nonzero $n \in \Z^d$ such that $A^{-1}n \in D = \{w \mid \| w\|_{\infty} < 1 \}$.
\medskip
These considerations show that Problem 3 is related to the so-called {\it shortest vector problem} (SVP): given a lattice $\mathcal{L}$ and a norm $\|\cdot\|$ on $\mathbb{R}^d$, find the minimum length $\lambda = \min_{0 \neq v \in \mathcal{L}}\| v\|$ of a nonzero lattice point. The SVP is known to be NP-hard (see \cite{Aj}).
The conjectured intractability of the SVP and of other optimization problems on lattices is central in the construction of secure lattice-based cryptosystems.
For more information on this problem see e.g. \cite{AD}, \cite{Kn} and the references cited in these papers.
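In tiny dimensions the criterion above can be explored by brute force: the translates of $A(Q)$ intersect precisely when some nonzero $n\in\Z^d$ satisfies $\|A^{-1}n\|_\infty<1$. The following naive enumeration (function name ours; exponential in $d$, so feasible only for small instances, unlike the general NP-hard SVP) checks this condition:

```python
import itertools

def translates_intersect(A_inv, radius=5):
    """Return True if some nonzero integer vector n with |n_i| <= radius
    satisfies ||A^{-1} n||_inf < 1, i.e. the translates of A(Q) overlap.
    A_inv is the inverse matrix given as a list of rows."""
    d = len(A_inv)
    for n in itertools.product(range(-radius, radius + 1), repeat=d):
        if all(c == 0 for c in n):
            continue  # skip n = 0
        image = [sum(row[j] * n[j] for j in range(d)) for row in A_inv]
        if max(abs(c) for c in image) < 1:
            return True
    return False

# A = 2*Id scales Q up: A^{-1} = Id/2 and n = (1,0) gives ||A^{-1}n||_inf = 1/2 < 1.
print(translates_intersect([[0.5, 0.0], [0.0, 0.5]]))  # → True
# A = Id: every nonzero integer n has ||n||_inf >= 1, so no overlap.
print(translates_intersect([[1.0, 0.0], [0.0, 1.0]]))  # → False
```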
\medskip
https://arxiv.org/abs/1710.05342 | Three problems on exponential bases | We consider three special and significant cases of the following problem. Let D be a (possibly unbounded) set of finite Lebesgue measure in R^d. Find conditions on D for which the standard exponential basis on the unit cube of R^d is a frame, a Riesz sequence, or a Riesz basis on L^2(D).
https://arxiv.org/abs/1801.01724 | A Lipschitz condition along a transversal foliation implies local uniqueness for ODEs | We prove the following result: if a continuous vector field $F$ is Lipschitz when restricted to the hypersurfaces determined by a suitable foliation and a transversal condition is satisfied at the initial condition, then $F$ determines a locally unique integral curve. We also present some illustrative examples and sufficient conditions in order to apply our main result. | \section{Introduction}
Uniqueness for ODEs is an important and quite old subject, but still an active field of research \cite{cons,DiNoSi14,ferreira}, the Lipschitz uniqueness theorem being the cornerstone of the topic. Besides the many generalizations of that theorem, see \cite{agalak,cl,hartman}, one recent and fruitful line of research has been the search for alternative or weaker forms of the Lipschitz condition. For instance, let $U\subset{\mathbb R}^2$ be an open neighborhood of $(t_0,x_0)$, let $f:U\subset{\mathbb R}^2\to{\mathbb R}$ be continuous, and consider the scalar initial value problem
\begin{equation}\label{eq-ivp-scalar} x'(t)=f(t,x(t)),\, x(t_0)=x_0. \end{equation}
It was proved, independently by Mortici, \cite{mortici}, and Cid and Pouso \cite{cidpouso1,cp2}, that local uniqueness holds provided that the following conditions are satisfied:
\begin{itemize}
\item$f(t,x)$ is Lipschitz with respect to $t$,
\item$f(t_0,x_0)\not=0$.
\end{itemize}
A more general result had been proved earlier by Stettner and Nowak \cite{sn}, although in a paper written in German. They proved that if $U\subset {\mathbb R}^2$ is an open neighborhood of $(t_0,x_0)$, $f:U\subset {\mathbb R}^2\to{\mathbb R}$ is continuous and $(u_1,u_2)\in{\mathbb R}^2$ is such that
\begin{itemize}
\item $|f(t,x)-f(t+ku_1,x+ku_2)|\le L |k|\quad \mbox{on $U$}$,
\item $u_2\not= f(t_0,x_0) u_1$,
\end{itemize}
then the scalar problem \eqref{eq-ivp-scalar} has a unique local solution. By taking either $(u_1,u_2)=(0,1)$ or $(u_1,u_2)=(1,0)$ this result covers both the classical Lipschitz uniqueness theorem and the previous alternative version. Moreover this result has been remarkably generalized in \cite{DiNoSi14} by Dibl{\'\i}k, Nowak and Siegmund by allowing the vector $(u_1,u_2)$ to depend on $t$.
Let us now consider the autonomous initial value problem for a system of differential equations
\begin{equation}\label{peq-aut}
z'(t)=F(z(t)),\ z(t_0)=p_0,
\end{equation}
where $n\in{\mathbb N}$, $F:U\subset{\mathbb R}^{n+1}\to{\mathbb R}^{n+1}$ and $p_0\in U$.
Throughout the paper we shall need the following definition: if $g:D\subset{\mathbb R}^{n+1}\to E$, where $E$ is a normed space, we will say that $g$ is {\it Lipschitz in $D$ when fixing the first variable} if there exists $L>0$ such that for all $(s,x_1,x_2,\ldots,x_n),(s,y_1,y_2,\ldots,y_n)\in D$ we have that
$$\|g(s,x_1,x_2,\ldots,x_n)-g(s,y_1,y_2,\ldots,y_n)\|_{E} \le L \|(x_1,x_2,\ldots,x_n)-(y_1,y_2,\ldots,y_n)\|,$$
and where $\|\cdot\|$ stands for any norm in ${\mathbb R}^{n+1}$. Moreover, for any function $g$ with values in ${\mathbb R}^{n+1}$ we denote $g=(g_1,g_2,\dots,g_{n+1})$.
The following alternative version of the Lipschitz uniqueness theorem for systems was proved by Cid in \cite{Cid03}.
\begin{thm} \label{cor-cid} Let $U\subset{\mathbb R}^{n+1}$ be an open neighborhood of $p_0$ and $F:U\to{\mathbb R}^{n+1}$ be continuous. If moreover
\begin{itemize}
\item $F$ is Lipschitz in $U$ when fixing the first variable,
\item $F_{1}(p_0)\ne0$,
\end{itemize}
then there exists $\a>0$ such that problem \eqref{peq-aut} has a unique solution in $[t_0-\a,t_0+\a]$.
\end{thm}
\begin{rem} The classical Lipschitz theorem is included in the previous one. To see this, let $n\in{\mathbb N}$, let $U\subset{\mathbb R}^{n+1}$ be an open set, let $f:U\to{\mathbb R}^n$, let $(t_0,x_0)\in U$, and consider the non-autonomous problem
\begin{equation}\label{peq}
x'(t)=f(t,x(t)),\ x(t_0)=x_0.
\end{equation}
As is well known, problem \eqref{peq} is equivalent to the autonomous one \eqref{peq-aut}, where \begin{displaymath}F(z_1,z_2,\ldots,z_{n+1}):=(1,f(z_1,z_2,\ldots,z_{n+1})),\end{displaymath} and $p_0:=(t_0,x_0)$. Now, if $f(t,x)$ is Lipschitz with respect to $x$ then $F(z_1,z_2,\ldots,z_{n+1})$ is Lipschitz when fixing the first variable and moreover $F_{1}(p_0)=1\ne0$, so Theorem \ref{cor-cid} applies.
\end{rem}
Recently, Dibl{\'\i}k, Nowak and Siegmund have obtained in \cite{SiNoDi16} a generalization of both \cite{Cid03} and \cite{sn}. Their result reads as follows:
\begin{thm}\label{thDNS} Let $U\subset{\mathbb R}^{n+1}$ be an open neighborhood of $p_0$, let $F:U\to{\mathbb R}^{n+1}$ be continuous and let $\cal{V}$ be a linear hyperplane in ${\mathbb R}^{n+1}$ such that
\begin{itemize}
\item $F$ is Lipschitz continuous along $\cal{V}$, that is, there exists $L>0$ such that if $x,y\in U$ and $x-y\in \cal{V}$ then
$$\|F(x)-F(y)\|\le L \|x-y\| $$
\item Transversality condition: $F(p_0)\not\in \cal{V}$.
\end{itemize}
then there exists $\a>0$ such that problem~\eqref{peq-aut} has a unique solution in $[t_0-\a,t_0+\a]$.
\end{thm}
The previous theorem has the following geometric meaning: uniqueness for the autonomous system \eqref{peq-aut} follows provided that the continuous vector field $F$ is Lipschitz when restricted to a family of parallel hyperplanes to $\cal{V}$ that covers $U$ and that the vector field at the initial condition $F(p_0)$ is transversal to $\cal{V}$.
Our main goal in this paper is to extend Theorem \ref{thDNS} from the linear foliation generated by the hyperplane $\cal{V}$ to a general $n$-foliation. The paper is organized as follows: in Section 2 we present our main result, which relies on an appropriate change of coordinates and Theorem \ref{cor-cid}. We show by examples that our result is in fact a meaningful generalization of Theorem \ref{thDNS}. In Section 3 we present some useful results about Lipschitz functions, including the definition of a modulus of Lipschitz continuity along a hyperplane, which is used in Section 4 to obtain explicit sufficient conditions on $F$ for the existence of a suitable $n$-foliation. Another key ingredient for that result is a general rotation formula, also proved in Section 4.
Throughout the paper, $\<\cdot,\cdot \>$ denotes the usual scalar product in Euclidean space.
\section{The main result: a general uniqueness theorem}
\begin{dfn}Let $p_0\in {\mathbb R}^{n+1}$. Assume there exist open subsets $V\subset{\mathbb R}^n$, $U\subset{\mathbb R}^{n+1}$, an open interval $J\subset {\mathbb R}$ with $0\in J$ and a family of differentiable functions $\{g_s:V\to U\}_{s\in J}$ such that $g_0(0)=p_0\in U$ and $\Phi:(s,y)\in J\times V \to g_s(y)\in U$ is a diffeomorphism. Then we say $\{g_s\}_{s\in J}$ is a \emph{local $n$-foliation of $U$ at $p_0$}.
\end{dfn}
\begin{rem}An observation regarding notation. If $\Phi:{\mathbb R}^{n+1}\to{\mathbb R}^{n+1}$ is a diffeomorphism, we denote by $\Phi'$ its derivative and by $\Phi^{-1}$ its inverse. Also, we write $(\Phi^{-1})'$ for the derivative of the inverse. Observe that $\Phi'$ takes values in ${\mathcal M}_{n+1}({\mathbb R})$ so, although we cannot consider the functional inverse of $\Phi'$, we can consider the inverse matrix, whenever it exists, of every $\Phi'(x)$ for $x\in{\mathbb R}^{n+1}$. We denote this function by $(\Phi')^{-1}$. Clearly, the chain rule implies that
\begin{displaymath}(\Phi')^{-1}(x)=(\Phi^{-1})'(\Phi(x)).\end{displaymath}
\end{rem}
The following is our main result.
\begin{thm}\label{thmgen} Let $U\subset{\mathbb R}^{n+1}$, $V\subset{\mathbb R}^{n}$ be open sets, $p_0\in U$, $F:U\subset{\mathbb R}^{n+1}\to{\mathbb R}^{n+1}$ a continuous function and $\{g_s:V\to U\}_{s\in J}$ a local $n$-foliation of $U$ at $p_0$ which defines the diffeomorphism $\Phi:J\times V\to U$. If the following assumptions hold,
\begin{itemize}
\item{(C1)} Transversality condition: \begin{equation}\label{transcon}\<\(\frac{\partial\Phi_1^{-1}}{\partial z_1}(p_0),\dots,\frac{\partial\Phi_1^{-1}}{\partial z_{n+1}}(p_0)\),F(p_0)\>\ne0,\end{equation}
\item{(C2)} Lipschitz condition along the foliation:
$F\circ \Phi$ and $(\Phi')^{-1}$ are Lipschitz in a neighborhood of zero when fixing the first variable,
\end{itemize}
then there exists $\a>0$ such that problem~\eqref{peq-aut} has a unique solution in $[t_0-\a,t_0+\a]$.
\end{thm}
\begin{proof} Consider the change of coordinates \begin{align}\label{coc}z=(z_1,\dots,z_{n+1})=\Phi(s,y_1,\dots,y_n):=g_s(y_1,\dots,y_n).\end{align}
Since $\{g_s\}_{s\in J}$ is a foliation, $\Phi$ is a diffeomorphism. Then, considering $y=(s,y_1,\dots,y_n)$, differentiating \eqref{coc} with respect to $t$ and taking into account equation \eqref{peq-aut},
\begin{equation}\label{eqderiv}\frac{\dif z}{\dif t}=\Phi'(y)\frac{\dif y}{\dif t}=F(z)=(F\circ \Phi)(y).\end{equation}
Since $\Phi$ is a diffeomorphism, $\Phi'(y)$ is an invertible matrix for every $y$, so
\[\frac{\dif y}{\dif t}=\Phi'(y)^{-1}(F\circ \Phi)(y).\]
By definition of $g_s$, $\Phi(0)=p_0$, so we can consider the problem
\begin{equation}\label{redeq}\frac{\dif y}{\dif t}(t)=h(y),\ y(t_0)=0,\end{equation}
where
\begin{equation*}h(y)=\Phi'(y)^{-1}F( \Phi(y)).\end{equation*}
Now, by (C2) we have that $h$ is the product of locally Lipschitz functions when fixing the first variable. Furthermore, if $e_1=(1,0,\dots,0)\in{\mathbb R}^{n+1}$ and taking into account (C1),
\[h_1(0)=e_1^T\Phi'(0)^{-1}F(p_0)=e_1^T(\Phi^{-1})'(p_0)F(p_0)=\<\(\frac{\partial\Phi_1^{-1}}{\partial z_1}(p_0),\dots,\frac{\partial\Phi_1^{-1}}{\partial z_{n+1}}(p_0)\),F(p_0)\>\ne0.\]
Hence, we can apply Theorem \ref{cor-cid} to problem \eqref{redeq} and conclude that problem~\eqref{peq-aut} has, locally, a unique solution.
\end{proof}
\begin{rem}
1) Condition \eqref{transcon} can be easily interpreted geometrically: the vector \begin{displaymath}\(\frac{\partial\Phi_1^{-1}}{\partial z_1}(p_0),\dots,\frac{\partial\Phi_1^{-1}}{\partial z_{n+1}}(p_0)\),\end{displaymath} is normal to the hypersurface given by $g_0(V)$ at $p_0$. So, condition \eqref{transcon} means that the vector $F(p_0)$ is not tangent to that hypersurface, and therefore it is called the \emph{transversality condition}. \medbreak
\noindent 2) Notice that from \cite[Example 3.1]{Cid03} we know that if the transversality condition \eqref{transcon} does not hold, then the Lipschitz condition along the foliation, that is, (C2), is not enough to ensure uniqueness. On the other hand, by \cite[Example 3.4]{Cid03} we also know that (C1) and a Lipschitz condition along a local $(n-1)$-foliation do not imply uniqueness. So, in some sense, conditions (C1) and (C2) are sharp.
\end{rem}
Theorem \ref{thmgen} generalizes the main result in \cite{SiNoDi16}, where only foliations consisting of hyperplanes are considered. In the next example we show the limitations of linear (or affine) coordinate changes which are used in \cite{SiNoDi16}.
\begin{exa}\label{exa-change} Let $F(x,y):=1+(y-x^2)^{\frac{2}{3}}$. Is there a linear change of coordinates $\Phi$ such that $F\circ\Phi$ is Lipschitz in a neighborhood of zero when fixing the first variable? The answer is no. Any linear change of variables $\Phi$ is given by two linearly independent vectors $v, w\in{\mathbb R}^2$ as $\Phi(z,t)=z w+t v$. If $F\circ\Phi$ is Lipschitz in a neighborhood of zero when fixing the first variable $z$, then the directional derivative of $F$ at any point of the neighborhood in the direction of $v$, whenever it exists, is a lower bound for any such Lipschitz constant. To see that this cannot happen, take $S=\{(x,y)\in{\mathbb R}^2\ : y=x^2\}$ and observe that $F$ is differentiable in ${\mathbb R}^2\backslash S$, with
\[\nabla F(x,y)=\frac{2}{3}(y-x^2)^{-\frac{1}{3}}(-2x,1),\quad \mbox{for} \, \, (x,y)\in{\mathbb R}^2\backslash S.\]
Let $v=(v_1,v_2)\in{\mathbb R}^2$. The directional derivative of $F$ at $(x,y)$ in the direction of $v$ is
\[D_vF(x,y)=\<\nabla F(x,y),v\>=\frac{2}{3}(y-x^2)^{-\frac{1}{3}}(v_2-2v_1x),\quad \mbox{for} \, \, (x,y)\in{\mathbb R}^2\backslash S.\]
Now consider a neighborhood $N$ of $0$. In particular, we can consider the points of the form $(x,y)=(\l,\l^2+\mu)\in N\backslash S$ for $\mu\ne0$ and $\l\in(-\epsilon,\epsilon)$, so
\[D_vF(x,y)=\frac{2}{3}\frac{v_2-2 \lambda v_1}{ \mu ^{1/3}}.\]
This quantity is unbounded in $ N\backslash S$ unless the numerator is $0$ for every $\l\in(-\epsilon,\epsilon)$, but that means that $v=0$, so $v$ and $w$ cannot be linearly independent.
Hence, no linear change of coordinates $\Phi$ makes $F\circ\Phi$ Lipschitz in a neighborhood of zero when fixing the first variable.\par
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.5]{Example}
\end{center}
\end{figure}
Nevertheless, take $(x,y)=\Phi(z,t)=g_z(t)=(t,z+t^2)$. We have $\Phi^{-1}(x,y)=(y-x^2,x)$ and both are differentiable, so $\Phi$ is a diffeomorphism. Now,
$(F\circ\Phi)(z,t)=1+z^\frac{2}{3}$, which is clearly Lipschitz when fixing the first variable.
In the figure you can see the parabolas $g_z(t)$ foliating the plane, where $g_0(t)$ is the thicker one.
\end{exa}
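For the autonomous formulation $F(x,y)=(1,\,1+(y-x^2)^{2/3})$, the change of coordinates of Theorem \ref{thmgen} can be carried out by hand: with $\Phi(z,t)=(t,z+t^2)$ one gets $\Phi'(z,t)=\begin{pmatrix}0&1\\1&2t\end{pmatrix}$, hence $\Phi'(z,t)^{-1}=\begin{pmatrix}-2t&1\\1&0\end{pmatrix}$ and $h(z,t)=\Phi'(z,t)^{-1}F(\Phi(z,t))=(1+z^{2/3}-2t,\,1)$, which is Lipschitz in $t$ for fixed $z$ and satisfies $h_1(0,0)=1\ne0$. A minimal numerical sketch confirming this hand computation (the real cube root is taken via absolute values; function names are ours):

```python
def F(x, y):
    # vector field of the example; the real (y - x^2)^(2/3) equals |y - x^2|^(2/3)
    return (1.0, 1.0 + abs(y - x * x) ** (2.0 / 3.0))

def Phi(z, t):
    # the parabola foliation g_z(t) = (t, z + t^2)
    return (t, z + t * t)

def h(z, t):
    # h = Phi'(z,t)^{-1} F(Phi(z,t)), with Phi'^{-1} = [[-2t, 1], [1, 0]]
    Fx, Fy = F(*Phi(z, t))
    return (-2 * t * Fx + Fy, Fx)

# h(z,t) should equal (1 + z^(2/3) - 2t, 1)
z, t = 0.8, -0.4
expected = (1 + z ** (2.0 / 3.0) - 2 * t, 1.0)
print(max(abs(a - b) for a, b in zip(h(z, t), expected)))  # ~0
```

In particular $h_1(0,0)=1$, so Theorem \ref{cor-cid} applies to the transformed problem.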
\begin{exa} With what we learned from Example \ref{exa-change}, it is easy to see that uniqueness for the scalar initial value problem
\begin{equation}\label{eqex1}x'(t)=1+(x(t)-t^2)^{\frac{2}{3}}, \quad x(0)=0,\end{equation}
cannot be obtained from \cite[Theorem 2]{SiNoDi16} nor from \cite[Theorem 1]{DiNoSi14}. However, by using the local 1-foliation associated with the diffeomorphism $\Phi$ given in Example \ref{exa-change}, it is easy to show that conditions (C1) and (C2) of Theorem \ref{thmgen} are satisfied. Therefore, we have local uniqueness of the solution.
\end{exa}
\section{Some results about Lipschitz functions}
We will now establish some properties of Lipschitz functions that will be useful for checking condition (C2) in Theorem \ref{thmgen}. Before that, consider the following lemma.
\begin{lem}\label{leminv} Let $A,B,C\in{\mathcal M}_n({\mathbb R})$, $A$ and $C$ invertible. Then
\[ \|ABC\| \ge \frac{\|B\|}{\|A^{-1}\| \|C^{-1}\|},\]
where $\|\cdot\|$ is the usual matrix norm.
\end{lem}
\begin{proof}It is enough to observe that
\[\|B\|=\|A^{-1}ABCC^{-1}\|\le\|A^{-1}\|\|ABC\|\|C^{-1}\|.\]
\end{proof}
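The proof of Lemma \ref{leminv} only uses submultiplicativity of the matrix norm, so the inequality can be tested numerically with, e.g., the Frobenius norm, which is submultiplicative (a pure-Python sketch for $2\times2$ matrices; helper names are ours):

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    # explicit inverse of a 2x2 matrix
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def fro(X):
    # Frobenius norm; submultiplicative, which is all the proof needs
    return math.sqrt(sum(x * x for row in X for x in row))

A = [[2.0, 1.0], [0.0, 1.0]]
B = [[1.0, -3.0], [4.0, 0.5]]
C = [[0.5, 0.0], [1.0, 2.0]]
lhs = fro(matmul(matmul(A, B), C))
rhs = fro(B) / (fro(inv2(A)) * fro(inv2(C)))
print(lhs >= rhs)  # → True
```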
\begin{lem}\label{lemeq}Let $U$ be an open subset of ${\mathbb R}^n$ and $g:U\to GL_n({\mathbb R})$.
\begin{enumerate}
\item If $g$ is locally Lipschitz and $g^{-1}$ (the inverse matrix function) is locally bounded, then $g^{-1}$ is locally Lipschitz.
\item If $g$ is locally Lipschitz when fixing the first variable and $g^{-1}$ is locally bounded, then $g^{-1}$ is locally Lipschitz when fixing the first variable.
\end{enumerate}
\end{lem}
\begin{proof}
1. Let $K$ be a compact subset of $U$, $k_1$ be a Lipschitz constant for $g$ in $K$ and $k_2$ a bound for $g^{-1}$ in $K$. Then, for $x,y\in K$, using Lemma \ref{leminv},
\begin{align*}k_1\|x-y\| & \ge\|g(x)-g(y)\|=\|g(x)(g(y)^{-1}-g(x)^{-1})g(y)\|\ge\frac{\|g(y)^{-1}-g(x)^{-1}\|}{k_2^2}. \end{align*}
Hence, $\|g(x)^{-1}-g(y)^{-1}\|\le k_1k_2^2\|x-y\|$ in $K$ and $g^{-1}$ is locally Lipschitz.\par
2. We proceed as in 1. Let $K$ be a compact subset of $U$, $(t,x),(t,y)\in K$, $k_1$ be a Lipschitz constant for $g$ in $K$ when fixing $t$ and $k_2$ a bound for $g^{-1}$ in $K$. Then,
\begin{align*}k_1\|x-y\| & \ge\|g(t,x)-g(t,y)\|=\|g(t,x)(g(t,y)^{-1}-g(t,x)^{-1})g(t,y)\|\ge\frac{\|g(t,y)^{-1}-g(t,x)^{-1}\|}{k_2^2}. \end{align*}
Hence, $\|g(t,x)^{-1}-g(t,y)^{-1}\|\le k_1k_2^2\|x-y\|$ and $g^{-1}$ is locally Lipschitz when fixing the first variable.\par
\end{proof}
\begin{cor}\label{coreq}Let $U$ be an open subset of ${\mathbb R}^n$, $f:U\to f(U)\subset{\mathbb R}^n$ be a diffeomorphism (notice that, in that case, $f':U\to GL_n({\mathbb R})$).
\begin{enumerate}
\item If $f'$ is locally Lipschitz and $(f')^{-1}$ is locally bounded, then $(f')^{-1}$ is locally Lipschitz.
\item If $f'$ is locally Lipschitz and $(f')^{-1}$ is locally bounded, then $(f^{-1})'$ is locally Lipschitz.
\item If $f'$ is locally Lipschitz when fixing the first variable and $(f')^{-1}$ is locally bounded, then $(f')^{-1}$ is locally Lipschitz when fixing the first variable.
\end{enumerate}
\end{cor}
\begin{proof}
1. Just apply Lemma \ref{lemeq}.1 to $g=f'$.\par
2. Notice that \begin{displaymath}(f^{-1})'(x)=(f')^{-1}(f^{-1}(x)),\end{displaymath} and that $(f')^{-1}$ is locally Lipschitz by the previous claim. On the other hand,
since $f'$ is continuous we have that $f$ is locally a ${\cal C}^1$-diffeomorphism, and thus
$f^{-1}$ is locally Lipschitz. Therefore $(f^{-1})'$ is locally Lipschitz since it is the composition of two locally Lipschitz functions. \par
3. Just apply Lemma \ref{lemeq}.2 to $g=f'$.
\end{proof}
\subsection{A modulus of continuity for Lipschitz functions along a hyperplane}
Let $U$ be an open subset of ${\mathbb R}^{n+1}$, let $p\in U$ and consider the tangent space of $U$ at $p$, which can be identified with ${\mathbb R}^{n+1}$. Consider now the real Grassmannian $\Gr(n,n+1)$, that is, the manifold of hyperplanes of ${\mathbb R}^{n+1}$. We know that $\Gr(n,n+1)\cong \Gr(1,n+1)={\mathbb P}^n$; that is, we can unequivocally identify each hyperplane with its perpendicular line, which is an element of the projective space ${\mathbb P}^n$.\par
\begin{dfn}
Consider $B_{n+1}(p,\d)\subset{\mathbb R}^{n+1}$ to be the open ball of center $p$ and radius $\d$. Then, for a function $F:U\to{\mathbb R}^{n+1}$ and every $p\in U$, $v\in{\mathbb P}^n$ and $\d\in{\mathbb R}^+$ we define the \textit{modulus of continuity}
\[\omega_F(p,v,\d):=\sup_{\substack{x,y\in B_{n+1}(p,\d)\\ x-p,y-p\perp v\\x\ne y}}\frac{\|F(x)-F(y)\|}{\|x-y\|}\in[0,+\infty].\]
We also define
\[\omega_F(p,v):=\lim_{\d\to0} \omega_F(p,v,\d)=\lim_{\d\to0}\sup_{\substack{x,y\in B_{n+1}(p,\d)\\ x-p,y-p\perp v\\x\ne y}}\frac{\|F(x)-F(y)\|}{\|x-y\|}=\lils{\substack{(x,y)\to(p,p)\\x-p,y-p\perp v\\x\ne y}}\frac{\|F(x)-F(y)\|}{\|x-y\|}\in[0,+\infty].\]
\end{dfn}
\begin{rem}
If $\omega_F(p,v)<+\infty$, then there exist $\d,\epsilon\in{\mathbb R}^+$ such that
\[\|F(x)-F(y)\|\le(\omega_F(p,v)+\epsilon)\|x-y\|,\ x,y\in B_{n+1}(p,\d),\ x-p,y-p\perp v.\]
Equivalently,
\[\|F(x+p)-F(y+p)\|\le(\omega_F(p,v)+\epsilon)\|x-y\|,\ x,y\in B_{n+1}(0,\d),\ x,y\perp v.\]
Let $A$ be an orthogonal matrix whose first column is parallel to $v$. In that case,
since $A$ is orthogonal, $x\perp e_1$ implies that $Ax \perp v$. Then,
\begin{equation*}\|F(Ax+p)-F(Ay+p)\|\le(\omega_F(p,v)+\epsilon)\|A(x-y)\|,\ x,y\in B_{n+1}(0,\d),\ x,y\perp e_1.\end{equation*}
That is, taking into account that $\|A\|=1$,
\begin{equation*}\|F(A(0,x)+p)-F(A(0,y)+p)\|\le(\omega_F(p,v)+\epsilon)\|x-y\|,\ x,y\in B_{n}(0,\d).\end{equation*}
Hence, if $\phi(x)=Ax+p$ then $F\circ \phi$ is locally Lipschitz in a neighborhood of the origin \textit{when the first variable is equal to zero}.
\end{rem}
The following lemma illustrates the relation between the modulus of continuity $\omega_F$ and the partial derivatives of $F$.
\begin{lem}\label{lemreg}Assume $F$ is continuously differentiable in a neighborhood $N$ of $p$. Then
\[\omega_F(p,v)=\sup_{\substack{w\perp v\\\|w\|=1}}\|D_wF(p)\|.\]
\end{lem}
\begin{proof} Since $F'(z)$ is continuous at $p$, for $\{\epsilon_n\}\to 0$ there exists $\{\d_n\}\to 0$ such that if $z\in B_{n+1}(p,\d_n)$ and
$\|w\|=1$ then $\|F'(z)(w)\|\le \|F'(p)(w)\|+\epsilon_n$. Hence, using the Mean Value Theorem,
\begin{align*} \sup_{\substack{x,y\in B_{n+1}(p,\d_n)\\ x-p,y-p\perp v\\x\ne y}}\frac{\|F(x)-F(y)\|}{\|x-y\|}\le & \sup_{\substack{x,y,z \in B_{n+1}(p,\d_n)\\ x-p,y-p \perp v\\x\ne y}}\frac{\|F'(z)(x-y)\|}{\|x-y\|} \le \sup_{\substack{z \in B_{n+1}(p,\d_n)\\ u \in B_{n+1}(0,2 \d_n) \\ u \perp v\\u\ne 0}}\frac{\|F'(z)(u)\|}{\|u\|} \\
= & \sup_{\substack{z \in B_{n+1}(p,\d_n)\\ d \in (0,2 \d_n) \\ w \perp v\\ \|w\|=1}}\frac{\|F'(z)(d w)\|}{\|d w\|}= \sup_{\substack{z \in B_{n+1}(p,\d_n) \\ w \perp v\\ \|w\|=1}} \|F'(z)(w)\| \\
\le & \sup_{\substack{w \perp v\\ \|w\|=1}} \|F'(p)( w)\|+\epsilon_n = \sup_{\substack{w \perp v\\ \|w\|=1}} \|D_w F(p)\|+\epsilon_n .\end{align*}
Then, taking the limit when $n\to\infty$, we obtain
\[\omega_F(p,v)\le \sup_{\substack{w\perp v\\\|w\|=1}}\|D_wF(p)\|.\]
On the other hand, assume $w\in{\mathbb S}^n$ and $w\perp v$. Then $F(p+tw)=F(p)+t(D_wF(p)+g(t))$ where $g$ is continuous and $\lim_{t\to0}g(t)=0$. Therefore,
\[\|D_wF(p)\|=\left\|\frac{F(p+tw)-F(p)}{t}-g(t)\right\|\le \sup_{\substack{x,y\in B_{n+1}(p,t)\\ x-p,y-p\perp v\\x\ne y}}\left[\frac{\|F(x)-F(y)\|}{\|x-y\|}+\|g(t)\|\right].\]
Taking the limit when $t$ tends to zero, $\|D_wF(p)\|\le\omega_F(p,v)$, which ends the proof.
\end{proof}
\begin{rem} This definition of the modulus of continuity $\omega_F(\cdot,\cdot)$ is somewhat similar to the definition of strong absolute differentiation which appears in \cite[expression (1)]{ChIn}:
\noindent Let $(X,d_X)$ and $(Y,d_Y)$ be two metric spaces and consider $F:X\to Y$ and $p\in X$. We say $F$ is \emph{strongly absolutely differentiable at $p$} if and only if the following limit exists:
\[F^{|\prime|}(p):=\lim_{\substack{(x,y)\to(p,p)\\ x\ne y}}\frac{d_Y(F(x),F(y))}{d_X(x,y)}.\]
However, notice that there are some important differences between $\omega_F(\cdot,\cdot)$ and $F^{|\prime|}$ when $X={\mathbb R}^n$ and $Y={\mathbb R}^m$. First, since $\omega_F(\cdot,\cdot)$ is defined with a supremum, it is well defined in more cases than $F^{|\prime|}$. Also, in the definition of $\omega_F(\cdot,v)$ we are avoiding the direction of a certain vector $v$. This means that, while strong absolute differentiability implies continuity at the point (see \cite[Theorem 3.1]{ChIn}), finiteness of $\omega_F(\cdot,\cdot)$ does not.\par
Regarding the similarities, when the partial derivatives of $F$ exist, $F^{|\prime|}=\|\sum_{k=1}^n\frac{\partial F}{\partial x_k}\|$ (see \cite[Theorem 3.6]{ChIn}).
\end{rem}
\begin{exa}\label{examc}Consider again $F(x,y):=1+(y-x^2)^{\frac{2}{3}}$ and $S=\{(x,y)\in{\mathbb R}^2\ : y=x^2\}$. As was stated in Example \ref{exa-change}, we have that $F|_{{\mathbb R}^2\backslash S}\in{\mathcal C}^\infty({\mathbb R}^2\backslash S)$ and
\[\nabla F(x,y)=\frac{2}{3}(y-x^2)^{-\frac{1}{3}}(-2x,1),\quad \mbox{for}\ (x,y)\in{\mathbb R}^2\backslash S.\]
Therefore, $\omega_F(p,v)<+\infty$ for every $(p,v)\in ({\mathbb R}^2\backslash S)\times{\mathbb P}^1$.\par
On the other hand, for $p=(x_0,x_0^2)\in S$ and $v=(v_1:v_2)\in{\mathbb P}^1$, if $x=(x_1,y_1)-p\perp v$ then $x=\l(-v_2,v_1)+p$ for some $\l\in {\mathbb R}$. Analogously, we take $y=\mu(-v_2,v_1)+p$ for some $\mu\in {\mathbb R}$. Hence,
\begin{align*}\omega_F(p,v)= & \lils{\substack{(x,y)\to(p,p)\\x-p,y-p\perp v\\x\ne y}}\frac{\|F(x)-F(y)\|}{\|x-y\|}=\lils{\substack{(\l,\mu)\to(0,0)\\\l\ne \mu}}\frac{|F(\l(-v_2,v_1)+p)-F(\mu(-v_2,v_1)+p)|}{\|(\l-\mu)(-v_2,v_1)\|} \\= &
\lils{\substack{(\l,\mu)\to(0,0)\\ \l\ne \mu}}\frac{|[\l(2x_0v_2+v_1)-\l^2v_2^2]^\frac{2}{3}-[\mu(2x_0v_2+v_1)-\mu^2v_2^2]^\frac{2}{3}|}{|\l-\mu|} \end{align*}
We now can consider two cases: $(v_1:v_2)=(-2x_0:1)$ and $(v_1:v_2)\ne(-2x_0:1)$. In the first case, taking into account that $z^2+z+1\ge 3/4$ for every $z\in{\mathbb R}$,
\begin{align*}\omega_F(p,v)= & \lils{\substack{(\l,\mu)\to(0,0)\\\l\ne \mu}} \frac{|(-\l^2v_2^2)^\frac{2}{3}-(-\mu^2v_2^2)^\frac{2}{3}|}{|\l-\mu|}=\lils{\substack{(\l,\mu)\to(0,0)\\\l\ne \mu}} \frac{|\mu^\frac{4}{3}-\l^\frac{4}{3}||v_2|^\frac{4}{3}}{|\l-\mu|} \\= & |v_2|^\frac{4}{3}\lils{\substack{(\l,\mu)\to(0,0)\\\l\ne \mu}} \left|\mu^\frac{1}{3}+\frac{\l}{\mu^\frac{2}{3}+\mu^\frac{1}{3}\l^\frac{1}{3}+\l^\frac{2}{3}}\right|=|v_2|^\frac{4}{3}\lils{\substack{(\l,\mu)\to(0,0)\\\l\ne \mu}} \left|\mu^\frac{1}{3}+\l^\frac{1}{3}\frac{1}{\(\frac{\mu}{\l}\)^\frac{2}{3}+\(\frac{\mu}{\l}\)^\frac{1}{3}+1}\right| \\ \le & |v_2|^\frac{4}{3}\lils{\substack{(\l,\mu)\to(0,0)\\\l\ne \mu}} \left|\mu^\frac{1}{3}+\frac{4}{3}\l^\frac{1}{3}\right|=0. \end{align*}
Observe that in this deduction we have assumed $\l\ne0$. It is clear that, when $\l=0$, the limit is zero as well.\par
In the case $(v_1:v_2)\ne(-2x_0:1)$ the quotient inside the limit is unbounded and $\omega_F(p,v)=+\infty$. Therefore,
\[\omega_F^{-1}([0,+\infty))=\left(({\mathbb R}^2\backslash S)\times{\mathbb P}^1\right)\cup\{((x,x^2),(-2x:1))\in{\mathbb R}^2\times{\mathbb P}^1\ :\ x\in{\mathbb R}\}.\]
\end{exa}
\section{Sufficient conditions ensuring a Lipschitz condition along a foliation}
The next Lemma is a key ingredient in the main result of this section. It gives an alternative expression for the rotation matrix provided by Rodrigues' rotation formula and generalizes it to arbitrary dimension.
\begin{lem}[Codesido's Rotation Formula]\label{crf} Let $x,y\in{\mathbb R}^{n+1}$ and define $K_x^y\in{\mathcal M}_{n+1}({\mathbb R})$ as
\[K_x^y :=yx^T-xy^T.\]
Now, let $u,v\in {\mathbb S}^n$, $v\not= -u$, and define $R_u^v\in{\mathcal M}_{n+1}({\mathbb R})$ as
\begin{equation}\label{Codfor}R_u^v:=\Id+K_u^v+\frac{1}{1+\<u,v\>}(K_u^v)^2,\end{equation}
where $\Id$ is the identity matrix of order $n+1$.
Then, $R_u^v\in \operatorname{SO}(n+1)$ and $R_u^vu=v$, that is, $R_u^v$ is a rotation in ${\mathbb R}^{n+1}$ that sends the unitary vector $u$ to $v$. Furthermore, the function $R:\{(u,v)\in{\mathbb S}^n\times{\mathbb S}^n\ :\ u\ne-v\}\to \operatorname{SO}(n+1)$, defined by $R(u,v):=R_u^v$, is analytic.
\end{lem}
\begin{proof} First, we show that
$R_u^v\in\operatorname{O}(n+1)$, that is, $(R_u^v)^T=(R_u^v)^{-1}$. Observe that $(K_u^v)^T=-K_u^v$ and so $[(K_u^v)^2]^T=(K_u^v)^2$. That is, $(R_u^v)^T= \Id-K_u^v+\frac{1}{1+\<u,v\>}(K_u^v)^2$.
Therefore,
\begin{align*}(R_u^v)^TR_u^v= & \left[\Id-K_u^v+\frac{1}{1+\<u,v\>}(K_u^v)^2\right]\left[\Id+K_u^v+\frac{1}{1+\<u,v\>}(K_u^v)^2\right] \\ = & \Id+\frac{1-\<u,v\>}{1+\<u,v\>}(K_u^v)^2+\frac{1}{(1+\<u,v\>)^2}(K_u^v)^4.\end{align*}
Now,
\begin{align*}(K_u^v)^2= & (vu^T-uv^T)^2=vu^Tvu^T+uv^Tuv^T-vu^Tuv^T-uv^Tvu^T=\<u,v\>(vu^T+uv^T)-(vv^T+uu^T),\\ (K_u^v)^4= & \left[\<u,v\>(vu^T+uv^T)-(vv^T+uu^T)\right]^2=\(\<u,v\>^2-1\)(K_u^v)^2.\end{align*}
Therefore,
\begin{align*}(R_u^v)^TR_u^v= & \Id+\frac{1-\<u,v\>}{1+\<u,v\>}(K_u^v)^2-\frac{1-\<u,v\>^2}{(1+\<u,v\>)^2}(K_u^v)^2=\Id.\end{align*}
Clearly, $R_u^v$ is analytic on $S=\{(u,v)\in{\mathbb S}^n\times{\mathbb S}^n\ :\ u\ne-v\}$ and so is the determinant function. Now, we are going to prove that $S$ is a connected set: firstly, define
the linear subspaces
\begin{displaymath}V_1:=\{z \in {\mathbb R}^{2n+2} : \ z_i=-z_{n+1+i}, \quad i=1,2,\ldots n+1\},\end{displaymath}
\begin{displaymath}V_2:=\{z \in {\mathbb R}^{2n+2} : \ z_i=0, \quad i=1,2,\ldots n+1\},\end{displaymath}
\begin{displaymath}V_3:=\{z \in {\mathbb R}^{2n+2} : \ z_{n+1+i}=0, \quad i=1,2,\ldots n+1\},\end{displaymath}
and note that $\codim(V_i)=n+1\ge 2$ for all $i\in \{1,2,3\}$. Then, it is known that $X:={\mathbb R}^{2n+2}\setminus (V_1 \cup V_2\cup V_3)$ is connected, see \cite[Chapter V, Problem 5]{Dieu69}, and since the projection $\pi: X \to S$ defined as
\begin{displaymath}\pi(z)=\left(\frac{(z_1,z_2,\ldots,z_{n+1})}{\|(z_1,z_2,\ldots,z_{n+1})\|},\frac{(z_{n+2},z_{n+3},\ldots,z_{2n+2})}{\|(z_{n+2},z_{n+3},\ldots,z_{2n+2})\|} \right),\end{displaymath}
is continuous and onto, we have that $S$ is connected too.
Therefore, $|R_u^v|$ is continuous on the connected set $S$ and takes values in $\{-1,1\}$, so $|R_u^v|$ is constant. Since $|R_u^u|=|\Id|=1$ we have that $|R_u^v|=1$ on $S$, that is, $R_u^v\in\operatorname{SO}(n+1)$.\par
Last, observe that
\begin{align*}R_u^vu & =u+(vu^T-uv^T)u+\frac{\<u,v\>(vu^T+uv^T)u-(vv^T+uu^T)u}{1+\<u,v\>} \\ &=u+v-uv^Tu+\frac{\<u,v\>(v+uv^Tu)-(vv^Tu+u)}{1+\<u,v\>} \\ & =v+\frac{\<u,v\>(v+uv^Tu+u-uv^Tu)-(vv^Tu+u)+u-uv^Tu}{1+\<u,v\>}=v+\frac{\<u,v\>(v+u)-vv^Tu-uv^Tu}{1+\<u,v\>}\\ &=v+\frac{\<u,v\>(v+u)-\<u,v\>v-\<u,v\>u}{1+\<u,v\>}=v.\end{align*}
\end{proof}
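As a quick numerical sanity check of Lemma \ref{crf} (a sketch of ours, not part of the paper), the formula $R_u^v = \Id + K + \frac{1}{1+\langle u,v\rangle}K^2$ with $K = vu^T - uv^T$ can be tested on random unit vectors in ${\mathbb R}^4$: the resulting matrix should be orthogonal and send $u$ to $v$.

```python
import math, random

# Sanity check of Codesido's rotation formula: for unit u, v with v != -u,
# R = Id + K + K^2/(1+<u,v>), K = v u^T - u v^T, satisfies R u = v and R^T R = Id.

def rot(u, v):
    m = len(u)
    K  = [[v[i]*u[j] - u[i]*v[j] for j in range(m)] for i in range(m)]
    K2 = [[sum(K[i][r]*K[r][j] for r in range(m)) for j in range(m)] for i in range(m)]
    c = 1.0 + sum(a*b for a, b in zip(u, v))          # 1 + <u,v>, nonzero since v != -u
    return [[(i == j) + K[i][j] + K2[i][j]/c for j in range(m)] for i in range(m)]

def unit(m):
    x = [random.gauss(0.0, 1.0) for _ in range(m)]
    r = math.sqrt(sum(t*t for t in x))
    return [t/r for t in x]

random.seed(1)
for _ in range(5):
    u, v = unit(4), unit(4)                           # random points of S^3
    R = rot(u, v)
    Ru = [sum(R[i][j]*u[j] for j in range(4)) for i in range(4)]
    assert all(abs(Ru[i] - v[i]) < 1e-10 for i in range(4))   # R u = v
    for i in range(4):                                        # R^T R = Id
        for j in range(4):
            dot = sum(R[r][i]*R[r][j] for r in range(4))
            assert abs(dot - (1.0 if i == j else 0.0)) < 1e-10
print("R_u^v is orthogonal and maps u to v")
```

The determinant condition $|R_u^v|=1$ is not tested directly; it follows from the connectedness argument in the proof once orthogonality holds.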
\begin{rem} For $n=1$ the function $R$ admits a continuous extension to ${\mathbb S}^1\times{\mathbb S}^1$. Indeed, let us consider $u,v\in {\mathbb S}^1$, $v\not=-u$. Then $u=(\cos(\a),\sin(\a))$ and $v=(\cos(\b),\sin(\b))$ for some $\a,\b \in {\mathbb R}$, with $\b\not= \a+(2k+1)\pi$, $k\in {\mathbb Z}$. Now, a direct computation shows that
\begin{displaymath}R_{u}^v=\left(\begin{array}{cc} \cos(\a-\b) & \sin(\a-\b) \\ -\sin(\a-\b) & \cos(\a-\b) \end{array}\right).\end{displaymath}
Therefore,
\begin{displaymath}\lim_{v\to -u} R_{u}^v =\lim_{\beta \to \alpha+\pi} \left(\begin{array}{cc} \cos(\a-\b) & \sin(\a-\b) \\ -\sin(\a-\b) & \cos(\a-\b) \end{array}\right)= \left(\begin{array}{cc}-1 & 0 \\0 & -1\end{array}\right).\end{displaymath}
However, for $n\ge 2$ the function $R$ does not admit a continuous extension to ${\mathbb S}^n\times{\mathbb S}^n$. To see this, consider $u\in{\mathbb S}^n$, $w\in{\mathbb R}^{n+1}$, $w\perp u$, $w\not=0$ and define $v(w)=(w-u)/\|w-u\|$. Observe that $v(w)\in {\mathbb S}^n$, $v(w)\not= -u$, $\displaystyle\lim_{\|w\|\to 0} v(w)=-u$ and
\[K_u^{v(w)}=\frac{1}{\|w-u\|}K_u^w.\]
Hence,
\begin{align*}R_u^{v(w)} & =\Id+\frac{1}{\|w-u\|}K_u^w+\frac{\| w-u\|}{\| w-u\|+\<u,w\>-1}\frac{1}{\| w-u\|^2}(K_u^w)^2 \\ &
=\Id+\frac{1}{\|w-u\|}K_u^w+\frac{-ww^T-\|w\|^2uu^T}{\| w-u\|(\| w-u\|-1)}.
\end{align*}
Now, consider $\bar{w}\perp u$ with $\|\bar{w}\|=1$. Therefore, if it exists,
\[\lim_{v \to -u}R_u^{v}=\lim_{t \to 0}R_u^{ v(t \bar{w})}=\Id+\lim_{t \to0}\frac{-t^2 (\bar{w}\bar{w}^T+uu^T)}{\sqrt{t^2+1}(\sqrt{t^2+1}-1)}=\Id-2 (\bar{w}\bar{w}^T+uu^T).\]
But in ${\mathbb R}^{n+1}$, with $n\ge 2$, there exist at least two independent unitary vectors $\bar{w}_1$ and $\bar{w}_2$ in $\<u\>^{\perp}$, each of them leading to a different value of the right-hand side of the previous expression. Hence, the $\displaystyle \lim_{v \to -u}R_u^{v}$ does not exist and thus $R$ can not be continuously extended to ${\mathbb S}^n\times{\mathbb S}^n$.
\end{rem}
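The direction dependence of the limit can also be seen numerically (our own illustration, not from the paper): in ${\mathbb R}^3$, approaching $-u$ along $v(t\bar w)$ for two orthogonal choices of $\bar w$ produces two different limit matrices $\Id-2(\bar w\bar w^T+uu^T)$.

```python
import math

# For n = 2 and u = (0,0,1), approach -u along v(t*w) for two orthogonal w's.
# The limits Id - 2(w w^T + u u^T) differ, so R has no continuous extension.

def rot(u, v):
    m = len(u)
    K  = [[v[i]*u[j] - u[i]*v[j] for j in range(m)] for i in range(m)]
    K2 = [[sum(K[i][r]*K[r][j] for r in range(m)) for j in range(m)] for i in range(m)]
    c = 1.0 + sum(a*b for a, b in zip(u, v))
    return [[(i == j) + K[i][j] + K2[i][j]/c for j in range(m)] for i in range(m)]

def approach(u, w, t):
    # v(t w) = (t w - u)/||t w - u||, which tends to -u as t -> 0
    d = [t*wi - ui for wi, ui in zip(w, u)]
    r = math.sqrt(sum(x*x for x in d))
    return rot(u, [x/r for x in d])

u = (0.0, 0.0, 1.0)
t = 1e-4
R1 = approach(u, (1.0, 0.0, 0.0), t)   # limit Id - 2(e1 e1^T + u u^T) = diag(-1, 1,-1)
R2 = approach(u, (0.0, 1.0, 0.0), t)   # limit Id - 2(e2 e2^T + u u^T) = diag( 1,-1,-1)
L1 = [[-1, 0, 0], [0, 1, 0], [0, 0, -1]]
L2 = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]
assert all(abs(R1[i][j] - L1[i][j]) < 1e-3 for i in range(3) for j in range(3))
assert all(abs(R2[i][j] - L2[i][j]) < 1e-3 for i in range(3) for j in range(3))
print("two approach directions give different limits")
```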
The following is the main result in this section and gives sufficient conditions for the existence of an $n$-foliation which allows $F$ to satisfy condition (C2) in Theorem \ref{thmgen}.
\begin{thm}\label{mainthm}Let $U$ be an open subset of ${\mathbb R}^{n+1}$, $p_0\in U$ and $F:U\to{\mathbb R}^{n+1}$ continuous. Assume there exist an open interval $J$ with $0\in J$ and a simple path $\c=(\c_1,\c_2)\in{\mathcal C}^1(J, U\times{\mathbb P}^n)$ such that the following conditions hold:
\begin{itemize}
\item[(i)] $\c_1(0)=p_0.$
\item[(ii)] There exist $\d,M\in{\mathbb R}^+$, such that $\omega_F(\c_1(t),\c_2(t),\d)<M$ for all $t\in J$.
\item[(iii)] $\c_1'(0) \not\perp \c_2(0)$.
\end{itemize}
Then, there exist an open neighborhood $\hat{U}\subset U\subset {\mathbb R}^{n+1}$ of $p_0$ and a map $\Phi$, constructed in the proof, such that $\Phi(s,y)$ is
a local $n$-foliation of $\hat{U}$. Moreover, $F\circ \Phi$ and $(\Phi')^{-1}$ are Lipschitz in a neighborhood of zero when fixing the first variable.
\end{thm}
\begin{proof} Assume, without loss of generality, that $\c_1$ is parameterized by arc length, that is, $\|\c_1'(t)\|=1$ for all $t\in J$. Consider ${\mathbb S}^n$ as a covering space of ${\mathbb P}^n$ with the usual projection $\pi:{\mathbb S}^n\to{\mathbb P}^n$. Take $v_0\in\pi^{-1}(\c_2(0))$ such that $v_0\not= -e_1$, where $e_1=(1,0,\dots,0)\in{\mathbb R}^{n+1}$, and consider the lift $\tilde\c=(\c_1,\tilde\c_2):J\to U\times{\mathbb S}^n$ of $\c$ such that $\tilde \c(0)=(p_0,v_0)$.
Now, $\tilde\c_2$ is continuous, and $\<e_1,\tilde \c_2(0)\>=\<e_1,v_0\>\not=-1$ so we can consider an open interval $\tilde J \subset J$ where $\<e_1,\tilde \c_2(s)\>\ne-1$ (that is, $\tilde \c_2(s)\ne-e_1$) for $s\in\tilde J$. Since $\tilde \c$ is differentiable and $\|\tilde \c_2(s)\|=1$ for every $s\in \tilde J$, we can consider the continuously differentiable function
\begin{center}\begin{tikzcd}[row sep=tiny]
\tilde J \arrow{r}{A} & \operatorname{SO}(n+1)\\
s \arrow[mapsto]{r} & A(s):=R_{e_1}^{\tilde\c_2(s)}
\end{tikzcd}
\end{center}
where $R_u^v$ is defined as in Lemma \ref{crf}. Observe that denoting by $a_j(s)$ the columns of $A(s)$, that is,
\begin{displaymath}A(s)=\left(\begin{array}{c|c|c|c}a_1(s) & a_2(s) & \ldots & a_{n+1}(s)\end{array}\right),\end{displaymath}
we have that $a_1(s)=\tilde\c_2(s)$ and $\{ a_2(s),a_3(s),\ldots, a_{n+1}(s)\}$ is an orthonormal basis of $\tilde\c_2(s)^{\perp}$ (remember that $A(s) e_1=\tilde\c_2(s)$ and that $A(s)$ is an orthogonal matrix).
Now, we can define the differentiable function
$\Phi: \tilde J\times {\mathbb R}^{n}\to {\mathbb R}^{n+1}$ given by
\begin{displaymath}\Phi(s,y):=\c_1(s)+A(s)(0,y).\end{displaymath}
\noindent {\it Claim 1. $g_s(y):=\Phi(s,y)$ is a local $n$-foliation.}
We easily compute
\begin{displaymath}\frac{\partial \Phi}{\partial s}(s,y)=\c_1'(s)+A'(s) (0,y),\end{displaymath}
\begin{displaymath}\frac{\partial \Phi}{\partial y}(s,y)=\left(\begin{array}{c|c|c|c}a_2(s) & a_3(s) & \ldots & a_{n+1}(s)\end{array}\right).\end{displaymath}
So
\begin{displaymath}\Phi'(0,0)=\left(\begin{array}{c|c|c|c|c}\c_1'(0) & a_2(0) & a_3(0) & \ldots & a_{n+1}(0)\end{array}\right),\end{displaymath}
and since, by (iii), $\c_1'(0)\not\perp \tilde \c_2(0)=a_1(0)$ we have
\begin{displaymath}J_{\Phi}(0,0)=|\Phi'(0,0)|\not=0.\end{displaymath}
Then, by the inverse function theorem there exist open sets $\hat{J}\subset \tilde J$, $\hat{V}\subset {\mathbb R}^n$ and $\hat{U}\subset U$ such that
$\hat{J}\times\hat{V}$ contains the origin and $\Phi:\hat{J}\times\hat{V}\to\hat{U}$ is a
diffeomorphism. Moreover, by (i), $ \Phi(0,0)=p_0$, so $\Phi(s,y)$ is
a local $n$-foliation of $\hat{U}$.
\medbreak
\noindent {\it Claim 2. $F\circ \Phi$ is Lipschitz continuous in a neighborhood of zero when fixing the first variable.}
Notice that, by construction, $\Phi(s,y)- \c_1(s)\in\<\tilde\c_2(s)\>^{\perp}$. Now, condition (ii) implies that
\begin{align*} & \|F\circ\Phi(s,y_1)-F\circ\Phi(s,y_2)\| = \|F(\c_1(s)+A(s)(0,y_1))-F(\c_1(s)+A(s)(0, y_2))\| \\ \le & \omega_F(\c_1(s),\c_2(s),\d)\|A(s)(0,y_1)-A(s)(0, y_2)\|\le M \sup_{s\in \hat{J}}\|A(s)\| \|y_1-y_2\|\end{align*}
for every $s\in \hat{J}$ and $y_1,y_2\in B_{n}\left(0,\displaystyle\frac{\d}{\sup_{s\in \hat{J}}\|A(s)\|}\right)$.\par
\noindent {\it Claim 3. $(\Phi')^{-1}$ is Lipschitz continuous in a neighborhood of zero when fixing the first variable.}\par
Fix $s\in \hat{J}$. We have that
\[\Phi'(s,y)=\left(\begin{array}{c|c|c|c|c}\c_1'(s)+A'(s)(0,y) & a_2(s) & a_3(s) & \ldots & a_{n+1}(s)\end{array}\right).\]
Then,
\[\|\Phi'(s,x)-\Phi'(s, y)\|\le \sup_{s\in\hat{J}}\|A'(s)\| \|x- y\|,\]
so $\Phi'$ is Lipschitz continuous in a neighborhood of zero when fixing $s$.
On the other hand, $(\Phi')^{-1}$ is a continuous function, therefore locally bounded. Hence, by Corollary \ref{coreq}.3, $(\Phi')^{-1}$ is Lipschitz continuous in a neighborhood of zero when fixing the first variable.
\end{proof}
\renewcommand{\abstractname}{Acknowledgments}
\begin{abstract}The authors want to express their gratitude to Prof. Santiago Codesido (Universit\'e de Gen\`eve, Switzerland) for suggesting the rotation matrix expression \eqref{Codfor} and for several useful discussions.
\end{abstract}
| {
"timestamp": "2018-01-08T02:15:42",
"yymm": "1801",
"arxiv_id": "1801.01724",
"language": "en",
"url": "https://arxiv.org/abs/1801.01724",
"abstract": "We prove the following result: if a continuous vector field $F$ is Lipschitz when restricted to the hypersurfaces determined by a suitable foliation and a transversal condition is satisfied at the initial condition, then $F$ determines a locally unique integral curve. We also present some illustrative examples and sufficient conditions in order to apply our main result.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "A Lipschitz condition along a transversal foliation implies local uniqueness for ODEs"
} |
https://arxiv.org/abs/1707.00857 | Existence results for a linear equation with reflection, non-constant coefficient and periodic boundary conditions | This work is devoted to the study of first order linear problems with involution and periodic boundary value conditions. We first prove a correspondence between a large set of such problems with different involutions to later focus our attention to the case of the reflection. We study then different cases for which a Green's function can be obtained explicitly and derive several results in order to obtain information about its sign. Once the sign is known, maximum and anti-maximum principles follow. We end this work with more general existence and uniqueness of solution results. | \section{Introduction}
In a previous paper by the authors \cite{Cab4}, a Green's function for the following linear problem with reflection was found.
\begin{equation}\label{eqoriginal}x'(t)+\omega x(-t)=h(t), t\in I;\quad x(-T)=x(T),\end{equation}
where $T\in{\mathbb R}^+$, $\omega\in{\mathbb R}\backslash\{0\}$ and $h\in L^1(I)$, with $I=[-T,T]$. The precise form of this Green's function was given by the following theorem.
\begin{thm}[\cite{Cab4}, Proposition 3.2]\label{Greenf} Suppose that $\omega \neq k \, \pi/T$, $k \in {\mathbb Z}$. Then problem (\ref{eqoriginal}) has a unique solution given by the expression
\begin{equation}
\label{e-u}
u(t):=\int_{-T}^T\overline G(t,s)h(s)\dif s,
\end{equation}
where $$\overline{G}(t,s):=\omega\,G(t,-s)-\frac{\partial G}{\partial s}(t,s)$$
and $G$ is the Green's function for the harmonic oscillator
$$x''(t)+\omega^2x(t)=0;\ x(T)=x(-T),\ x'(T)=x'(-T).$$
\end{thm}
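Theorem \ref{Greenf} can be checked numerically with a manufactured solution (a sketch of ours, not part of the paper; the closed form $G(t,s)=\cos(\omega(|t-s|-T))/(2\omega\sin(\omega T))$ for the periodic harmonic oscillator is an assumed standard expression): choosing $x(t)=\sin(\pi t)$, $\omega=T=1$ and $h:=x'+\omega x(-\cdot)$, the integral $u(t)=\int_{-T}^{T}\overline G(t,s)h(s)\,\mathrm ds$ should reproduce $x$.

```python
import math

# Check u(t) = int \bar G(t,s) h(s) ds solves x'(t) + w x(-t) = h(t), x(-T) = x(T),
# with \bar G(t,s) = w G(t,-s) - dG/ds(t,s) and (assumed closed form)
# G(t,s) = cos(w(|t-s|-T)) / (2 w sin(wT)).

T, w = 1.0, 1.0                       # note w != k*pi/T for every integer k

def gbar(t, s, sgn):
    # sgn = sign(t-s), fixed on each smooth piece of the kernel
    return (math.cos(w*(abs(t+s)-T)) - sgn*math.sin(w*(abs(t-s)-T)))/(2*math.sin(w*T))

def simpson(f, a, b, n=400):
    step = (b - a)/n
    tot = f(a) + f(b) + 4*sum(f(a+(2*i-1)*step) for i in range(1, n//2+1)) \
          + 2*sum(f(a+2*i*step) for i in range(1, n//2))
    return tot*step/3

xsol = lambda t: math.sin(math.pi*t)                                # manufactured solution
h    = lambda t: math.pi*math.cos(math.pi*t) - math.sin(math.pi*t)  # h = x' + w x(-t)

def u(t):
    pts = sorted({-T, T, t, -t})      # split at the kernel's jump (s=t) and kink (s=-t)
    total = 0.0
    for a0, b0 in zip(pts, pts[1:]):
        sgn = 1.0 if t > (a0 + b0)/2 else -1.0
        total += simpson(lambda s: gbar(t, s, sgn)*h(s), a0, b0)
    return total

for t in (-0.7, 0.3, 0.9):
    assert abs(u(t) - xsol(t)) < 1e-6
print("u from the Green's function matches the manufactured solution")
```

The quadrature is split at $s=\pm t$ because $\overline G(t,\cdot)$ jumps at $s=t$ and has a kink at $s=-t$.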
The sign properties of this Green's function were further studied in \cite{Cab5}, where methods similar to those found in \cite{gijwjiea, gijwjmaa, gijwems} are used in order to derive existence and multiplicity results.\par
Still, obtaining the Green's function of problem \eqref{eqoriginal} with a non-constant coefficient has not been accomplished yet. In this article we will study this case and further generalize the existence of Green's functions. Through a correspondence theorem, we will also be able to extend these results to problems with other involutions. We will also obtain new maximum and anti-maximum principles and existence and uniqueness results.
\section{Order one linear problems with involutions}
Assume $\phi$ is a differentiable involution on $[\phi(T),T]$. Let $a,b,c,d\in L^1([\phi(T),T])$ and consider the following problem
\begin{equation}\label{proinv1} d(t)x'(t)+c(t)x'(\phi(t))+b(t)x(t)+a(t)x(\phi(t))=h(t),\ x(\phi(T))=x(T).
\end{equation}
It would be interesting to know under what circumstances problem \eqref{proinv1} is equivalent to another problem of the same kind but with a different involution, in particular the reflection. The following two results will help us to clarify this situation.
\begin{lem}[\textsc{Correspondence of Involutions}]\label{corofinv} Let $\phi$ and $\psi$ be two differentiable involutions\footnote{Every differentiable involution is a diffeomorphism.} on the intervals $[\phi(T),T]$ and $[\psi(S),S]$ respectively. Let $t_0$ and $s_0$ be the unique fixed points of $\phi$ and $\psi$ respectively. Then, there exists an orientation preserving diffeomorphism $f:[\psi(S),S]\to[\phi(T),T]$ such that $f(\psi(s))=\phi(f(s))\nkp\fa s\in[\psi(S),S]$.
\end{lem}
\begin{proof}
Let $g:[\psi(S),s_0]\to[\phi(T),t_0]$ be an orientation preserving diffeomorphism; in particular, $g(\psi(S))=\phi(T)$ and $g(s_0)=t_0$. Let us define
$$f(s):=\begin{cases} g(s) & \text{ if } s\in[\psi(S),s_0], \\ (\phi\circ g\circ\psi)(s) & \text{ if } s\in(s_0,S].\end{cases}$$
\par
Clearly, $f(\psi(s))=\phi(f(s))\nkp\fa s\in[\psi(S),S]$. Since $s_0$ is a fixed point for $\psi$, $f$ is continuous. Furthermore, because $\phi$ and $\psi$ are involutions, $\phi'(t_0)=\psi'(s_0)=-1$, so $f$ is differentiable. $f$ is invertible with inverse
$$f^{-1}(t):=\begin{cases} g^{-1}(t) & \text{ if } t\in[\phi(T),t_0], \\ (\psi\circ g^{-1}\circ\phi)(t) & \text{ if } t\in(t_0,T].\end{cases}$$
$f^{-1}$ is also differentiable for the same reasons.
\end{proof}
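The construction in the proof can be illustrated with concrete data (our choice, not from the paper): take $\phi(t)=1/t$ on $[1/2,2]$ with fixed point $t_0=1$, $\psi(s)=-s$ on $[-1,1]$ with fixed point $s_0=0$, and $g(s)=1/(1-s)$, an orientation preserving diffeomorphism from $[-1,0]$ onto $[1/2,1]$. The resulting $f$ satisfies the conjugation identity $f(\psi(s))=\phi(f(s))$.

```python
# Concrete instance of the Correspondence of Involutions lemma.
phi = lambda t: 1.0/t          # involution on [1/2, 2], fixed point 1
psi = lambda s: -s             # involution on [-1, 1], fixed point 0
g   = lambda s: 1.0/(1.0 - s)  # orientation preserving diffeo [-1,0] -> [1/2,1]

def f(s):
    # piecewise definition from the proof: g below the fixed point,
    # phi o g o psi above it
    return g(s) if s <= 0 else phi(g(psi(s)))

# f conjugates psi to phi on the whole interval
for k in range(-10, 11):
    s = k/10
    assert abs(f(psi(s)) - phi(f(s))) < 1e-12
print("f conjugates psi to phi")
```

Here $f(s)=1/(1+s)$ fails to occur; instead $f(s)=1/(1-s)$ for $s\le 0$ and $f(s)=1+s$ for $s>0$, which is continuous and differentiable at $s_0=0$ exactly as the proof predicts.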
\begin{rem}A similar argument could be made in the case of involutions defined on open, possibly unbounded, intervals.
\end{rem}
\begin{rem} The expression obtained for $f$ reminds us of the characterization of involutions given in \cite[Property 6]{Wie}.
\end{rem}
\begin{rem} It is easy to check that if $\phi$ is an involution defined on ${\mathbb R}$ with fixed point $t_0$ then $\psi(t):=\phi(t+t_0-s_0)-t_0+s_0$ is an involution defined on ${\mathbb R}$ with fixed point $s_0$ (cf. \cite[Property 2]{Wie}). For this particular choice of $\phi$ and $\psi$, we can take $g(s)=s-s_0+t_0$ in Lemma \ref{corofinv} and, in such a case, $f(s)=s-s_0+t_0$ for all $s\in{\mathbb R}$.
\end{rem}
\begin{cor}[\textsc{Change of Involution}] Under the hypothesis of Lemma \ref{corofinv},
problem \eqref{proinv1} is equivalent to
\begin{equation}\label{proinv2} \frac{d(f(s))}{f'(s)}y'(s)+\frac{c(f(s))}{f'(\psi(s))}y'(\psi(s))+b(f(s))y(s)+a(f(s))y(\psi(s))=h(f(s)),\ y(\psi(S))=y(S).
\end{equation}
\end{cor}
\begin{proof}Consider the change of variable $t=f(s)$ and $y(s):=x(t)=x(f(s))$. Then, using Lemma \ref{corofinv}, it is clear that
$$\frac{\dif y}{\dif s}(s)=\frac{\dif x}{\dif t}(f(s))\frac{\dif f}{\dif s}(s)\quad\text{and}\quad \frac{\dif y}{\dif s}(\psi(s))=\frac{\dif x}{\dif t}(\phi(f(s)))\frac{\dif f}{\dif s}(\psi(s)).$$
Making the proper substitutions in problem \eqref{proinv1} we get problem \eqref{proinv2} and vice-versa.
\end{proof}
This last result allows us to restrict our study of problem \eqref{proinv1} to the case where $\phi$ is the reflection $\phi(t)=-t$. In the following section we will further restrict our assumptions to the case where $c\equiv0$ in problem \eqref{proinv1}. A comment on how to proceed without this assumption will be made in the Appendix at the end of this work.
\section{Study of the homogeneous equation}
In this section we will study some different cases for the homogeneous equation
\begin{equation}\label{gen-eq} x'(t)+a(t)x(-t)+b(t)x(t)=0,\ t\in I,\end{equation}
where $a,b\in L^1(I)$. In order to solve it, we can consider the decomposition of equation \eqref{gen-eq} used in \cite{Cab4}. For any given function $f$, let $f_e(x):=\frac{f(x)+f(-x)}{2}$ be its even part and $f_o(x):=\frac{f(x)-f(-x)}{2}$ its odd part. Then, the solutions of equation \eqref{gen-eq} satisfy
\begin{align}\label{eq3.1}\begin{pmatrix}x_o' \\ x_e'\end{pmatrix} & =\begin{pmatrix}a_o-b_o & -a_e-b_e \\ a_e-b_e & -a_o-b_o\end{pmatrix}\begin{pmatrix}x_o \\ x_e\end{pmatrix}.
\end{align}
Realize that, a priori, solutions of system \eqref{eq3.1} need not be pairs of even and odd functions, nor provide solutions of \eqref{gen-eq}.\par
In order to solve this system, we will restrict problem \eqref{eq3.1} to those cases where the matrix
$$M(t)=\begin{pmatrix}a_o-b_o & -a_e-b_e \\ a_e-b_e & -a_o-b_o\end{pmatrix}(t)$$
satisfies that $[M(t),M(s)]:=M(t)M(s)-M(s)M(t)=0\nkp\fa t,s\in I$, for in that case the solution of the system \eqref{eq3.1} is given by the exponential of the integral of $M$\footnote{See the Appendix for more details on this matter.}. Clearly,
$$[M(t),M(s)]=2 \begin{pmatrix} a_e(t)b_e(s)-a_e(s)b_e(t) & a_o(s)[a_e(t)+b_e(t)]-a_o(t)[a_e(s)+b_e(s)]\\a_o(t)[a_e(s)+b_e(s)]-a_o(s)[a_e(t)+b_e(t)] & a_e(s)b_e(t)-a_e(t)b_e(s)\end{pmatrix}.$$
Let $A(t):=\int_0^t a(s)\dif s$, $B(t):=\int_0^t b(s)\dif s$. Let $\overline M$ be a primitive (save possibly a constant matrix) of $M$. We study now the different cases where $[M(t),M(s)]=0\nkp\fa t,s\in I$. We will always assume $a\not\equiv0$, since the case $a\equiv0$ is the well-known case of an ODE.\par
\textbf{(C1). $b_e=k\,a,\ k\in{\mathbb R},\ |k|<1$.}
\par
In this case, $a_o=0$ and $\overline M$ has the form
$$\overline M=\begin{pmatrix}-B_e & -(1+k)A_o\\ (1-k)A_o & -B_e\end{pmatrix}.$$
If we compute the exponential (see note in the Appendix for more information) we get
$$e^{\overline M(t)}=e^{-B_e(t)}\begin{pmatrix}\cos\(\sqrt{1-k^2}A(t)\) & -\frac{1+k}{\sqrt{1-k^2}}\sin\(\sqrt{1-k^2}A(t)\)\\ \frac{\sqrt{1-k^2}}{1+k}\sin\(\sqrt{1-k^2}A(t)\) & \cos\(\sqrt{1-k^2}A(t)\)\end{pmatrix}.$$
Therefore, if a solution to equation \eqref{gen-eq} exists, it has to be of the form
$$u(t)=\a e^{-B_e(t)}\cos\(\sqrt{1-k^2}A(t)\)+\b e^{-B_e(t)}\frac{1+k}{\sqrt{1-k^2}}\sin\(\sqrt{1-k^2}A(t)\),$$
with $\a$, $\b\in{\mathbb R}$. It is easy to check that all the solutions of equation \eqref{gen-eq} are of this form with $\b=-\a$.
\par
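The case (C1) formula can be verified numerically on concrete data (our choice, not from the paper): with $a(t)=\cos(\pi t)$ even, $k=1/2$ and $b(t)=t+k\,a(t)$ (so that $b_e=k\,a$), the function $u$ above with $\b=-\a$ should satisfy $x'(t)+a(t)x(-t)+b(t)x(t)=0$.

```python
import math

# Check the case (C1) solution u(t) = e^{-B_e(t)}[cos(theta) - (1+k)/sqrt(1-k^2) sin(theta)],
# theta = sqrt(1-k^2) A(t), against the equation x' + a x(-t) + b x(t) = 0.

k = 0.5
a  = lambda t: math.cos(math.pi*t)            # even coefficient
b  = lambda t: t + k*a(t)                     # b_e = k a, b_o(t) = t
A  = lambda t: math.sin(math.pi*t)/math.pi    # primitive of a with A(0) = 0
Be = lambda t: t*t/2                          # even part of the primitive of b

def u(t):
    th = math.sqrt(1 - k*k)*A(t)
    return math.exp(-Be(t))*(math.cos(th) - (1 + k)/math.sqrt(1 - k*k)*math.sin(th))

def residual(t, step=1e-5):
    du = (u(t + step) - u(t - step))/(2*step)  # central difference for u'(t)
    return du + a(t)*u(-t) + b(t)*u(t)

for t in (-0.8, -0.2, 0.1, 0.6):
    assert abs(residual(t)) < 1e-7
print("u solves the homogeneous equation in case (C1)")
```

The only delicate point is $B_e$: since $B(t)=t^2/2+k\sin(\pi t)/\pi$, its even part is $t^2/2$, and $B_e'=b_o$, which is what makes the coefficient of $\sin$ and $\cos$ cancel in the equation.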
\textbf{(C2). $b_e=k\,a,\ k\in{\mathbb R},\ |k|>1$.}
This case is analogous to (C1) and yields solutions of system \eqref{eq3.1} of the form
$$u(t)=\a e^{-B_e(t)}\cosh\(\sqrt{k^2-1}A(t)\)+\b e^{-B_e(t)}\frac{1+k}{\sqrt{k^2-1}}\sinh\(\sqrt{k^2-1}A(t)\),$$
which are solutions of equation \eqref{gen-eq} when $\b=-\a$.\par
\textbf{(C3). $b_e=a$.}
In this case the solutions of system \eqref{eq3.1} are of the form
\begin{equation}\label{eqc3}u(t)=\a e^{-B_e(t)}+2\b e^{-B_e(t)}A(t)\end{equation}
which are solutions of equation \eqref{gen-eq} when $\b=-\a$.\par
\textbf{(C4). $b_e=-a$.}
In this case the solutions of system \eqref{eq3.1} are the same as in case (C3), but they are solutions of equation \eqref{gen-eq} when $\b=0$.\par
\textbf{(C5). $b_e=a_e=0$.}
In this case the solutions of system \eqref{eq3.1} are of the form
$$u(t)=\a e^{A(t)-B(t)}+\b e^{-A(t)-B(t)},$$
which are solutions of equation \eqref{gen-eq} when $\a=0$.\par
\section{The cases (C1)--(C3) for the complete problem}
In the more complicated setting of the following nonhomogeneous problem
\begin{equation}\label{eq2cp} x'(t)+a(t)\,x(-t)+b(t)\,x(t)=h(t),\enskip a.\,e. t\in I,\quad x(-T)=x(T),
\end{equation}
we have still that, in the cases (C1)--(C3), it can be sorted out very easily. In fact, we get the expression of the Green's function for the operator.
We remark that in the three considered cases along this section the function $a$ must be even on $I$. We note also that $a$ is allowed to change its sign on $I$.
\par
First, we are going to prove a generalization of Theorem \ref{Greenf}.\par
Consider problem \eqref{eq2cp} with $a$ and $b$ constants.
\begin{equation}\label{eq2cp2} x'(t)+a\,x(-t)+b\,x(t)=h(t),\enskip t\in I,\quad x(-T)=x(T).
\end{equation}
Considering the homogeneous case ($h=0$), differentiating and making the proper substitutions, we arrive at the problem
\begin{equation}\label{eqhog} x''(t)+(a^2-b^2)x(t)=0,\enskip t\in I,\quad x(-T)=x(T),\quad x'(-T)=x'(T),
\end{equation}
which, for $b^2<a^2$, is the problem of the harmonic oscillator.
Which, for $b^2<a^2$, is the problem of the harmonic oscillator. It was shown in \cite[Proposition 3.1]{Cab4} that, under uniqueness conditions, the Green's function $G$ for problem \eqref{eqhog} satisfies the following properties in the case $b^2<a^2$, but they can be extended almost automatically to the case $b^2>a^2$.
\begin{lem} The Green's function $G$ satisfies the following properties.
\begin{enumerate}
\item $G\in{\mathcal C}(I^2,{\mathbb R})$,
\item $\frac{\partial G}{\partial t}$ and $\frac{\partial^2 G}{\partial t^2}$ exist and are continuous in $\{(t,s)\in I^2\ |\ s\ne t\}$,
\item $\frac{\partial G}{\partial t}(t,t^-)$ and $\frac{\partial G}{\partial t}(t,t^+)$ exist for all $t\in I$ and satisfy
$$\frac{\partial G}{\partial t}(t,t^-)-\frac{\partial G}{\partial t}(t,t^+)=1\nkp\fa t\in I,$$
\item $\frac{\partial^2 G}{\partial t^2}+(a^2-b^2)G=0\text{ in }\{(t,s)\in I^2\ |\ s\ne t\},$
\item \begin{enumerate}
\item $G(T,s)=G(-T,s)\nkp\fa s\in I$,
\item $\frac{\partial G}{\partial t}(T,s)=\frac{\partial G}{\partial t}(-T,s)\nkp\fa s\in(-T,T)$.
\end{enumerate}
\item $G(t,s)=G(s,t)$,
\item $G(t,s)=G(-t,-s)$,
\item $\frac{\partial G}{\partial t}(t,s)=\frac{\partial G}{\partial s}(s,t)$,
\item $\frac{\partial G}{\partial t}(t,s)=-\frac{\partial G}{\partial t}(-t,-s)$,
\item $\frac{\partial G}{\partial t}(t,s)=-\frac{\partial G}{\partial s}(t,s)$.
\end{enumerate}
\end{lem}
With these properties, we can prove the following Theorem (cf. \cite[Proposition 3.2]{Cab4}).
\begin{thm}\label{Greenf2} Suppose that $a^2-b^2 \neq n^2 \, (\pi/T)^2$, $n=0,1,\dots$ Then problem \eqref{eq2cp2} has a unique solution given by the expression
\begin{equation*}
\label{e-u2}
u(t):=\int_{-T}^T\overline G(t,s)h(s)\dif s,
\end{equation*}
where \begin{equation}\label{e-a-b}
\overline{G}(t,s):=a\,G(t,-s)-b\,G(t,s)+\frac{\partial G}{\partial t}(t,s)\end{equation}
is called the \textbf{Green's function} related to problem \eqref{eq2cp2}.
\end{thm}
\begin{proof}
Since problem \eqref{eq2cp2}, in the homogeneous case, can be reduced to a problem with the equation of problem \eqref{eqhog}, the classical theory of ODEs tells us that problem \eqref{eq2cp2} has at most one solution for all $a^2-b^2 \neq n^2 \, (\pi/T)^2$, $n=0,1,\dots$ Let us see that the function $u$ defined in (\ref{e-u}), with $\overline G$ given by \eqref{e-a-b}, fulfills (\ref{eq2cp2}):
\begin{eqnarray*}
& & u'(t)+a\, u(-t)+b\, u(t)=\frac{\dif}{\dif t}\int_{-T}^{-t}\overline G(t,s)h(s)\dif s+\frac{\dif}{\dif t}\int_{-t}^t\overline G(t,s)h(s)\dif s+\frac{\dif}{\dif t}\int_{t}^T\overline G(t,s)h(s)\dif s\\
&+&a\int_{-T}^T\overline G(-t,s)h(s)\dif s+b\int_{-T}^T\overline G(t,s)h(s)\dif s\\
&=&(\overline G(t,t^-)-\overline G(t,t^+))h(t)+\int_{-T}^T\left[a\frac{\partial G}{\partial t}(t,-s)-b\frac{\partial G}{\partial t}(t,s)+\frac{\partial^2 G}{\partial t^2}(t,s)\right]h(s)\dif s\\
&+&a\int_{-T}^T\left[a\,G(-t,-s)-b\,G(-t,s)+\frac{\partial G}{\partial t}(-t,s)\]h(s)\dif s
+b\int_{-T}^T\left[a\,G(t,-s)-b\,G(t,s)+\frac{\partial G}{\partial t}(t,s)\]h(s)\dif s.
\end{eqnarray*}
Using properties $(I)-(X)$, we deduce that this last expression is equal to $h(t)$, so the equation in problem \eqref{eq2cp2} is satisfied.
Property $(V)$ allows us to verify the boundary conditions.
$$ u(T)-u(-T)=$$
$$\int_{-T}^T\left[a\,G(T,-s)-b\,G(T,s)+\frac{\partial G}{\partial t}(T,s)-a\,G(-T,-s)+b\,G(-T,s)-\frac{\partial G}{\partial t}(-T,s)\right]h(s)\dif s=0.$$
\end{proof}
This last theorem leads us to the question ``What is the Green's function for the case (C3) with $a,b$ constants?''. The following Lemma answers that question.
\begin{lem}\label{lemGc3}Let $a\ne 0$ be a constant and let $G_{C3}$ be a real function defined as
$$G_{C3}(t,s):=\frac{t-s}{2}-a\,s\,t+\begin{cases} -\frac{1}{2}+a\,s & \text{ if } |s|<t, \\ \frac{1}{2}-a\,s & \text{ if } |s|<-t, \\ \frac{1}{2}+a\,t & \text{ if } |t|<s, \\ -\frac{1}{2}-a\,t & \text{ if } |t|<-s.\end{cases}$$
Then the following properties hold.
\begin{itemize}
\item $\frac{\partial G_{C3}}{\partial t}(t,s)+a(G_{C3}(t,s)+G_{C3}(-t,s))=0$ for a.\,e. $t,s\in (-1,1)$.
\item $\frac{\partial G_{C3}}{\partial t}(t,t^+)-\frac{\partial G_{C3}}{\partial t}(t,t^-)=1\nkp\fa t\in(-1,1)$.
\item $G_{C3}(-1,s)=G_{C3}(1,s)\nkp\fa s\in(-1,1)$.
\end{itemize}
\end{lem}
These properties are straightforward to check.\ Clearly, $G_{C3}$ is the Green's function for the problem
$$x'(t)+a[x(t)+x(-t)]=h(t), t\in[-1,1];\quad x(1)=x(-1),$$
that is, the Green's function for the case (C3) with $a,b$ constants and $T=1$. For other values of $T$, it is enough to make a change of variables.
\begin{rem} The function $G_{C3}$ can be obtained from the Green's function for the case $(C1)$ with $a$ constant, $b_o\equiv0$ and $T=1$ by taking the limit $k\to 1^-$.
\end{rem}
The following theorem shows how to obtain a Green's function for non constant coefficients of the equation using the Green's function for constant coefficients. We can find the same principle, that is, to compose a Green's function with some other function in order to obtain a new Green's function, in \cite[Theorem 5.1, Remark 5.1]{Cab6} and also in \cite[Section 2]{Gau}.\par
But first, we need to know how the Green's function should be defined in such a case. Theorem \ref{Greenf2} gives us the expression of the Green's function for problem \eqref{eq2cp2}, $\overline{G}(t,s):=a\,G(t,-s)-b\,G(t,s)+\frac{\partial G}{\partial t}(t,s)$. For instance, in the case (C1), if $\omega=\sqrt{a^2-b^2}$,
$$2\omega\sin(\omega T)\overline{G}(t,s):=\begin{cases} a \cos[\omega (s + t - T)]+ b \cos[\omega (s - t + T)] + \omega \sin[\omega (s - t + T)],& t>|s|,\\
a\cos[\omega (s + t - T)] +b \cos[\omega (-s + t + T)] - \omega \sin[\omega (-s + t + T)], & s>|t|,\\
a \cos[\omega (s + t + T)] +b \cos[\omega (-s + t + T)] - \omega \sin[\omega (-s + t + T)], & -t>|s|,\\
a \cos[\omega (s + t + T)] +b \cos[\omega (s - t + T)] + \omega \sin[\omega (s - t + T)], & -s>|t|. \end{cases}$$
Also, observe that $\overline G$ is continuous except at the diagonal, where $\overline G(t,t^-)-\overline G(t,t^+)=1$.
Similarly, we can obtain the explicit expression of the Green's function $\overline G$ for the cases (C2) and (C3) (see Lemma \ref{lemGc3}). In any case, we have that the Green's function for problem \eqref{eq2cp2} can be expressed as
$$2\omega\sin(\omega T)\overline{G}(t,s):=\begin{cases} \overline G_1(t,s),& t>|s|,\\
\overline G_2(t,s), & s>|t|,\\
\overline G_3(t,s), & -t>|s|,\\
\overline G_4(t,s), & -s>|t|, \end{cases}$$
where the $\overline G_j$, $j=1,\dots,4$, are analytic functions defined on ${\mathbb R}^2$.
In order to simplify the statement of the following Theorem, consider the following conditions.\par
$\mathbf{(C1^*)}$. (C1) is satisfied, $(1-k^2)A(T)^2\neq (n \, \pi)^2$ for all $n=0,1,\dots$ and $\cos\(\sqrt{1-k^2}A(T)\)\ne0$.\par
$\mathbf{(C2^*)}$. (C2) is satisfied and $(1-k^2)A(T)^2\neq (n \, \pi)^2$ for all $n=0,1,\dots$\par
$\mathbf{(C3^*)}$. (C3) is satisfied and $A(T)\ne0$.\par
Assume one of $(C1^*)$--$(C3^*)$. In that case, by Theorem \ref{Greenf2} and Lemma \ref{lemGc3}, we are under uniqueness conditions for the solution of the following problem \cite{Cab4}.
\begin{equation}\label{eq2} x'(t)+x(-t)+k\,x(t)=h(t),\enskip t\in [-|A(T)|,|A(T)|],\quad x(A(T))=x(-A(T)).
\end{equation}
The Green's function $\overline G_2$ for problem \eqref{eq2} is just a specific case of $\overline G$ and can be expressed as
$$\overline{G_2}(t,s):=\begin{cases} k_1(t,s),& t>|s|,\\
k_2(t,s), & s>|t|,\\
k_3(t,s), & -t>|s|,\\
k_4(t,s), & -s>|t|. \end{cases}$$
Define now
\begin{equation}\label{Ggenral}G_1(t,s):=e^{B_e(s)-B_e(t)}H(t,s)=e^{B_e(s)-B_e(t)}\begin{cases} k_1(A(t),A(s)),& t>|s|,\\
k_2(A(t),A(s)), & s>|t|,\\
k_3(A(t),A(s)), & -t>|s|,\\
k_4(A(t),A(s)), & -s>|t|. \end{cases}\end{equation}
Defined this way, $G_1$ is continuous except at the diagonal, where $G_1(t,t^-)-G_1(t,t^+)=1$. Now we can state the following Theorem.
\begin{thm}\label{thmcases123}
Assume one of $(C1^*)$--$(C3^*)$. Let $G_1$ be defined as in \eqref{Ggenral}. Assume $G_1(t,\cdot)h(\cdot)\in L^1(I)$ for every $t\in I$. Then problem \eqref{eq2cp} has a unique solution given by
$$u(t)=\int_{-T}^TG_1(t,s)h(s)\dif s.$$
\end{thm}
\begin{proof}
First realize that, since $a$ is even, $A$ is odd, so $A(-t)=-A(t)$. It is important to note that if $a$ does not have constant sign on $I$, then $A$ may fail to be injective on $I$.
From the properties of $\bar G_2$ as a Green's function, it is clear that
$$\frac{\partial \bar G_2}{\partial t}(t,s)+\bar G_2(-t,s)+k\,\bar G_2(t,s)=0\quad\text{for a.\,e. }t,s\in A(I),$$
and so,
$$\frac{\partial H}{\partial t}(t,s)+a(t)H(-t,s)+ka(t)\,H(t,s)=0\quad\text{for a.\,e. }t,s\in I.$$
Hence
\begin{align*}
& u' (t)+a(t)\, u(-t)+(b_o(t)+k\,a(t))\, u(t)= \frac{\dif}{\dif t}\int_{-T}^TG_1(t,s)h(s)\dif s+a(t)\int_{-T}^TG_1(-t,s)h(s)\dif s\\
&\quad+(b_o(t)+k\,a(t))\int_{-T}^TG_1(t,s)h(s)\dif s\\
=\ &\frac{\dif}{\dif t}\int_{-T}^{t} e^{B_e(s)-B_e(t)}H(t,s)h(s)\dif s+\frac{\dif}{\dif t}\int_{t}^T e^{B_e(s)-B_e(t)} H(t,s)h(s)\dif s\\&\quad+ a(t)\int_{-T}^T e^{B_e(s)-B_e(t)}H(-t,s)h(s)\dif s+(b_o(t)+k\,a(t))\int_{-T}^T e^{B_e(s)-B_e(t)}H(t,s)h(s)\dif s\\
=\ &[H(t,t^-)-H(t,t^+)]h(t)+e^{-B_e(t)}\int_{-T}^Te^{B_e(s)}\frac{\partial H}{\partial t}(t,s)h(s)\dif s\\&\quad- b_o(t)e^{-B_e(t)}\int_{-T}^Te^{B_e(s)}H(t,s)h(s)\dif s+ a(t)e^{-B_e(t)}\int_{-T}^Te^{B_e(s)} H(-t,s)h(s)\dif s\\ &\quad+ (b_o(t)+k\,a(t))e^{-B_e(t)}\int_{-T}^Te^{B_e(s)}H(t,s)h(s)\dif s\\ =\ & h(t)+e^{-B_e(t)}\int_{-T}^Te^{B_e(s)}\[\frac{\partial H}{\partial t}(t,s)+a(t)H(-t,s)+k\,a(t)\,H(t,s)\]h(s)\dif s=h(t).
\end{align*}
The boundary conditions are also satisfied.
$$u(T)-u(-T)= e^{-B_e(T)}\int_{-T}^Te^{B_e(s)} [H(T,s)-H(-T,s)]h(s)\dif s=0.$$
In order to check the uniqueness of solution, let $u$ and $v$ be solutions of problem \eqref{eq2cp}. Then $u-v$ satisfies equation \eqref{gen-eq} and so is of the form obtained when we first studied the cases (C1)--(C3) (see Section 3). Also, $(u-v)(T)-(u-v)(-T)=2(u-v)_o(T)=0$, but this can only happen, by what has been imposed by conditions $(C1^*)$--$(C3^*)$, if $u-v\equiv0$, thus proving the uniqueness of solution.
\end{proof}
\begin{exa}
Consider the problem
$$x'(t)+\cos(\pi t)x(-t)+\sinh(t)x(t)=\cos(\pi t)+\sinh(t), \;x(3/2)=x(-3/2). $$
Clearly we are in the case (C1). If we compute the Green's function according to Theorem \ref{thmcases123} we obtain
$$2\sin(\sin (\pi T))G_1(t,s)=e^{\cosh (s)-\cosh (t)}\begin{cases}
\sin \left(\frac{\sin (\pi s)}{\pi }-\frac{\sin (\pi t)}{\pi }-\frac{\sin (\pi T)}{\pi }\right)+ \cos \left(\frac{\sin (\pi s)}{\pi }+\frac{\sin (\pi t)}{\pi }-\frac{\sin (\pi T)}{\pi }\right), |t|<s,\\
\sin \left(\frac{\sin (\pi s)}{\pi}-\frac{\sin (\pi t)}{\pi }+\frac{\sin (\pi T)}{\pi }\right)+\cos \left(\frac{\sin (\pi s)}{\pi }+\frac{\sin (\pi t)}{\pi }+\frac{\sin (\pi T)}{\pi }\right), |t|<-s,\\
\sin \left(\frac{\sin (\pi s)}{\pi}-\frac{\sin (\pi t)}{\pi }+\frac{\sin (\pi T)}{\pi }\right)+ \cos \left(\frac{\sin (\pi s)}{\pi }+\frac{\sin (\pi t)}{\pi }-\frac{\sin (\pi T)}{\pi }\right), |s|<t,\\
\sin \left(\frac{\sin (\pi s)}{\pi }-\frac{\sin (\pi t)}{\pi }-\frac{\sin (\pi T)}{\pi }\right) + \cos \left(\frac{\sin (\pi s)}{\pi }+\frac{\sin (\pi t)}{\pi }+\frac{\sin (\pi T)}{\pi }\right), |s|<-t.\end{cases}$$
\begin{figure}[ht]
\centering\includegraphics[width=.5\textwidth]{grafico1.png}\includegraphics[width=.5\textwidth]{grafico2.png}\caption{Graphs of the kernel \textit{(left)} and of the functions involved in the problem \textit{(right)}.}\label{figure2g}
\end{figure}
\end{exa}
One of the most important direct consequences of Theorem \ref{thmcases123} is the existence of maximum and antimaximum principles in the case $b\equiv0$\footnote{Note that this discards the case (C3), for which $b\equiv0$ implies $a\equiv 0$, because we are assuming $a\not\equiv0$.}. To show this and what happens in the case $b$ constant, $b\ne0$, we recall here a couple of results from \cite{Cab4}.
\begin{thm}[{\cite[Theorem 4.3]{Cab4}}]\label{alphasign}Let $b=0$, $\a=aT$. \par
\begin{itemize}
\item If $\a\in(0,\frac{\pi}{4})$ then $\overline G$ is strictly positive on $I^2$.
\item If $\a\in(-\frac{\pi}{4},0)$ then $\overline G$ is strictly negative on $I^2$.
\item If $\a=\frac{\pi}{4}$ then $\overline G$ vanishes on $P:=\{(-T,-T),(0,0),(T,T),(T,-T)\}$ and is strictly positive on $(I^2)\backslash P$.
\item If $\a=-\frac{\pi}{4}$ then $\overline G$ vanishes on $P$ and is strictly negative on $(I^2)\backslash P$.
\item If $\a\in{\mathbb R}\backslash[-\frac{\pi}{4},\frac{\pi}{4}]$ then $\overline G$ is neither positive nor negative on $I^2$.
\end{itemize}
\end{thm}
\begin{cor}[{\cite[Corollary 4.4]{Cab4}}]\label{coralphasign} Let ${\mathcal F}_\l(I)$ be the set of real differentiable functions $f$ defined on $I$ such that $f(-T)-f(T)=\l$. The operator $R_a:{\mathcal F}_\l(I)\to L^1(I)$ defined as $R_a(x(t))=x'(t)+a\, x(-t)$, with $a\in{\mathbb R}\backslash\{0\}$, satisfies
\begin{itemize}
\item $R_a$ is strongly inverse positive if and only if $a\in(0,\frac{\pi}{4T}]$ and $\l\ge0$,
\item $R_a$ is strongly inverse negative if and only if $a\in[-\frac{\pi}{4T},0)$ and $\l\ge0$.
\end{itemize}
\end{cor}
With these results we get the following corollary to Theorem \ref{thmcases123}.
\begin{cor}\label{corsigng}Under the conditions of Theorem \ref{thmcases123}, if $a$ is nonnegative on $I$ and $b=0$,\par
\begin{itemize}
\item If $A(T)\in(0,\frac{\pi}{4})$ then $G_1$ is strictly positive on $I^2$.
\item If $A(T)\in(-\frac{\pi}{4},0)$ then $G_1$ is strictly negative on $I^2$.
\item If $A(T)=\frac{\pi}{4}$ then $G_1$ vanishes on $P:=\{(-T,-T),(0,0),(T,T),(T,-T)\}$ and is strictly positive on $(I^2)\backslash P$.
\item If $A(T)=-\frac{\pi}{4}$ then $G_1$ vanishes on $P$ and is strictly negative on $(I^2)\backslash P$.
\item If $A(T)\in{\mathbb R}\backslash[-\frac{\pi}{4},\frac{\pi}{4}]$ then $G_1$ is neither positive nor negative on $I^2$.
\end{itemize}
Furthermore, the operator $R_a:{\mathcal F}_\l(I)\to L^1(I)$ defined as $R_a(x(t))=x'(t)+a(t)\, x(-t)$ satisfies
\begin{itemize}
\item $R_a$ is strongly inverse positive if and only if $A(T)\in(0,\frac{\pi}{4}]$ and $\l\ge0$,
\item $R_a$ is strongly inverse negative if and only if $A(T)\in[-\frac{\pi}{4},0)$ and $\l\ge0$.
\end{itemize}
\end{cor}
The second part of this last corollary, drawn from the positivity (or negativity) of the Green's function, could have been obtained, as we show below, without such detailed knowledge of the Green's function. To see this, consider the following proposition, in the line of the work of Torres \cite[Theorem 2.1]{Tor}.\par
\begin{pro}\label{proredpro}
Consider the homogeneous initial value problem
\begin{equation}\label{eqhomivp} x'(t)+a(t)\,x(-t)+b(t)\,x(t)=0,\ t\in I;\ x(t_0)=0.\end{equation}
If problem \eqref{eqhomivp} has a unique solution ($x\equiv 0$) on $I$ for all $t_0\in I$ then, if the Green's function for \eqref{eq2cp} exists, it has constant sign.\par
What is more, if we further assume $a+b$ has constant sign, the Green's function has the same sign as $a+b$.
\end{pro}
\begin{proof}
Without loss of generality, consider $a$ to be a $2T$-periodic $L^1$ function defined on ${\mathbb R}$ (the solution of \eqref{eq2cp} will be considered in $I$). Let $G_1$ be the Green's function for problem \eqref{eq2cp}. Since $G_1(T,s)=G_1(-T,s)$ for all $s\in I$, and $G_1$ is continuous except at the diagonal, it is enough to prove that $G_1(t,s)\ne0\nkp\fa t,s\in I$.\par
Assume, on the contrary, that there exist $t_1,s_1\in I$ such that $G_1(t_1,s_1)=0$. Let $g$ be the $2T$-periodic extension of $G_1(\cdot,s_1)$. Let us assume $t_1>s_1$ (the other case would be analogous). Let $f$ be the restriction of $g$ to $(s_1,s_1+2T)$. $f$ is absolutely continuous and satisfies \eqref{eqhomivp} a.e. for $t_0=t_1$; hence, $f\equiv0$. This contradicts the fact that $G_1$ is a Green's function; therefore $G_1$ has constant sign.\par
Realize now that $x\equiv1$ satisfies
$$x'(t)+a(t)x(-t)+b(t)x(t)=a(t)+b(t),\ x(-T)=x(T).$$
Hence, $\int_{-T}^TG_1(t,s)(a(s)+b(s))\dif s=1$ for all $t\in I$. Since both $G_1$ and $a+b$ have constant sign, they have the same sign.
\end{proof}
The following corollaries are a straightforward application of this result to the cases (C1)--(C3) respectively.
\begin{cor}\label{cor1sig} Assume $a$ has constant sign. Under the assumptions of (C1) and Theorem \ref{thmcases123}, $G_1$ has constant sign if
$$|A(T)|< \frac{\arccos(k)}{2\sqrt{1-k^2}}.$$
Furthermore, $\sign(G_1)=\sign(a)$.
\end{cor}
\begin{proof}
The solutions of \eqref{gen-eq} for the case (C1), as seen before, are given by
$$u(t) =\a e^{-B_e(t)}\[ \cos\(\sqrt{1-k^2}A(t)\)-\frac{1+k}{\sqrt{1-k^2}}\sin\(\sqrt{1-k^2}A(t)\)\].$$
Using a particular case of the phasor addition formula\footnote{$\a\cos \c+\b\sin \c=\sqrt{\a^2+\b^2}\sin(\c+\theta)$, where $\theta\in[-\pi,\pi)$ is the angle such that $\cos\theta=\frac{\b}{\sqrt{\a^2+\b^2}}$, $\sin\theta=\frac{\a}{\sqrt{\a^2+\b^2}}$.},
$$u(t)=\a e^{-B_e(t)}\sqrt{\frac{2}{1-k}}\sin\(\sqrt{1-k^2}A(t)+\theta\),$$
where $\theta\in[-\pi,\pi)$ is the angle such that
\begin{equation}\label{sincos}\sin\theta=\sqrt{\frac{1-k}{2}}\quad\text{and}\quad\cos\theta=-\frac{1+k}{\sqrt{1-k^2}}\sqrt{\frac{1-k}{2}}=-\sqrt{\frac{1+k}{2}}.\end{equation}
Observe that this implies that $\theta\in\(\frac{\pi}{2},\pi\)$.\par
In order for the hypothesis of Proposition \ref{proredpro} to be satisfied, it is necessary and sufficient that $0\not\in u(I)$ for some $\a\ne0$. Equivalently, that
$$\sqrt{1-k^2}A(t)+\theta\neq \pi n\nkp\fa n\in{\mathbb Z}\nkp\fa t\in I.$$
That is,
$$A(t)\neq \frac{\pi n-\theta}{\sqrt{1-k^2}}\nkp\fa n\in{\mathbb Z}\nkp\fa t\in I.$$
Since $A$ is odd and injective and $\theta\in\(\frac{\pi}{2},\pi\)$, this is equivalent to
\begin{equation}\label{firststimate}|A(T)|< \frac{\pi-\theta}{\sqrt{1-k^2}}.\end{equation}
Now, using the double angle formula for the sine and \eqref{sincos},
$$\frac{1-k}{2}=\sin^2\theta=\frac{1-\cos(2\theta)}{2}\text{,\quad that is,\quad} k=\cos(2\theta),$$
which implies, since $2\theta\in(\pi,2\pi)$,
$$\theta=\pi-\frac{\arccos(k)}{2},$$
where $\arccos$ is defined so that its image is $[0,\pi]$. Plugging this into inequality \eqref{firststimate} yields
$$|A(T)|<\sigma(k):= \frac{\arccos(k)}{2\sqrt{1-k^2}},\quad k\in(-1,1).$$
\par
Using $|k|<1$, $a+b=(k+1)a+b_o$ and the continuity of $G_1$ with respect to $a$ and $b$, we can prove that the sign of the Green's function is given by Proposition \ref{proredpro}.
\end{proof}
\begin{rem}
In the case $a$ is a constant $\omega$ and $k=0$, $A(I)=[-|\omega|T,|\omega|T]$, and the condition can be written as $|\omega|T<\frac{\pi}{4}$, which is consistent with the results found in \cite{Cab4}.\end{rem}
\begin{rem}
Observe that $\sigma$ is strictly decreasing on $(-1,1)$ and
$$\lim_{k\to-1^+}\sigma(k)=+\infty,\quad\lim_{k\to1^-}\sigma(k)=\frac{1}{2}.$$
\end{rem}
\begin{cor}\label{corC3sign} Under the conditions of (C3) and Theorem \ref{thmcases123}, $G_{1}$ has constant sign if $|A(T)|<\frac{1}{2}$.
\end{cor}
\begin{proof} This corollary is a direct consequence of equation \eqref{eqc3}, Proposition \ref{proredpro} and Theorem \ref{thmcases123}. Observe that the result is consistent with $\sigma(1^-)=\frac{1}{2}$.
\end{proof}
In order to prove the next corollary, we need the following `hyperbolic version' of the phasor addition formula. Its proof presents no difficulty.
\begin{lem}\label{hyppha}
Let $\a$, $\b,\c\in{\mathbb R}$, then
$$\a\cosh \c + \b\sinh\c= \sqrt{|\a^2-\b^2|}\begin{cases}\cosh\(\frac{1}{2}\ln\left|\frac{\a+\b}{\a-\b}\right|+\c\) & \text{if}\quad \a>|\b|,
\\ -\cosh\(\frac{1}{2}\ln\left|\frac{\a+\b}{\a-\b}\right|+\c\) & \text{if}\quad- \a>|\b|,
\\
\sinh\(\frac{1}{2}\ln\left|\frac{\a+\b}{\a-\b}\right|+\c\) & \text{if}\quad \b>|\a|,
\\
-\sinh\(\frac{1}{2}\ln\left|\frac{\a+\b}{\a-\b}\right|+\c\) & \text{if }\quad -\b>|\a|,
\\
\a\,e^\c & \text{if }\quad\a=\b,\\
\a\,e^{-\c} & \text{if }\quad\a=-\b.\\
\end{cases}$$
\end{lem}
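As a quick numerical sanity check of the lemma (not part of the original argument; the sample triples below are our own choices, one per case), the following Python sketch compares both sides of the identity:

```python
import math

def phasor_hyperbolic(alpha, beta, gamma):
    """Right-hand side of the 'hyperbolic phasor' lemma, case by case."""
    if alpha == beta:
        return alpha * math.exp(gamma)
    if alpha == -beta:
        return alpha * math.exp(-gamma)
    amp = math.sqrt(abs(alpha ** 2 - beta ** 2))
    shift = 0.5 * math.log(abs((alpha + beta) / (alpha - beta)))
    if alpha > abs(beta):
        return amp * math.cosh(shift + gamma)
    if -alpha > abs(beta):
        return -amp * math.cosh(shift + gamma)
    if beta > abs(alpha):
        return amp * math.sinh(shift + gamma)
    # remaining case: -beta > |alpha|
    return -amp * math.sinh(shift + gamma)

# Sample triples (alpha, beta, gamma) covering all six cases of the lemma.
samples = [(2.0, 1.0, 0.7), (-2.0, 1.0, -0.3), (1.0, 2.0, 0.5),
           (1.0, -2.0, 0.5), (1.5, 1.5, 0.4), (1.5, -1.5, 0.4)]
max_err = max(abs(a * math.cosh(g) + b * math.sinh(g) - phasor_hyperbolic(a, b, g))
              for a, b, g in samples)
```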
\begin{cor}\label{cor2sig} Assume $a$ has constant sign. Under the assumptions of (C2) and Theorem \ref{thmcases123}, $G_1$ has constant sign if $k<-1$ or
$$|A(T)|<-\frac{\ln(k-\sqrt{k^2-1})}{2\sqrt{k^2-1}}.$$
Furthermore, $\sign(G_1)=\sign(k\,a)$.
\end{cor}
\begin{proof}
The solutions of \eqref{gen-eq} for the case (C2), as seen before, are given by
$$u(t) =\a e^{-B_e(t)}\[\cosh\(\sqrt{k^2-1}A(t)\)- \frac{1+k}{\sqrt{k^2-1}}\sinh\(\sqrt{k^2-1}A(t)\)\].$$
If $k>1$, then $1<\frac{1+k}{\sqrt{k^2-1}}$, so, using Lemma \ref{hyppha},
$$u(t)=-\a e^{-B_e(t)}\sqrt{\frac{2}{k-1}}\sinh\(\frac{1}{2}\ln\left|k-\sqrt{k^2-1}\right|+\sqrt{k^2-1}A(t)\).$$
In order for the hypothesis of Proposition \ref{proredpro} to be satisfied, it is necessary and sufficient that $0\not\in u(I)$ for some $\a\ne0$. Equivalently, that
$$\frac{1}{2}\ln(k-\sqrt{k^2-1})+\sqrt{k^2-1}A(t)\neq 0\nkp\fa t\in I.$$
That is,
$$A(t)\neq -\frac{\ln(k-\sqrt{k^2-1})}{2\sqrt{k^2-1}}\nkp\fa t\in I.$$
Since $A$ is odd and injective, this is equivalent to
$$|A(T)|<\sigma(k):=-\frac{\ln(k-\sqrt{k^2-1})}{2\sqrt{k^2-1}},\quad k>1.$$
Now, if $k<-1$, then $\left|\frac{1+k}{\sqrt{k^2-1}}\right|<1$, so using Lemma \ref{hyppha},
$$u(t)=\a e^{-B_e(t)}\sqrt{\frac{2}{1-k}}\cosh\(\frac{1}{2}\ln\left|k-\sqrt{k^2-1}\right|+\sqrt{k^2-1}A(t)\)\ne0\quad\text{for all}\quad t\in I,\ \a\ne0,$$
so the hypothesis of Proposition \ref{proredpro} are satisfied.\par
Using $|k|>1$, $a+b=(k^{-1}+1)b_e+b_o$ and the continuity of $G_1$ with respect to $a$ and $b$, we can prove that the sign of the Green's function is given by Proposition \ref{proredpro}.
\end{proof}
\begin{rem}
If we consider $\sigma$ defined piecewise as in Corollaries \ref{cor1sig} and \ref{cor2sig}, and extended continuously through $k=1$ with value $\frac{1}{2}$, we get
$$\sigma(k):=\begin{cases} \frac{\arccos(k)}{2\sqrt{1-k^2}} & \text{ if } k\in(-1,1) \\
\frac{1}{2} & \text{ if } k=1 \\
-\frac{\ln(k-\sqrt{k^2-1})}{2\sqrt{k^2-1}} & \text{ if } k>1\end{cases}$$
This function is not only continuous (it is defined thus), but also analytic. To see this, it is enough to consider the extension of the logarithm and the square root to the complex numbers. Recall that $\sqrt{-1}:=i$ and that the principal branch of the logarithm is defined as $\ln_0(z)=\ln|z|+i\theta$, where $\theta\in[-\pi,\pi)$ and $z=|z|e^{i\theta}$, for all $z\in{\mathbb C}\backslash\{0\}$. Clearly, $\ln_0|_{(0,+\infty)}=\ln$.\par
Now, for $|k|<1$,
$\ln_0(k-\sqrt{1-k^2}i)=i\theta$ with $\theta\in[-\pi,\pi)$ such that $\cos\theta=k$, $\sin\theta=-\sqrt{1-k^2}$, that is, $\theta\in[-\pi,0]$. Hence, $i\ln_0(k-\sqrt{1-k^2}i)=-\theta\in[0,\pi]$. Since $\cos(-\theta)=k$, $\sin(-\theta)=\sqrt{1-k^2}$, it is clear that
$$\arccos(k)=-\theta=i\ln_0(k-\sqrt{1-k^2}i).$$
We thus extend $\arccos$ to ${\mathbb C}$ by
$$\arccos(z):=i\ln_0(z-\sqrt{1-z^2}i),$$
which is clearly an analytic function. So, if $k>1$,
$$\sigma(k)=-\frac{\ln(k-\sqrt{k^2-1})}{2\sqrt{k^2-1}}=-\frac{\ln_0(k-i\sqrt{1-k^2})}{2i\sqrt{1-k^2}}=\frac{i\ln_0(k-i\sqrt{1-k^2})}{2\sqrt{1-k^2}}=\frac{\arccos(k)}{2\sqrt{1-k^2}}.$$
$\sigma$ is positive, strictly decreasing and
$$\lim_{k\to -1^+}\sigma(k)=+\infty,\quad\lim_{k\to+\infty}\sigma(k)=0.$$
\end{rem}
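The piecewise formula for $\sigma$ and its unified complex-analytic form can be compared numerically. The following Python sketch (our own illustration; the sample values of $k$ are arbitrary choices) checks the agreement of the two formulas away from $k=1$, the strict monotonicity, and the value $\tfrac12$ near $k=1$:

```python
import cmath
import math

def sigma(k):
    """Piecewise definition of sigma from the corollaries above."""
    if -1 < k < 1:
        return math.acos(k) / (2 * math.sqrt(1 - k * k))
    if k == 1:
        return 0.5
    return -math.log(k - math.sqrt(k * k - 1)) / (2 * math.sqrt(k * k - 1))

def sigma_unified(k):
    """Same quantity via arccos(z) = i ln_0(z - i sqrt(1-z^2)), valid for k != 1."""
    z = complex(k)
    return (1j * cmath.log(z - 1j * cmath.sqrt(1 - z * z))
            / (2 * cmath.sqrt(1 - z * z))).real

ks = [-0.9, -0.5, 0.0, 0.5, 0.9, 0.999, 1.001, 1.5, 3.0]
agreement = max(abs(sigma(k) - sigma_unified(k)) for k in ks)
decreasing = all(sigma(p) > sigma(q) for p, q in zip(ks, ks[1:]))
near_half = max(abs(sigma(k) - 0.5) for k in (0.999, 1.001))
```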
In a similar way to Corollaries \ref{cor1sig}, \ref{corC3sign} and \ref{cor2sig}, we can prove results without assuming that $a$ has constant sign. The result is the following.
\begin{cor}
Under the assumptions of Theorem \ref{thmcases123} and conditions (C1), (C2) or (C3) (let $k$ be the constant involved in such conditions), $G_1$ has constant sign if $\max A(I)<\sigma(k)$.
\end{cor}
\section{The cases (C4) and (C5)}
Consider the following problem derived from the nonhomogeneous problem \eqref{eq2cp}.
\begin{align}\label{eoparts}\begin{pmatrix}x_o' \\ x_e'\end{pmatrix} & =\begin{pmatrix}a_o-b_o & -a_e-b_e \\ a_e-b_e & -a_o-b_o\end{pmatrix}\begin{pmatrix}x_o \\ x_e\end{pmatrix}+\begin{pmatrix}h_e \\ h_o\end{pmatrix}.
\end{align}
The following theorems tell us what happens when we impose the boundary conditions.
\begin{thm} If condition (C4) holds, then problem \eqref{eq2cp} has a solution if and only if
$$\int_0^Te^{B_e(s)}h_e(s)\dif s=0,$$
and in that case the solutions of \eqref{eq2cp} are given by
\begin{equation}
\label{e-c4}
u_c(t)=e^{-B_e(t)}\[c+\int_0^t\(e^{B_e(s)}h(s)+2a_e(s)\int_0^se^{B_e(r)}h_e(r)\dif r\)\dif s\]\enskip\text{for } c\in{\mathbb R}.\end{equation}
\end{thm}
\begin{proof} We know that any solution of problem \eqref{eq2cp} has to satisfy \eqref{eoparts}. In the case (C4), the matrix in \eqref{eoparts} is lower triangular:
\begin{align}\label{eoparts3}\begin{pmatrix}x_o' \\ x_e'\end{pmatrix} & =\begin{pmatrix}-b_o & 0 \\ 2a_e & -b_o\end{pmatrix}\begin{pmatrix}x_o \\ x_e\end{pmatrix}+\begin{pmatrix}h_e \\ h_o\end{pmatrix}.
\end{align}
The solutions of \eqref{eoparts3} are therefore given by
\begin{align*}x_o(t) &
=e^{-B_e(t)}\[\tilde c+\int_0^te^{B_e(s)}h_e(s)\dif s\]
,\\x_e(t) & =e^{-B_e(t)}\[c+\int_0^t\(e^{B_e(s)}h_o(s)+2a_e(s)\[\tilde c+\int_0^se^{B_e(r)}h_e(r)\dif r\]\)\dif s\],\end{align*}
where $c$, $\tilde c\in{\mathbb R}$. The function $x_e$ is even independently of the value of $c$. Nevertheless, $x_o$ is odd only when $\tilde c=0$. Hence, a solution of \eqref{eq2cp}, if it exists, has the form
\eqref{e-c4}.\par
To show the second implication it is enough to check that $u_c$ is a solution of the problem \eqref{eq2cp}.
\begin{align*}
u'_c(t) = & -b_o(t)e^{-B_e(t)}\[c+\int_0^t\(e^{B_e(s)}h(s)+2a_e(s)\int_0^se^{B_e(r)}h_e(r)\dif r\)\dif s\] \\& +e^{-B_e(t)}\(e^{B_e(t)}h(t)+2a_e(t)\int_0^te^{B_e(r)}h_e(r)\dif r\)=h(t)-b_o(t)u(t)+2a_e(t)e^{-B_e(t)}\int_0^te^{B_e(r)}h_e(r)\dif r.
\end{align*}
Now,
\begin{align*}
& a_e(t)(u_c(-t)-u_c(t))+2a_e(t)e^{-B_e(t)}\int_0^te^{B_e(r)}h_e(r)\dif r\\= & a_e(t)e^{-B_e(t)}\[c-\int_0^t\(e^{B_e(s)}h(-s)-2a_e(s)\int_0^se^{B_e(r)}h_e(r)\dif r\)\dif s\]\\ & - a_e(t)e^{-B_e(t)}\[c+\int_0^t\(e^{B_e(s)}h(s)+2a_e(s)\int_0^se^{B_e(r)}h_e(r)\dif r\)\dif s\]+2a_e(t)e^{-B_e(t)}\int_0^te^{B_e(r)}h_e(r)\dif r\\= & -2a_e(t)e^{-B_e(t)}\int_0^te^{B_e(r)}h_e(r)\dif s+2a_e(t)e^{-B_e(t)}\int_0^te^{B_e(r)}h_e(r)\dif r= 0.
\end{align*}
Hence,
$$u_c'(t)+a_e(t)u_c(-t)+(-a_e(t)+b_o(t))u_c(t)=h(t)\quad\text{for a.\,e. }t\in I.$$
The boundary condition $x(-T)-x(T)=0$ is equivalent to $x_o(T)=0$, that is,
$$\int_0^Te^{B_e(s)}h_e(s)\dif s=0$$
and the result is concluded.
\end{proof}
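To illustrate the theorem, the following Python sketch (our own numerical check; the data $T=1$, $a_e(t)=\cos t$, $b_o(t)=t$, $c=0.3$ are sample choices, and the even part of $h$ is chosen so that the solvability integral vanishes) builds $u_c$ from formula \eqref{e-c4} by quadrature and verifies the equation, the periodicity and the solvability condition:

```python
import math

T, n = 1.0, 2001                      # n odd so that t = 0 is a grid point
ts = [-T + 2 * T * i / (n - 1) for i in range(n)]
dt = ts[1] - ts[0]
mid = (n - 1) // 2

a_e = [math.cos(t) for t in ts]       # even coefficient a = a_e (sample choice)
b_o = [t for t in ts]                 # odd part of b; then B_e(t) = t^2/2
Be = [t * t / 2 for t in ts]
h_e = [math.exp(-t * t / 2) * math.cos(2 * math.pi * t) for t in ts]
h_o = [math.sin(t) for t in ts]
h = [h_e[i] + h_o[i] for i in range(n)]

def cumint0(f):
    """Trapezoid cumulative integral F(t) = int_0^t f on the symmetric grid."""
    F = [0.0] * n
    for i in range(mid + 1, n):
        F[i] = F[i - 1] + 0.5 * (f[i - 1] + f[i]) * dt
    for i in range(mid - 1, -1, -1):
        F[i] = F[i + 1] - 0.5 * (f[i] + f[i + 1]) * dt
    return F

Ih = cumint0([math.exp(Be[i]) * h_e[i] for i in range(n)])  # int_0^s e^{B_e} h_e
G = cumint0([math.exp(Be[i]) * h[i] + 2 * a_e[i] * Ih[i] for i in range(n)])
c = 0.3
u = [math.exp(-Be[i]) * (c + G[i]) for i in range(n)]

solv = Ih[-1]                         # solvability integral int_0^T e^{B_e} h_e
res = max(abs((u[i + 1] - u[i - 1]) / (2 * dt) + a_e[i] * u[n - 1 - i]
              + (-a_e[i] + b_o[i]) * u[i] - h[i]) for i in range(1, n - 1))
per = abs(u[0] - u[-1])
```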
\begin{thm} If condition (C5) holds, then problem \eqref{eq2cp} has a solution if and only if
\begin{equation}
\label{conodd2}
\int_0^T e^{B(s)-A(s)}h_e(s)\dif s=0,\end{equation}
and in that case the solutions of \eqref{eq2cp} are given by
\begin{equation}
\label{e-c5}
u_c(t)=e^{A(t)-B(t)}\int_0^te^{B(s)-A(s)}h_e(s)\dif s+e^{-A(t)-B(t)}\[c+\int_0^te^{A(s)+B(s)}h_o(s)\dif s\]\enskip\text{for } c\in{\mathbb R}.\end{equation}
\end{thm}
\begin{proof} In the case (C5), $b_o=b$ and $a_o=a$. Also, the matrix in \eqref{eoparts} is diagonal:
\begin{align}\label{eoparts5}\begin{pmatrix}x_o' \\ x_e'\end{pmatrix} & =\begin{pmatrix}a_o-b_o & 0 \\ 0 & -a_o-b_o\end{pmatrix}\begin{pmatrix}x_o \\ x_e\end{pmatrix}+\begin{pmatrix}h_e \\ h_o\end{pmatrix}.
\end{align}
The solutions of \eqref{eoparts5} are then given by
\begin{align*}x_o(t) & =e^{A(t)-B(t)}\[\tilde c+\int_0^te^{B(s)-A(s)}h_e(s)\dif s\],\\x_e(t) & =e^{-A(t)-B(t)}\[c+\int_0^te^{A(s)+B(s)}h_o(s)\dif s\],\end{align*}
where $c$, $\tilde c\in{\mathbb R}$. Since $a$ and $b$ are odd, $A$ and $B$ are even. So, $x_e$ is even independently of the value of $c$. Nevertheless, $x_o$ is odd only when $\tilde c=0$. In such a case, since we need, as in the previous theorem, that $x_o(T)=0$, we get condition \eqref{conodd2}, which gives the first implication of the theorem.\par
Any solution $u_c$ of \eqref{eq2cp} has the expression \eqref{e-c5}.\par
To show the second implication, it is enough to check that $u_c$ is a solution of problem \eqref{eq2cp}.
$$
u_c'(t)=(a(t)-b(t))e^{A(t)-B(t)}\int_0^te^{B(s)-A(s)}h_e(s)\dif s-(a(t)+b(t))e^{-A(t)-B(t)}\[c+\int_0^te^{A(s)+B(s)}h_o(s)\dif s\]+h(t).$$
Now,
\begin{align*}\ & a(t)u_c(-t)+b(t)u_c(t)=a(t)\(-e^{A(t)-B(t)}\int_0^te^{B(s)-A(s)}h_e(s)\dif s+e^{-A(t)-B(t)}\[c+\int_0^te^{A(s)+B(s)}h_o(s)\dif s\]\)\\ & +b(t)\(e^{A(t)-B(t)}\int_0^te^{B(s)-A(s)}h_e(s)\dif s+e^{-A(t)-B(t)}\[c+\int_0^te^{A(s)+B(s)}h_o(s)\dif s\]\)\\ = &-(a(t)-b(t))e^{A(t)-B(t)}\int_0^te^{B(s)-A(s)}h_e(s)\dif s+(a(t)+b(t))e^{-A(t)-B(t)}\[c+\int_0^te^{A(s)+B(s)}h_o(s)\dif s\].
\end{align*}
So clearly,
$$u_c'(t)+a(t)u_c(-t)+b(t)u_c(t)=h(t)\quad\text{for a.e. } t\in I,$$
which ends the proof.
\end{proof}
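As in the case (C4), the expressions for $x_o$ and $x_e$ in the proof can be checked numerically. The following Python sketch (our own; the odd coefficients $a(t)=t$, $b(t)=t^3$, $T=1$, $c=0.3$ are sample choices, and $h_e$ is chosen so that the solvability integral \eqref{conodd2} vanishes) assembles the solution and verifies the equation and the periodicity:

```python
import math

T, n = 1.0, 2001                      # n odd so that t = 0 is a grid point
ts = [-T + 2 * T * i / (n - 1) for i in range(n)]
dt = ts[1] - ts[0]
mid = (n - 1) // 2

a = [t for t in ts]                   # odd, so a_o = a (sample choice)
b = [t ** 3 for t in ts]              # odd, so b_o = b (sample choice)
A = [t * t / 2 for t in ts]           # primitive of a, even
B = [t ** 4 / 4 for t in ts]          # primitive of b, even
h_e = [math.exp(A[i] - B[i]) * math.cos(2 * math.pi * ts[i]) for i in range(n)]
h_o = [math.sin(t) for t in ts]
h = [h_e[i] + h_o[i] for i in range(n)]

def cumint0(f):
    """Trapezoid cumulative integral F(t) = int_0^t f on the symmetric grid."""
    F = [0.0] * n
    for i in range(mid + 1, n):
        F[i] = F[i - 1] + 0.5 * (f[i - 1] + f[i]) * dt
    for i in range(mid - 1, -1, -1):
        F[i] = F[i + 1] - 0.5 * (f[i] + f[i + 1]) * dt
    return F

Fe = cumint0([math.exp(B[i] - A[i]) * h_e[i] for i in range(n)])
Fo = cumint0([math.exp(A[i] + B[i]) * h_o[i] for i in range(n)])
c = 0.3
u = [math.exp(A[i] - B[i]) * Fe[i] + math.exp(-A[i] - B[i]) * (c + Fo[i])
     for i in range(n)]

solv = Fe[-1]                          # solvability integral int_0^T e^{B-A} h_e
res = max(abs((u[i + 1] - u[i - 1]) / (2 * dt) + a[i] * u[n - 1 - i]
              + b[i] * u[i] - h[i]) for i in range(1, n - 1))
per = abs(u[0] - u[-1])
```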
\section{The mixed case}
When we are not in the cases (C1)--(C5), since the fundamental matrix of $M$ is not given by its exponential matrix, it is more difficult to determine when problem \eqref{eq2cp} has a solution. Here we present some partial results.
\par
Consider the following ODE
\begin{equation}\label{usualp}x'(t)+[a(t)+b(t)]x(t)=h(t),\quad x(-T)=x(T).\end{equation}
The following lemma gives us the explicit Green's function for this problem. Let $\upsilon=a+b$.
\begin{lem}\label{lem1} Let $h$, $a$ in problem \eqref{usualp} be in $L^1(I)$ and assume $\int_{-T}^{T}\upsilon(t)\dif t\ne0$. Then problem \eqref{usualp} has a unique solution given by
$$u(t)=\int_{-T}^{T}G_3(t,s)h(s)\dif s,$$
where
\begin{equation}\label{e-G3} G_3(t,s)=\begin{cases}\tau\,e^{\int_t^s\upsilon(r)\dif r}, & s\le t,\\(\tau-1)e^{\int_t^s\upsilon(r)\dif r}, & s> t,\end{cases}\quad \text{and}\quad \tau=\frac{1}{1-e^{-\int_{-T}^T\upsilon(r)\dif r}}.\end{equation}
\end{lem}
\begin{proof}
$$\frac{\partial G_3}{\partial t}(t,s)=\begin{cases}-\tau\,\upsilon(t)\,e^{\int_t^s\upsilon(r)\dif r}, & s\le t,\\-(\tau-1)\upsilon(t)e^{\int_t^s\upsilon(r)\dif r}, & s> t,\end{cases}=-\upsilon(t)G_3(t,s).$$
Therefore,
$$\frac{\partial G_3}{\partial t}(t,s)+\upsilon(t)G_3(t,s)=0,\ s\ne t.$$
Hence,
\begin{align*}
& u'(t)+\upsilon(t)u(t)=\frac{\dif}{\dif t}\int_{-T}^{t}G_3(t,s)h(s)\dif s+\frac{\dif}{\dif t}\int_{t}^{T}G_3(t,s)h(s)\dif s+\upsilon(t)\int_{-T}^{T}G_3(t,s)h(s)\dif s\\= & [G_3(t,t^-)-G_3(t,t^+)]h(t)+\int_{-T}^{T}\[\frac{\partial G_3}{\partial t}(t,s)+\upsilon(t)G_3(t,s)\]h(s)\dif s=h(t)\enskip\text{a.\,e. }t\in I.
\end{align*}
The boundary conditions are also satisfied.
\begin{align*}
& u(T)-u(-T)=\int_{-T}^{T}\[\tau\,e^{\int_T^s\upsilon(r)\dif r}-(\tau-1)e^{\int_{-T}^s\upsilon(r)\dif r}\]h(s)\dif s\\= & \int_{-T}^{T}
\[\frac{e^{\int_T^s\upsilon(r)\dif r}}{1-e^{-\int_{-T}^T\upsilon(r)\dif r}}-\frac{e^{-\int_{-T}^T\upsilon(r)\dif r}\,e^{\int_{-T}^s\upsilon(r)\dif r}}{1-e^{-\int_{-T}^T\upsilon(r)\dif r}}\]h(s)\dif s\\= & \int_{-T}^{T}
\[\frac{e^{\int_T^s\upsilon(r)\dif r}}{1-e^{-\int_{-T}^T\upsilon(r)\dif r}}-\frac{e^{\int_{T}^s\upsilon(r)\dif r}}{1-e^{-\int_{-T}^T\upsilon(r)\dif r}}\]h(s)\dif s=0.
\end{align*}
\end{proof}
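The following Python sketch (a numerical illustration of ours; the sign-changing coefficient $\upsilon(t)=\sin(2t)-0.4$ and $T=1$ are sample choices) assembles $u(t)=\int_{-T}^T G_3(t,s)h(s)\dif s$ through cumulative quadrature, splitting the integral at $s=t$, and checks the equation and the periodicity:

```python
import math

T, n = 1.0, 801
ts = [-T + 2 * T * i / (n - 1) for i in range(n)]
dt = ts[1] - ts[0]

ups = [math.sin(2 * t) - 0.4 for t in ts]     # upsilon = a + b, sign-changing
h = [math.cos(t) + 0.3 * t for t in ts]

def cumint(f):
    """Trapezoid cumulative integral from -T."""
    F = [0.0] * n
    for i in range(1, n):
        F[i] = F[i - 1] + 0.5 * (f[i - 1] + f[i]) * dt
    return F

V = cumint(ups)                               # V(t) = int_{-T}^t upsilon
I = V[-1]                                     # int_{-T}^T upsilon, here = -0.8 != 0
tau = 1.0 / (1.0 - math.exp(-I))

# u(t) = e^{-V(t)} [ C(t) + (tau - 1) C(T) ] with C(t) = int_{-T}^t e^{V} h,
# which is exactly int G_3(t,s) h(s) ds after splitting at s = t.
C = cumint([math.exp(V[i]) * h[i] for i in range(n)])
u = [math.exp(-V[i]) * (C[i] + (tau - 1) * C[-1]) for i in range(n)]

res = max(abs((u[i + 1] - u[i - 1]) / (2 * dt) + ups[i] * u[i] - h[i])
          for i in range(1, n - 1))
per = abs(u[0] - u[-1])
```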
\begin{lem} \begin{equation}
\label{e-Fv}
|G_3(t,s)|\le F(\upsilon):=\frac{e^{\|\upsilon\|_1}}{|e^{\|\upsilon^+\|_1}-e^{\|\upsilon^-\|_1}|}.\end{equation}
\end{lem}
\begin{proof}
Observe that
$$\tau=\frac{1}{1-e^{\|\upsilon^-\|_1-\|\upsilon^+\|_1}}=\frac{e^{\|\upsilon^+\|_1}}{e^{\|\upsilon^+\|_1}-e^{\|\upsilon^-\|_1}}.$$
Hence,
$$\tau-1=\frac{e^{\|\upsilon^-\|_1}}{e^{\|\upsilon^+\|_1}-e^{\|\upsilon^-\|_1}}.$$
On the other hand,
$$e^{\int_t^s\upsilon(r)\dif r}\le\begin{cases} e^{\|\upsilon^-\|_1}, & s\le t,\\ e^{\|\upsilon^+\|_1}, & s>t,
\end{cases}$$
which ends the proof.
\end{proof}
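The bound \eqref{e-Fv} can also be checked numerically. This Python sketch (our own; the sample choice $\upsilon(t)=\sin(2t)-0.4$ on $[-1,1]$ is arbitrary) evaluates $G_3$ on a grid and compares it with $F(\upsilon)$, also confirming that $F(\upsilon)\ge1$:

```python
import math

T, n = 1.0, 201
ts = [-T + 2 * T * i / (n - 1) for i in range(n)]
dt = ts[1] - ts[0]
ups = [math.sin(2 * t) - 0.4 for t in ts]      # sign-changing sample coefficient

V = [0.0] * n                                   # V(t) = int_{-T}^t upsilon (trapezoid)
for i in range(1, n):
    V[i] = V[i - 1] + 0.5 * (ups[i - 1] + ups[i]) * dt

def trap(f):
    """Trapezoid rule over the whole grid."""
    return dt * (sum(f) - 0.5 * (f[0] + f[-1]))

up = trap([max(v, 0.0) for v in ups])           # ||upsilon^+||_1
um = trap([max(-v, 0.0) for v in ups])          # ||upsilon^-||_1
F = math.exp(up + um) / abs(math.exp(up) - math.exp(um))

tau = 1.0 / (1.0 - math.exp(-V[-1]))
# |G_3(t_i, s_j)| over the grid, using the explicit formula (e-G3).
maxG = max(abs((tau if j <= i else tau - 1.0) * math.exp(V[j] - V[i]))
           for i in range(n) for j in range(n))
```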
The next result proves the existence and uniqueness of solution of \eqref{eq2cp} when $\upsilon$ is `sufficiently small'.
\begin{thm}\label{thmpn}Let $h$, $a$, $b$ in problem \eqref{eq2cp} be in $L^1(I)$ and assume $\int_{-T}^{T}\upsilon(t)\dif t\ne0$. Let $W:=\{(2T)^\frac{1}{p}(\|a\|_{p^*}+\|b\|_{p^*})\}_{p\in[1,+\infty]}$ where $p^{-1}+(p^*)^{-1}=1$. If $F(\upsilon)\|a\|_1(\inf W)<1$, with $F(\upsilon)$ defined as in \eqref{e-Fv}, then problem \eqref{eq2cp} has a unique solution.
\end{thm}
\begin{proof}
With some manipulation we get
$$h(t) = x'(t)+a(t)\(\int_t^{-t}x'(s)\dif s+x(t)\)+b(t)x(t)=x'(t)+\upsilon(t)x(t)+a(t)\int_t^{-t}(h(s)-a(s)x(-s)-b(s)x(s))\dif s.$$
Hence,
$$x'(t)+\upsilon(t)x(t)=a(t)\int_t^{-t}(a(s)x(-s)+b(s)x(s))\dif s+a(t)\int_{-t}^{t}h(s)\dif s+h(t).$$
Using $G_{3}$ defined as in \eqref{e-G3} and Lemma \ref{lem1}, it is clear that
$$x(t)=\int_{-T}^TG_3(t,s)a(s)\int_s^{-s}(a(r)x(-r)+b(r)x(r))\dif r\dif s+\int_{-T}^TG_3(t,s)\[a(s)\int_{-s}^{s}h(r)\dif r+h(s)\]\dif s,$$
that is, $x$ is a fixed point of an operator of the form $Hx(t)+\beta(t)$, so, by the Banach contraction principle, it is enough to prove that $\|H\|<1$ for some compatible norm of $H$.
\par
Using Fubini's Theorem,
$$Hx(t)=-\int_{-T}^{T}\rho(t,r)(a(r)x(-r)+b(r)x(r))\dif r,$$
where $\rho(t,r)=\[\int_{|r|}^{T}-\int_{-T}^{-|r|}\]G_3(t,s)a(s)\dif s$.
If $\int_{-T}^{T}\upsilon(t)\dif t=\|\upsilon^+\|_1-\|\upsilon^-\|_1>0$ then $G_3$ is positive and
$$\rho(t,r)\le\int_{-T}^{T}G_3(t,s)|a(s)|\dif s\le F(\upsilon)\|a\|_1.$$
We have the same estimate for $-\rho(t,r)$.\par
If $\int_{-T}^{T}\upsilon(t)\dif t<0$ we proceed with an analogous argument and arrive as well to the conclusion that
$|\rho(t,s)|<F(\upsilon)\|a\|_1$.\par Hence, $|Hx(t)|\le F(\upsilon)\|a\|_1\int_{-T}^{T}|a(r)x(-r)+b(r)x(r)|\dif r=F(\upsilon)\|a\|_1\,\|a\,x(-\cdot)+b\,x\|_1$.
Thus, it is clear that
$$\|Hx\|_p \le (2T)^\frac{1}{p}F(\upsilon)\|a\|_1(\|a\|_{p^*}+\|b\|_{p^*})\|x\|_p,\ p\in[1,\infty],$$
which ends the proof.
\end{proof}
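The contraction argument can be illustrated numerically. The following Python sketch (our own; the coefficients $a(t)=0.1\cos t+0.05$, $b(t)=0.1\sin t$, $h(t)=1+0.5t$, $T=1$ are sample choices for which the smallness condition holds with $p=\infty$) iterates the fixed-point equation above and checks that the iterates converge geometrically to a periodic function solving \eqref{eq2cp}:

```python
import math

T, n = 1.0, 801                         # n odd so that t = 0 is a grid point
ts = [-T + 2 * T * i / (n - 1) for i in range(n)]
dt = ts[1] - ts[0]
mid = (n - 1) // 2
a = [0.1 * math.cos(t) + 0.05 for t in ts]
b = [0.1 * math.sin(t) for t in ts]
h = [1.0 + 0.5 * t for t in ts]
ups = [a[i] + b[i] for i in range(n)]

def cum(f, from_zero=False):
    """Trapezoid cumulative integral, from -T or (signed) from 0."""
    F = [0.0] * n
    if from_zero:
        for i in range(mid + 1, n):
            F[i] = F[i - 1] + 0.5 * (f[i - 1] + f[i]) * dt
        for i in range(mid - 1, -1, -1):
            F[i] = F[i + 1] - 0.5 * (f[i] + f[i + 1]) * dt
    else:
        for i in range(1, n):
            F[i] = F[i - 1] + 0.5 * (f[i - 1] + f[i]) * dt
    return F

V = cum(ups)
tau = 1.0 / (1.0 - math.exp(-V[-1]))
norm1 = lambda f: dt * (sum(abs(x) for x in f) - 0.5 * (abs(f[0]) + abs(f[-1])))
Fv = math.exp(norm1(ups)) / abs(math.exp(norm1([max(v, 0.0) for v in ups]))
                                - math.exp(norm1([max(-v, 0.0) for v in ups])))
cond = Fv * norm1(a) * (norm1(a) + norm1(b))   # p = infinity element of W

def G3_apply(rhs):                              # t -> int G_3(t,s) rhs(s) ds
    C = cum([math.exp(V[i]) * rhs[i] for i in range(n)])
    return [math.exp(-V[i]) * (C[i] + (tau - 1) * C[-1]) for i in range(n)]

P = cum(h, from_zero=True)                      # int_0^t h
beta_rhs = [a[i] * (P[i] - P[n - 1 - i]) + h[i] for i in range(n)]
x = [0.0] * n
diffs = []
for _ in range(40):                             # Picard iteration of x -> Hx + beta
    Q = cum([a[i] * x[n - 1 - i] + b[i] * x[i] for i in range(n)], from_zero=True)
    w = [a[i] * (Q[n - 1 - i] - Q[i]) for i in range(n)]
    x_new = G3_apply([w[i] + beta_rhs[i] for i in range(n)])
    diffs.append(max(abs(x_new[i] - x[i]) for i in range(n)))
    x = x_new

res = max(abs((x[i + 1] - x[i - 1]) / (2 * dt) + a[i] * x[n - 1 - i]
              + b[i] * x[i] - h[i]) for i in range(1, n - 1))
```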
\begin{rem} In the hypothesis of Theorem \ref{thmpn}, realize that $F(\upsilon)\ge 1$.
\end{rem}
The following result will let us obtain some information on the sign of the solution of problem \eqref{eq2cp}. In order to prove it, we will use a theorem from \cite{Cab5}, which we cite below.
\par
Consider an interval $[w,d]\subset I$, the cone
\begin{equation*}\label{eqcone-cs}
K=\{u\in {\mathcal C}(I): \min_{t \in [w,d]}u(t)\geq c \|u\|\},
\end{equation*}
and the following problem
\begin{equation}\label{eqgenpro2}
x'(t) =h(t,x(t),x(-t)),\, t\in I,\quad x(-T)=x(T),
\end{equation}
where $h$ is an $L^1$-Carath\'eodory function.
Consider the following conditions.
\begin{enumerate}
\item[$(\mathrm{I}_{\protect\rho,\omega}^{1})$] \label{EqB2} There exist $\rho> 0$ and $\omega\in\(0,\frac{\pi}{4T}\]$ such that $f^{-\rho,\rho}_\omega <\omega$ where
$$
f^{{-\rho},{\rho}}_\omega:=\sup \left\{\frac{h(t,u,v)+\omega v}{\rho }:\;(t,u,v)\in
[ -T,T]\times [ -\rho,\rho ]\times [-\rho,\rho ]\right\}.$$
\item[$(\mathrm{I}_{\protect\rho,\omega}^{0})$] There exists $\rho >0$ such that
$$
f_{(\rho ,{\rho /c})}^\omega\cdot\inf_{t\in [w,d]}\int_{w}^{d}\overline G(t,s)\,ds>1,
$$
where
$$
f_{(\rho ,{\rho /c})}^\omega =\inf \left\{\frac{h(t,u,v)+\omega v}{\rho }%
:\;(t,u,v)\in [w,d]\times [\rho ,\rho /c]\times
[-\rho /c,\rho /c]\right\}.$$
\end{enumerate}
\begin{thm}\textrm{\cite[Theorem 5.15]{Cab5}}\label{thmgen}
Let $\omega\in\(0,\frac{\pi}{2T}\]$. Let $[w,d]\subset I$ be such that $w=T-d\in(\max\{0,T-\frac{\pi}{4\omega}\},\frac{T}{2})$. Let
\begin{equation}\label{e-c}c=\frac{[1-\tan(\omega d)][1-\tan(\omega w)]}{[1+\tan(\omega d)][1+\tan(\omega w)]}.\end{equation}
Problem \eqref{eqgenpro2} has at least one non-zero solution
in $K$ if either of the following conditions hold.
\begin{enumerate}
\item[$(S_{1})$] There exist $\rho _{1},\rho _{2}\in (0,\infty )$ with $\rho
_{1}/c<\rho _{2}$ such that $(\mathrm{I}_{\rho _{1},\omega}^{0})$ and $(\mathrm{I}_{\rho _{2},\omega}^{1})$ hold.
\item[$(S_{2})$] There exist $\rho _{1},\rho _{2}\in (0,\infty )$ with $\rho
_{1}<\rho _{2}$ such that $(\mathrm{I}_{\rho _{1},\omega}^{1})$ and $(\mathrm{I}%
_{\rho _{2},\omega}^{0})$ hold.
\end{enumerate}
\end{thm}
\begin{thm}\label{thmmix2}
Let $h\in L^\infty(I)$, $a,b\in L^1(I)$ be such that $0<|b(t)|<a(t)<\omega<\frac{\pi}{2T}$ for a.\,e. $t\in I$ and $\inf h>0$. Then there exists a solution $u$ of \eqref{eq2cp} such that $u>0$ in $\(\max\{0,T-\frac{\pi}{4\omega}\},\min\{T,\frac{\pi}{4\omega}\}\)$.
\end{thm}
\begin{proof}
Problem \eqref{eq2cp} can be rewritten as
\begin{equation*}\label{eq2cp3} x'(t)=h(t)-b(t)\,x(t)-a(t)\,x(-t),\enskip t\in I,\quad x(-T)=x(T).
\end{equation*}
With this formulation, we can apply Theorem \ref{thmgen}.
Since $0<a(t)-|b(t)|<\omega$ a.\,e., take $\rho_2\in{\mathbb R}^+$ large enough such that $h(t)<(a(t)-|b(t)|)\rho_2$ a.\,e. Hence, $h(t)<(a(t)-\omega)\rho_2-|b(t)|\rho_2+\rho_2\omega$ for a.\,e. $t\in I$, in particular, $$h(t)<(a(t)-\omega)v-|b(t)|u+\rho_2\omega\le(a(t)-\omega)v+b(t)\,u+\rho_2\omega\text{ for a.\,e. }t\in I;\ u,v\in[-\rho_2,\rho_2].$$ Therefore,
$$\sup \left\{\frac{h(t)-b(t)u-a(t)v+\omega v}{\rho_2 }:\;(t,u,v)\in
[ -T,T]\times[-\rho_2,\rho_2]\times [-\rho_2,\rho_2]\right\}<\omega,$$
and thus, $(\mathrm{I}_{\rho _{2},\omega}^{1})$ is satisfied.\par
Let $[w,d]\subset I$ be such that $[w,d]\subset\(T-\frac{\pi}{4\omega},\frac{\pi}{4\omega}\)$. Let $c$ be defined as in \eqref{e-c}
and $\epsilon=\omega\int_w^d\overline G(t,s)\dif s$.\par Choose $\d\in(0,1)$ such that $h(t)>\[\(1+\frac{c}{\epsilon}\)\omega-(a(t)-|b(t)|)\]\rho_2\d$ a.\,e. and define $\rho_1:=\d c\rho_2$. Therefore, $h(t)>(a(t)-\omega)v+b(t)\,u+\frac{\omega}{\epsilon}\rho_1$ for a.\,e. $t\in I$, $u\in[\rho_1,\frac{\rho_1}{c}]$ and $v\in[-\frac{\rho_1}{c},\frac{\rho_1}{c}]$. Thus,
$$\inf \left\{\frac{h(t)-b(t)u-a(t)v+\omega v}{\rho_1 }:\;(t,u,v)\in [w,d]\times[\rho_1,\rho_1/c]\times
[-\rho_1/c,\rho_1/c]\right\}>\frac{\omega}{\epsilon},$$
and hence, $(\mathrm{I}_{\rho _{1},\omega}^{0})$ is satisfied.
Finally, $(S_1)$ in Theorem \ref{thmgen} is satisfied and we get the desired result.
\end{proof}
\begin{rem} In the hypothesis of Theorem \ref{thmmix2}, if $\omega<\frac{\pi}{4T}$, we can take $[w,d]=[-T,T]$ and continue with the proof of Theorem \ref{thmmix2} as done above. This guarantees that $u$ is positive.
\end{rem}
\section{Appendix: Further considerations}
\subsection{The general case}
Equation \eqref{proinv1}, for the case $\phi(t)=-t$, can be reduced to the following system
\begin{align*}\L\begin{pmatrix}x_o' \\ x_e'\end{pmatrix} & =\begin{pmatrix}a_o-b_o & -a_e-b_e \\ a_e-b_e & -a_o-b_o\end{pmatrix}\begin{pmatrix}x_o \\ x_e\end{pmatrix}+\begin{pmatrix}h_e \\ h_o\end{pmatrix},
\end{align*}
where
\begin{align*}
\L=\begin{pmatrix}c_e+d_e & c_o-d_o \\ c_o+d_o & c_e-d_e\end{pmatrix}.
\end{align*}
Hence, if $\det(\L(t))=c(t)c(-t)-d(t)d(-t)\ne 0$ for a.\,e. $t\in I$, $\L(t)$ is invertible a.\,e. and
\begin{align*}\begin{pmatrix}x_o' \\ x_e'\end{pmatrix} & =\L^{-1}\begin{pmatrix}a_o-b_o & -a_e-b_e \\ a_e-b_e & -a_o-b_o\end{pmatrix}\begin{pmatrix}x_o \\ x_e\end{pmatrix}+\L^{-1}\begin{pmatrix}h_e \\ h_o\end{pmatrix}.
\end{align*}
So the general case where $c\not\equiv0$ is reduced to the case studied in Section 3, taking
$$\L^{-1}\begin{pmatrix}a_o-b_o & -a_e-b_e \\ a_e-b_e & -a_o-b_o\end{pmatrix}$$
as coefficient matrix.
\subsection{Computing the matrix exponential}
It is well known that computing the exponential of a functional matrix is difficult in general, and that this computation is deeply related to the property of the matrix commuting with its integral. Here we summarize the findings of \cite{Kot} in this regard.
\begin{dfn} Let $S\subset{\mathbb R}$ be an interval. Define ${\mathcal M}\subset{\mathcal C}^1({\mathbb R},{\mathcal M}_{n\times n}({\mathbb R}))$ such that for every $M\in{\mathcal M}$,
\begin{itemize}
\item there exists $P\in{\mathcal C}^1({\mathbb R},{\mathcal M}_{n\times n}({\mathbb R}))$ such that $M(t)=P^{-1}(t)J(t)P(t)$ for every $t\in S$ where $P^{-1}(t)J(t)P(t)$ is a Jordan decomposition of $M(t)$;
\item the superdiagonal elements of $J$ are independent of $t$, as well as the dimensions of the Jordan boxes associated to the different eigenvalues of $M$;
\item two different Jordan boxes of $J$ correspond to different eigenvalues;
\item if two eigenvalues of $M$ are ever equal, they are identical in the whole interval $S$.
\end{itemize}
\end{dfn}
It is straightforward to check that the functional matrices appearing in cases (C1)--(C5) belong to ${\mathcal M}$.
\begin{thm}[\cite{Kot}] Let $M\in{\mathcal M}$. Then, the following statements are equivalent.
\begin{itemize}
\item $M$ commutes with its derivative.
\item $M$ commutes with its integral.
\item $M$ commutes functionally, that is $M(t)M(s)=M(s)M(t)$ for all $t,s\in S$.
\item $M=\sum_{k=0}^r\gamma_k(t)C^k$ for some $C\in{\mathcal M}_{n\times n}({\mathbb R})$ and $\gamma_k\in{\mathcal C}^1(S,{\mathbb R})$, $k=0,\dots,r$.
\end{itemize}
Furthermore, any of the last properties implies that $M(t)$ has a set of constant eigenvectors, i.e., a Jordan decomposition $P^{-1}J(t)P$ where $P$ is constant.
\end{thm}
When we first studied the case (C1) (the other cases require similar considerations), we needed to compute the exponential of the matrix
$$\overline M=\begin{pmatrix}-B_e & -(1+k)A_o\\ (1-k)A_o & -B_e\end{pmatrix}.$$
$\overline M$ has two complex conjugate eigenvalues. What is more, it functionally commutes, so it has a basis of constant eigenvectors given by the constant matrix
$$Y:=\frac{1}{k-1}\begin{pmatrix}i\sqrt{1-k^2} & -i\sqrt{1-k^2} \\ k-1 & k-1 \end{pmatrix}.$$
We have that
$$Y^{-1}\overline M(t)Y=Z(t):=\begin{pmatrix} -B_e-i\,A_o\sqrt{1-k^2} & 0 \\ 0 & -B_e+i\,A_o\sqrt{1-k^2} \end{pmatrix}.$$
Hence,
$$e^{\overline M(t)}=e^{YZ(t)Y^{-1}}=Ye^{Z(t)}Y^{-1}=e^{-B_e(t)}\begin{pmatrix}\cos\(\sqrt{1-k^2}A(t)\) & -\frac{1+k}{\sqrt{1-k^2}}\sin\(\sqrt{1-k^2}A(t)\)\\ \frac{\sqrt{1-k^2}}{1+k}\sin\(\sqrt{1-k^2}A(t)\) & \cos\(\sqrt{1-k^2}A(t)\)\end{pmatrix}.$$
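As a closing numerical check (ours, not from \cite{Kot}; the sample values $k=0.3$, $B_e=0.4$, $A=0.7$ are arbitrary, and $A_o=A$ here since $a$ is even), the following Python sketch compares a plain Taylor-series evaluation of $e^{\overline M(t)}$ with the closed form above; note the sign $-B_e$ in the top-left entry, consistent with $Z(t)$:

```python
import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices."""
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0], X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0], X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

def mat_exp(M, n_terms=40):
    """Plain Taylor series for the 2x2 matrix exponential (fine for small norms)."""
    E = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for m in range(1, n_terms):
        term = [[x / m for x in row] for row in mat_mul(term, M)]
        E = [[E[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return E

k, Be, A = 0.3, 0.4, 0.7              # sample values of k, B_e(t) and A(t) = A_o(t)
M = [[-Be, -(1 + k) * A], [(1 - k) * A, -Be]]
w = math.sqrt(1 - k * k)
closed = [[math.exp(-Be) * math.cos(w * A),
           -math.exp(-Be) * (1 + k) / w * math.sin(w * A)],
          [math.exp(-Be) * w / (1 + k) * math.sin(w * A),
           math.exp(-Be) * math.cos(w * A)]]
err = max(abs(mat_exp(M)[i][j] - closed[i][j]) for i in range(2) for j in range(2))
```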
% https://arxiv.org/abs/2105.12550
\title{Commuting probability in algebraic groups}

\begin{abstract}
We introduce the notion of commuting probability, $p(G)$, for an algebraic group $G$. This notion is inspired by the corresponding notions in finite groups and compact groups. The computation of $p(G)$ for reductive groups is readily done using the notion of $z$-classes. We introduce two generalisations of this relation, $iz$-equivalence and $dz$-equivalence. These notions lead us naturally to the notion of a regular element in $G$. Finally, with the help of this notion of regular elements, we compute $p(G)$ for a connected, linear algebraic group $G$. We also compute the set of limit points of the numbers $p(G)$ as $G$ varies over the classes of reductive groups, solvable groups and nilpotent groups.
\end{abstract}

\section{Introduction}
\emph{What is the probability that two randomly chosen elements of a finite group commute?}
\vskip2mm
This was the question asked by Erd\H{o}s and Tur\'an in \cite{ET}, where they formulated the notion of commuting probability for a finite group.
If $C$ denotes the set of ordered pairs of commuting elements of a finite group $G$, then $p(G)$, the commuting probability of $G$, is defined to be the ratio $|C|/|G \times G|$.
It is proved in the same paper that this number is equal to $k/|G|$ where $k$ is the number of conjugacy classes in $G$.
Almost immediately, Gustafson (\cite{G}), generalised the notion of commuting probability to compact groups where he used the Haar measure in the definition.
There has been a flurry of activity around this topic for many years since then.
The purpose of the present paper is to introduce the notion of commuting probability in the case of algebraic groups.
In algebraic geometry, the natural measure of a certain algebraic set is its dimension.
If $G$ is a linear algebraic group then we define the set $C(G)$ to be the set of ordered pairs of commuting elements in $G$, $C(G) := \{(a, b) \in G \times G: ab = ba\}$.
This is a Zariski closed subset of $G \times G$.
We define the commuting probability of $G$, denoted by $p(G)$, as follows
$$p(G) := \frac{\dim(C(G))}{\dim(G \times G)} = \frac{\dim(C(G))}{2\dim(G)} .$$
For a finite group $G$, which can be considered as an algebraic group with $\dim(G) = 0$, we define $p(G) = 1$.
We note a basic lemma which computes the commuting probability of a direct product.
\begin{lemma}\label{direct}
If $G = H_1 \times H_2$ then
$$p(G) = \dfrac{\dim(H_1)p(H_1) + \dim(H_2)p(H_2)}{\dim (H_1) + \dim(H_2)} .$$
\end{lemma}
\begin{proof}
Since $G = H_1 \times H_2$, it follows that $C(G)$ is isomorphic to $C(H_1) \times C(H_2)$ and therefore $\dim(C(G)) = \dim(C(H_1)) + \dim(C(H_2))$ which further equals $2\dim(H_1)p(H_1) + 2\dim(H_2)p(H_2)$.
As $\dim(G) = \dim(H_1) + \dim(H_2)$, the lemma now follows.
\end{proof}
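As a quick sanity check of the lemma, here is a small sketch using exact rational arithmetic. The value $p(GL_2) = 3/4$ used below is taken from the computations of Section 2, and $p(\mathbb{G}_m^a) = 1$ since tori are abelian:

```python
from fractions import Fraction

def p_product(dim1, p1, dim2, p2):
    # Lemma: p(H1 x H2) is the dimension-weighted average of p(H1) and p(H2)
    return (dim1 * p1 + dim2 * p2) / (dim1 + dim2)

# GL_2 has dimension 4 and p = 3/4; G_m^a has dimension a and p = 1.
# The lemma then gives p(GL_2 x G_m^a) = (3 + a)/(4 + a).
for a in range(1, 6):
    assert p_product(4, Fraction(3, 4), a, Fraction(1)) == Fraction(3 + a, 4 + a)
```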
If $G$ is a connected, linear algebraic group then so is $G \times G$ and then the dimension of every proper closed subset of $G \times G$ is smaller than $\dim(G \times G)$.
Hence for a connected $G$, $p(G) = 1$ if and only if $G$ is abelian.
If $G$ is not connected, then $p(G)$ can be 1 without $G$ being abelian.
Consider $G = G_1 \times G_2$ where $G_1$ is abelian and $G_2$ is a finite non-abelian group.
Then by the above lemma, $p(G) = 1$ but $G$ is not abelian.
To avoid such cases, we will work with connected groups in the remainder of this paper.
In any case, $p(G) = p(G^0)$, so no generality is lost.
We also choose to work over $\mathbb{C}$, though the results remain valid over an algebraically closed field, indeed over a perfect field.
We begin the study of $p(G)$ with the case of reductive groups in Section 2.
If $G$ is a reductive group of dimension $n$ and rank $r$ then we prove in Theorem \ref{reductive} that $p(G) = (n+r)/2n$.
We also list the numbers $p(G)$ for all simple groups.
These calculations are used later while computing the limit points of the set of the numbers $p(G)$.
The main tool used in the proof of Theorem \ref{reductive} is the notion of $z$-classes which is not useful in the case of a non-reductive connected $G$.
Hence we introduce the notions of $iz$-equivalence and $dz$-equivalence, generalising the $z$-equivalence in the third section.
We study the corresponding equivalence classes and make some interesting observations, which may be of independent interest.
These notions lead us to the notion of a regular element in $G$.
The dimension of the centralizer of a regular element in $G$ is called the regular rank of $G$.
We prove in Theorem \ref{general} that if $G$ is a connected, linear algebraic group with dimension $n$ and regular rank $r$ then $p(G) = (n+r)/2n$.
In this section, Section 4, we also list the numbers $p(U)$ where $U$ is the unipotent radical of a fixed Borel subgroup in a simple algebraic group.
By Theorem \ref{general}, $p(G)$ is a rational number $> 1/2$.
In Section 5, we compute all rational numbers that can occur as $p(G)$ for various classes of $G$, namely the reductive groups, solvable groups and nilpotent groups.
As a consequence, we compute the limit points of these numbers for all these above classes.
The set of limit points in all these cases is equal to the interval $[1/2, 1]$.
We then compare the results on $p(G)$ for algebraic groups with the corresponding results in finite groups.
We point out some similarities and many differences between these two notions.
This occupies the Section 6.
In the seventh section, we point out the compatibilities of $p(G)$ for a semisimple group $G$ defined over $\mathbb{F}_q$ with the corresponding finite groups of Lie type, in the asymptotic sense.
We close the paper with some concluding remarks in the last section.
We also indicate some questions that may be of interest to the community.
\section*{Acknowledgements}
It is a pleasure to thank Dipendra Prasad, Saurav Bhaumik, Anuradha Garge and Anupam Kumar Singh for many useful discussions.
Anupam, in particular, has been a catalyst for this paper in more than one way.
\section{Reductive groups}
Let $G$ be a complex reductive group.
In this section, we compute the commuting probability $p(G)$ of $G$ in terms of the dimension and rank of $G$.
The rank of a reductive group is the dimension of a maximal torus in $G$.
Since we compute dimensions, it will be sufficient to work with the group $G(\mathbb{C})$.
In the computation, we will use the notion of a $z$-class.
Let us first recall it.
Two elements $g, h \in G(\mathbb{C})$ are said to be \emph{$z$-equivalent} if their centralisers, $Z_{G(\mathbb{C})}(g)$ and $Z_{G(\mathbb{C})}(h)$, are conjugate within $G(\mathbb{C})$.
This is an equivalence relation and the corresponding equivalence classes are called \emph{$z$-classes}.
This equivalence is weaker than the one given by the conjugacy relation.
Each $z$-class is a union of certain conjugacy classes.
It is known that if $G$ is a reductive group defined over $\mathbb{C}$ then the number of $z$-classes in $G(\mathbb{C})$ is finite.
The result is true in a more general setting and we refer the reader to \cite{GS} for a detailed discussion.
For the present paper, the finiteness result over $\mathbb{C}$ is sufficient.
\begin{theorem}\label{reductive}
Let $G$ be a complex reductive group with dimension $n$ and rank $r$.
The commuting probability of $G$, $p(G)$, is equal to
$$\frac{n + r}{2n} = \frac{1}{2} + \frac{r}{2n}.$$
\end{theorem}
\begin{proof}
We will be working throughout the proof with the set of $\mathbb{C}$-rational points without mentioning the field $\mathbb{C}$ explicitly.
So, whenever we talk about a subset of an algebraic set $X$, we will always mean a subset of $X(\mathbb{C})$.
Let $\mathcal{C}$ denote the set of conjugacy classes in $G$ and let $\mathfrak{c}$ be a conjugacy class in $G$.
The set $C(G)$ is equal to the set
$$\cup_{g \in G} \{g\} \times Z_G(g) = \bigcup_{\mathfrak{c} \in \mathcal{C}} \bigg(\cup_{g \in \mathfrak{c}} \big(\{g\} \times Z_G(g)\big)\bigg) = \bigcup_{\mathfrak{c} \in \mathcal{C}} C(G)_{\mathfrak{c}} .$$
If $g, h \in \mathfrak{c}$ then $Z_G(g)$ and $Z_G(h)$ are conjugate in $G$, in particular their dimensions are the same hence $\dim C(G)_{\mathfrak{c}}$ is equal to $\dim(G)$ for each $\mathfrak{c} \in \mathcal{C}$.
Let $\mathcal{Z}$ denote the set of $z$-classes in $G$.
This is a finite set and
$$\mathcal{C} = \cup_{\mathfrak{z} \in \mathcal{Z}} \big(\mathfrak{z}/\mathfrak{c_{\mathfrak{z}}}\big)$$
where the set $\mathfrak{z}/\mathfrak{c_{\mathfrak{z}}}$ is the set of conjugacy classes in $\mathfrak{z}$.
Since $\mathcal{Z}$ is a finite set, there is one $z$-class in $G$ which is dense in $G$.
Indeed, the set of regular semisimple elements forms a $z$-class in $G$ which is dense in $G$.
We denote this $z$-class by $\mathfrak{z}_0$.
It then follows that $\bigcup_{\mathfrak{c} \subseteq \mathfrak{z}_0} C(G)_{\mathfrak{c}}$ is dense in $C(G)$ and hence $\dim(C(G))$ is equal to the dimension of $\bigcup_{\mathfrak{c} \subseteq \mathfrak{z}_0} C(G)_{\mathfrak{c}}$.
Let us fix a conjugacy class $\mathfrak{c}$ in $\mathfrak{z}_0$.
Then the dimension of $\bigcup_{\mathfrak{c} \subseteq \mathfrak{z}_0} C(G)_{\mathfrak{c}}$ is equal to $\dim(\mathfrak{z}_0/\mathfrak{c}) + \dim(G)$.
Further, $\dim(\mathfrak{z}_0/\mathfrak{c})$ is $\dim(G) - \dim(\mathfrak{c})$ which is $\dim T$ for a maximal torus in $G$.
Hence $\dim(C(G)) = \dim (G) + \dim (T) = n + r$ and
$$p(G) = \frac{n + r}{2n} = \frac{1}{2} + \frac{r}{2n}= \frac{1}{2} + \frac{\mathrm{rank}(G)}{2\dim(G)}.$$
\end{proof}
\begin{remark}
The analysis in the above proof implies that the dimension of the variety of conjugacy classes in a reductive $G$ is equal to the dimension of a maximal torus in $G$.
This is a well-known fact, see \cite[6.4]{St} for instance.
However, we have used $z$-classes because we use a generalisation of this notion in the general case.
The reason for that being that we have no information about the dimension of the variety of conjugacy classes in $G$ in the general case.
We do not even know if the set of conjugacy classes forms a (quasi-projective) variety.
\end{remark}
\begin{remark}\label{simple}
Using the above theorem, we note commuting probabilities of some groups, including all simple algebraic groups.
\begin{enumerate}
\item $p(GL_n) = \dfrac{1}{2} + \dfrac{n}{2n^2} = \dfrac{1}{2} + \dfrac{1}{2n} = \dfrac{n+1}{2n}$, for $n \geq 1$.
\vskip1mm
\item $p(SL_n) = \dfrac{1}{2} + \dfrac{n-1}{2(n^2-1)} = \dfrac{1}{2} + \dfrac{1}{2(n + 1)}$, for $n \geq 2$.
\vskip1mm
\item $p(SO_{2n+1}) = \dfrac{1}{2} + \dfrac{n}{2(2n^2 + n)} = \dfrac{1}{2} + \dfrac{1}{2(2n + 1)}$, for $n \geq 2$.
\vskip1mm
\item $p(Sp_{2n}) = \dfrac{1}{2} + \dfrac{n}{2(2n^2 + n)} = \dfrac{1}{2} + \dfrac{1}{2(2n + 1)}$, for $n \geq 3$.
\vskip1mm
\item $p(SO_{2n}) = \dfrac{1}{2} + \dfrac{n}{2(2n^2 - n)} = \dfrac{1}{2} + \dfrac{1}{2(2n - 1)}$, for $n \geq 4$.
\vskip1mm
\item $p(G_2) = \dfrac{4}{7}$, \hskip2mm $p(F_4) = p(E_6) = \dfrac{7}{13}$, \hskip2mm $p(E_7) = \dfrac{10}{19}$ and $p(E_8) = \dfrac{16}{31}$.
\end{enumerate}
\end{remark}
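The values in the remark above can be regenerated from the formula $p(G) = \frac{1}{2} + \frac{r}{2n}$ with exact rational arithmetic; the dimension and rank data are standard:

```python
from fractions import Fraction

def p_reductive(dim, rank):
    # Theorem: p(G) = 1/2 + rank/(2*dim) for a reductive group G
    return Fraction(1, 2) + Fraction(rank, 2 * dim)

# classical families, sample range of n
for n in range(2, 8):
    assert p_reductive(n * n, n) == Fraction(n + 1, 2 * n)                              # GL_n
    assert p_reductive(n * n - 1, n - 1) == Fraction(1, 2) + Fraction(1, 2 * (n + 1))   # SL_n
    assert p_reductive(2 * n * n + n, n) == Fraction(1, 2) + Fraction(1, 2 * (2 * n + 1))  # SO_{2n+1}, Sp_{2n}
    assert p_reductive(2 * n * n - n, n) == Fraction(1, 2) + Fraction(1, 2 * (2 * n - 1))  # SO_{2n}

# exceptional groups: (dimension, rank)
data = {"G2": (14, 2), "F4": (52, 4), "E6": (78, 6), "E7": (133, 7), "E8": (248, 8)}
expected = {"G2": Fraction(4, 7), "F4": Fraction(7, 13), "E6": Fraction(7, 13),
            "E7": Fraction(10, 19), "E8": Fraction(16, 31)}
for name in data:
    assert p_reductive(*data[name]) == expected[name]
```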
\section{Regular elements and regular rank}
We notice that the following three properties of $z$-classes have been crucial in the proof of Theorem \ref{reductive}.
\begin{enumerate}
\item A $z$-class is a union of conjugacy classes,
\item a reductive $G$ contains a dense $z$-class, and,
\item the centralisers of two elements in a $z$-class (are conjugate within $G$, hence are isomorphic as abstract groups, and hence they) have the same dimensions.
\end{enumerate}
\begin{example}
A non-reductive $G$ need not contain a dense $z$-class.
Let $U$ denote the group of $3 \times 3$ upper triangular matrices over $\mathbb{C}$.
Then for
$$g = \begin{pmatrix}
1 & a & b \\ & 1 & c \\ & & 1
\end{pmatrix},
\hskip5mm
Z_U(g) = \left\{\begin{pmatrix}
1 & x & y \\ & 1 & z \\ & & 1
\end{pmatrix}: cx = az \right\} .$$
The group $U/Z(U)$ is abelian.
If two centralisers in $U$ are conjugate then their images in $U/Z(U)$ must be the same.
Thus, the $z$-classes in $U$ correspond to the ratios $\frac{a}{c} \in \mathbb{C} \cup \{\infty\}$ along with the central $z$-class.
The dimension of each $z$-class is less than $3$.
In particular, there is no dense $z$-class in $U$.
\end{example}
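The centraliser condition $cx = az$ used in this example is a short exact computation; a brute-force sketch over a small integer grid (the specific parameters $(a,b,c)=(2,5,3)$ are arbitrary):

```python
from itertools import product

def upper(a, b, c):
    # 3x3 upper unitriangular matrix with the parametrisation used above
    return ((1, a, b), (0, 1, c), (0, 0, 1))

def mul(X, Y):
    # product of two 3x3 integer matrices
    return tuple(tuple(sum(X[i][l] * Y[l][j] for l in range(3)) for j in range(3))
                 for i in range(3))

# fix g = upper(a, b, c) and check that h = upper(x, y, z) commutes with g
# exactly when c*x == a*z, over a grid of integer parameters
a, b, c = 2, 5, 3
g = upper(a, b, c)
for x, y, z in product(range(-3, 4), repeat=3):
    h = upper(x, y, z)
    assert (mul(g, h) == mul(h, g)) == (c * x == a * z)
```

Only the $(1,3)$ entries of $gh$ and $hg$ differ, by $az$ and $cx$ respectively, which is where the condition comes from.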
Taking a cue from the above three properties we make the following two definitions generalising the notion of $z$-equivalence.
\begin{definition}
\noindent $(1)$ Two elements $x$ and $y$ in a group $G$ are called \emph{$iz$-equivalent} if the centralisers, $Z_G(x)$ and $Z_G(y)$, are isomorphic.
\noindent $(2)$ Two elements $x$ and $y$ in an algebraic group $G$ are called \emph{$dz$-equivalent} if the centralisers, $Z_G(x)$ and $Z_G(y)$, have the same dimension.
\end{definition}
The notion of $iz$-equivalence was also introduced by Dilpreet Kaur and Uday Bhaskar Sharma.
We are borrowing the name, $iz$-equivalence, with their kind permission.
The above two relations are indeed equivalence relations.
The corresponding equivalence classes will be called $iz$-classes and $dz$-classes, respectively.
We have the following hierarchy in terms of the strengths of these equivalences:
\begin{itemize}
\item a $dz$-class is a union of $iz$-classes,
\item an $iz$-class is a union of $z$-classes and
\item a $z$-class is a union of conjugacy classes.
\end{itemize}
These equivalence relations are, in general, different as the following examples show.
\begin{example}
The $z$-class of central elements in a group is not always a single conjugacy class.
\end{example}
\begin{example}\label{iz-but-not-z}
We once again consider the group $U$ of $3 \times 3$ upper triangular matrices over $\mathbb{C}$.
If we take
$$x = \begin{pmatrix}
1 & 1 & 1 \\ & 1 & 0 \\ & & 1
\end{pmatrix},
y = \begin{pmatrix}
1 & 0 & 1 \\ & 1 & 1 \\ & & 1
\end{pmatrix}
\hskip5mm
\mathrm{then}
\hskip5mm
Z_U(x) = \begin{pmatrix}
1 & a & b \\ & 1 & 0 \\ & & 1
\end{pmatrix},
Z_U(y) = \begin{pmatrix}
1 & 0 & c \\ & 1 & d \\ & & 1
\end{pmatrix}$$
where $a, b, c, d$ vary over $\mathbb{G}_a$.
These centralisers are isomorphic to $\mathbb{G}_a^2$.
But they are not conjugate as their images in the quotient $U/Z(U)$, which is abelian, are different one dimensional subgroups of $U/Z(U)$.
Thus, $x, y$ are $iz$-equivalent but not $z$-equivalent.
\end{example}
\begin{example}\label{dz-but-not-iz}
Finally, we take $x$ to be a regular semisimple element in $G = SL_2(\mathbb{C})$ and $y$ to be a regular unipotent element in the same group.
For instance, take
$$x = \begin{pmatrix}
\lambda & \\ & \lambda^{-1}
\end{pmatrix}, \lambda \ne 1,
\hskip5mm
\mathrm{and}
\hskip5mm
y = \begin{pmatrix}
1 & 1 \\ & 1
\end{pmatrix}$$
then the centraliser of $x$ is isomorphic to $\mathbb{G}_m$ and the centraliser of $y$ is isomorphic to $\mathbb{G}_a$.
Hence, $x$ and $y$ are $dz$-equivalent, $\dim(\mathbb{G}_m) = \dim(\mathbb{G}_a) = 1$, but not $iz$-equivalent as $\mathbb{G}_m$ and $\mathbb{G}_a$ are not isomorphic.
\end{example}
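The dimension statement in this example can be made concrete over a finite field: over $\mathbb{F}_q$ both centralisers have on the order of $q$ points (reflecting dimension $1$), although their group structures differ. A brute-force sketch in $SL_2(\mathbb{F}_5)$ (the choice $q = 5$ is an arbitrary small odd prime):

```python
from itertools import product

q = 5  # work in SL_2 over the field F_5

def mul(X, Y):
    return tuple(tuple(sum(X[i][l] * Y[l][j] for l in range(2)) % q
                       for j in range(2)) for i in range(2))

# enumerate SL_2(F_5): determinant 1 mod 5
SL2 = [((a, b), (c, d)) for a, b, c, d in product(range(q), repeat=4)
       if (a * d - b * c) % q == 1]

x = ((2, 0), (0, 3))   # regular semisimple: 2 * 3 = 6 = 1 in F_5
y = ((1, 1), (0, 1))   # regular unipotent

def centraliser_size(g):
    return sum(1 for h in SL2 if mul(g, h) == mul(h, g))

# Z(x) is the diagonal torus (q - 1 points); Z(y) consists of the matrices
# +-(unipotent) (2q points): both of size ~q, i.e. of dimension 1
assert centraliser_size(x) == q - 1
assert centraliser_size(y) == 2 * q
```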
\begin{remark}
Note that in Example \ref{iz-but-not-z}, the two centralisers are conjugate in $SL_3$, a bigger group than $U$.
There are two ways to see this.
The elements $x$ and $y$ are themselves conjugate in $SL_3$ and hence their centralisers are conjugate in $SL_3$ which, in this case, agree with their centralisers in $U$.
Alternatively, the two centralisers correspond to two different simple systems of roots in the root system $\Phi$ of $SL_3$, hence they are conjugate by a Weyl group element.
If we were to consider only abstract groups, then the celebrated theory of HNN extensions tells us that two isomorphic centralisers in a group become conjugate in a bigger group.
\end{remark}
Emboldened by the above remark and especially the Example \ref{iz-but-not-z}, we ask
\begin{question}
If $G$ is a linear algebraic group and $H_1$, $H_2$ are two isomorphic subgroups of $G$, then is there a linear algebraic group containing $G$ in which $H_1$ and $H_2$ become conjugate?
Equivalently, if $H_1, H_2 \subseteq G$ are isomorphic then do we have an embedding of $G$ in some $GL_n$ such that $H_1, H_2$ become conjugate in $GL_n$?
\end{question}
Such a question for $iz$-equivalence vis-\`{a}-vis $z$-equivalence would be difficult to consider because, unlike Example \ref{iz-but-not-z}, the centralisers may change in a bigger group.
We therefore only ask how the $iz$-equivalence in $G$ behaves with respect to the embeddings of $G$ in bigger groups.
Do $iz$-equivalent elements in $G$ remain $iz$-equivalent in all such embeddings?
Do they become $z$-equivalent in some embedding?
What would be the situation for finite groups of Lie type?
Such questions will not make sense for other equivalences.
We may have a semisimple element $z$-equivalent to a unipotent element, in an abelian group for instance, but these two elements will never be conjugate in a bigger group.
Indeed, they may not remain $z$-equivalent in a bigger group.
Consider, for instance, the two distinct elements $x = 1$ and $y$ in $B_2(\mathbb{F}_2)$, the Borel subgroup in $GL_2(\mathbb{F}_2)$.
Since the group $B_2(\mathbb{F}_2)$ is abelian, $x$ and $y$ are $z$-equivalent but they don't remain $z$-equivalent in bigger groups, in $B_2(\mathbb{F}_4)$ for instance.
Similarly, $x$ and $y$ in Example \ref{dz-but-not-iz} can never be $iz$-equivalent in a bigger group.
\begin{remark}
The number of $iz$-classes in the group $U$ above is finite.
However, we do not know if this holds in general.
If we knew that the number of $iz$-classes in an algebraic group is finite, at least, for a class of groups (other than the reductive groups), then we would be able to generalise the proof of Theorem \ref{reductive} for this class of groups.
However, we have no such information at present.
We do not even know if there is a dense $iz$-class in a class of algebraic groups (other than the reductive groups).
Hence we consider the $dz$-classes, whose number is finite for every algebraic group.
It also follows that every algebraic group contains a dense $dz$-class.
\end{remark}
\begin{definition}\label{def-regular}
Let $G$ be a linear algebraic group.
An element $g \in G$ is called {\em regular} if the dimension of its centraliser, $Z_G(g)$, is minimum among such dimensions.
Equivalently, an element $g \in G$ is regular if its conjugacy class has the largest possible dimension.
\end{definition}
We note that the set of regular elements in $G$ forms a single $dz$-class, that this class is open (and hence dense) in $G$, and that the centralisers of any two of its elements have the same dimension.
These are exactly the analogues of the three properties listed at the beginning of this section.
\begin{definition}
Let $G$ be a linear algebraic group.
The \emph{regular rank} of $G$ is the dimension of the centraliser of a regular element in $G$.
\end{definition}
We are now ready to prove the general version of Theorem \ref{reductive}.
\section{Non-reductive groups}
In this section, $G$ is a connected, linear algebraic group defined over $\mathbb{C}$.
The set of regular elements in $G$ is denoted by $G_{reg}$.
We note some basic results about the regular rank.
The proofs are omitted.
\begin{remark}\label{regular}
\begin{enumerate}
\item If $G$ is reductive then the regular rank of $G$ is the same as its rank, the dimension of a maximal torus in $G$.
\item If $G_1 \subseteq G_2$ then the regular rank of $G_1$ is less than or equal to that of $G_2$.
\item The regular rank of $G_1 \times G_2$ is the sum of the regular ranks of $G_i$.
\item The regular rank of a semidirect product $G_1 \ltimes G_2$ need not be equal to the sum of the regular ranks of $G_i$.
We will see an example of this, a Borel in a semisimple group, in Lemma \ref{Borel}.
\end{enumerate}
\end{remark}
\begin{theorem}\label{general}
Let $G$ be a complex, connected, linear algebraic group.
If the dimension of $G$ is $n$ and the regular rank of $G$ is $r$ then the commuting probability of $G$ is
$$\frac{n + r}{2n} = \frac{1}{2} + \frac{r}{2n}.$$
\end{theorem}
\begin{proof}
The proof proceeds along the same lines as the proof of Theorem \ref{reductive}.
We note that $C(G)$ is equal to the union $\cup_{\mathfrak{c} \in \mathcal{C}} C(G)_{\mathfrak{c}}$ where $\mathcal{C}$ is the set of conjugacy classes in $G$ and $
\dim(C(G)_{\mathfrak{c}}) = \dim(G)$ for every $\mathfrak{c} \in \mathcal{C}$.
Since the set $G_{reg}$ is dense in $G$ and is a union of conjugacy classes in $G$, we have that $\dim(C(G))$ is equal to $\dim(G_{reg}/\mathfrak{c}) + \dim(G)$
where the set $G_{reg}/\mathfrak{c}$ is the set of conjugacy classes in $G_{reg}$.
The dimension of $G_{reg}/\mathfrak{c}$ is equal to $\dim(G) - \dim(\mathfrak{c}) = \dim Z_G(g)$, where $g$ is a fixed element in $\mathfrak{c}$.
Thus, $\dim(G_{reg}/\mathfrak{c})$ is the regular rank of $G$.
Hence $\dim(C(G)) = n + r$ and
$$p(G) = \frac{n + r}{2n} = \frac{1}{2} + \frac{r}{2n}.$$
\end{proof}
\begin{remark}
For a connected linear algebraic group $G$, if the set $\mathcal{C}$ of conjugacy classes forms a variety in a natural way then it follows from the above proof that $\dim(\mathcal{C})$ is the same as the regular rank of $G$.
\end{remark}
We now compute the regular ranks of certain subgroups of semisimple groups.
\begin{lemma}\label{Borel}
Let $G$ be a complex semisimple group.
We fix a Borel subgroup $B$ in $G$ and let $U$ be the unipotent radical of $B$.
Then the regular rank of $U$ is equal to the rank of the group $G$.
The regular rank of a parabolic in $G$ is equal to the rank of $G$.
In particular, the regular rank of $B$ is equal to the rank of $G$.
\end{lemma}
\begin{proof}
Let $u$ be a regular unipotent element of $G$ which belongs to $U$.
It is proved by Steinberg, \cite[$\S 4$]{St}, that such elements exist.
The dimension of $Z_G(u)$ is equal to the rank of $G$ which is bigger than or equal to the regular rank of $U$.
Further, since $u$ is in a unique Borel subgroup of $G$, it follows that $Z_G(u) = Z_U(u)$.
Hence the regular rank of $U$ is equal to the rank of $G$.
If $P$ is a parabolic in $G$ then, up to conjugacy in $G$, $U \subset P \subset G$.
It follows, from Remark \ref{regular} (2), that the regular rank of $P$ is also equal to the rank of $G$.
\end{proof}
\begin{corollary}
Let $G$ be as in the above lemma with dimension $n$ and rank $r$, and let $U$ be as in the above lemma.
Then
$$p(U) = \frac{(\frac{n-r}{2}) + r}{n - r} = \frac{1}{2} + \frac{r}{n-r} .$$
\end{corollary}
\begin{proof}
Follows from the above two results and that $2 \dim(U) + r = n$.
\end{proof}
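A minimal sketch with exact rational arithmetic, applying the corollary $p(U) = \frac{1}{2} + \frac{r}{n-r}$ to the standard dimension and rank data of the classical families:

```python
from fractions import Fraction

def p_unipotent(dim_G, rank_G):
    # Corollary: p(U) = 1/2 + r/(n - r) with n = dim(G), r = rank(G)
    return Fraction(1, 2) + Fraction(rank_G, dim_G - rank_G)

for n in range(2, 10):
    # type A: G = SL_n with dim n^2 - 1 and rank n - 1
    assert p_unipotent(n * n - 1, n - 1) == Fraction(1, 2) + Fraction(1, n)
    # types B/C: dim 2n^2 + n, rank n
    assert p_unipotent(2 * n * n + n, n) == Fraction(1, 2) + Fraction(1, 2 * n)
    # type D: dim 2n^2 - n, rank n
    assert p_unipotent(2 * n * n - n, n) == Fraction(1, 2) + Fraction(1, 2 * n - 2)

# sanity check: U(SL_2) = G_a is abelian, so its commuting probability is 1
assert p_unipotent(3, 1) == 1
```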
\begin{remark}\label{unipotent}
Using the above result, we note commuting probabilities of the unipotent radicals of Borel subgroups of all simple groups.
For a simple $G$, the corresponding unipotent radical is denoted by $U(G)$.
\begin{enumerate}
\item $p(U(SL_n)) = \dfrac{1}{2} + \dfrac{n-1}{n^2 - n} = \dfrac{1}{2} + \dfrac{1}{n}$, for $n \geq 2$.
\vskip1mm
\item $p(U(SO_{2n+1})) = \dfrac{1}{2} + \dfrac{n}{2n^2} = \dfrac{1}{2} + \dfrac{1}{2n}$, for $n \geq 2$.
\vskip1mm
\item $p(U(Sp_{2n})) = \dfrac{1}{2} + \dfrac{n}{2n^2} = \dfrac{1}{2} + \dfrac{1}{2n}$, for $n \geq 3$.
\vskip1mm
\item $p(U(SO_{2n})) = \dfrac{1}{2} + \dfrac{n}{2n^2 - 2n} = \dfrac{1}{2} + \dfrac{1}{2n - 2}$, for $n \geq 4$.
\vskip1mm
\item $p(U(G_2)) = \dfrac{2}{3}$, \hskip2mm $p(U(F_4)) = p(U(E_6)) = \dfrac{7}{12}$, \hskip2mm $p(U(E_7)) = \dfrac{5}{9}$ and \hskip2mm $p(U(E_8)) = \dfrac{8}{15}$.
\end{enumerate}
\end{remark}
\section{Limit points}
It follows from Theorem \ref{general} that $\frac{1}{2} < p(G) \leq 1$.
We also have that $p(G) = 1$ if and only if $G$ is abelian.
In this section, we investigate the possible values of $p(G)$ in $[\frac{1}{2}, 1]$ as $G$ varies over all linear algebraic groups.
We will also compute the limit points of these numbers.
\begin{lemma}\label{simplebounded}
\begin{enumerate}
\item An $\alpha \in [\frac{1}{2}, 1]$ is $p(G)$ for a simple $G$ if and only if $\alpha = \frac{1}{2} + \frac{1}{2m}$ for some integer $m \geq 3$.
\item The number of simple groups $G$ with $p(G) > p/q > 1/2$ is finite.
\item The set of these numbers, from $(1)$ above, has only one limit point which is $\frac{1}{2}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first part of the lemma is evident from Remark \ref{simple}.
The inequality $p(G) > p/q$ gives $1/2 + r/2n > p/q$ which gives $r/n > (2p-q)/q > 0$ and $n/r < q/(2p-q)$.
The number $n/r$ is an integer for a simple algebraic group and it follows, from Remark \ref{simple} for instance, that there are only finitely many simple $G$ whose $n/r$ is bounded above, for every bound.
Thus, there are only finitely many simple $G$ with $p(G) > p/q > 1/2$.
Finally, from (2) it follows that $1/2$ is the only possible limit point of the set of numbers $p(G)$ where $G$ varies over simple algebraic groups.
We see that $1/2$ is indeed a limit point, $\lim_{n \to \infty}p(SL_n) = 1/2$.
\end{proof}
If we consider the reductive groups, instead of only the simple ones, then the above lemma does not hold.
For example, if $G = GL_2 \times \mathbb{G}_m^a$ then $p(G) = \frac{3 + a}{4 + a}$ and the limit of these numbers is $1$!
In fact, we have the following result:
\begin{proposition}
Every rational number from the set $(\frac{1}{2}, 1]$ is $p(G)$ for some reductive group $G$.
\end{proposition}
\begin{proof}
Assume that we have a rational number $1/2 < p/q \leq 1$.
Choose an $n$ with $1/2 < (n+1)/2n \leq p/q$.
This is possible as $\{(n+1)/2n\}$ is a decreasing sequence whose limit is $1/2$.
Let $G = GL_n^a \times \mathbb{G}_m^b$ where $a = q - p$ and $b = (2n^2p-n^2q - nq)/2$.
Here, $a \geq 0$ as $p/q \leq 1$.
Further, $b$ is an integer as $n^2 - n$ is always even and $b \geq 0$ if and only if $2np - (n+1) q \geq 0$ which holds as $(n+1)/2n \leq p/q$.
The rank of $G$ is equal to $an + b$ and the dimension is equal to $an^2 + b$.
Then
$$p(G) = \frac{(an^2 + b) + (an + b)}{2(an^2 + b)} = \frac{(n^2-n)p}{(n^2-n)q} = \frac{p}{q} .$$
\end{proof}
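The proof is effective: given $p/q$, it produces an explicit reductive group $GL_n^a \times \mathbb{G}_m^b$. The following sketch (the function names are ours) reproduces the construction and verifies the resulting probability:

```python
from fractions import Fraction

def reductive_group_with_p(p, q):
    """Given 1/2 < p/q <= 1, return (n, a, b) so that
    G = GL_n^a x G_m^b has commuting probability p/q."""
    n = 2
    while Fraction(n + 1, 2 * n) > Fraction(p, q):
        n += 1
    a = q - p
    # 2n^2 p - n^2 q - nq is even (n^2 - n is even), so the division is exact
    b = (2 * n * n * p - n * n * q - n * q) // 2
    return n, a, b

def p_of(n, a, b):
    # p(G) = (dim + rank)/(2 dim) for G = GL_n^a x G_m^b
    dim, rank = a * n * n + b, a * n + b
    return Fraction(dim + rank, 2 * dim)

for p, q in [(3, 5), (7, 10), (99, 100), (1, 1), (513, 1024)]:
    n, a, b = reductive_group_with_p(p, q)
    assert a >= 0 and b >= 0
    assert p_of(n, a, b) == Fraction(p, q)
```

For instance, $p/q = 3/5$ yields $n = 5$, $a = 2$, $b = 0$, i.e. $G = GL_5 \times GL_5$ with $p(G) = 60/100 = 3/5$.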
We have the same result for nilpotent groups.
\begin{proposition}
Every rational number from the set $(\frac{1}{2}, 1]$ is $p(G)$ for some nilpotent group $G$.
\end{proposition}
\begin{proof}
The proof follows in exactly the same way as that of the above proposition.
The role of $\mathbb{G}_m$ there is played by $\mathbb{G}_a$ here.
\end{proof}
\begin{corollary}\label{limit}
\begin{enumerate}
\item $\{p(G): G$ is reductive$\} = \{p(G): G$ is nilpotent$\} = \{p(G): G$ is solvable$\} = \{p(G): G$ is a linear algebraic group$\} = \mathbb{Q} \cap (1/2, 1]$.
\item Every $\alpha \in [1/2, 1]$ is a limit point of each of the sets in $(1)$ above.
\end{enumerate}
\end{corollary}
\section{Comparisons with the finite case}
As mentioned in the introduction, the notion of $p(G)$ for finite groups has been studied extensively.
A review of whether the analogous results hold for our notion of $p(G)$ is in order.
\subsection{Simple groups}
If $G$ is a non-abelian finite simple group then $p(G) \leq 1/12$, as was observed by J. Dixon (\cite[Introduction]{GR}).
The probabilities $p(G)$ for simple algebraic groups are also well-behaved, Lemma \ref{simplebounded}.
\subsection{Direct products}
If $G_1$ and $G_2$ are finite groups then $p(G_1 \times G_2) = p(G_1) p(G_2)$.
However, in the case of algebraic groups $p(G_1 \times G_2)$ is almost never equal to $p(G_1)p(G_2)$.
One way to understand this is that $p(G)$ for an algebraic group is always bounded below by $1/2$.
The multiplicativity of $p(G)$ for direct products of algebraic groups would say that for a non-abelian $G$ and a sufficiently large integer $r$, $p(G^r) = p(G)^r < 1/2$ giving a contradiction.
\subsection{Solvable groups and non-solvable groups}
If $G$ is a finite group with $p(G) > 3/40$ then Guralnick and Robinson prove that $G$ is either solvable or is isomorphic to $A_5 \times T$ for an abelian group $T$ (\cite[Theorem 11]{GR}).
Thus, the numbers $p(G)$ for non-solvable groups are bounded above by $3/40$, except for $p(A_5) = 1/12$.
In contrast, the numbers $p(G)$ for non-abelian reductive (hence non-solvable) algebraic groups $G$ take all rational values in $(1/2, 1)$, Corollary \ref{limit} (1).
\subsection{Limit points}
If $G$ is a finite non-abelian group then $p(G) \leq 5/8$ (\cite[Introduction]{G}).
Further, there are gaps within the numbers $p(G)$.
However, the numbers $p(G)$ for algebraic groups take all rational values in $(1/2, 1]$, Corollary \ref{limit} (1).
As a result, every real number in $[1/2, 1]$ is a limit point of the set $\{p(G): G$ is algebraic$\}$ whereas the set of limit points in the finite case is a nowhere dense set of rational numbers (\cite[Corollary 1.3]{E}).
\subsection{Isoclinism}
Two groups $G$ and $H$ are called isoclinic if there are isomorphisms $\phi:G/Z(G) \to H/Z(H)$ and $\psi:G' \to H'$ such that the diagram
$$\begin{tikzcd}
G/Z(G) \times G/Z(G) \arrow{r}{\phi \times \phi} \arrow[swap]{d}{\alpha_G} & H/Z(H) \times H/Z(H) \arrow{d}{\alpha_H} \\
G' \arrow{r}{\psi} & H'
\end{tikzcd}$$
is commutative, where $\alpha_G(aZ(G), bZ(G)) = aba^{-1}b^{-1}$ and $\alpha_H$ is defined analogously.
It is proved in \cite[Lemma 2.4]{L} that if $G$ and $H$ are finite isoclinic groups then $p(G) = p(H)$.
We can define isoclinism in exactly the same way as above for algebraic groups.
Then $GL_n$ and $SL_n$ are isoclinic groups with different commuting probabilities.
\section{Compatibility with the finite groups of Lie type}
It seems from the above section that there are more differences than similarities when we compare $p(G)$ for algebraic groups with the $p(G)$ for finite groups.
This is not surprising as our definition of $p(G)$ for algebraic groups uses the dimension, which is additive for direct products, and then we take the ratios of the dimensions.
This is the central reason for many differences noted in the previous section.
In spite of this, we contend that our notion of $p(G)$ is a natural notion by comparing it with $p(G)$ of the finite groups of Lie type.
Clearly, the characteristic of the field $\mathbb{C}$ played no role in the proofs of our results.
Hence the results stand good for groups defined over an algebraically closed field.
In fact, if $G$ is a connected, linear algebraic group defined over a perfect field $k$ then the dimension of $G$ and the regular rank of $G$ are defined over $k$.
Hence the results proved in the paper hold good over the field $k$ as well.
Let $k = \mathbb{F}_q$ be the finite field with $q$ elements and let $G$ be a semisimple group defined over $k$ of dimension $n$ and rank $r$.
We recall a result of Steinberg.
\begin{theorem}[Steinberg]\cite[14.11]{St1}
The number of semisimple conjugacy classes in $G(k)$ is equal to $q^r$.
\end{theorem}
Further, the number of all conjugacy classes in $G(\mathbb{F}_q)$ is $O(q^r)$ and the cardinality of $G(\mathbb{F}_q)$ is $O(q^n)$.
The commuting probability of the finite group $G(\mathbb{F}_q)$ is therefore $O(q^r)/O(q^n) = O(q^{n+r})/O(q^{2n})$ which is compatible with the commuting probability of the algebraic group $G$.
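This compatibility can be illustrated by brute force for a small group. The sketch below uses $GL_2(\mathbb{F}_3)$ (reductive rather than semisimple, chosen for its small size), where $n = 4$ and $r = 2$ predict an exponent ratio of $(n+r)/2n = 3/4$ between the point counts of $C(G)$ and $G \times G$:

```python
import math
from itertools import product

q = 3  # brute force over the field F_3

def mul(X, Y):
    return tuple(tuple(sum(X[i][l] * Y[l][j] for l in range(2)) % q
                       for j in range(2)) for i in range(2))

def det(M):
    return (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % q

# enumerate GL_2(F_3): 2x2 matrices over F_3 with nonzero determinant
G = [((a, b), (c, d)) for a, b, c, d in product(range(q), repeat=4)
     if det(((a, b), (c, d))) != 0]
assert len(G) == (q ** 2 - 1) * (q ** 2 - q)  # = 48

# count commuting ordered pairs; this equals k * |G|, k = #conjugacy classes
commuting = sum(1 for X in G for Y in G if mul(X, Y) == mul(Y, X))

# compare the point-count exponents with the dimension ratio (n + r)/2n = 3/4
ratio = math.log(commuting) / math.log(len(G) ** 2)
```

Here `ratio` is roughly $0.77$, already close to $3/4$ for $q = 3$; the agreement improves as $q$ grows.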
\section{Concluding remarks}
\subsection{Non-affine groups}
The set $C(G)$ makes sense even when $G$ is not an affine algebraic group and hence we can define $p(G)$ in exactly the same way for a non-affine $G$ as in the affine case.
Further, the notion of a regular element also makes sense for an algebraic group $G$, the set of regular elements is a dense subset of $G$ and hence our main result, Theorem \ref{general}, remains valid that $p(G) = (n+r)/2n$.
However, at this point, we do not have any computation of $p(G)$ for a non-affine algebraic group, except when $G$ is an abelian variety with $p(G) = 1$.
\subsection{Further possibilities}
While the set $C(G)$ of commuting pairs in a reductive group $G$ has been studied previously, no one seems to have introduced the notion of the commuting probability of $G$.
The notion in the case of finite groups has many applications and has been extensively investigated.
We hope that this notion will also be investigated.
There are many ways in which the probabilistic studies of algebraic groups can be done.
We list some possibilities below.
It is our hope that they will also be taken up.
\begin{question}
We prove in Section 5 that every rational number in $(1/2, 1]$ occurs as $p(G)$ for some linear algebraic group $G$.
However, the groups used in the proof have a large abelian direct factor.
What is the set of values of $p(G)$ as $G$ ranges over groups with no abelian direct factor?
What are the limit points of this set?
\end{question}
\begin{question}
If $H$ is an algebraic subgroup of $G$ then we can define $p(G, H)$ as
$$p(G, H) = \frac{\dim(C(G, H))}{\dim(G \times H)} $$
where $C(G, H) = \{(g, h) \in G \times H: gh = hg\}$.
This $p(G, H)$ gives the probability that two randomly chosen elements $g \in G$ and $h\in H$ commute with each other.
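Two immediate consequences of the definition (easily checked, and stated here only for orientation) are
\[
p(G,G)=\frac{\dim C(G)}{\dim (G\times G)}=p(G),
\qquad\text{and}\qquad
G \text{ abelian}\ \Longrightarrow\ C(G,H)=G\times H\ \Longrightarrow\ p(G,H)=1,
\]
so the relative notion genuinely extends the absolute one.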
Following the methods of Sections 2--4, it does not seem difficult to compute $p(G, H)$ when $H$ is a parabolic subgroup of a reductive group $G$.
It would be interesting to study this notion in general, especially when $G$ and $H$ are unipotent; we do not know how difficult the unipotent case is.
\end{question}
| {
"timestamp": "2021-05-27T02:22:48",
"yymm": "2105",
"arxiv_id": "2105.12550",
"language": "en",
"url": "https://arxiv.org/abs/2105.12550",
"abstract": "We introduce the notion of commuting probability, $p(G)$, for an algebraic group $G$. This notion is inspired by the corresponding notions in finite groups and compact groups. The computation of $p(G)$ for reductive groups is readily done using the notion of $z$-classes. We introduce two generalisations of this relation, $iz$-equivalence and $dz$-equivalence. These notions lead us naturally to the notion of a regular element in $G$. Finally, with the help of this notion of regular elements, we compute $p(G)$ for a connected, linear algebraic group $G$. We also compute the set of limit points of the numbers $p(G)$ as $G$ varies over the classes of reductive groups, solvable groups and nilpotent groups.",
"subjects": "Group Theory (math.GR); Algebraic Geometry (math.AG)",
    "title": "Commuting probability in algebraic groups"
} |
https://arxiv.org/abs/2006.09321 | Interval parking functions | Interval parking functions (IPFs) are a generalization of ordinary parking functions in which each car is willing to park only in a fixed interval of spaces. Each interval parking function can be expressed as a pair $(a,b)$, where $a$ is a parking function and $b$ is a dual parking function. We say that a pair of permutations $(x,y)$ is \emph{reachable} if there is an IPF $(a,b)$ such that $x,y$ are the outcomes of $a,b$, respectively, as parking functions. Reachability is reflexive and antisymmetric, but not in general transitive. We prove that its transitive closure, the \emph{pseudoreachability order}, is precisely the bubble-sort order on the symmetric group $\Sym_n$, which can be expressed in terms of the normal form of a permutation in the sense of du~Cloux; in particular, it is isomorphic to the product of chains of lengths $2,\dots,n$. It is thus seen to be a special case of Armstrong's sorting order, which lies between the Bruhat and (left) weak orders. | \section{Introduction}
We begin by briefly recalling the theory of parking functions, introduced in various contexts in~\cite{KW,Pyke,Riordan}; see \cite{Yan} for a comprehensive survey. Consider a parking lot with $n$ parking spots placed sequentially along a one-way street. A line of $n$ cars enters the lot, one by one. The $i^{th}$ car drives to its preferred spot $a(i)$ and parks there if possible; if the spot is already occupied then the car parks in the first available spot. The list of preferences $a=(a(1),\dots,a(n))$ is called a \defterm{parking function} if all cars successfully park; in this case the \defterm{outcome} is the permutation $\outcome(a)=w=(w(1),\dots,w(n))$, where the $i^{th}$ car parks in spot $w(i)$. It is well known that the number of parking functions for $n$ cars is $(n+1)^{n-1}$. Parking functions are an established area of research in combinatorics, with connections to labeled trees, non-crossing partitions, the Shi arrangement, symmetric functions, and other topics.
In this paper, we study a generalization of parking functions in which the $i^{th}$ car is willing to park only in an interval $[a(i),b(i)]\subseteq\{1,\dots,n\}$. If all cars can successfully park then we say that the pair $(a,b)=( (a(1),\dots,a(n)), (b(1),\dots,b(n)) )$ is an \defterm{interval parking function}, or IPF. (If $b(i)=n$ for all $i$, then we recover the classical case described above.) It is easy to show that there are $n!(n+1)^{n-1}$ IPFs for $n$ cars, and that if $(a,b)$ is an IPF then the sequences $a$ and $b^*=(n+1-b(n),\dots,n+1-b(1))$ must both be parking functions, raising the question of the relationship between the permutations $\outcome(a)$ and $\outcome(b^*)$.
We say that a pair of permutations $(x,y)\in\mathfrak{S}_n\times\mathfrak{S}_n$ is \defterm{reachable}, written $x\unrhd_R y$, if there exists an IPF $(a,b)$ such that $x=\outcome(a)$ and $y^*=\outcome(b^*)$. Reachability is \emph{not} a partial order on $\mathfrak{S}_n$ because it is not transitive; however, its transitive closure is a partial order, which we call \defterm{pseudoreachability}. The main result of this paper is that pseudoreachability order on~$\mathfrak{S}_n$ is precisely the \emph{bubble-sorting order} on $\mathfrak{S}_n$ (see \cite[Example 3.4.3]{BB}), which in turn is an instance of the more general \defterm{sorting order} defined by Armstrong~\cite{Armstrong} for Coxeter systems. In particular, pseudoreachability lies between Bruhat and (left) weak order in $\mathfrak{S}_n$, and it is a self-dual distributive lattice, poset-isomorphic to the product $C_2\times\cdots\times C_n$, where $C_i$ denotes the chain with $i$ elements.
The proof proceeds as follows. The first significant result, Theorem~\ref{thm:bruhat}, states that $(x,y)$ is reachable only if $x\geq_By$, where $\geq_B$ denotes Bruhat order. By counting the fibers of the map $(a,b)\mapsto(x,y)$, we establish Theorem~\ref{thm:RC}, the Reachability Criterion, which is a key technical tool in what follows. Using this criterion, we show in \S\ref{sec:pseudoreach-order} that pseudoreachability is no weaker than left weak order, and use this result to show that it is graded by length, just like the Bruhat and weak orders. This grading is key for the proof in \S\ref{sorting} that pseudoreachability coincides with the bubble-sorting order.
Initially, we had hoped to characterize reachability of a pair $(x,y)$ in terms of pattern-avoidance conditions on $x$ and $y$. This does not appear to be possible in general, but Section~\ref{sec:avoid} contains partial results in this direction: Theorems~\ref{thm:213-avoiding} and~\ref{thm:x} give sufficient conditions for a pair $(x,y)$ to be reachable, provided that $x\geq_By$.
The authors thank Margaret Bayer for proposing the study of interval parking functions to EC and JLM at the KU Combinatorics Seminar in the spring of 2019. We are grateful to the Graduate Research Workshop in Combinatorics (GRWC) for providing the platform for this collaboration in 2019, and in particular we acknowledge helpful discussions with GRWC participants Sean English and Sam Spiro. We thank Bridget Tenner for her observant comments and Richard Stanley for his communications and suggestions for several directions of future investigation.
\section{Preliminaries} \label{sec:notation}
Square brackets always denote integer intervals: For $m,n\in\mathbb{Z}$ we put $[m,n]=\{m,\,\dots,\,n\}$ and $[n]=[1,n]$.
Lists of positive integers (including permutations) will be regarded as functions: thus we will write $a=(a(1),\dots,a(n))$ rather than $a=(a_1,\dots,a_n)$.
Thus notation such as $x[a,b]$ means $\{x(a),x(a+1),\dots,x(b)\}$.
To simplify notation, we sometimes drop the parentheses and commas: e.g., $2431=(2,4,3,1)$.
Let $a=(a(1),\dots,a(n))$ and $b=(b(1),\dots,b(n))\in\mathbb{Z}^n$. We write $a\leq_Cb$ if $a(i)\leq b(i)$ for all $i\in[n]$; this is the \defterm{componentwise partial order} on $\mathbb{Z}^n$.
The \defterm{conjugate} (or reverse complement) of $x\in[n]^n$ is the vector $x^* = (n+1-x(n), \dots, n+1-x(1))$. Conjugation is an involution that reverses componentwise order.
If $\geq$ is a partial ordering on a set $S$, then $\gtrdot$ denotes the corresponding covering relation: $x\gtrdot y$ if $x>y$ and there exists no $z$ such that $x>z>y$. It is elementary that if $\geq_1$ is a partial order at least as strong as $\geq_2$ (i.e., $x\geq_2y$ implies $x\geq_1y$), then $x>_2y$ and $x\gtrdot_1y$ together imply $x\gtrdot_2y$.
The symmetric group of all permutations of $[n]$ is denoted by~$\mathfrak{S}_n$. We will as far as possible follow the notation and terminology for the symmetric group used in \cite{BB}. We set $e=(1,\dots,n)$ (the identity permutation) and $w_0=(n,n-1,\dots,1)$.
The permutation transposing $i$ and $j$ and fixing all other values is denoted $t_{ij}$, and we set $s_i=t_{i,i+1}$; the elements $s_1,\dots,s_{n-1}$ are the \defterm{standard generators}. Our convention for multiplication is right to left, which is consistent with treating permutations as bijective functions $[n]\to[n]$. Thus $t_{ij}x$ is obtained by transposing the \textit{digits} $i,j$ wherever they appear in $x$, while $x t_{ij}$ is obtained by transposing the digits in the $i^{th}$ and $j^{th}$ \textit{positions}.
We list some standard facts from the theory of $\mathfrak{S}_n$ as a Coxeter system of type~A, with generators $S=\{s_1,\dots,s_{n-1}\}$; see \cite{BB} for details. The \defterm{length} $\ell(x)$ of $x\in\mathfrak{S}_n$ is the smallest number $k$ such that $x$ can be written as a product $s_{i_1}\cdots s_{i_k}$ of standard generators; in this case $s_{i_1}\cdots s_{i_k}$ is called a \defterm{reduced word} for $x$. It is a standard fact that length equals number of inversions:
\begin{equation} \label{length-inv}
\ell(x)=\#\{(i,j):\ 1\leq i<j\leq n,\ x(i)>x(j)\}.
\end{equation}
The \defterm{Bruhat order} is the partial order $>_B$ on $\mathfrak{S}_n$ defined as the transitive closure of the relations $x>t_{ij}x$ whenever $\ell(x)>\ell(t_{ij}x)$. (Multiplying $x$ by $t_{ij}$ on the right rather than the left produces the same order, because $x t_{ij} x^{-1}$ is a transposition and $x t_{ij}=(x t_{ij} x^{-1})x$.)
The \defterm{(left) weak order} $>_W$ is the transitive closure of the relations $x>s_ix$ whenever $s$ is a standard generator and $\ell(x)>\ell(sx)$. Both of these orders make $\mathfrak{S}_n$ into a graded poset with bottom element $e$ and top element $w_0$.
\section{Parking functions and interval parking functions} \label{sec:intro}
We begin by recalling the theory of parking functions, introduced in various contexts in~\cite{KW,Pyke,Riordan}; see \cite{Yan} for a comprehensive survey.
Let $a=(a(1),\,\dots,\,a(n))\in[n]^n$. Consider a parking lot with $n$ parking spaces placed sequentially along a one-way street. Cars 1,\,\dots,\,$n$ enter the lot in order and try to park.
{\bf Algorithm~A:} The $i^{th}$ car parks in the first available space in the range $[a(i),n]$. If no space in the range $[a(i),n]$ is available, the algorithm fails.
If Algorithm~A succeeds in parking every car, then the preference vector $a$ is called a \defterm{parking function}. The set of all parking functions $a=(a(1),\,\dots,\,a(n))$ is denoted $\PF_{n}$. It is well known that $|\PF_n|=(n+1)^{n-1}$ and that
\[\PF_n = \{a\in[n]^n:\ \tilde a(i)\leq i\ \ \forall i\}\]
where $\tilde{a}$ is the unique non-decreasing rearrangement of $a$; in particular, every rearrangement of a parking function is a parking function.
The \defterm{outcome} of a parking function $a\in\PF_n$ is the permutation $x=\outcome(a)=(x(1),\,\dots,\,x(n))$, where $x(i)$ is the spot in which car $i$ parks given the preference list $a$.
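Algorithm~A is simple enough to simulate directly. The following Python sketch (ours, for illustration; not part of the paper) implements it and checks both $|\PF_3|=(3+1)^{2}=16$ and the rearrangement characterization above.

```python
from itertools import product

def outcome(a):
    """Algorithm A: car i parks in the first free spot in [a(i), n].
    Returns the outcome permutation, or None if some car fails to park."""
    n = len(a)
    taken = [False] * (n + 2)   # taken[s] is True iff spot s is occupied
    out = []
    for pref in a:
        s = pref
        while s <= n and taken[s]:
            s += 1
        if s > n:
            return None         # a is not a parking function
        taken[s] = True
        out.append(s)
    return tuple(out)

n = 3
pf = [a for a in product(range(1, n + 1), repeat=n) if outcome(a) is not None]
print(len(pf))   # |PF_3| = (3+1)^(3-1) = 16

# a is a parking function iff its nondecreasing rearrangement satisfies ã(i) <= i
for a in product(range(1, n + 1), repeat=n):
    assert (outcome(a) is not None) == all(v <= i for i, v in enumerate(sorted(a), start=1))
```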
We now modify Algorithm~A to obtain our central object of study.
{\bf Algorithm~B:} Let $a,b\in[n]^n$ with $a\leq_Cb$. The $i^{th}$ car parks in the first available space in the range $[a(i),b(i)]$. If no space in the range $[a(i),b(i)]$ is available, the algorithm fails.
\begin{definition}
If Algorithm~B succeeds in parking every car, then $\mathbf{c}=(a,b)$ is called an \defterm{interval parking function}, or IPF. The set of all interval parking functions for $n$ cars is denoted $\IPF_n$. The \defterm{feasible interval} for the $i^{th}$ car is $[a(i),b(i)]$.
\end{definition}
For example,
\[\IPF_2 = \{(11,12),\ (11,22),\ (12,12),\ (12,22),\ (21,21),\ (21,22)\}.\]
Unlike ordinary parking functions, IPFs are \emph{not} invariant under the action of $\mathfrak{S}_2$ by permuting cars. For example, $(11,12)$ is an IPF but $(11,21)$ is not.
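These small counts can be confirmed by exhaustive search. The sketch below (illustrative; the function names are ours) implements Algorithm~B and re-creates the list $\IPF_2$ above, along with $|\IPF_3|=3!\cdot4^2=96$.

```python
from itertools import product

def interval_outcome(a, b):
    """Algorithm B: car i parks in the first free spot in [a(i), b(i)].
    Returns the outcome permutation, or None if some car fails to park."""
    n = len(a)
    taken = [False] * (n + 2)
    out = []
    for lo, hi in zip(a, b):
        s = lo
        while s <= hi and taken[s]:
            s += 1
        if s > hi:
            return None
        taken[s] = True
        out.append(s)
    return tuple(out)

def ipfs(n):
    """All interval parking functions (a, b) with a <=_C b for n cars."""
    spots = range(1, n + 1)
    return [(a, b)
            for a in product(spots, repeat=n)
            for b in product(spots, repeat=n)
            if all(x <= y for x, y in zip(a, b))
            and interval_outcome(a, b) is not None]

print(ipfs(2))       # the six elements of IPF_2 listed above
print(len(ipfs(3)))  # 3! * (3+1)^2 = 96
```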
\begin{prop} \label{IPF-characterization} Let $a,b\in[n]^n$. Then:
\begin{enumerate}
\item $a\in\PF_n$ if and only if $(a,(n,\dots,n))\in\IPF_n$.
\item $(a,b)\in\IPF_n$ if and only if $a\in\PF_n$ and $\outcome(a) \le_C b$.
\end{enumerate}
\end{prop}
\begin{proof}
For (1), if $b(i)=n$ for all $i$ then Algorithm~B is identical to Algorithm~A.
For (2), if the given conditions hold, then the execution of Algorithm~B mimics that of Algorithm~A. On the other hand, if $a$ is not a parking function, then some car will not find a spot, while if $\outcome(a)\not\leq_Cb$ then some car will not find a spot in its own feasible interval.
\end{proof}
As a consequence of the proof of (2), the outcome $\outcome(\mathbf{c})$ of $\mathbf{c}=(a,b)$ is just $\outcome(a)$. Moreover, for every $a\in\PF_n$, there are precisely $n!$ choices for $b$ such that $(a,b)\in\IPF_n$. (This fact was first observed by Sean English.) In particular,
\begin{equation} \label{count-IPF}
\left|\IPF_{n}\right| = n!(n+1)^{n-1}.
\end{equation}
\begin{prop} \label{lots-of-facts}
Let $\mathbf{c} = (a,b)\in\IPF_n$. Then:
\begin{enumerate}
\item $b^*\in\PF_n$.
\item $a \le_C \outcome(\mathbf{c}) \le_C b$ and $\outcome(b^*)^* \le_C b$.
\end{enumerate}
\end{prop}
\begin{proof}
\noindent
\begin{enumerate}
\item From $\outcome(\mathbf{c}) \leq_C b$, one has $b^* \leq_C \outcome(\mathbf{c})^*$, and the latter is a permutation. Any element of $[n]^n$ that is componentwise bounded above by a permutation is a parking function (its nondecreasing rearrangement $\tilde a$ satisfies $\tilde a(i)\leq i$), so $b^*$ is a parking function.
\item Evidently $a \le_C \outcome(\mathbf{c}) \le_C b$. By (1), $b^*$ is a parking function. Thus $b^* \le_C \outcome(b^*)$. Conjugation reverses the order $\leq_C$ and is an involution, so $\outcome(b^*)^* \le_C (b^*)^* = b$.
\qedhere
\end{enumerate}
\end{proof}
\section{The Bruhat property} \label{sec:Bruhat}
In this section, we prove another property of interval parking functions related to Bruhat order on permutations. We use the following characterization of Bruhat order~\cite[Thm.~2.1.5, p.32]{BB}: $y\leq_Bx$ if and only if
\begin{equation} \label{bruhat-criterion}
y\langle i,j\rangle\leq x\langle i,j\rangle \qquad \forall i,j\in[n]
\end{equation}
where
\begin{equation} \label{angle-brackets}
u\langle i,j\rangle = \#\{k\in[i]:\ u(k)\geq j\}.
\end{equation}
(This quantity is notated $u[i,j]$ in \cite{BB}, but we reserve that notation for the image of an interval under a permutation.)
For later use, we observe that by pigeonhole, it is always the case that
\begin{equation} \label{bracket-ineq}
x\langle i,j\rangle\geq i-j+1.
\end{equation}
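The criterion \eqref{bruhat-criterion} translates directly into code; the following sketch (ours, for illustration) compares permutations in Bruhat order this way.

```python
from itertools import permutations

def bracket(u, i, j):
    """u<i,j> = #{k in [i] : u(k) >= j}  (u is a 0-indexed tuple; i, j are 1-based)."""
    return sum(1 for k in range(i) if u[k] >= j)

def bruhat_leq(y, x):
    """y <=_B x, via the criterion y<i,j> <= x<i,j> for all i, j."""
    n = len(x)
    return all(bracket(y, i, j) <= bracket(x, i, j)
               for i in range(1, n + 1) for j in range(1, n + 1))

# 321 is the top element of S_3, while 231 and 312 are Bruhat-incomparable.
print(all(bruhat_leq(w, (3, 2, 1)) for w in permutations((1, 2, 3))))
print(bruhat_leq((2, 3, 1), (3, 1, 2)), bruhat_leq((3, 1, 2), (2, 3, 1)))

# sanity check of the pigeonhole bound x<i,j> >= i - j + 1
assert all(bracket(w, i, j) >= i - j + 1
           for w in permutations((1, 2, 3, 4))
           for i in range(1, 5) for j in range(1, 5))
```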
Suppose that $\mathbf{c}=(a,b)$ is an IPF, and let $x=\outcome(a)$ and $y=\outcome(b^*)^*$. Then $x\langle i,j\rangle$ is the number of cars $1,\,\dots,\,i$ that park at or after spot $j$ under the parking function $a$.
\begin{theorem} \label{thm:bruhat}
Suppose that $\mathbf{c} = (a,b)$ is an IPF. Let $x=\outcome(a)$ and $y=\outcome(b^*)^*$. Then $x\geq_By$.
\end{theorem}
\begin{proof}
First, we may assume without loss of generality that $x=a$, because replacing $a$ with $x$ doesn't change the execution of Algorithm~B (the $i^{th}$ car will have to drive to spot $x(i)$ anyway, and it is able to park there because $\mathbf{c}$ is an IPF).
Fix $i,j\in[n]$, and let $p=x\langle i,j\rangle$ and $q=y\langle i,j\rangle$. By~\eqref{bruhat-criterion} we wish to show that $p\geq q$. By definition of $y\langle i,j\rangle$ we have
\begin{equation} \label{bruhat:1}
\Big|y[1,i]\cap[j,n]\Big|=q
\end{equation}
or equivalently
\begin{equation} \label{bruhat:2}
\Big|y^*[n-i+1,n]\cap[1,n+1-j]\Big|=q.
\end{equation}
Therefore, when Algorithm~A is run on the parking function $b^*$ with outcome $y^*$, the first $n-i$ cars must leave open at least $q$ spaces in the range $[1,n+1-j]$, so they cannot fill as many as $(n+1-j)-q+1=n-j-q+2$ of them. Therefore, $b^*[1,n-i]$ can contain no subset
$\{v(1),\,\dots,\,v(n-j-q+2)\}$ such that
\[(v(1),\,\dots,\,v(n-j-q+2))\leq_C (q,\,\dots,\,n+1-j).\]
Equivalently, $\{b(i+1),\,\dots,\,b(n)\}$ can contain no subset
$\{v(1),\,\dots,\,v(n-j-q+2)\}$ such that
\[(v(1),\,\dots,\,v(n-j-q+2))\geq_C (j,\,\dots,\,n-q+1).\]
It follows that when Algorithm~B is run on $\mathbf{c}$, no more than $n-j-q+1$ of the last $n-i$ cars will park in the spots $[j,n]$. On the other hand, since $x=\outcome(\mathbf{c})$, no more than $p=x\langle i,j\rangle$ of the first $i$ cars can park in the spots $[j,n]$. Therefore, the total number of cars that park in $[j,n]$ is at most
\[(n+1-j-q)+p = |[j,n]|+(p-q).\]
On the other hand, exactly $|[j,n]|$ cars park in $[j,n]$. It follows that $p\geq q$, as desired.
\end{proof}
Theorem~\ref{thm:bruhat} asserts that there is a well-defined \defterm{bioutcome} function
\begin{equation} \label{define-bioutcome}
\begin{array}{llll}
\bioutcome:&\IPF_n&\to&\{(x,y)\in\mathfrak{S}_n\times\mathfrak{S}_n:\ x\geq_By\}\\
&(a,b)&\mapsto&(\outcome(a),\outcome(b^*)^*).
\end{array}
\end{equation}
We say that a pair $(x,y)\in\mathfrak{S}_n\times\mathfrak{S}_n$ is \defterm{reachable} if it is in the image of $\bioutcome$; in this case we write $x\unrhd_R y$. (We use this notation rather than $x\geq_R y$ because reachability is not a partial order on $\mathfrak{S}_n$, as we will discuss shortly.)
Then Theorem~\ref{thm:bruhat} asserts that all reachable pairs are related in Bruhat order.
\begin{remark} \label{unreachable}
If $a$ and $b^*$ are parking functions such that $\outcome(a)\geq_B\outcome(b^*)^*$, it does \emph{not} follow that $\mathbf{c}=(a,b)$ is an IPF. For example, if $a=w_0$ and $b$ is a permutation, then certainly $a=\outcome(a)\geq_B\outcome(b^*)^*=b$, but $(a,b)$ is an IPF only if $b=w_0$ as well.
Moreover, if $x,y\in\mathfrak{S}_n$ with $x\geq_By$, there does not necessarily exist any IPF $\mathbf{c}=(a,b)$ such that $\bioutcome(\mathbf{c})=(x,y)$. For example, when $n=3$, take $(x,y)=(321,213)$, so that $y^*=132$. Then $a=321$ is the only parking function with $\outcome(a)=x$. By Prop.~\ref{lots-of-facts}(2) we must have $b\geq_Ca$, so $b\in\{321,331, 322, 332, 323,333\}$ and $b^*\in\{321,311,221,211,121,111\}$. But none of these parking functions have outcome $y^*=132$.
\end{remark}
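The claims of this remark can be verified by brute force for $n=3$; the following sketch (ours, for illustration) enumerates all bioutcomes using Prop.~\ref{IPF-characterization}(2).

```python
from itertools import permutations, product

def outcome(a):
    """Algorithm A (cars park at or after their preference)."""
    n = len(a)
    taken = [False] * (n + 2)
    out = []
    for p in a:
        s = p
        while s <= n and taken[s]:
            s += 1
        if s > n:
            return None
        taken[s] = True
        out.append(s)
    return tuple(out)

def conj(v):
    """Conjugate (reverse complement): v* = (n+1-v(n), ..., n+1-v(1))."""
    n = len(v)
    return tuple(n + 1 - t for t in reversed(v))

def reachable_pairs(n):
    """All bioutcomes (x, y) = (outcome(a), outcome(b*)*) over IPF_n.
    (a, b) is an IPF iff a is a parking function and outcome(a) <=_C b."""
    pairs = set()
    for a in product(range(1, n + 1), repeat=n):
        x = outcome(a)
        if x is None:
            continue
        for b in product(*[range(x[i], n + 1) for i in range(n)]):
            pairs.add((x, conj(outcome(conj(b)))))
    return pairs

R = reachable_pairs(3)
print(((3, 2, 1), (2, 1, 3)) in R)   # False, as in the remark
print(((3, 2, 1), (3, 1, 2)) in R)   # True
print(((3, 1, 2), (2, 1, 3)) in R)   # True, so the relation is not transitive
```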
The relation of reachability is reflexive (because $\bioutcome(x,x)=(x,x)$ for all $x\in\mathfrak{S}_n$) and antisymmetric (as a consequence of Theorem~\ref{thm:bruhat}). However, it is not transitive: for example,
$321\mkern-1mu\not\mathrel{\mkern1mu\unrhd_R}\mkern1mu 213$, as just shown, but $(321,312)=\bioutcome(321,322)$ and $(312,213)=\bioutcome(312,313)$ are reachable. This observation motivates the following definition.
\begin{definition} \label{def-pseu}
We say that $(x,y)$ is \defterm{pseudoreachable}, written $x\geq_P y$, if there is a sequence $x=x_0\unrhd_R x_1\unrhd_R\cdots\unrhd_R x_k=y$. That is, pseudoreachability is the transitive closure of reachability. As such, it is a partial order on $\mathfrak{S}_n$, which by Theorem~\ref{thm:bruhat} is no stronger than Bruhat order.
\end{definition}
For reference, we summarize the various order-like relations that we will consider.
\medskip
\begin{center} {\renewcommand\arraystretch{1.2}
\begin{tabular}{lll} \hline
$a\geq_C b$ & Componentwise order & on $\mathbb{Z}^n$\\
$x\geq_B y$ & Bruhat order & \rdelim\}{4}{1em}[\ on $\mathfrak{S}_n$] \\
$x\geq_W y$ & Left weak order\\
$x\unrhd_R y$ & Reachability (not transitive)\\
$x\geq_P y$ & Pseudoreachability\\ \hline
\end{tabular} }
\end{center}
\medskip
\section{Reachability via counting fibers of the bioutcome map} \label{sec:reachable}
Fix a pair of permutations $(x,y)\in\mathfrak{S}_n\times\mathfrak{S}_n$. How can we determine if $(x,y)$ is reachable? More generally, what is the number $\phi(x,y)=|\bioutcome^{-1}(x,y)|$ of IPFs $(a,b)$ with bioutcome~$(x,y)$?
We can answer this enumerative question quickly, although the resulting formula is recursive and somewhat opaque. First, for each $i$, the number of possibilities $\mathsf{c}_i=\mathsf{c}_i(x,y)$ for $a(i)$ is the size of the largest block of spaces ending in $x(i)$ that are all occupied by one of the first $i$ cars.
That is,
\[\mathsf{c}_i=\mathsf{c}_i(x,y) = \max\left\{ j \in [1, x(i)]: x^{-1}(x(i)-k) \le i \text{ for all } 0 \le k \le j-1 \right\}.\]
Second, given $a(1),\dots,a(i)$, the number of possibilities for $b(i)$ is $\mathsf{d}_i=\mathsf{d}_i(x,y) = \#\mathsf{D}_i(x,y)$, where
\[\mathsf{D}_i(x,y)=\{k\in[0,J_i-1]:\ y(i) + k \ge x(i)\}\]
and
\[J_i =\max\{j\in[1,n+1-y(i)]:\ y^{-1}(y(i)+s) \ge i \text{ for all }0 \le s \le j-1\}.\]
The definition of $J_i$ is analogous to that of $\mathsf{c}_i$: it is the size of the largest block of spaces ending in $n+1-y(i)$ that are all occupied by one of the first $n+1-i$ cars, so it is the number of possible values for $b^*(n+1-i)$ under which $\outcome(b^*)=y^*$.
The additional condition $y(i)+k\geq x(i)$ in the definition of $\mathsf{D}_i$ ensures that $(a,b)$ is an IPF because the upper bound on $x(i)$ given by $b(i)$ does not conflict with where the $i^{th}$ car parks under Algorithm~B.
The sequences $\mathsf{c}=(\mathsf{c}_1,\dots,\mathsf{c}_n)$ and $\mathsf{d}=(\mathsf{d}_1,\dots,\mathsf{d}_n)$
then determine the size of the fibers of $\bioutcome$:
\begin{equation}
\phi(x,y)=\left|\bioutcome^{-1}(x,y)\right| = \prod_{i=1}^{n} \mathsf{c}_i\mathsf{d}_i.
\end{equation}
\begin{example}
Let $x = 361245$ and $y = 341256$. Then $\mathsf{c}=(1,1,1,2,4,5)$ and $\mathsf{d}=(4,1,2,1,2,1)$, so there are $2^3\cdot 4^2\cdot 5=640$ IPFs with bioutcome $(x,y)$.
\end{example}
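The quantities $\mathsf{c}_i$ and $\mathsf{d}_i$ are straightforward to transcribe into code; the sketch below (ours, for illustration) reproduces the sequences of this example.

```python
def c_seq(x):
    """c_i = size of the largest block of spots ending at x(i) that are all
    occupied by one of the first i cars (x is the outcome permutation)."""
    pos = {spot: car for car, spot in enumerate(x, start=1)}
    c = []
    for i, xi in enumerate(x, start=1):
        j = 0
        while xi - j >= 1 and pos[xi - j] <= i:
            j += 1
        c.append(j)
    return c

def d_seq(x, y):
    """d_i = #{k in [0, J_i - 1] : y(i) + k >= x(i)}, with J_i as in the text."""
    n = len(x)
    posy = {spot: car for car, spot in enumerate(y, start=1)}
    d = []
    for i in range(1, n + 1):
        yi, xi = y[i - 1], x[i - 1]
        J = 0
        while yi + J <= n and posy[yi + J] >= i:
            J += 1
        d.append(sum(1 for k in range(J) if yi + k >= xi))
    return d

x, y = (3, 6, 1, 2, 4, 5), (3, 4, 1, 2, 5, 6)
print(c_seq(x))      # [1, 1, 1, 2, 4, 5]
print(d_seq(x, y))   # [4, 1, 2, 1, 2, 1]
```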
It is clear from the definition that $1\leq\mathsf{c}_i\leq i$ for all $i$. On the other hand, one or more $\mathsf{d}_i$ may be zero. The pair $(x,y)$ is reachable if and only if $\mathsf{d}_i>0$ for all $i$; we refer to this as the \textbf{Count Criterion} for reachability.
Evidently, the largest fiber occurs when $x$ and $y$ both equal the identity permutation in $\mathfrak{S}_n$. In this case $\mathsf{c}=(1,2,\dots,n)$ and $\mathsf{d}=(n,n-1,\dots,1)$, and the fiber size is $(n!)^2$. At the opposite end of the spectrum, if $x=y=(n,\dots,1)$, then $\phi(x,y) = 1$.
A better way to think about reachability, if one cares only whether $(x,y)$ is reachable and not how many IPFs achieve it, is the following criterion, phrased directly in terms of the permutations $x$ and $y$.
\begin{theorem}[\textbf{Reachability Criterion}]\label{thm:RC}
Let $x,y\in\mathfrak{S}_n$. Then
\begin{equation}\label{RC}
x\unrhd_R y \quad\iff\quad [y(i),x(i)] \subseteq y[i,n] \quad \forall\, i \in [n].\tag{\textbf{RC}}
\end{equation}
\end{theorem}
\begin{proof}
Let $i\in[n]$. We will show that $\mathsf{d}_i(x,y)>0$ if and only if $[y(i),x(i)] \subseteq y[i,n]$.
Suppose that $[y(i),x(i)] \setminus y[i,n]\neq \emptyset$. That is, there is some $m\in[y(i),x(i)]$ such that $y^{-1}(m)<i$. Thus $J_i \leq m-y(i)$, so $y(i)+k<m\leq x(i)$ for all $k<J_i$, so $\mathsf{d}_i(x,y)=0$.
Now assume that $[y(i),x(i)] \subseteq y[i,n]$. We wish to show that $\mathsf{D}_i\neq\emptyset$. If $y(i)\geq x(i)$, then $0\in\mathsf{D}_i$. On the other hand, if $y(i)<x(i)$, then $m=x(i)-y(i)>0$, and for all $0 \leq k \leq m$ we have $y^{-1}(y(i)+k) \ge i$. Therefore $J_i > m$ and $m\in\mathsf{D}_i$.
\end{proof}
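The equivalence proved above between the Count Criterion ($\mathsf{d}_i>0$ for all $i$) and \eqref{RC} can be double-checked exhaustively for small $n$; the sketch below (ours, for illustration) does so over all pairs in $\mathfrak{S}_4$.

```python
from itertools import permutations

def rc(x, y):
    """The Reachability Criterion: [y(i), x(i)] is a subset of y[i, n] for all i."""
    n = len(x)
    return all(set(range(y[i], x[i] + 1)) <= set(y[i:]) for i in range(n))

def d_positive(x, y):
    """True iff d_i(x, y) > 0 for all i (the Count Criterion)."""
    n = len(x)
    posy = {spot: car for car, spot in enumerate(y, start=1)}
    for i in range(1, n + 1):
        yi, xi = y[i - 1], x[i - 1]
        J = 0
        while yi + J <= n and posy[yi + J] >= i:
            J += 1
        if not any(yi + k >= xi for k in range(J)):
            return False
    return True

S4 = list(permutations((1, 2, 3, 4)))
print(all(rc(x, y) == d_positive(x, y) for x in S4 for y in S4))
print(rc((3, 2, 1), (2, 1, 3)))   # False: (321, 213) fails RC
```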
It is worth emphasizing that the Reachability Criterion is sufficient, but not necessary, for showing that $x\geq_Py$. For example, the pair $(x,y)=(321,213)$ fails~\eqref{RC} for $i=2$, but nonetheless $x\geq_Py$.
\begin{prop} \label{cf-df-facts}
The sequence $\mathsf{d}(x,y)$ has the following properties.
\begin{enumerate}[label=(\alph{enumi})]
\item\label{dfone} $\mathsf{d}_1\geq1$.
\item\label{dfi} For each $i$, if $y(i)\geq x(i)$, then $\mathsf{d}_i\geq1$.
\item\label{dfn} If $x\geq_By$, then $\mathsf{d}_n=1$.
\end{enumerate}
\end{prop}
\begin{proof} The first two assertions are direct consequences of~\eqref{RC}. For~\ref{dfone}, we have $[y(1),x(1)] \subseteq [n] = y[1,n]$, and for~\ref{dfi}, if $y(i)\geq x(i)$ then $[y(i),x(i)]\subseteq\{y(i)\}\subseteq y[i,n]$.
For~\ref{dfn}, if $y\leq_B x$, then $y(n) \ge x(n)$ (a consequence of the inequalities~\eqref{bruhat-criterion} for $i=n-1$ and all $j$), so $\mathsf{d}_n>0$ by part~\ref{dfi}. Observe that
\[J_n =\max\{j:\ y(n)+k\leq n \text{ and } y^{-1}(y(n)+k) \ge n \text{ for all }0 \le k \le j-1\} = 1\]
because the conditions are true for $k=0$ but false for $k>0$. Therefore, $\mathsf{D}_n=\{k\in[0,0]:\ y(n)\geq x(n)\}=\{0\}$ and $\mathsf{d}_n=\#\mathsf{D}_n=1$.
\end{proof}
\section{Pseudoreachability order is graded} \label{sec:pseudoreach-order}
In this section, we prove that the pseudoreachability order $\geq_P$ on $\mathfrak{S}_n$ is graded by length, just like the Bruhat and weak orders.
Temporarily, we will use the notation $x{\,\mathrlap{\gtrdot}\rhd}_R\, y$ to mean that $x\unrhd_R y$ and $\ell(x) = \ell(y) + 1$. Note that if $x{\,\mathrlap{\gtrdot}\rhd}_R\, y$ then $x\gtrdot_Py$ (because $x\gtrdot_By$). Our goal is to prove the converse of the last statement, which will imply that pseudoreachability is graded by length.
We have already shown that pseudoreachability order is no stronger than Bruhat order $\geq_B$. We next show that it is no weaker than left weak order $\geq_W$.
\begin{prop} \label{lem:inv-1}
If $x\gtrdot_Wy$, then $x{\,\mathrlap{\gtrdot}\rhd}_R\, y$.
\end{prop}
\begin{proof}
Suppose that $x\gtrdot_Wy$, i.e., that $x=s_ay$, where $j=y^{-1}(a)<y^{-1}(a+1)=k$. Then Prop.~\ref{cf-df-facts}\ref{dfi} implies that $\mathsf{d}_i(x,y)>0$ for all $i\in[n]\setminus\{j\}$. Meanwhile $[y(j),x(j)]=\{a,a+1\}=\{y(j),y(k)\}\subseteq y[j,n]$, so~\eqref{RC} implies that $\mathsf{d}_j(x,y)>0$ as well.
\end{proof}
For each $x\in\mathfrak{S}_n$, let $\hat x$ be the permutation in $\mathfrak{S}_{n-1}$ defined by
\begin{equation} \label{hats-on}
\hat x(i) = \begin{cases} x(i) & \text{ if } x(i)<x(n),\\ x(i)-1 & \text{ if } x(i)>x(n).\end{cases}
\end{equation}
\begin{lemma} \label{lem:proj}
Let $x,y \in \mathfrak{S}_n$ with $x(n) = y(n)$.
Then $x\unrhd_R y$ if and only if $\hat x\unrhd_R\hat y$.
\end{lemma}
\begin{proof}
By~\eqref{RC}, the proof reduces to showing that
\begin{subequations}
\begin{equation} \label{reach-xy}
[y(i),x(i)] \subseteq y[i,n] \qquad \forall i\in[n]
\end{equation}
if and only if
\begin{equation} \label{reach-proj}
[\hat y(i),\hat x(i)] \subseteq \hat y[i,n] \qquad \forall i\in[n-1].
\end{equation}
\end{subequations}
($\implies$) Assume that~\eqref{reach-xy} holds. Let $i\in[n-1]$ and $a \in [\hat y(i),\hat x(i)]$. There are two cases to consider.
\textit{Case 1a}: $a < y(n)$. Then
$\hat y(i)\leq a<y(n)$, so $\hat y(i)=y(i)$ (since~\eqref{hats-on} implies that if $\hat y(i)=y(i)-1$ then $\hat y(i)\geq y(n)$).
Thus
\[ [\hat y(i),a] = [y(i),a] \subseteq [y(i),x(i)] \subseteq y[i,n] \]
because $a\leq\hat x(i)\leq x(i)$, and by~\eqref{reach-xy}.
Therefore $a = y(k)=\hat y(k)$ for some $k\in[i,n-1]$.
\textit{Case 1b}: $a \geq y(n)$. Then, since $\hat y(i) \geq y(i) - 1$ and $x(i) \geq \hat x(i) \geq y(n)$, $a \in [\hat y(i),\hat x(i)]$ implies that $a \in [y(i)-1,x(i)-1]$, i.e., $y(i) \leq a+1 \leq x(i)$.
By~\eqref{reach-xy} there is some $k\in[i,n]$ such that $a+1 = y(k)$. In fact $k\neq n$ (since $a+1>y(n)$), so $\hat y(k) = y(k) - 1 = a$ and so $a \in \hat y[i,n-1]$.
In both cases we have proved~\eqref{reach-proj}.
\medskip
($\impliedby$) Assume that~\eqref{reach-proj} holds. It is immediate that~\eqref{reach-xy} holds when $i=n$, so fix $i\in[n-1]$ and $a \in [y(i),x(i)]$. We wish to show that $a=y(k)$ for some $k\in[i,n]$. This is clear if $a=y(n)$, so assume $a\neq y(n)$.
\textit{Case 2a}: $a < y(n)$. Since $a\in[y(i),x(i)]$, either $a = x(i)$ or $a < x(i)$. If $a = x(i)$, then $a = x(i) = \hat x(i)$. If $a < x(i)$, then $a \leq \hat x(i)$ since $\hat x(i) \geq x(i) - 1$. In either case,
\[ [y(i),a] = [\hat y(i),a] \subseteq [\hat y(i),\hat x(i)] \subseteq \hat y[i,n-1]. \]
Thus $a=\hat y(k)=y(k)$ for some $k\in[i,n-1]$.
\textit{Case 2b}: $a > y(n)$. Since $a\in[y(i),x(i)]$, either $a = y(i)$ or $a > y(i)$. If $a = y(i)$, then $a - 1 = y(i) - 1 = \hat y(i)$ since $y(i) > y(n)$. If $a > y(i)$, then we know that $a-1 \geq \hat y(i)$ since $y(i) \geq \hat y(i)$. It follows that $a-1\in[\hat y(i),\hat x(i)]$, so, by~\eqref{reach-proj}, there is some $k\in[i,n-1]$ such that $a-1 = \hat y(k)\geq y(n)$. Therefore, $a = y(k)$.
In both cases we have proved~\eqref{reach-xy}.
\end{proof}
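By Theorem~\ref{thm:RC}, reachability can be tested via \eqref{RC}, so Lemma~\ref{lem:proj} admits an exhaustive check for small $n$; the sketch below (ours, for illustration) verifies it over all pairs in $\mathfrak{S}_4$ with equal last entries.

```python
from itertools import permutations

def rc(x, y):
    """The Reachability Criterion: [y(i), x(i)] is a subset of y[i, n] for all i."""
    n = len(x)
    return all(set(range(y[i], x[i] + 1)) <= set(y[i:]) for i in range(n))

def hat(x):
    """The projection S_n -> S_{n-1} of (hats-on): delete x(n) and close the gap."""
    xn = x[-1]
    return tuple(v if v < xn else v - 1 for v in x[:-1])

# Lemma: for x(n) = y(n), (x, y) satisfies RC iff (hat x, hat y) does.
S4 = list(permutations((1, 2, 3, 4)))
ok = all(rc(x, y) == rc(hat(x), hat(y))
         for x in S4 for y in S4 if x[-1] == y[-1])
print(ok)
```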
\begin{corollary} \label{cor:rcover}
Let $x,y \in \mathfrak{S}_n$ with $x(n) = y(n)$.
Then $x{\,\mathrlap{\gtrdot}\rhd}_R\, y$ if and only if $\hat x{\,\mathrlap{\gtrdot}\rhd}_R\,\hat y$.
\end{corollary}
\begin{proof}
The definition of $\hat x$ implies that
\begin{equation} \label{hat-inv}
\ell(\hat x) = \ell(x)-(n-x(n)),
\end{equation}
which together with Lemma~\ref{lem:proj} produces the desired result.
\end{proof}
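The length identity \eqref{hat-inv} follows from \eqref{length-inv}, since the $n-x(n)$ inversions involving position $n$ are exactly the ones removed by the projection; the sketch below (ours, for illustration) checks it over $\mathfrak{S}_4$.

```python
from itertools import permutations

def inversions(x):
    """Coxeter length = number of inversions, as in (length-inv)."""
    return sum(1 for i in range(len(x)) for j in range(i + 1, len(x)) if x[i] > x[j])

def hat(x):
    """Delete x(n) and close the gap, as in (hats-on)."""
    xn = x[-1]
    return tuple(v if v < xn else v - 1 for v in x[:-1])

n = 4
ok = all(inversions(hat(x)) == inversions(x) - (n - x[-1])
         for x in permutations(range(1, n + 1)))
print(ok)
```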
\begin{prop} \label{reachable-graded}
Let $x,y\in\mathfrak{S}_n$ such that $x\unrhd_R y$, and let $m=\ell(x)-\ell(y)$. Then there exists a chain
\begin{equation} \label{desired-chain}
x_0=y{\,\mathrlap{\lessdot}\lhd}_R\, x_1{\,\mathrlap{\lessdot}\lhd}_R\,\cdots{\,\mathrlap{\lessdot}\lhd}_R\, x_m=x.
\end{equation}
\end{prop}
\begin{proof}
The proof proceeds by double induction on $n$ and $m$. The conclusion is trivial when $n\leq2$ or $m\leq1$. Accordingly, let $n>2$ and $m>1$, and assume inductively that the theorem holds for all $(n',m')<_C(n,m)$.
First, suppose that $x(n) = y(n)$. Then $\hat{x}\unrhd_R\hat{y}$ by Lemma~\ref{lem:proj} where $\hat x,\hat y$ are defined by~\eqref{hats-on}. Moreover,
$\ell(\hat{x}) - \ell(\hat{y}) = \ell(x) - \ell(y) = m$ by~\eqref{hat-inv}. Therefore, by the induction hypothesis, there is a chain
$\hat{y} = \hat x_0 {\,\mathrlap{\lessdot}\lhd}_R\, \hat x_1 {\,\mathrlap{\lessdot}\lhd}_R\, \cdots {\,\mathrlap{\lessdot}\lhd}_R\, \hat x_m = \hat{x}$
in $\mathfrak{S}_{n-1}$,
which by Corollary~\ref{cor:rcover} can be lifted to a chain of the form~\eqref{desired-chain}.
Second, suppose that $x(n) \neq y(n)$. Since $x\geq_By$ by Theorem \ref{thm:bruhat}, in fact $x(n) < y(n)$ (as noted in the proof of Prop.~\ref{cf-df-facts}\ref{dfn}).
Let $p=y(n)-1$; then $p\in[1,n-1]$, so we may set $q=y^{-1}(p)$ and $z=s_py=y t_{q,n}$.
Then $z\gtrdot_Wy$ and so $z{\,\mathrlap{\gtrdot}\rhd}_R\, y$ by Prop.~\ref{lem:inv-1}. We will show that $x\unrhd_R z$ using~\eqref{RC}.
\textit{Case 1}: $1\leq i\leq q$. Then $[z(i),x(i)]\subseteq[y(i),x(i)]$ and $y[i,n]=z[i,n]$, so $\mathsf{d}_i(x,y)\geq1$ implies $\mathsf{d}_i(x,z)\geq1$.
\textit{Case 2}: $q<i<n$.
Then $p=y(q)\not\in y[i,n]$, so by~\eqref{RC} $p\not\in[y(i),x(i)]$. Thus $p+1\not\in[y(i)+1,x(i)+1]$,
and certainly $p+1=y(n)\neq y(i)$. Thus $[y(i),x(i)] \subseteq y[i,n]\setminus\{y(n)\}=y[i,n-1]$ and
\[[z(i),x(i)] = [y(i),x(i)] \subseteq y[i,n-1]=z[i,n-1]\subseteq z[i,n]\]
so again $\mathsf{d}_i(x,z)\geq1$.
\textit{Case 3}: $i=n$. Then $x(n) \leq y(n)-1 =z(n)$, so $\mathsf{d}_{n}(x,z) \geq 1$ by Prop.~\ref{cf-df-facts}\ref{dfi}.
Taken together, the three cases imply $x\unrhd_R z$. By induction there is a chain $x_1=z{\,\mathrlap{\lessdot}\lhd}_R\,\cdots{\,\mathrlap{\lessdot}\lhd}_R\, x_m=x$, and appending $x_0=y$ produces a chain of the form~\eqref{desired-chain}.
\end{proof}
\begin{theorem} \label{pseudoreachable-graded}
Pseudoreachability order is graded by length.
\end{theorem}
\begin{proof}
Since pseudoreachability is defined as the transitive closure of the reachability relation, every covering relation in a maximal chain $x_0<_P\cdots<_Px_m$ must already be a reachability relation: otherwise an intermediate $R$-chain could be inserted, contradicting maximality. Hence $x_{i-1}\unlhd_R x_i$ for all $i$. Maximality together with Prop.~\ref{reachable-graded} then implies that in fact $x_{i-1}{\,\mathrlap{\lessdot}\lhd}_R\, x_i$ for all $i$.
\end{proof}
For comparison, the Hasse diagrams of Bruhat, pseudoreachability, and left weak orders on $\mathfrak{S}_3$ are shown in Figure~\ref{fig:s3}, together with the
reachability relation (which is reflexive and antisymmetric, but not transitive). The three partial orders on $\mathfrak{S}_4$ are shown in Figure~\ref{fig:s4}.
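The reachability panel of Figure~\ref{fig:s3} can be reproduced mechanically. The short Python sketch below is purely illustrative; it assumes that the criterion~\eqref{RC} (stated earlier in the paper) takes the interval form used in the proofs above: $x\unrhd_R y$ if and only if $[y(i),x(i)]\subseteq y[i,n]$ for every $i$.

```python
from itertools import permutations

def reach(x, y):
    # Assumed interval form of (RC): x >=_R y iff, for every i,
    # the interval [y(i), x(i)] is contained in y[i, n] = {y(i), ..., y(n)}.
    return all(set(range(y[i], x[i] + 1)) <= set(y[i:]) for i in range(len(x)))

S3 = list(permutations(range(1, 4)))
edges = [(x, y) for x in S3 for y in S3 if x != y and reach(x, y)]
print(len(edges))  # 11, matching the 11 segments in the reachability panel

# Reachability fails to be transitive already on S_3:
a, b, c = (3, 2, 1), (3, 1, 2), (2, 1, 3)
print(reach(a, b), reach(b, c), reach(a, c))  # True True False
```

The last three values show $321\unrhd_R 312\unrhd_R 213$ but $321\mkern-1mu\not\mathrel{\mkern1mu\unrhd_R}\mkern1mu 213$, in agreement with the figure.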
\begin{figure}
\begin{center}
\begin{tikzpicture}
\newcommand{4.5}{4.5}
\begin{scope}[shift={(0,0)}]
\draw[black] (0,0)--(-1,1)--(-1,2)--(0,3)--(1,2)--(1,1)--cycle;
\draw[black] (-1,1)--(1,2) (-1,2)--(1,1);
\foreach \x/\y/\w in {0/0/123, -1/1/132, 1/1/213, -1/2/231, 1/2/312, 0/3/321} \node[fill=white] at (\x,\y) {\sf\w};
\node at (0,-.5) {Bruhat order $\geq_B$};
\end{scope}
\begin{scope}[shift={(4.5,0)}]
\draw[black] (0,0)--(-1,1)--(-1,2)--(0,3)--(1,2)--(1,1)--cycle;
\draw[black] (-1,1)--(1,2);
\foreach \x/\y/\w in {0/0/123, -1/1/132, 1/1/213, -1/2/231, 1/2/312, 0/3/321} \node[fill=white] at (\x,\y) {\sf\w};
\node at (0,-.5) {Pseudoreachability $\geq_P$};
\end{scope}
\begin{scope}[shift={(2*4.5,0)}]
\draw[black] (0,0)--(-1,1)--(-1,2)--(0,3)--(1,2)--(1,1)--cycle;
\foreach \x/\y/\w in {0/0/123, -1/1/132, 1/1/213, -1/2/231, 1/2/312, 0/3/321} \node[fill=white] at (\x,\y) {\sf\w};
\node at (0,-.5) {Left weak order $\geq_W$};
\end{scope}
\begin{scope}[shift={(3*4.5,0)}]
\foreach \p in {(-1,1), (1,1), (-1,2), (1,2), (0,3)} \draw[black] (0,0)--\p;
\foreach \p in {(-1,2), (1,2), (0,3)} \draw[black] (-1,1)--\p;
\draw[black] (1,1)--(1,2)--(0,3)--(-1,2);
\foreach \x/\y/\w in {0/0/123, -1/1/132, 1/1/213, -1/2/231, 1/2/312, 0/3/321} \node[fill=white] at (\x,\y) {\sf\w};
\node at (0,-.5) {Reachability $\unrhd_R$};
\end{scope}
\end{tikzpicture}
\caption{Bruhat, pseudoreachability, left weak order, and reachability on $\mathfrak{S}_3$\label{fig:s3}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=1.4]
\coordinate (p4321) at (0,6);
\coordinate (p3421) at (-1.5,5); \coordinate (p4231) at (0,5); \coordinate (p4312) at (1.5,5);
\coordinate (p2431) at (-3,4); \coordinate (p3241) at (-1.5,4); \coordinate (p3412) at (0,4); \coordinate (p4132) at (1.5,4); \coordinate (p4213) at (3,4);
\coordinate (p1432) at (-3.5,3); \coordinate (p2341) at (-2.25,3); \coordinate (p3142) at (-.75,3); \coordinate (p2413) at (.75,3); \coordinate (p3214) at (2.25,3); \coordinate (p4123) at (3.5,3);
\coordinate (p1342) at (-3,2); \coordinate (p1423) at (-1.5,2); \coordinate (p2143) at (0,2); \coordinate (p2314) at (1.5,2); \coordinate (p3124) at (3,2);
\coordinate (p1243) at (-1.5,1); \coordinate (p1324) at (0,1); \coordinate (p2134) at (1.5,1);
\coordinate (p1234) at (0,0);
\draw[very thick] (p1243)--(p1234) (p1324)--(p1234) (p1342)--(p1243) (p1423)--(p1324) (p1432)--(p1423) (p1432)--(p1342) (p2134)--(p1234) (p2143)--(p1243) (p2143)--(p2134) (p2314)--(p1324) (p2341)--(p1342) (p2413)--(p1423) (p2413)--(p2314) (p2431)--(p1432) (p2431)--(p2341) (p3124)--(p2134) (p3142)--(p2143) (p3214)--(p3124) (p3214)--(p2314) (p3241)--(p3142) (p3241)--(p2341) (p3412)--(p2413) (p3421)--(p3412) (p3421)--(p2431) (p4123)--(p3124) (p4132)--(p4123) (p4132)--(p3142) (p4213)--(p4123) (p4213)--(p3214) (p4231)--(p4132) (p4231)--(p3241) (p4312)--(p4213) (p4312)--(p3412) (p4321)--(p4312) (p4321)--(p4231) (p4321)--(p3421);
\draw[thick,blue] (p1243)--(p1423) (p1324)--(p3124) (p1342)--(p3142) (p1423)--(p4123) (p1432)--(p3412) (p1432)--(p4132) (p2143)--(p4123) (p2413)--(p4213) (p2431)--(p4231) (p4132)--(p4312);
\draw[red] (p1324)--(p1342) (p2134)--(p2314) (p2143)--(p2341) (p2143)--(p2413) (p2314)--(p2341) (p2413)--(p2431) (p3124)--(p3142) (p3142)--(p3412) (p3214)--(p3241) (p3214)--(p3412) (p3241)--(p3421) (p4213)--(p4231);
\foreach \name/\loc in {1234/p1234, 1243/p1243, 1324/p1324, 1342/p1342, 1423/p1423, 1432/p1432, 2134/p2134, 2143/p2143, 2314/p2314, 2341/p2341, 2413/p2413, 2431/p2431, 3124/p3124, 3142/p3142, 3214/p3214, 3241/p3241, 3412/p3412, 3421/p3421, 4123/p4123, 4132/p4132, 4213/p4213, 4231/p4231, 4312/p4312, 4321/p4321} \node at (\loc) [rectangle,draw,fill=white] {\sf\scriptsize\name};
\draw[very thick] (5,3.5)--(6,3.5); \node at (7,3.5) {$\geq_B,\ \geq_P,\ \geq_W$};
\draw[thick, blue] (5,3)--(6,3); \node at (6.7,3) {$\geq_B,\ \geq_P$};
\draw[red] (5,2.5)--(6,2.5); \node at (6.44,2.5) {$\geq_B$};
\end{tikzpicture}
\caption{Bruhat, pseudoreachability, and left weak order on $\mathfrak{S}_4$\label{fig:s4}}
\end{center}
\end{figure}
\section{Pseudoreachability order and bubble-sorting order}\label{sorting}
The theory of normal forms in a Coxeter system was introduced by du~Cloux~\cite{duCloux} and is described in~\cite[\S3.4]{BB}. We sketch here the facts we will need; see especially~\cite[Example 3.4.3]{BB}, which describes normal forms in the symmetric group in terms of bubble-sorting. Let $\sigma_k=s_1\cdots s_k$ and $\omega_n=\sigma_{n-1}\cdots\sigma_1$; then $\omega_n$ is a reduced word for $w_0\in\mathfrak{S}_n$. Every $x\in\mathfrak{S}_n$ has a unique \textbf{conormal form}: a reduced word $N(x)$ of the form $v_{n-1} v_{n-2} \cdots v_2 v_1$, where each $v_k=s_j s_{j+1}\cdots s_k$ is a (possibly empty) suffix of $\sigma_k$. The conormal form is the reverse of the lexicographically first reduced word for $x^{-1}$ (that is, of the normal form of $x^{-1}$, as described in~\cite{BB}). Thus $x$ is characterized by the sequence
\[\lambda(x)=(\lambda_{n-1}(x),\dots,\lambda_1(x))=(|v_{n-1}|,\dots,|v_1|)\in[0,n-1]\times[0,n-2]\times\cdots\times[0,1].\]
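In one-line notation, the recursion behind the conormal form is simple: $\lambda_{n-1}(x)=n-x(n)$, and one passes to $\bar x\in\mathfrak{S}_{n-1}$ by deleting the last entry of $x$ and decrementing every larger value. The Python sketch below is offered only as an illustration of this recursion, not as part of the development.

```python
from itertools import permutations

def conormal_lambda(x):
    # lambda(x) = (lambda_{n-1}(x), ..., lambda_1(x)):
    # peel off lambda_{n-1}(x) = n - x(n), then recurse on \bar x.
    x, out = list(x), []
    while len(x) > 1:
        j = x[-1]
        out.append(len(x) - j)                        # lambda_{n-1}(x) = n - x(n)
        x = [v if v < j else v - 1 for v in x[:-1]]   # pass to \bar x in S_{n-1}
    return tuple(out)

print(conormal_lambda((4, 3, 2, 1)))  # (3, 2, 1): the longest element
```

On $\mathfrak{S}_4$ this map is a bijection onto $[0,3]\times[0,2]\times[0,1]$, and $\sum_k\lambda_k(x)=\ell(x)$, since the conormal form is a reduced word.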
Armstrong~\cite{Armstrong} defined a general class of \emph{sorting orders} on a Coxeter system $(W,S)$: one fixes $w\in W$ and chooses a reduced word $\omega$ (the ``sorting word'') for $w\in W$, then partially orders all group elements expressible as a subword of~$\omega$ by inclusion between their lexicographically first such expressions. Armstrong proved that for every reduced word for the top element of a finite Coxeter group, the sorting order is a distributive lattice intermediate between the weak and Bruhat orders. In the case that $W=\mathfrak{S}_n$ and $\omega=\omega_n$, the sorting order is equivalent to comparing $\lambda(x)$ and $\lambda(y)$ componentwise, hence is isomorphic to $C_2\times\cdots\times C_n$, where $C_i$ denotes a chain with $i$ elements.
\begin{prop} \label{v-reduction}
Let $x,y\in\mathfrak{S}_n$ with $x(n)=y(n)=n$, and let $v=s_j s_{j+1} \cdots s_{n-1}$ be a suffix of $\sigma_{n-1}=s_1\cdots s_{n-1}$.
Then $x\unrhd_R y$ if and only if $vx\unrhd_R vy$.
\end{prop}
\begin{proof}
If $v=e$, there is nothing to prove. Otherwise, by~\eqref{RC}, it suffices to show that for every $i\in[n]$, we have
\begin{subequations}
\begin{equation} \label{RC-for-xy}
[y(i),x(i)]\subseteq y[i,n]
\end{equation}
if and only if
\begin{equation} \label{RC-for-v}
[(vy)(i),(vx)(i)]\subseteq vy[i,n].
\end{equation}
\end{subequations}
This is clear if $i=n$, so we assume henceforth that $i\neq n$. Moreover,
\[v(k)=\begin{cases}
k &\text{ if } k<j,\\
k+1 &\text{ if } j\leq k<n,\\
j &\text{ if } k=n\end{cases}
\qquad\text{and}\qquad
v^{-1}(k)=\begin{cases}
k &\text{ if } k<j,\\
n &\text{ if } k=j,\\
k-1 &\text{ if } k>j.\end{cases}
\]
In particular, if $i\neq n$, then $x(i)>y(i)$ if and only if $v(x(i))>v(y(i))$. We assume henceforth that these two equivalent conditions hold, since if both fail then~\eqref{RC-for-xy} and~\eqref{RC-for-v} are both trivially true.
The proofs of the two directions now proceed very similarly.
\medskip
$\eqref{RC-for-xy}\implies\eqref{RC-for-v}$:\quad There are three cases.
\emph{Case 1a: $j>x(i)$.} Then $v$ fixes $[1,x(i)]$ pointwise, so $[(vy)(i),(vx)(i)]=v[y(i),x(i)]\subseteq vy[i,n]$ (applying $v$ to both sides of~\eqref{RC-for-xy}).
\emph{Case 1b: $y(i) < j \leq x(i)$.}
Then $(v x)(i) = x(i) + 1$ and $(v y)(i) = y(i)$, so
\begin{align*}
[(vy)(i),(vx)(i)] &= [y(i),j-1]\cup\{j\}\cup[j+1,x(i)+1]\\
&= v[y(i),j-1]\cup\{v(n)\}\cup v[j,x(i)]\\
&= v\left( [y(i),x(i)]\cup \{y(n)\}\right)\\
&\subseteq vy[i,n]
\end{align*}
establishing~\eqref{RC-for-v}.
\emph{Case 1c: $j \leq y(i)$.} Similarly to Case~1a, we have $[(vy)(i),(vx)(i)] = [y(i)+1,x(i)+1] = v[y(i),x(i)]\subseteq vy[i,n]$, as desired.
$\eqref{RC-for-v}\implies\eqref{RC-for-xy}$:\quad
Applying $v^{-1}$ to both sides of~\eqref{RC-for-v} gives
$v^{-1}[vy(i),vx(i)]\subseteq y[i,n]$,
so in order to prove~\eqref{RC-for-xy} it is enough to show that
\begin{equation} \label{enough}
[y(i),x(i)]\subseteq v^{-1}[vy(i),vx(i)].
\end{equation}
Moreover, the earlier assumption $i\neq n$ implies that $vx(i)\neq j$ and $vy(i)\neq j$.
\emph{Case 2a: $j > vx(i)$.} Then $v^{-1}$ fixes the set $[1,vx(i)]$ pointwise, so in particular $[y(i),x(i)] = [vy(i),vx(i)] = v^{-1}[vy(i),vx(i)]$, establishing~\eqref{enough}.
\emph{Case 2b: $vy(i) < j < vx(i)$.} Then $y(i) = vy(i)$ and $x(i) = vx(i)-1$, so
\begin{align*}
[y(i),x(i)] &= [vy(i),j-1] \cup [j,vx(i)-1] \\
&= v^{-1}[vy(i),vy(n)-1] \cup v^{-1}[vy(n)+1,vx(i)] \\
&\subseteq v^{-1}[vy(i),vx(i)].
\end{align*}
\emph{Case 2c: $j < vy(i)$.} Then $[y(i),x(i)] = [vy(i)-1,vx(i)-1] = v^{-1}[vy(i),vx(i)]$, again implying~\eqref{enough}.
\end{proof}
\begin{theorem} \label{thm:same}
The pseudoreachability order coincides with the bubble-sorting order.
\end{theorem}
\begin{proof}
It suffices to show that the two partial orders have the same covering relations, i.e., that
\[x\gtrdot_P y \quad\iff\quad \lambda(x)\gtrdot_C\lambda(y).\]
We induct on $n$; the base case $n=1$ is trivial. Let $x,y\in\mathfrak{S}_n$ with $n>1$, and let their conormal forms be
\[
x = u\bar x = (s_i\cdots s_{n-1}) \bar x,\qquad
y = v\bar y = (s_j\cdots s_{n-1}) \bar y
\]
where $i=x(n)=n-\lambda_{n-1}(x)$ and $j=y(n)=n-\lambda_{n-1}(y)$.
\medskip
($\impliedby$)\quad Suppose that $\lambda(x)\gtrdot_C\lambda(y)$. Then either $i=j-1$ or $i=j$. If $i=j-1$, then $\lambda(\bar x)=\lambda(\bar y)$, so $\bar x=\bar y$ and $x=s_iy$, which by Prop.~\ref{lem:inv-1} implies $x\gtrdot_P y$. If $i=j$, then $\lambda(\bar x)\gtrdot_C\lambda(\bar y)$. Then $\bar x\gtrdot_P\bar y$ by induction, so $v\bar x\gtrdot_P v\bar y=y$
by Prop.~\ref{v-reduction}.
\medskip
($\implies$)\quad Suppose that $x\gtrdot_Py$. Then $x \gtrdot_B y$ by Theorem~\ref{thm:bruhat}, so $i\leq j$ (as noted in the proof of Prop.~\ref{cf-df-facts}).
If $i<j$, then $v$ is a proper suffix of $u$. By the definition of Bruhat order it must be the case that $x=yt_{a,b}$ for some $a<b$; in fact $b=n$ (otherwise $x(n)=y(n)$). Then $x(n)=y(a)$ and $x(a)=y(n)$, and $x(k)=y(k)$ for $k\not\in\{a,n\}$. Moreover, $y(a)<x(a)$ (since $x \gtrdot_B y$ and not vice versa). On the other hand, if $y(a)\leq x(a)-2$, so that $y(a)<c<x(a)=y(n)$ for some $c$, then by~\eqref{RC} $c=y(k)$ for some $k\in[a+1,n-1]$, and in particular $x$ has at least three more inversions than $y$ --- not only $(a,n)$, but also $(a,k)$ and $(k,n)$, which contradicts the assumption $x\gtrdot_Py$. Therefore $y(a)=x(a)-1$, i.e., $x(n)=y(n)-1$. We conclude that $x=s_iy$, so $\lambda(x)\gtrdot_C\lambda(y)$ using the conormal forms above.
If $i=j$, then $u=v$, so $\bar x\gtrdot_P\bar y$ by Prop.~\ref{v-reduction}. By induction $\lambda(\bar x)\gtrdot_C\lambda(\bar y)$, and prepending $n-i$ gives
$\lambda(x)\gtrdot_C\lambda(y)$ as well.
\end{proof}
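Theorem~\ref{thm:same} lends itself to machine verification for small $n$. The sketch below (illustrative only) assumes the interval form of~\eqref{RC} used in the proofs above and checks, on $\mathfrak{S}_4$, that the transitive closure of reachability coincides with componentwise comparison of $\lambda$.

```python
from itertools import permutations

def reach(x, y):
    # Assumed interval form of (RC): x >=_R y iff [y(i), x(i)] lies in y[i, n].
    return all(set(range(y[i], x[i] + 1)) <= set(y[i:]) for i in range(len(x)))

def lam(x):
    # Conormal-form statistic: lambda_{n-1}(x) = n - x(n), then recurse.
    x, out = list(x), []
    while len(x) > 1:
        j = x[-1]
        out.append(len(x) - j)
        x = [v if v < j else v - 1 for v in x[:-1]]
    return tuple(out)

S4 = list(permutations(range(1, 5)))
P = {(x, y) for x in S4 for y in S4 if reach(x, y)}
while True:  # transitive closure = pseudoreachability order
    new = {(x, y) for (x, z) in P for (zz, y) in P if z == zz} - P
    if not new:
        break
    P |= new
bubble = {(x, y) for x in S4 for y in S4
          if all(a >= b for a, b in zip(lam(x), lam(y)))}
print(P == bubble)  # True
```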
\section{Pattern avoidance and reachability} \label{sec:avoid}
In this section, we establish two sufficient conditions for reachability using pattern avoidance. (It is unclear whether pattern-avoidance conditions can completely characterize reachability.)
Let $\pi\in\mathfrak{S}_n$ and $\sigma\in\mathfrak{S}_m$, where $m\leq n$. A \defterm{$\sigma$-pattern} is a subsequence $\pi(i_1),\dots,\pi(i_m)$ in the same relative order as $\sigma$, i.e., such that $1\leq i_1<\cdots<i_m\leq n$ and $\pi(i_j)<\pi(i_k)$ if and only if $\sigma(j)<\sigma(k)$. If $\pi$ contains no $\sigma$-pattern then we say that $\pi$ \defterm{avoids} $\sigma$.
\begin{theorem} \label{thm:213-avoiding}
If $x\geq_By$ and $y$ avoids $213$, then $x\unrhd_R y$.
\end{theorem}
\begin{proof}
Suppose that $x\geq_By$ and $y$ avoids $213$, but $x\mkern-1mu\not\mathrel{\mkern1mu\unrhd_R}\mkern1mu y$.
Then there is an index $i$ such that $\mathsf{d}_i(x,y) = 0$; fix such an $i$. By Prop.~\ref{cf-df-facts} we know that $1<i<n$ and that $y(i) < x(i)$. Let $m = y^{-1}(x(i))$, so that $y(m)=x(i)$; in particular $m\neq i$, since $y(m)=x(i)>y(i)$.
First, suppose that $m > i$. We claim that there exists some $u<i$ such that $y(i) < y(u) < y(m)$. Otherwise, $J_i\geq y(m)-y(i)+1$, and then $k=y(m)-y(i)$ has the properties $k<J_i$ and $y(i)+k=y(m)=x(i)$, so $k\in\mathsf{D}_i(x,y)$, contradicting the assumption $\mathsf{d}_i(x,y) = 0$. Therefore $y(u), y(i), y(m)$ is a 213-pattern.
Second, suppose that $m < i$. If $y(k) > y(m)$ for some $k>i$, then $y(m),y(i),y(k)$ is a 213-pattern. On the other hand, suppose that $y(k) < y(m) = x(i)$ for all $k > i$ (hence for all $k \ge i$).
Then
\[\{k\in[i,n]:\ y(k)<x(i)\}=[i,n]~\supsetneq~[i+1,n]\supseteq\{k\in[i,n]:\ x(k)<x(i)\}\]
so
\begin{align*}
\#\{k\in[i,n]:\ y(k)<x(i)\} &> \#\{k\in[i,n]:\ x(k)<x(i)\}\\
\therefore\quad \#\{k\in[1,i-1]:\ y(k)<x(i)\} &< \#\{k\in[1,i-1]:\ x(k)<x(i)\}\\
\therefore\quad \#\{k\in[1,i-1]:\ y(k)\geq x(i)\} &> \#\{k\in[1,i-1]:\ x(k)\geq x(i)\}.
\end{align*}
That is, $y\langle i-1,x(i)\rangle>x\langle i-1,x(i)\rangle$, contradicting the assumption $x\geq_By$.
\end{proof}
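Theorem~\ref{thm:213-avoiding} can likewise be checked exhaustively for small $n$. The sketch below assumes the interval form of~\eqref{RC} and the standard dominance criterion for the Bruhat order; the final lines illustrate that the 213-avoidance hypothesis on $y$ cannot simply be dropped.

```python
from itertools import permutations, combinations

def reach(x, y):
    # assumed interval form of (RC)
    return all(set(range(y[i], x[i] + 1)) <= set(y[i:]) for i in range(len(x)))

def bruhat_ge(x, y):
    # standard dominance criterion: x >=_B y iff
    # #{k <= i : x(k) >= j} >= #{k <= i : y(k) >= j} for all i, j
    n = len(x)
    return all(sum(x[k] >= j for k in range(i)) >= sum(y[k] >= j for k in range(i))
               for i in range(1, n + 1) for j in range(1, n + 1))

def avoids_213(y):
    # a 213-pattern at positions a < b < c means y(b) < y(a) < y(c)
    return not any(y[b] < y[a] < y[c] for a, b, c in combinations(range(len(y)), 3))

S4 = list(permutations(range(1, 5)))
assert all(reach(x, y) for x in S4 for y in S4 if bruhat_ge(x, y) and avoids_213(y))
x, y = (1, 3, 4, 2), (1, 3, 2, 4)
print(bruhat_ge(x, y), avoids_213(y), reach(x, y))  # True False False
```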
Theorem~\ref{thm:213-avoiding} partially answers the question of when the converse of Theorem~\ref{thm:bruhat} holds, i.e., which Bruhat relations are also relations in pseudoreachability order. We next ask whether there is an analogous condition on $x$, rather than $y$, that suffices for reachability. One such condition is to require that only very few entries $x(i)$ be large relative to $i$.
\begin{lemma} \label{lem:x1}
Let $x\in\mathfrak{S}_n$. The following conditions are equivalent:
\begin{enumerate}
\item $x^{-1}(i) \leq i+1$ for all $i\in[n]$.
\item $x\langle j,j\rangle = 1$ for all $j \in [n]$.
\item $x$ avoids both 231 and 321.
\item $x$ is of the form $s_{i_1}\cdots s_{i_k}$, where $n-1\geq i_1>\cdots>i_k\geq1$.
\end{enumerate}
\end{lemma}
The number of these permutations is $2^{n-1}$, which is easiest to see from condition~(4).
Conditions~(1) and~(3) were mentioned by J.~Arndt (June 24, 2009) and M.~Riehl (August 5, 2014), respectively, in the comments on sequence A000079 in \cite{OEIS}.
Accordingly, we will call a permutation satisfying the condition of Lemma~\ref{lem:x1} an \defterm{AR permutation} (for Arndt--Riehl).
\begin{proof}
$(1)\iff(2)$: Formula~\eqref{angle-brackets} implies that
\begin{align*}
\forall j\in[n]:\ x\langle j,j\rangle = 1
&\iff \forall j\in[n]:\ [1,j-1]\subseteq x[1,j]\\
&\iff \forall j\in[n]:\ x^{-1}[1,j-1]\subseteq [1,j]\\
&\iff \forall i\in[n]:\ x^{-1}[1,i]\subseteq [1,i+1]
\end{align*}
since the last two statements differ only by the trivially true cases $i=0$ and $i=n$.
$(3)\iff(1)$: Condition~(1) says precisely that no digit $i\in[n]$ occurs later than position $i+1$. If $x^{-1}(i)>i+1$ for some $i$, then at least two of the $x^{-1}(i)-1\geq i+1$ digits preceding $i$ exceed $i$, and together with $i$ they form a 231- or 321-pattern. Conversely, suppose $x(a),x(b)>x(c)$ for some $a<b<c$, and let $u$ be the last position occupied by one of the digits $1,\dots,x(c)$, so that $u\geq c$. These $x(c)$ digits all lie in $[1,u]\setminus\{a,b\}$, so $x(u)\leq x(c)\leq u-2$, and thus the digit $x(u)$ occurs later than position $x(u)+1$.
$(4)\iff(1)/(3)$: Let $Y_n$ be the set of permutations in $\mathfrak{S}_n$ satisfying the equivalent conditions~(1) and~(3), and let $Z_n$ be the set satisfying condition (4). For $n\leq2$ we evidently have $Y_n=Z_n=\mathfrak{S}_n$. For $n\geq 3$, we proceed by induction. Observe that $Z_n=Z_{n-1}\cup s_{n-1}Z_{n-1}$, and that left-multiplication by $s_{n-1}$ (i.e., swapping the locations of $n-1$ and $n$) does not affect condition (1), which is always true for $i\in\{n-1,n\}$. Therefore $Z_n\subseteq Y_n$.
On the other hand, if $w\in Y_n$ then $w(n)\in\{n-1,n\}$; otherwise $w(n)$, together with the digits $n-1$ and $n$ preceding it, would form a 231- or 321-pattern. Therefore $w'(n)=n$, where either $w'=w$ or $w'=s_{n-1}w$. By induction $w'\in Z_{n-1}$, so $w\in Z_n$ as desired.
\end{proof}
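The equivalence of conditions~(1) and~(3), and the count $2^{n-1}$, are easy to confirm by brute force for small $n$; the following sketch is an illustration only.

```python
from itertools import permutations, combinations

def cond1(x):
    # (1): x^{-1}(i) <= i + 1 for every i
    pos = {v: k + 1 for k, v in enumerate(x)}
    return all(pos[i] <= i + 1 for i in range(1, len(x) + 1))

def cond3(x):
    # (3): x avoids 231 and 321, i.e. no entry is preceded by two larger entries
    return not any(x[a] > x[c] and x[b] > x[c]
                   for a, b, c in combinations(range(len(x)), 3))

for n in range(1, 7):
    Sn = list(permutations(range(1, n + 1)))
    ar = [x for x in Sn if cond1(x)]
    assert ar == [x for x in Sn if cond3(x)]
    assert len(ar) == 2 ** (n - 1)
print("OK")
```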
\begin{corollary} \label{cor:arn-bru}
If $x$ is AR and $y\leq_Bx$, then $y$ is AR as well.
\end{corollary}
\begin{proof}
Lemma~\ref{lem:x1} asserts that $x\langle i,i\rangle = 1$ for all $i \in [n]$. Since $y\leq_Bx$, we have $y\langle i,i\rangle \leq 1$ for all $i$; and $y\langle i,i\rangle = 0$ is impossible, since by the pigeonhole principle at least one of the $i$ distinct values $y(1),\dots,y(i)$ must be at least $i$.
\end{proof}
An \defterm{exceedance} of a permutation $x\in\mathfrak{S}_n$ is an index $k\in[n]$ such that $x(k)>k$.
\begin{lemma} \label{lem:x2}
Let $x \in \mathfrak{S}_{n}$ be an AR permutation. Suppose that $k$ is an exceedance of $x$, and let $i=x(k)$. Then $x(j)=j-1$ for all $j\in[k+1,i]$.
\end{lemma}
\begin{proof}
The argument of Lemma~\ref{lem:x1} implies that $[1,k-1]\subseteq x[1,k]$; however, since $x(k)>k$ we have in fact
$[1,k-1]=x[1,k-1]$.
Now let $j\in[k+1,i]$. Lemma~\ref{lem:x1} also asserts that $x\langle j,j\rangle=\#A_j=1$, where $A_j=\{m\in[j]:\ x(m)\geq j\}$. Certainly $k\in A_j$, so $j\not\in A_j$, that is, $x(j)<j$. But since $x(j)\geq k$ for each such $j$, we can infer in turn that $x(k+1)=k$, $x(k+2)=k+1$, \dots, $x(i)=i-1$.
\end{proof}
\begin{theorem} \label{thm:x}
If $x\geq_By$ and $x$ is AR, then $x\unrhd_R y$.
\end{theorem}
\begin{proof}
Suppose that $x\geq_By$ and $x$ is AR, but $x\mkern-1mu\not\mathrel{\mkern1mu\unrhd_R}\mkern1mu y$.
Then there is an index $i$ such that $\mathsf{d}_{i}(x, y) = 0$; fix such an $i$.
By~\eqref{RC}, there exists $j < i$ such that
\begin{equation} \label{banana}
y(i) < y(j) \le x(i).
\end{equation}
By Lemma \ref{lem:x1}, $x\langle i,i\rangle = 1$; that is, there exists some (unique) $k \leq i$ such that $x(k) \ge i$.
First, suppose that $k=i$. Then $x\langle i-1,i\rangle = 0$, and $y\langle i-1,i\rangle = 0$ as well because $y \leq_B x$. Hence $y[1,i-1]=[i-1]$. But then~\eqref{banana} implies that $y(i)<y(j)\leq i-1$ as well, a contradiction.
Second, suppose that $k < i$. Then $y(i)<x(i)<i$ by Lemma~\ref{lem:x2}, so $y(i)\leq i-2$.
Set $p=y(i)$; then $y^{-1}(p)=i\geq p+2>p+1$, so $y$ is not AR, which violates Corollary~\ref{cor:arn-bru}.
\end{proof}
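As with Theorem~\ref{thm:213-avoiding}, Theorem~\ref{thm:x} can be confirmed by brute force for small $n$, again assuming the interval form of~\eqref{RC} and the dominance criterion for the Bruhat order (illustration only).

```python
from itertools import permutations

def reach(x, y):
    # assumed interval form of (RC)
    return all(set(range(y[i], x[i] + 1)) <= set(y[i:]) for i in range(len(x)))

def bruhat_ge(x, y):
    # dominance criterion for the Bruhat order
    n = len(x)
    return all(sum(x[k] >= j for k in range(i)) >= sum(y[k] >= j for k in range(i))
               for i in range(1, n + 1) for j in range(1, n + 1))

def is_ar(x):
    # condition (1) of Lemma lem:x1: x^{-1}(i) <= i + 1 for all i
    pos = {v: k + 1 for k, v in enumerate(x)}
    return all(pos[i] <= i + 1 for i in range(1, len(x) + 1))

S5 = list(permutations(range(1, 6)))
assert all(reach(x, y) for x in S5 if is_ar(x) for y in S5 if bruhat_ge(x, y))
print("verified on S_5")
```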
\bibliographystyle{plain}
\title{On the speed of convergence of Newton's method for complex polynomials}
\begin{abstract}
We investigate Newton's method for complex polynomials of arbitrary degree $d$, normalized so that all their roots are in the unit disk. For each degree $d$, we give an explicit set $\mathcal{S}_d$ of $3.33d\log^2 d(1 + o(1))$ points with the following universal property: for every normalized polynomial of degree $d$ there are $d$ starting points in $\mathcal{S}_d$ whose Newton iterations find all the roots with a low number of iterations: if the roots are uniformly and independently distributed, we show that with probability at least $1-2/d$ the number of iterations for these $d$ starting points to reach all roots with precision $\varepsilon$ is $O(d^2\log^4 d + d\log|\log \varepsilon|)$. This is an improvement of an earlier result in \cite{Schleicher}, where the number of iterations is shown to be $O(d^4\log^2 d + d^3\log^2d|\log \varepsilon|)$ in the worst case (allowing multiple roots) and $O(d^3\log^2 d(\log d + \log \delta) + d\log|\log \varepsilon|)$ for well-separated (so-called $\delta$-separated) roots. Our result is almost optimal for this kind of starting points in the sense that the number of iterations can never be smaller than $O(d^2)$ for fixed $\varepsilon$.
\end{abstract}
\section{Introduction}
Newton's root-finding method is an old and classical method for
finding roots of a differentiable function; it goes back to Newton in
the 17th century, perhaps earlier. It was one of the main reasons why
A.\ Douady, J.\ Hubbard and others in the late 1970s studied
iterations of complex analytic functions. The main question was where
to start the Newton iteration in order to converge to the roots of
the polynomial. Newton's method is known to converge rapidly near the
roots (usually quadratically), but it had a reputation that its
global dynamics is difficult to understand, so that in practice other
root-finding methods were used. See \cite{JohannesNewtonSurvey} for
an overview of recent results about Newton's method.
Meanwhile, some small sets of good starting points are known: there
are explicit deterministic sets with $O(d\log^2d)$ points that are
guaranteed to find all roots of appropriately normalized polynomials
of degree $d$ \cite{HSS}, and probabilistic sets with as few as
$O(d(\log\log d)^2)$ points \cite{BLS}.
We are interested in the question of how many iterations are required
until all roots are found with prescribed precision $\varepsilon$. In
\cite{D}, it is shown that among a set of starting points as
specified above, there are $d$ points that converge to the $d$ roots
and require at most $O(d^4\log^2d+d^3\log^2d|\log\varepsilon|)$ iterations to get
$\varepsilon$-close to the $d$ roots in the worst case; for randomly placed
roots (or for roots at mutual distance at least $\delta$ for some
$\delta>0$), the required number of iterations is no more than
$O(d^3\log^3d+d\log|\log\varepsilon|)$ (with the constant depending on
$\delta$). This is about one power of $d$ away from the best possible
bounds.
In this paper, we show that Newton's method is about as fast as
theoretically possible. We consider the space of polynomials of
degree $d$, normalized so as to have all roots in the complex unit
disk $\mathbb D$. Our main result is the following.
\begin{theorem}[Quadratic Convergence in Expected Case] \label{Thm:Main}
For every degree $d$, there is an explicit universal set $\mathcal{S}_d$
of
points in ${\mathbb C}$, with $|\mathcal{S}_d|=3.33d\log^2d(1+o(1))$, with the
following property:
suppose that $\alpha_1, \ldots, \alpha_n$ are uniformly and independently
distributed in the unit disk and put $p(z)= \prod_{j=1}^d (z-\alpha_j)$.
Then, with probability $p_d \rightarrow 1$ as $d \rightarrow \infty$, there are
$d$ starting points in $\mathcal{S}_d$ such that the number of
iterations needed to approximate all $d$ roots with precision $\varepsilon$
starting at these $d$ points is
\[
O(d^2\log^4 d + d\log|\log \varepsilon|).
\]
\end{theorem}
\begin{remark}
As stated, the theorem deals with $d$ distinguishable (i.e., ordered)
roots and their associated probability distribution.
We prove that the same result holds if we identify our polynomials in terms
of their sets of \emph{indistinguishable} roots, as two polynomials
$p(z)= \prod_{j=1}^d (z-\alpha_j)$ and $q(z)= \prod_{j=1}^d (z-\beta_j)$
are the same if their unordered sets of roots $\{\alpha_1,\ldots, \alpha_d\}$ and
$\{\beta_1,\ldots, \beta_d\}$ are equal.
\end{remark}
\begin{remark}
This bound on the number of iterations is optimal in the sense that
no bound of the same generality can have asymptotics in $o(d^2)$ for
fixed $\varepsilon$; we are thus away from the best possible bound
only by a factor of about $O(\log^4 d)$.
\end{remark}
\section{Good starting points for Newton's method}
\label{sec: prelim results}
Studying the geometry of the immediate basins outside the unit disk
$\mathbb{D}$, in \cite{HSS} we proved the existence of a universal
starting set with $1.11 d\log^2 d$ points depending only on $d$ such
that for every polynomial of degree $d$ with all roots in the unit
disk, and for every root, there is a point in the set which is in the
immediate basin of this root. Enlarging the set by a factor of 3
approximately, in \cite{D} we obtained a set of starting points
$\mathcal{S}_d$ which ensured that for each polynomial $p$ and each
root $\alpha$ there is a point $z$ in $\mathcal{S}_d$ intersecting
the immediate basin $U$ of $\alpha$ in the ``middle third'' of the
``thickest'' \textit{channel}, where a channel is an unbounded
connected component of $U \setminus \overline{\mathbb{D}}$. Being in
this middle third implies an
upper bound on the displacement $d_U(z, N_p(z))$ in terms of the
Poincar\'e metric of the immediate basin. It also turns out that the
orbit of $z$ under iteration of the Newton map does not leave $D_R(0)$,
the disk of radius $R$ centered at the origin, for some bounded value
of $R$. We will refer to such points as having
\textit{$R$-central orbits}.
More precisely, let $\mathcal{S}_d$ be defined as follows.
\begin{definition}[Efficient Grid of Starting Points]
For each degree $d$, construct a circular grid $\mathcal{S}_d$ as
follows. For $k = 1, \ldots, s = \lceil 0.4\log d\rceil$, set
$$r_k = (1+\sqrt{2})\left(\frac{d-1}{d}\right)^\frac{2k-1}{4s},$$
and for each circle around 0 of radius $r_k$, choose $\lceil
8.33d\log d\rceil$ equidistant points (independently for all the
circles).
\end{definition}
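For concreteness, the grid can be generated directly from this definition. The sketch below assumes that $\log$ denotes the natural logarithm and, for simplicity, fixes phase $0$ on every circle (the definition allows the equidistant points to be rotated independently on each circle).

```python
import math

def starting_grid(d):
    """Construct S_d: s = ceil(0.4 log d) circles of radius r_k, each
    carrying m = ceil(8.33 d log d) equidistant points.
    Assumptions: natural logarithm; phase 0 on every circle."""
    s = math.ceil(0.4 * math.log(d))
    m = math.ceil(8.33 * d * math.log(d))
    pts = []
    for k in range(1, s + 1):
        r = (1 + math.sqrt(2)) * ((d - 1) / d) ** ((2 * k - 1) / (4 * s))
        pts += [complex(r * math.cos(2 * math.pi * t / m),
                        r * math.sin(2 * math.pi * t / m)) for t in range(m)]
    return pts

grid = starting_grid(100)
print(len(grid))  # 7674: two circles of 3837 points each
```

All grid points lie in the annulus $1<|z|<1+\sqrt2$, and the total count is $\lceil 0.4\log d\rceil\cdot\lceil 8.33d\log d\rceil$, whose ratio to $d\log^2 d$ tends to $0.4\cdot 8.33=3.33$.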
The set $\mathcal{S}_d$ thus constructed has $3.33(1+o(1))d\log^2 d$
points. The following theorem is proven in \cite[Theorem~8]{D}.
\begin{theorem}\label{thm: set of starting points}
For each degree $d$, the set $\mathcal{S}_d$ has the following
universal property. If $p$ is any complex polynomial, normalized so
that all its roots are in $\mathbb{D}$, then there are $d$ points
$z^{(1)}, \ldots, z^{(d)}$ in $\mathcal{S}_d$ whose Newton iterations
converge to the $d$ roots of $p$. If $\alpha$ is a root of $p$ and
$U$ is the immediate basin of $\alpha$, then there is an index $i$
such that $z^{(i)}\in U$ with $d_U(z^{(i)}, N_p(z^{(i)}))< 2\log
d$. In addition, $z^{(1)}, \ldots, z^{(d)}$ have $R$-central orbits
for
$$R \leq 5\left(\frac{d}{d-1}\right)^{\lceil 5\pi(\log d + 1)\rceil}.$$
\end{theorem}
For $d = 100$, we have $R < 14$; for $d = 1000$, we have $R < 7.5$;
and asymptotically the upper bound on $R$ tends to $7$.
In particular, the result provides an upper bound for $R$ that is uniform in $d$.
This set of starting points will be the basis for the discussion
which follows.
\section{Uniformly distributed roots}\label{sec: result}
In this manuscript we investigate the Newton map for complex
polynomials with randomly distributed roots. In this section, we fix
notation and give the strategy of the
proof of our main result, Theorem~\ref{Thm:Main}.
Let $\alpha$ be a simple root
of the polynomial $p$ of degree $d$ and $U$ be the immediate basin of
attraction of $\alpha$. By the discussion in the
previous section, there exists $z_1\in \mathcal{S}_d$ with
$R$-central orbit in $U$, i.e.\ under iteration of the Newton map
$N_p$ the orbit converges to $\alpha$ and stays within $D_R(0)$. Let
$z_{n+1} := N_p(z_{n})$ for $n\geq 1$. For any two
consecutive points $z_n$ and $z_{n+1}$ along the orbit of $z_1$, in
\cite[Section~4]{D} we constructed ``thick'' curves that,
roughly speaking, ``use up'' area at least $|z_n -
z_{n+1}|^2/(2\tau)$ with $\tau := d_U(z_1, z_2) < 2\log d$.
In the region of quadratic convergence (near the root $\alpha$), only
$\log_2|\log_2 \varepsilon - 5|$ iterations are sufficient to get
$\varepsilon$-close to the root. Outside this region, two such curves
with base points $z_n$ and $z_{n'}$ are disjoint if $n' - n \geq
2\tau + 6$ \cite[Lemma~11]{D}. The bound $O(d^3\log^3d +
d\log|\log \varepsilon|)$ follows from lower bounds on the
displacements $|z_n - z_{n+1}|$ along the orbit. The main improvement
in this paper
is on the lower bounds on the displacements when the roots are
randomly distributed.
As in \cite{D}, we partition $D_{R}(0)$ (the disk of radius $R$
centered at the origin) into domains
\[
S_k := \left\{z\in D_{R}(0): \min_j|z-\alpha_j| \in \left(2^{-(k+1)},
2^{-k}\right]\right\}, \ k \in \mathbb{Z}
\;.
\]
It turns out that if the roots are randomly distributed in the unit
disk, then with high probability the following holds: there exists a
universal constant $C$ such that for every $n$ we have the following
estimates
\[ |z_n - z_{n+1}| \ge \left\{ \begin{array}{ll}
\frac{C}{d\log d} & \mbox{if $z_n\in S_k$ with $k
\leq \log_2 d$};\\
\frac{C}{2^k k } & \mbox{otherwise}.\end{array} \right.
\]
If $z_n\in S_k$ with $k\le \log_2d$, then we say that we are ``in the
far case'', as $z_n$ is far from all the roots. Since each such
iteration uses an area of at least $|z_n-z_{n+1}|^2/(2\tau)$, and one
in $2\tau+6$ such areas are disjoint, the total number of orbit
points in the far case is bounded by $O(d^2\log^4d)$.
\cite[Lemma~16]{D} says that if the orbit gets very close to
some root in comparison to the other roots, then it has entered the
region of quadratic convergence of that root, where only
$\log_2|\log_2 \varepsilon - 5|$ iterations are sufficient to approximate it
within an $\varepsilon$-neighborhood. We call this ``the near case''.
For randomly distributed roots, the mutual distances between roots are
large enough that, away from the region of quadratic convergence, we
only need to consider $k \le 3+(2+\eta)\log_2 d$ for a certain
$\eta>0$. We define the ``intermediate case'' as those $z_n\in S_k$
with $\log_2d< k\le 3+(2+\eta)\log_2 d$. Each domain $S_k$ has area
$O(d 4^{-k})$, and each iteration with $z_n\in S_k$ uses area about
$(C/2^kk)^2/2\tau\approx C^2/4^kk^2\tau$, so the number of orbit points
in the intermediate case is at most $O(dk^2\tau)$ for each $k$, times
the usual factor $2\tau+6$ to make the areas disjoint. But
$\log_2d<k\leq 3+(2+\eta)\log_2 d$ and $\tau=O(\log d)$, so the total
number of iterations in the intermediate case is $O(d\log^5d)$.
In the subsequent sections we will make these arguments precise.
\subsection{On the distribution of the roots}\label{sec: root distribution}
In order to get a lower bound on the expected displacement, we will
first investigate the distribution of the roots. We will be
interested in two different kind of probability spaces. The first
space $\mathcal{P}_d =\{(x_1, \ldots, x_d) : x_i\in \mathbb{D}\}$
consists of all polynomials with $d$ \textit{distinguishable} roots
in the unit disk, normalized so as to have leading coefficients $1$,
and the probability measure is induced by the Lebesgue
measure on $\mathbb{D}^d$. The second space $\mathcal P_d/S_d$
consists of all polynomials with \textit{indistinguishable} roots in
the unit disk, i.e.\ the quotient probability space of the standard
action of the symmetric group $S_d$ on $\mathcal{P}_d$ defined by
permuting the roots.
The following lemma is the probabilistic ingredient of the main
theorem. It is certainly not new, but it is easier to verify than to
look up in the library.
\begin{lemma}[Base-$d$ numbers]\label{lemma: numbers}
Let $M_d$ be the set of all $d$-digit numbers in base $d$.
(a) The probability that a randomly chosen number $a\in M_d$
does not have a digit repeating more than $O(\log d)$ times is at
least $1-1/d$.
(b) Let $\sim$ be an equivalence relation on $M_d$ defined as
follows: $a \sim b \Leftrightarrow \exists \sigma \in S_d$ with $a =
\sigma b$, i.e.\ two elements are equivalent if they have the same
sets of digits counted with multiplicities. Then the probability that
a randomly chosen element $[a]\in M_d/\sim$ does not have a
digit repeating more than $O(\log d)$ times is at least $1-1/d$.
\end{lemma}
\begin{proof}
(a) For fixed $i$, the number of $d$-digit numbers which contain at
least $\alpha$ digits $i$ is at most
$\binom{d}{\alpha}d^{d-\alpha}$. Thus the number of $d$-digit numbers
which contain a symbol repeating at least $\alpha$ times is at most
$$d \binom{d}{\alpha}d^{d-\alpha} < \frac{d}{\alpha !} d^{d}.$$
So the probability that a randomly selected number in $M_d$
contains at least $\alpha$ identical digits is at most $\frac{d}{\alpha
!}$ since $|M_d| = d^d$. Therefore, with probability at least
$1-\frac{d}{\alpha!}$, a randomly selected number in $M_d$ does not
have a digit repeating more than $\alpha$ times.
Note that if $\alpha!\ge d^2$ we have $1-\frac{d}{\alpha!} \geq 1 -
\frac{1}{d}$. Therefore, by taking $\alpha$ such that $(\alpha - 1)!
< d^2 \leq \alpha !$ (which implies that $\alpha \in O(\log d)$), we
prove the first part of the claim.
(b) Note that the elements of $ M_d/\sim$ can be bijectively mapped
to the set $\hbox{Mult}_d = \{(x_0, \ldots, x_{d-1}) : x_i \in
\mathbb{Z}_{\geq 0}, x_0 + \ldots + x_{d-1} = d\}$ as follows: for
$[a]\in M_d/\sim$ let $x_i$ be the multiplicity of digit $i$
in every $a\in [a]$. It is well known and easy to see that
\begin{equation}\label{eq: multiset coefficient}
\left|\left\{\rule{0pt}{10pt}(x_0, \ldots, x_{r-1}) : x_i\in
\mathbb{Z}_{\geq 0}, x_0 + x_1 + \ldots + x_{r-1} = n\right\}\right|
= \binom{n + r - 1}{r - 1}
\;.
\end{equation}
Thus we have $|\hbox{Mult}_d| = \binom{2d - 1}{d-1}$. On the
other hand, the number of elements in $\hbox{Mult}_d$ whose first
component is at least $\alpha$ equals the cardinality of
\[
\left\{(x_1, \ldots, x_{d-1}) : x_i \in \mathbb{Z}_{\geq 0},\ x_1 + \ldots
+ x_{d-1} \leq d-\alpha\right\}
\;,
\]
which has the same cardinality as
\[
\{(y_0, x_1, \ldots, x_{d-1}) : y_0, x_i \in \mathbb{Z}_{\geq 0},\ y_0 +
x_1 + \ldots + x_{d-1} = d-\alpha\}
\;.
\]
Again by (\ref{eq: multiset coefficient}) this quantity equals
$\binom{2d - \alpha - 1}{d-1}$. Therefore, the number of elements of
$\hbox{Mult}_d$ with a component at least $\alpha$, i.e. the number
of elements of $M_d/\sim$ with a digit repeating at least $\alpha$
times, is at most
\[
d\binom{2d - \alpha - 1}{d-1}\;.
\]
Hence the probability that a number of $M_d/\sim$ has a digit
repeating at least $\alpha$ times is at most
$$\frac{d\binom{2d - \alpha - 1}{d-1}}{\binom{2d - 1}{d-1}} = \frac{d
(2d - \alpha - 1)! (d-1)! d!}{(2d-1)! (d-1)! (d - \alpha)!} = $$
$$ = d\frac{(d-\alpha + 1)(d-\alpha + 2)\cdots d}{(2d - \alpha)(2d -
\alpha + 1)\cdots (2d - 1)} \leq d \left(\frac{1}{2}\right)^{\alpha -
1}\frac{d}{2d - 1}.$$ Hence for $\alpha = \lceil 2\log_2 d + 1
\rceil\in O(\log d)$ the second part of the claim follows.
\end{proof}
\begin{remark}
The statement remains true if we replace the probability $1-1/d$ by
$1-1/d^c$ for
any constant $c\geq 1$.
\end{remark}
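The counting in Lemma~\ref{lemma: numbers}(a) is straightforward to sanity-check numerically. The following Python sketch (our own illustration, not part of the argument) computes the $\alpha$ used in the proof, i.e.\ the smallest $\alpha$ with $\alpha!\ge d^2$, and estimates by simulation the probability that some digit of a random base-$d$ number repeats more than $\alpha$ times.

```python
import math
import random
from collections import Counter

def proof_alpha(d):
    """Smallest alpha with alpha! >= d^2, as in the proof of part (a);
    this alpha is O(log d)."""
    a = 1
    while math.factorial(a) < d * d:
        a += 1
    return a

def repeat_exceedance_rate(d, alpha, trials, seed=0):
    """Fraction of uniform d-digit base-d numbers in which some digit
    repeats more than alpha times."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        digits = Counter(rng.randrange(d) for _ in range(d))
        if max(digits.values()) > alpha:
            bad += 1
    return bad / trials

d = 100
alpha = proof_alpha(d)          # here (alpha-1)! < d^2 <= alpha!
rate = repeat_exceedance_rate(d, alpha, 2000)
print(alpha, rate)              # rate should be well below 1/d
```

In practice the observed exceedance rate is far smaller than the $1/d$ guaranteed by the lemma, since the bound $d/\alpha!$ is quite generous.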
If the roots are randomly distributed in the unit disk one should
expect that the number of roots in a region is proportional to its
area. The previous lemma easily implies the following statement.
\begin{lemma}\label{lemma:disk}
Let a polynomial with roots $x_1, \ldots, x_d$ be randomly
chosen in $\mathcal{P}_d$ or $\mathcal{P}_d/S_d$. Then there exists a
constant $C_d \in O(\log d)$ such that with probability at least
$1-1/d$ the following holds true: every disk in ${\mathbb C}$ with area $A =
\pi r^2$ contains at most $k(A)$ points among $x_1, \ldots, x_d$ with
$$k(A) = \left\{ \begin{array}{ll}
C_ddA & \mbox{if $A \geq 1/d$}; \\
C_d & \mbox{otherwise}.\end{array} \right.$$
\end{lemma}
\begin{proof}
Without loss of generality we can assume that $d = (2k+1)^2$ for an
integer $k$ (the general case follows by enlarging $C_{(2k+1)^2}$ by
a bounded factor).
Then the unit disk can be subdivided into $d$ pieces
as follows (compare Fig.~\ref{fig: circle partitioned}): the first
piece is a disk with center $0$ and radius $r_0 = 1/\sqrt{d};$ next,
consider the annuli $A_s$ bounded between circles around $0$ of radii
$(2s-1)r_0$ and $(2s+1)r_0$ for $s = 1, \ldots k$, and subdivide each
annulus $A_s$ into exactly $8s$ pieces of equal area by drawing $8s$
radial segments. Thus we construct exactly $d$ pieces with equal area
and diameters comparable with $r_0$. By Lemma \ref{lemma: numbers} it
follows that each of the pieces contains at most $O(\log d)$ of the
points $x_1, \ldots, x_d$ with probability at least $1-1/d$ (in both
cases of distinguishable and indistinguishable roots): in the case of
distinguishable roots, the $i$-th digit of a $d$-digit number
specifies the number of the piece containing the $i$-th root; in the
case of indistinguishable roots, the same correspondence holds after
quotienting by the action of $S_d$ on both sides, as in
Lemma~\ref{lemma: numbers}(b).
\begin{figure}
\begin{center}
\framebox{\includegraphics[width=0.5\textwidth]{DiskPartition1a.pdf}}
\caption{Partition of the unit disk into smaller pieces of similar
sizes.\label{fig: circle partitioned}}
\end{center}
\end{figure}
Hence, the claim is true for that particular partition of the unit
disk. This implies the general claim as follows. It is easy
to see that each square of side length at most $r_0$ in the complex
plane can intersect at most a constant number $C'$ of these pieces,
where $C'$ does not depend on $d$. Consider
a square $S$ for which the unit circle is inscribed, for example the
one with sides parallel to the real and imaginary axes. Subdivide it
into $d$ equal squares of
side length $r_0 $ (using the fact that $d$ is a square and
$r_0=1/\sqrt d$). Then each of these smaller
squares will intersect at most $C'$ pieces from the partition of the
unit disk (some squares will not intersect any). Therefore each of
the small squares contains at most $C''\log d$ points for some
constant $C''$ which does not depend on $d$. Since each square of
side length $r_0$ (possibly rotated) intersects at most $9$ of these
squares dividing $S$, we conclude that each square of side length
$r_0$ contains, with probability at least $1-1/d$,
at most $C\log d$ of the points $x_1,\ldots, x_d$ for $C = 9 C''$. If
we group every 4 neighboring small squares (of side length $r_0$) and
repeat the argument, we get that each square of side length $2r_0$
contains at most $4 C\log d$ points, and so on for squares of side
length $4r_0, 8r_0,\ldots $. Thus an arbitrary square of side length
$x\in[2^kr_0,2^{k+1}r_0]$ contains at most $2^{2k + 2} C\log d\le
4C(x^2/r_0^2)\log d \approx 4Cx^2d\log d$ points, since it is contained
in some square of side length $2^{k+1}r_0$; thus, after enlarging the
constant by a factor of $4$, the lemma holds true for squares. Since
each disk of radius $r$ is contained in a square of side length $2r$,
the bound on the number of points in an arbitrary disk follows.
\end{proof}
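Lemma~\ref{lemma:disk} admits a quick numerical sanity check. The Python sketch below (an illustration under the uniform model, not part of the proof; parameters chosen arbitrarily) samples $d$ uniform points in the unit disk, probes random disks of area $A \ge 1/d$, and records the largest observed ratio of point count to $dA$, which should stay within a modest multiple of $\log d$.

```python
import math
import random

def uniform_disk_points(d, rng):
    """d i.i.d. uniform points in the unit disk, by rejection sampling."""
    pts = []
    while len(pts) < d:
        z = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
        if abs(z) <= 1:
            pts.append(z)
    return pts

rng = random.Random(1)
d = 400
pts = uniform_disk_points(d, rng)

# Probe random disks whose area A is at least 1/d; the Area Condition
# predicts at most C_d * d * A points inside, with C_d = O(log d).
worst_ratio = 0.0
r_min = (1.0 / (math.pi * d)) ** 0.5        # disk of area exactly 1/d
for _ in range(200):
    r = rng.uniform(r_min, 0.5)
    center = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
    area = math.pi * r * r
    count = sum(abs(z - center) <= r for z in pts)
    worst_ratio = max(worst_ratio, count / (d * area))
print(worst_ratio)
```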
Now we prove the following claim about the mutual distance for
randomly distributed points in the unit disk.
\begin{lemma}\label{lemma: mutual distance}
Let the polynomial $p(z)$ be randomly chosen in $\mathcal{P}_d$ or
$\mathcal{P}_d/S_d$. Then the mutual distance between any pair of its
roots is at least ${1}/{d^{1+\eta}}$, for any fixed $\eta>0$, with
probability at least $1-1/d^{2\eta}$.
\end{lemma}
\begin{proof}
First, note that the claim for a randomly chosen polynomial in
$\mathcal{P}_d/S_d$ follows from the claim for a randomly chosen
polynomial in $\mathcal{P}_d$. Choosing randomly a
polynomial in $\mathcal{P}_d$ is equivalent to choosing randomly and
independently its roots. For a positive number $r$, the
probability $p_{d,r}$ that $d$ uniformly and independently
distributed points in the unit disk have mutual distance at least $r$
is at least
\[
p_{d,r}\geq (1 - r^2)(1-2r^2)\ldots (1-(d-1)r^2)
\]
(the unit disk has area $\pi$, and after $k$ roots are selected, the
$(k+1)$-st root must avoid an area of at most $k\pi r^2$; this has
probability at least $(\pi-\pi kr^2)/\pi=1-kr^2$).
Since $\log(1+x) \ge x/(1+x)$ for $x>-1$, we get
\[
\log p_{d,r}\ge \sum_{k=1}^{d-1} \log(1-k r^2) \ge \sum_{k = 1}^{d-1}
\frac{-k r^2}{1-k r^2}
\ge -r^2 \frac{\sum_{k = 1}^{d-1} k}{1-d r^2} \ge -r^2
\frac{d^2/2}{1-d r^2} \ge
-d^2 r^2
\;,
\]
where the last inequality holds if $d r^2 < 1/2.$ Hence
\[
p_{d, r}\ge
\exp(-d^2 r^2)\ge 1 - d^2 r^2
\;.
\]
If $r = 1/d^{1+\eta}$ (which satisfies $d r^2 < 1/2$),
then $p_{d,r}\ge
1-1/d^{2\eta}$ and thus the claim follows.
\end{proof}
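Lemma~\ref{lemma: mutual distance} can likewise be checked by simulation. The sketch below (an illustration under the stated model, not part of the proof) estimates how often $d$ uniform points in the disk have minimal pairwise distance at least $1/d^{1+\eta}$; the lemma predicts success probability at least $1-1/d^{2\eta}$.

```python
import itertools
import random

def min_pairwise_distance(d, rng):
    """Minimum pairwise distance among d i.i.d. uniform points in the
    unit disk (rejection sampling for uniformity)."""
    pts = []
    while len(pts) < d:
        z = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
        if abs(z) <= 1:
            pts.append(z)
    return min(abs(a - b) for a, b in itertools.combinations(pts, 2))

rng = random.Random(2)
d, eta = 50, 0.5
threshold = 1 / d ** (1 + eta)          # 1/d^{1.5}
trials = 200
hits = sum(min_pairwise_distance(d, rng) >= threshold
           for _ in range(trials))
print(hits / trials)   # the lemma predicts at least 1 - 1/d = 0.98
```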
\begin{remark}
The precise value of $\eta$ in this lemma is not very important to
us, as eventually it will only affect constants in the bounds we
obtain.
\end{remark}
An immediate corollary of the lemma is an upper bound on the distance
of $z_n$ to the closest root which guarantees quadratic convergence.
\begin{corollary}\label{cor: bound on K}
If the $d$ roots are randomly chosen and
$z_n \in S_k$ with $2^{-k} < {1}/{8d^{2+\eta}}$ and $\eta>0$, then
with probability at least $1-1/d^{2\eta}$ the orbit of $z_n$
converges to the closest root $\alpha$, and $\log_2|\log_2\varepsilon
- 5|$ iterations of $z_n$ are sufficient to get $\varepsilon$-close
to $\alpha$.
\end{corollary}
\begin{proof}
Indeed, if $z_n\in S_k$ and $\alpha$ is the closest root to $z_n$,
then $|z_n - \alpha| < {1}/{8d^{2+\eta}}$ and for every root
$\alpha_j\neq\alpha$ we have
\[
|z_n-\alpha_j| \geq |\alpha - \alpha_j| - |\alpha - z_n| >
1/ d^{1+\eta}-1/8d^{2+\eta} =
(8d-1)/8d^{2+\eta} > (4d + 3)|z_n - \alpha|
\]
(under the conditions of Lemma~\ref{lemma: mutual distance}).
Therefore by \cite[Lemma~16]{D}, we need no more than
$\log_2|\log_2\varepsilon - 5|$ iterations to get $\varepsilon$-close
to $\alpha$.
\end{proof}
We combine the previous two lemmas in the following claim.
\begin{lemma}\label{lemma: probabilistic conditions}
Let a polynomial with roots $x_1, \ldots, x_d$ be randomly chosen in
$\mathcal{P}_d$ or $\mathcal{P}_d/S_d$. Then with probability $p_d\ge
1-O(d^{-2\eta})$ (for fixed $\eta\in(0,1/2)$),
the following two statements simultaneously hold true:
\begin{description}
\item[Area Condition (AC)] There exists $C_d\in O(\log d)$ such
that every disk in ${\mathbb C}$ with area $A = \pi r^2$ contains at most
$k(A)$ roots among $x_1, \ldots, x_d$ with
$$k(A) = \left\{ \begin{array}{ll}
C_ddA & \mbox{if $A \geq 1/d$};\\
C_d & \mbox{otherwise}.\end{array} \right.$$
\item[Distance Condition (DC)] The mutual distance between any pair
of roots is at least ${1}/{d^{1+\eta}}$.
\end{description}
\end{lemma}
\begin{proof}
We are interested in $P(AC = \texttt{true} \mbox{ and } DC =
\texttt{true})$, which equals
\[
1 - P(AC = \texttt{false} \ \ \hbox{or}\ \ DC = \texttt{false})
\geq 1- P(AC = \texttt{false}) - P(DC = \texttt{false}).
\]
By Lemma \ref{lemma:disk} we have $P(AC = \texttt{false}) \leq 1/d$,
and by Lemma \ref{lemma: mutual distance} $P(DC = \texttt{false})\le
1/d^{2\eta}$. Hence the claim follows.
\end{proof}
\subsection{Proof of the main theorem}\label{sec: proof}
In this section we will use the two conditions \textbf{AC} and
\textbf{DC} to prove Theorem \ref{Thm:Main}. While \textbf{DC}
guarantees that proximity to a root implies fast convergence
(Corollary \ref{cor: bound on K}), \textbf{AC} gives a lower bound on
the displacements along an orbit far away from the roots. More
precisely, the following statement holds true.
\begin{lemma}\label{lemma: main}
Suppose that the Area Condition in Lemma \ref{lemma: probabilistic
conditions} holds true. If $z_n\in S_K \cap \mathbb{D}_2(0)$, then
\[
|z_n - z_{n+1}|\geq \frac{1}{(1 + 2C_d)2^{K+1} + 16\pi C_dd }
\;,
\]
where
$C_d\in O(\log d)$.
If $z_n \not \in \mathbb{D}_2(0)$, then $|z_n - z_{n+1}| > 1/d$.
\end{lemma}
\begin{proof}
The fact that $z_n\in S_K$ means that the closest root, say
$\alpha$, is at distance ${c}/{2^K}$ for some $c\in (0.5, 1]$, and all the
other roots satisfy $
|z_n-\alpha_j|\geq {c}/{2^K}$.
First suppose that $z_n\in S_K \cap
\mathbb{D}_2(0)$. This implies that $K\geq -2$. Let $T_k := \{z\in
\mathbb{C}:
2^{-k-1} < |z-z_n| \leq 2^{-k}\}$ for $k = -2, \ldots, K$. Then all the
roots are contained in $\bigcup_{k = -2}^{K}T_k$. The Area
Condition implies that there exists a constant $C_d\in O(\log d)$
such that
the number of roots in $T_k$ is bounded by $\pi C_d d4^{-k}$ for
$\pi 4^{-k}\geq 1/d$, and by $C_d$ otherwise. Thus we have
\begin{align}
\left|\sum\frac{1}{z_n-\alpha_j}\right|
&\leq \left|\frac{1}{z_n-\alpha}\right| + \sum_{\alpha_j \not=
\alpha}\left|\frac{1}{z_n-\alpha_j}\right| = \frac{2^K}{c} + \sum_{k
= -2}^{K}\sum_{\alpha_j \not= \alpha \atop \alpha_j \in
T_k}\left|\frac{1}{z_n-\alpha_j}\right| \nonumber
\\
&\leq \frac{2^K}{c} + \sum_{k = -2}^{\lfloor 0.5\log_2
\pi d\rfloor}\sum_{\alpha_j \not= \alpha \atop \alpha_j \in
T_k}\left|\frac{1}{z_n-\alpha_j}\right| +
\sum_{k = 1+\lfloor 0.5\log_2\pi d\rfloor}^{K}\sum_{\alpha_j \not=
\alpha \atop \alpha_j \in T_k}\left|\frac{1}{z_n-\alpha_j}\right|
\nonumber
\\
&\leq \frac{2^K}{c} + \sum_{k = -2}^{\lfloor 0.5\log_2 \pi d\rfloor}
\frac{2\pi C_d d4^{-k}}{2^{-k}} + \sum_{k = 1+\lfloor 0.5\log_2\pi
d\rfloor}^{K} C_d 2^{k+1} \nonumber
\\
&\leq 2^{K+1} + 16\pi C_dd + C_d 2^{K+2}\;. \nonumber
\end{align}
Therefore
\[
|z_n - z_{n+1}| = \frac{1}{\left|\sum\frac{1}{z_n-\alpha_j}\right|}\geq
\frac{1}{(1 + 2C_d)2^{K+1} + 16\pi C_dd } \;.
\]
For the case $z_n \not \in \mathbb{D}_2(0)$ we have
\[
|z_n - z_{n+1}|^{-1} = \left|\sum\frac{1}{z_n-\alpha_j}\right| <
\sum_{\alpha_j}1 = d \;,\]
and so $|z_n - z_{n+1}| > 1/d$.
\end{proof}
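The identity $|z_n - z_{n+1}| = 1/\left|\sum_j 1/(z_n-\alpha_j)\right|$ used above is just the logarithmic derivative of $p(z)=\prod_j(z-\alpha_j)$. The Python sketch below (purely illustrative; roots drawn uniformly from the square $[-1,1]^2$ for simplicity) compares this form of the Newton step with the direct $z - p(z)/p'(z)$ form, and checks the displacement bound $|z_n - z_{n+1}| > 1/d$ for a point outside $\mathbb{D}_2(0)$.

```python
import random

def newton_step(z, roots):
    """One Newton step for p(z) = prod (z - a), computed through the
    logarithmic derivative p'/p = sum 1/(z - a), as in the text."""
    return z - 1 / sum(1 / (z - a) for a in roots)

def newton_step_direct(z, roots):
    """The same step as z - p(z)/p'(z), with p and p' accumulated
    simultaneously."""
    p, dp = complex(1), complex(0)
    for a in roots:
        dp = dp * (z - a) + p   # product rule for d/dz prod(z - a)
        p = p * (z - a)
    return z - p / dp

rng = random.Random(3)
roots = [complex(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(8)]
z = 3 + 2j                                   # outside D_2(0)
diff = abs(newton_step(z, roots) - newton_step_direct(z, roots))
step = abs(newton_step(z, roots) - z)
print(diff, step)
```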
\begin{corollary}\label{cor: displacement}
Suppose that the Area Condition in Lemma \ref{lemma: probabilistic
conditions} holds true.
Then there is a universal constant $C$ such that the following
statements hold for any $z_n \in S_k$.
\begin{enumerate}
\item If $2^{-k} \geq 1/d$, then $|z_n - z_{n+1}| \geq
\frac{C}{d\log d}$.
\item If $1/8d^{2+\eta}\leq 2^{-k} < 1/d$, then $|z_n - z_{n+1}|
\geq\frac{C}{k2^k }$.
\end{enumerate}
\end{corollary}
\begin{proof}
For $z_n\in \mathbb{D}_2(0)$, Lemma \ref{lemma: main} gives
\[
|z_n - z_{n+1}| \geq \frac{1}{(1 + 2C_d)2^{k+1} + 16\pi C_dd } \;.
\]
If $2^{-k} \geq 1/d$, i.e.,\ $2^{k+1} \leq 2d$, the
denominator is at most $O(C_d d)$, so the displacement
is at least $C'/(d\log d)$ for some universal constant $C'$ (since
$C_d\in O(\log d)$). On the other hand, if $1/8d^{2+\eta}\leq 2^{-k} <
1/d$ and thus $d < 2^{k}$, the denominator is at most
$O(C_d2^k)$. In this case, $C_d\in O(\log d) = O(k)$, so the
displacement is at least $C''/(k2^k)$. Therefore the claim follows if
we take $C = \min \{C', C''\}$.
Finally, $z_n\not \in \mathbb{D}_2(0)$ implies $k < -1$. Again
Lemma \ref{lemma: main} gives $|z_n - z_{n+1}| > 1/d$, and thus we
finish the proof by possibly decreasing the constant $C$.
\end{proof}
The final step towards proving our main result is in the following theorem.
\begin{theorem}\label{thm:one root}
Let the polynomial $p(z)$ be randomly chosen in $\mathcal{P}_d$ or
$\mathcal{P}_d/S_d$ and let $(z_n)$ be an $R$-central orbit converging
to a root $\alpha$ with $d_U(z_0, z_1)\leq \tau$ for $\tau < 2\log
d$. Then with probability $p_d\ge
1-O(d^{-2\eta})$ (for fixed $\eta\in(0,1/2)$), the required number of
iterations for $z_0$ to get
$\varepsilon$-close to $\alpha$ is
\[
O\left(d^2\log^4 d\, R^2 + \log|\log\varepsilon - 5|\right) \;.
\]
\end{theorem}
Before proceeding to the proof of this statement, we will outline the
main idea. As in \cite{D}, we construct ``thick'' curves
connecting orbit points $z_n$ and $z_{n+1}$ that use up certain area
contained in a bounded domain. Far from the root, two curves
corresponding to $z_n$ and $z_{m}$ are disjoint provided that
$|n-m| > 2\tau + 6.$ A lower bound on the area of the ``thick''
curves gives an upper bound on the number of iterations. Also, near
the root the orbit enters the domain of quadratic convergence where
only a few iterations are sufficient to approximate the root.
More precisely, let $\varphi\colon U\to \mathbb{D}$ be the Riemann
map with $\varphi(\alpha) = 0$ considered in
\cite[Section~5]{D}. If $|\varphi(z_n)| < 1/2$ (``region of
fast convergence''), then according to \cite[Lemma~11]{D} we
need only $\log_2|\log_2 \varepsilon - 5|$ iterations to get
$\varepsilon$-close to the root $\alpha$. For orbit points with
$\varphi$-images having absolute values greater than $e ^{1/2} - 1$,
we can prove the following.
\begin{lemma}\label{lemma: area bound}
For every $n$ with $|\varphi(z_n)| > e^{1/2} - 1$, there are open
connected subsets $V_n\subset
D_{2R+2}(0)$ with $z_n,z_{n+1}\in\partial V_n$ and $|V_n|\ge
|z_n-z_{n+1}|^2/2\tau,$ having the following property: whenever $n$
and $m$ are such that $\min \{|\varphi(z_n)|, |\varphi(z_m)|\} >
e^{1/2} - 1$ and $|n-m|\ge \lceil 2\tau+6\rceil$, we have $V_n\cap
V_m=\emptyset$.
\end{lemma}
\begin{proof}
Let $\gamma\colon[0,s]\to U$ be the hyperbolic geodesic within $U$
connecting $z_n$ to $z_{n+1}$. For each $z=\gamma(t)$, let $\eta(t)$
be the Euclidean distance from $\gamma(t)$ to $\partial U$, and let
$X_t$ be the straight line segment (without endpoints) perpendicular
to $\gamma(t)$ of Euclidean length $\eta(t)$, centered at
$\gamma(t)$. Let $V_n:=\bigcup_{t\in(0,s)}X_t$. Then all $V_n$ are
open and connected and $z_n,z_{n+1}\in\partial V_n$, and the area of
$V_n$ is at least $|z_n-z_{n+1}|^2/2\tau$: this follows as in
\cite[Lemma~9]{D} (in this reference, the areas restricted to
certain domains $S_k$ are calculated; omitting this restriction, we
obtain the result we need, and the computations only get simpler). Moreover,
the orbit $(z_n)$ is $R$-central and the unit disk contains other
roots than $\alpha$, and
hence the length of $\gamma(t)$ for $t\in[0,s]$ is bounded by $R+1.$
This implies that all
pieces $V_n$ are contained in $D_{2R+2}(0)$ by construction.
The fact that $V_n\cap V_m$ are disjoint when $|n-m|>2\tau+6$ is
proved in \cite[Lemma~12]{D} (again for restricted domains, but
this is immaterial for the proof).
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:one root}]
We only need to consider iteration points whose images under
$\varphi$ have absolute values at least $e^{1/2} - 1$. Also, by
Lemma~\ref{lemma: probabilistic conditions}, the conditions \textbf{AC} and
\textbf{DC} hold true with probability $p_d\ge 1-O(d^{-2\eta})$.
Choose $M$ so that $ 2^M - 1>2R+2$. We distinguish the following three cases.
\begin{description}
\item[The Far Case] we have $z_n\in S_k$ with $2^{-k}\ge 1/d$. By
Corollary \ref{cor: displacement} (1)
we have $|z_n - z_{n+1}|
\geq\frac{C}{d\log d }$. Lemma \ref{lemma: area bound} says
that any Newton iteration $z_n\mapsto
z_{n+1}$ with $z_n\in S_k$ needs area at least
\[
\frac{|z_n-z_{n+1}|^2}{2\tau}
\ge
\frac{C^2}{2\tau d^2\log^2 d} \;.
\]
Moreover, the pieces of area for the
iterations $z_n\mapsto z_{n+1}$ and $z_{n'}\mapsto z_{n'+1}$ are
disjoint provided that $|n-n'| \geq \lceil 2\tau + 6\rceil$, and all these pieces of
area are contained in the disk $D_{2R+2}(0)$ with $R$ universally
bounded.
The total number of such iterations $D_{2R+2}(0)$ can accommodate is
thus at most
\[
C'd^2(\log d)^2 \tau \lceil 2\tau + 6\rceil R^2
\]
for a universal constant $C'$.
\item[The Intermediate Case] we have $z_n\in S_k$ with
$1/8d^{2+\eta}\leq 2^{-k} < 1/d$. Then $\log_2 d < k \leq 3 +
(2+\eta) \log_2 d$. By Corollary \ref{cor: displacement} (2) we have
$|z_n - z_{n+1}| \geq{C}/{k2^k }$.
Thus by \cite[ Proposition 13]{D}, the set $S_k$ contains at most
{\allowdisplaybreaks
\begin{align}
&\pi d\left(2^{-k+1} + \frac{C}{k2^k}\right)^2 \left(2\tau +
2^{k-1}\frac{C}{k2^k}\right)\lceil 2\tau + 6\rceil
\frac{k^22^{2k}}{C^2} \nonumber \\
&= \pi d 2^{-2k}k^{-2}(2k + C)^2 \left(2\tau +
\frac{C}{2k}\right)\lceil 2\tau + 6\rceil \frac{k^22^{2k}}{C^2}
\nonumber \\
&= \pi d C^{-2}(2k + C)^2 \left(2\tau + \frac{C}{2k}\right)\lceil
2\tau + 6\rceil \nonumber \\
&\leq \pi d C^{-2}(6 + (4+2\eta)\log_2 d + C)^2 \left(2\tau +
\frac{C}{2\log_2 d}\right)\lceil 2\tau + 6\rceil \nonumber
\\
&\leq C''d \log^2 d (2\tau + 1)\lceil 2\tau + 6\rceil
\nonumber
\end{align}
}%
orbit points for some universal constant $C''$. There are
$3+(1+\eta)\log_2 d$ possible values of $k$ in the Intermediate Case,
so $\bigcup_{k} S_k$ (for all $k$ in the Intermediate Case) can
accommodate at most
\[
(1+\eta)C''d \log^3 d (2\tau + 1)\lceil 2\tau + 6\rceil
\]
orbit points for some universal constant $C''$.
\item[The Near Case]
we have $z_n \in S_k$ with $2^{-k} < {1}/{8d^{2+\eta}}$. By Corollary
\ref{cor: bound on K}, $\alpha$ is the closest root to $z_n$ and we
need $\log_2|\log_2\varepsilon - 5|$ iterations to get
$\varepsilon$-close to it.
\hide{Thus it only remains to consider the orbit points $z_n\in S_k$
for $k \leq 3 + (2+\eta)\log_2 d$; these are not necessarily in the
region of ``fast`` convergence (their image under the Riemann map
$\varphi$ as constructed in \cite[Section 3.3]{D} has absolute
value at least $e^{1/2} - 1$). Since the orbit $(z_n)$ is contained
in ${D}_{2^M - 1}(0)$ by hypothesis, we have $z_n\in S_k$ with $k
\geq -M$ for all $n\geq 0$.
}
\end{description}
Since $\tau\in O(\log d)$ and the Far Case dominates the Intermediate
Case, the claim follows.
\end{proof}
We now conclude the main statement.
\begin{proof}[Proof of Theorem \ref{Thm:Main}]
By Theorem \ref{thm: set of starting points}, for each root there is
a starting point satisfying the conditions of the theorem. In
particular, these orbits are $R$-central for a universally bounded
value of $R$. Note that the $d$ roots have to compete for the
available area in $D_{2R+2}(0)$. Since the estimates in the proof of
Theorem \ref{thm:one root} are based on the area (except for the
Near Case where the orbit gets to the region of quadratic
convergence), we get the same estimate for the combined number of
iterations (except that the estimate $\log |\log \varepsilon|$
applies for each root separately, thus it is multiplied by $d$).
\end{proof}
\begin{remark}
This result is close to optimal in the sense that the power of $d$
cannot be reduced for our universal set of starting points that is
bounded away from the unit disk. The reason is that outside the unit
disk $N_p$ is conjugate to the linear map $w \mapsto \frac{d-1}{d} w$
by \cite[Lemma~4]{HSS}, so at least $O(d)$ iterations are required for
each ``good'' starting point to get close to the unit disk where the
roots are located, and at least $O(d^2)$ for all the $d$ starting
points combined.
\end{remark}
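For the special polynomial $p(z) = z^d$ the Newton map is exactly the linear map $z \mapsto \frac{d-1}{d}z$, which makes the $O(d)$ iteration count in the remark concrete: starting at distance $R$ from the origin, roughly $d\ln(R/2)$ iterations are needed just to reach $\mathbb{D}_2(0)$. A minimal sketch:

```python
d, R = 100, 3.0
z = complex(R, 0.0)
steps = 0
while abs(z) > 2:
    z = (1 - 1 / d) * z   # Newton map of p(z) = z^d, exactly linear
    steps += 1
print(steps)              # close to d * ln(R/2) = 100 * ln(1.5)
```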
% https://arxiv.org/abs/2211.09094 --- Guessing cards with complete feedback
\section{Introduction}
\subsection{Zener cards} Sometimes people present claims of having powers of extrasensory perception: a natural framework (proposed by the psychologist K. Zener and the botanist J. Rhine) in which to test such a hypothesis is that of card guessing (with so-called Zener cards).
Consider a well-mixed deck of $mn$ cards comprised of $n$ distinct types of cards each of which appears exactly $m$ times. The deck is well shuffled and then placed in front of the player (who has full knowledge of the composition of the deck). The player then has to guess the type of the card on top; after the guess is made, the card is shown to the player and then discarded from the deck. The game continues until the deck runs out. Assuming the player does \textit{not} have psychic abilities, how many correct guesses can one expect?
\begin{center}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{zener.png}
\caption{Zener cards: the figure shows the $n=5$ different types each of which appears $m=5$ times for a total of 25 cards.}
\label{fig:my_label}
\end{figure}
\end{center}
\vspace{-15pt}
A simple strategy would be to always guess the same type (say, the circle card). One is guaranteed to make exactly $m$ correct guesses. The optimal strategy is to memorize all cards that have been discarded up to now and guess a type of card that has, so far, appeared the smallest number of times (the optimality of this strategy was proven by Diaconis \& Graham \cite{DG81}). The natural question is now, assuming no powers of extrasensory perception, how many correct guesses can be expected under this optimal strategy?
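The optimal strategy is simple enough to simulate directly. The Python sketch below (our own illustration, with the number of trials chosen arbitrarily) estimates $S_{n,m}$ by Monte Carlo for the Zener deck $n = m = 5$; guessing one fixed type would score exactly $m = 5$, and the optimal strategy does noticeably better.

```python
import random

def expected_correct_guesses(n, m, trials, seed=0):
    """Monte Carlo estimate of S_{n,m}: play the optimal strategy
    (always guess a type with the most cards still in the deck,
    i.e. the fewest cards seen so far) on uniform shuffles."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        deck = [i for i in range(n) for _ in range(m)]
        rng.shuffle(deck)
        remaining = [m] * n
        for card in deck:
            guess = max(range(n), key=lambda i: remaining[i])
            if guess == card:
                total += 1
            remaining[card] -= 1
    return total / trials

s = expected_correct_guesses(5, 5, 400)
print(s)   # strictly better than the m = 5 of the fixed-type strategy
```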
This game has also been analyzed in connection with clinical trials \cite{Blackwell1957, EFRON1971} and is generally well studied: for other results and variations, we refer to
\cite{C98,Diaconis1978,DG81,diaconis2020card,diaconis2020guessing,KP01,KT21,KPP09,L21,P09,P91,S21}.
\subsection{Results.} For any given $m,n$, we consider the quantity $S_{n,m}$ describing the expected number of correctly guessed cards under the optimal strategy.
Two types of regimes are well understood. The first regime deals with the case where $n$, the number of distinct types, is fixed and the multiplicity $m$ with which each type appears becomes large.
\begin{thm}[Diaconis \& Graham \cite{DG81}]
For the number of different types $n$ fixed and the multiplicity $m$ going to infinity,
\begin{equation}\label{largemsmalln}
S_{n,m}=m+\frac{\pi}{2}M_n\sqrt{m}+o_n(\sqrt m),
\end{equation}
where $M_n$ denotes the expected value of the maximum of $n$ normal random variables.
\end{thm}
One way of interpreting the result is perhaps as follows: the frequency with which each card appears should behave roughly like a normal distribution. Exploiting the fluctuations of $n$ Gaussians and taking the one that deviates the most from its expectation suggests an asymptotic along the lines given by Diaconis \& Graham. Note that this is merely a heuristic: these card counts are not actually independent.
In the opposite regime, fixing the multiplicity $m$ and assuming the number of different types $n$ becomes large, we obtain a very different result.
\begin{thm}[He \& Ottolini \cite{he2021card}]
For the multiplicity $m$ fixed and the number of different types $n$ going to infinity,
\begin{equation}\label{largensmallm}
S_{n,m}=H_mH_n+\sum_{j=1}^{m-1}\frac{1}{j}\ln{m\choose j}+O_m(n^{-1/m}),
\end{equation}
where $H_n = 1 + \dots + 1/n$ denotes the $n$-th harmonic number.
\end{thm}
The leading order term is $H_m H_n \sim \ln{n} \ln{m}$ which, for $m$ fixed, is logarithmic growth in $n$. The second term in the expansion only depends on $m$ and is thus a constant. Empirically, the result is accurate even for small values of $m,n$ (see \cite{he2021card}).\\
A natural remaining question is what happens when both $m,n \rightarrow \infty$ with the original Zener setup $m=n$ being perhaps particularly interesting. Our main result covers a wide range of these parameters and is applicable as long as the number of different cards $n$ is slightly smaller than exponential in the number of different types $m$. Such a restriction is necessary: when $n$ becomes disproportionately large compared to $m$, the result of He \& Ottolini \cite{he2021card} shows the behavior to be different.
\begin{thm}[Main Result] \label{mainthm} Let $c, \varepsilon > 0$. If $m,n \rightarrow \infty$ while $(\ln{n})^{3+\varepsilon} \leq c \cdot m$, then
\begin{equation}\label{mnwhatever}
S_{n,m}=m+\frac{\pi}{\sqrt{2}}\sqrt{m\ln n}+o_{c,\varepsilon}(\sqrt{m\ln n}).
\end{equation}
\end{thm}
The result covers a wide range of parameters. It also suggests that there is a phase transition in the regime where there are a great many different types of cards, each of which appears only a relatively small number of times. In that regime, we expect a switch from \ref{mnwhatever} to the logarithmic asymptotics of He \& Ottolini. The nature of this transition is currently not understood and appears to be an interesting problem: the proof of our main results suggests that this phase transition may perhaps occur around $\ln{n} \sim m$. Of course, many other problems (the variance or the existence of a central limit theorem) remain open.
There is a heuristic that motivates our main result. We use $X_i(t)\in \{0,1,\ldots , m\}$ to denote the numbers of cards of type $1\leq i\leq n$ that are left in the deck when there are $1\leq t\leq nm$ cards left in total. Linearity of expectation and the description of the optimal strategy imply that
\begin{equation}\label{linearita}
S_{n,m}=\sum_{t=1}^{nm}\frac{\mathbb E[\max_i X_i(t)]}{t}.
\end{equation}
We rescale $p = t/nm$ (thus $0 \leq p \leq 1$).
The $X_i(t)$ should approximately obey a normal distribution and we could moreover assume that they are independent. This is certainly false because $X_1(t) + \dots +X_n(t) = t$ but it would simplify the problem. Pretending that the $X_i(t)$'s are independent normal random variables with the correct mean $mp$ and variance $mp(1-p)$, one would obtain
\begin{align*}
\mathbb E[\max_i X_i(t)]\approx m p+\sqrt{2mp(1-p)\ln n}.
\end{align*}
Plugging in, we obtain (after substituting $p = t/nm$)
\begin{align*}
S_{n,m}=\sum_{t=1}^{nm}\frac{\mathbb E[\max_i X_i(t)]}{t} &\approx m + \sum_{t=1}^{mn} \frac{\sqrt{2mp(1-p)\ln n}}{t} \\
&= m + \sqrt{2 m \ln{n}}\sum_{t=1}^{mn} \frac{\sqrt{p(1-p)}}{t} \\
&\approx m + \sqrt{2 m \ln{n}}\int_0^1\sqrt{\frac{1-p}{p}}dp
\end{align*}
and the integral evaluates to $\pi/2$.
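The value of the final integral is easy to confirm numerically. The sketch below (illustration only) evaluates $\int_0^1\sqrt{(1-p)/p}\,dp$ after the substitution $p=\sin^2 u$, which turns the integrand into $2\cos^2 u$ on $[0,\pi/2]$ and removes the singularity at $p=0$.

```python
import math

# Midpoint rule for \int_0^{pi/2} 2 cos^2(u) du, which equals
# \int_0^1 sqrt((1-p)/p) dp under the substitution p = sin^2(u).
N = 100_000
h = (math.pi / 2) / N
integral = sum(2 * math.cos((i + 0.5) * h) ** 2 for i in range(N)) * h
print(integral)   # should agree with pi/2 = 1.5707963...
```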
\section{Proof}
\subsection{Outline} Our proof will be motivated by the heuristic sketched above. To justify the heuristic, we will exploit a well-known representation of the $X_i(t)$'s in terms of independent binomial random variables $Y_i(t)$ conditioned on their sum. The intuition is that the maximum should only be mildly affected by the conditioning on the sum, which is indeed the case. One has to be careful in the cases of small and large $t$ (having selected almost none or almost all of the cards), where the approximations degenerate. We will split the argument into two parts. In Section \ref{sec2}, we show how to reduce the problem to the case where the $X_i$'s are independent binomials by means of a conditional representation. In Section \ref{sec3}, we prove the result in the case of independent binomials by exploiting sharp bounds on binomial tails. Section \ref{prooof} then combines these two ingredients into the proof of the main result.
\subsection{Reduction to independence}\label{sec2} We start by explaining how to reduce the problem to that of independent random variables by means of a useful conditional representation. The use of conditional limit theory to deal with order statistics of discrete processes dates back to \cite{Levin1981}, where the author exploits a conditional representation of the multinomial distribution in terms of independent Poisson conditioned on their sum.\\
Recall that, for each $1 \leq i \leq n$ and each $1\leq t\leq mn$, the random variable $X_i(t)$ counts the number of cards of type $i$ that are in the remaining $t$ cards.
Fixing a value of $t$, their joint distribution is given by a multivariate hypergeometric distribution:
\begin{equation}
\mathbb P(X_1(t)=j_1,\ldots, X_n(t)=j_n)=\frac{\prod_{i=1}^n {m \choose j_i}}{{nm \choose t}}, \quad j_1+\ldots+j_n=t, \quad 0\leq j_i\leq m.
\end{equation}
Note that, if it were not for the constraint of having a total of $t$ cards remaining
$$ \sum_{i=1}^{n} X_i(t) = \sum_{i=1}^{n} j_i=t,$$
the $X_i$ would be independent.
To overcome this issue, we consider $n$ independent and identically distributed random variables $Y_1(t), \ldots, Y_n(t)$, each of which follows a binomial distribution
$$Y_i(t)\sim \mbox{Bin}(m,p) \quad \mbox{with}~ p=\frac{t}{mn}.$$
We also introduce their sum
$$\tilde Y(t)=\sum_{i=1}^{n} Y_i(t)$$
and note that $\tilde Y(t)\sim \mbox{Bin}(mn,p)$.
We will use the fact that, for fixed $t$, the distribution of the remaining cards can be realized via independent and identically distributed binomial random variables conditioned on having the correct sum (in particular, after the conditioning the binomial random variables are no longer independent). This well-known characterization of the hypergeometric distribution (see, e.g., \cite{Skibinsky1970}) has a simple proof that we report here for the sake of completeness.\\
\begin{lemma}\label{conditionalrep}
For any $n,m$ and $1\leq t\leq mn$ fixed, we have
\begin{align*}
\mathcal L(X_1(t),\ldots, X_n(t))=\mathcal L(Y_1(t),\ldots, Y_n(t)|\tilde Y(t)=t),
\end{align*}
where $\mathcal{L}$ denotes the joint law of the random vector.
\end{lemma}
\begin{proof}
Consider any $n$-tuple $0\leq j_i\leq m$ with $\sum_{i=1}^n j_i=t$, and let $p=t/mn$. By definition of conditional expectation
\begin{align*}
\mathbb{P}(Y_1(t)=j_1,\ldots, Y_n(t)=j_n|\tilde Y(t)=t)&=\frac{\mathbb P(Y_1(t)=j_1,\ldots, Y_n(t)=j_n)}{\mathbb P(\tilde Y(t)=t)},
\end{align*}
where the conditioning event $\{\tilde Y(t) = t\}$ can be dropped from the numerator because $\sum_{i=1}^n j_i=t$ by design. We can now use the independence of the $Y_i$ to compute
\begin{align*}
\mathbb P(Y_1(t)=j_1,\ldots, Y_n(t)=j_n) &= \prod_{i=1}^{n} {m \choose j_i}p^{j_i}(1-p)^{m-j_i} \\
&= p^{\sum_{i=1}^{n} j_i} (1-p)^{mn - \sum_{i=1}^{n} j_i }\prod_{i=1}^{n} {m \choose j_i} \\
&= p^t (1-p)^{mn - t} \prod_{i=1}^{n} {m \choose j_i}.
\end{align*}
Simultaneously, since $\tilde Y(t)\sim \mbox{Bin}(mn,p)$, we have
$$ \mathbb P(\tilde Y(t)=t) = \binom{mn}{t} p^t (1-p)^{mn-t}$$
from which we deduce
\begin{align*}
\mathbb{P}(Y_1(t)=j_1,\ldots, Y_n(t)=j_n|\tilde Y(t)=t)=\frac{\prod_{i=1}^n {m \choose j_i}}{{nm \choose t}},
\end{align*}
which is the desired statement.
\end{proof}
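Lemma~\ref{conditionalrep} can also be verified numerically by exact enumeration; the following sketch (helper names are ours) compares the two probability mass functions for small $n$, $m$, $t$:

```python
import math
from itertools import product

def hypergeom_pmf(js, m):
    """P(X_1 = j_1, ..., X_n = j_n) for the multivariate hypergeometric law."""
    n, t = len(js), sum(js)
    return math.prod(math.comb(m, j) for j in js) / math.comb(n * m, t)

def conditioned_binomial_pmf(js, m):
    """P(Y_1 = j_1, ..., Y_n = j_n | Y_1 + ... + Y_n = t), Y_i ~ Bin(m, t/(nm))."""
    n, t = len(js), sum(js)
    p = t / (n * m)
    num = math.prod(math.comb(m, j) * p**j * (1 - p)**(m - j) for j in js)
    den = math.comb(n * m, t) * p**t * (1 - p)**(n * m - t)
    return num / den

# All admissible tuples for n = 3 types, m = 4 copies, t = 5 remaining cards.
admissible = [js for js in product(range(5), repeat=3) if sum(js) == 5]
```

The two functions agree on every admissible tuple, and the probabilities sum to one, exactly as the algebra in the proof above predicts.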
We define $\tilde S_{n,m}$ to be the analogue of \eqref{linearita} where we replace the hypergeometric random variables with \textit{independent} binomial random variables, i.e.
\begin{equation} \label{eq:better}
\tilde S_{n,m}=\sum_{t=1}^{mn}\frac{\mathbb E[\max_i Y_i(t)]}{t}.
\end{equation}
Note that, according to Lemma \ref{conditionalrep}, if we condition the binomial random variables on having the correct sum $t$, we recover the hypergeometric distribution exactly: the purpose of the next lemma is to show that omitting this conditioning leads to a small error. This will then conclude the first part of the proof, the remainder of which is dedicated to the study of \eqref{eq:better}.
\begin{lemma}\label{randomizing}
We have, for some universal $C>0$,
\begin{align*}
|S_{n,m}-\tilde S_{n,m}| \leq C(\sqrt m+\ln n).
\end{align*}
\end{lemma}
\begin{proof}
Owing to Lemma \ref{conditionalrep}, we can replace the independent binomial random variables $Y_i$ by the hypergeometric random variables $X_i$ provided that we condition on their sum: this allows us to write
\begin{align*}
S_{n,m}-\tilde S_{n,m}&=\sum_{t=1}^{nm} \frac{\mathbb E[\max_i X_i(t)]-\mathbb E[\max_i Y_i(t)]}{t}\\&=\sum_{t=1}^{nm}\frac{1}{t}\sum_{s=1}^{nm}\left(\mathbb E[\max_i X_i(t)]-\mathbb E[\max_i X_i(s)]\right)\mathbb P(\tilde Y(t)=s).
\end{align*}
One would of course now expect that $\mathbb P(\tilde Y(t)=s)$ is tightly concentrated around the mean $\mathbb E[\tilde Y(t)]=t$: it thus suffices to understand how quickly the maximum can change for $|t-s|$ relatively small (without loss of generality, we assume from now on $s<t$).
We therefore have to understand the likelihood $\mathbb P\left(\max_i X_i(t+1)>\max_i X_i(t)\right)$. The maximum can only increase if a card is picked whose type already appears a maximal number of times. This suggests conditioning on the number of types that have appeared a maximal number of times, and we note that
\begin{equation}\label{excursion}
\mathbb P\left(\max_i X_i(t+1)>\max_i X_i(t)\,\Big|\, |\{\ell: X_{\ell}(t)=\max_j X_j(t)=k\}|=j\right)=\frac{(m-k)j}{nm-t}\leq \frac{j}{n},
\end{equation}
where we used that
$$ t = \sum_{i=1}^{n} X_i(t) \leq n \max_{1 \leq i \leq n} X_i(t)$$
implying
$$ k = \max_{1\leq i \leq n} X_i(t) \geq\frac{t}{n}.$$
The only case in which \eqref{excursion} is saturated is the configuration in which $j$ cards appear with multiplicity $k$, and all other cards appear with multiplicity $k-1$. In this case the maximum increases with probability precisely $j/n$.
One way of seeing this is as follows: if the other cards had only appeared rarely up to that point, we would be more likely to pick one of these cards since there are still more of them in the pile. The most likely transition to a new maximum happens if the chance of picking a card that is already chosen a maximal number of times is greatest. \\
We will now present an argument which, implicitly, works under the assumption that we are constantly in the worst case setting described above (in a suitable sense). We introduce a Markov chain whose role is to keep track of the number of different types of cards that currently appear with the maximal multiplicity $\max_{i} X_i(t)$. Note that, in particular, if the maximum increases, then there is exactly one type with maximal multiplicity and the counter drops back to 1. The Markov chain will be
operating on the state space $\{1,\ldots, n\}$: it is possible to move from each point $j$ to either $j+1$ or back to $1$. The corresponding transition probabilities $q_{i,j}$ are given by
\begin{align*}
q_{j, j+1}=\frac{n-j}{n} \quad \mbox{for} \quad 1\leq j\leq n-1 \qquad \mbox{and} \qquad q_{j,1}=\frac{j}{n} \quad \mbox{for} \quad 1\leq j\leq n.
\end{align*}
The further the Markov chain is from 1, the more likely it is to return to 1 (corresponding to uncovering a new maximum). We also observe that the Markov chain is more likely to return to 1 than we are to uncover a new maximum (because the card deck will not always be in the worst case scenario assumed above): more formally, for all $1\leq k\leq m$, $1\leq t\leq mn$ and $1\leq j\leq j'\leq n$
\begin{align*}
\mathbb P\left(\max_i X_i(t+1)>\max_i X_i(t)\,\Big|\, |\{\ell: X_{\ell}(t)=\max_j X_j(t)=k\}|=j\right)\leq q_{j,1}\leq q_{j',1}.
\end{align*}
The number of times at which $\max_i X_i(t)$ changes is bounded above by the number $N_{t-s}$ of excursions away from $1$ of the Markov chain.
In particular,
\begin{align*}
\mathbb E[\max_i X_i(t)]-\mathbb E[\max_i X_i(s)]\leq \mathbb E[N_{t-s}].
\end{align*}
It thus remains to understand how often the Markov chain is going to hit the state 1.
Let now $T$ be the return time to $1$ of the Markov chain. Renewal theory suggests that $\mathbb E[N_{t-s}]$ should be approximately $(t-s)/\mathbb E[T]$ for $t-s$ large. This is made precise in \cite{renewal}, whose result gives the estimate
\begin{align*}
\mathbb E[N_{t-s}]\leq \frac{t-s}{\mathbb E[T]}+O\left(\frac{\mathbb E[T^2]}{(\mathbb E[T])^2}\right).
\end{align*}
Note that $T$ is nothing but the time to observe a birthday coincidence in the classical birthday problem with $n$ possible birthdays. In particular,
$$\mathbb P(T\geq s)=\left(1-\frac{s}{n}\right)\ldots \left(1-\frac{1}{n}\right)$$
and using the estimate
\begin{align*}
\exp\left(-\frac{s^2}{2n}+O\left(\frac{s^3}{n^2}\right)\right)\leq \left(1-\frac{s}{n}\right)\ldots \left(1-\frac{1}{n}\right)\leq \exp\left(-\frac{s^2}{2n}\right),
\end{align*}
we obtain the well-known results
$\mathbb E[T]=\Omega(\sqrt n)$ and $\mathbb E[T^2]=O(n)$
and thus
\begin{align*}
\mathbb E[\max_i X_i(t)]-\mathbb E[\max_i X_i(s)]=O\left(\frac{t-s}{\sqrt n}+1\right).
\end{align*}
This implies
\begin{align*}
|S_{n,m}-\tilde S_{n,m}|&=O\left(\frac{1}{\sqrt n}\sum_{t=1}^{nm}\frac{1}{t}\sum_{s=1}^{mn}|t-s|\,\mathbb P(\tilde Y(t)=s)+\sum_{t=1}^{nm}\frac{1}{t}\right).
\end{align*}
The second sum is $O(\ln{(mn)})$. As for the first sum, we first note that, by Cauchy--Schwarz, for any random variable $X$,
$$ \mathbb{E} \left| X - \mathbb{E}X \right| \leq \sqrt{\mathbb{V} X}.$$
We also observe that $\tilde Y(t)\sim \mbox{Bin}(nm,p)$ with $p=t/nm$ has standard deviation $\sqrt{mnp(1-p)}$
and thus
$$ \sum_{s=1}^{mn} |t-s|\,\mathbb P(\tilde Y(t)=s) \leq \sqrt{m n p (1-p)}.$$
This simplifies the first sum and leads to the desired bound since
\begin{align*}
\frac{1}{\sqrt n}\sum_{t=1}^{nm}\frac{1}{t}\sum_{s=1}^{mn}|t-s|\,\mathbb P(\tilde Y(t)=s) &\leq \frac{1}{\sqrt n}\sum_{t=1}^{nm}\frac{1}{t} \sqrt{mn p (1-p)} \\
&=\sqrt m\sum_{t=1}^{nm}\frac{1}{t} \sqrt{\frac{t}{mn} \left(1 - \frac{t}{mn}\right)} \\
&\leq c\sqrt{m} \int_0^1 \frac{1}{x}\sqrt{x(1-x)} dx \leq C \sqrt{m}.
\end{align*}
\end{proof}
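The birthday-problem asymptotics used in the proof above can be checked by direct computation, since $\mathbb E[T]=\sum_{s\geq 0}\mathbb P(T>s)$ with $\mathbb P(T>s)=\prod_{j=1}^{s}\left(1-\frac{j}{n}\right)$; a small numerical sketch (function name ours):

```python
import math

def expected_return_time(n):
    """E[T] for the return time to 1 of the chain, via E[T] = sum_{s>=0} P(T > s)."""
    expectation, survival = 1.0, 1.0   # s = 0 term: P(T > 0) = 1
    for j in range(1, n):
        survival *= 1.0 - j / n        # survival = P(T > j)
        expectation += survival
    return expectation
```

For $n=365$ (the classical birthday setting) this gives a value close to $\sqrt{\pi n/2}\approx 23.9$, consistent with $\mathbb E[T]=\Theta(\sqrt n)$.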
\subsection{The independent case}\label{sec3} It now suffices to analyze
$$\tilde S_{n,m}=\sum_{t=1}^{nm}\frac{\mathbb E[\max_i Y_i(t)]}{t}$$
where the $Y_i$ are \textit{independent} binomial random variables and
$$Y_i\sim \mbox{Bin}(m,p) \quad \mbox{with}~ p=\frac{t}{mn}.$$
We start by centering these random variables: we let $\overline Y_i(t)=Y_i(t)-t/n$, which reduces our problem to the study of
\begin{equation}\label{tails}
\tilde S_{n,m}=m+\sum_{t=1}^{nm}\frac{\mathbb E[\max_i \overline Y_i(t)]}{t}.
\end{equation}
These binomial random variables are well approximated by a normal distribution in regions where their variance is not too small: this naturally suggests splitting the problem into different regions. We write, for some $ 0 < s=s_{n,m} \ll 1$ to be determined later (which will ultimately tend to 0 at a suitable rate),
\begin{align*}
\tilde S_{n,m} =m &+ \sum_{t=snm}^{(1-s)nm}\frac{\mathbb E[\max_i \overline Y_i(t)]}{t} \\
&+\sum_{t=1}^{s nm}\frac{\mathbb E[\max_i \overline Y_i(t)]}{t} +
\sum_{t= (1-s) nm}^{nm}\frac{\mathbb E[\max_i \overline Y_i(t)]}{t}.
\end{align*}
We treat the first sum by comparing with a normal distribution. The two remaining sums are tail events that will be treated separately. We start with the tail events.\\
\textbf{The tail sums.}
In order to deal with the tails, we just need an upper bound of the right order, and we can afford to lose a factor of $\sqrt{1-p}$. This follows from a Chernoff-type argument: for all $\theta>0$, by Jensen's inequality and a union bound,
\begin{align*}
\exp\{\theta\, \mathbb{E}[\max_i \overline Y_i(t)]\} \leq \mathbb E\left[\exp\{\theta \max_i \overline Y_i(t)\}\right] &\leq \sum_{i=1}^{n} \mathbb E\left[\exp\{\theta \overline Y_i(t)\}\right] \\
&=
n\,\mathbb{E}\left[\exp\{\theta \overline Y_1(t)\}\right] = n(1-p+pe^{\theta})^me^{-\theta m p},
\end{align*}
so that, taking logarithms of both sides and using $\ln(1+x)\leq x$, we obtain
\begin{align*}
\mathbb E[\max_i \overline Y_i(t)]\leq \frac{\ln n}{\theta}+\frac{mp}{\theta}(e^{\theta}-1)-mp.
\end{align*}
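The moment bound $\frac{\ln n}{\theta}+\frac{mp}{\theta}(e^{\theta}-1)-mp$ can be compared with the exact value of $\mathbb E[\max_i \overline Y_i(t)]$ for small parameters; a quick numerical sketch (helper names ours):

```python
import math

def binom_cdf(k, m, p):
    """P(Bin(m, p) <= k), computed by direct summation."""
    return sum(math.comb(m, j) * p**j * (1 - p)**(m - j) for j in range(k + 1))

def exact_expected_max(n, m, p):
    """E[max of n iid Bin(m, p)] via E[max] = sum_{k>=1} P(max >= k)."""
    return sum(1.0 - binom_cdf(k - 1, m, p)**n for k in range(1, m + 1))

def chernoff_bound(n, m, p, theta):
    """Upper bound ln(n)/theta + (m p/theta)(e^theta - 1) - m p, any theta > 0."""
    return math.log(n) / theta + (m * p / theta) * (math.exp(theta) - 1.0) - m * p
```

For instance, for $n=50$, $m=30$, $p=0.3$, the exact centered maximum lies below the bound for every $\theta>0$ tried.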
We start with the case where $p$ is close to 0, which is the hardest to deal with, since this is the regime where the feedback is most relevant. The detailed knowledge of the composition of the deck allows for many extra correct guesses -- captured by the logarithmic singularity close to $t=0$ in \eqref{tails}. We deal with this by considering the choice
\begin{align*}
\theta=\sqrt{\frac{2\ln n}{m}}\left(\ln\frac{1}{p}\right)^{1+\varepsilon}
\end{align*}
Since $p = t/(mn) \geq 1/(mn)$ we deduce
$$\ln \frac{1}{p}\leq \ln n+\ln m$$ and thus
\begin{align*}
\theta=O\left(\sqrt{\frac{(\ln n+\ln m)^{3+2\varepsilon}}{m}}\right)=o(1)
\end{align*}
using the main assumption on $m$ and $n$ from the Main Theorem. Since $\theta$ tends to 0, we can replace the exponential function $e^{\theta}$ by a second-order Taylor expansion and obtain
\begin{align*}
\mathbb E[\max_i \overline Y_i(t)]&\leq \frac{\ln n}{\theta}+\frac{mp\theta}{2}+o(\theta)\\&=O\left(\sqrt{m\ln n}\left[\left(\ln\frac{1}{p}\right)^{-1-\varepsilon}+p\left(\ln\frac{1}{p}\right)^{1+\varepsilon}\right]\right)\\
&= O\left( \sqrt{m\ln n}\left(\ln\frac{1}{p}\right)^{-1-\varepsilon} \right).
\end{align*}
Therefore, we conclude
\begin{align*}
\sum_{t=1}^{s nm}\frac{\mathbb E[\max_i \overline Y_i(t)]}{t} &\lesssim \sqrt{m \ln{n}} \sum_{t=1}^{s m n} \frac{1}{t} \left(\ln \frac{mn}{t} \right)^{-1 - \varepsilon}.
\end{align*}
Comparing to the integral, we have
$$\sum_{t=1}^{s m n} \frac{1}{t} \left(\ln \frac{mn}{t} \right)^{-1 - \varepsilon} \lesssim \int_0^{s} \frac{1}{x} \left(\ln \frac{1}{x} \right)^{-1 - \varepsilon} dx.$$
The integrand has an antiderivative in closed form
$$ \int \frac{1}{x} \left(\ln \frac{1}{x} \right)^{-1 - \varepsilon} dx = \frac{1}{\varepsilon} \left( \ln\frac{1}{x} \right)^{-\varepsilon}$$
allowing us to deduce that
$$ \int_0^{s} \frac{1}{x} \left(\ln \frac{1}{x} \right)^{-1 - \varepsilon} dx =\frac{1}{\varepsilon} \left( \ln \frac{1}{s} \right)^{-\varepsilon}$$
which, for fixed $\varepsilon$, tends to 0 provided that $s$ tends to 0. In the second regime, $p$ close to 1, we choose
\begin{align*}
\theta=\sqrt{\frac{2\ln n}{m}}
\end{align*}
which is guaranteed to converge to 0 as $m,n \rightarrow \infty$. A Taylor expansion and the observation $p\leq 1$ show
\begin{align*}
\mathbb E[\max_i \overline Y_i(t)]\leq \frac{\ln n}{\theta}+\frac{mp\theta}{2}+o(\theta)=O(\sqrt{m\ln n}).
\end{align*}
From here, we conclude since
\begin{align*}
\sum_{t=(1-s)mn}^{ nm -1}\frac{\mathbb E[\max_i \overline Y_i(t)]}{t} &\lesssim \sqrt{m \ln{n}} \sum_{t=(1-s)mn}^{m n -1}\frac{1}{t} \lesssim s \sqrt{m\ln n}
\end{align*}
which again tends to 0 as $s \rightarrow 0$.
\subsection{The main term}
As for the main term, we need sharp asymptotics for the maximum of independent binomials. This amounts to controlling their tail probabilities. The following is a corollary of a more general result by Feller \cite{feller}.
\begin{lemma}[Feller \cite{feller}]\label{fellermagic}
Let $Y_{i}(t) \sim \mbox{Bin}(m,p)$ be i.i.d.\ and $p = t/(nm)$. Let
\begin{align*}
Z_i(t)=\frac{Y_i(t)-mp}{\sqrt{mp(1-p)}}
\end{align*}
and assume that $p, 1-p\geq s_{n,m}$, where $s=s_{n,m}$ is chosen so that
\begin{align*}
\frac{\ln^3n}{ms(1-s)}=o(1), \quad \frac{m^{6/7}}{ms(1-s)}=o(1).
\end{align*}
Then, one has
\begin{align*}
\mathbb E[\max Z_i(t)]= \sqrt{2\ln n} + \mathcal{O}(1)
\end{align*}
with an error bound uniform over all such $p$.
\end{lemma}
The assumption in the Main Theorem guarantees that we can find such a sequence with
$s_{n,m}=m^{-\varepsilon'}$ for some $\varepsilon'=\varepsilon'(\varepsilon)>0$ sufficiently small. Therefore, we will use the notation $O_{\varepsilon}(1)$ to indicate that the implied constant may depend on $\varepsilon$.
\begin{proof}
We follow the notation of Theorem 1 of Feller \cite{feller} applied to binomial random variables, which gives a uniform estimate
\begin{align*}
\mathbb P(Z_i(t)\geq x)=(1-\Phi(x))\left(1+O\left(\frac{x^3}{\sqrt{mp(1-p)}}\right)\right)
\end{align*}
where $\Phi$ denotes the cumulative distribution of a standard normal random variable.
In particular, for $x=\sqrt{2\ln n}$ the error is small by our assumption and we can write the error term as $o_{\varepsilon}(1)$. Using, for $x=\sqrt{2\ln n}$, the elementary estimate
\begin{align*}
n(1-\Phi(x))=n\frac{e^{-x^2/2}}{\sqrt{2\pi}x}\left(1+O\left(1/x^2\right)\right)\geq \frac{c}{\sqrt{\ln n}},
\end{align*}
for some absolute constant $c$ and all $n\geq 2$, we derive the bound
\begin{align*}
\mathbb P(\max Z_i(t)\leq x)&=\left(\mathbb P(Z_i(t)\leq x)\right)^n\\&=\left(1-\frac{n\mathbb P(Z_i(t)\geq x)}{n}\right)^n\\&=1-O\left(\frac{1}{\sqrt{\ln n}}(1+o_{\varepsilon}(1))\right).
\end{align*}
From this, we get a lower bound on expectation
\begin{align*}
\mathbb E[\max Z_i(t)]&\geq x \cdot \mathbb P(\max_i Z_i(t)\leq x)=\sqrt{2\ln n}+O_{\varepsilon}(1).
\end{align*}
As for the upper bound, we slightly extend the range and consider values of $x$ up to $x\leq \sqrt{2(\ln n+m^{1/7})}$. This range is still admissible since
$$\frac{x^3}{\sqrt{mp(1-p)}}\rightarrow 0$$
owing to our assumptions. Using the standard bound
$$ 1 - \Phi(x) \leq \frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi} x}$$
for cumulative distribution function of the Gaussian, we infer that if $x=\sqrt{2(\ln n+c)}$ for some $0\leq c\leq m^{1/7}$, then
\begin{align*}
n(1-\Phi(x))\leq \frac{ne^{-\frac{x^2}{2}}}{\sqrt{2\pi}x}=O\left(\frac{e^{-c}}{\sqrt{2\ln n}}\right)
\end{align*}
from which we obtain
\begin{align*}
n \cdot\mathbb P(Z_i(t)\geq x)= n(1-\Phi(x))(1+o_{\varepsilon}(1))=O_{\varepsilon}\left(\frac{e^{-c}}{\sqrt{\ln n}}\right).
\end{align*}
In order to bound the expectation, let $$M=\sqrt{m\max\left(\frac{1-p}{p},\frac{p}{1-p}\right)}$$
be the maximum possible value of $|Z_i(t)|$. Notice that, for instance, $M\leq m$ owing to our assumption on $p$ and $1-p$. Then we obtain
\begin{align*}
\mathbb E[\max Z_i(t)]&\leq \int_0^{M}\mathbb P(\max Z_i(t)\geq x)dx\\&\leq \sqrt{2\ln n}+\int_{{\sqrt{2\ln n}}}^{\sqrt{2(\ln n+m^{1/7})}}n\mathbb P(Z_i(t)\geq x)dx+Me^{-m^{1/7}}.
\end{align*}
The last term is certainly $o(1)$, while a change of variable shows that the second term is
\begin{align*}
\int_{{\sqrt{2\ln n}}}^{\sqrt{2(\ln n+m^{1/7})}}n\,\mathbb P(Z_i(t)\geq x)dx=O_{\varepsilon}\left( \int_0^{\infty}\frac{e^{-c}}{\sqrt{2\ln n+c}}dc\right)=O_{\varepsilon}\left(\frac{1}{\sqrt{\ln n}}\right).
\end{align*}
Collecting all the pieces, we have
\begin{align*}
\mathbb E[\max_i Z_i(t)]=\sqrt{2\ln n}+O_{\varepsilon}(1).
\end{align*}
\end{proof}
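The Gaussian prediction underlying Lemma~\ref{fellermagic}, namely that the maximum of $n$ standard normals is $\sqrt{2\ln n}+O(1)$ in expectation, is easy to check numerically; the following sketch (our own, using midpoint integration of the distribution function) is purely illustrative:

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_max_gaussian(n, lo=-8.0, hi=8.0, steps=40000):
    """E[max of n iid N(0,1)] via E[M] = int_0^inf P(M > x) dx - int_{-inf}^0 P(M <= x) dx."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        fn = normal_cdf(x) ** n       # P(M <= x)
        total += (1.0 - fn) * h if x >= 0 else -fn * h
    return total
```

For $n$ in the hundreds or thousands, the computed expectation stays within an additive constant of $\sqrt{2\ln n}$.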
\subsection{Proof of the Main Result} \label{prooof}
\begin{proof}
Using Lemma \ref{fellermagic}, we have
\begin{align*}
\mathbb E\left[\max_i Y_i(t)-\frac{t}{n}\right] = \sqrt{m p(1-p)} \sqrt{2 \ln n } + \mathcal{O}(\sqrt{m p(1-p)})
\end{align*}
with error term uniform for all $p, 1-p\geq s\rightarrow 0$. Combining Lemma \ref{randomizing} with the control on the tail, we have
\begin{align*}
S_{n,m}-m &= \tilde S_{n,m}-m + \mathcal{O}(\sqrt m+\ln n)\\
&= \sum_{t=1}^{nm}\frac{\mathbb E[\max_i Y_i(t)]}{t} - m + \mathcal{O}(\sqrt m+\ln n)\\
&= \sum_{t= 1}^{nm}\frac{\mathbb E[\max_i \overline Y_i(t)]}{t}+ \mathcal{O}(\sqrt m+\ln n).
\end{align*}
At this point, we split the sum into the three regions
$$ \sum_{t=1}^{mn} = \sum_{t=1}^{smn} + \sum_{t=smn}^{(1-s)mn} + \sum_{t=(1-s)mn}^{mn}.$$
As was shown in Section~\ref{sec3}, as long as $s \rightarrow 0$, the first and the third sum
are $o(\sqrt{m \ln{n}})$. The sum in the middle, provided $s$ does not tend to 0 too quickly,
will then contribute
$$ \sum_{t= smn}^{(1-s)nm}\frac{\mathbb E[\max_i \overline Y_i(t)]}{t} = \left(1+\mathcal{O}\left( \frac{1}{\sqrt{\ln{n}}}\right)\right)\sum_{t= smn}^{(1-s)nm}\frac{ \sqrt{m p(1-p)} \sqrt{2 \ln n } }{t}.$$
The sum can now be simplified to
$$ \sum_{t= smn}^{(1-s)nm}\frac{ \sqrt{m p(1-p)} \sqrt{2 \ln n } }{t} = \sqrt{2 m \ln{n}} \sum_{t= smn}^{(1-s)nm}\frac{ \sqrt{ p(1-p)} }{t} $$
which, recalling $p = t/mn$, is a Riemann sum converging, as $s \rightarrow 0$, to
$$\int_0^1\sqrt{\frac{1-p}{p}}\,dp = \frac{\pi}{2}.$$
This completes the proof.
\end{proof}
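For completeness, the value of the final integral follows from the substitution $p=\sin^2\vartheta$, for which $dp=2\sin\vartheta\cos\vartheta\,d\vartheta$ and $\sqrt{(1-p)/p}=\cos\vartheta/\sin\vartheta$:
\begin{align*}
\int_0^1\sqrt{\frac{1-p}{p}}\,dp=2\int_0^{\pi/2}\cos^2\vartheta\,d\vartheta=\int_0^{\pi/2}(1+\cos 2\vartheta)\,d\vartheta=\frac{\pi}{2}.
\end{align*}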
\section*{Acknowledgment}
We warmly thank Persi Diaconis for suggesting the problem and for some useful references.
\bibliographystyle{abbrv}
% arXiv:1802.08855 -- Minimax Distribution Estimation in Wasserstein Distance
\section{Preliminary Lemmas and Proof Sketch of Theorem~\ref{thm:expectation_bound_appendix}}
\label{sec:lemmas}
In this section, we outline the proof of Theorem~\ref{thm:expectation_bound}, our upper bound for the case of totally bounded metric spaces. The proof of the more general Theorem~\ref{thm:unbounded_upper_bound} for unbounded metric spaces, which is given in the next section, builds on this.
We begin by providing a few basic lemmas; these lemmas are not fundamentally novel, but they will be used in the subsequent proofs of our main upper and lower bounds, and also help provide intuition for the behavior of the Wasserstein metric and its connections to other metrics between probability distributions. The proofs of these lemmas are given later, in Appendix~\ref{app:proofs}. Our first lemma relates Wasserstein distance to the notion of resolution of a partition.
\begin{lemma}
Suppose $\S \in \SS$ is a countable Borel partition of $\Omega$. Let $P$ and $Q$ be Borel probability measures such that, for every $S \in \S$, $P(S) = Q(S)$. Then, for any $r \geq 1$, $W_r(P, Q) \leq \operatorname{Res}(\S)$.
\label{lemma:measures_agree_on_partition}
\end{lemma}
Our next lemma gives simple lower and upper bounds on the Wasserstein distance between distributions supported on a countable subset $\X \subseteq \Omega$, in terms of $\Diam(\X)$ and $\Sep(\X)$. Since our main results will utilize coverings and packings to approximate $\Omega$ by finite sets, this lemma will provide a first step towards approximating (in Wasserstein distance) distributions on $\Omega$ by distributions on these finite sets. Indeed, the lower bound in Inequality~\eqref{ineq:countable_support_bound} will suffice to prove our lower bounds, although a tighter upper bound, based on the upper bound in~\eqref{ineq:countable_support_bound}, will be necessary to obtain tight upper bounds.
\begin{lemma}
Suppose $(\Omega, \rho)$ is a metric space, and suppose $P$ and $Q$ are Borel probability distributions on $\Omega$ with countable support; i.e., there exists a countable set $\X \subseteq \Omega$ with $P(\X) = Q(\X) = 1$. Then, for any $r \geq 1$,
\begin{equation}
(\Sep(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|
\leq W_r^r(P,Q)
\leq (\Diam(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|.
\label{ineq:countable_support_bound}
\end{equation}
\label{lemma:countable_support_bound}
\end{lemma}
\begin{remark}
Recall that the term $\sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|$
in Inequality~\eqref{ineq:countable_support_bound} is the $\L_1$ distance
\[\|p - q\|_1 := \sum_{x \in \X} \left| p(x) - q(x) \right|\]
between the densities $p$ and $q$ of $P$ and $Q$ with respect to the counting measure on $\X$, and that this same quantity is twice the total variation distance
\[TV(P,Q) := \sup_{A \subseteq \Omega} \left| P(A) - Q(A) \right|.\]
Hence, Lemma~\ref{lemma:countable_support_bound} can be equivalently written as
\[\Sep(\Omega) \left( \tfrac12 \|p - q\|_1 \right)^{1/r} \leq W_r(P,Q) \leq \Diam(\Omega) \left( \tfrac12 \|p - q\|_1 \right)^{1/r}\]
and as
\[\Sep(\Omega) \left( TV(P,Q) \right)^{1/r} \leq W_r(P,Q) \leq \Diam(\Omega) \left( TV(P,Q) \right)^{1/r},\]
bounding the $r$-Wasserstein distance in terms of the $\L_1$ and total variation distance.
As noted in Example~\ref{ex:discrete_bound}, equality holds in \eqref{ineq:countable_support_bound} precisely when $\rho$ is the unit discrete metric given by $\rho(x,y) = 1_{\{x \neq y\}}$ for all $x,y \in \Omega$.
On metric spaces that are discrete (i.e., when $\Sep(\Omega) > 0$), the Wasserstein metric is (topologically) at least as strong as the total variation metric (and the $\L_1$ metric, when it is well-defined), in that convergence in Wasserstein metric implies convergence in total variation (and $\L_1$, respectively). On the other hand, on bounded metric spaces, the converse is true. In either of these cases, \emph{rates} of convergence may differ between metrics, although, in metric spaces that are both discrete \textit{and} bounded (e.g., any finite space), we have $W_r \asymp TV^{1/r}$.
\label{remark:Wasserstein_L1_TV}
\end{remark}
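On a two-point space the optimal transport can be computed by hand, which gives a concrete numerical illustration of the relation between $W_r$ and total variation; the helper names below are ours:

```python
def wasserstein_two_point(d, a, b, r):
    """W_r on Omega = {0, d} between P = (a, 1-a) and Q = (b, 1-b):
    the optimal coupling moves exactly |a - b| mass across distance d."""
    return (d**r * abs(a - b)) ** (1.0 / r)

def tv_two_point(a, b):
    """Total variation distance between (a, 1-a) and (b, 1-b)."""
    return abs(a - b)
```

Here $\Sep(\Omega)=\Diam(\Omega)=d$ and $W_r^r=d^r\,TV(P,Q)$ exactly; in particular, for the unit discrete metric ($d=1$) one recovers $W_r = TV^{1/r}$.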
To obtain tight bounds as discussed below, we will require not only a partition of the sample space $\Omega$, but a nested sequence of partitions, defined as follows.
\begin{definition}[Refinement of a Partition, Nested Partitions]
Suppose $\S, \T \in \SS$ are partitions of $\Omega$. $\T$ is said to be a \emph{refinement of $\S$} if, for every $T \in \T$, there exists $S \in \S$ with $T \subseteq S$. A sequence $\{\S_k\}_{k \in \N}$ of partitions is called \emph{nested} if, for each $k \in \N$, $\S_{k + 1}$ is a refinement of $\S_k$.
\end{definition}
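As a concrete example, the dyadic partitions of $[0,1)$ form a nested sequence in this sense, each finer partition refining the coarser ones; a small sketch (function names ours):

```python
def dyadic_partition(k):
    """Partition of [0, 1) into 2^k half-open intervals [i/2^k, (i+1)/2^k)."""
    return [(i / 2**k, (i + 1) / 2**k) for i in range(2**k)]

def refines(fine, coarse):
    """True iff every cell of `fine` is contained in some cell of `coarse`."""
    return all(any(c <= a and b <= d for (c, d) in coarse) for (a, b) in fine)

def resolution(partition):
    """Supremum of the cell diameters."""
    return max(b - a for (a, b) in partition)
```

Each level halves the resolution: $\operatorname{Res}$ of the level-$k$ dyadic partition is $2^{-k}$.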
While Lemma~\ref{lemma:countable_support_bound} gave a simple upper bound on the Wasserstein distance, the factor of $\Diam(\Omega)$ turns out to be too large to obtain tight rates for a number of cases of interest (such as the $D$-dimensional unit cube $\Omega = [0,1]^D$, discussed in Example~\ref{ex:unit_cube_lower_bound}). The following lemma gives a tighter upper bound, based on a hierarchy of nested partitions of $\Omega$; this allows us to obtain tighter bounds (than $\Diam(\Omega)$) on the distance that mass must be transported between $P$ and $Q$.
Note that, when $K = 1$, Lemma~\ref{lemma:nested_partitions_Wasserstein_bound} reduces to a trivial combination of Lemmas~\ref{lemma:measures_agree_on_partition} and \ref{lemma:countable_support_bound}; indeed, these lemmas are the starting point for proving Lemma~\ref{lemma:nested_partitions_Wasserstein_bound} by induction on $K$.
Note that the idea of such a ``multi-resolution'' upper bound has been utilized extensively before, and numerous versions have been proven before (see, e.g., Fact 6 of \citet{do2011sublinearTimeEMD}, Lemma 6 of \citet{fournier2015rate}, or Proposition 1 of \citet{weed2017sharp}). Most of these versions have been specific to Euclidean space; to the best of our knowledge, only Proposition 1 of \citet{weed2017sharp} applies to general metric spaces. However, that result also requires that $(\Omega,\rho)$ is totally bounded (more precisely, that $m_x^\infty(P) < \infty$, for some $x \in \Omega$).
\begin{lemma}
Let $K$ be a positive integer. Suppose $\{\S_k\}_{k \in [K]}$ is a nested sequence of countable Borel partitions of $(\Omega,\rho)$. Then, for any $r \geq 1$ and Borel probability measures $P$ and $Q$ on $\Omega$,
\begin{equation}
W_r^r(P, Q)
\leq (\operatorname{Res}(\S_K))^r + \sum_{k = 1}^K \left( \operatorname{Res}(\S_k) \right)^r
\left( \sum_{S \in \S_k} \left| P(S) - Q(S) \right| \right).
\label{ineq:multiresolution_bound}
\end{equation}
\label{lemma:nested_partitions_Wasserstein_bound}
\end{lemma}
Lemma~\ref{lemma:nested_partitions_Wasserstein_bound} requires a sequence of partitions of $\Omega$ that is not only multi-resolution but also nested. While the $\epsilon$-covering number implies the existence of small partitions with small resolution, these partitions need not be nested as $\epsilon$ becomes small. For this reason, we now give a technical lemma that, given any sequence of partitions, constructs a \textit{nested} sequence of partitions of the same cardinality, with only a small increase in resolution.
\begin{lemma}
Suppose $\S$ and $\T$ are partitions of $(\Omega,\rho)$, and suppose $\S$ is countable. Then, there exists a partition $\S'$ of $(\Omega,\rho)$ such that:
\begin{enumerate}[label=\alph*)]
\item
$|\S'| \leq |\S|$.
\item
$\operatorname{Res}(\S') \leq \operatorname{Res}(\S) + 2\operatorname{Res}(\T)$.
\item
$\T$ is a refinement of $\S'$.
\end{enumerate}
\label{lemma:fine_refinement}
\end{lemma}
Lemmas~\ref{lemma:nested_partitions_Wasserstein_bound} and \ref{lemma:fine_refinement} are the main tools needed to decompose the expected Wasserstein distance $\E[W_r^r(P, \hat P)]$ between the empirical distribution and the true distribution into a sum of its expected errors on each element of a nested partition of $\Omega$. Then, we will need to control the total expected error across these partition elements, which we will show behaves similarly to the $\L_1$ error of the standard maximum likelihood (mean) estimator of a multinomial distribution. Thus, the following result of \citet{han2015minimax} will be useful.
\begin{lemma}[Theorem 1 of \citep{han2015minimax}]
Suppose $(X_1,...,X_K) \sim \operatorname{Multinomial}(n,p_1,...,p_K)$. Let
\[Z := \|X - n p\|_1 = \sum_{k = 1}^K \left| X_k - n p_k \right|.\] Then,
$\E \left[ Z/n \right] \leq \sqrt{(K - 1)/n}$.
\label{lemma:multinomial_expectation}
\end{lemma}
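For $K=2$ the bound of Lemma~\ref{lemma:multinomial_expectation} reduces to a statement about a single binomial, since then $Z=2|X_1-np_1|$ with $X_1\sim\mathrm{Bin}(n,p_1)$; this case can be checked exactly by summation (sketch, helper name ours):

```python
import math

def expected_l1_error_binary(n, p):
    """E[Z/n] for K = 2 categories, where Z = 2|X - np| and X ~ Bin(n, p)."""
    return sum(
        math.comb(n, k) * p**k * (1.0 - p)**(n - k) * 2.0 * abs(k - n * p)
        for k in range(n + 1)
    ) / n
```

The computed values stay below $\sqrt{(K-1)/n}=\sqrt{1/n}$, as the lemma guarantees.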
Finally, we are ready to prove Theorem~\ref{thm:expectation_bound_appendix}.
\begin{customthm}{\ref{thm:expectation_bound}}
Let $(\Omega,\rho)$ be a metric space on which $P$ is a Borel probability measure. Let $\hat P$ denote the empirical distribution of $n$ IID samples $X_1,...,X_n \stackrel{IID}{\sim} P$, given by
\[\hat P(S) := \frac{1}{n} \sum_{i = 1}^n 1_{\{X_i \in S\}}, \quad \forall S \in \Sigma.\]
Then, for any sequence $\{\epsilon_k\}_{k = 0}^K \in (0,\infty)^{K+1}$ with $\epsilon_0 = \Diam(\Omega)$,
\[\E \left[ W_r^r(P, \hat P) \right] \leq \epsilon_K^r + \frac{1}{\sqrt{n}} \sum_{k = 1}^K \left( \sum_{j = k - 1}^K 2^{j - k} \epsilon_j \right)^r \sqrt{N(\epsilon_k) - 1}.\]
\label{thm:expectation_bound_appendix}
\end{customthm}
\begin{proof}
By recursively applying Lemma~\ref{lemma:fine_refinement}, there exists a sequence $\{\S_k\}_{k \in [K]}$ of partitions of $(\Omega,\rho)$ satisfying the following conditions:
\begin{enumerate}
\item
for each $k \in [K]$, $|\S_k| \leq N(\epsilon_k)$.
\item
for each $k \in [K]$,
$\displaystyle \operatorname{Res}(\S_k) \leq \sum_{j = k}^K 2^{j - k} \epsilon_j$.
\item
$\{\S_k\}_{k \in [K]}$ is nested.
\end{enumerate}
Note that, for any $k \in [K]$, the vector $n\hat P(S)$ (indexed by $S \in \S_k$) follows a multinomial distribution with $n$ trials over the $|\S_k|$ categories $S_1,\ldots,S_{|\S_k|}$ enumerating $\S_k$; i.e., \[(n\hat P(S_1),\ldots,n\hat P(S_{|\S_k|})) \sim \operatorname{Multinomial}(n,P(S_1),\ldots,P(S_{|\S_k|})).\]
Thus, by Lemma~\ref{lemma:multinomial_expectation}, for each $k \in [K]$,
\[\E \left[ \sum_{S \in \S_k} \left| P(S) - \hat P(S) \right| \right]
\leq \sqrt{\frac{|\S_k| - 1}{n}}
= \sqrt{\frac{N(\epsilon_k) - 1}{n}}.\]
Thus, by Lemma~\ref{lemma:nested_partitions_Wasserstein_bound},
\begin{align*}
\E \left[ W_r^r(P, \hat P) \right]
& \leq \E \left[ \epsilon_K^r + \sum_{k = 1}^K \left( \sum_{j = k}^K 2^{j - k} \epsilon_j \right)^r \left( \sum_{S \in \S_k} \left| P(S) - \hat P(S) \right| \right) \right] \\
& \leq \epsilon_K^r + \sum_{k = 1}^K \left( \sum_{j = k}^K 2^{j - k} \epsilon_j \right)^r \E \left[ \sum_{S \in \S_k} \left| P(S) - \hat P(S) \right| \right] \\
& \leq \epsilon_K^r + \frac{1}{\sqrt{n}} \sum_{k = 1}^K \left( \sum_{j = k}^K 2^{j - k} \epsilon_j \right)^r \sqrt{N(\epsilon_k) - 1}.
\end{align*}
\end{proof}
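The $n^{-1/2}$ behavior that Theorem~\ref{thm:expectation_bound_appendix} yields in low dimension can be observed directly in a simple one-dimensional case: for $P = \operatorname{Uniform}[0,1]$, $W_1(P, \hat P) = \int_0^1 |F_n(x) - x| \, dx$, where $F_n$ is the empirical CDF. The following NumPy-only sketch (sample sizes, seed, and trial count are illustrative) averages this exact quantity over a few replicates at two sample sizes:

```python
import numpy as np

def w1_to_uniform(samples):
    """Exact W1 between the empirical measure of `samples` and
    Uniform[0,1], via W1 = int_0^1 |F_n(x) - x| dx (F_n = empirical CDF)."""
    s = np.sort(samples)
    n = len(s)
    grid = np.concatenate(([0.0], s, [1.0]))
    total = 0.0
    for i in range(len(grid) - 1):
        a, b = grid[i], grid[i + 1]
        c = i / n  # value of F_n on [a, b)
        if c <= a:            # integrand is x - c on [a, b]
            total += (b - a) * ((a + b) / 2 - c)
        elif c >= b:          # integrand is c - x on [a, b]
            total += (b - a) * (c - (a + b) / 2)
        else:                 # integrand changes sign at x = c
            total += ((c - a) ** 2 + (b - c) ** 2) / 2
    return total

rng = np.random.default_rng(0)
trials = 20
w_small = np.mean([w1_to_uniform(rng.random(100)) for _ in range(trials)])
w_big = np.mean([w1_to_uniform(rng.random(10000)) for _ in range(trials)])
print(w_small, w_big)  # ratio should be roughly sqrt(10000/100) = 10
```

This is consistent with the theorem applied to $\Omega = [0,1]$ (where $N(\epsilon) \asymp 1/\epsilon$), which gives the rate $n^{-1/2}$ for $r = 1$.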
\section{Proof Sketch of Theorem~\ref{thm:unbounded_upper_bound_appendix}}
In this section, we prove our more general upper bound, Theorem~\ref{thm:unbounded_upper_bound_appendix}, which applies to potentially unbounded metric spaces $(\Omega,\rho)$, assuming that $P$ is sufficiently concentrated (i.e., has at least $\ell > 0$ finite moments).
The basic idea is to partition the potentially unbounded metric space $(\Omega,\rho)$ into countably many totally bounded subsets $B_1,B_2,...$, and to decompose the Wasserstein error into its error on each $B_i$, weighted by the probability $P(B_i)$. Specifically, fixing an arbitrary base point $x_0$, $B_1,B_2,...$ will be spherical shells, such that $x_0 \in B_1$, and both the distance between $B_i$ and $x_0$, as well as the size (covering number) of $B_i$, increase with $i$. For large $i$, the assumption that $P$ has $\ell$ bounded moments implies (by Markov's inequality) that $P(B_i)$ is small, whereas, for small $i$, we adapt our previous result Theorem~\ref{thm:expectation_bound_appendix} in terms of the covering number.
To carry out this approach, we will need two new lemmas. The first decomposes Wasserstein distance into the sum of its distances on each $B_i$, and can be considered an adaptation of Lemma 2.2 of \citet{lei2018convergence} (for Banach spaces) to general metric spaces.
\begin{lemma}
Fix a reference point $x_0 \in \Omega$ and a non-decreasing real-valued sequence $\{w_k\}_{k \in \N}$ with $w_0 = 0$ and $\lim_{k \to \infty} w_k = \infty$. For each $k \in \N$, define
\[B_k := \left\{x \in \Omega : w_k \leq \rho(x_0, x) < w_{k + 1} \right\}.\]
Then, there exists a constant $C_r$ depending only on $r$ such that, for any Borel probability measures $P$ and $Q$ on $\Omega$,
\[W_r^r(P,Q)
\leq C_r \sum_{k = 0}^\infty
\left(
\min \left\{ P(B_k), Q(B_k) \right\}
W_r^r(P_{B_k},Q_{B_k})
+ w_k^r \left| P(B_k) - Q(B_k) \right|
\right),\]
where, for any sets $A, B \subseteq \Omega$,
\[P_A(B) = \frac{P(A \cap B)}{P(A)}\]
(under the convention that $\frac{0}{0} = 0$) denotes the conditional probability of $B$ given $A$, under $P$.
\label{lemma:sigma_finite_partition}
\end{lemma}
The second lemma is a more nuanced variant of Lemma~\ref{lemma:multinomial_expectation} (albeit with slightly looser constants). When $i$ is large, the covering number of $B_i$ can become quite large, but the total probability $P(B_i)$ is quite small. Whereas Lemma~\ref{lemma:multinomial_expectation} depends only on the size of the partition, the following result will allow us to control the total error using both of these factors.
\begin{lemma}[Theorem 1 of \citet{berend2013binomialMAD}]
Suppose $X \sim \operatorname{Binomial}(n,p)$. Then, we have the bound
\begin{equation}
\E \left[ \left| X - n p \right| \right] \leq n \min \left\{ 2p, \sqrt{p/n} \right\}
\label{ineq:binomial_MAD}
\end{equation}
on the mean absolute deviation of $X$.
\label{lemma:binomial_MAD}
\end{lemma}
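Lemma~\ref{lemma:binomial_MAD} can likewise be checked by simulation. In the sketch below (NumPy assumed; $n$, $p$, and the trial count are illustrative), the parameters are chosen so that the $\sqrt{p/n}$ branch of the minimum is the active one:

```python
import numpy as np

# Monte Carlo check of E|X - np| <= n * min{2p, sqrt(p/n)}
# for X ~ Binomial(n, p).  Parameters below are illustrative.
rng = np.random.default_rng(1)
n, p, trials = 1000, 0.002, 4000
x = rng.binomial(n, p, size=trials)
mad = np.abs(x - n * p).mean()            # Monte Carlo mean absolute deviation
bound = n * min(2 * p, (p / n) ** 0.5)    # here sqrt(p/n) < 2p, so the sqrt branch binds
print(mad, bound)
```

For very small $p$ (roughly $p < 1/(4n)$) the $2p$ branch binds instead, which is the regime exploited for the outer shells in the proof below.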
Finally, we are ready to prove our main upper bound result for unbounded metric spaces.
\begin{customthm}{\ref{thm:unbounded_upper_bound}}[General Upper Bound for Unbounded Metric Spaces]
Let $x_0 \in \Omega$ and suppose $m_{\ell,x_0}(P) \in [1, \infty)$. Let $J$ be a positive integer. Fix two real-valued sequences $\{w_k\}_{k \in \N}$ and $\{\epsilon_j\}_{j \in [J]}$, of which $\{w_k\}_{k \in \N}$ is non-decreasing with $w_0 = 0$ and $\lim_{k \to \infty} w_k = \infty$, and $\{\epsilon_j\}_{j \in [J]}$ is non-increasing.
For each $k \in \N$, define
\[B_k(x_0) := \left\{ y \in \Omega : w_k \leq \rho(x_0, y) < w_{k + 1} \right\}.\]
Then,
\begin{align*}
\E \left[ W_r^r(P, \hat P) \right]
& \leq m_{\ell,x_0}^\ell \sum_{k \in \N} w_k^{-\ell} \left( \epsilon_J \right)^r + 2^r w_k^{r - \ell/2} \min \left\{ 2w_k^{-\ell/2}, \sqrt{\frac{1}{n}} \right\} \\
& \hspace{2cm} + \sum_{j = 1}^J \left( \sum_{t = j}^J 2^{J - t} \epsilon_t \right)^r \min \left\{ 2w_k^{-\ell}, \sqrt{\frac{w_k^{-\ell}}{n} N(B_k,\rho,\epsilon_j)} \right\}.
\end{align*}
\label{thm:unbounded_upper_bound_appendix}
\end{customthm}
\begin{proof}
As in the proof of Theorem~\ref{thm:expectation_bound_appendix}, by recursively applying Lemma~\ref{lemma:fine_refinement}, for each $k \in \N$, we can construct a nested sequence $\{\S_{k,j}\}_{j \in [J_k]}$ of partitions of $B_k$ such that, for each $j \in [J_k]$,
\begin{equation}
|\S_{k,j}| = N(B_k,\rho,\epsilon_{k,j})
\quad \text{ and } \quad
\operatorname{Res}(\S_{k,j}) \leq \sum_{t = 0}^j 2^t \epsilon_{k,t}.
\label{eq:recursive_fine_refinement}
\end{equation}
Since $P_{B_k}$ and $\hat P_{B_k}$ are each supported only on $B_k$, plugging the bound of Lemma~\ref{lemma:nested_partitions_Wasserstein_bound} into the bound in Lemma~\ref{lemma:sigma_finite_partition} gives
\begin{align*}
& W_r^r(P, \hat P) \\
& \leq \sum_{k \in \N} \min \left\{ P(B_k), \hat P(B_k) \right\} \left( \left( \operatorname{Res}(\S_{k,0}) \right)^r + \sum_{j = 1}^{J_k} \left( \operatorname{Res}(\S_{k,j}) \right)^r \sum_{S \in \S_{k,j + 1}} \left| P_{B_k}(S) - \hat P_{B_k}(S) \right| \right) \\
& \hspace{1cm} + 2^r w_k^r \left| P(B_k) - \hat P(B_k) \right| \\
& \leq \sum_{k \in \N} 2^r w_k^r \left| P(B_k) - \hat P(B_k) \right| + P(B_k) \left( \operatorname{Res}(\S_{k,0}) \right)^r + \sum_{j = 1}^J \left( \operatorname{Res}(\S_{k,j}) \right)^r \sum_{S \in \S_{k,j + 1}} \left| P(S) - \hat P(S) \right|.
\end{align*}
Since each $n\hat P(S) \sim \operatorname{Binomial}(n, P(S))$, for each $k \in \N$ and $j \in [J_k]$, Lemma~\ref{lemma:binomial_MAD} followed by the Cauchy-Schwarz inequality gives
\begin{align*}
\E \left[ \sum_{S \in \S_{k,j}} \left| P(S) - \hat P(S) \right| \right]
& \leq \sum_{S \in \S_{k,j}} \min \left\{ 2P(S), \sqrt{P(S)/n} \right\} \\
& \leq \min \left\{ 2P(B_k), \sqrt{\frac{P(B_k)}{n} |\S_{k,j}|} \right\}.
\end{align*}
Therefore, taking expectations (over $X_1,...,X_n$), applying Inequality~\ref{eq:recursive_fine_refinement}, and applying Lemma~\ref{lemma:binomial_MAD} once more gives
\begin{align*}
\E \left[ W_r^r(P, \hat P) \right]
& \leq \sum_{k \in \N} P(B_k) \left( \epsilon_{k,0} \right)^r + 2^r w_k^r \min \left\{ 2P(B_k), \sqrt{P(B_k)/n} \right\} \\
& \hspace{1cm} + \sum_{j = 1}^{J_k} \left( \sum_{t = 0}^j 2^t \epsilon_{k,t} \right)^r \min \left\{ 2P(B_k), \sqrt{\frac{P(B_k)}{n} N(B_k,\rho,\epsilon_{k,j + 1})} \right\}.
\end{align*}
Now note that, by Markov's inequality,
\begin{equation}
P(B_k)
\leq \pr_{X \sim P} \left[ \rho(x_0, X) \geq w_k \right]
= \pr_{X \sim P} \left[ \rho^\ell (x_0, X) \geq w_k^\ell \right]
\leq \frac{m_{\ell,x_0}^\ell(P)}{w_k^\ell}.
\end{equation}
Therefore, using the assumption that $m_{\ell,x_0}(P) \geq 1$, so that $m_{\ell,x_0}^\ell \geq m_{\ell,x_0}^{\ell/2}$,
\begin{align*}
\E \left[ W_r^r(P, \hat P) \right]
& \leq m_{\ell,x_0}^\ell \sum_{k \in \N} w_k^{-\ell} \left( \epsilon_{k,0} \right)^r + 2^r w_k^r \min \left\{ 2w_k^{-\ell}, \sqrt{w_k^{-\ell}/n} \right\} \\
& \hspace{1cm} + \sum_{j = 1}^{J_k} \left( \sum_{t = 0}^j 2^t \epsilon_{k,t} \right)^r \min \left\{ 2w_k^{-\ell}, \sqrt{\frac{w_k^{-\ell}}{n} N(B_k,\rho,\epsilon_{k,j + 1})} \right\},
\end{align*}
proving the theorem.
\end{proof}
\section{Proofs of Lemmas}
\label{app:proofs}
\begin{customlemma}{\ref{lemma:measures_agree_on_partition}}
Suppose $\S \in \SS$ is a countable Borel partition of $\Omega$. Let $P$ and $Q$ be Borel probability measures such that, for every $S \in \S$, $P(S) = Q(S)$. Then, for any $r \geq 1$, $W_r(P, Q) \leq \operatorname{Res}(\S)$.
\end{customlemma}
\begin{proof}
This fact is intuitively obvious: clearly, there exists a transportation map from $P$ to $Q$ that moves mass only within each $S \in \S$, and hence moves no mass further than $\delta := \operatorname{Res}(\S)$. For completeness, we give a formal construction.
Let $\mu : \Sigma^2 \to [0,1]$ denote the coupling that is conditionally independent given any set $S \in \S$ with $P(S) = Q(S) > 0$ (that is, for any $A, B \in \Sigma$, $\mu((A \times B) \cap (S \times S)) P(S) = P(A \cap S) Q(B \cap S)$).\footnote{The existence of such a measure can be verified by the Hahn-Kolmogorov theorem, similarly to that of the usual product measure (see, e.g., Section IV.4 of \citet{doob2012measure}).} It is easy to verify that $\mu \in \Pi(P,Q)$. Since $\S$ is a countable partition and $\mu$ is supported only on $\bigcup_{S \in \S} S \times S$,
\begin{align*}
W_r(P, Q)
& \leq \left( \int_{\Omega \times \Omega} \rho^r(x,y) \, d\mu(x,y) \right)^{1/r} \\
& = \left( \sum_{S \in \S} \int_{S \times S} \rho^r(x,y) \, d\mu(x,y) \right)^{1/r} \\
& \leq \left( \sum_{S \in \S} \int_{S \times S} \delta^r \, d\mu(x,y) \right)^{1/r} \\
& = \delta \left( \sum_{S \in \S} \mu(S \times S) \right)^{1/r}
= \delta \left( \sum_{S \in \S} \frac{P(S) Q(S)}{P(S)} \right)^{1/r}
= \delta \left( \sum_{S \in \S} Q(S) \right)^{1/r}
= \delta.
\end{align*}
\end{proof}
\begin{customlemma}{\ref{lemma:countable_support_bound}}
Suppose $(\Omega, \rho)$ is a metric space, and suppose $P$ and $Q$ are Borel probability distributions on $\Omega$ with countable support; i.e., there exists a countable set $\X \subseteq \Omega$ with $P(\X) = Q(\X) = 1$. Then, for any $r \geq 1$,
\[(\Sep(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|
\leq W_r^r(P,Q)
\leq (\Diam(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|.\]
\end{customlemma}
\begin{proof}
The term $\sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|$ (twice the total variation distance between $P$ and $Q$) measures the amount of mass that must be transported to transform either distribution into the other. Hence, the result is intuitively fairly obvious: each unit of mass moved has a cost of at least $\Sep(\X)$ and at most $\Diam(\X)$. However, for completeness, we give a more formal proof below.
To prove the lower bound, suppose $\mu \in \Pi(P, Q)$ is any coupling between $P$ and $Q$. For $x \in \X$,
\[\mu(\{x\} \times \{x\}) + \mu(\{x\} \times (\Omega \sminus \{x\}))
= \mu(\{x\} \times \Omega)
= P(\{x\})\]
and, similarly,
\[\mu(\{x\} \times \{x\}) + \mu((\Omega \sminus \{x\}) \times \{x\})
= \mu(\Omega \times \{x\})
= Q(\{x\}).\]
Since $P(\{x\}), Q(\{x\}) \in [0,1]$, it follows that
\[\mu(\{x\} \times (\Omega \sminus \{x\})) + \mu((\Omega \sminus \{x\}) \times \{x\})
\geq \left| P(\{x\}) - Q(\{x\}) \right|.\]
Therefore, since $\rho(x,y) = 0$ whenever $x = y$ and $\rho(x, y) \geq \Sep(\X)$ whenever $x, y \in \X$ with $x \neq y$,
\begin{align*}
\int_{\Omega \times \Omega} \rho^r(x, y) \, d\mu(x,y)
& = \int_{\X \times \X} \rho^r(x, y) \, d\mu(x,y) \\
& = \sum_{x \in \X} \int_{\{x\} \times (\Omega \sminus \{x\})} \rho^r(x, y) \, d\mu(x,y)
+ \int_{(\Omega \sminus \{x\}) \times \{x\}} \rho^r(x, y) \, d\mu(x,y) \\
& \geq (\Sep(\X))^r \sum_{x \in \X} \mu(\{x\} \times (\Omega \sminus \{x\}))
+ \mu((\Omega \sminus \{x\}) \times \{x\}) \\
& \geq (\Sep(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|.
\end{align*}
Taking the infimum over $\mu$ on both sides gives
\[(\Sep(\X))^r \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|
\leq W_r^r(P, Q).\]
To prove the upper bound, since $\rho$ is bounded above by $\Diam(\X)$ on $\X \times \X$, it suffices to construct a coupling $\mu$ that only moves mass into or out of each given point, but not both; that is, for each $x \in \X$,
\[\min\{\mu(\{x\} \times (\Omega \sminus \{x\})), \mu((\Omega \sminus \{x\}) \times \{x\})\}
= 0.\]
One way of doing this is as follows. Fix an ordering $x_1,x_2,...$ of the elements of $\X$.
For each $i \in \N$, define
\[X_i := \sum_{\ell = 1}^i (P(x_\ell) - Q(x_\ell))_+
\quad \text{ and } \quad
Y_i := \sum_{\ell = 1}^i (Q(x_\ell) - P(x_\ell))_+,\]
and further define
\[j_i := \min \{ j \in \N : X_i \leq Y_j \}
\quad \text{ and } \quad
k_i := \min \{ k \in \N : X_k \geq Y_i \}.\]
Then, for each $i \in \N$, move the excess mass $X_i$ from the points of $\{x_1,...,x_i\}$ at which $P$ exceeds $Q$ onto the points of $\{x_1,...,x_{j_i}\}$ at which $Q$ exceeds $P$, and, symmetrically, move the excess mass $Y_i$ from $\{x_1,...,x_i\}$ onto $\{x_1,...,x_{k_i}\}$. As $i \to \infty$, by construction of $X_i$ and $Y_i$, the total mass moved in this way is
\[\mu((\X \times \X) \sminus \{(x,x) : x \in \X\})
= \lim_{i \to \infty} X_i + Y_i = \sum_{x \in \X} \left| P(\{x\}) - Q(\{x\}) \right|.\]
\end{proof}
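The sandwich bound of Lemma~\ref{lemma:countable_support_bound} can be verified numerically on a small example. In one dimension, $W_1$ between discrete distributions has the closed form $\int |F_P - F_Q|$, which the helper below implements (NumPy assumed; the support points and weights are arbitrary illustrative choices):

```python
import numpy as np

def w1_discrete(x, px, y, py):
    """Exact W1 between discrete distributions (support x, weights px)
    and (support y, weights py) on the real line, via int |F_P - F_Q|."""
    pts = np.unique(np.concatenate([x, y]))
    Fx = np.array([px[x <= t].sum() for t in pts])
    Fy = np.array([py[y <= t].sum() for t in pts])
    return float(np.sum(np.abs(Fx - Fy)[:-1] * np.diff(pts)))

X = np.array([0.0, 1.0, 3.0])       # common support: Sep(X) = 1, Diam(X) = 3
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.4, 0.4])
l1 = np.abs(p - q).sum()            # sum_x |P({x}) - Q({x})| = 0.6
w1 = w1_discrete(X, p, X, q)        # exact W1; here 0.7
sep, diam = 1.0, 3.0
print(sep * l1, w1, diam * l1)      # 0.6 <= 0.7 <= 1.8, as the lemma predicts
```

Here the lower bound is nearly tight because most of the mismatched mass moves between adjacent support points.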
\begin{customlemma}{\ref{lemma:nested_partitions_Wasserstein_bound}}
Let $K$ be a positive integer.
Suppose $\{\S_k\}_{k \in [K]}$ is a sequence of nested countable Borel partitions of $(\Omega,\rho)$, with $\S_0 = \{\Omega\}$. Then, for any $r \geq 1$ and any Borel probability distributions $P$ and $Q$ on $\Omega$,
\[W_r^r(P, Q)
\leq (\operatorname{Res}(\S_K))^r + \sum_{k = 1}^K \left( \operatorname{Res}(\S_{k - 1}) \right)^r
\left( \sum_{S \in \S_k} \left| P(S) - Q(S) \right| \right).\]
\end{customlemma}
\begin{proof}
Our proof follows the same ideas as, and slightly generalizes, the proof of Proposition 1 of \citet{weed2017sharp}.
Intuitively, it suffices to construct a transportation map that, at each scale $k$, moves only the mass on which $P$ and $Q$ disagree across the partition $\S_k$, at a cost of at most $\operatorname{Res}(\S_{k - 1})$ per unit of mass. We now construct such a map recursively.
For each $k \in [K]$, recursively define
\[P_k := P - \sum_{j = 1}^{k - 1} \mu_j
\quad \text{ and } \quad
Q_k := Q - \sum_{j = 1}^{k - 1} \nu_j,\]
where, for each $k \in [K]$, $\mu_k$ and $\nu_k$ are Borel measures on $\Omega$ defined for any $E \in \Sigma$ by
\[\mu_k(E) := \sum_{S \in \S_k : P_k(S) > 0} \left( P_k(S) - Q_k(S) \right)_+ \frac{P_k(E \cap S)}{P_k(S)}\]
and
\[\nu_k(E) := \sum_{S \in \S_k : Q_k(S) > 0} \left( Q_k(S) - P_k(S) \right)_+ \frac{Q_k(E \cap S)}{Q_k(S)}.\]
By construction of $\mu_k$ and $\nu_k$, each $\mu_k$ and $\nu_k$ is a non-negative measure and $\sum_{k = 1}^K \mu_k \leq P$ and $\sum_{k = 1}^K \nu_k \leq Q$. Furthermore, for each $k \in [K - 1]$, for each $S \in \S_k$, $\mu_{k + 1}(S) = \nu_{k + 1}(S)$, and
\[\mu_k(\Omega) = \nu_k(\Omega) \leq \sum_{S \in \S_k} \left| P(S) - Q(S) \right|.\]
Consequently, although $\mu_k$ and $\nu_k$ are not probability measures, we can slightly generalize the definition of Wasserstein distance by writing
\[W_r^r \left( \mu_k, \nu_k \right)
:= \mu_k(\Omega) \inf_{\tau \in \Pi \left( \frac{\mu_k}{\mu_k(\Omega)}, \frac{\nu_k}{\nu_k(\Omega)}\right)} \E_{(X,Y) \sim \tau} \left[ \rho^r \left( X, Y \right) \right]\]
(or $W_r^r(\mu_k, \nu_k) = 0$ if $\mu_k = \nu_k = 0$).
In particular, this is convenient because one can easily show that, by construction of the sequences $\{P_k\}_{k \in [K]}$ and $\{Q_k\}_{k \in [K]}$,
\begin{equation}
W_r^r(P, Q)
\leq W_r^r \left( P_K, Q_K \right) + \sum_{k = 1}^K W_r^r \left(\mu_k, \nu_k \right).
\label{ineq:decomposition}
\end{equation}
For each $k \in [K]$, Lemma~\ref{lemma:countable_support_bound} implies that
\begin{align*}
W_r^r(\mu_k,\nu_k)
& \leq \sum_{S \in \S_{k - 1}} \left( \Diam(S) \right)^r \sum_{T \in \S_k : T \subseteq S} \left| P(T) - Q(T) \right| \\
& \leq \left( \operatorname{Res}(\S_{k - 1}) \right)^r \sum_{S \in \S_{k - 1}} \sum_{T \in \S_k : T \subseteq S} \left| P(T) - Q(T) \right| \\
& = \left( \operatorname{Res}(\S_{k - 1}) \right)^r \sum_{T \in \S_k} \left| P(T) - Q(T) \right|.
\end{align*}
Furthermore, since $P_K(S) = Q_K(S)$ for each $S \in \S_K$, Lemma~\ref{lemma:measures_agree_on_partition} gives that
\[W_r^r \left( P_K, Q_K \right)
\leq \left( \operatorname{Res}(\S_K) \right)^r.\]
Plugging these last two inequalities into Inequality~\eqref{ineq:decomposition} gives the desired result:
\[W_r^r(P, Q)
\leq \left( \operatorname{Res}(\S_K) \right)^r + \sum_{k = 1}^K \left( \operatorname{Res}(\S_{k - 1}) \right)^r \sum_{S \in \S_k} \left| P(S) - Q(S) \right|.\]
\end{proof}
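As a numeric sanity check of Lemma~\ref{lemma:nested_partitions_Wasserstein_bound}, one can take $\Omega = [0,1)$ with the nested dyadic partitions $\S_k$ into $2^k$ intervals (so $\operatorname{Res}(\S_k) = 2^{-k}$) and compare the exact $W_1$ distance between two empirical distributions against the right-hand side of the lemma with $r = 1$. The sketch below assumes NumPy; the sample sizes, seed, and depth $K$ are illustrative:

```python
import numpy as np

def w1_empirical(x, y):
    """Exact W1 between the empirical distributions of samples x and y,
    via the one-dimensional CDF formula W1 = int |F_x - F_y|."""
    pts = np.unique(np.concatenate([x, y]))
    Fx = np.array([(x <= t).mean() for t in pts])
    Fy = np.array([(y <= t).mean() for t in pts])
    return float(np.sum(np.abs(Fx - Fy)[:-1] * np.diff(pts)))

rng = np.random.default_rng(2)
xs, ys = rng.random(200), rng.random(300)  # samples of P and Q on [0, 1)
K = 6

lhs = w1_empirical(xs, ys)
rhs = 2.0 ** -K                            # (Res(S_K))^r with r = 1
for k in range(1, K + 1):
    hp, _ = np.histogram(xs, bins=2 ** k, range=(0.0, 1.0))
    hq, _ = np.histogram(ys, bins=2 ** k, range=(0.0, 1.0))
    # Res(S_{k-1}) * sum_{S in S_k} |P(S) - Q(S)|
    rhs += 2.0 ** -(k - 1) * np.abs(hp / len(xs) - hq / len(ys)).sum()
print(lhs, rhs)
```

The bound is loose by design (it pays for mass mismatch at every scale), but it is what makes the multi-scale argument in the theorems above possible.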
\begin{customlemma}{\ref{lemma:fine_refinement}}
Suppose $\S$ and $\T$ are partitions of $(\Omega,\rho)$, and suppose $\S$ is countable. Then, there exists a partition $\S'$ of $(\Omega,\rho)$ such that:
\begin{enumerate}[label=\alph*)]
\item
$|\S'| \leq |\S|$.
\item
$\operatorname{Res}(\S') \leq \operatorname{Res}(\S) + 2\operatorname{Res}(\T)$.
\item
$\T$ is a refinement of $\S'$.
\end{enumerate}
\end{customlemma}
\begin{proof}
Enumerate the elements of $\S$ as $S_1,S_2,...$. Define $S_0' := \emptyset$, and then, for each $i \in \{1,2,...\}$, recursively define
\[S_i' := \left. \left( \bigcup_{T \in \T : T \cap S_i \neq \emptyset} T \right) \middle \sminus \left( \bigcup_{j = 1}^{i - 1} S_j' \right) \right.,\]
and set $\S' = \{S_1',S_2',...\}$. Clearly, $|\S'| \leq |\S|$ (equality need not hold, as we may have some $S_i' = \emptyset$).
By the triangle inequality, each
\[\Diam(S_i')
\leq \Diam \left( \bigcup_{T \in \T : T \cap S_i \neq \emptyset} T \right)
\leq \Diam(S_i) + 2 \operatorname{Res}(\T)
\leq \operatorname{Res}(\S) + 2 \operatorname{Res}(\T).\]
Finally, since $\T$ is a partition and we can write
\[S_i' = \left. \left( \bigcup_{T \in \T : T \cap S_i \neq \emptyset} T \right) \middle \sminus \left( \bigcup_{j = 1}^{i - 1} \bigcup_{T \in \T : T \cap S_j' \neq \emptyset} T \right) \right.,\]
$\T$ is a refinement of $\S'$.
\end{proof}
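The recursive construction in this proof is straightforward to implement on finite toy examples. The sketch below (plain Python; the ground set and partitions are arbitrary illustrative choices) builds $\S'$ from $\S$ and $\T$ and checks properties (a) and (c):

```python
def coarsen(S, T):
    """Given partitions S and T of the same ground set, build S' as in the
    lemma: S_i' is the union of all T-cells meeting S_i, minus earlier cells."""
    S_prime, used = [], set()
    for Si in S:
        cell = set().union(*[t for t in T if t & Si]) - used
        if cell:                 # drop empty cells
            S_prime.append(cell)
            used |= cell
    return S_prime

ground = set(range(8))
S = [{0, 1, 2, 3}, {4, 5, 6, 7}]
T = [{0, 1}, {2, 3, 4}, {5, 6, 7}]   # T-cells may straddle S-cells

S_prime = coarsen(S, T)
print(S_prime)
# (a) |S'| <= |S| and S' partitions the ground set;
# (c) every T-cell lies inside a single S'-cell, i.e., T refines S'.
```

Since each $S_i'$ is a union of whole $\T$-cells minus earlier unions of whole $\T$-cells, the refinement property holds by construction, exactly as in the proof.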
\section{Proof of Lower Bound}
In this section, we provide a proof of our main lower bound, Theorem~\ref{thm:Wasserstein_distribution_estimation_lower_bound} in the main text. The proof consists of two main steps. First, we show that the minimax error of estimation in Wasserstein distance can be lower bounded by a product of two terms, one depending on the packing radius $R$ and the other depending on the minimax risk of estimating a particular discrete (i.e., multinomial) distribution under $\L_1$ loss. The second step is then to apply a minimax lower bound on the risk of estimating a multinomial distribution under $\L_1$ loss. These two steps respectively rely on two lemmas, Lemma~\ref{lemma:wasserstein_projections} and Lemma~\ref{lemma:multinomial_minimax_lower_bound} given below.
The first lemma implies that, when a distribution $P$ is supported on a finite subset $\D$ of the sample space, there exists an estimator $\hat P_\D$ of $P$, supported only on $\D$, that is minimax optimal up to a small constant factor. While this fact is relatively obvious for measure-theoretic metrics such as $\L_p$ distances, it is somewhat less obvious for Wasserstein distances, which also depend on metric properties of the space. This observation is key to lower bounding the minimax rate in terms of the minimax rate for estimating a discrete distribution.
\begin{lemma}[Wasserstein Projections]
Let $(\X,\rho)$ be a metric space and let $\D \subseteq \X$ be finite. Let $\P$ denote the family of all Borel probability distributions on $\X$, and let
\[\P_\D := \{P \in \P : P(\D) = 1\}\]
denote the set of distributions supported only on $\D$. Suppose $P \in \P_\D$ and $Q \in \P$. Then,
\[\mathop{\arg\!\min}_{\tilde Q \in \P_\D} W_r(Q, \tilde Q) \neq \emptyset
\quad \text{ and, for any } \quad
Q' \in \mathop{\arg\!\min}_{\tilde Q \in \P_\D} W_r(Q, \tilde Q),\]
we have $W_r(P, Q') \leq 2W_r(P, Q)$.
\label{lemma:wasserstein_projections}
\end{lemma}
\begin{proof}
Let $\{\S_x\}_{x \in \D}$ denote the Voronoi diagram of $\X$ with respect to $\D$; that is, for each $x \in \D$, let
\[\S_x := \{y \in \X : x \in \mathop{\arg\!\min}_{z \in \D} \rho(z,y) \}.\]
Since $\{\S_x\}_{x \in \D}$ is a finite cover of $\X$, we can disjointify it (see Remark~\ref{remark:disjointification}) while retaining the property that, for every $x \in \D$ and every $y \in \S_x$, $\rho(x,y) = \min_{z \in \D} \rho(z,y)$; hence, we assume without loss of generality that $\{\S_x\}_{x \in \D}$ is a partition of $\X$. Then, there is a unique distribution $Q' \in \P_\D$ such that, for each $x \in \D$, $Q'(\{x\}) = Q(\S_x)$. It is easy to see by definition of the Voronoi diagram that $Q' \in \mathop{\arg\!\min}_{\tilde Q \in \P_\D} W_r(Q, \tilde Q)$; the unique transportation map $\mu_* \in \Pi(Q,Q')$ such that each $\mu_*(\S_x \times \{x\}) = Q(\S_x)$ clearly minimizes
\[\E_{(X,Y) \sim \mu} \left[ \rho^r(X,Y) \right]\]
over all $\mu \in \bigcup_{\tilde Q \in \P_\D} \Pi(Q, \tilde Q)$. Moreover, since $P \in \P_\D$, by the triangle inequality and the definition of $Q'$, $W_r(P, Q') \leq W_r(P, Q) + W_r(Q, Q') \leq 2 W_r(P, Q)$.
\end{proof}
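The factor-of-2 guarantee of Lemma~\ref{lemma:wasserstein_projections} is easy to check numerically in one dimension, where the Voronoi projection simply snaps each atom of $Q$ to its nearest point of $\D$. The sketch below assumes NumPy; the support $\D$, weights, and sample cloud are illustrative:

```python
import numpy as np

def w1_discrete(x, px, y, py):
    """Exact W1 between weighted discrete distributions on the real line."""
    pts = np.unique(np.concatenate([x, y]))
    Fx = np.array([px[x <= t].sum() for t in pts])
    Fy = np.array([py[y <= t].sum() for t in pts])
    return float(np.sum(np.abs(Fx - Fy)[:-1] * np.diff(pts)))

rng = np.random.default_rng(4)
D = np.array([0.0, 1.0, 2.0])            # finite set D; P is supported on D
pw = np.array([0.5, 0.25, 0.25])
q = rng.uniform(-0.5, 2.5, size=400)     # Q: an arbitrary discrete distribution
qw = np.full(len(q), 1.0 / len(q))

# Voronoi projection Q': send each atom of Q to its nearest point of D
q_proj = D[np.argmin(np.abs(q[:, None] - D[None, :]), axis=1)]

w_PQ = w1_discrete(D, pw, q, qw)         # W1(P, Q)
w_PQp = w1_discrete(D, pw, q_proj, qw)   # W1(P, Q')
print(w_PQp, 2 * w_PQ)                   # lemma: W1(P, Q') <= 2 W1(P, Q)
```

In practice the projection typically loses much less than the factor of 2, which the triangle-inequality argument pays in the worst case.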
The second lemma is a simple minimax lower bound for the problem of estimating the mean vector of a multinomial distribution, under $\L_1$ loss.
\begin{lemma}[Minimax Lower Bound for Mean of Multinomial Distribution]
Suppose $k \leq 32 n$. Let $p \in \Delta^k$, and suppose $X_1,...,X_n \stackrel{IID}{\sim} \operatorname{Categorical}(p_1,...,p_k)$ are distributed IID according to a categorical distribution on $[k]$, with mean vector $p$. Then, we have the following minimax lower bound for estimating $p$ under $\L_1$ loss:
\[\inf_{\hat p} \sup_{p \in \Delta^k} \E \left[ \|p - \hat p\|_1 \right] \geq \frac{3\log 2}{4096} \sqrt{\frac{k - 1}{n}},\]
where the infimum is taken over all estimators (i.e., all (potentially randomized) functions $\hat p : [k]^n \to \Delta^k$ of the data).
\label{lemma:multinomial_minimax_lower_bound}
\end{lemma}
Note that, while the above result is phrased for categorical distributions to simplify notation in the proof, the result is equivalent to a statement for multinomial distributions, since $\sum_{i = 1}^n X_i \sim \text{Multinomial}(n,p_1,...,p_k)$ and $X_1,...,X_n$ are assumed to be IID and therefore exchangeable.
\begin{proof}
We follow a standard procedure for proving minimax lower bounds based on Fano's inequality, as outlined in Section 2.6 of \citet{tsybakov2009introduction}.
Let $p_0 = \left( 1/k, ..., 1/k \right) \in \Delta^k$ denote the uniform vector in $\Delta^k$.
Let $\I := \left[ \lfloor \frac{k}{2} \rfloor \right]$. For each $j \in \I$, define $\phi_j : [k] \to \R^k$ by
\[\phi_j := 1_{\{2j - 1\}} - 1_{\{2j\}},\]
and, for each $\tau \in \{-1,1\}^\I$, define
\[p_\tau := p_0 + \frac{c}{k} \sum_{j \in \I} \tau_j \phi_j,\]
where
\[c = \frac{1}{16} \sqrt{\frac{k - 1}{n}\log 2} \leq \frac{1}{2}.\]
Note that, since $|c| \leq 1$ and, for each $j \in \I$, $\sum_{\ell \in [k]} \phi_j(\ell) = 0$, each $p_\tau \in \Delta^k$.
Observe that, for any $\tau,\tau' \in \{-1,1\}^\I$, we have
\[\|p_\tau - p_{\tau'}\|_1
= \frac{4 c \omega(\tau,\tau')}{k},
\quad \text{ where } \quad
\omega(\tau,\tau') = \sum_{i \in \I} 1_{\{\tau_i \neq \tau_i'\}}\]
denotes the Hamming distance between $\tau$ and $\tau'$. By the Varshamov-Gilbert bound (see, e.g., Lemma 2.9 of \citet{tsybakov2009introduction}), there exists a subset $T \subseteq \{-1,1\}^\I$ such that $\log |T| \geq \frac{\lfloor k/2 \rfloor \log 2}{8}$ and, for every $\tau, \tau' \in T$,
\[\omega(\tau,\tau') \geq \frac{|\I|}{8}
= \frac{\lfloor k/2 \rfloor}{8},
\quad \text{ and hence } \quad
\|p_\tau - p_{\tau'}\|_1 \geq c \frac{\lfloor k/2 \rfloor}{2k}.\]
Also, for any $\tau \in T$,
\begin{align*}
D_{KL}(p_\tau^n,p_0^n)
& = n D_{KL}(p_\tau,p_0) \\
& = n \sum_{j \in [k]} p_{\tau,j} \log \left( \frac{p_{\tau,j}}{p_{0,j}}\right) \\
& = n \sum_{j \in \I} p_{\tau,2j - 1} \log \left( \frac{p_{\tau,2j - 1}}{1/k} \right) + p_{\tau,2j} \log \left( \frac{p_{\tau,2j}}{1/k} \right) \\
& = \frac{n |\I|}{k} \left( (1 - c) \log \left( 1 - c \right) + (1 + c) \log \left( 1 + c \right) \right).
\end{align*}
One can check (e.g., by Taylor expansion) that, for any $c \in (0,1/2)$,
\[(1 - c) \log \left( 1 - c \right) + (1 + c) \log \left( 1 + c \right)
< 2c^2.\]
Thus, since $|\I| \leq k/2$,
\[D_{KL}(p_\tau^n,p_0^n)
\leq \frac{2 n |\I| c^2}{k}
\leq n c^2.\]
It follows from the choice of $c$ (and noting that, by the assumption that $k \leq 32n$, $c \in (0,1/2)$) that
\[\frac{1}{|T|} \sum_{\tau \in T} D_{KL}(p_\tau^n, p_0^n)
\leq nc^2
\leq \frac{\lfloor k/2 \rfloor \log 2}{128}
\leq \frac{1}{16} \log |T|.\]
Therefore, by Fano's method for lower bounds (see, e.g., Theorem 2.5 of \citet{tsybakov2009introduction}), with $\alpha = 1/16$ and
\[s := \frac{c}{16}
\leq c \frac{\lfloor k/2 \rfloor}{4k}
\leq \frac{1}{2} \|p_\tau - p_{\tau'}\|_1,\]
we have
\begin{align*}
\inf_{\hat p} \sup_{p \in \Delta^k} \E \left[ \|p - \hat p\|_1 \right]
& \geq \inf_{\hat p} \sup_{p \in \Delta^k} c \frac{\lfloor k/2 \rfloor}{4k} \pr \left[ \|p - \hat p\|_1 \geq c \frac{\lfloor k/2 \rfloor}{4k} \right] \\
& \geq c \frac{\lfloor k/2 \rfloor}{4k} \frac{3}{16} \\
& \geq \frac{3\log 2}{4096} \sqrt{\frac{k - 1}{n}}.
\end{align*}
\end{proof}
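The perturbation family $\{p_\tau\}$ at the heart of this proof can be constructed explicitly, and its key property, $\|p_\tau - p_{\tau'}\|_1 = 4c\,\omega(\tau,\tau')/k$, verified directly. The sketch below (NumPy assumed; $k$, $c$, and the random sign vectors are illustrative stand-ins for the choices in the proof) checks this identity, along with the fact that each $p_\tau$ is a valid probability vector:

```python
import numpy as np

rng = np.random.default_rng(3)
k, c = 10, 0.25          # illustrative; the proof takes c of order sqrt((k-1)/n)
I = k // 2

def p_tau(tau):
    """p_tau = p_0 + (c/k) * sum_j tau_j phi_j, with phi_j = 1_{2j-1} - 1_{2j}."""
    p = np.full(k, 1.0 / k)
    for j, t in enumerate(tau):      # 0-indexed: phi_j hits coordinates 2j, 2j+1
        p[2 * j] += t * c / k
        p[2 * j + 1] -= t * c / k
    return p

tau1 = rng.choice([-1, 1], size=I)
tau2 = rng.choice([-1, 1], size=I)
hamming = int(np.sum(tau1 != tau2))                # omega(tau, tau')
l1 = np.abs(p_tau(tau1) - p_tau(tau2)).sum()
print(l1, 4 * c * hamming / k)                     # the two should coincide
```

Each $\phi_j$ perturbs a disjoint pair of coordinates by $\pm c/k$, so each disagreeing sign contributes exactly $4c/k$ to the $\L_1$ distance, which is the identity used with the Varshamov-Gilbert bound above.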
\begin{customthm}{\ref{thm:Wasserstein_distribution_estimation_lower_bound}}
Let $(\Omega,\rho)$ be a metric space, and let $\P$ denote the set of Borel probability measures on $(\Omega,\rho)$. Then,
\[\inf_{\hat P : \X^n \to \P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r(P, \hat P(X_1,...,X_n)) \right]
\geq c_r \sup_{k \in [32n]} R^r(k) \sqrt{\frac{k - 1}{n}},\]
where
\[c_r := \frac{3\log 2}{4096\cdot 2^r}\]
is independent of $n$ and the infimum is taken over all estimators (i.e., all (potentially randomized) functions $\hat P : \X^n \to \P$ of the data).
\label{thm:Wasserstein_distribution_estimation_lower_bound_appendix}
\end{customthm}
\begin{proof}
Let $k \leq 32n$, and let $\D$ be an $R(k)$-packing of $(\Omega,\rho)$ with $|\D| = k$. Let $\P_\D$ denote the class of (discrete) distributions supported on $\D$.
By Lemma~\ref{lemma:countable_support_bound}, Lemma~\ref{lemma:wasserstein_projections}, Lemma~\ref{lemma:multinomial_minimax_lower_bound}, and the definition of the packing radius (applied in that order),
\begin{align*}
\inf_{\hat P : \X^n \to \P} \sup_{P \in \P} \E \left[ W_r^r(\hat P, P) \right]
& \geq \left( \Sep(\D) \right)^r \inf_{\hat P : \X^n \to \P} \sup_{P \in \P} \E \left[ \|\hat P - P\|_1 \right] \\
& \geq \left( \Sep(\D) \right)^r \inf_{\hat P : \X^n \to \P} \sup_{P \in \P_\D} \E \left[ \|\hat P - P\|_1 \right] \\
& \geq \left( \frac{\Sep(\D)}{2} \right)^r \inf_{\hat P : \X^n \to \P_\D} \sup_{P \in \P_\D} \E \left[ \|\hat P - P\|_1 \right] \\
& \geq \frac{3\log 2}{4096 \cdot 2^r} \left( \Sep(\D) \right)^r \sqrt{\frac{|\D| - 1}{n}} \\
& \geq \frac{3\log 2}{4096\cdot 2^r} R^r(k) \sqrt{\frac{k - 1}{n}}.
\end{align*}
The theorem follows by taking the supremum over $k \leq 32n$ on both sides.
\end{proof}
\section{Introduction}
The Wasserstein metric is an important measure of distance between probability distributions, based on the cost of transforming either distribution into the other through mass transport, under a base metric on the sample space. Originating in the optimal transport literature,\footnote{The Wasserstein metric has been variously attributed to Monge, Kantorovich, Rubinstein, Gini, Mallows, and others; see Chapter 3 of \citet{villani2008optimalTransport} for a detailed history.}
the Wasserstein metric has, owing to its intuitive and general nature, been utilized in such diverse areas as probability theory and statistics, economics, image processing, text mining, robust optimization, and physics~\citep{villani2008optimalTransport,fournier2015rate,esfahani2015robustOptimization,gao2016distributionallyRobust}.
In the analysis of image data, the Wasserstein metric has been used for various tasks such as texture classification and face recognition~\citep{sandler2011NMFImageAnalysis}, reflectance interpolation, color transfer, and geometry processing~\citep{solomon2015imageOptimalTrans}, image retrieval~\citep{rubner2000imageRetrieval},
and image segmentation~\citep{ni2009imageSegmentation}, and, in the analysis of text data, for tasks such as document classification~\citep{kusner2015documentDistances} and machine translation~\citep{zhang2016machineTranslation}.
In contrast to a number of other popular notions of dissimilarity
between probability distributions, such as $\L_p$ distances or Kullback-Leibler and other $f$-divergences~\citep{morimoto1963divergences,csiszar1964divergences,ali1966divergences}, which require distributions to be absolutely continuous with respect to each other or to a base measure, Wasserstein distance is well-defined between \emph{any} pair of probability distributions over a sample space equipped with a metric.\footnote{Hence, we use ``distribution estimation'' in this paper, rather than the more common ``density estimation''.}
As a particularly important consequence, Wasserstein distances between discrete (e.g., empirical) distributions and continuous distributions are well-defined, finite, and informative (e.g., can decay to $0$ as the distributions become more similar).
Partly for this reason, many central limit theorems and related approximation results~\citep{ruschendorf1985wasserstein,johnson2005central,chatterjee2008normalApproximation,rio2009upper,rio2011asymptotic,chen10SteinsMethod,reitzner2013central} are expressed using Wasserstein distances.
Within machine learning and statistics, this same property motivates a class of so-called \emph{minimum Wasserstein distance estimates}~\citep{del1999CLT,del2003correction,bassetti2006minimum,bernton2017inferenceUsingWasserstein} of distributions, ranging from exponential distributions~\citep{baillo2016exponentialWasserstein} to more exotic models such as restricted Boltzmann machines (RBMs)~\citep{montavon2016wassersteinRBMs} and generative adversarial networks (GANs)~\citep{arjovsky2017wassersteinGAN}.
This class of estimators also includes $k$-means and $k$-medians, where the hypothesis class is taken to be discrete distributions supported on at most $k$ points~\citep{pollard1982quantization}; more flexible algorithms such as hierarchical $k$-means~\citep{ho2017multilevel} and $k$-flats~\citep{tseng2000kFlats} can also be expressed in this way, using more elaborate hypothesis classes. PCA can also be expressed and generalized to manifolds using Wasserstein distance minimization~\citep{boissard2015template}.
These estimators are conceptually equivalent to empirical risk minimization, leveraging the fact that Wasserstein distances between the empirical distribution and distributions in the relevant hypothesis class are well-behaved.
Moreover, these estimates often perform well in practice because they are free of both tuning parameters and strong distributional assumptions.
For many of the above applications, it is important to understand how quickly the empirical distribution converges to the true distribution in Wasserstein distance, and whether there exist distribution estimators that converge more quickly. For example, \citet{canas2012learning} use bounds on Wasserstein convergence to prove learning bounds for $k$-means, while \citet{arora2017generalization} used the slow rate of convergence in Wasserstein distance in certain cases to argue that GANs based on Wasserstein distances fail to generalize with fewer than exponentially many samples in the dimension.
To this end, the {\bf main contribution} of this paper is to identify, in a wide variety of settings, the minimax convergence rate for the problem of estimating a distribution using Wasserstein distance as a loss function. Our setting is very general, relying only on metric properties of the support of the distribution and the number of finite moments the distribution has; some diverse examples to which our results apply are given in Section~\ref{sec:examples}.
Specifically, we assume only that the distribution has some number of finite moments in a given metric. We then prove bounds on the minimax convergence rates of distribution estimation, utilizing covering numbers of the sample space for upper bounds and packing numbers for lower bounds. It may at first be surprising that positive results can be obtained under such mild assumptions; this highlights that the Wasserstein metric is quite a weak metric (see our Lemma 11
and the subsequent remark for discussion of this). Moreover, our results imply that, without further assumptions on the population distribution, the empirical distribution is typically minimax rate-optimal. Note that, while there has been previous work on upper bounds (discussed in Section~\ref{sec:related_work}), this paper is the first to study minimax lower bounds for this problem.
\textbf{Organization: }
The remainder of this paper is organized as follows.
Section~\ref{sec:notation} provides notation required to formally state both the problem of interest and our results, while Section~\ref{sec:related_work} reviews previous work studying convergence of distributions in Wasserstein distance.
Sections~\ref{sec:upper_bounds} and \ref{sec:lower_bounds} respectively contain our main upper and lower bound results. Since the proofs of the upper bounds are fairly long, Appendices A and B provide high-level sketches of the proofs, followed by detailed proofs in Appendix C. The lower bound is proven in Appendix D.
Finally, in Section~\ref{sec:examples}, we apply our upper and lower bounds to identify minimax convergence rates in a number of concrete examples.
Section~\ref{sec:conclusion} concludes with a summary of our contributions and suggested avenues for future work.
\section{Notation and Problem Setting}
\label{sec:notation}
For any positive integer $n \in \N$, $[n] = \{1,2,...,n\}$ denotes the set of the first $n$ positive integers. For sequences $\{a_n\}_{n \in \N}$ and $\{b_n\}_{n \in \N}$ of non-negative reals, $a_n \lesssim b_n$ (equivalently, $b_n \gtrsim a_n$) indicates the existence of a constant $C > 0$ such that $\limsup_{n \to \infty} \frac{a_n}{b_n} \leq C$, and $a_n \asymp b_n$ indicates $a_n \lesssim b_n \lesssim a_n$.
\subsection{Problem Setting}
For the remainder of this paper, fix a metric space $(\Omega,\rho)$, over which $\Sigma$ denotes the Borel $\sigma$-algebra, and let $\P$ denote the family of all Borel probability distributions on $\Omega$. The main object of study in this paper is the Wasserstein distance on $\P$, defined as follows:
\begin{definition}[$r$-Wasserstein Distance]
Given two Borel probability distributions $P$ and $Q$ over $\Omega$ and $r \in [1,\infty)$, the $r$-\emph{Wasserstein distance} $W_r(P,Q) \in [0,\infty]$ between $P$ and $Q$ is defined by
\[W_r(P,Q)
:= \inf_{\mu \in \Pi(P,Q)} \left( \E_{(X,Y) \sim \mu} \left[ \rho^r \left( X, Y \right) \right] \right)^{1/r},\]
where $\Pi(P,Q)$ denotes all couplings between $X \sim P$ and $Y \sim Q$; that is,
\[\Pi(P,Q)
:= \left\{ \mu : \Sigma^2 \to [0,1] \middle| \text{ for all } A \in \Sigma,
\mu(A \times \Omega) = P(A) \text{ and } \mu(\Omega \times A) = Q(A) \right\},\]
is the set of joint probability measures over $\Omega \times \Omega$ with marginals $P$ and $Q$.
\end{definition}
Intuitively, $W_r(P,Q)$ quantifies the $r$-weighted total cost of transforming mass distributed according to $P$ to be distributed according to $Q$, where the cost of moving a unit mass from $x \in \Omega$ to $y \in \Omega$ is $\rho(x,y)$. $W_r(P,Q)$ is sometimes defined in terms of equivalent (e.g., dual) formulations; these formulations will not be needed in this paper.
$W_r$ is symmetric in its arguments and satisfies the triangle inequality, and, for all $P \in \P$, $W_r(P, P) = 0$. Thus, $W_r$ is always a pseudometric. Moreover, it is a proper metric (i.e., $W_r(P,Q) = 0 \Rightarrow P = Q$) if and only if $\rho$ is as well.
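As a concrete illustration of this definition (a hypothetical sketch, not part of the paper): on $(\R, |\cdot|)$, the $r$-Wasserstein distance between two empirical distributions with equally many atoms is attained by the monotone coupling, i.e., by matching order statistics. In Python:

```python
# Illustrative sketch (not from the paper): W_r between two equal-size
# empirical distributions on the real line. For any r >= 1, an optimal
# coupling matches the i-th smallest point of one sample to the i-th
# smallest point of the other.

def wasserstein_1d(xs, ys, r=1):
    """W_r between the empirical distributions of xs and ys (equal sizes)."""
    assert len(xs) == len(ys)
    n = len(xs)
    cost = sum(abs(x - y) ** r for x, y in zip(sorted(xs), sorted(ys)))
    return (cost / n) ** (1 / r)

print(wasserstein_1d([0.0], [1.0]))          # 1.0 (point masses at 0 and 1)
print(wasserstein_1d([0, 1, 2], [1, 2, 3]))  # 1.0 (a shift by 1)
```

For $r = 1$, this coincides with the $\L_1$ distance between the two empirical quantile functions.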
This paper studies the following problem:
\textbf{Formal Problem Statement:} Suppose $(\Omega,\rho)$ is a known metric space. Suppose $P$ is an unknown Borel probability distribution on $\Omega$, from which we observe $n$ IID samples $X_1,...,X_n \stackrel{IID}{\sim} P$. We are interested in studying the minimax rates at which $P$ can be estimated from $X_1,...,X_n$, in terms of the ($r^{th}$ power of the) $r$-Wasserstein loss. Specifically, we are interested in deriving finite-sample upper and lower bounds, in terms of only properties of the space $(\Omega,\rho)$, on the quantity
\begin{equation}
\inf_{\hat P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r \left( P, \hat P(X_1,...,X_n) \right) \right],
\label{exp:minimax_error}
\end{equation}
where the infimum is taken over all estimators $\hat P$ (i.e., (potentially randomized) functions $\hat P : \Omega^n \to \P$ of the data). In the sequel, we suppress the dependence of $\hat P = \hat P(X_1,...,X_n)$ in the notation.
\subsection{Definitions for Stating our Results}
Here, we give notation and definitions needed to state our main results in Sections~\ref{sec:upper_bounds} and \ref{sec:lower_bounds}.
Let $2^\Omega$ denote the power set of $\Omega$. Let $\SS \subseteq 2^{2^{\Omega}}$ denote the family of all Borel partitions of $\Omega$:
\[\SS := \left\{ \S \subseteq \Sigma \quad : \quad \Omega = \bigcup_{S \in \S} S \quad \text{ and } \quad \forall S \neq T \in \S, \; S \cap T = \emptyset \right\}.\]
We now define some metric notions that will later be useful for bounding Wasserstein distances:
\begin{definition}[Diameter and Separation of a Set, Resolution of a Partition]
For any set $S \subseteq \Omega$, the \emph{diameter $\Diam(S)$ of $S$} is defined by
$\Diam(S) := \sup_{x,y \in S} \rho(x,y)$,
and the \emph{separation $\Sep(S)$ of $S$} is defined by
$\Sep(S) := \inf_{x \neq y \in S} \rho(x,y)$.
If $\S \in \SS$ is a partition of $\Omega$, then the \emph{resolution $\operatorname{Res}(\S)$ of $\S$}, defined by
$\operatorname{Res}(\S) := \sup_{S \in \S} \Diam(S)$ is the largest diameter of any set in $\S$.
\end{definition}
We now define the covering and packing number of a metric space, which are classic and widely used measures of the size or complexity of a metric space \citep{dudley1967coveringNumbers,haussler1995sphere,zhou2002covering,zhang2002covering}. Our main convergence results will be stated in terms of these quantities, as well as the packing radius, which acts, approximately, as the inverse of the packing number.
\begin{definition}[Covering Number, Packing Number, and Packing Radius of a Metric Space]
The \emph{covering number $N : (0,\infty) \to \N$ of $(\Omega,\rho)$} is defined for all $\epsilon > 0$ by
\[N(\epsilon) := \min \left\{ |\S| : \S \in \SS \quad \text{ and } \quad \operatorname{Res}(\S) \leq \epsilon \right\}.\]
The \emph{packing number $M : (0,\infty) \to \N$ of $(\Omega,\rho)$} is defined for all $\epsilon > 0$ by
\[M(\epsilon) := \max \left\{ |S| : S \subseteq \Omega \quad \text{ and } \quad \Sep(S) \geq \epsilon \right\}.\]
Finally, the \emph{packing radius $R : \N \to [0,\infty]$} is defined for all $n \in \N$ by
\[R(n) := \sup\{ \Sep(S) : S \subseteq \Omega \quad \text{ and } \quad |S| \geq n\}.\]
Sometimes, we use the covering or packing number of a metric space, say $(\Theta, \tau)$, other than $(\Omega,\rho)$; in such cases, we write $N(\Theta;\tau;\epsilon)$ or $M(\Theta;\tau;\epsilon)$ rather than $N(\epsilon)$ or $M(\epsilon)$, respectively.
For specific $\epsilon > 0$, we will also refer to $N(\Theta;\tau;\epsilon)$ as the \emph{$\epsilon$-covering number of $(\Theta,\tau)$}.
\end{definition}
\begin{remark}
The covering and packing numbers of a metric space are closely related. In particular, for any $\epsilon > 0$, we always have
\begin{equation}
M(\epsilon) \leq N(\epsilon) \leq M(\epsilon/2).
\label{ineq:covering_packing_relationship}
\end{equation}
The packing number and packing radius also have a close approximate inverse relationship. In particular, for any $\epsilon > 0$ and $n \in \N$, we always have
\begin{equation}
R(M(\epsilon)) \geq \epsilon
\quad \text{ and } \quad
M(R(n)) \geq n.
\label{ineq:packing_number_radius_relationship}
\end{equation}
However, it is possible that $R(M(\epsilon)) > \epsilon$ or $M(R(n)) > n$.
\end{remark}
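To make these notions concrete, here is a hypothetical sketch (not from the paper) of the standard greedy construction of a maximal $\epsilon$-separated set in a finite metric space. The resulting set is an $\epsilon$-packing and, by maximality, every point of the space lies within distance $\epsilon$ of it, which is one way to see the close relationship between covering and packing quantities.

```python
# Hypothetical sketch (not from the paper): greedily build a maximal
# eps-separated subset of a finite metric space. The result is an
# eps-packing, and by maximality every point lies within distance < eps
# of some chosen center.

def greedy_separated_set(points, eps, dist):
    centers = []
    for p in points:
        if all(dist(p, c) >= eps for c in centers):
            centers.append(p)
    return centers

# The grid {0,...,4}^2 under the sup (Chebyshev) metric.
pts = [(i, j) for i in range(5) for j in range(5)]
chebyshev = lambda a, b: max(abs(a[0] - b[0]), abs(a[1] - b[1]))
S = greedy_separated_set(pts, 2, chebyshev)
print(len(S))  # 9: the sub-grid {0, 2, 4}^2
```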
Finally, when we consider unbounded metric spaces, we will require some concentration conditions on the probability distributions of interest in order to obtain useful results. Specifically, we use an appropriately generalized notion of the moments of the distribution:
\begin{remark}
We defined the covering number slightly differently from usual (using partitions rather than covers). However, the given definition is equivalent to the usual definition, since (a) any partition is itself a cover (i.e., a set $\C \subseteq 2^\Omega$ such that $\Omega \subseteq \bigcup_{C \in \C} C$), and (b), for any countable cover $\C := \{C_1,C_2,...\} \subseteq 2^\Omega$, there exists a partition $\S \in \SS$ with $|\S| \leq |\C|$ and each $S_i \subseteq C_i$, defined recursively by
$S_i := C_i \sminus \bigcup_{j = 1}^{i - 1} S_j$.
$\S$ is often called the \emph{disjointification} of $\C$.
\label{remark:disjointification}
\end{remark}
\begin{definition}[Metric Moments of a Probability Distribution]
For any $\ell \in [0,\infty]$, probability measure $P \in \P$, and $x \in \Omega$, the \emph{$\ell^{th}$ metric moment $m_{\ell,x}(P)$ of $P$ around $x$} is defined by
\[m_{\ell,x}(P)
:= \left( \E_{Y \sim P} \left[ \left( \rho(x,Y) \right)^\ell \right] \right)^{1/\ell} \in [0,\infty],\]
using the appropriate limit if $\ell = \infty$.
The chosen reference point $x$ only affects constant factors since,
\[\text{ for all } x,x' \in \Omega,
\quad \left| m_{\ell,x}^\ell(P) - m_{\ell,x'}^\ell(P) \right| \leq \left( \rho(x,x') \right)^\ell.\]
Note that, if $\Omega$ has linear structure with respect to which $\rho$ is translation-invariant (e.g., if $(\Omega,\rho)$ is a Fr\'echet space), we can state our results more simply in terms of $m_\ell(P) := \inf_{x \in \Omega} m_{\ell,x}(P)$.
As an example, if $\Omega = \R$ and $\rho(x,y) = |x - y|$, then $m_2(P)$ is precisely the standard deviation of $P$.
\end{definition}
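For a quick sanity check of this definition (hypothetical, not from the paper), one can compute empirical metric moments directly; on $(\R, |\cdot|)$ with $\ell = 2$ and $x$ equal to the mean, this recovers the (population-convention) standard deviation mentioned above.

```python
# Hypothetical sketch (not from the paper): the ell-th metric moment of
# an empirical distribution around a reference point x.

def metric_moment(samples, x, ell=2):
    return (sum(abs(x - y) ** ell for y in samples) / len(samples)) ** (1 / ell)

data = [0.0, 2.0]
# Around the mean x = 1: sqrt((1^2 + 1^2) / 2) = 1, the standard deviation.
print(metric_moment(data, 1.0))  # 1.0
```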
\section{Related Work}
\label{sec:related_work}
A long line of work~\citep{dudley1969speed,ajtai1984optimalMatchings,canas2012learning,dereich2013constructive,boissard2014mean,fournier2015rate,weed2017sharp,lei2018convergence} has studied the rate of convergence of the empirical distribution to the population distribution in Wasserstein distance. In terms of upper bounds, the most general and tight results are the recent works of \citet{weed2017sharp} and \citet{lei2018convergence}.
As we describe below, while these two papers overlap significantly, neither supersedes the other, and our upper bound combines the key strengths of both.
The results of \citet{weed2017sharp} are expressed in terms of a particular notion of dimension, which they call the \textit{Wasserstein dimension} $s$, since they derive convergence rates of order $n^{-r/s}$ (matching the $n^{-r/D}$ rate achieved on the unit cube $[0,1]^D$). The definition of $s$ is complex (e.g., it depends on the sample size $n$), but \citet{weed2017sharp} show that, in many cases, $s$ converges to certain common definitions of the intrinsic dimension of the support of the distribution.
This paper overcomes three main limitations of \citet{weed2017sharp}:
\begin{enumerate}[nolistsep,leftmargin=2em]
\item
The upper bounds of \citet{weed2017sharp} apply only to totally bounded metric spaces. In contrast, our upper bounds permit unbounded metric spaces under the assumption that the distribution $P$ has some finite moment $m_\ell(P) < \infty$. The results of \citet{weed2017sharp} correspond to the special case $\ell = \infty$.
\item
Their main upper bound (their Proposition 10) only holds when $s > 2r$, with constant factors diverging to infinity as $s \downarrow 2r$. Hence, their rates are loose when $r$ is large or when the data have low intrinsic dimension.
In contrast, our upper bound is tight even when $s \leq 2r$.
\item
As we discuss in our Example~\ref{ex:Lipschitz_Ball}, the upper bound of \citet{weed2017sharp} becomes loose as the Wasserstein dimension $s$ approaches $\infty$, limiting its utility in infinite-dimensional function spaces. In contrast, we show that our upper and lower bounds match for several standard function spaces.
\end{enumerate}
Intuitively, we find that the finite-sample bounds of \citet{weed2017sharp} are tight when the intrinsic dimension of the data lies in an interval $[a, b]$ with $2r < a < b < \infty$, but that they can be loose outside this range. In contrast, our results give tight rates for a larger class of problems.
On the other hand, \citet{lei2018convergence} focuses on the case where $\Omega$ is a (potentially unbounded and infinite-dimensional) Banach space, under moment assumptions on the distributions.
Thus, while the results of \citet{lei2018convergence} cover interesting cases such as infinite-dimensional Gaussian processes, they do not demonstrate that convergence rates improve when the intrinsic dimension of the support of $P$ is smaller than that of $\Omega$ (unless this support lies within a \textit{linear} subspace of $\Omega$). As a simple example, if the distribution is in fact supported on a finite set of $k$ linearly independent points, the bound of \citet{lei2018convergence} implies only a much slower convergence rate, whereas we give a bound of order $O(\sqrt{k/n})$. Although we do not delve into this here, our results (unlike those of \citet{lei2018convergence}) should also benefit from the multi-scale behavior discussed in Section 5 of \citet{weed2017sharp}; namely, much faster convergence rates are often observed for small $n$ than for large $n$. This may help explain why algorithms such as functional $k$-means~\citep{garcia2015functionalKMeans} work in practice, even though the results of \citet{lei2018convergence} imply only a slow convergence rate of $O\left( (\log n)^{-p} \right)$, for some constant $p > 0$, in this case.
Under similarly general conditions, \citet{sriperumbudur2010integralProbabilityMetrics,sriperumbudur2012empirical} studied the related problem of estimating the Wasserstein distance between two unknown distributions given samples from those two distributions.
Since one can estimate Wasserstein distances by plugging in empirical distributions,
our upper bounds imply upper bounds for Wasserstein distance estimation. These bounds are tighter, in several cases, than those of \citet{sriperumbudur2010integralProbabilityMetrics,sriperumbudur2012empirical}; for example, when $\Omega = [0,1]^D$ is the Euclidean unit cube, we give a rate of $n^{-1/D}$, whereas they give a rate of $n^{-\frac{1}{D + 1}}$. Minimax rates for this problem are currently unknown, and it is presently unclear to us under what conditions recent results on estimation of $\L_1$ distances between discrete distributions~\citep{jiao2017minimaxL1} might imply an improved rate as fast as $\left( n \log n \right)^{-1/D}$ for estimation of Wasserstein distance.
To the best of our knowledge, minimax lower bounds for distribution estimation under Wasserstein loss remain unstudied, except in the very specific case when $\Omega = [0,1]^D$ is the Euclidean unit cube and $r = 1$~\citep{liang2017well}. As noted above, most previous works have focused on studying convergence rate of the empirical distribution to the true distribution in Wasserstein distance. For this rate, several lower bounds have been established, matching known upper bounds in many cases. However, many distribution estimators besides the empirical distribution can be considered. For example, it is tempting (especially given the infinite dimensionality of the distribution to be estimated) to try to reduce variance by techniques such as smoothing or importance sampling~\citep{bucklew2013introduction}. Our lower bound results, given in Section~\ref{sec:lower_bounds}, imply that the empirical distribution is already minimax optimal, up to constant factors, in many cases.
\section{Upper Bounds}
\label{sec:upper_bounds}
In this section, we present our main upper bounds on the convergence rate of the empirical distribution to the true distribution in Wasserstein distance. We begin by presenting a simpler result for the case of totally bounded metric spaces, followed by a more complex but general result for arbitrary metric spaces under finite-moment assumptions on the distribution.
\begin{theorem}
Let $(\Omega,\rho)$ be a metric space on which $P$ is a Borel probability measure. Let $\hat P$ denote the empirical distribution of $n$ IID samples $X_1,...,X_n \stackrel{IID}{\sim} P$, given by
\begin{equation}
\hat P(S) := \frac{1}{n} \sum_{i = 1}^n 1_{\{X_i \in S\}}, \quad \forall S \in \Sigma.
\label{eq:empirical_distribution}
\end{equation}
Then, for any non-increasing sequence $\{\epsilon_k\}_{k = 0}^{K} \in (0,\infty)^{K + 1}$ with $\epsilon_0 = \Diam(\Omega)$,
\[\E \left[ W_r^r(P, \hat P) \right] \leq \epsilon_K^r + \frac{1}{\sqrt{n}} \sum_{k = 1}^K \left( \sum_{j = k}^K 2^{k - j} \epsilon_{j - 1} \right)^r \sqrt{N(\epsilon_k) - 1}.\]
\label{thm:expectation_bound}
\end{theorem}
In the proof of the above theorem, the sequence $\{\epsilon_k\}_{k = 0}^{K}$ gives the resolutions of a sequence of increasingly fine partitions of $\Omega$. The basic idea of the proof is to recursively bound the error over each partition at resolution $\epsilon_j$ in terms of $\epsilon_j$ and the error over the partition of resolution $\epsilon_{j + 1}$. The parameter $K$ restricts us to a particular finite resolution, with optimal value typically increasing with $n$. Note that this ``multi-resolution'' proof approach has been utilized in several special cases, apparently originating in the analysis of \citet{dudley1969speed}.
Our Theorem~\ref{thm:expectation_bound} is most comparable to the upper bound (Proposition 10) of \citet{weed2017sharp}.
Theorem~\ref{thm:expectation_bound} requires $(\Omega,\rho)$ to be totally bounded in order for $N(\epsilon)$ to be finite. Next, we present a more complex bound, which, under the additional assumption that $P$ has some number of finite moments, is often finite even when $(\Omega,\rho)$ is \textit{not} totally bounded. The key idea of the proof is to partition $\Omega = \bigcup_{k = 0}^\infty B_k$ into bounded subsets, over each of which we can apply a bound similar to Theorem~\ref{thm:expectation_bound}. Thus, instead of the covering number $N(\epsilon)$ of $(\Omega,\rho)$, this result uses covering numbers $N(B_k,\rho,\epsilon)$ of a partition $\Omega = \bigcup_{k = 0}^\infty B_k$ into totally bounded subsets.
\begin{theorem}[General Upper Bound for Unbounded Metric Spaces]
Let $x_0 \in \Omega$ and suppose $m_{\ell,x_0}(P) \in [1, \infty)$. Let $J \in \N$. Fix two real-valued sequences $\{w_k\}_{k \in \N}$ and $\{\epsilon_j\}_{j \in [J]}$, of which $\{w_k\}_{k \in \N}$ is non-decreasing with $w_0 = 0$ and $\lim_{k \to \infty} w_k = \infty$, and $\{\epsilon_j\}_{j \in [J]}$ is non-increasing.
For each $k \in \N$, define $B_k(x_0) := \left\{ y \in \Omega : w_k \leq \rho(x_0, y) < w_{k + 1} \right\}$.
Then,
\begin{align*}
\E \left[ W_r^r(P, \hat P) \right]
& \leq m_{\ell,x_0}^\ell(P) \sum_{k \in \N} w_k^{-\ell} \left( \epsilon_J \right)^r + 2^r w_k^{r - \ell/2} \min \left\{ 2w_k^{-\ell/2}, \sqrt{\frac{1}{n}} \right\} \\
& \hspace{2cm} + \sum_{j = 1}^J \left( \sum_{t = j}^J 2^{J - t} \epsilon_t \right)^r \min
\left\{ 2w_k^{-\ell}, \sqrt{\frac{w_k^{-\ell}}{n} N(B_k,\rho,\epsilon_j)} \right\}.
\end{align*}
\label{thm:unbounded_upper_bound}
\end{theorem}
In the above, $w_k$ corresponds to radii of the partition of $\Omega = \bigcup_{k = 0}^\infty B_k$ into a sequence of ``spherical shells'', whereas $\epsilon_j$, as in the previous result, corresponds to resolutions of partitions of the $B_k$'s. As with $K$ in the previous result, $J$ is used to ensure that we restrict ourselves to a particular finite resolution. The $\min$ terms appear because, for large $k$, the error is controlled by the fact that $P(B_k)$ is small (due to the moment assumption), rather than using a covering of $B_k$.
\section{Lower Bounds}
\label{sec:lower_bounds}
In this section, we provide a minimax lower bound (over the family $\P$ of all Borel distributions on $\Omega$) for distribution estimation in Wasserstein distance (that is, the quantity
\begin{equation}
\inf_{\hat P : \Omega^n \to \P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r \left( P, \hat P \right) \right],
\label{exp:distribution_estimation_minimax}
\end{equation}
where the infimum is over all estimators $\hat P$ of $P$ (i.e., all (potentially randomized) functions $\hat P : \Omega^n \to \P$)). Our bound depends primarily on the packing radius $R$ of $(\Omega,\rho)$, and, presently, we handle only the case without finite-moment assumptions on $P$. However, we show in the next section that this often implies tight lower bounds when enough (roughly, $\ell \geq \max\{D,2r\}$) moments exist.
\begin{theorem}
Let $(\Omega,\rho)$ be a metric space, on which $\P$ is the set of Borel probability measures. Then,
\[\inf_{\hat P : \Omega^n \to \P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r(P, \hat P(X_1,...,X_n)) \right]
\geq c_r \sup_{k \in [32n]} R^r(k) \sqrt{\frac{k - 1}{n}},\]
where $c_r = \frac{3\log 2}{2^{r + 12}}$ depends only on $r$.
\label{thm:Wasserstein_distribution_estimation_lower_bound}
\end{theorem}
\section{Example Applications}
\label{sec:examples}
Our theorems in the previous sections are quite abstract and have many tuning parameters. Thus, we conclude by exploring applications of our results to cases of interest. In each of the following examples, $P$ is an unknown Borel probability measure over the specified $\Omega$, from which we observe $n$ IID samples. For upper bounds, $\hat P$ denotes the empirical distribution~\eqref{eq:empirical_distribution} of these samples.
\begin{example}[Finite Space]
Consider the case where $\Omega$ is a finite set, over which $\rho$ is the discrete metric given, for some $\delta > 0$, by
$\rho(x, y) = \delta 1_{\{x \neq y\}}$, for all $x,y \in \Omega$.
Then, for any $\epsilon \in (0,\delta)$, the covering number is $N(\epsilon) = |\Omega|$. Thus, setting $K = 1$ and sending $\epsilon_1 \to 0$ in Theorem~\ref{thm:expectation_bound} gives
\[\E \left[ W_r^r(P, \hat P) \right] \leq \delta^r \sqrt{\frac{|\Omega| - 1}{n}}.\]
On the other hand, $R(|\Omega|) = \delta$, and so, setting $k = |\Omega|$ in Theorem~\ref{thm:Wasserstein_distribution_estimation_lower_bound} yields
\[\inf_{\hat P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r(P, \hat P) \right]
\gtrsim \delta^r \sqrt{\frac{|\Omega| - 1}{n}}.\]
\label{ex:discrete_bound}
\end{example}
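As a numerical sanity check of this example (a hypothetical sketch, not part of the paper; it assumes NumPy and SciPy are available): under the discrete metric, optimal transport pays $\delta^r$ per unit of mass that must move, so $W_r^r(P,Q) = \delta^r \cdot \mathrm{TV}(P,Q)$. We can verify this on a three-point space by solving the transport linear program directly.

```python
# Hypothetical check (not from the paper): under rho(x, y) = delta * 1{x != y},
# W_r^r(P, Q) = delta^r * TV(P, Q). We solve the optimal-transport LP on a
# 3-point space and compare with the total-variation formula.
import numpy as np
from scipy.optimize import linprog

delta, r = 2.0, 1
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
k = len(p)

cost = (delta ** r) * (1.0 - np.eye(k))  # c_ij = delta^r * 1{i != j}
# Marginal constraints on the coupling mu, flattened row-major to k*k variables.
A_eq = np.zeros((2 * k, k * k))
for i in range(k):
    A_eq[i, i * k:(i + 1) * k] = 1.0  # sum_j mu_ij = p_i  (row sums)
    A_eq[k + i, i::k] = 1.0           # sum_i mu_ij = q_i  (column sums)
b_eq = np.concatenate([p, q])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
tv = 0.5 * np.abs(p - q).sum()
print(res.fun, (delta ** r) * tv)  # both equal 0.6 here
```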
\begin{example}[Euclidean Space]
Consider the case where $\Omega = \R^D$ and $\rho$ is the Euclidean metric. Assuming $\ell > r$, using the fact that $N \left( B_k, \rho, \epsilon \right) \leq \left( \frac{3 w_k}{\epsilon} \right)^D$~\citep{pollard1990empirical} and plugging $\epsilon_j = 2^{-2j}$ and $w_k = 2^k$ into Theorem~\ref{thm:unbounded_upper_bound} gives (after a straightforward but very tedious calculation) a constant $C_{D,\ell,r}$ depending only on $D$, $\ell$, and $r$ such that
\begin{equation}
\E \left[ W_r^r(P, \hat P) \right]
\leq C_{D,\ell,r} m_\ell^\ell(P) \left( n^{-\frac{\ell - r}{\ell}} + 2^{-2Jr} + \frac{1}{\sqrt{n}} \sum_{j = 1}^J 2^{(D - 2r)j} \right).
\label{ineq:general_Euclidean_bound}
\end{equation}
Of these three terms, the first depends only on the number $\ell$ of finite moments $P$ is assumed to have and the order $r$ of the Wasserstein distance, whereas the second and third terms depend on the choice of the parameter $J$. The optimal choice of $J$ scales with the sample size $n$ at a rate depending on the sign of $D - 2r$. Specifically, if $D = 2r$, then setting $J \asymp \frac{1}{4r} \log_2 n$ gives a rate of
$\E \left[ W_r^r(P, \hat P) \right]
\lesssim n^{-\frac{\ell - r}{\ell}} + n^{-1/2} \log n$.
If $D \neq 2r$, then~\eqref{ineq:general_Euclidean_bound} reduces to
\[\E \left[ W_r^r(P, \hat P) \right]
\leq C_{D,\ell,r} m_\ell^\ell(P) \left( n^{-\frac{\ell - r}{\ell}} + 2^{-2Jr} + \frac{1}{\sqrt{n}} \cdot \frac{2^{(D - 2r)J} - 1}{2^{D - 2r} - 1} \right).\]
Then, if $D < 2r$, sending $J \to \infty$ gives $\E \left[ W_r^r(P, \hat P) \right] \lesssim n^{-\frac{\ell - r}{\ell}} + n^{-1/2}$. Finally, if $D > 2r$, setting $J \asymp \frac{1}{2D} \log_2 n$ gives $\E \left[ W_r^r(P, \hat P) \right] \lesssim n^{-\frac{\ell - r}{\ell}} + n^{-\frac{r}{D}}$.
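For concreteness (a back-of-the-envelope calculation, not verbatim from the proof): the choice $J \asymp \frac{1}{2D} \log_2 n$ arises from balancing the discretization term $2^{-2Jr}$ against a sampling term of order $2^{(D - 2r)J}/\sqrt{n}$:
\[2^{-2Jr} \asymp \frac{2^{(D - 2r)J}}{\sqrt{n}}
\quad \Longleftrightarrow \quad
2^{DJ} \asymp \sqrt{n}
\quad \Longleftrightarrow \quad
J \asymp \frac{\log_2 n}{2D},\]
at which point both terms are of order $2^{-2Jr} \asymp n^{-r/D}$.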
To summarize,
\[\E \left[ W_r^r(P, \hat P) \right]
\lesssim n^{-\frac{\ell - r}{\ell}} + \left\{
\begin{array}{ll}
n^{-1/2} & \text { if } 2r > D \\
n^{-1/2} \log n & \text { if } 2r = D \\
n^{-r/D} & \text { if } 2r < D
\end{array}
\right.\]
(reproducing Theorem 1 of \citet{fournier2015rate}). On the other hand, restricting to distributions supported on the unit cube $[0,1]^D \subseteq \R^D$, it is easy to check that the packing radius $R$ of $([0,1]^D, \rho)$ satisfies $R(n) \geq n^{-1/D}$ and $R(2) \geq \sqrt{D}$. Thus, Theorem~\ref{thm:Wasserstein_distribution_estimation_lower_bound} with $k = n$ and $k = 2$ yields
\[\inf_{\hat P} \sup_{P \in \P} \E \left[ W_r^r(\hat P, P) \right]
\gtrsim \max \left\{ (n + 1)^{-r/D}, D^{r/2} n^{-1/2} \right\}.\]
Together, these bounds give the following minimax rates for distribution estimation in Wasserstein loss:
\[\inf_{\hat P} \sup_{P \in \P} \E \left[ W_r^r(\hat P, P) \right]
\asymp \left\{
\begin{array}{ll}
n^{-1/2} & \text { if } \ell > 2r > D \\
n^{-r/D} & \text { if } 2r < D, \ell > \frac{Dr}{D - r}
\end{array}
\right.\]
When $2r = D$ and $\ell > 2r$, our upper and lower bounds are separated by a factor of $\log n$. The main result of \citep{ajtai1984optimalMatchings} implies that, for the case $D = 2$ and $r = 1$, the empirical distribution converges as $n^{-1/2} \log n$, suggesting that the $\log n$ factor in our upper bound may be tight. Further generalization of Theorem~\ref{thm:Wasserstein_distribution_estimation_lower_bound} is needed to give lower bounds when both $D, \ell \leq 2r$ or when $D > 2r$ and $\ell \leq \frac{Dr}{D - r}$.
\label{ex:unit_cube_lower_bound}
\end{example}
The next example demonstrates how the rate of convergence in Wasserstein metric depends on properties of the metric space $(\Omega,\rho)$ at both large and small scales. Specifically, if we discretize $\Omega$, then the phase transition at $2r = D$ disappears.
\begin{example}
Suppose $\Omega = \mathbb{Z}^D$ is the $D$-dimensional integer grid and $\rho$ is the $\ell_\infty$ metric (given by $\rho(x,y) = \max_{j \in [D]} |x_j - y_j|$). Since $\Z^D \subseteq \R^D$ and the $\ell_\infty$ and Euclidean metrics are equivalent up to a factor of $\sqrt{D}$, the upper bounds from Example~\ref{ex:unit_cube_lower_bound} clearly apply, up to a factor of $\sqrt{D}$. However, we also have the fact that, whenever $\epsilon < 1$, $N(B_k,\rho,\epsilon) \lesssim w_k^D$. Therefore, setting $J = 0$, $\epsilon_0 = 0$, and $w_k = 2^k$ in Theorem~\ref{thm:unbounded_upper_bound} gives, for a constant $C_{D,\ell,r}$ depending only on $D$, $\ell$, and $r$,
\begin{align*}
\E \left[ W_r^r(P, \hat P) \right]
& \leq C_{D,\ell,r} m_\ell^\ell(P) \left( n^{-\frac{\ell - r}{\ell}} + \sum_{k \in \N} \sqrt{\frac{2^{(D - \ell)k}}{n}} \right).
\end{align*}
When $\ell > D$, this reduces to $\E \left[ W_r^r(P, \hat P) \right] \lesssim n^{-\frac{\ell - r}{\ell}} + n^{-1/2}$, giving a tighter rate than in Example~\ref{ex:unit_cube_lower_bound} when $2r \leq D < \ell$.
To the best of our knowledge, no prior results in the literature imply this fact.
\end{example}
Finally, we consider distributions over an infinite dimensional space of smooth functions.
\begin{example}[H\"older Ball, $\L_\infty$ Metric]
Suppose that, for some $\alpha \in (0,1]$,
\[\Omega
:= \left\{ f : [0,1]^D \to [-1,1] \quad \middle| \quad \forall x,y \in [0,1]^D, \quad |f(x) - f(y)| \leq \|x - y\|_2^\alpha \right\}\]
is the class of unit $\alpha$-H\"older functions on the unit cube and $\rho$ is the $\L_\infty$-metric given by
\[\rho(f,g) = \sup_{x \in [0,1]^D} |f(x) - g(x)|, \quad \text{ for all } f,g \in \Omega.\]
The covering and packing numbers of $(\Omega,\rho)$ are well known to be of order $\exp \left( \epsilon^{-D/\alpha} \right)$ \citep{devore1993approximation}; specifically, there exist positive constants $0 < c_1 < c_2$ such that, for all $\epsilon \in (0,1)$,
\[c_1 \exp \left( \epsilon^{-D/\alpha} \right)
\leq M(\epsilon)
\leq N(\epsilon)
\leq c_2 \exp \left( \epsilon^{-D/\alpha} \right).\]
Since $\Diam(\Omega) = 2$, applying Theorem~\ref{thm:expectation_bound} with $K = 1$ and
\[\epsilon_1
= \left( \frac{1}{2} \log n \right)^{-\frac{\alpha}{D}}
\quad \text{ gives } \quad
\E \left[ W_r^r(P, \hat P) \right]
\lesssim \left( \log n \right)^{\frac{- \alpha r}{D}}.\]
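To see where this rate comes from (a back-of-the-envelope calculation, not verbatim from the proof): the sampling term $\sqrt{N(\epsilon_1)}/\sqrt{n}$ in Theorem~\ref{thm:expectation_bound} remains controlled only while $N(\epsilon_1) \lesssim n$, i.e., while
\[\exp \left( \epsilon_1^{-D/\alpha} \right) \lesssim n
\quad \Longleftrightarrow \quad
\epsilon_1 \gtrsim \left( \log n \right)^{-\alpha/D},\]
and the smallest achievable discretization term is then $\epsilon_1^r \asymp \left( \log n \right)^{-\alpha r/D}$.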
Conversely, Inequality~\eqref{ineq:packing_number_radius_relationship} implies $R(n) \geq \left( \log(n/c_1) \right)^\frac{-\alpha}{D}$, and so setting $k = n$ in Theorem~\ref{thm:Wasserstein_distribution_estimation_lower_bound} gives
\[\inf_{\hat P} \sup_{P \in \P} \E_{X_1,...,X_n \stackrel{IID}{\sim} P} \left[ W_r^r(P, \hat P) \right]
\gtrsim \left( \frac{1}{\log(n/c_1)} \right)^\frac{\alpha r}{D},\]
showing that distribution estimation over $(\P,W_r^r)$ has the extremely slow minimax rate $\left( \log n \right)^\frac{-\alpha r}{D}$. Although we considered only $\alpha \in (0,1]$ (due to the notational complexity of defining higher-order H\"older spaces), analogous rates hold for all $\alpha > 0$. Also, since our rates depend only on covering and packing numbers of $\Omega$, identical rates can be derived for related Sobolev and Besov classes.
Note that the Wasserstein dimension used in the prior work \citep{weed2017sharp} is of order $\frac{D}{\alpha} \log n$, and so their upper bound (their Proposition 10) gives a rate of $n^{-\frac{\alpha r}{D \log n}} = \exp \left( -\frac{\alpha r}{D} \right)$, which fails to converge as $n \to \infty$.
\label{ex:Lipschitz_Ball}
\end{example}
One might wonder why we are interested in studying Wasserstein convergence of distributions over spaces of smooth functions, as in Example~\ref{ex:Lipschitz_Ball}.
Our motivation comes from the fact that smooth function spaces have been widely used for modeling images and other complex naturalistic signals \citep{mallat1999wavelet,peyre2011numerical,sadhanala2016totalVariation}.
Empirical breakthroughs have recently been made in generative modeling, particularly of images, based on the principle of minimizing Wasserstein distance between the empirical distribution and a large class of models encoded by a deep neural network~\citep{montavon2016wassersteinRBMs,arjovsky2017wassersteinGAN,gulrajani2017improved}.
However, little is known about theoretical properties of these methods; while there has been some work studying the optimization landscape of such models~\citep{nagarajan2017gradient,liang2018interaction}, we know of far less work exploring their \textit{statistical} properties.
Given the extremely slow minimax convergence rate we derived above, it must be the case that the class of distributions encoded by such models is far smaller or sparser than $\P$. An important avenue for further work is thus to explicitly identify stronger assumptions that can be made on distributions over interesting classes of signals, such as images, to bridge the gap between empirical performance and our theoretical understanding.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we derived upper and lower bounds for distribution estimation under Wasserstein loss. Our upper bounds generalize prior results and are tighter in certain cases, while our lower bounds are, to the best of our knowledge, the first minimax lower bounds for this problem. We also provided several concrete examples in which our bounds imply novel convergence rates.
\subsection{Future Work}
We studied minimax rates over the very large class $\P$ of all distributions with some number of finite moments.
It would be useful to understand how minimax rates improve when additional assumptions, such as smoothness, are made (see, e.g., \citep{liang2017well} for somewhat improved upper bounds under smoothness assumptions when $(\Omega,\rho)$ is the Euclidean unit cube).
Given the slow convergence rates we found over $\P$ in many cases, studying minimax rates under stronger assumptions may help to explain the relatively favorable empirical performance of popular distribution estimators based on empirical risk minimization in Wasserstein loss.
Moreover, while rates over all of $\P$ are of interest only for very weak metrics such as the Wasserstein distance (as stronger metrics may be infinite or undefined), studying minimax rates under additional assumptions will allow for a better understanding of the Wasserstein metric in relation to other commonly used metrics.
\newpage
\subsubsection*{Acknowledgments}
This work was partly supported by an NSF Graduate Research Fellowship DGE-1252522 to S.S.
{\small
\bibliographystyle{plainnat}
| {
"timestamp": "2018-05-24T02:05:10",
"yymm": "1802",
"arxiv_id": "1802.08855",
"language": "en",
"url": "https://arxiv.org/abs/1802.08855",
"abstract": "The Wasserstein metric is an important measure of distance between probability distributions, with applications in machine learning, statistics, probability theory, and data analysis. This paper provides upper and lower bounds on statistical minimax rates for the problem of estimating a probability distribution under Wasserstein loss, using only metric properties, such as covering and packing numbers, of the sample space, and weak moment assumptions on the probability distributions.",
"subjects": "Statistics Theory (math.ST); Information Theory (cs.IT); Machine Learning (cs.LG); Machine Learning (stat.ML)",
"title": "Minimax Distribution Estimation in Wasserstein Distance",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9711290922181331,
"lm_q2_score": 0.8267117962054049,
"lm_q1q2_score": 0.8028438761749772
} |
https://arxiv.org/abs/1310.2972 | Hypergraph Colouring and Degeneracy | A hypergraph is "$d$-degenerate" if every subhypergraph has a vertex of degree at most $d$. A greedy algorithm colours every such hypergraph with at most $d+1$ colours. We show that this bound is tight, by constructing an $r$-uniform $d$-degenerate hypergraph with chromatic number $d+1$ for all $r\geq2$ and $d\geq1$. Moreover, the hypergraph is triangle-free, where a "triangle" in an $r$-uniform hypergraph consists of three edges whose union is a set of $r+1$ vertices. | \section{Introduction}
\citet{EL75} proved the following fundamental result about colouring hypergraphs\footnote{A \emph{hypergraph} $G$ consists of a set $V(G)$ of \emph{vertices} and a set $E(G)$ of subsets of $V(G)$ called \emph{edges}. A hypergraph is \emph{$r$-uniform} if every edge has size $r$. A \emph{graph} is a $2$-uniform hypergraph. A hypergraph $H$ is a \emph{subhypergraph} of a hypergraph $G$ if $V(H)\subseteq V(G)$ and $E(H)\subseteq E(G)$. A \emph{colouring} of a hypergraph $G$ assigns one colour to each vertex in $V(G)$ such that no edge in $E(G)$ is monochromatic. The \emph{chromatic number} of $G$, denoted by $\chi(G)$, is the minimum number of colours in a colouring of $G$. A colouring of $G$ can be thought of as a partition of $V(G)$ into \emph{independent sets}, each containing no edge. The \emph{degree} of a vertex $v$ is the number of edges that contain $v$. See the textbook of \citet{BergeBook} for other notions of degree in a hypergraph.}
\begin{thm}[\citep{EL75}]
\label{EL75}
For fixed $r$, every $r$-uniform hypergraph with maximum degree $\Delta$ has chromatic number at most $O(\Delta^{1/(r-1)})$.
\end{thm}
Theorem~\ref{EL75} implies that every $r$-uniform hypergraph with $n$ vertices and maximum degree $\Delta$ has an independent set of size at least $\Omega(n/\Delta^{1/(r-1)})$. \citet{Spencer} proved the following stronger bound.
\begin{thm}[\citep{Spencer}]
\label{Spencer}
For fixed $r$, every $r$-uniform hypergraph with $n$ vertices and average degree $d$ has an independent set of size at least $\Omega(n/d^{1/(r-1)})$.
\end{thm}
A hypergraph is \emph{$d$-degenerate} if every subhypergraph has a vertex of degree at most $d$. A minimum-degree-greedy algorithm colours every $d$-degenerate hypergraph with at most $d+1$ colours. This bound is tight for graphs ($r=2$) since the complete graph on $d+1$ vertices is $d$-degenerate, and of course, has chromatic number $d+1$. However, this observation does not generalise for $r\geq3$. In particular, for the complete $r$-uniform hypergraph on $n$ vertices, every vertex has degree $\binom{n-1}{r-1}$, yet the chromatic number is $\ceil{\frac{n}{r-1}}$. Thus for $r\geq 3$, the degeneracy is much greater than the chromatic number.
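As an illustrative aside (our own Python sketch, not part of the paper), one can verify for a small case that partitioning the vertex set of the complete $r$-uniform hypergraph into classes of size $r-1$ yields a proper colouring with $\lceil \frac{n}{r-1}\rceil$ colours, while every vertex has the much larger degree $\binom{n-1}{r-1}$:

```python
from itertools import combinations
from math import comb, ceil

def is_proper(n, r, colour):
    # A colouring is proper if no edge (r-subset of range(n)) is monochromatic.
    return all(len({colour[v] for v in e}) > 1
               for e in combinations(range(n), r))

n, r = 9, 3
# Colour classes of size r - 1 cannot contain an edge of size r.
colour = [v // (r - 1) for v in range(n)]
assert is_proper(n, r, colour)
assert max(colour) + 1 == ceil(n / (r - 1))  # 5 colours suffice
assert comb(n - 1, r - 1) == 28              # yet every vertex has degree 28
```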
Given Theorems~\ref{EL75} and \ref{Spencer}, it seems plausible that for $r\geq 3$, every $r$-uniform $d$-degenerate hypergraph is $o(d)$-colourable. It even seems possible that every $r$-uniform $d$-degenerate hypergraph is $O(d^{1/(r-1)})$-colourable. This natural strengthening of Theorems~\ref{EL75} and \ref{Spencer} would (roughly) say that $G$ can be partitioned into independent sets, whose average size is that guaranteed by Theorem~\ref{Spencer}.
This note rules out these possibilities, by showing that the naive upper bound $\chi\leq d+1$ is tight for all $r$. This is the main conclusion of this paper. Moreover, we prove it for triangle-free hypergraphs, where a \emph{triangle} in an $r$-uniform hypergraph consists of three edges whose union is a set of $r+1$ vertices. Observe that this definition with $r=2$ is equivalent to the standard notion of a triangle in a graph (although there are other notions of a triangle in a hypergraph \citep{CM13}).
\begin{thm}
\label{Main}
For all $r\geq2$ and $d\geq 1$ there is a triangle-free $d$-degenerate $r$-uniform hypergraph with chromatic number $d+1$.
\end{thm}
Theorem~\ref{Main} and its proof are a generalisation of a result of \citet{AKS99}, who proved it for graphs ($r=2$). Of course, the complete graph $K_{d+1}$ is $d$-degenerate with chromatic number $d+1$. The triangle-free property was the main conclusion of their result. See \citep{KR-RSA10,Alon85} for other related results.
\section{Proof}
Theorem~\ref{Main} is a corollary of the following:
\begin{lem}
\label{MainMain}
Fix $r\geq 2$. For all $d\geq 1$ there is a triangle-free $d$-degenerate $r$-uniform hypergraph $G_d$ with chromatic number $d+1$, such that in every $(d+1)$-colouring of $G_d$ each colour is assigned to at least $r-1$ vertices.
\end{lem}
\begin{proof}
We proceed by induction on $d$. First consider the base case $d=1$. Let $n:=r(r-1)$. Let $V(G_1):=\{v_1,\dots,v_n\}$ and $E(G_1):=\{e_i:1\leq i\leq n-r+1\}$, where $e_i:=\{v_i,v_{i+1},\dots,v_{i+r-1}\}$. If $S\subseteq V(G_1)$ and $i$ is minimum such that $v_i\in S$, then $v_i$ has degree at most 1 in the subhypergraph induced by $S$. Thus $G_1$ is $1$-degenerate. If $e_i,e_j,e_k$ are three edges in $G_1$ with $i<j<k$, then $e_i\cup e_j\cup e_k$ includes the $r+2$ distinct vertices $v_i,v_{i+1},\dots,v_{i+r-1},v_{j+r-1},v_{k+r-1}.$ Hence $G_1$ is triangle-free. Consider a 2-colouring of $G_1$. Clearly, $G_1$ contains $r-1$ pairwise disjoint edges, each of which contains vertices of both colours. Hence each colour is assigned to at least $r-1$ vertices. This completes the base case.
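The base case is easy to check by machine. The following Python sketch (our own illustration, with ad hoc helper names) builds $G_1$ for a given $r$ and confirms $1$-degeneracy and triangle-freeness by brute force:

```python
from itertools import combinations

def G1_edges(r):
    # Edges of G_1: the intervals of length r in a path of n = r(r-1) vertices.
    n = r * (r - 1)
    return [frozenset(range(i, i + r)) for i in range(n - r + 1)]

def is_1_degenerate(n, edges):
    # Repeatedly peel vertices of induced degree <= 1; success iff 1-degenerate.
    alive = set(range(n))
    while alive:
        deg = {v: sum(v in e and e <= alive for e in edges) for v in alive}
        low = [v for v in alive if deg[v] <= 1]
        if not low:
            return False
        alive -= set(low)
    return True

def triangle_free(edges):
    # A triangle is three edges whose union has exactly r + 1 vertices.
    return all(len(a | b | c) != len(a) + 1
               for a, b, c in combinations(edges, 3))

r = 4
edges = G1_edges(r)
assert is_1_degenerate(r * (r - 1), edges)
assert triangle_free(edges)
```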
Now assume that $G_{d-1}$ is a triangle-free $(d-1)$-degenerate $r$-uniform hypergraph with chromatic number $d$, such that in every $d$-colouring of $G_{d-1}$ each colour is assigned to at least $r-1$ vertices.
Initialise $G_d$ to consist of $d+r-2$ disjoint copies $H_1,\dots,H_{d+r-2}$ of $G_{d-1}$. Let $S$ be a set of $(r-1)d$ vertices in $H_1\cup\dots\cup H_{d+r-2}$ such that $|S\cap V(H_i)|\in\{0,r-1\}$ for $1\leq i\leq d+r-2$. That is, $S$ contains exactly $r-1$ vertices from exactly $d$ of the $H_i$, and contains no vertices from the other $r-2$. Now, for each such set $S$, add $r-1$ \emph{new} vertices $v_1,\dots,v_{r-1}$ to $G_d$ and add the \emph{new} edge $(S\cap V(H_i))\cup\{v_j\}$ to $G_d$ whenever $|S\cap V(H_i)|=r-1$. Thus each new vertex has degree $d$. Since $H_1\cup \dots\cup H_{d+r-2}$ is $d$-degenerate, $G_d$ is also $d$-degenerate.
Suppose on the contrary that $G_d$ contains a triangle $T$. Since $G_{d-1}$ is triangle-free, at least one edge in $T$ is a new edge, which is contained in $V(H_i)\cup\{v\}$ for some $i\in[1,d+r-2]$ and some new vertex $v$. Each vertex in a triangle is in at least two of the edges of the triangle. However, by construction, $v$ is contained in only one edge contained in $V(H_i)\cup\{v\}$. Thus $G_d$ is triangle-free.
Since $H_1\cup\dots\cup H_{d+r-2}$ is $d$-colourable, and no edge contains only new vertices, assigning all the new vertices a $(d+1)$-th colour produces a $(d+1)$-colouring of $G_d$. Thus $\chi(G_d)\leq d+1$.
Suppose on the contrary that $G_d$ has a $(d+1)$-colouring with at most $r-2$ vertices of some colour, say `blue'. Say the other colours are $1,\dots,d$. At most $r-2$ copies of the $H_i$ contain blue vertices. Hence, without loss of generality, $H_1,\dots,H_d$ contain no blue vertices. That is, $H_1,\dots,H_d$ are $d$-coloured with colours $1,\dots,d$. By induction, $H_i$ contains a set $S_i$ of $r-1$ vertices coloured $i$ for $1\leq i\leq d$. By construction, there are $r-1$ vertices $v_1,\dots,v_{r-1}$ in $G_d$, such that $S_i\cup\{v_j\}$ is an edge of $G_d$ for $1\leq i\leq d$ and $1\leq j\leq r-1$. Since each such edge is not monochromatic, each vertex $v_j$ is coloured blue. In particular, there are at least $r-1$ blue vertices, which is a contradiction. Therefore, in every $(d+1)$-colouring of $G_d$, each colour class has at least $r-1$ vertices, as claimed. (In particular, $G_d$ has no $d$-colouring.)
\end{proof}
\section{An Open Problem}
We conclude with an open problem. The \emph{girth} of a graph (that contains some cycle) is the length of its shortest cycle. \citet{Erdos59} proved that there exists a graph with chromatic number at least $k$ and girth at least $g$, for all $k\geq 3$ and $g\geq 4$. (\citet{EH66} proved an analogous result for hypergraphs). Theorem~\ref{Main} strengthens this result for triangle-free graphs (that is, with girth $g=4$). This leads to the following question: Does there exist a $d$-degenerate graph with chromatic number $d+1$ and girth $g$, for all $d\geq 2$ and $g\geq4$? Odd cycles prove the $d=2$ case. An affirmative answer would strengthen the above result of \citet{Erdos59}. A negative answer would also be interesting---this would provide a non-trivial upper bound on the chromatic number of $d$-degenerate graphs with girth $g$.
\subsection*{Note} After this paper was written the author discovered the beautiful paper by \citet{KN99} which proves a strengthening of Theorem~\ref{Main} and includes the positive solution of the above open problem.
\subsection*{Acknowledgement} Thanks to an anonymous referee for pointing out an error in an earlier version of this paper.
\def\soft#1{\leavevmode\setbox0=\hbox{h}\dimen7=\ht0\advance \dimen7
by-1ex\relax\if t#1\relax\rlap{\raise.6\dimen7
\hbox{\kern.3ex\char'47}}#1\relax\else\if T#1\relax
\rlap{\raise.5\dimen7\hbox{\kern1.3ex\char'47}}#1\relax \else\if
d#1\relax\rlap{\raise.5\dimen7\hbox{\kern.9ex \char'47}}#1\relax\else\if
D#1\relax\rlap{\raise.5\dimen7 \hbox{\kern1.4ex\char'47}}#1\relax\else\if
l#1\relax \rlap{\raise.5\dimen7\hbox{\kern.4ex\char'47}}#1\relax \else\if
L#1\relax\rlap{\raise.5\dimen7\hbox{\kern.7ex
\char'47}}#1\relax\else\message{accent \string\soft \space #1 not
defined!}#1\relax\fi\fi\fi\fi\fi\fi}
| {
"timestamp": "2014-08-18T02:03:46",
"yymm": "1310",
"arxiv_id": "1310.2972",
"language": "en",
"url": "https://arxiv.org/abs/1310.2972",
"abstract": "A hypergraph is \"$d$-degenerate\" if every subhypergraph has a vertex of degree at most $d$. A greedy algorithm colours every such hypergraph with at most $d+1$ colours. We show that this bound is tight, by constructing an $r$-uniform $d$-degenerate hypergraph with chromatic number $d+1$ for all $r\\geq2$ and $d\\geq1$. Moreover, the hypergraph is triangle-free, where a \"triangle\" in an $r$-uniform hypergraph consists of three edges whose union is a set of $r+1$ vertices.",
"subjects": "Combinatorics (math.CO)",
"title": "Hypergraph Colouring and Degeneracy",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683462197239,
"lm_q2_score": 0.8128673223709251,
"lm_q1q2_score": 0.8027620372499097
} |
https://arxiv.org/abs/math/0609845 | On a Balanced Property of Compositions | Let $S$ be a finite set of positive integers with largest element $m$. Let us randomly select a composition $a$ of the integer $n$ with parts in $S$, and let $m(a)$ be the multiplicity of $m$ as a part of $a$. Let $0\leq r<q$ be integers, with $q\geq 2$, and let $p_{n,r}$ be the probability that $m(a)$ is congruent to $r$ modulo $q$. We show that if $S$ satisfies a certain simple condition, then $\lim_{n\to \infty} p_{n,r} =1/q$. In fact, we show that an obvious necessary condition on $S$ turns out to be sufficient. | \section{Introduction} A {\em composition} of the positive integer $n$
is a sequence $(a_1,a_2,\cdots ,a_k)$ of positive integers so that
$\sum_{i=1}^k a_i=n$. The $a_i$ are called the {\em parts} of the composition.
It is well-known \cite{bonaw}
that the number of compositions of $n$ into $k$ parts
is ${n-1\choose k-1}$.
From this fact, it is possible to prove the following.
Let $0\leq r<q$ be integers, with
$q\geq 2$, and let $P_{n,r}$ be the probability of the event that
the number of parts of a randomly selected composition of $n$ is congruent to
$r$ modulo $q$. Then $\lim_{n\rightarrow \infty} P_{n,r}
=1/q$. In other words,
\[\lim_{n\rightarrow \infty}
\frac{\sum_{i=0}^{\lfloor (n-1)/q \rfloor} {n-1\choose iq+r}}{2^{n-1}}=
\frac{1}{q}.\]
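Numerically the convergence is very fast. Here is an illustrative Python check of the displayed limit (our own sketch, not part of the paper), for $q=3$:

```python
from math import comb

def p(n, q, r):
    # sum_i C(n-1, iq+r) / 2^(n-1): probability that a uniform composition
    # of n has its (shifted) number of parts in a fixed residue class mod q.
    return sum(comb(n - 1, j) for j in range(r, n, q)) / 2 ** (n - 1)

vals = [p(200, 3, r) for r in range(3)]
assert abs(sum(vals) - 1.0) < 1e-9          # the three classes partition everything
assert all(abs(v - 1/3) < 1e-6 for v in vals)
```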
For $q=2$, this follows from the well-known fact that the number of
even-sized subsets of a non-empty
finite set is equal to the number of odd-sized
subsets of that set. For $q>2$, the statement can be proved, for example,
by the method we will use in this paper. The special cases of $q=3$ and $q=4$
appear as Exercises 4.41 and 4.42 in \cite{bonaw}.
In other words, all residue classes
are equally likely to occur. We will refer to this phenomenon by saying
that the number of parts of a randomly selected composition of $n$ is
{\em balanced}.
Now let us impose a restriction on the {\em part sizes} of the compositions
of $n$ that form our sample space by requiring that all part sizes come
from a finite set $S$. Is it still true that the number of parts
of a randomly selected composition of $n$ is balanced? That will
certainly depend on the restriction we impose on the part sizes.
For instance, if the set $S$ of allowed parts consists of odd numbers only,
then the number of parts will not be balanced. Indeed, if $q=2$,
then $P_{n,0}=1$ if $n$ is even, and $P_{n,0}=0$ if $n$ is odd.
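This parity obstruction is easy to see by machine; the following illustrative Python sketch (our own helper names) exhaustively splits the compositions of $n$ with parts in $\{1,3\}$ by the parity of the number of parts:

```python
def count_by_parity(n, parts=(1, 3)):
    # Count compositions of n with parts in `parts`, split by the
    # parity of the number of parts.
    even = odd = 0
    def rec(remaining, k):
        nonlocal even, odd
        if remaining == 0:
            if k % 2 == 0:
                even += 1
            else:
                odd += 1
            return
        for s in parts:
            if s <= remaining:
                rec(remaining - s, k + 1)
    rec(n, 0)
    return even, odd

# With only odd parts allowed, a sum of k odd numbers is congruent to k
# mod 2, so the parity of the number of parts is forced by the parity of n.
for n in range(1, 12):
    even, odd = count_by_parity(n)
    assert (odd == 0) if n % 2 == 0 else (even == 0)
```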
Let $m$ be the largest element of the set $S$ of allowed parts. It turns
out that it is easier to (directly)
work with the multiplicity $m(a)$ of $m$ as a part of the randomly
selected composition $a$ than with its number of parts.
For the special case when $S$ has only two elements, the
results obtained for $m(a)$ can then be translated back to results on
the number of parts of $a$.
In this paper, we prove that if $S$ satisfies a certain obviously necessary
condition, then the parameter $m(a)$ is balanced, as described in the
abstract.
That is, the remainder of $m(a)$ modulo $q$ is equally likely to take all
possible values.
\section{The Strategy}
Let $S=\{s_1,s_2,\cdots ,s_k=m\}$ be a finite set of positive integers
with at least two elements. Let us assume without loss of generality
that no integer larger than 1
divides all $k$ elements of $S$. Clearly, if $s_1,s_2, \cdots ,s_{k-1}$ are
all divisible by a certain prime $h>1$, then the multiplicity
$m(a)$ of $m=s_k$ as a part of a composition $a$ of $n$ is restricted.
Indeed, $n-m(a)m$ must be divisible by $h$. In particular, if $n$ is divisible
by $h$, then $m(a)m$ is divisible by $h$, and so $m(a)$ must be divisible
by $h$. Therefore, the parameter $m(a)$ is not balanced. Indeed, if $n$
is divisible by $h$, and we choose
$q=h$, then $p_{n,0}=1$, and $p_{n,r}=0$ for $r\neq 0$, while if $n$ is
not divisible by $h$ and $q=h$, then $p_{n,0}=0$.
So for $m(a)$ to be a balanced parameter, it is {\em necessary} for $S$ to
satisfy the condition that its smallest $k-1$ elements do not have
a proper common divisor. In the rest of this paper, we prove that this
condition is at the same time {\em sufficient} for $m(a)$ to be balanced.
{\em Unless otherwise stated}, let $S$ be a finite set of positive integers
with at least two elements, and let $S=\{s_1,s_2,\cdots ,s_k=m\}$, where
the $s_i$ are listed in increasing order. So $m$ is the largest element of
$S$. {\em Unless otherwise stated}, let us also assume that no integer
larger than 1 is a divisor of all of $s_1,s_2, \cdots ,s_{k-1}$.
Note that this means that if $|S|=2$, then $s_1=1$.
For a fixed positive integer $n$, let
$A_{S,n}(x)$ be the ordinary generating function of all compositions of the
integer $n$ into parts in $S$ according to their number of parts equal to
$m$. In other words, \[A _{S,n}(x)=\sum_a x^{m(a)}=
\sum_{d}a_{S,n,d}x^d,\] where
$a$ ranges over all compositions of $n$ into parts in $S$, and $m(a)$ is
the multiplicity of $m$ as a part in $a$. On the far right, $a_{S,n,d}$ is
the number of compositions $a$ of $n$ into parts in $S$ so that $m(a)=d$.
\begin{example} Let $S=\{1,3\}$. Then the first few polynomials $A_n(x)=
A_{S,n}(x)$
are as follows.
\begin{itemize}
\item $A_0(x)=A_1(x)=A_2(x)=1$,
\item $A_3(x)=x+1$, $A_4(x)=2x+1$, $A_5(x)=3x+1$.
\item $A_6(x)=x^2+4x+1$, $A_7(x)=3x^2+5x+1$, $A_8(x)=6x^2+6x+1$.
\end{itemize}
\end{example}
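These polynomials are easy to recompute by brute force. The following illustrative Python sketch (ours, not the paper's) enumerates the compositions directly and recovers the listed coefficients:

```python
def poly_A(n, S=(1, 3), m=3):
    # Coefficient list of A_{S,n}(x): coeffs[d] counts the compositions
    # of n with parts in S that have exactly d parts equal to m.
    coeffs = [0] * (n // m + 1)
    def rec(remaining, d):
        if remaining == 0:
            coeffs[d] += 1
            return
        for s in S:
            if s <= remaining:
                rec(remaining - s, d + (s == m))
    rec(n, 0)
    return coeffs

assert poly_A(4) == [1, 2]        # A_4(x) = 2x + 1
assert poly_A(6) == [1, 4, 1]     # A_6(x) = x^2 + 4x + 1
assert poly_A(8) == [1, 6, 6]     # A_8(x) = 6x^2 + 6x + 1
```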
Let $q\geq 2$ be a positive integer, and let $0\leq r\leq q-1$.
Let $A_{S,n,r}$ be the number of compositions $a$ of $n$ with parts in $S$
so that $m(a)$ is congruent
to $r$ modulo $q$. So
\[A_{S,n,r}=a_{S,n,r}+a_{S,n,q+r}+\cdots +a_{S,n,\lfloor n/q \rfloor q+r} .\]
In order to simplify the presentations of our results,
we will first discuss the special case when $n$ is divisible by $q$ and
$r=0$. Let $w$ be a primitive $q$th root of unity. Then
\begin{eqnarray} \label{unitroots}
\sum_{t=0}^{q-1} A_{S,n}(w^t) & = & \sum_{t=0}^{q-1}
\sum_{d= 0}^{n/m} a_{S,n,d} w^{td} \\
& = & \sum_{d=0}^{n/m} \sum_{t=0}^{q-1}
a_{S,n,d} w^{td} \\
& = & \sum_{d=0}^{n/m} a_{S,n,d} \sum_{t=0}^{q-1} w^{td}.
\end{eqnarray}
Using the summation formula of a geometric progression, we get that
\[ \sum_{t=0}^{q-1} (w^{d})^t= \left\{ \begin{array}{l@{\ }l}
0 \hbox{ if $w^d\neq 1$, that is, if $q\nmid d$}, \\
\\ q \hbox{ if $w^d =1$, that is, if $q|d$.
}
\end{array}\right.
\]
Therefore, (\ref{unitroots}) reduces to
\[
\sum_{t=0}^{q-1} A_{S,n}(w^t) =
q\cdot \sum_{j\geq 0} a_{S,n,jq}
= qA_{S,n,0},\]
so that
\begin{equation}
\label{forma0} \frac{1}{q} \sum_{t=0}^{q-1} A_{S,n}(w^t)=A_{S,n,0}.
\end{equation}
So in order to find the approximate value of $A_{S,n,0}$, it suffices to
find the approximate values of $A_{S,n}(w^t)$, for $0\leq t\leq q-1$, and
for a primitive root of unity $w$. The number $A_{S,n}$
of {\em all} compositions
of $n$ into parts in $S$ is equal to $A_{S,n}(1)$, so we will need that value
as well, in order to compute the ratio $A_{S,n,0}/A_{S,n}$.
Finally, note that if $n$ is not divisible by $q$, but $r=0$,
then the same argument
applies, and $\frac{1}{q} \sum_{t=0}^{q-1} A_{S,n}(w^t)
=A_{S,n,0}$ still holds.
If $r\neq 0$, then instead of computing $\sum_{t=0}^{q-1} A_{S,n}(w^t)$, we
compute \[T_n(w)=\sum_{t=0}^{q-1} A_{S,n}(w^t)w^{-rt}
=\sum_{d= 0}^{n/m} a_{S,n,d} \sum_{t=0}^{q-1} w^{t(d-r)}.\]
The inner sum $\sum_{t=0}^{q-1} w^{t(d-r)}$ is 0, unless
$w^{d-r}=1$, that is, unless $d-r$ is divisible by $q$. If $d-r$ is divisible
by $q$, then the inner sum is $q$. This shows again that
\begin{equation}
\label{generalc} \frac{1}{q} \sum_{t=0}^{q-1} A_{S,n}(w^t)w^{-tr}=A_{S,n,r}.
\end{equation}
Therefore, computing $A_{S,n}(w^t)$
will be useful in the general case as well.
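The identity (\ref{generalc}) is easy to test numerically. Below is an illustrative Python sketch (our own, specialised to $S=\{1,3\}$) that evaluates $A_{S,n}$ at the $q$th roots of unity and recovers the counts $A_{S,n,r}$:

```python
import cmath

def A_of(n, x):
    # A_{S,n}(x) for S = {1, 3}, via the recurrence A_n = A_{n-1} + x*A_{n-3}
    # with A_0 = A_1 = A_2 = 1.
    A = [1, 1, 1]
    for k in range(3, n + 1):
        A.append(A[k - 1] + x * A[k - 3])
    return A[n]

def count_residue(n, q, r):
    # A_{S,n,r} = (1/q) sum_t A(w^t) w^{-tr}, w a primitive q-th root of unity.
    w = cmath.exp(2j * cmath.pi / q)
    total = sum(A_of(n, w ** t) * w ** (-r * t) for t in range(q))
    return round((total / q).real)

# A_6(x) = x^2 + 4x + 1: m(a) is even for 2 compositions and odd for 4.
assert count_residue(6, 2, 0) == 2
assert count_residue(6, 2, 1) == 4
assert sum(count_residue(6, 3, r) for r in range(3)) == A_of(6, 1)
```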
\section{Linear Recurrence Relations}
In order to compute the values of $A_{S,n}(x)$ for various values of $x$,
we can keep $x$ fixed, and let $n$ grow.
The connection among the polynomials $A_{S,n}(x)$
is explained by the following Proposition.
\begin{proposition} Let $S=\{s_1,s_2,\cdots ,s_k=m\}$, with $k\geq 2$. Then
for all positive integers $n\geq m$ (with the convention that $A_{S,0}(x)=1$, counting the empty composition), the polynomials $A_{S,n}(x)$ satisfy the
recurrence relation
\begin{equation} \label{recur}
A_{S,n}(x)=A_{S,n-s_1}(x)+A_{S,n-s_2}(x)+\cdots +A_{S,n-s_{k-1}}(x)+
xA_{S,n-m}(x).
\end{equation}
\end{proposition}
\begin{proof}
Let $a$ be a composition of $n$ with parts in $S$. If the first part of
$a$ is $s_i$, for some $i\in [1,k-1]$, then the rest of $a$ forms a
composition of $n-s_i$ with parts in $S$ in which the multiplicity of $m$ as
a part is still $m(a)$. These compositions of $n$
are counted by $A_{S,n-s_i}(x)$.
If the first part of $a$ is $m$, then the rest of $a$ forms a
composition of $n-m$ with parts in $S$ in which the multiplicity of $m$ as
a part is $m(a)-1$. These compositions of $n$ are counted by
$xA_{S,n-m}(x)$.
\end{proof}
\begin{example} If $S=\{1,3\}$, then (\ref{recur}) reduces to
\begin{equation} \label{srecur} A_{S,n}(x)= A_{S,n-1}(x)+x A_{S,n-3}(x).
\end{equation}
\end{example}
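The recurrence (\ref{srecur}) can be checked directly against the polynomials listed earlier; the following illustrative Python sketch (our own) iterates it on coefficient lists:

```python
def mul_x(p):
    # Multiply a coefficient list (constant term first) by x.
    return [0] + p

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

# A_n as coefficient lists for S = {1, 3}; A_0 = A_1 = A_2 = 1.
A = [[1], [1], [1]]
for n in range(3, 9):
    A.append(poly_add(A[n - 1], mul_x(A[n - 3])))

assert A[6] == [1, 4, 1]   # A_6(x) = x^2 + 4x + 1
assert A[7] == [1, 5, 3]   # A_7(x) = 3x^2 + 5x + 1
assert A[8] == [1, 6, 6]   # A_8(x) = 6x^2 + 6x + 1
```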
For a {\em fixed} real number $x$, the recurrence relation (\ref{recur})
becomes a recurrence relation on real numbers. The solutions of such
recurrence relations are described by the following well-known theorem.
(See, for instance, \cite{rosen}, Section 7.2.
)
\begin{theorem} \label{recthe}
Let \begin{equation}
\label{grec} a_n=c_1a_{n-1}+c_2a_{n-2}+\cdots +c_ka_{n-k} \end{equation}
be a recurrence
relation,
where the $c_i$ are complex constants. Let $\alpha_1,\alpha_2,\cdots,
\alpha_t$ be the distinct roots of the characteristic
equation \begin{equation} \label{chareq}
z^k-c_1z^{k-1}-c_2z^{k-2}-\cdots -c_k=0,\end{equation}
and let $M_i$ be the multiplicity of $\alpha_i$.
Then the sequence $a_0,a_1,\cdots $ of complex numbers satisfies (\ref{grec})
if and only if there exist constants $b_1,b_2, \cdots ,b_k$ so that for
all $n\geq 0$, we have
\begin{equation}
\label{fgrec} a_n=\left(b_1+b_2 n+ \cdots
+b_{M_1}n^{M_1-1}\right)\alpha_1^n+ \left(b_{M_1+1}+ \cdots
+b_{M_1+M_2}n^{M_2-1}\right)\alpha_2^n+ \cdots
+b_kn^{M_t-1}\alpha_t^n.\end{equation}
In other words, the solutions of (\ref{grec}) form a $k$-dimensional vector
space.
\end{theorem}
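As a concrete illustration of Theorem \ref{recthe} (our own sketch, not in the paper): for $a_n=a_{n-1}+a_{n-2}$ the characteristic roots are $(1\pm\sqrt5)/2$, both simple, and fitting the constants to the initial values $a_0=0$, $a_1=1$ yields Binet's formula:

```python
from math import sqrt, isclose

# Characteristic roots of a_n = a_{n-1} + a_{n-2}, i.e. of z^2 - z - 1 = 0.
phi, psi = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2
b1, b2 = 1 / sqrt(5), -1 / sqrt(5)   # constants fitted to a_0 = 0, a_1 = 1

a = [0, 1]
for n in range(2, 30):
    a.append(a[n - 1] + a[n - 2])

# The closed form b1*phi^n + b2*psi^n reproduces the whole sequence.
assert all(isclose(b1 * phi**n + b2 * psi**n, a[n], abs_tol=1e-6)
           for n in range(30))
```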
We will need the following consequence of Theorem \ref{recthe}.
\begin{corollary} \label{nonzero}
Let us assume that the sequence $\{a_n\}$ is a solution of (\ref{grec}) and
that there is no linear recurrence relation of a degree less than
$k$ that is satisfied by $ \{a_n\}$. Let us further assume that the
characteristic equation (\ref{chareq}) of (\ref{grec}) has a unique
root $\alpha_1$ of largest modulus. Then there is a {\em nonzero}
constant $C$ so that
\[a_n=C\alpha_1^n + o(\alpha_1^n).\]
\end{corollary}
\begin{proof}
As $\{a_n\}$ does not satisfy a recurrence relation of a degree less than
$k$, the coefficient $C$ of $\alpha_1^n$ in (\ref{fgrec}) must be nonzero:
otherwise $\{a_n\}$ would satisfy the recurrence whose characteristic
polynomial is $(z^k-c_1z^{k-1}-\cdots -c_k)/(z-\alpha_1)$, which has degree
$k-1$. As $|\alpha_1|>|\alpha_i|$ for $i\neq 1$, all the other terms in
(\ref{fgrec}) are $o(\alpha_1^n)$, and the statement follows.
Let us now apply Theorem \ref{recthe} to find
the solution of (\ref{recur}) for a fixed $x$. The characteristic
equation of (\ref{recur}) is
\begin{equation} \label{polyeq}
f(z)=z^{m}-\sum_{i=1}^{k-1} z^{m-s_{i}} - x
=0.\end{equation}
As explained in Section 2, we will need to compute $A_{S,n}(1)$ and also,
$A_{S,n}(w^t)$ for the case when $w\neq 1$ is a primitive $q$th
root of unity. To that
end, we need to find the roots of the corresponding characteristic
equations. That is, we will compare the root of largest modulus
of the characteristic
equation
\begin{equation} \label{forone}
f_1(z)=
z^{m}-\sum_{i=1}^{k-1} z^{m-s_{i}} - 1=0
\end{equation}
and the root(s) of the largest modulus of the characteristic equation
\begin{equation} \label{forw}
f_w(z)=
z^{m}-\sum_{i=1}^{k-1} z^{m-s_{i}} - w=0
\end{equation}
The following lemma, which helps to compute the root of largest
modulus of $f_1(z)$,
is a special case of Exercise III.16 in \cite{polya}.
\begin{lemma} \label{first}
The polynomial $f_1(z)=z^{m}-\sum_{i=1}^{k-1}
z^{m-s_{i}} - 1$
has a unique positive real root $\alpha$.
\end{lemma}
\begin{proof} Let $\alpha$ be the smallest positive real root of $f_1(z)$.
We know such a root exists since $f_1(0)=-1$ and $\lim_{z\rightarrow
\infty}f_1(z)=\infty$. We claim that $f_1$ is positive
on $(\alpha, \infty )$, implying that $f_1$ cannot have another positive
real root. Indeed, if $r>1$, then
\begin{eqnarray*} f_1(r\alpha)+1 & = & (r\alpha)^{m} - \sum_{i=1}^{k-1}
(r\alpha)^{m-s_{i}}\\
& > & r^{m} \left(\alpha^{m}-\sum_{i=1}^{k-1} \alpha^{m-s_{i}} \right ) \\
& = & r^m \qquad \mbox{(since $f_1(\alpha)=0$)} \\
& > & 1,
\end{eqnarray*}
and so $f_1(r\alpha)>0$.
\end{proof}
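For $S=\{1,3\}$, for instance, $f_1(z)=z^3-z^2-1$, and the unique positive root $\alpha\approx 1.4656$ is easily located by bisection (an illustrative Python sketch of ours):

```python
def f1(z):
    # f_1(z) = z^m - sum_i z^{m - s_i} - 1 for S = {1, 3}: z^3 - z^2 - 1.
    return z**3 - z**2 - 1

lo, hi = 1.0, 2.0            # f1(1) = -1 < 0 < 3 = f1(2)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f1(mid) < 0 else (lo, mid)
alpha = (lo + hi) / 2

assert abs(f1(alpha)) < 1e-9
assert 1.46 < alpha < 1.47   # alpha is approximately 1.4656
# f1 stays positive beyond alpha, so alpha is the only positive root.
assert all(f1(alpha + k * 0.1) > 0 for k in range(1, 50))
```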
Now we address the problem of finding the roots of the characteristic
equation (\ref{polyeq}) in the case when $w\neq 1$ is a root of unity.
It turns out that it suffices to assume that $|w|=1$. (The following
Lemma is similar to Exercise III.17 in \cite{polya}.)
\begin{lemma} Let $\alpha$ be defined as in Lemma \ref{first}.
Let $w$ be any complex number satisfying $w\neq 1$ and $|w|=1$.
Then all roots of the polynomial $f_w(z)$ are of smaller modulus than
$\alpha$.
\end{lemma}
\begin{proof} Let $y$ be a root of $f_w$. Then
\begin{eqnarray*} |y|^m & = & \left |w + \sum_{i=1}^{k-1} y^{m-s_i}
\right |\\
& \leq & 1+ \sum_{i=1}^{k-1} \left | y^{m-s_i} \right |.
\end{eqnarray*}
Therefore, $f_1(|y|)\leq 0$. This implies that $|y|\leq \alpha$ since we have
seen in the proof of Lemma \ref{first} that $f_1(t)>0$ if $t>\alpha$.
Furthermore, in the last displayed inequality, the inequality is strict
unless for all $i$ so that $1\leq i\leq k-1$,
the complex numbers $y^{m-s_i}$ have the same argument as $w$, and that
argument is the same as the argument of $y^m$. That happens only if
the complex numbers $y^{s_1},y^{s_2-s_1},\cdots ,y^{s_{k-1}-s_{k-2}}$ all
have argument 0, that is, when these numbers are positive real numbers.
However, that happens precisely when $s_1, s_2-s_1, \cdots, s_{k-1}-s_{k-2}$
are all multiples of the multiplicative order $o_y$
of $y/|y|$ as a complex number.
That implies that $s_1,s_2,\cdots ,s_{k-1}$ are all divisible by $o_y$,
contradicting our hypothesis on $S$.
\end{proof}
The previous two lemmas show that the largest root of the
characteristic equation for the
sequence $\{A_{S,n}(1)\}_{n\geq 0}$ is larger than the largest root(s) of
the characteristic equation for the sequence $\{A_{S,n}(w)\}_{n\geq 0}$ for
any complex number $w\neq 1$ with absolute value 1. Given formula
(\ref{fgrec}), in order to see that
the first sequence indeed grows faster than the second,
all we need to show is that the {\em coefficient} $b_1$ of $\alpha^n$
in (\ref{grec}) is not 0. (Here $\alpha$, the largest root of $f_1(z)$,
plays the role of $\alpha_1$ in (\ref{fgrec})). This is the content of the
next lemma.
\begin{lemma} \label{noshorter} Let $S=\{s_1,s_2,\cdots ,s_k\}$
be any finite set of positive
integers (so for this Lemma, we do not require that $s_1,s_2,\cdots ,s_{k-1}$
do not have a proper common divisor).
Then the sequence $\{A_{S,n}\}_{n\geq 0}=\{A_{S,n}(1)\}_{n\geq 0}$
does not satisfy a linear recurrence
relation with constant coefficients and
fewer than $|S|+1$ terms. In other words, if $|S|=k$, then
there do not exist
constants $c_1,c_2,\cdots ,c_{k-1}$ and positive integers $j_1,j_2,\cdots ,
j_{k-1}$ so that for all $n\geq 0$,
\[A_{S,n}=\sum_{i=1}^{k-1} c_i A_{S,n-j_i}.\]
\end{lemma}
\begin{proof}
Let us assume that $S$ is a minimal counterexample. It is then straightforward
to verify that $|S|>2$. Let $S'=S-m$, that is, the set obtained from
$S$ by removing the largest element of $S$. Then
\begin{equation} \label{reduction}
A_{S',n}=\sum_{i=1}^{k-1} c_i' A_{S',n-s_i},\end{equation}
and there is no shorter recurrence satisfied by $\{A_{S',n}\}$.
Now crucially, $A_{S',n}=A_{S,n}$ for all $n$ satisfying $0\leq n< m$.
So these sequences agree in $m\geq k$ consecutive values. So if $\{A_{S,n}\}$
satisfied a linear recurrence relation of degree $k-1$, that would have
to be the recurrence relation (\ref{reduction}). Indeed, by Theorem
\ref{recthe}, the solutions of (\ref{reduction}) form a $k$-dimensional
vector space, so knowing $k$ elements of a solution determines the whole
solution.
However, $\{A_{S,n}\}$ does not satisfy (\ref{reduction}) since
$A_{S,m}=A_{S',m}+1\neq A_{S',m}$, where the difference is caused by the
one-part composition $m$.
\end{proof}
Now we are in a position to express the growth rate of $A_{S,n}=A_{S,n}(1)$.
\begin{proposition}
Let $\alpha$ be defined as in Lemma \ref{first}. Then
\[A_{S,n}(1)=C \alpha^n + o(\alpha^n),\] for some nonzero
constant $C$.
\end{proposition}
\begin{proof} Immediate from Corollary \ref{nonzero} and
Lemma \ref{noshorter}.
\end{proof}
We can now compare the growth rates of $A_{S,n}(w)$ and $A_{S,n}(1)$.
\begin{lemma} \label{grrate}
Let $w\neq 1$ be any complex number so that $|w|=1$. Then
\[\lim_{n\rightarrow \infty} \frac{A_{S,n}(w)}{A_{S,n}(1)} =0. \]
\end{lemma}
\begin{proof}
The previous two lemmas show that the unique positive root $\alpha$ of the
characteristic
equation (\ref{forone}) is larger than the absolute values of all
roots of the characteristic
equation (\ref{forw}). Therefore, $A_{S,n}(w)=O(n^k\beta^n)$ for some
$\beta < \alpha$, while $A_{S,n}(1)=C\alpha^n+o(\alpha^n)$ with $C\neq 0$.
\end{proof}
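For $S=\{1,3\}$ and $w=-1$ this decay is easy to observe numerically (our own illustrative Python sketch):

```python
def A_val(n, x):
    # A_{S,n}(x) for S = {1, 3}, via the recurrence A_n = A_{n-1} + x*A_{n-3}.
    A = [1.0, 1.0, 1.0]
    for k in range(3, n + 1):
        A.append(A[k - 1] + x * A[k - 3])
    return A[n]

ratios = [abs(A_val(n, -1) / A_val(n, 1)) for n in (20, 40, 60)]
assert ratios[2] < ratios[1] < ratios[0]   # the ratio decays...
assert ratios[2] < 1e-3                    # ...geometrically to 0
```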
Finally, we can use the results of this section to prove the balanced
properties of the numbers $A_{S,n,r}$.
\begin{theorem}
Let $q\geq 2$ be an integer, and let $r$ be an integer satisfying $0\leq r
\leq q-1$. Then
\[\lim_{n\rightarrow \infty} p_{n,r}=\lim_{n\rightarrow \infty}
\frac{A_{S,n,r}}{A_{S,n}}=\frac{1}{q}.\]
\end{theorem}
\begin{proof} Let us first address the case of $r=0$.
Dividing (\ref{forma0}) by $A_{S,n}$, we get that
\[\frac{A_{S,n,0}}{A_{S,n}}=\frac{1}{q}
\sum_{t=0}^{q-1}\frac{A_{S,n}(w^t)}{A_{S,n}}.\]
However, Lemma \ref{grrate} shows that all but one of the $q$ summands
on the right-hand side converge to 0, and the
remaining one (the summand for $t=0$) is equal to 1, so the right-hand side
converges to $1/q$.
For general $r$, the only change is that instead of dividing both sides
of (\ref{forma0}) by $A_{S,n}$, we divide both sides of (\ref{generalc})
by $A_{S,n}$. As $|w|=1$, the result follows in the same way.
\end{proof}
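The theorem can be illustrated numerically for $S=\{1,3\}$ and $m=3$ (our own Python sketch; the dynamic program simply counts compositions by the multiplicity of $m$):

```python
def residue_probs(n, q, S=(1, 3), m=3):
    # p_{n,r} for r = 0..q-1, by dynamic programming:
    # counts[i][d] = number of compositions of i with d parts equal to m.
    counts = [dict() for _ in range(n + 1)]
    counts[0][0] = 1
    for i in range(1, n + 1):
        for s in S:
            if s <= i:
                for d, c in counts[i - s].items():
                    dd = d + (s == m)
                    counts[i][dd] = counts[i].get(dd, 0) + c
    total = sum(counts[n].values())
    probs = [0.0] * q
    for d, c in counts[n].items():
        probs[d % q] += c / total
    return probs

# The residue classes of m(a) mod 3 approach equal probability 1/3.
assert all(abs(pr - 1/3) < 0.01 for pr in residue_probs(400, 3))
```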
\section{Further Directions}
Let $S=\{1,3\}$. Numerical evidence suggests that for all $n$, the polynomials
$A_{S,n}(x)$ have real roots only. Furthermore, numerical evidence also
suggests that the sequences of polynomials $\{A_{S,3n+r}(x)\}_{n\geq 0}$ form
a Sturm sequence for each of $r=0,1,2$. (See \cite{wilfb} for the definition
and importance of Sturm sequences.) This raises the following intriguing
questions.
\begin{question} For which sets $S$ is it true that the polynomials
$A_{S,n}(x)$ have real roots only?
\end{question}
\begin{question} For which sets $S$ is it true that the polynomials
$A_{S,n}(x)$ can be partitioned into a few Sturm sequences?
\end{question}
Herb Wilf \cite{wilf}
proved that the set $S=\{1,2\}$ does have both of these properties.
If $A_{S,n}(x)$ has real roots only, then its coefficients form a log-concave
(and therefore, unimodal) sequence. (See Chapter 8 of
\cite{bonaint} for an
introduction to unimodal and log-concave sequences.) This raises the
following questions.
\begin{question} Let us assume that $A_{S,n}(x)$ has real roots only.
Is there a combinatorial proof for the log-concavity of its coefficients?
\end{question}
\begin{question} Let us assume that $A_{S,n}(x)$ has real roots only.
Where is the peak (or peaks) of the unimodal sequence of its coefficients?
\end{question}
Another interesting question is the following.
\begin{question}
For what $S$ and $n$ does the equality $A_{S,n}(-1)=0$ hold? When it does,
the number of compositions of $n$ with parts in $S$ and with $m(a)$ even
equals the number of compositions of $n$ with parts in $S$ and with $m(a)$
odd. Is there a combinatorial proof of that fact?
\end{question}
Finally, our methods rested on the finiteness of $S$, but we can still
ask what can be said for {\em infinite} sets of allowed parts.
\begin{center} {\bf Acknowledgment}
I am grateful to Herb Wilf for valuable discussions and advice.
\end{center}
| {
"timestamp": "2006-09-29T15:57:13",
"yymm": "0609",
"arxiv_id": "math/0609845",
"language": "en",
"url": "https://arxiv.org/abs/math/0609845",
"abstract": "Let $S$ be a finite set of positive integers with largest element $m$. Let us randomly select a composition $a$ of the integer $n$ with parts in $S$, and let $m(a)$ be the multiplicity of $m$ as a part of $a$. Let $0\\leq r<q$ be integers, with $q\\geq 2$, and let $p_{n,r}$ be the probability that $m(a)$ is congruent to $r$ modulo $q$. We show that if $S$ satisfies a certain simple condition, then $\\lim_{n\\to \\infty} p_{n,r} =1/q$. In fact, we show that an obvious necessary condition on $S$ turns out to be sufficient.",
"subjects": "Combinatorics (math.CO)",
"title": "On a Balanced Property of Compositions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683491468142,
"lm_q2_score": 0.8128673110375457,
"lm_q1q2_score": 0.802762028436759
} |
% Source: https://arxiv.org/abs/1805.05025
\title{Cutoff for product replacement on finite groups}
\begin{abstract}
We analyze a Markov chain, known as the product replacement chain, on the set of generating $n$-tuples of a fixed finite group $G$. We show that as $n \rightarrow \infty$, the total-variation mixing time of the chain has a cutoff at time $\frac{3}{2} n \log n$ with window of order $n$. This generalizes a result of Ben-Hamou and Peres (who established the result for $G = \mathbb{Z}/2$) and confirms a conjecture of Diaconis and Saloff-Coste that for an arbitrary but fixed finite group, the mixing time of the product replacement chain is $O(n \log n)$.
\end{abstract}
\section{Introduction}
Let $G$ be a finite group, and let $[n] := \{1, 2, \dots, n\}$. We
consider the set $G^n$ of all functions $\sigma: [n] \to G$ (or
``configurations''). We may define a Markov chain $(\sigma_t)_{t\ge0}$ on
$G^{n}$ as follows: if we have a current state $\sigma$, then uniformly at
random, choose an ordered pair $(i, j)$ of distinct integers in $[n]$,
and change the value of $\sigma(i)$ to $\sigma(i) \sigma(j)^{\pm 1}$, where the
signs are chosen with equal probability.
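As a concrete illustration, a single move of the chain can be sketched in a few lines of Python for the abelian case $G = {\mathbb Z}/q$ (written additively). The function name prc_step and the list encoding of a configuration are our own illustrative choices, not part of the paper.

```python
import random

def prc_step(sigma, q, rng=random):
    """One move of the product replacement chain on G = Z/q (additive
    notation): choose an ordered pair (i, j) of distinct indices
    uniformly at random and replace sigma[i] by sigma[i] +/- sigma[j]
    mod q, the sign being chosen with equal probability."""
    n = len(sigma)
    i, j = rng.sample(range(n), 2)
    sign = rng.choice([1, -1])
    out = list(sigma)
    out[i] = (sigma[i] + sign * sigma[j]) % q
    return out
```

For prime $q$, restricting to generating tuples simply means excluding the all-zero configuration, since any nonzero element generates ${\mathbb Z}/q$.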
We will restrict the chain $(\sigma_t)_{t\ge 0}$ to the space of {\it
generating $n$-tuples}, i.e.\ the set of $\sigma$ whose values generate
$G$ as a group:
\[ {\mathcal S} := \left\{ \sigma \in G^n \ : \ \langle \sigma(1), \dots, \sigma(n) \rangle=G \right\}. \]
It is not hard to see that for fixed $G$ and large enough $n$, the
chain on ${\mathcal S}$ is irreducible (see \cite[Lemma 3.2]{DSC96}). We will
always assume $n$ is large enough so that this irreducibility
holds. Note that the chain is also symmetric, and it is aperiodic
because it has positive holding probability at some states. Thus, the chain has a uniform
stationary distribution $\pi$ with $\pi(\sigma)=1/|{\mathcal S}|$.
This Markov chain was first considered in the context of computational
group theory---it models the \emph{product replacement algorithm} for
generating random elements of a finite group introduced in
\cite{Celler}. By running the chain for a long enough time $t$ and
choosing a uniformly random index $k \in [n]$, the element $\sigma_t(k)$
is a (nearly) uniformly random element of $G$. The product replacement
algorithm has been found to perform well in practice \cite{Celler,
Holt-Rees}, but the question arises: how large does $t$ need to be
in order to ensure near uniformity?
One way of answering the question is to estimate the mixing time of
the Markov chain. It was shown by Diaconis and Saloff-Coste that for
any fixed finite group $G$, there exists a constant $C_G$ such that
the $\ell^2$-mixing time is at most $C_G n^2 \log n$ \cite{DSC96,
DSC98} (see also Chung and Graham \cite{Chung-Graham-Group} for a
simpler proof of this fact with a different value for $C_G$).
In another line of work, Lubotzky and Pak \cite{Lubotzky-Pak} analyzed
the mixing of the product replacement chain in terms of Kazhdan
constants (see also subsequent quantitative estimates for Kazhdan
constants by Kassabov \cite{Kassabov}). We also mention a result of
Pak \cite{Pak} which shows mixing in $\text{polylog}(|G|)$ steps when
$n = \Theta(\log |G| \log \log |G|)$. The reader may consult the
survey \cite{Pak-Survey} for further background on the product
replacement algorithm.
Diaconis and Saloff-Coste conjectured that the mixing time bound can
be improved to $C_G n \log n$ \cite[Remark 2, Section 7,
p.\ 290]{DSC98}, based on the observation that at least $n \log n$
steps are needed by the classical coupon-collector's problem. This was
confirmed in the case $G = {\mathbb Z}/2$ by Chung and Graham
\cite{Chung-Graham-Cube} and recently refined by Ben-Hamou and Peres,
who show that when $G={\mathbb Z}/2$, the chain in fact exhibits a cutoff at
time $\frac{3}{2}n \log n$ in total-variation with window of order $n$
\cite{BHP}.
In this paper, we extend the result of Ben-Hamou and Peres to all
finite groups. Note that this also verifies the conjecture of Diaconis
and Saloff-Coste for a fixed finite group. To state the result, let us
denote the total variation distance between $\Pr_\sigma(\sigma_t \in \cdot
\ )$ and $\pi$ by
\[ d_\sigma(t) := \max_{A \subseteq {\mathcal S}}|\Pr_\sigma(\sigma_t \in A)- \pi (A)|. \]
\begin{theorem}\label{Thm:main}
Let $G$ be a finite group. Then, the Markov chain $(\sigma_t)_{t \ge 0}$ on the set of
generating $n$-tuples of $G$ has a total-variation cutoff at time
$\frac{3}{2}n\log n$ with window of order $n$. More precisely, we have
\begin{equation}\label{Eq:UB}
\lim_{\b\to\infty} \limsup_{n\to\infty} \max_{\sigma \in {\mathcal S}}
d_\sigma\left(\frac{3}{2}n\log n + \b n\right) = 0
\end{equation}
and
\begin{equation}\label{Eq:LB}
\lim_{\b\to\infty} \liminf_{n\to\infty} \max_{\sigma \in {\mathcal S}}
d_\sigma\left(\frac{3}{2}n\log n - \b n\right) = 1.
\end{equation}
\end{theorem}
\subsection{A connection to cryptography}
We mention another motivation for studying the product replacement
chain in the case $G=({\mathbb Z}/q)^m$ for a prime $q \ge 2$ and integers $m \ge
1$. It comes from a public-key authentication protocol proposed by
Sotiraki \cite{Sotiraki}, which we now briefly describe. In the
protocol, a verifier wants to check the identity of a prover based on
the time needed to answer a challenge.
First, the prover runs the Markov chain with $G = ({\mathbb Z}/q)^m$ and $n = m$,
which can be interpreted as performing a random walk on $SL_n({\mathbb Z}/q)$,
where $\sigma(k)$ is viewed as the $k$-th row of an $n \times n$
matrix. (In each step, a random row is either added to or subtracted
from another random row.)
After $t$ steps, the prover records the resulting matrix $A \in
SL_n({\mathbb Z}/q)$ and makes it public. To authenticate, the verifier gives
the prover a vector $x \in ({\mathbb Z}/q)^n$ and challenges her to compute $y :=
Ax$. The prover can perform this calculation in $O(t)$ operations by
retracing the trajectory of the random walk.
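The replay idea can be sketched as follows. This is only an illustration, assuming the walk starts at the identity matrix; the function name and the storage of moves as triples are hypothetical conveniences, not part of Sotiraki's protocol.

```python
import random

def walk_and_answer(n, q, t, x, seed=0):
    """Illustrative sketch of the prover's O(t) replay strategy (assumes
    the walk starts at the identity matrix; names are hypothetical).
    Run t random row operations, storing each move, then answer the
    challenge y = A x by replaying the moves on the vector x."""
    rng = random.Random(seed)
    moves = []
    for _ in range(t):
        i, j = rng.sample(range(n), 2)   # row i <- row i +/- row j
        s = rng.choice([1, -1])
        moves.append((i, j, s))
    # If A = E_t ... E_1 (each elementary matrix applied on the left),
    # then A x = E_t (... (E_1 x)), and the elementary operation
    # "row i += s * row j" acts on a vector v by v[i] += s * v[j].
    y = list(x)
    for (i, j, s) in moves:
        y[i] = (y[i] + s * y[j]) % q
    return moves, y
```

The key point is that each stored move costs a single addition when replayed on the vector, so answering the challenge costs $O(t)$ operations instead of the $n^2$ operations of a generic matrix-vector product.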
Without knowing the trajectory, if $t$ is large enough, an adversary
will not be able to distinguish $A$ from a random matrix and will be
forced to perform the usual matrix-vector multiplication (using $n^2$
operations) to complete the challenge. Thus, the question is whether
$t \ll n^2$ is large enough for the matrix $A$ to become sufficiently
random, so that the prover can answer the challenge much faster than
an adversary.
Note that when $n > m$, the product replacement chain on $G = ({\mathbb Z}/q)^m$
amounts to the projection of the random walk on $SL_n({\mathbb Z}/q)$ onto the
first $m$ columns. Thus, Theorem \ref{Thm:main} shows that when $m$ is
fixed and $n \rightarrow \infty$, the mixing time for the first $m$
columns is around $\frac{3}{2} n \log n$. One then hopes that the
mixing of several columns is enough to make it computationally
intractable to distinguish $A$ from a random matrix; this would
justify the authentication protocol, as $n \log n \ll n^2$.
We remark that when $t$ is much larger than the mixing time of the
random walk on $SL_n({\mathbb Z}/q)$ generated by row additions and
subtractions, it is information theoretically impossible for an
adversary to distinguish $A$ from a random matrix. However, the
diameter of the corresponding Cayley graph on $SL_n({\mathbb Z}/q)$ is known to
be of order $\Theta\left( \frac{n^2}{\log_q n} \right)$ \cite{AHM,
Christofides}, so a lower bound of the same order necessarily holds
for the mixing time. Diaconis and Saloff-Coste \cite[Section 4,
p.\ 420]{DSC96} give an upper bound of $O(n^4)$, which was
subsequently improved to $O(n^3)$ by Kassabov \cite{Kassabov}. Closing
the gap between $n^3$ and $\frac{n^2}{\log n}$ remains an open
problem.
\subsection{Outline of proof}
The proof of Theorem \ref{Thm:main} analyzes the mixing behavior in
several stages:
\begin{itemize}
\item an initial ``burn-in'' period lasting around $n \log n$ steps,
after which the group elements appearing in the configuration are
not mostly confined to any proper subgroup of $G$;
\item an averaging period lasting around $\frac{1}{2} n \log n$ steps,
after which the counts of group elements become close to their
average value under the stationary distribution; and
\item a coupling period lasting $O(n)$ steps, after which our chain
becomes exactly coupled to the stationary distribution with high
probability.
\end{itemize}
The argument is in the spirit of \cite{BHP}, but a more elaborate
analysis is required in the second and third stages. To analyze the
first stage, for a fixed proper subgroup $H$, the number of group
elements in $H$ appearing in the configuration is a birth-and-death
process whose transition probabilities are easy to estimate. The
analysis of the resulting chain is the same as in \cite{BHP}, and we
can then union bound over all proper subgroups $H$.
In the second stage, for a given starting configuration $\sigma_0 \in
{\mathcal S}$, we consider quantities $n_{a,b}(\sigma)$ counting the number of
sites $k$ where $\sigma_0(k) = a$ and $\sigma(k) = b$. A key observation
(which also appears in \cite{BHP}) is that by symmetry, projecting the
Markov chain onto the values $(n_{a,b}(\sigma_t))_{a, b \in G}$ does
not affect the mixing behavior. Thus, it is enough to understand the
mixing behavior of the counts $n_{a,b}$.
One expects these counts to evolve towards their expected value
${\mathbb E}_{\sigma \sim \pi} n_{a,b}(\sigma)$ as the chain mixes. To carry out the
analysis rigorously, we write down a stochastic difference equation
for the $n_{a,b}$ and analyze it via the Fourier
transform. Intuitively, as $n \rightarrow \infty$, the process
approaches a ``hydrodynamic limit'' so that it becomes approximately
deterministic. It turns out that after about $\frac{1}{2} n \log n$
steps, the $n_{a,b}$ are likely to be within $O(\sqrt{n})$ of their
expected value. Our analysis requires a sufficiently ``generic''
initial configuration, which is why the first stage is necessary.
Finally, in the last stage, we show that if the $(n_{a,b}(\sigma))_{a,b\in
G}$ and $(n_{a,b}(\sigma'))_{a,b\in G}$ for two configurations are
within $O(\sqrt{n})$ in $\ell^1$ distance, they can be coupled to be
exactly the same with high probability after $O(n)$ steps of the
Markov chain. A standard argument involving coupling to the stationary
distribution then implies a bound on the mixing time.
The main idea to prove the coupling bound is that even if the $\ell^1$
distance evolves like an unbiased random walk, there is a good chance
that it will hit $0$ due to random fluctuations. A similar argument is
used to prove cutoff for lazy random walk on the hypercube \cite[Chapter
18]{LevinPeresWilmer}. However, some careful accounting is necessary
in our setting to ensure that in fact the $\ell^1$ distance does not
increase in expectation and to ensure sufficient fluctuations.
\subsection{Organization of the paper}
The rest of the paper is organized as follows. In Section
\ref{Sec:UB}, we state (without proof) the key lemmas describing the
behavior in each of the three stages and use these to prove the upper
bound \eqref{Eq:UB} in Theorem \ref{Thm:main}. Sections
\ref{Sec:DE-Proofs} and \ref{Sec:Coupling-Proof} contain the proofs of
these lemmas. Finally, in Section \ref{Sec:LB}, we prove the lower
bound \eqref{Eq:LB} in Theorem \ref{Thm:main}; this is mostly a matter
of verifying that the estimates used in the upper bound were tight.
\subsection{Notation}
Throughout this paper, we use $c, C, C', \dots$ to denote
absolute constants whose exact values may change from line to line.
We also write subscripted constants, for instance $C_G$, to indicate
that the constant depends only on $G$. Similarly, we use subscripts with big-$O$ notation,
e.g.\ we write $O_G(\,\cdot\,)$ when the implied constant depends only
on $G$.
\section{Proof of Theorem \ref{Thm:main} (\ref{Eq:UB})}\label{Sec:UB}
Let us fix a finite group $G$ and denote its cardinality by ${\mathcal Q} :=
|G|$. For a configuration $\sigma \in {\mathcal S}$, let $n_a(\sigma)$ denote the
number of sites having group element $a$, i.e.,
\[ n_a(\sigma) := |\{i \in [n] \ : \ \sigma(i)=a\}|. \]
\subsection{The burn-in period}
For a proper subgroup $H \subseteq G$, let
\[ n_{non}^H(\sigma) := \sum_{a \in G \setminus H} n_a(\sigma) \]
denote the number of sites not in $H$, and define for $c \in (0, 1)$
the set
\[ \Snon{c} := \{\sigma \in {\mathcal S} \ : \ n_{non}^H(\sigma) \ge cn \text{ for all proper subgroups $H \subseteq G$} \}. \]
Thus, $\Snon{c}$ is the set of states $\sigma$ where the group elements
appearing in $\sigma$ are not mostly confined to any particular proper
subgroup of $G$. The next lemma shows that we reach $\Snon{1/3}$
in about $n \log n$ steps, and once we reach $\Snon{1/3}$, we
remain in $\Snon{1/6}$ for $n^2$ steps with high probability. Note
that $n^2$ is much larger than the overall mixing time, so we may
essentially assume that we are in $\Snon{1/6}$ for all of the
later stages.
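To illustrate the definition of $\Snon{c}$, the following sketch checks it for the small group $G = {\mathbb Z}/4$, whose proper subgroups are $\{0\}$ and $\{0, 2\}$; the helper names are ours.

```python
def n_non(sigma, H):
    """n_non^H(sigma): number of sites whose value lies outside H."""
    return sum(1 for g in sigma if g not in H)

def in_S_non(sigma, c, proper_subgroups):
    """sigma lies in S_non(c) iff n_non^H(sigma) >= c * n for every
    proper subgroup H in the supplied list."""
    n = len(sigma)
    return all(n_non(sigma, H) >= c * n for H in proper_subgroups)

# G = Z/4 (additive); its proper subgroups are {0} and {0, 2}.
subgroups = [{0}, {0, 2}]
```

For example, a configuration whose values are mostly $0$ and $2$ fails the test against $H = \{0, 2\}$ even though it passes against $H = \{0\}$.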
\begin{lemma}\label{Lem:Burn}
Let $\t_{1/3} := \min\{t \ge 0 : \sigma_t \in \Snon{1/3}\}$ be the
first time to hit $\Snon{1/3}$. Then for all large enough $n$
and for all large enough $\b > 0$,
\[ \max_{\sigma \in {\mathcal S}} \Pr_\sigma(\t_{1/3} > n \log n + \b n) \le \frac{120 {\mathcal Q}}{\b^2}. \]
Moreover, there exists a constant $C_G$ depending only on $G$ such that
\[ \max_{\sigma \in \Snon{1/3}}\Pr_\sigma \left(\sigma_t \notin \Snon{1/6} \ \text{\rm for some $t \le n^2$}\right) \le C_G n^2e^{-n/10}. \]
\end{lemma}
\begin{proof}
Fix a proper subgroup $H \subset G$, and consider what happens to
$n_{non}^H(\sigma_t)$ at time $t$. Suppose our next step is to replace
$\sigma(i)$ with $\sigma(i)\sigma(j)^{\pm 1}$. Since $\sigma(j) \in H$ if and only if
$\sigma(j)^{-1} \in H$, the sign does not affect the case analysis, so we
write $\sigma(i)\sigma(j)$ below.
If $\sigma(j) \in H$, then $n_{non}^H(\sigma_{t+1}) = n_{non}^H(\sigma_t)$. If
$\sigma(j) \not\in H$ and $\sigma(i) \in H$, then $n_{non}^H(\sigma_{t+1}) =
n_{non}^H(\sigma_t) + 1$. Finally, if $\sigma(j), \sigma(i) \not\in H$, then
$\sigma(i)\sigma(j)$ may or may not be in $H$, so $n_{non}^H(\sigma_{t+1}) \ge
n_{non}^H(\sigma_t) - 1$.
Let $(N_t)_{t \ge 0}$ be the birth-and-death chain with the
following transition probabilities for $1 \le k \le n$:
\begin{align*}
\Pr(N_{t+1} = k+1 \mid N_t = k) &= \frac{k(n-k)}{n(n-1)} \\
\Pr(N_{t+1} = k-1 \mid N_t = k) &= \frac{k(k-1)}{n(n-1)} \\
\Pr(N_{t+1} = k \mid N_t = k) &= \frac{n-k}{n}.
\end{align*}
We start this chain at $N_0 = n^H_{non}(\sigma_0)$; note that because
the elements appearing in $\sigma_0$ generate $G$, we are guaranteed
to have $n^H_{non}(\sigma_0) > 0$.
The above birth-and-death chain corresponds to the behavior of
$(n^H_{non}(\sigma_t))$ if whenever $\sigma(j), \sigma(i) \not\in H$, it always
happened that $\sigma(i)\sigma(j) \in H$. Thus, $(n^H_{non}(\sigma_t))$
stochastically dominates $(N_t)$.
The chain $(N_t)$ is precisely what is analyzed in \cite{BHP} for
the case $G = {\mathbb Z}/2$. Let
\[T_k := \min\{t \ge 0 : N_t=k\}.\]
Then, we have ${\mathbb E}_{k-1}T_k \le \frac{n^2}{k(n-2k)}$ \cite[(2) in the
proof of Lemma 1]{BHP} and thus ${\mathbb E}_1 (T_{n/3})
=\sum_{k=2}^{n/3}{\mathbb E}_{k-1}T_k \le n \log n + n$. On the other hand,
setting $v_k={\rm Var}_{k-1}(T_k)$, we have $v_2 \le n^2$,
\[v_{k+1}\le \frac{k}{n-k}v_k + \frac{54 n^2}{k^2},\]
and ${\rm Var}_1 (T_{n/3}) = \sum_{k=2}^{n/3}v_k \le 110 n^2$ \cite[The proof of Lemma 1]{BHP}.
Hence by Chebyshev's inequality for
all large enough $\b > 0$,
\[\Pr_1(T_{n/3} > n \log n + \b n) \le \frac{120}{\b^2}.\]
Moreover, we have $\Pr_{n/3} \left( T_{n/6} \le n^2 \right) \le n^2e^{-n/10}$.
Indeed, this follows from the fact that for $m<k$, we have
\[\Pr_k(T_m \le n^2) \le n^2\frac{\pi_{\rm BD}(m)}{\pi_{\rm BD}(k)},\]
where $\pi_{\rm BD}(k)={n \choose k}/(2^n-1)$ \cite[(5) and the
following in the proof of Proposition 2]{BHP}.
We now take a union bound over all the
proper subgroups $H$.
\end{proof}
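For reference, the transition probabilities of the comparison chain $(N_t)$ can be tabulated with exact rational arithmetic; the sketch below (helper name ours) confirms that the three probabilities sum to one in every state.

```python
from fractions import Fraction

def bd_probs(k, n):
    """Transition probabilities of the comparison birth-and-death chain
    (N_t): up with probability k(n-k)/(n(n-1)), down with probability
    k(k-1)/(n(n-1)), and holding with probability (n-k)/n."""
    up = Fraction(k * (n - k), n * (n - 1))
    down = Fraction(k * (k - 1), n * (n - 1))
    hold = Fraction(n - k, n)
    return up, down, hold
```

Exact arithmetic avoids any floating-point ambiguity when checking the normalization; note also that from $k = 1$ the chain cannot move down, matching the fact that a generating tuple always has $n^H_{non} > 0$.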
\subsection{The averaging period}
In the next stage, the counts $n_a(\sigma_t)$ go toward their average
value. We actually analyze this stage in two substages, looking at a
``proportion vector'' and ``proportion matrix'', as described below.
\subsubsection{Proportion vector chain}
For a configuration $\sigma \in {\mathcal S}$, we consider the ${\mathcal Q}$-dimensional
vector $(n_a(\sigma)/n)_{a \in G}$, which we call the {\it proportion
vector} of $\sigma$. One may check that for a typical $\sigma \in {\mathcal S}$, each
$n_a(\sigma)/n$ is about $1/{\mathcal Q}$. For each $\d > 0$, we define the
$\d$-{\it typical set}
\[ {\mathcal S}_\ast(\d) := \left\{\sigma \in {\mathcal S} \ : \ \left\|\left(\frac{n_a(\sigma)}{n}\right)_{a \in G} - \left(\frac{1}{{\mathcal Q}}\right)_{a \in G}\right\| \le \d \right\}, \]
where $\| \cdot \|$ denotes the $\ell^2$-norm in ${\mathbb R}^G$.
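The proportion vector and the $\d$-typical set are straightforward to compute directly; here is an illustrative sketch (helper names ours) for a group represented as a list of labels.

```python
import math

def proportion_vector(sigma, group):
    """Proportion vector (n_a(sigma)/n)_{a in group} of a configuration."""
    n = len(sigma)
    return {a: sum(1 for g in sigma if g == a) / n for a in group}

def in_typical_set(sigma, group, delta):
    """Membership in the delta-typical set: the l^2 distance of the
    proportion vector from the uniform vector (1/Q, ..., 1/Q) is at
    most delta."""
    Q = len(group)
    p = proportion_vector(sigma, group)
    dist = math.sqrt(sum((p[a] - 1 / Q) ** 2 for a in group))
    return dist <= delta
```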
The following lemma implies that starting from $\sigma \in \Snon{1/3}$, we
reach $\Sast{\d}$ in $O_\d(n)$ steps with high
probability. The proof is given in Section \ref{Subsec:vector}.
\begin{lemma}\label{Lem:1stDE}
Consider any $\sigma \in \Snon{1/3}$ and any constant $\d>0$. There exists a constant $C_{G, \d}$
depending only on $G$ and $\d$ such that for any $T \ge C_{G, \d} n$, we have
\[ \Pr_\sigma\left(\sigma_T \notin \Sast{\d} \right) \le \frac{1}{n} \]
for all large enough $n$.
\end{lemma}
\subsubsection{Proportion matrix chain}
We actually need a more precise averaging than what is provided by
Lemma \ref{Lem:1stDE}. Fix a configuration $\sigma_0 \in {\mathcal S}$. For any $\sigma
\in {\mathcal S}$ and for any $a, b \in G$, define
\[ n_{a,b}^{\sigma_0}(\sigma) := |\{i \in [n] \ : \ \sigma_0(i)=a, \sigma(i)=b \}|. \]
If we run the Markov chain $(\sigma_t)_{t\ge 0}$ with initial state
$\sigma_0$, then $n_{a,b}^{\sigma_0}(\sigma_t)$ is the number of sites that
originally contained the element $a$ (at time 0) but now contain $b$
(at time $t$). Note that
\[ \sum_{b \in G} n_{a,b}^{\sigma_0}(\sigma) = n_a(\sigma_0) \quad \text{and} \quad \sum_{a \in G} n_{a,b}^{\sigma_0}(\sigma) = n_b(\sigma).\]
We can then associate with $(\sigma_t)_{t \ge 0}$ another Markov chain
$\left(n_{a,b}^{\sigma_0}(\sigma_t)/n_a(\sigma_0)\right)_{a, b \in G}$ for $t \ge
0$, which we call the {\it proportion matrix chain} ({\it with respect to
$\sigma_0$}). The state space for the proportion matrix chain is $\{0,
1, \dots, n\}^{G \times G}$, and the transition probabilities depend
on $\sigma_0$.
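The counts $n^{\sigma_0}_{a,b}$ and the two marginal identities displayed above can be checked directly; the following is a minimal sketch (helper name ours), here for $G = {\mathbb Z}/3$.

```python
def pair_counts(sigma0, sigma, group):
    """The counts n_{a,b}^{sigma0}(sigma): the number of sites that held
    a at time 0 and hold b now."""
    return {(a, b): sum(1 for i in range(len(sigma))
                        if sigma0[i] == a and sigma[i] == b)
            for a in group for b in group}
```

Summing a row of this matrix over $b$ recovers $n_a(\sigma_0)$, and summing a column over $a$ recovers $n_b(\sigma)$, exactly as in the identities above.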
The proportion matrix acts like a ``sufficient statistic'' for
analyzing our Markov chain started at $\sigma_\ast$, because of the
permutation invariance of our dynamics. In fact, as the following
lemma shows, the distance to stationarity of the proportion matrix
chain is equal to the distance to stationarity of the original chain.
\begin{lemma}\label{Lem:nij}
Let $\sigma_\ast \in {\mathcal S}$ be a configuration. For the Markov chain
$(\sigma_t)_{t \ge 0}$ with initial state $\sigma_\ast$, we consider
$\left(n_{a, b}^{\sigma_\ast}(\sigma_t)\right)_{a, b \in G}$. Let
$\overline{\pi}^{\sigma_\ast}$ be the stationary measure for the Markov
chain $\{(n_{a, b}^{\sigma_\ast}(\sigma_t))_{a, b \in G}\}_{t \ge 0}$ on
$\left\{0, 1, \dots, n\right\}^{G \times G}$. Then, for every $t \ge
0$, we have
\[ \normTV{ \Pr_{\sigma_\ast}(\sigma_t \in \cdot \ ) - \pi } = \normTV{ \Pr_{\sigma_\ast}\left( (n_{a, b}^{\sigma_\ast}(\sigma_t))_{a, b \in G} \in \cdot \ \right) - \overline{\pi}^{\sigma_\ast} }. \]
\end{lemma}
\begin{proof}
For any matrix $N = (N_{a,b})_{a,b \in G} \in \{0, 1, \ldots ,
n\}^{G \times G}$, write
\[ {\mathcal X}_{(N)} := \left\{\sigma \in {\mathcal S} \ : \ n_{a, b}^{\sigma_\ast}(\sigma)=N_{a, b} \ \text{for all $a, b \in G$}\right\} \]
for the set of configurations with $N$ as their proportion matrix.
Since the distribution of $\sigma_t$ is invariant under permutations on
sites $i \in [n]$ preserving the set $\{ i : \sigma_\ast(i) = a\}$ for
every $a \in G$, the conditional probability measures
$\Pr_{\sigma_\ast}\left(\sigma_t \in \cdot \mid \sigma_t \in {\mathcal X}_{(N)} \right)$ and
$\pi( \ \cdot \mid {\mathcal X}_{(N)})$ are both uniform on ${\mathcal X}_{(N)}$.
This implies that for each $\sigma \in {\mathcal X}_{(N)}$,
\[ |\Pr_{\sigma_\ast}(\sigma_t =\sigma) - \pi(\sigma)| = \frac{1}{\left|{\mathcal X}_{(N)}\right|}\left|\Pr_{\sigma_\ast}\left((n^{\sigma_\ast}_{a, b}(\sigma_t))_{a, b \in G}=N\right)- \overline{\pi}^{\sigma_\ast}(N) \right|, \]
and summing over all $\sigma \in {\mathcal X}_{(N)}$ and all $N$, we obtain the
claim.
\end{proof}
For $\sigma_0 \in {\mathcal S}$ and $r > 0$, define the set of configurations
\[ \Sast{\sigma_0, r} := \left\{ \sigma \in {\mathcal S} \ : \ \left\| \left(\frac{n^{\sigma_0}_{a, b}(\sigma)}{n_a(\sigma_0)}\right)_{b \in G} - \left(\frac{1}{{\mathcal Q}}\right)_{b \in G}\right\| \le r \text{ for all $a
\in G$} \right\}. \]
Roughly speaking, the following lemma shows that starting from a
typical configuration $\sigma_\ast \in \Sast{\frac{1}{4{\mathcal Q}}}$, we need
about $\frac{1}{2}n \log n$ steps to reach $\Sast{\sigma_\ast,
\frac{R}{\sqrt{n}}}$, where $R$ is a constant. We show this fact in
a slightly more general form where the initial state need not be
$\sigma_\ast$; the proof is given in Section \ref{Subsec:matrix}.
\begin{lemma}\label{Lem:2ndDE}
Consider any $\sigma_\ast, \sigma'_\ast \in \Sast{\frac{1}{4{\mathcal Q}}}$, and let
$T := \ceil{\frac{1}{2} n \log n}$. There exists a constant $C_G >
0$ depending only on $G$ such that for any given $R > 0$, we have
\[ \Pr_{\sigma'_\ast}\left(\sigma_T \notin \Sast{\sigma_\ast, \frac{R}{\sqrt{n}}}\right) \le C_G e^{-R} + \frac{1}{n} \]
for all large enough $n$.
\end{lemma}
\subsection{The coupling period}
After reaching $\Sast{\sigma_\ast, \frac{R}{\sqrt{n}}}$, we show that only
$O(n)$ additional steps are needed to mix in total variation
distance. The main ingredient in the proof is a coupling of proportion
matrix chains so that they coalesce in $O(n)$ steps when they both
start from configurations $\sigma, \tilde\sigma \in \Sast{\sigma_\ast,
\frac{R}{\sqrt{n}}}$. We construct such a coupling and prove the
following lemma in Section \ref{Sec:Coupling-Proof}.
\begin{lemma}\label{Lem:RW}
Consider any $\sigma_\ast \in \Sast{\frac{1}{5{\mathcal Q}^3}}$, and let $R > 0$. Suppose $\sigma, \tilde
\sigma \in \Sast{\sigma_\ast, \frac{R}{\sqrt{n}}}$. Then, there exists a
coupling $(\sigma_t, \tilde \sigma_t)$ of the Markov chains with initial
states $(\sigma, \tilde \sigma)$ such that for a given $\b > 0$ and all
large enough $n$,
\[ \Pr_{\sigma, \tilde \sigma}(\t > \b n) \le \frac{32{\mathcal Q}^2 R}{\sqrt{\b}}, \]
where $\t:=\min\{t \ge 0 : n^{\sigma_\ast}_{a, b}(\sigma_t) =
n^{\sigma_\ast}_{a, b}(\tilde \sigma_t) \ \text{\rm for all $a, b \in G$}\}$.
\end{lemma}
To translate this coupling time into a bound on total variation
distance, we need also the simple observation that the stationary
measure $\pi$ concentrates on $\Sast{\sigma_\ast, \frac{R}{\sqrt{n}}}$
except for probability $O(1/R^2)$, as given in the next lemma.
\begin{lemma}\label{Lem:stationary}
For the stationary distribution $\pi$ of the chain $(\sigma_t)_{t \ge 0}$,
for every $R>0$ and for all $n > m$,
\[ \pi\left(\sigma \notin \Sast{\frac{R}{\sqrt{n}}}\right) \le \frac{C_G}{R^2}. \]
Moreover for every $\d<1/(2{\mathcal Q})$, for every $R>0$ and for all $n>m$,
\[ \max_{\sigma_\ast \in \Sast{\d}}\pi\left(\sigma \notin \Sast{\sigma_\ast, \frac{R}{\sqrt{n}}}\right) \le \frac{2 C_G {\mathcal Q}}{R^2}, \]
where $C_G$ and $m$ are constants depending only on $G$.
\end{lemma}
\begin{proof}
Observe that since the stationary distribution $\pi$ is uniform on
${\mathcal S}$, it is given by the uniform distribution ${\rm Unif}$ on $G^n$
conditioned on ${\mathcal S}$. Note that we can always generate $G$ using
each of its $|G|$ elements, so we have an easy lower bound of $|{\mathcal S}|
\ge |G|^{n - |G|}$. Consequently, we have
\begin{align*}
\pi\left(\sigma \notin \Sast{\frac{R}{\sqrt{n}}}\right)
&\le |G|^{|G|} {\rm Unif}\left(\sigma \notin \Sast{\frac{R}{\sqrt{n}}}\right) \\
&\le |G|^{|G|} \sum_{a \in G}{\rm Unif}\left(\left| \frac{n_a(\sigma)}{n}-\frac{1}{{\mathcal Q}}\right| \ge \frac{R}{\sqrt{n}}\right) \le \frac{|G|^{|G|}}{R^2}\left(1-\frac{1}{{\mathcal Q}}\right).
\end{align*}
Concerning the second assertion, we note that $n_a(\sigma_\ast) \ge
(1/{\mathcal Q} -\d)n$ for each $a \in G$; the rest follows similarly, so we
omit the details.
\end{proof}
\begin{remark}\label{Rem:1}
In Lemma \ref{Lem:stationary} above, we have given a very loose
bound on $C_G$ for the sake of simplicity. Actually, it is not hard to
see that holding $G$ fixed, we have $\lim_{n\rightarrow \infty}
|{\mathcal S}|/|G|^n = 1$. See also \cite[Section 6.B.]{DSC98} for more
explicit bounds for various families of groups.
\end{remark}
Together, Lemmas \ref{Lem:2ndDE}, \ref{Lem:RW}, and \ref{Lem:stationary}
imply the following bound for total variation distance.
\begin{lemma} \label{Lem:coupling-tv}
Let $\b > 0$ be given, and let $T := \ceil{\frac{1}{2} n \log n} +
\ceil{\b n}$. Then, for any $\sigma_\ast \in \Sast{\frac{1}{5{\mathcal Q}^3}}$,
we have
\[ \normTV{ \Pr_{\sigma_\ast}(\sigma_T \in \cdot \ ) - \pi } \le \frac{C_G}{\b^{1/4}}, \]
where $C_G$ is a constant depending only on $G$.
\end{lemma}
\begin{proof}
Let $\tilde\sigma$ be drawn from the stationary distribution
$\pi$. Define
\[ \tau = \min \left\{ t \ge 0 : n^{\sigma_\ast}_{a,b}(\sigma_t) = n^{\sigma_\ast}_{a,b}(\tilde\sigma_t) \text{ for all $a, b \in G$}\right\}, \]
where $(\tilde\sigma_t)$ is a Markov chain started at $\tilde\sigma$. Let
$\overline{\pi}^{\sigma_\ast}$ denote the stationary distribution for
the proportion matrix with respect to $\sigma_\ast$. Since $\tilde\sigma$
was drawn from $\pi$, the proportion matrix of $\tilde\sigma_t$ remains
distributed as $\overline{\pi}^{\sigma_\ast}$ for all $t$.
We first run $\sigma$ and $\tilde\sigma$ independently up until time $T_1 :=
\ceil{\frac{1}{2} n \log n}$. For a parameter $R$ to be specified
later, consider the events
\[ {\mathcal G} := \left\{ \sigma_{T_1} \in \Sast{\sigma_\ast, \frac{R}{\sqrt{n}}} \right\}, \qquad \tilde{{\mathcal G}} := \left\{ \tilde\sigma_{T_1} \in \Sast{\sigma_\ast, \frac{R}{\sqrt{n}}} \right\}. \]
Lemma \ref{Lem:2ndDE} implies that $\Pr({\mathcal G}^{\sf c}) \le
C_G e^{-R} + \frac{1}{n}$, and Lemma \ref{Lem:stationary} implies that
$\Pr(\tilde{\mathcal G}^{\sf c}) \le \frac{2 C_G {\mathcal Q}}{R^2}$.
Let $T_2 := \ceil{\b n}$. Starting from time $T_1$, as long as both
${\mathcal G}$ and $\tilde{\mathcal G}$ hold, we may use Lemma \ref{Lem:RW} to form a
coupling $(\sigma_t, \tilde\sigma_t)$ so that
\[ \Pr_{\sigma_\ast, \sigma_\ast} \Big( n^{\sigma_\ast}_{a, b}(\sigma_{T_1+T_2}) \ne n^{\sigma_\ast}_{a, b}(\tilde\sigma_{T_1+T_2}) \text{ for some $a, b \in G$} \,\Big|\, {\mathcal G} \cap \tilde{\mathcal G} \Big) \le \frac{C{\mathcal Q}^2 R}{\sqrt{\b}}. \]
Setting $R = \b^{1/4}$, we conclude that
\begin{align*}
&\Pr_{\sigma_\ast, \sigma_\ast} \Big( n^{\sigma_\ast}_{a, b}(\sigma_{T_1+T_2}) \ne n^{\sigma_\ast}_{a, b}(\tilde\sigma_{T_1+T_2}) \text{ for some $a, b \in G$} \Big) \le \frac{C{\mathcal Q}^2 R}{\sqrt{\b}} + \Pr({\mathcal G}^{\sf c}) + \Pr(\tilde{\mathcal G}^{\sf c}) \\
&\qquad\le \frac{C{\mathcal Q}^2 R}{\sqrt{\b}} + \left(C_G e^{-R} + \frac{1}{n}\right) + \frac{2C_G {\mathcal Q}}{R^2} = O_G\left(\frac{1}{\b^{1/4}}\right).
\end{align*}
We have $T = T_1 + T_2$, and recall that the proportion matrix for
$\tilde\sigma$ is stationary for all time. This yields
\[ \normTV{ \Pr_{\sigma_\ast}\left( (n^{\sigma_\ast}_{a, b}(\sigma_T))_{a, b \in G} \in \cdot \ \right) - \overline{\pi}^{\sigma_\ast} } = O_G\left(\frac{1}{\b^{1/4}}\right). \]
The result then follows by Lemma \ref{Lem:nij}.
\end{proof}
\subsection{Proof of the main theorem}
We now combine the lemmas from the burn-in, averaging, and coupling
periods to complete the proof of the upper bound in Theorem
\ref{Thm:main}.
\begin{proof}[Proof of Theorem \ref{Thm:main} (\ref{Eq:UB})]
Define $T_1 := \ceil{n \log n + \b n}$, $T_2 := \ceil{\b n}$, and
$T_3 := \ceil{\frac{1}{2} n \log n} + \ceil{\b n}$.
Let $\tau_{1/3}$ be the first time to hit $\Snon{1/3}$ as in
Lemma \ref{Lem:Burn}. Then, Lemma \ref{Lem:Burn} implies that for
any $\sigma_1 \in {\mathcal S}$ and any $t \ge 0$, we have
\begin{align}
d_{\sigma_1}(T_1 + t) &\le \Pr_{\sigma_1}\left( \tau_{1/3} > T_1 \right) + \max_{\sigma \in \Snon{1/3}} d_\sigma(t) \nonumber \\
&\le \frac{120 {\mathcal Q}}{\b^2} + \max_{\sigma \in \Snon{1/3}} d_\sigma(t). \label{Eq:stage1}
\end{align}
Next, by Lemma \ref{Lem:1stDE}, for any $\sigma_2 \in \Snon{1/3}$
and when $\b$ and $n$ are sufficiently large, we have that
$\Pr_{\sigma_2} \left( \sigma_{T_2} \not\in \Sast{\frac{1}{5{\mathcal Q}^3}} \right)
\le \frac{1}{n}$. Consequently, for $\sigma_2 \in \Snon{1/3}$, we
have
\begin{equation} \label{Eq:stage2}
d_{\sigma_2}(T_2 + t) \le \frac{1}{n} + \max_{\sigma_\ast \in \Sast{\frac{1}{5{\mathcal Q}^3}}} d_{\sigma_\ast}(t).
\end{equation}
Finally, Lemma \ref{Lem:coupling-tv} states that
\begin{equation} \label{Eq:stage3}
\max_{\sigma_\ast \in \Sast{\frac{1}{5{\mathcal Q}^3}}} d_{\sigma_\ast}(T_3) \le \frac{C_G}{\b^{1/4}}.
\end{equation}
Thus, combining \eqref{Eq:stage1}, \eqref{Eq:stage2}, and
\eqref{Eq:stage3}, we obtain for any $\sigma \in {\mathcal S}$ that
\begin{align*}
d_\sigma\left(\frac{3}{2} n \log n + 4 \b n \right) &\le d_\sigma\left(T_1 + T_2 + T_3 \right) \nonumber \\
&\le \frac{120 {\mathcal Q}}{\b^2} + \frac{1}{n} + \frac{C_G}{\b^{1/4}}.
\end{align*}
Sending $n \rightarrow \infty$ and then $\b \rightarrow \infty$
yields \eqref{Eq:UB}.
\end{proof}
\section{Proofs for the averaging period} \label{Sec:DE-Proofs}
In this section, we prove Lemmas \ref{Lem:1stDE} and
\ref{Lem:2ndDE}. The proofs are based on analyzing stochastic
difference equations satisfied by the Fourier transform of the
proportion vector or matrix.
\subsection{The Fourier transform for $G$}
We first establish some notation and preliminaries for the Fourier
transform. Let $G^\ast$ be a complete set of non-trivial irreducible
representations of $G$. In other words, for each $\r \in G^\ast$, we
have a finite dimensional complex vector space $V_\r$ such that $\r: G
\to GL(V_\r)$ is a non-trivial irreducible representation, and any
non-trivial irreducible representation of $G$ is isomorphic to some unique $\r \in
G^\ast$. Moreover, we may equip each $V_\r$ with an inner product for
which $\r \in G^\ast$ is unitary.
For a configuration $\sigma \in {\mathcal S}$ and for each $\r \in G^\ast$, we
consider the matrix acting on $V_\r$ given by
\[ x_\r(\sigma) := \sum_{a \in G} \frac{n_a(\sigma)}{n}\r(a), \]
so that $x_\r(\sigma)$ is the Fourier transform of the proportion vector
at the representation $\r$. We write $x(\sigma):=(x_\r(\sigma))_{\r \in
G^\ast}$.
Let $\wt{V} := \bigoplus_{\r \in G^\ast}{\rm End}_{\mathbb C}(V_\r)$, and write
$d_\r := \dim_{\mathbb C} V_\r$. For an element $x = (x_\r)_{\r \in G^\ast} \in
\wt{V}$, we define a norm $\| \cdot \|_{\wt{V}}$ given by
\begin{equation*}\label{Eq:norm}
\|x\|_{\wt{V}}^2 := \frac{1}{{\mathcal Q}}\sum_{\r \in G^\ast}d_\r \|x_\r\|_{{\rm HS}}^2,
\end{equation*}
where $\langle A, B \rangle_{{\rm HS}} =\Tr(A^\ast B)$ denotes the
Hilbert-Schmidt inner product in ${\rm End}_{\mathbb C}(V_\r)$ and $\|\cdot\|_{{\rm HS}}$
denotes the corresponding norm. (Note that $\langle \cdot, \cdot
\rangle_{{\rm HS}}$ and $\|\cdot\|_{{\rm HS}}$ depend on $\r$, but for sake of
brevity, we omit the $\r$ when there is no danger of confusion.)
The Peter-Weyl theorem \cite[Chapter 2]{DiaconisRep} says that
\begin{equation*} \label{Eq:Peter-Weyl}
L^2(G) \cong {\mathbb C} \oplus \wt{V},
\end{equation*}
where the isomorphism is given by the Fourier transform. The
Plancherel formula then reads
\begin{equation}\label{Eq:Plancherel}
\|x(\sigma)\|_{\wt{V}}^2 = \left\|\left(\frac{n_a(\sigma)}{n}\right)_{a \in G} - \left(\frac{1}{{\mathcal Q}}\right)_{a \in G}\right\|^2.
\end{equation}
Thus, in order to show that $\sigma \in \Sast{\d}$, it
suffices to show that $\|x(\sigma)\|_{\wt{V}}$ is small. A similar
argument may be applied to the proportion matrix instead of the
proportion vector.
Finally, for an element $A \in {\rm End}_{\mathbb C}(V_\r)$, we will at times also
consider the \emph{operator norm} $\|A\|_{op} := \sup_{v \in V_\r, v \neq 0}
\|Av\| / \|v\|$. We will also sometimes use the following (equivalent)
variational characterization of the operator norm:
\begin{align*}
\sup_{\substack{X \in {\rm End}_{\mathbb C}(V_\r) \\ \|X\|_{{\rm HS}} = 1}} \|XA\|^2_{{\rm HS}} &= \sup_{\substack{X \in {\rm End}_{\mathbb C}(V_\r) \\ \|X\|_{{\rm HS}} = 1}} \Tr (XAA^*X^*) = \sup_{\substack{X \in {\rm End}_{\mathbb C}(V_\r) \\ \|X\|_{{\rm HS}} = 1}} \Tr (X^*XAA^*) \\
&= \sup_{\substack{Y \in {\rm End}_{\mathbb C}(V_\r) \\ Y \succeq 0, \;\; \Tr Y = 1}} \langle Y , AA^* \rangle_{{\rm HS}} = \|AA^*\|_{op} = \|A\|_{op}^2,
\end{align*}
where the second line uses that $Y = X^\ast X$ ranges over exactly the positive semidefinite operators of unit trace.
\end{align*}
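As a quick numerical sanity check of this variational identity (a sketch using NumPy; the matrix $A$ below is an arbitrary example, not taken from the text), one can verify that the supremum of $\|XA\|_{{\rm HS}}$ over $\|X\|_{{\rm HS}} = 1$ is attained at a rank-one $X$ built from a top left singular vector of $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

op_norm = np.linalg.norm(A, 2)          # largest singular value of A

# Rank-one X = e u1^* with ||X||_HS = 1, where u1 is a top left
# singular vector of A; then XA has a single nonzero row sigma_1 v1^*.
U, S, Vh = np.linalg.svd(A)
e = np.zeros(4)
e[0] = 1.0
X = np.outer(e, U[:, 0].conj())

assert np.isclose(np.linalg.norm(X, 'fro'), 1.0)
assert np.isclose(np.linalg.norm(X @ A, 'fro'), op_norm)

# And ||XA||_HS <= ||A||_op ||X||_HS for random unit-norm X:
for _ in range(100):
    X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    X /= np.linalg.norm(X, 'fro')
    assert np.linalg.norm(X @ A, 'fro') <= op_norm + 1e-9
```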
\subsubsection{The special case of $G = {\mathbb Z}/q$}
On a first reading of this section, the reader may wish to consider
everything for the special case of $G = {\mathbb Z}/q$ for some integer $q \ge 2$. In
that case, each representation is one-dimensional, and the
non-trivial representations can be indexed by $\ell = 1, 2, \ldots , q - 1$. The
Fourier transform is then particularly simple: the coefficients are
scalar values
\[ x_\ell(\sigma) = \sum_{a = 0}^{q - 1} \frac{n_a(\sigma)}{n} \o^{a \ell}, \]
where $\o := e^{\frac{2\pi i}{q}}$ is a primitive $q$-th root of
unity.
This special case already illustrates most of the main ideas while
simplifying the estimates in some places (e.g.\ the matrix inequalities
we use are often immediate for scalars).
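For concreteness, the scalar Fourier coefficients and the Plancherel identity for $G = {\mathbb Z}/q$ can be checked numerically (a minimal sketch; the count vector below is an arbitrary example):

```python
import numpy as np

def fourier_coefficients(counts):
    """x_l = sum_a (n_a / n) * w^{a l} for G = Z/q, l = 0, ..., q-1."""
    counts = np.asarray(counts, dtype=float)
    q, n = len(counts), counts.sum()
    a = np.arange(q)
    w = np.exp(2j * np.pi / q)
    return np.array([np.sum(counts / n * w ** (a * ell)) for ell in range(q)])

# Plancherel: (1/q) * sum_{l != 0} |x_l|^2 == sum_a (n_a/n - 1/q)^2.
counts = [5, 2, 3]          # n_0, n_1, n_2 for q = 3 (arbitrary example)
x = fourier_coefficients(counts)
p = np.asarray(counts, dtype=float) / sum(counts)
lhs = np.sum(np.abs(x[1:]) ** 2) / len(counts)   # trivial rep l = 0 excluded
rhs = np.sum((p - 1.0 / len(counts)) ** 2)
assert np.isclose(lhs, rhs)
```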
\subsection{A stochastic difference equation for the $n_a$}
For $a \in G$, we next analyze the behavior of $n_a(\sigma_t)$ over
time. For convenience, we write $n_a(t) = n_a(\sigma_t)$. Let ${\mathcal F}_t$
denote the $\sigma$-field generated by the Markov chain $(\sigma_t)_{t \ge 0}$
up to time $t$. Then, our dynamics satisfy the equation
\begin{equation}\label{Eq:DE0}
{\mathbb E}[n_a(t+1)-n_a(t) \mid {\mathcal F}_t] = \sum_{b \in G} \frac{n_{ab^{-1}}(t) n_b(t) }{2n(n-1)}+\sum_{b \in G} \frac{n_{ab}(t) n_b(t)}{2n(n-1)} - \frac{n_a(t)}{n}.
\end{equation}
Note that $|n_a(t + 1) - n_a(t)| \le 1$ almost surely. Thus, for each
$a \in G$, we can write the above as a stochastic difference equation
\begin{equation}\label{Eq:DE}
n_a(t+1) - n_a(t) = \sum_{b \in G} \frac{n_{ab^{-1}}(t) n_b(t) }{2n(n-1)}+\sum_{b \in G} \frac{n_{ab}(t) n_b(t)}{2n(n-1)} - \frac{n_a(t)}{n} + M_a(t+1),
\end{equation}
where ${\mathbb E}[M_a(t+1) \mid {\mathcal F}_t] = 0$ and $|M_a(t)| \le 2$ almost
surely.
It is easiest to analyze this equation through the Fourier
transform. Writing $x_\r(t) = x_\r(\sigma_t)$, we calculate from \eqref{Eq:DE} that
\[ x_\r(t+1) - x_\r(t) = \frac{1}{n-1}x_\r(t) \left(\frac{x_\r(t) + x_\r(t)^\ast}{2} -\frac{n-1}{n}\, I_{d_\r}\right) + \wh{M}_\r(t+1), \]
where $\wh{M}_\r(t) := \frac{1}{n}\sum_{a \in G}M_a(t) \r(a)$.
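For the reader's convenience, we record the computation behind this identity (a routine check). Multiplying \eqref{Eq:DE} by $\r(a)/n$ and summing over $a \in G$, the substitution $a = cb$ gives
\[ \sum_{a, b \in G} \frac{n_{ab^{-1}}(t)\, n_b(t)}{2n^2(n-1)}\, \r(a) = \frac{1}{2(n-1)} \left( \sum_{c \in G} \frac{n_c(t)}{n}\, \r(c) \right) \left( \sum_{b \in G} \frac{n_b(t)}{n}\, \r(b) \right) = \frac{x_\r(t)^2}{2(n-1)}, \]
while the substitution $a = cb^{-1}$, together with $\r(b^{-1}) = \r(b)^\ast$, shows that the sum involving $n_{ab}(t)$ equals $x_\r(t)\, x_\r(t)^\ast / (2(n-1))$; subtracting $x_\r(t)/n$ then yields the displayed equation.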
For convenience, write
\[X_\r(t) = \frac{1}{n - 1}\left(\frac{x_\r(t) + x_\r(t)^\ast}{2} - \frac{n-1}{n}\, I_{d_\r}\right),\]
so that our equation becomes
\begin{equation} \label{Eq:DEfourier}
x_\r(t+1) - x_\r(t) = x_\r(t) X_\r(t) + \wh{M}_\r(t+1).
\end{equation}
Note that we have
\[ \|x_\r(t)\|_{{\rm HS}} \le \sqrt{d_\r}, \qquad {\mathbb E}[\wh{M}_\r(t+1) \mid {\mathcal F}_t] = 0, \qquad\text{and}\qquad \|\wh{M}_\r(t)\|_{{\rm HS}} \le \frac{2 {\mathcal Q}\sqrt{d_\r}}{n}, \]
and thus,
\[ \|x(t)\|_{\wt{V}} \le 1 \qquad\text{and}\qquad \|\wh{M}(t)\|_{\wt{V}}\le \frac{2{\mathcal Q}}{n},\]
where $\wh{M}=(\wh{M}_\r)_{\r \in G^\ast}$.
\subsection{A general estimate for stochastic difference equations}
Before proving Lemma \ref{Lem:1stDE}, we also need a technical lemma
for controlling the behavior of stochastic difference equations, which
will be used to analyze \eqref{Eq:DEfourier} as well as other similar
equations.
\begin{lemma} \label{Lem:GenDE}
Let $(z(t))_{t \ge 0}$ be a sequence of $[0,1]$-valued random
variables adapted to a filtration $({\mathcal F}_t)_{t \ge 0}$. Let $\varepsilon \in
(0, 1)$ be a small constant, and let $\varphi : {\mathbb R}^+ \to (0,1]$ be a
non-decreasing function.
Suppose that there are ${\mathcal F}_t$-measurable random variables $M(t)$
for which
\begin{equation} \label{Eq:zDE}
z(t+1) - z(t) \le -\varepsilon\varphi(t+1) z(t) + M(t+1)
\end{equation}
and which, for some constant $D$, satisfy the bounds
\[ {\mathbb E}[ M(t+1) \mid {\mathcal F}_t ] \le D\varepsilon\sqrt{\varepsilon}, \qquad |M(t)| \le D\varepsilon. \]
Then, for each $t$ and each $\lambda > 0$, we have
\[ \Pr\left( z(t) \ge \lambda \sqrt{\varepsilon} + e^{- \varepsilon \int_0^t \varphi(s)\,ds} \cdot z(0) \right) \le C_{D,\varphi} e^{-c_{D,\varphi} \lambda^2} \]
for constants $c_{D,\varphi}, C_{D,\varphi}$ depending only on $D$ and $\varphi$.
\end{lemma}
\begin{proof}
Let us define for integers $t \ge 1$,
\[ \Phi(t) := \varepsilon^{-1} \sum_{k = 1}^t \log \frac{1}{1-\varepsilon \varphi(k)}, \qquad \text{and} \qquad \Phi(0) := 0.\]
Taking conditional expectations in
the inequality relating $z(t+1)$ to $z(t)$, we have
\[ {\mathbb E}[ z(t+1) \mid {\mathcal F}_t ] \le (1 - \varepsilon \varphi(t+1)) z(t) + D \varepsilon\sqrt{\varepsilon}. \]
Rearranging and using the fact that $\varphi(t)$ is non-decreasing, we have
\begin{align*}
{\mathbb E}[ z(t+1) \mid {\mathcal F}_t ] - \frac{D\sqrt{\varepsilon}}{\varphi(0)} &\le (1 - \varepsilon \varphi(t+1)) z(t) - \frac{D\sqrt{\varepsilon}(1 - \varepsilon \varphi(t+1))}{\varphi(0)} \\
&\le (1 - \varepsilon \varphi(t+1)) \left( z(t) - \frac{D\sqrt{\varepsilon}}{\varphi(0)} \right).
\end{align*}
Consequently,
\[ Z_t := e^{\varepsilon \Phi(t)} \left( z(t) - \frac{D\sqrt{\varepsilon}}{\varphi(0)} \right) \]
is a supermartingale, and its increments are bounded by
\begin{equation} \label{Eq:Z-increment-bound}
|Z_{t+1}-Z_t| \le e^{\varepsilon \Phi(t+1)}\left(|M(t+1)|+D \varepsilon\right) \le 2D\varepsilon e^{\varepsilon\Phi(t+1)}.
\end{equation}
Recall that $\varphi$ is non-decreasing, so that for all $t \ge s \ge 0$,
we have
\[ \Phi(t) = \Phi(s) + \varepsilon^{-1} \sum_{k = s + 1}^t \log \frac{1}{1 - \varepsilon \varphi(k)} \ge \Phi(s) + (t - s) \varphi(0). \]
Using this with \eqref{Eq:Z-increment-bound}, we see that the sum of
the squares of the first $t$ increments is at most
\begin{align*}
\sum_{s = 1}^{t} 4D^2 \varepsilon^2 e^{2\varepsilon\Phi(s)} &\le 4D^2\varepsilon^2 \sum_{s = 1}^t e^{2\varepsilon \Phi(t) - 2\varepsilon\varphi(0)(t - s)} \le 4D^2\varepsilon^2 e^{2\varepsilon\Phi(t)} \cdot \frac{1}{1 - e^{-2\varepsilon\varphi(0)}} \\
&\le 4D^2\varepsilon^2 e^{2\varepsilon\Phi(t)} \cdot \frac{1}{1 - (1 - \frac{1}{2}\varepsilon\varphi(0))} = \frac{8D^2 \varepsilon}{\varphi(0)} \cdot e^{2\varepsilon\Phi(t)}.
\end{align*}
By the Azuma-Hoeffding inequality, this yields
\[ \Pr\left(Z_t \ge \lambda \sqrt{\varepsilon} e^{\varepsilon\Phi(t)} + Z_0 \right) \le \exp\left( - \frac{\varphi(0) \lambda^2 \varepsilon \cdot e^{2\varepsilon\Phi(t)}}{16D^2 \varepsilon \cdot e^{2\varepsilon\Phi(t)}} \right) = \exp\left( -\frac{\varphi(0) \lambda^2}{16D^2} \right), \]
which in turn implies
\[ \Pr\left( z(t) \ge \frac{D\sqrt{\varepsilon}}{\varphi(0)} + e^{-\varepsilon\Phi(t)} z(0) + \lambda \sqrt{\varepsilon} \right) \le \exp\left( -\frac{\varphi(0)\lambda^2}{16D^2} \right). \]
Finally, observe that $\Phi(t) \ge \sum_{k = 1}^t \varphi(k) \ge \int_0^t
\varphi(s)\, ds$. The result then follows upon shifting and rescaling of
$\lambda$.
\end{proof}
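The deterministic skeleton of this bound is easy to check numerically. The following sketch (with hypothetical parameter values) iterates the worst-case mean recursion $z(t+1) = (1 - \varepsilon\varphi) z(t) + D\varepsilon^{3/2}$ for constant $\varphi$ and confirms that $z(t)$ stays below the envelope $D\sqrt{\varepsilon}/\varphi(0) + e^{-\varepsilon\varphi(0) t}\, z(0)$:

```python
import math

eps, phi0, D, z0 = 0.01, 0.5, 1.0, 1.0   # hypothetical parameter choices
z = z0
for t in range(1, 2001):
    z = (1 - eps * phi0) * z + D * eps ** 1.5   # worst-case mean drift
    envelope = D * math.sqrt(eps) / phi0 + math.exp(-eps * phi0 * t) * z0
    assert z <= envelope + 1e-12
# z converges to the fixed point D*sqrt(eps)/phi0 = 0.2
assert abs(z - D * math.sqrt(eps) / phi0) < 1e-3
```

The random fluctuations handled by Azuma--Hoeffding in the proof only widen this envelope by the $\lambda\sqrt{\varepsilon}$ term.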
\subsection{Proportion vector chain: Proof of Lemma \ref{Lem:1stDE}}\label{Subsec:vector}
We first prove a bound for the Fourier coefficients $x_\r(t)$.
\begin{lemma} \label{Lem:1stDE-fourier}
Consider any $\sigma \in \Snon{1/3}$ and any $\r \in G^\ast$. We have a
constant $c_G$ depending only on $G$ for which
\[ \Pr_\sigma\left( \bigcup_{t = 1}^{n^2} \left\{ \|x_\r(t)\|_{{\rm HS}} \ge \frac{1}{n^{1/8}} + e^{-c_G t/n}\cdot \|x_\r(0)\|_{{\rm HS}} \right\} \right) \le \frac{1}{n^3} \]
for all large enough $n$.
\end{lemma}
This immediately implies Lemma \ref{Lem:1stDE}.
\begin{proof}[Proof of Lemma \ref{Lem:1stDE}]
With $c_G$ defined as in Lemma \ref{Lem:1stDE-fourier}, take $C_{G, \d}$
large enough so that for any $T \ge C_{G, \d} n$,
\[ \frac{1}{n^{1/8}} + e^{-c_G T/n}\sqrt{d_\r} \le \d. \]
Then, Lemma \ref{Lem:1stDE-fourier} and Plancherel's formula yield
\begin{align*}
\Pr_\sigma\left(\sigma_T \notin \Sast{\d} \right) &\le \Pr_\sigma\left( \|x_\r(T)\|_{{\rm HS}} \ge \d \text{ for some $\r \in G^\ast$} \right) \\
&\le \frac{{\mathcal Q}}{n^3} \le \frac{1}{n},
\end{align*}
for large enough $n$, as desired.
\end{proof}
We are now left with proving Lemma \ref{Lem:1stDE-fourier}, which
relies on the following bound on the operator norm.
\begin{lemma} \label{Lem:gap}
There exists a positive constant $\gamma_G$ depending on $G$ such that for
any $\r \in G^\ast$ and any $\sigma \in \Snon{1/6}$,
\[ \|I_{d_\r}+X_\r(\sigma)\|_{op} \le 1-\frac{\gamma_G}{n}. \]
\end{lemma}
\begin{proof}
Let $\Delta_G$ denote the set of all probability distributions on
$G$, and for $c \in (0, 1)$, let $\Delta_G(c) \subset \Delta_G$
denote the set of all probability distributions $\mu$ such that
$\mu(H) \le 1 - c$ for all proper subgroups $H \subset G$.
Consider a representation $\r \in G^\ast$, and consider the function
$h : \Delta_G \to {\rm End}_{\mathbb C}(V_\r)$ given by
\[ h(\mu) = \sum_{a \in G} \mu(a) \frac{\r(a)+\r(a)^\ast}{2}. \]
Then, $h(\mu)$ is hermitian, and since $\r$ is unitary, we clearly have
\[\l(\mu):=\max_{v \in V_\r, \|v\|=1}\langle h(\mu)v, v\rangle \le 1. \]
We claim that $\l(\mu) < 1$ for each $\mu \in \Delta_G(c)$.
Indeed, suppose the contrary.
Then, there exists a unit vector $v \in V_\r$ such that $\Re \langle \r(a)v, v \rangle=1$ for all $a \in G$ with $\mu(a)>0$.
This implies that the support of $\mu$ is included in the subgroup
\[H=\{a \in G \ : \ \r(a)v=v\}.\]
Since $\r$ is a (non-trivial) irreducible representation, $H$ is a
proper subgroup of $G$, and thus $\mu(H) \le 1- c$, contradicting the
assumption that $\mu \in \Delta_G(c)$.
Note that $\mu \mapsto \l(\mu)$ is continuous.
We may define
\[\gamma_\r:=\max_{\mu \in \Delta_G(1/6)}\l(\mu) < 1
\qquad\text{and}\qquad \tilde \gamma_G:=\max_{\r \in G^\ast}\gamma_\r<1.\]
Then, we have for any $\sigma \in \Snon{1/6}$,
\[ \frac{x_\r(\sigma) + x_\r(\sigma)^\ast}{2} = \sum_{a \in G} \frac{n_a(\sigma)}{n}\frac{\r(a)+\r(a)^\ast}{2}
\preceq \tilde \gamma_G I_{d_\r}. \]
Taking $0<\gamma_G < 1-\tilde \gamma_G$, and
plugging this into the definition of $X_\r$ gives
$X_\r(\sigma) \preceq -\frac{\gamma_G}{n}I_{d_\r}$.
Note that $X_\r(\sigma) \succeq -\frac{2}{n-1}I_{d_\r}$.
Combining these together gives the result.
\end{proof}
\begin{remark}
A much more direct approach is possible in the case $G = {\mathbb Z}/q$. The
condition $\sigma \in \Snon{1/6}$ implies that $n_0(\sigma) \le
\frac{5}{6}n$. Then, we have
\[ \Re x_\ell(\sigma) = \Re \sum_{a = 0}^{q - 1} \frac{n_a(\sigma)}{n} \o^{a \ell} \le \frac{5}{6} + \frac{1}{6} \max_{1 \le a \le q - 1} \Re \o^{a \ell} = \frac{5}{6} + \frac{1}{6} \cos \frac{2\pi}{q} < 1 - \gamma_G \]
for some positive $\gamma_G$; here the middle equality uses that $a\ell \not\equiv 0 \pmod q$ for all $1 \le a \le q - 1$, which holds when $\gcd(\ell, q) = 1$, and hence for every $\ell \neq 0$ when $q$ is prime. Some rearranging then yields
the desired result.
\end{remark}
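The spectral bound in this remark can be checked directly (a small numerical sketch; for prime $q$, the set $\{a\ell \bmod q : 1 \le a \le q-1\}$ is exactly $\{1, \ldots, q-1\}$ for every $\ell \neq 0$, so the maximum of $\Re\,\o^{a\ell}$ is $\cos(2\pi/q)$):

```python
import math

for q in (3, 5, 7, 11):                      # prime moduli
    target = math.cos(2 * math.pi / q)
    for ell in range(1, q):
        m = max(math.cos(2 * math.pi * a * ell / q) for a in range(1, q))
        assert abs(m - target) < 1e-12
        # Hence Re x_ell <= 5/6 + (1/6)*cos(2*pi/q) < 1 when n_0 <= (5/6)n.
```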
\begin{proof}[Proof of Lemma \ref{Lem:1stDE-fourier}]
Fix $\r \in G^\ast$. Let ${\mathcal G}_t$ denote the event where for all $0
\le s \le t$, we have $\|I_{d_\r}+X_\r(s)\|_{op} \le 1 - \frac{\gamma_G}{n}$,
where $\gamma_G$ is taken as in Lemma \ref{Lem:gap}. Since our chain
starts at $\sigma \in \Snon{1/3}$, Lemmas \ref{Lem:Burn} and
\ref{Lem:gap} together imply that
\[ \Pr_\sigma({\mathcal G}_{n^2}^{\sf c}) \le C_G n^2 e^{-n/10}. \]
Next, we turn to \eqref{Eq:DEfourier}. Rearranging
\eqref{Eq:DEfourier} and squaring, we have
\begin{align}
\|x_\r(t+1)\|_{{\rm HS}}^2 &= \|x_\r(t)(I_{d_\r} + X_\r(t))\|_{{\rm HS}}^2 + \|\wh{M}_\r(t+1)\|_{{\rm HS}}^2 \nonumber \\
&\hphantom{==} + 2\Re \langle x_\r(t)(I_{d_\r} + X_\r(t)), \wh{M}_\r(t+1) \rangle_{{\rm HS}} \label{Eq:1stDEsquared}
\end{align}
Let $z_t := {\bf 1}_{{\mathcal G}_t} \|x_\r(t)\|_{{\rm HS}}^2$ and
\[ M'(t+1) := \|\wh{M}_\r(t+1)\|_{{\rm HS}}^2 + 2\Re \langle x_\r(t)(I_{d_\r} + X_\r(t)), \wh{M}_\r(t+1) \rangle_{{\rm HS}}. \]
Substituting into \eqref{Eq:1stDEsquared}, we obtain
\[ z_{t+1} \le \|I_{d_\r} + X_\r(t)\|_{op}^2 \cdot z_t + {\bf 1}_{{\mathcal G}_t} M'(t+1) \le \left(1 - \frac{\gamma_G}{n}\right)^2 z_t + {\bf 1}_{{\mathcal G}_t} M'(t+1). \]
Note that we have the bounds
\[ {\mathbb E}[ M'(t+1) \mid {\mathcal F}_t ]= {\mathbb E}[ \|\wh{M}_\r(t+1)\|_{{\rm HS}}^2 \mid {\mathcal F}_t ] \le \frac{4{\mathcal Q}^2d_\r}{n^2} \]
\[ |M'(t+1)| \le \|\wh{M}_\r(t+1)\|_{{\rm HS}}^2 + 2\sqrt{d_\r}\left(1+\frac{1}{n(n-1)}\right) \|\wh{M}_\r(t+1)\|_{{\rm HS}} \le \frac{6{\mathcal Q}^2 d_\r}{n}. \]
We now apply Lemma \ref{Lem:GenDE} with $\varepsilon = \frac{1}{n}$, $\varphi(t) =
\gamma_G$, $D = 6{\mathcal Q}^2 d_\r$, and $\lambda = n^{1/4}$. This yields
\[ \Pr\left( z_t \ge n^{-1/4} + e^{-\gamma_G t/n}\cdot z_0 \right) \le C'_G e^{-c'_G \sqrt{n}}. \]
Consequently,
\[ \Pr_\sigma\left( \|x_\r(t)\|_{{\rm HS}} \ge n^{-1/8} + e^{-\gamma_G t/2n} \cdot \|x_\r(0)\|_{{\rm HS}} \right) \le C'_G e^{-c'_G \sqrt{n}} + C_G n^2 e^{-n/10}. \]
The lemma with $c_G = \gamma_G/2$ then follows from union bounding over
all $1 \le t \le n^2$ and taking $n$ sufficiently large.
\end{proof}
\subsection{Proportion matrix chain: Proof of Lemma \ref{Lem:2ndDE}}\label{Subsec:matrix}
We carry out a similar albeit more refined strategy to analyze the
proportion matrix. Throughout this section, we assume our Markov chain
$(\sigma_t)_{t \ge 0}$ starts at an initial state $\sigma_\ast \in
\Sast{\frac{1}{4{\mathcal Q}}}$. We again write $n_a(t)=n_a(\sigma_t)$ and
$n_{a, b}(t)=n^{\sigma_\ast}_{a, b}(\sigma_t)$, and similar to before, the
$n_{a, b}(t)$ satisfy the difference equation
\begin{equation}\label{Eq:de2}
n_{a, b}(t+1)-n_{a, b}(t)=\sum_{c \in G}\frac{n_{a, bc^{-1}}(t)n_c(t)}{2n(n-1)}+\sum_{c \in G}\frac{n_{a, bc}(t)n_c(t)}{2n(n-1)} - \frac{n_{a, b}(t)}{n} + M_{a, b}(t+1),
\end{equation}
where ${\mathbb E}[M_{a, b}(t+1) \mid {\mathcal F}_t]=0$ and $|M_{a, b}(t)| \le 2$ for all $t \ge 0$.
We can again analyze this equation via the Fourier transform. In this
case, for each $a \in G$, we take the Fourier transform of
$\left(n_{a, b}(t)/n_a(\sigma_\ast)\right)_{b \in G}$. For $\r \in G^\ast$,
let
\[ y_{a,\r}(t) = y_{a,\r}^{\sigma_\ast}(t) := \sum_{b \in G}\frac{n_{a, b}(t)}{n_a(\sigma_\ast)}\r(b) \]
denote the Fourier coefficient at $\r$. Let $\wh{M}_{a, \r}(t) :=
\frac{1}{n_a(\sigma_\ast)}\sum_{b \in G}M_{a, b}(t)\r(b)$. Then, \eqref{Eq:de2} becomes
\begin{equation}\label{Eq:DEy}
y_{a, \r}(t+1) - y_{a, \r}(t) = y_{a, \r}(t) X_\r(t) + \wh{M}_{a, \r}(t+1).
\end{equation}
Note that ${\mathbb E}_\sigma[\wh{M}_{a, \r}(t+1) \mid {\mathcal F}_t]=0$. Also, since we
assumed $\sigma_\ast \in \Sast{\frac{1}{4{\mathcal Q}}}$, it follows that $\frac{n_a(\sigma_\ast)}{n}
\ge \frac{1}{2{\mathcal Q}}$. Thus, we also know
$\|\wh{M}_{a, \r}(t+1)\|_{{\rm HS}} \le \frac{4{\mathcal Q}^2\sqrt{d_\r}}{n}$.
Again, our main step is a bound on the Fourier coefficients
$y_{a, \r}(t)$, which will also be useful later in proving Lemma
\ref{Lem:RW}.
\begin{lemma} \label{Lem:2ndDE-fourier}
Consider any $\sigma_\ast, \sigma'_\ast \in \Sast{\frac{1}{4{\mathcal Q}}}$. There
exist constants $c_G, C_G > 0$ depending only on $G$ such that for
all large enough $n$, we have
\[ \Pr_{\sigma'_\ast}\left( \|y^{\sigma_\ast}_{a, \r}(t)\|_{{\rm HS}} \ge R \left(\frac{1}{\sqrt{n}} + e^{-t/n} \|y^{\sigma_\ast}_{a, \r}(0)\|_{{\rm HS}} \right) \right) \le e^{-\O_G(R^2) + O_G(1)} + \frac{2}{n^3} \]
for all $t$ and $R > 0$.
\end{lemma}
The above lemma directly implies Lemma \ref{Lem:2ndDE}.
\begin{proof}[Proof of Lemma \ref{Lem:2ndDE}]
We apply Lemma \ref{Lem:2ndDE-fourier} to each $a \in G$ and $\r \in
G^\ast$. Recall that $T = \ceil{\frac{1}{2} n \log n}$, so that
\[ \frac{1}{\sqrt{n}} + e^{-T/n} \|y^{\sigma_\ast}_{a, \r}(0)\|_{{\rm HS}} \le \frac{2\sqrt{d_\r}}{\sqrt{n}}. \]
Then, Lemma \ref{Lem:2ndDE-fourier} implies
\[ \Pr_{\sigma'_\ast}\left( \|y^{\sigma_\ast}_{a, \r}(T)\|_{{\rm HS}} \ge \frac{R}{\sqrt{n}} \right) \le e^{-\O_G(R^2) + O_G(1)} + \frac{2}{n^3}. \]
Union bounding over all $a \in G$ and $\r \in G^\ast$ and using the
Plancherel formula, this yields
\begin{align*}
&\Pr_{\sigma'_\ast}\left( \sigma_T \not\in \Sast{\sigma_\ast, \frac{R}{\sqrt{n}}} \right) \le \Pr_{\sigma'_\ast}\left( \max_{a, \r} \|y^{\sigma_\ast}_{a, \r}(T)\|_{{\rm HS}} \ge \frac{R}{\sqrt{n}} \right) \\
&\qquad\qquad \le e^{-\O_G(R^2) + O_G(1)} + \frac{2 {\mathcal Q}^2}{n^3} \le C_G e^{-R} + \frac{1}{n}
\end{align*}
for sufficiently large $C_G$ and $n$.
\end{proof}
We now prove Lemma \ref{Lem:2ndDE-fourier}. Before proceeding with the
main proof, we need the following routine estimate as a preliminary
lemma.
\begin{lemma} \label{Lem:theta}
Let $\theta_n : {\mathbb R}^d \to {\mathbb R}^+$ be the function given by $\theta_n(x)
= \|x\| + \frac{1}{\sqrt{n}}e^{-\sqrt{n}\|x\|} -
\frac{1}{\sqrt{n}}$. Then, we have the inequalities
\[ \|\nabla \theta_n(x)\| \le 1, \qquad \theta_n(x + h) \le \theta_n(x) + \langle h, \nabla \theta_n(x) \rangle + \frac{\sqrt{n}}{2} \|h\|^2. \]
\end{lemma}
\begin{proof}
We can write $\theta_n(x) = f(\|x\|)$, where $f(r) = r +
\frac{1}{\sqrt{n}} e^{-\sqrt{n} r} - \frac{1}{\sqrt{n}}$. By
spherical symmetry, we have
\[ \|\nabla \theta_n(x)\| = f'(\|x\|) = 1 - e^{-\sqrt{n}\|x\|} \le 1, \]
which is the first inequality. Again by spherical symmetry, the
eigenvalues of the Hessian $\nabla^2 \theta_n(x)$ can be directly
computed to be $f''(\|x\|)$ and $f'(\|x\|) / \|x\|$. But these are
bounded by
\[ f''(r) \le \sqrt{n} e^{-\sqrt{n}r} \le \sqrt{n}, \qquad f'(r)/r \le \frac{1 - e^{-\sqrt{n}r}}{r} \le \sqrt{n}. \]
Thus, $\nabla^2 \theta_n(x) \preceq \sqrt{n} I$, and the second inequality
follows from Taylor expansion.
\end{proof}
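Both inequalities of Lemma \ref{Lem:theta} are easy to test numerically (a sketch with randomly sampled points; the dimension $d = 4$ and the value $n = 100$ are arbitrary choices):

```python
import numpy as np

def theta(x, n):
    r, s = np.linalg.norm(x), np.sqrt(n)
    return r + np.exp(-s * r) / s - 1.0 / s

def grad_theta(x, n):
    r = np.linalg.norm(x)
    if r == 0:
        return np.zeros_like(x)
    return (1 - np.exp(-np.sqrt(n) * r)) * x / r   # f'(r) * x / r

rng = np.random.default_rng(1)
n = 100
for _ in range(200):
    x = rng.normal(size=4)
    h = 0.3 * rng.normal(size=4)
    g = grad_theta(x, n)
    assert np.linalg.norm(g) <= 1 + 1e-12           # gradient bound
    taylor = theta(x, n) + g @ h + np.sqrt(n) / 2 * (h @ h)
    assert theta(x + h, n) <= taylor + 1e-9          # second-order bound
```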
\begin{proof}[Proof of Lemma \ref{Lem:2ndDE-fourier}]
Let $\gamma_G$ and $c_G$ be the constants from Lemmas \ref{Lem:gap} and
\ref{Lem:1stDE-fourier}, respectively. Define the events
\[ {\mathcal G}_t := \bigcap_{s = 0}^t \left\{ X_\r(\sigma_s) \preceq -\frac{\gamma_G}{n} \right\}, \qquad {\mathcal G}'_t := \bigcap_{s = 0}^t \left\{ X_\r(\sigma_s) \preceq -\frac{1 - \sqrt{d_\r} e^{-c_G s/n} - 2n^{-1/8}}{n} \right\}. \]
Note that $\sigma'_\ast \in \Sast{\frac{1}{4{\mathcal Q}}} \subseteq
\Snon{1/3}$. Hence, by Lemmas \ref{Lem:Burn} and \ref{Lem:gap}, we
have $\Pr({\mathcal G}^{\sf c}_{n^2}) \le C_G n^2 e^{-n/10}$. We also have
\begin{align*}
X_\r(s) &= \frac{1}{n - 1} \left( \frac{x_\r(s)+x_\r(s)^\ast}{2} - \frac{n - 1}{n}I_{d_\r} \right) \preceq -\frac{1}{n}\left( 1 - \frac{n \|x_\r(s)\|_{{\rm HS}}}{n - 1} \right) I_{d_\r} \\
&\preceq -\frac{1}{n}\left( 1 - \|x_\r(s)\|_{{\rm HS}} - \frac{\sqrt{d_\r}}{n-1} \right) I_{d_\r},
\end{align*}
where we have used the fact that
$\left\|\frac{x_\r(s)+x_\r(s)^\ast}{2}\right\|_{op} \le \|x_\r(s)\|_{op} \le
\|x_\r(s)\|_{{\rm HS}}$.
Lemma \ref{Lem:1stDE-fourier} then implies that $\Pr({\mathcal G}'^{\sf
c}_{n^2}) \le \frac{1}{n^3}$. Thus, setting
\[ \varphi(t) := \max(\gamma_G, 1 - \sqrt{d_\r} e^{-c_G t/n} - 2 n^{-1/8}), \]
\[ {\mathcal H}_t := {\mathcal G}_t \cap {\mathcal G}'_t = \bigcap_{s = 0}^t \left\{ X_\r(\sigma_s) \preceq -\frac{\varphi(s)}{n} \right\}, \]
we conclude that
\[ \Pr({\mathcal H}^{\sf c}_{n^2}) \le \Pr({\mathcal G}^{\sf c}_{n^2}) + \Pr({\mathcal G}'^{\sf c}_{n^2}) \le \frac{2}{n^3} \]
for all large enough $n$.
Next, we turn to \eqref{Eq:DEy} and apply $\theta_n$ to both sides,
where we identify ${\mathbb C}^{d_\r^2}$ with ${\mathbb R}^{2d_\r^2}$. Using Lemma \ref{Lem:theta} and
taking the conditional expectation, we obtain
\begin{align*}
{\mathbb E}\left[ \theta_n\left( y_{a, \r}(t+1) \right) \,\middle|\, {\mathcal F}_t \right] &\le \theta_n\left( y_{a, \r}(t) (I_{d_\r} + X_\r(t))\right) + \frac{8 {\mathcal Q}^4 d_\r}{n \sqrt{n}} \\
&\le \theta_n(\|I_{d_\r} + X_\r(t)\|_{op} \cdot y_{a, \r}(t)) + \frac{8 {\mathcal Q}^4 d_\r}{n \sqrt{n}} \\
&\le \|I_{d_\r} + X_\r(t)\|_{op} \cdot \theta_n(y_{a, \r}(t)) + \frac{8 {\mathcal Q}^4 d_\r}{n \sqrt{n}},
\end{align*}
where the second inequality follows from the variational formula for
operator norm (i.e. that $\|BA\|_{{\rm HS}} \le \|A\|_{op} \|B\|_{{\rm HS}}$),
and the third inequality follows from the fact that $\theta_n$ is
convex with $\theta_n(0) = 0$. Thus, we may write
\[ \theta_n(y_{a, \r}(t+1)) \le \|I_{d_\r} + X_\r(t)\|_{op} \cdot \theta_n(y_{a, \r}(t)) + M'(t+1) \]
where
\[ {\mathbb E}[ M'(t+1) \mid {\mathcal F}_t ] \le \frac{8 {\mathcal Q}^4 d_\r}{n\sqrt{n}}, \qquad |M'(t+1)| \le \frac{8{\mathcal Q}^2 \sqrt{d_\r}}{n}. \]
Now, let $z_t := {\bf 1}_{{\mathcal H}_t} \theta_n(y_{a, \r}(t))$, and note that
since $X_\r(\sigma) \succeq -\frac{2}{n-1} I_{d_\r}$, we have
$\|I_{d_\r} + X_\r(t)\|_{op} \le 1-\frac{\varphi(t)}{n}$ whenever ${\mathcal H}_t$
holds. Thus,
\[ z_{t+1} \le \|I_{d_\r} + X_\r(t)\|_{op} \cdot z_t + {\bf 1}_{{\mathcal H}_t}M'(t+1) \le \left(1 - \frac{1}{n} \varphi(t) \right) z_t + {\bf 1}_{{\mathcal H}_t}M'(t+1). \]
We may then apply Lemma \ref{Lem:GenDE} with $\varepsilon = \frac{1}{n}$ and
$D = 8{\mathcal Q}^4 d_\r$. Note that
\begin{align*}
\int_0^t \varphi(s) \,ds &\ge \left(1 - 2 n^{-\frac{1}{8}}\right)t - \sqrt{d_\r} \int_0^\infty e^{-\frac{c_G s}{n}} \,ds \ge t - O_G(n)
\end{align*}
for all large enough $n$. Thus, Lemma \ref{Lem:GenDE} implies that
\begin{equation} \label{Eq:2ndDE-fourier-z_t}
\Pr\left(z_t \ge \frac{\lambda}{\sqrt{n}} + C_G e^{-t/n} \cdot z_0 \right) \le C'_G e^{-c'_G \lambda^2}.
\end{equation}
Consequently,
\begin{align*}
&\Pr\left( \|y_{a, \r}(t)\|_{{\rm HS}} \ge R\left( \frac{1}{\sqrt{n}} + e^{-\frac{t}{n}}\|y_{a, \r}(0)\|_{{\rm HS}} \right) \right) \\
&\qquad\qquad \le \Pr\left( \theta_n(y_{a, \r}(t)) \ge \frac{R - 1}{\sqrt{n}} + Re^{-\frac{t}{n}}\|y_{a, \r}(0)\|_{{\rm HS}} \right) \\
&\qquad\qquad \le \Pr\left( \theta_n(y_{a, \r}(t)) \ge \frac{R - 1}{\sqrt{n}} + Re^{-\frac{t}{n}}\theta_n(y_{a, \r}(0)) \right) \\
&\qquad\qquad \le \Pr\left( z_t \ge \frac{R - 1}{\sqrt{n}} + Re^{-\frac{t}{n}}z_0 \right) + \Pr({\mathcal H}^{\sf c}_{n^2}) \\
&\qquad\qquad \le e^{-\O_G(R^2) + O_G(1)} + \frac{2}{n^3},
\end{align*}
as desired.
\end{proof}
\section{Construction of the coupling: Proof of Lemma \ref{Lem:RW}}\label{Sec:Coupling-Proof}
For each $\d>0$, we define a subset of $\{0, 1, \dots, n\}^{G \times
G}$ by
\[ \Mc_\d:=\left\{ (n_{a, b})_{a, b \in G} \ : \ n_{a, b} \ge \frac{(1-\d) n}{{\mathcal Q}^2} \ \text{for every $a, b \in G$ and}\ \sum_{a, b \in G}n_{a, b}=n\right\}. \]
\begin{lemma}\label{Lem:coupling}
Consider a configuration $\sigma_\ast \in {\mathcal S}$ and a constant $0<\d \le
\frac{1}{2{\mathcal Q}^2}$, and assume that $(1 - \d)n/{\mathcal Q}^2$ is an
integer. Let $(\sigma_t)_{t \ge 0}$ and $(\tilde\sigma_t)_{t \ge 0}$ be two
product replacement chains started at states $\sigma, \tilde\sigma \in {\mathcal S}$,
respectively. Then, there exists a coupling $(\sigma_t, \tilde \sigma_t)$ of
the Markov chains satisfying the following:
Let
\[ D_t:=\frac{1}{2}\sum_{a, b \in G}|n^{\sigma_\ast}_{a, b}(\sigma_t) - n^{\sigma_\ast}_{a, b}(\tilde \sigma_t)|. \]
Then, on the event $\{(n^{\sigma_\ast}_{a, b}(\sigma_t))_{a, b \in G}, (n^{\sigma_\ast}_{a, b}(\tilde \sigma_t))_{a, b \in G} \in \Mc_\d\}$ and $\{D_t > 0\}$, one has
\begin{align}
{\mathbb E}_{\sigma, \tilde \sigma}[D_{t+1}-D_t \mid \sigma_t, \tilde \sigma_t] &\le 0, \label{Eq:D-drift-0} \\
\Pr_{\sigma, \tilde \sigma}\left(D_{t+1} - D_t \neq 0 \mid \sigma_t, \tilde \sigma_t \right) &\ge \frac{(1-\d)^2}{4{\mathcal Q}^3}. \label{Eq:D-fluctuate}
\end{align}
\end{lemma}
\begin{proof}
Let us abbreviate $n_{a, b}(t) = n^{\sigma_\ast}_{a, b}(\sigma_t)$ and $\tilde
n_{a, b}(t) = n^{\sigma_\ast}_{a, b}(\tilde \sigma_t)$. Let
$m_{a, b}(t):=\min(n_{a, b}(t), \tilde n_{a, b}(t))$. For each $a \in G$, we
define the quantity
\[ d_a(t) := \frac{1}{2}\sum_{b \in G} |n_{a, b}(t) - \tilde{n}_{a, b}(t)| = \sum_{b \in G} n_{a, b}(t) - \sum_{b \in G} m_{a, b}(t), \]
so that $D_t = \sum_{a \in G} d_a(t)$.
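The second expression for $d_a(t)$ uses that the row sums $\sum_b n_{a, b}(t)$ and $\sum_b \tilde n_{a, b}(t)$ agree (both equal $n_a(\sigma_\ast)$). A minimal sketch checking that the two expressions coincide (the count matrices below are arbitrary examples with equal row sums):

```python
import numpy as np

def D(nab, nab_t):
    """Half L1 distance between two pair-count matrices."""
    return 0.5 * np.abs(nab - nab_t).sum()

def D_via_min(nab, nab_t):
    """Sum over rows a of (sum_b n_ab - sum_b min(n_ab, n~_ab));
    valid when the two matrices have equal row sums."""
    return (nab.sum(axis=1) - np.minimum(nab, nab_t).sum(axis=1)).sum()

nab   = np.array([[3, 1, 2], [0, 4, 1]])   # rows sum to 6 and 5
nab_t = np.array([[2, 2, 2], [1, 1, 3]])   # same row sums
assert np.array_equal(nab.sum(axis=1), nab_t.sum(axis=1))
assert D(nab, nab_t) == D_via_min(nab, nab_t)
```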
For accounting purposes, it is helpful to introduce two sequences
\[ (x_1, x_2, \ldots , x_n) \text{ and } (\tilde{x}_1, \tilde{x}_2, \ldots , \tilde{x}_n) \]
of elements of $G \times G$. These sequences are chosen so that
the number of $x_k$ equal to $(a, b)$ is exactly $n_{a, b}$, and
similarly the number of $\tilde{x}_k$ equal to $(a, b)$ is
$\tilde{n}_{a, b}$. Moreover, we arrange their indices in a coordinated
fashion, as described below.
We define three families of disjoint sets: $P_{a, b}$, $Q_a$, and $R_a
\subset [n]$.
\begin{itemize}
\item For each $a, b \in G$, let $P_{a, b}$ be a set of size $(1 - \delta)n/{\mathcal Q}^2$ such that for any $k \in P_{a, b}$, we have $x_k
=\tilde{x}_k = (a, b)$. (This is possible provided that $(n_{a, b}(t)), (\tilde n_{a, b}(t)) \in \Mc_\d$
holds.)
\item For each $a \in G$, let $Q_a$ be a set of size $\sum_{b \in G}(m_{a, b} - |P_{a, b}|)$ such that for any $k \in Q_a$, $x_k
=\tilde{x}_k= (a, b)$ for some $b$. (Note that $Q_a$ may be empty.)
\item For each $a \in G$, let $R_a$ be a set of size $d_a$ such
that for any $k \in R_a$, $x_k$ and $\tilde{x}_k$ both have $a$ as
their first coordinate. (This $R_a$ is well-defined since $\sum_b
n_{a, b} = \sum_b \tilde n_{a, b}$ for each $a$; it may also be empty.)
\end{itemize}
Define
\[
P := \bigsqcup_{a, b \in G} P_{a, b}, \qquad Q := \bigsqcup_{a \in G} Q_a, \qquad R := \bigsqcup_{a \in G} R_a.
\]
Suppose that $D_t > 0$, so that for some $a_*, b_*, b_*' \in G$ we
have $n_{a_*, b_*} > \tilde{n}_{a_*, b_*}$ and $n_{a_*, b'_*} <
\tilde{n}_{a_*, b'_*}$. Let us consider all possible ways to sample a
pair of indices and a sign $(k, l, s) \in \{1, 2, \ldots , n\}^2
\times \{\pm 1\}$ with $k \neq l$.
Suppose $x_k = (a_k, b_k)$ and $x_l = (a_l, b_l)$. We think of $(k, l,
+1)$ as corresponding to a move on $(n_{a, b}(t))$ where $n_{a_k, b_k}$ is
decremented and $n_{a_k, (b_k \cdot b_l)}$ is incremented. Similarly, $(k, l,
-1)$ corresponds to a move where $n_{a_k, b_k}$ is decremented and
$n_{a_k, (b_k \cdot b_l^{-1})}$ is incremented. We may also think of $(k, l, \pm
1)$ as corresponding to moves on $(\tilde n_{a, b}(t))$ in an analogous way.
\begin{figure}
\includegraphics{pqr_figure.pdf}
\caption{Illustration of cases (i) through (iv).}
\label{fig:coupling-cases}
\end{figure}
We now analyze four cases, as illustrated in Figure \ref{fig:coupling-cases}.
\paragraph{(i) {\bf Case $(k, l) \in (P \sqcup Q) \times (P \sqcup Q)$.}}
For all but an exceptional situation described below, we apply the
move corresponding to $(k, l, s)$ to both states $(n_{a, b}(t))$ and
$(\tilde n_{a, b}(t))$. In these cases, $D_{t+1}=D_t$.
We now describe the exceptional situation. Define
\[ S = P_{a_*, b_*} \times \left(\bigsqcup_{c \in G} P_{c, (b_*^{-1} \cdot b'_*)}\right) \qquad\text{and}\qquad S' = P_{a_*, b'_*} \times \left(\bigsqcup_{c \in G} P_{c, {\rm id}}\right). \]
Then, the exceptional situation occurs when $s = +1$ and $(k, l) \in
S \sqcup S'$.
Take any bijection $\tau$ from $S$ to $S'$. If $(k, l) \in S$, then
we apply $(k, l, +1)$ to $(n_{a, b}(t))$ while applying $(\tau(k, l),
+1)$ to $(\tilde n_{a, b}(t))$. This increments $n_{a_*, b'_*}$,
decrements $n_{a_*, b_*}$, and has no effect on the
$(\tilde{n}_{a, b}(t))$. The overall effect is that $D_{t+1} = D_t - 1$.
If instead $(k, l) \in S'$, then we apply $(k, l, +1)$ to $(n_{a,
b}(t))$ and $(\tau^{-1}(k, l), +1)$ to $(\tilde n_{a, b}(t))$. A
similar analysis shows that in this case $D_{t+1} = D_t + 1$.
The exceptional event occurs with probability at least $\frac{(1 -
\d)^2}{2{\mathcal Q}^3}$, and when it occurs, $D_t$ increases or decreases
by $1$ with equal probability. Thus, the exceptional situation plays
the role of introducing some unbiased fluctuation in $D_t$ and gives
us \eqref{Eq:D-fluctuate}. \\
\paragraph{(ii) {\bf Case $(k, l) \in (Q \sqcup R) \times (Q \sqcup R)$ but $(k, l) \not\in Q \times Q$.}}
This occurs with probability
\[ \frac{1}{n(n-1)}\left((|Q|+|R|)(|Q|+|R|-1)-|Q|(|Q|-1)\right) \]
which is at most
\[ \frac{2}{n(n - 1)}(|Q| + |R|) |R| = \frac{2 \d}{n - 1}D_t. \]
Apply the move corresponding to $(k, l, s)$ to both states. This
increases $D_t$ by at most $1$. We will see later that the effect of
this case is small compared to the other cases. \\
\paragraph{(iii) {\bf Case $(k, l) \in P \times R$.}}
This occurs with probability
\[ \frac{1}{n(n - 1)}|P| |R| =\frac{1 - \d}{n - 1}D_t. \]
Apply the move corresponding to $(k, l, s)$ to both states. Again,
this increases $D_t$ by at most $1$, but with positive probability it
does not increase $D_t$ at all.
Suppose that $x_l = (a_1, b_1)$ and $\tilde{x}_l = (a_1,
\tilde{b}_1)$, and suppose that $k \in P_{a_2, b_2}$. Then the move has
the effect of decreasing $n_{a_2, b_2}$ and $\tilde{n}_{a_2, b_2}$ while
increasing $n_{a_2, (b_2 \cdot b_1^s)}$ and $\tilde{n}_{a_2,
(b_2\cdot \tilde{b}_1^s)}$.
Note that conditioned on this case happening, $(a_2, b_2)$ is
distributed uniformly over $G \times G$. When $(a_2,
(b_2\cdot \tilde{b}_1^s)) = (a_*, b_*)$ or $(a_2,
(b_2\cdot b_1^s)) = (a_*, b'_*)$, the move does not increase
$D_t$. Therefore there is at least a $2/{\mathcal Q}^2$ chance that $D_t$ is
actually not increased. Hence, the probability that $D_t$ is
increased by $1$ is at most
\[ \left(1 - \frac{2}{{\mathcal Q}^2}\right)\frac{1 - \d}{n-1}D_t. \]
\paragraph{(iv) {\bf Case $(k, l) \in R \times P$.}}
This occurs with probability
\[ \frac{1}{n(n - 1)} |R| |P| = \frac{1 - \d}{n - 1} D_t. \]
Suppose that $x_k = (a, b)$ and $\tilde{x}_k = (a, \tilde{b})$. Let
$\tau$ be a permutation of $P$ such that for $l \in P_{a, c}$, one has
$\tau(l) \in P_{a, \tilde{b}^{-1}\cdot b \cdot c^s}$. Then apply $(k, l, s)$ to $(n_{a, b}(t))$
and apply $(k, \tau(l), s)$ to $(\tilde n_{a, b}(t))$. This always decreases
$D_t$ by $1$. \\
Let us now summarize what we know when $(n_{a, b}(t)), (\tilde n_{a, b}(t)) \in \Mc_\d$ and $D_t>0$.
From Cases (i), (ii), and (iii), we have
\[ \Pr_{\sigma, \tilde \sigma}(D_{t+1} = D_t + 1 \mid \sigma_t, \tilde \sigma_t) \le \left(1-\frac{2(1-\d)}{{\mathcal Q}^2}+\d\right)\frac{D_t}{n-1} + \frac{(1 - \d)^2}{4{\mathcal Q}^3}. \]
From Cases (i) and (iv), we have
\[ \Pr_{\sigma, \tilde \sigma}(D_{t+1} = D_t - 1 \mid \sigma_t, \tilde \sigma_t) \ge (1-\d)\frac{D_t}{n-1} + \frac{(1 - \d)^2}{4{\mathcal Q}^3}. \]
Therefore, if $0<\d \le \frac{1}{2{\mathcal Q}^2}$, then
\[{\mathbb E}_{\sigma, \tilde \sigma}[D_{t+1}-D_t \mid \sigma_t, \tilde \sigma_t] \le 0, \]
verifying \eqref{Eq:D-drift-0}.
To fully define the coupling, when $D_t = 0$, we can couple $\sigma_t$
and $\tilde \sigma_t$ to move identically, and if either $(n_{a, b}(t)) \notin
\Mc_\d$ or $(\tilde n_{a, b}(t)) \notin \Mc_\d$, we may run the two
chains independently.
\end{proof}
\begin{proof}[Proof of Lemma \ref{Lem:RW}]
Since $\sigma \in \Sast{\sigma_\ast, \frac{R}{\sqrt{n}}}$, we must have for
each $a \in G$ and $\r \in G^\ast$ that $\|y^{\sigma_\ast}_{a, \r}(\sigma)\|_{{\rm HS}}
\le \frac{R}{\sqrt{n}}$. Note that for large enough $n$, we have
$\Sast{\sigma_\ast, \frac{R}{\sqrt{n}}} \subseteq
\Sast{\frac{1}{5{\mathcal Q}^3}}$. Thus, we may apply Lemma
\ref{Lem:2ndDE-fourier} to obtain
\begin{equation} \label{Eq:RW-1}
\Pr\left( \bigcup_{t = 0}^{n^2} \left\{ \|y^{\sigma_\ast}_{a, \r}(\sigma_t)\|_{{\rm HS}} \ge \frac{1}{5{\mathcal Q}^3} \right\} \right) \le n^2 \left( e^{-\O_{G}(n) + O_{G}(1)} + \frac{2}{n^3} \right) \le \frac{3}{n}
\end{equation}
for large enough $n$. Define the event
\[ {\mathcal G}_t := \left\{ \sigma_s \in \Sast{\sigma_\ast, \frac{1}{5{\mathcal Q}^3}} \text{ for all $1 \le s \le t$} \right\}. \]
The Plancherel formula applied to \eqref{Eq:RW-1} implies that
$\Pr({\mathcal G}^{\sf c}_{n^2}) \le \frac{3{\mathcal Q}^2}{n}$. We may analogously define
an event $\tilde{\mathcal G}_t$ for $\tilde\sigma$ and let ${\mathcal A}_t := {\mathcal G}_t \cap
\tilde{\mathcal G}_t$. Thus, $\Pr({\mathcal A}_{n^2}^{\sf c}) \le \frac{6{\mathcal Q}^2}{n}$.
Pick $\d' \in \left(\frac{2}{5{\mathcal Q}^2}, \frac{3}{7{\mathcal Q}^2}\right)$ so that $(1 - \d')n/{\mathcal Q}^2$ is an
integer. Note that when ${\mathcal A}_t$ holds, we have
\[ \sigma_t \in \Sast{\sigma_\ast, \frac{1}{5{\mathcal Q}^3}} \quad \text{and} \quad \sigma_\ast \in \Sast{\frac{1}{5{\mathcal Q}^3}}\implies (n_{a, b}(t)) \in \Mc_{\frac{2}{5{\mathcal Q}^2}} \subseteq \Mc_{\d'}, \]
and similarly $(\tilde n_{a, b}(t)) \in \Mc_{\d'}$.
Thus, we may invoke Lemma \ref{Lem:coupling} to give a coupling
between $\sigma$ and $\tilde\sigma$ where on the event ${\mathcal A}_t$, the quantity
$D_t$ is more likely to decrease than increase. Letting ${\bf
D}_t:={\bf 1}_{{\mathcal A}_t}D_t$, we see that $({\bf D}_t)$ is a
supermartingale with respect to $({\mathcal F}_t)$.
Define
\[ \t := \min\{ t \ge 0 : D_t=0 \}, \qquad {\tilde \t} := \min\{ t \ge 0 : {\bf D}_t=0 \}. \]
Then, Lemma \ref{Lem:coupling} ensures that on the event $\{\tilde\t
> t\}$, we have ${\rm Var}({\bf D}_{t+1}\mid {\mathcal F}_t) \ge \a^2$, where
$\a^2:=\left(1- \frac{1}{{\mathcal Q}^2}\right)\frac{(1-\d')^2}{4{\mathcal Q}^3}$. By
\cite[Proposition 17.20]{LevinPeresWilmer}, for every $u > 12/\a^2$,
\begin{equation}\label{Eq:RW}
\Pr(\tilde \t > u) \le \frac{4 {\bf D}_0}{\a\sqrt{u}}.
\end{equation}
Recall that $T = \ceil{\b n}$ and $D_0 \le \sqrt{{\mathcal Q}} R\sqrt{n}$. As
long as $\b$ is large enough, we may apply \eqref{Eq:RW} with $u =
T$ to get
\[ \Pr_{\sigma, \tilde \sigma}(\t > T) \le \frac{16 {\mathcal Q}^2 R}{(1-\d')\sqrt{\b}} + \Pr({\mathcal A}_T^{\sf c}) \le \frac{32 {\mathcal Q}^2 R}{\sqrt{\b}} \]
for all large enough $n$, as desired.
\end{proof}
\section{Proof of Theorem \ref{Thm:main} (\ref{Eq:LB})}\label{Sec:LB}
The lower bound is proved essentially by showing that the estimates of
Lemmas \ref{Lem:Burn} and \ref{Lem:2ndDE} cannot be improved. Let
$a_1, a_2, \ldots , a_k$ be a set of generators for $G$. Let $\sigma_\star
\in {\mathcal S}$ be the configuration given by
\[ \sigma_\star(i) = \begin{cases}
a_i & \text{if $i \le k$,} \\
{\rm id} & \text{otherwise}.
\end{cases} \]
We will analyze the Markov chain started at $\sigma_\star$ and show that
it does not mix too fast.
Recall from Section \ref{Sec:UB} the notation
\[ n^{\{\id\}}_{non}(\sigma) = |\{ i \in [n] : \sigma(i) \ne {\rm id} \}| \]
for the number of sites in $\sigma$ that do not contain the identity. We
first show that if we run the chain for slightly less than $n \log n$
steps, most of the sites will still contain the identity.
\begin{lemma} \label{Lem:Burn-lower}
Let $T := \floor{n \log n - Rn}$. Then,
\[ \Pr_{\sigma_\star}\left( n^{\{\id\}}_{non}(\sigma_T) \ge \frac{n}{3} \right) \le \frac{4{\mathcal Q}^2}{R^2}. \]
\end{lemma}
\begin{proof}
Recall that in one step of our Markov chain, we pick two indices $i,
j \in [n]$ and replace $\sigma(i)$ with $\sigma(i) \cdot \sigma(j)$ or $\sigma(i) \cdot \sigma(j)^{-1}$.
The only way for $n^{\{\id\}}_{non}(\sigma_t)$ to increase after this step is
if $\sigma(j) \ne {\rm id}$. Thus,
\begin{equation} \label{Eq:nnon}
\Pr( n^{\{\id\}}_{non}(\sigma_{t+1}) = n^{\{\id\}}_{non}(\sigma_t) + 1 \mid n^{\{\id\}}_{non}(\sigma_t)) \le \frac{n^{\{\id\}}_{non}(\sigma_t)}{n}.
\end{equation}
Let $\t := \min\{ t \ge 0 : n^{\{\id\}}_{non}(\sigma_t) \ge \frac{n}{3} \}$ be the
first time that $n^{\{\id\}}_{non}(\sigma_t)$ is at least $\frac{n}{3}$. We have
that $n^{\{\id\}}_{non}(\sigma_\star) = k$, so it follows from \eqref{Eq:nnon} that
$\t$ stochastically dominates the sum
\[ G := \sum_{s = k}^{\floor{n/3}} G_s, \]
where the $G_s$ are independent geometric variables with success
probability $\frac{s}{n}$. Note that we have the bounds
\[ {\mathbb E} G = \sum_{s = k}^{\floor{n/3}} \frac{n}{s} \ge n \left( \log \floor{\frac{n}{3}} - \log k \right), \qquad {\rm Var}(G) = \sum_{s = k}^{\floor{n/3}} \frac{n(n-s)}{s^2} \le n^2. \]
Hence,
\begin{align*}
\Pr(\t < T) &\;\le\; \Pr(G < T) \;\le\; \Pr(G < {\mathbb E} G + n\log(3k) - Rn ) \\
&\;\le\; \frac{n^2}{n^2(R - \log(3k))^2} \le \frac{4}{R^2}
\end{align*}
for $R \ge 2 {\mathcal Q} \ge 2 \log (3k)$. On the other hand, the bound
claimed in the lemma is trivial for $R \le 2 {\mathcal Q}$, so we have
completed the proof.
\end{proof}
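The expectation and variance bounds on $G$ used in the proof are easy to check numerically; a minimal sketch (the values $n=1000$, $k=3$ are illustrative choices, not taken from the paper):

```python
import math

def geometric_sum_stats(n, k):
    # E G = sum_{s=k}^{floor(n/3)} n/s and Var G = sum_{s=k}^{floor(n/3)} n(n-s)/s^2
    # for independent geometric variables with success probabilities s/n.
    top = n // 3
    mean = sum(n / s for s in range(k, top + 1))
    var = sum(n * (n - s) / s**2 for s in range(k, top + 1))
    return mean, var

n, k = 1000, 3
mean, var = geometric_sum_stats(n, k)
print(mean >= n * (math.log(n // 3) - math.log(k)))  # True
print(var <= n**2)                                   # True
```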
Next, we show that it really takes about $\frac{1}{2}n \log n$ steps
for the Fourier coefficients $x_\r$ to decay to $O\left(
\frac{1}{\sqrt{n}} \right)$, as suggested by Lemma
\ref{Lem:2ndDE}. Note that it suffices here to analyze the $x_\r$
instead of the $y_{a, \r}$, which simplifies our analysis. Actually,
it suffices to consider (the real part of) the trace of $x_\r$. Here
the orthogonality of characters reads $\frac{1}{{\mathcal Q}}\sum_{a \in G} \Tr
\r(a)=0$, and it takes about $\frac{1}{2}n \log n$ steps for $\Re \Tr
x_\r(t)$ to decay to $O\left( \frac{1}{\sqrt{n}} \right)$.
\begin{lemma} \label{Lem:DE-lower}
Consider any $\r \in G^\ast$ and any $R > 5$. Let $T :=
\floor{\frac{1}{2}n \log n - Rn}$, and suppose that $\sigma \in {\mathcal S}$
satisfies $n^{\{\id\}}_{non}(\sigma) \le \frac{n}{3}$. Then,
\[ \Pr_{\sigma}\left( \|x_\r(\sigma_T)\|_{{\rm HS}} \le \frac{R}{\sqrt{n}} \right) \le \frac{4{\mathcal Q}^2}{R^2}. \]
\end{lemma}
\begin{proof}
Let $z(t) := (1/d_\r) \Tr (x_\r(t)+x_\r(t)^\ast)/2$.
Then, noting that \eqref{Eq:DEfourier} also holds for $x_\r(t)^\ast$ since $x_{\r^\ast}(t)=x_\r(t)^\ast$,
we have
\[ z(t+1)-z(t) = \frac{1}{n-1}\frac{1}{d_\r} \Tr \left(\frac{x_\r(t)+x_\r(t)^\ast}{2}\right)^2-\frac{1}{n}z(t) + M(t+1), \]
where
\[ {\mathbb E}[M(t+1) \mid {\mathcal F}_t]=0 \qquad \text{and} \qquad |M(t)| \le \frac{2{\mathcal Q}}{n}. \]
Here we have
\[\frac{1}{d_\r} \Tr \left(\frac{x_\r(t)+x_\r(t)^\ast}{2}\right)^2 \ge z(t)^2.\]
We compare $z(t)$ to another process $(w(t))_{t \ge 0}$ defined by
$w(0) := \frac{1}{3}$ and
\begin{equation} \label{Eq:w-equation}
w(t+1) := \left(1-\frac{1}{n}\right)w(t) + M(t+1).
\end{equation}
We will show by induction that $z(t) \ge w(t)$ for all $t$. For the
base case, note that since $n^{\{\id\}}_{non}(\sigma) \le \frac{n}{3}$, we have
\[ z(0) = \frac{1}{d_\r}\Re \Tr \sum_{a \in G} \frac{n_a(0)}{n} \cdot \r(a) \ge \frac{2}{3} - \frac{1}{3} = \frac{1}{3}. \]
Suppose now that $z(t) \ge w(t)$. Then,
\begin{align*}
z(t+1) &\ge z(t) + \frac{1}{n-1}z(t)^2-\frac{1}{n}z(t) + M(t+1) \\
&\ge \left(1 - \frac{1}{n}\right) w(t) + M(t+1) = w(t+1),
\end{align*}
completing the induction.
It now suffices to lower bound $w(T)$. To this end, we first note
that applying \eqref{Eq:w-equation} repeatedly and taking
expectations, we obtain
\[ {\mathbb E} w(T) = \left(1 - \frac{1}{n}\right)^T \cdot \frac{1}{3} \ge \frac{e^R}{6\sqrt{n}} \ge \frac{2R}{\sqrt{n}}. \]
In order to bound the variance of $w(T)$, we square
\eqref{Eq:w-equation} and take the expectation; the cross terms
vanish since ${\mathbb E}[M(t+1) \mid {\mathcal F}_t]=0$, and bounding the
resulting geometric series by $n$ gives us
\begin{align*}
{\rm Var}(w(T)) &= {\mathbb E} w(T)^2 - ({\mathbb E} w(T))^2 \\
&\le \left(1 - \frac{1}{n}\right)^{2T} \cdot \frac{1}{9} + n \cdot \left( \frac{2{\mathcal Q}}{n} \right)^2 - \left( \left(1 - \frac{1}{n}\right)^T
\cdot \frac{1}{3} \right)^2 \\
&= \frac{4{\mathcal Q}^2}{n}.
\end{align*}
Then, by Chebyshev's inequality, we have
\begin{align*}
\Pr_{\sigma}\left( \|x_\r(\sigma_T)\|_{{\rm HS}} \le \frac{R}{\sqrt{n}} \right) &\le \Pr\left( z(T) \le \frac{R}{\sqrt{n}} \right) \le \Pr\left( w(T) \le \frac{R}{\sqrt{n}} \right) \\
&\le \frac{4{\mathcal Q}^2/n}{(R/\sqrt{n})^2} = \frac{4{\mathcal Q}^2}{R^2},
\end{align*}
as desired.
\end{proof}
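The chain of inequalities ${\mathbb E} w(T) = (1-\frac1n)^T\cdot\frac13 \ge \frac{e^R}{6\sqrt n} \ge \frac{2R}{\sqrt n}$ is deterministic and can be checked for concrete values; a minimal sketch (the values $n = 2\cdot10^5$, $R = 6$ are illustrative; the lemma requires $R>5$ and $n$ large enough that $T>0$):

```python
import math

n, R = 200_000, 6
T = math.floor(0.5 * n * math.log(n) - R * n)

mean_wT = (1 - 1 / n) ** T / 3           # E w(T), exact from the recursion
lower1 = math.exp(R) / (6 * math.sqrt(n))
lower2 = 2 * R / math.sqrt(n)

print(T > 0 and mean_wT >= lower1 >= lower2)  # True
```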
\begin{proof}[Proof of Theorem \ref{Thm:main} (\ref{Eq:LB})]
Let $T = T_1 + T_2$, where $T_1 := \floor{n \log n - \b n}$ and $T_2
:= \floor{\frac{1}{2} n \log n - \b n}$. Fix any $\r \in G^\ast$. By
Lemma \ref{Lem:Burn-lower} followed by Lemma \ref{Lem:DE-lower}, we
have for large enough $\b$ that
\[ \Pr_{\sigma_\star} \left(\sigma_T \in \Sast{\frac{\b}{\sqrt{n}}} \right) \le \Pr_{\sigma_\star}\left( \|x_\r(\sigma_T)\|_{{\rm HS}} \le \sqrt{\frac{{\mathcal Q}}{d_\r}}\frac{\b}{\sqrt{n}} \right) \le \frac{8{\mathcal Q}^2}{\b^2}. \]
On the other hand, Lemma \ref{Lem:stationary} tells us that
\[ \pi\left({\mathcal S}_\ast\left(\frac{\b}{\sqrt{n}}\right)\right) \ge 1-
\frac{c_G}{\b^2}. \]
Consequently,
\[ d_{\sigma_\star}(T) \ge 1 - \frac{c_G}{\b^2} - \frac{8{\mathcal Q}^2}{\b^2}, \]
which tends to $1$ as $\b \rightarrow \infty$, establishing
\eqref{Eq:LB}.
\end{proof}
\section*{Acknowledgements}
This work was initiated while R.T. and A.Z. were visiting Microsoft
Research in Redmond. They thank Microsoft Research for the
hospitality. R.T. was also visiting the University of Washington in
Seattle and thanks Professor Christopher Hoffman for making his visit
possible. R.T. is supported by JSPS Grant-in-Aid for Young
Scientists (B) 17K14178. A.Z. is supported by a Stanford Graduate Fellowship.
\bibliographystyle{alpha}
% https://arxiv.org/abs/1808.04163
\title{Approximation in FEM, DG and IGA: A Theoretical Comparison}
\begin{abstract}
In this paper we compare approximation properties of degree $p$ spline spaces with different numbers of continuous derivatives. We prove that, for a given space dimension, $\smooth {p-1}$ splines provide better a priori error bounds for the approximation of functions in $H^{p+1}(0,1)$. Our result holds for all practically interesting cases when comparing $\smooth {p-1}$ splines with $\smooth {-1}$ (discontinuous) splines. When comparing $\smooth {p-1}$ splines with $\smooth 0$ splines our proof covers almost all cases for $p\ge 3$, but we can not conclude anything for $p=2$. The results are generalized to the approximation of functions in $H^{q+1}(0,1)$ for $q<p$, to broken Sobolev spaces and to tensor product spaces.
\end{abstract}
\section{Introduction}
The aim of this work is to compare the approximation properties of different piecewise polynomial spaces commonly employed in Galerkin methods for PDEs.
Following the well known Lemmas of C\'ea and Strang such approximation results imply a priori error estimates for these numerical methods.
In particular we consider the tensor product spaces used in Discontinuous Galerkin (DG), Finite Element Methods (FEM) and IsoGeometric Analysis (IGA) that differ only in their global smoothness.
As such our comparison provides an answer to the following question: ``does smoothness impede or favour approximation?''.
It was noticed by Hughes, Cottrell and Bazilevs \cite{Hughes:2005} that smoother spaces have better approximation properties in their numerical tests.
Spline approximation in the IGA setting was first studied by Bazilevs, Beir\~ao da Veiga, Cottrell, Hughes and Sangalli \cite{MR2250029}.
Later, Evans, Bazilevs, Babuska and Hughes \cite{Evans:2009} numerically computed approximation constants and observed that the maximally smooth splines provide better a priori bounds on the approximation error.
Beir\~ao da Veiga, Buffa, Sangalli and Rivas \cite{MR2800710} studied how the approximation depends on the mesh-size, the degree and the global smoothness.
Takacs and Takacs \cite{Takacs:2016} proved an upper bound for the approximation error with an explicit constant.
Recently, Floater and Sande \cite{Floater:2017,Floater:2018} provided optimal constants on which we base our results.
The comparison in this paper is related to the $n$-width theory \cite{Kolmogorov:36,Pinkus:85}, i.e., looking at approximation properties of a space of fixed dimension. Our results can be seen as a partial answer to an $n$-width problem constrained to piecewise polynomial spaces on uniform partitions.
We will first look at the univariate setting before extending the results to general tensor product spaces.
Let $\poly p$ be the space of polynomials of degree at most $p$, and $L^2=L^2(0,1)$ and $H^{q+1}=H^{q+1}(0,1)$ be the standard Sobolev spaces. For a given $n\in \mathbb{N}$ let $I_j$ be the interval $[\frac jn,\frac{j+1}n)$ and define the spline space $\spline p k n$, of degree $p$, smoothness $k$ and on $n$ segments by
\begin{equation}\label{eq:def-spline}
\spline p k n =\big\{f \in\smooth k([0,1]): f|_{I_j} \in\poly p,\, j=0,\dots,n-1\big\}.
\end{equation}
Here $k=-1$ means that jumps are allowed at the internal breakpoints.
Furthermore, let $\proj pkn$ be the $L^2$ projection onto $\spline pkn$ and $\cf pknq$ be the smallest real number such that
\begin{equation}\label{eq:estimate-form}
\norm{f- \proj pkn f}\le \cf pknq \norm{\partial^{q+1} f}
\end{equation}
holds for all $f\in H^{q+1}$. Here $\norm{\cdot}$ denotes the $L^2$ norm.
Finally for $q=p$ we let $\const pkn:=\cf pknp$.
The studied $n$-width problem can then be formulated as follows. Given the space dimension $N$ and Sobolev regularity $q$, find the degree $p$, smoothness $k$, and number of segments $n$ such that the constant $\cf pknq$ is minimized.
Note that for each $N$ only finitely many $(p,k,n)$ fulfill
\begin{equation}\label{eq:spline-dim}
N=\dim\spline p k n =(n-1)(p-k)+p+1.
\end{equation}
It is then possible to try an exhaustive approach. The difficulty of such a strategy is that the constants $\cf pknq$ are solutions of eigenvalue problems that are badly conditioned. Any conclusion based on this strategy would then have to take into consideration the reliability of the method used to compute the constant.
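For instance, the finitely many admissible triples for a given dimension $N$ can be enumerated directly from \eqref{eq:spline-dim}; a minimal sketch:

```python
def spline_dim(p, k, n):
    # dim S_{p,k,n} = (n-1)(p-k) + p + 1, with -1 <= k <= p-1 and n >= 1.
    return (n - 1) * (p - k) + p + 1

def admissible_triples(N):
    # All (p, k, n) whose spline space has dimension exactly N.
    return [(p, k, n)
            for p in range(N)
            for k in range(-1, p)
            for n in range(1, N + 1)
            if spline_dim(p, k, n) == N]

print(all(spline_dim(p, k, n) == 5 for (p, k, n) in admissible_triples(5)))  # True
```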
Our approach is to first provide lower and upper bounds for $\const pkn$ and base the conclusions on provable properties of these bounds.
In particular we compare $\const p{p-1}m$ with $\const pkn$ for $k<p-1$ under the constraint
$$\dim\spline {p} {p-1} {m} = \dim\spline {p} {k} {n}, $$
i.e., for
\begin{equation}\label{eq:m-formula}
m=(n-1)(p-k)+1.
\end{equation}
Note that for a fixed number of segments $n$ we have $\spline p{k_1}n \supseteq \spline p{k_2}n$ whenever $k_1\le k_2$ so that necessarily $\const p{k_1}n \le \const p{k_2}n$ under the same condition. However, for a fixed dimension $N$ the smoother space is defined on a finer partition.
We show in Section \ref{sec:comparison-1d} that $\const p{p-1}m$ is smaller than $\const pkn$ in almost all cases of practical interest for $k=0$ (see Theorem~\ref{thm:fem}) and $k=-1$ (see Theorem~\ref{thm:dg}).
In Section~\ref{sec:low} we extend our results to the case of $p>q$ in \eqref{eq:estimate-form}. Here we compare the approximation of maximally smooth spline spaces of a ``too high degree'', $\spline {p} {p-1} {m}$, with spline spaces of lower smoothness, $\spline {q} {k} {n}$, under the constraint $\dim\spline {p} {p-1} {m} = \dim\spline {q} {k} {n}$. The main result of this section is contained in Theorem~\ref{thm:lower}. A comparison in the case of Broken Sobolev spaces is then performed in Section~\ref{sec:broken} and extensions to tensor product cases are considered in Section~\ref{sec:tens}.
The fact that smoother spaces provide better approximation may be surprising to readers not familiar with $n$-width theory; indeed, it could seem reasonable that smoother spaces are more ``rigid'' and thus cannot approximate functions that are less regular.
This is not true: for instance it was shown by Kolmogorov \cite{Kolmogorov:36} that
$$
\SPAN \{1,\cos(\pi x),\ldots,\cos((N-1)\pi x)\}
$$
is optimal for $H^1$ in the $n$-width sense, meaning that no other $N$-dimensional subspace of $L^2$ can provide a better a priori error estimate for $H^1$ functions.
Based on results of Melkman and Micchelli \cite{Melkman:78} it was proved in \cite{Floater:2017} that for all $q$ and $N$ there exists an optimal $\smooth {\ell (q+1)-2}$ spline space of degree $\ell (q+1)-1$ for any $\ell=1,2,\dots$. In fact, for $q=0$ and with even degrees $p$, the knots of the optimal spline spaces are uniform and so they are subspaces of $\spline p{p-1}n$.
\section{Upper and lower bounds for $\const pkn$}\label{sec:univ}
We prove the following bounds on the best constants $\const {p}{k}n$, $k=-1,0, \ldots, p-1$.
\begin{theorem}\label{thm:bounds}
For all $p\ge 0$, $k\in\{-1,0,\ldots,p-1\}$ and $n\ge 1$ we have
\begin{align}
\label{est:-1} \frac{(p+1)!}{(2p+2)!\sqrt{2p+3}} n^{-p-1}\le &\const {p}{k}n \le (n\pi)^{-p-1}.
\end{align}
\end{theorem}
The above inequalities are shown in the following lemmas.
\begin{lemma}\label{lem:legendre-poly} For discontinuous spline approximation we have
$$
\frac{(p+1)!}{(2p+2)!\sqrt{2p+3}}n^{-p-1} \le \const p{-1}n.
$$
\end{lemma}
\begin{proof}
It is enough to show that there exists an $f\in H^{p+1}$ such that
$$
\norm{f-\proj p{-1}n f}\ge \frac{(p+1)!}{(2p+2)!\sqrt{2p+3}} n^{-p-1} \norm{\partial^{p+1}f}.
$$
This is the case for $f(x)=x^{p+1}$.
The projection $\proj p{-1}n$ acts independently on each $I_j$, $j=0,\dots,n-1$ and on $I_j$ we have
$$
x^{p+1}= \sum_{i=0}^{p+1} c_{i,j}\,\ell_i (n x-j ) ,$$
where $\ell_i$ is the $i$-th Legendre polynomial on $[0,1]$.
Since $\ell_{p+1}(nx-j)$ is orthogonal to the polynomials of degree $p$ on $I_j$ we have
$$x^{p+1} -\proj p{-1}n x^{p+1} = \sum_{j=0}^{n-1} c_{p+1,j} \ell_{p+1}( n x-j ) .
$$
Since $$
\norm{\ell_{p+1}}=(\sqrt{2p+3})^{-1}\qquad \text{and}\qquad \norm{\partial^{p+1}\ell_{p+1}}=\frac{(2p+2)!}{(p+1)!},
$$
by taking the derivative of $\ell_{p+1}( n x-j )$ we obtain
$$
\norm{x^{p+1}-\proj p{-1}n x^{p+1}}_{I_j}= \frac{(p+1)!}{(2p+2)!\sqrt{2p+3}}n^{-p-1} \norm{\partial^{p+1} x^{p+1}}_{I_j}.
$$
Summing over $j$ the squares of the left and right hand sides yields the result.
\end{proof}
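The norm $\norm{\ell_{p+1}} = (\sqrt{2p+3})^{-1}$ used above can be confirmed numerically via the Bonnet recurrence and a midpoint quadrature; a minimal sketch (grid size and tolerance are ad hoc choices):

```python
def shifted_legendre(m, x):
    # Evaluate ell_m(x) = P_m(2x - 1) on [0, 1] via the Bonnet recurrence
    # (j+1) P_{j+1} = (2j+1) t P_j - j P_{j-1}.
    t = 2.0 * x - 1.0
    if m == 0:
        return 1.0
    p_prev, p = 1.0, t
    for j in range(1, m):
        p_prev, p = p, ((2 * j + 1) * t * p - j * p_prev) / (j + 1)
    return p

def norm_sq(m, N=20_000):
    # Composite midpoint rule for ||ell_m||^2 = int_0^1 P_m(2x - 1)^2 dx.
    h = 1.0 / N
    return h * sum(shifted_legendre(m, (i + 0.5) * h) ** 2 for i in range(N))

print(all(abs(norm_sq(m) - 1 / (2 * m + 1)) < 1e-6 for m in range(6)))  # True
```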
\begin{lemma}\label{lem:univk}
For maximally smooth spline approximation we have
$$\const {p}{p-1}n \le (n\pi)^{-p-1}.$$
\end{lemma}
\begin{proof}
This is a corollary of the results in \cite{Floater:2018}.
Let
$$E^{p}=\{f\in H^{p+1}:\quad \partial^{s} f(0)=\partial^{s} f(1)=0,\quad 0\leq s< p,\quad s+p\text{ is odd} \}.$$
Observe that for $p$ odd, $E^p$ coincides with the non-standard Sobolev space $H^{p+1}_0$ defined in \cite{Floater:2018}, and for $p$ even it coincides with the space $H^{p+1}_1$ in that paper.
Then \cite[Theorems 1 and 2]{Floater:2018} states that for all $v\in E^p$
\begin{equation}\label{eq:Floater-result}
\norm{v-\Pi_E v}\le \Big(\frac{1}{n\pi}\Big)^{p+1}\norm{\partial^{p+1}v},
\end{equation}
where $\Pi_E :E^p\rightarrow E^p\cap \spline p{p-1}n$ is the $L^2$ projection.
Note that the $n$ in \cite{Floater:2018} is the space dimension, what we call $N$, and not the number of segments.
Given $f\in H^{p+1}$ let $g\in \poly p$ be a polynomial such that $f-g\in E^p$. In other words, for $0\le s < p$ with $s+p$ odd, we have
$$
\left\{\begin{aligned}
\partial^s g(0)&=\partial^s f(0)\\
\partial^s g(1)&=\partial^s f(1).
\end{aligned}\right .
$$
This $g$ exists according to Lemmas \ref{lem:poly-interp} and \ref{lem:poly-interp2} in the appendix.
Then, since $g\in\spline p{p-1}n$ and $f-g\in E^p$ we have
$$
\begin{aligned}
\norm{f-\proj p{p-1}n f}&= \norm{(f-g) - \proj p{p-1}n (f-g)}\\
&\le \norm{(f-g) - \Pi_E (f-g)}\\
&\le \Big(\frac{1}{n\pi}\Big)^{p+1}\norm{\partial^{p+1}(f-g)}\\
&= \Big(\frac{1}{n\pi}\Big)^{p+1}\norm{\partial^{p+1}f}.
\end{aligned}
$$
\end{proof}
Theorem \ref{thm:bounds} now follows from the observation that $\spline p {k+1}n\subset\spline p {k}n$ for all $k=-1,0,\ldots,p-2$ and so $\const pkn\le \const p{k+1}n$.
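Consistency of the two bounds (lower $\le$ upper for all $p$ and $n$) can be checked numerically; a minimal sketch:

```python
import math

def lower_bound(p, n):
    # (p+1)! / ((2p+2)! sqrt(2p+3)) * n^{-(p+1)}
    return (math.factorial(p + 1)
            / (math.factorial(2 * p + 2) * math.sqrt(2 * p + 3))
            * n ** -(p + 1))

def upper_bound(p, n):
    # (n pi)^{-(p+1)}
    return (n * math.pi) ** -(p + 1)

print(all(lower_bound(p, n) <= upper_bound(p, n)
          for p in range(11) for n in range(1, 11)))  # True
```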
\section{Univariate comparisons}
\label{sec:comparison-1d}
Here we compare the space of maximally smooth splines, $\spline p{p-1}m$, commonly used in IGA, with the space $\spline pkn$ of smoothness $k<p-1$ where $m$ depends on $n$ as in \eqref{eq:m-formula}, i.e., such that $\dim\spline p{p-1}m=\dim\spline pkn$. This means that the smoother space is defined on a finer grid. See Figure \ref{fig:deg1-3} for an example of this. Note that the case $k=p-1$ and the case $n=1$ are uninteresting since we would then be comparing $\spline pkn$ with itself.
The estimates in Section \ref{sec:univ} are sharp enough to prove that smooth splines will eventually provide a better approximation in the number of degrees of freedom.
This is stated in the following theorem.
More precise statements for the IGA-FEM comparison ($k=0$) and the IGA-DG comparison ($k=-1$) are contained in subsections 3.1 and 3.2.
\begin{figure}
\includegraphics{deg1.pdf} \includegraphics{deg3.pdf
\caption{Example of pairs of functions in $\spline p{-1}n$ (blue) and $\spline p{p-1}m$ (red) for $p=1$ and $p=3$, $n=3$. Note how the maximally smooth spline space is defined on a finer grid.}\label{fig:deg1-3}
\end{figure}
\begin{theorem}\label{thm:comparison-general}
For all $k\geq -1$ and $n\ge 2$ there exists $\bar p$ such that for all $p\ge \bar p$
$$
\const p{p-1}m<\const pkn,
$$
where $m=(n-1)(p-k)+1$.
\end{theorem}
This theorem follows from studying the bounds in Theorem \ref{thm:bounds}, which is done in Lemma \ref{thm:ratio} and Proposition \ref{prop:B}.
\begin{lemma}\label{thm:ratio}
For all $p\ge 0$, $k\in\{-1,\dots,p-1\}$ and $n,m\ge 1$ we have
\begin{equation}\label{eq:ratio}
\frac{\const p{p-1}m}{\const pkn} \le \Big(\frac{4}{e\pi}\Big)^{p+1}\Big(\frac{n}{m}\Big)^{p+1} (p+1)^{p+1}\sqrt{4p+6}.
\end{equation}
\end{lemma}
\begin{proof}
From Theorem \ref{thm:bounds} we have for $k=-1,\dots,p-1$, that
\begin{equation*}
\frac{\const p{p-1} m}{\const pkn} \le \Big(\frac{n}{m\pi}\Big)^{p+1} \frac{(2p+2)!}{(p+1)!}\sqrt{2p+3}.
\end{equation*}
Now, using the error bounds of Stirling's approximation~\cite{Robbins:55}
\begin{equation}\label{ineq:Stirling}
\sqrt{2\pi}r^{r+\frac{1}{2}}e^{-r}e^{\frac{1}{12r+1}}\le r! \le \sqrt{2\pi}r^{r+\frac{1}{2}}e^{-r}e^{\frac{1}{12r}},
\end{equation}
we find that
\begin{align*}
\frac{(2p+2)!}{(p+1)!} &\le \frac{\sqrt{2\pi}(2p+2)^{2(p+1)+\frac{1}{2}}e^{-2(p+1)}e^{\frac{1}{12(2p+2)}}}{\sqrt{2\pi}(p+1)^{p+1+\frac{1}{2}}e^{-p-1}e^{\frac{1}{12(p+1)+1}}},\\
&=2^{2(p+1)+\frac{1}{2}}
\frac{(p+1)^{2(p+1)+\frac{1}{2}}}{(p+1)^{p+1+\frac{1}{2}}}
\frac{e^{-2(p+1)}}{e^{-p-1}}
\frac{e^{\frac{1}{12(2p+2)}}}{e^{\frac{1}{12(p+1)+1}}},\\
&=4^{p+1}\sqrt{2}(p+1)^{p+1} e^{-p-1}
e^{\frac{1}{24(p+1)}-\frac{1}{12(p+1)+1}},\\
&=\Big(\frac{4}{e}\Big)^{p+1}\sqrt{2}(p+1)^{p+1}e^{\frac{1-12(p+1)}{24(p+1)(1+12(p+1))}},\\
&\le \Big(\frac{4}{e}\Big)^{p+1}\sqrt{2}(p+1)^{p+1},
\end{align*}
and the result follows.
\end{proof}
For $m$ as in \eqref{eq:m-formula}, we let $\rt pkn$ be the bound in \eqref{eq:ratio}, now given as
\begin{align}\label{eq:ratio-m}
\rt pkn=(\B pkn)^{p+1} \sqrt{4p+6}\qquad\text{with}\\
\label{eq:base-m}
\B pkn = \frac{4}{e\pi} \frac{n (p+1)}{(p-k)(n-1)+1} .
\end{align}
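These two quantities are straightforward to evaluate numerically, which makes it easy to cross-check the monotonicity claims and tabulated values below; a minimal Python sketch:

```python
import math

def B(p, k, n):
    # B_{p,k,n} = (4/(e*pi)) * n(p+1) / ((p-k)(n-1) + 1), cf. (eq:base-m).
    return 4 / (math.e * math.pi) * n * (p + 1) / ((p - k) * (n - 1) + 1)

def rho(p, k, n):
    # rho_{p,k,n} = B_{p,k,n}^{p+1} * sqrt(4p+6), cf. (eq:ratio-m).
    return B(p, k, n) ** (p + 1) * math.sqrt(4 * p + 6)

# B is strictly decreasing in n; for k = 0, n = 2 it is constant in p,
# the boundary case (k+1)(n-1) = 1 of the proposition below.
print(all(B(5, 0, n + 1) < B(5, 0, n) for n in range(2, 50)))  # True
print(abs(B(6, 0, 2) - B(5, 0, 2)) < 1e-12)                    # True
```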
The next step of our analysis is the study of $\B pkn$.
\begin{proposition}\label{prop:B}
For $-1\leq k <p-1$ and $n\ge 2$ the following statements hold
\begin{enumerate}
\item \label{p-lim} for fixed $n$ and $k$, $$\lim_{p\rightarrow\infty} B_{p,k,n}= \frac {4 } {e\pi}\frac n{n-1}<1;$$
\item \label{n-lim} for fixed $p$ and $k$, $$\lim_{n\rightarrow\infty} B_{p,k,n}= \frac {4 } {e\pi }\frac{p+1}{p-k};$$
\item\label{n-decreasing} $B_{p,k,n}$ is strictly decreasing in $n$;
\item\label{p-decreasing} $B_{p,k,n}$ is decreasing in $p$ for $k\ge 0$.
\end{enumerate}
\end{proposition}
\begin{proof}
The limits in points \ref{p-lim} and \ref{n-lim} are straightforward.
For \ref{n-decreasing} it is sufficient to show that $\B pk{n+1}<\B pkn$, i.e.,
\begin{align*}
&\frac{n+1}{(p-k)n+ 1}<\frac{n}{(p-k)(n-1) + 1},
\end{align*}
which is equivalent to $k<p-1$, since the denominators are positive.
For \ref{p-decreasing} we prove that $\B {p+1}kn \leq \B pkn$, i.e.,
\begin{align*}
\frac{p+2}{(p-k+1)(n-1)+1} \leq \frac{p+1}{(p-k)(n-1)+1}.
\end{align*}
This is equivalent to $(k+1)(n-1)\geq 1$, which holds for $n\geq 2$ and $k\geq 0$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:comparison-general}]
From point \ref{p-lim} of Proposition \ref{prop:B} we deduce that there exist $\hat p$ and $t<1$ such that $\B pkn \le t$ for all $p>\hat p$, and hence
$$
\rt pkn = (\B pkn)^{p+1}\sqrt{4p+6}\le t^{p+1}\sqrt{4p+6}.
$$
Thus there exists $\bar p\ge \hat p$ such that $\rt pkn <1$ for all $p>\bar p$.
\end{proof}
\begin{remark}\label{rem:exp}
Using Proposition \ref{prop:B} we can obtain an estimate of how much better the approximation with smooth splines is in Theorem \ref{thm:comparison-general}.
For a fixed $k$, and any $\bar p,\bar n$ satisfying $\B {\bar p}k{\bar n}\leq \frac{4}{e\pi}\gamma<1$ we have
\begin{equation}\label{ineq:exp}
\rt pkn \leq \Big ( \frac 4{e\pi}\gamma \Big)^{p+1} \sqrt{4p+6},\qquad \forall n\geq\bar n, \, \forall p\geq\bar p,
\end{equation}
i.e., $\rt pkn$ becomes exponentially small as $p$ increases.
The level set $\B pkn=\frac{4}{e\pi}\gamma$ is the hyperbola
\begin{align*}
0 = \big(n -\frac {\gamma}{\gamma-1} \big)\big(p-\frac{\gamma k+1}{\gamma-1} \big) + \frac{\gamma(\gamma-k-2)}{(\gamma-1)^2}
\end{align*}
and has asymptotes
\begin{align*}
p= \frac{\gamma k+1}{\gamma-1}, \qquad n=\frac \gamma{\gamma-1}.
\end{align*}
This tells us that for each $\bar n\geq \frac \gamma{\gamma-1}$ there is a corresponding $\bar p$ such that we obtain the exponential improvement in \eqref{ineq:exp}.
\end{remark}
\begin{corollary}\label{thm:decreasing-n}
For all $p\ge1$ and $k=-1,\dots,p-2$, the ratio $\rt pkn$ in \eqref{eq:ratio-m} is strictly decreasing in $n$.
\end{corollary}
\begin{proof}
By definition
$$
\rt pkn=(\B pkn)^{p+1} \sqrt{4p+6}
$$
and $(\B pkn)^{p+1}$ is strictly decreasing in $n$ by point \ref{n-decreasing} of Proposition \ref{prop:B}.
\end{proof}
\begin{corollary}\label{thm:decreasing-p}
For all $k\ge 0$, $\rt pkn$ is strictly decreasing in $p$ for all $p\ge \bar p$ where $\bar p$ is such that $\rt {\bar p}kn\le1$.
\end{corollary}
\begin{proof}
From point \ref{p-decreasing} of Proposition \ref{prop:B}, $\B pkn$ is decreasing in $p$. Moreover, $(4p+6)^{1/(2p+2)}$ is strictly decreasing in $p$. Thus $(\rt pkn)^{1/(p+1)}$ is also strictly decreasing in $p$ and the result follows.
\end{proof}
\begin{remark}\label{rem:quarter}
For fixed $k\geq 0$, if $\bar p$ and $\bar n$ are such that $\rt{\bar p}k{\bar n}<1$, then from the above corollaries we find that
$$\const p{ p-1} m < \const p{k}{ n},\qquad \forall p\ge \bar p,\quad \forall n \ge \bar n.$$
This means that there cannot be isolated values for which this inequality holds.
A similar result is true for $k=-1$, but it requires a more technical argument that is postponed until later.
\end{remark}
\subsection{IGA-FEM comparison}
\begin{theorem}[IGA-FEM comparison]\label{thm:fem}
For $p\geq 3$ and sufficiently large $n$, more precisely
$$
\begin{aligned}
&n\geq 7 &&\text{for} &&p=3,
\\&n\geq 4 &&\text{for} &&p\in\{4,5\},
\\&n\geq 3 &&\text{for} &&p\in\{6,...,37\},
\\&n\geq 2 &&\text{for} &&p\ge 38,
\end{aligned}
$$
we have
$$
\const p{p-1}m<\const p0n.
$$
\end{theorem}
Note that for $n=1$, $p=0$ or $p=1$ the spaces are the same and $\const p{p-1}m=\const p0n$.
Note further that no conclusion can be drawn for $p=2$. Indeed we have
$$
\rt 20n=\Big(\frac {4}{e\pi}\frac{3n}{2n-1}\Big)^3 \sqrt{14} >\Big(\frac {6}{e\pi}\Big)^3 \sqrt{14}> 1, \quad \forall n\ge 2.
$$
All cases are summarized in Fig.~\ref{fig:iga-fem}.
\begin{proof}
Using Corollary \ref{thm:decreasing-n} and Corollary \ref{thm:decreasing-p} as explained in Remark \ref{rem:quarter} it is enough to show that $\rt {38}0{2}$, $\rt 603$, $\rt 404$ and $\rt 307$ are less than $1$.
We have
\begin{align*}
\rt {38}02 &= \Big(\frac8{e\pi }\Big)^{39}\sqrt{158} = 0.9851\ldots\\
\rt 603&= \Big(\frac4{e\pi }\frac {21}{13} \Big)^{7}\sqrt{30} = 0.7776\ldots\\
\rt 404&= \Big(\frac4{e\pi }\frac {20}{13} \Big)^{5}\sqrt{22} = 0.9114\ldots\\
\rt 307&= \Big(\frac4{e\pi }\frac {28}{19} \Big)^{4}\sqrt{18} = 0.9632\ldots
\end{align*}
\end{proof}
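The four values in the proof, and the inconclusive case $p=2$, can be reproduced from \eqref{eq:ratio-m} with a few lines of code; a minimal sketch, redefining $\rt pkn$ locally:

```python
import math

def rho(p, k, n):
    # rho_{p,k,n} = ((4/(e*pi)) * n(p+1)/((p-k)(n-1)+1))^{p+1} * sqrt(4p+6)
    base = 4 / (math.e * math.pi) * n * (p + 1) / ((p - k) * (n - 1) + 1)
    return base ** (p + 1) * math.sqrt(4 * p + 6)

print(all(rho(p, 0, n) < 1 for p, n in [(38, 2), (6, 3), (4, 4), (3, 7)]))  # True
print(rho(2, 0, 100) > 1)  # True: the estimate is inconclusive for p = 2
```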
\begin{figure}
\begin{center}
\begin{tikzpicture}
\fill[purple!15] (0,0)--(0,1)--(1,1)--(1,7)--(2,7)--(2,1) --(9.5,1)--(9.5,0);
\fill[white] (0,1) rectangle (1,6.8);
\fill[blue!50] (3,6)--(4,6)--(4,3)--(6,3)--(6,2)--(8.5,2)--(8.5,1)--(9.5,1)--(9.5,7)--(3,7);
\fill[red!50] (2,1) -- (8.5,1)--(8.5,2)--(6,2)--(6,3)--(4,3)--(4,6)--(3,6)--(3,7)--(2,7);
\draw[gray] (0,0) grid (7,7);
\begin{scope}[xshift=7.5cm]
\draw[gray] (-.9,0) grid (2,7);
\end{scope}
\draw[->] (-.1,0)--(9.6,0) node[below] {$p$};
\draw[->] (0,-.1)--(0,7.1) node[left] {$n$};
\node at (4,.5) {\large SAME SPACE};
\node[rotate=-90] at (.5,3) {\large FEM=$\mathbb{R}$};
\foreach \x[evaluate=\x as \xp using \x+.1] in {7.1,7.3,7.5} { \fill[white] (\x,-.1) rectangle (\xp,7.1); }
\foreach \p[evaluate=\p as \l using \p+.5] in {0,1,...,6} {\draw (\l,.1)--(\l,-.1) node[below] {$\p$};}
\foreach \n[evaluate=\n as \l using \n-.5] in {1,2,...,7} {\draw (.1,\l)--(-.1,\l) node[left] {$\n$};}
\foreach \p/\l in {37/8, 38/9} {\draw (\l,.1)--(\l,-.1) node[below] {$\p$};}
\node at (5.5,4.5) {\large $\displaystyle\frac {\const p{p-1}m}{\const p0n}<1$};
\node at (3.5,6.5) {\Large $\color{green}\checkmark$};
\node at (4.5,3.5) {\Large $\color{green}\checkmark$};
\node at (6.5,2.5) {\Large $\color{green}\checkmark$};
\node at (9,1.5) {\Large $\color{green}\checkmark$};
\end{tikzpicture}
\end{center}
\caption{The blue area indicates for which $p$ and $n$ we can conclude that IGA approximation is better than FEM approximation. The red area indicates where no conclusion can be obtained from the estimate. The two spaces coincide in the pink area.}\label{fig:iga-fem}
\end{figure}
\subsection{IGA-DG comparison}\label{sec:dg}
Similarly to the previous subsection we note that for $n=1$ or $p=0$ the spaces are the same and $\const p{p-1}m=\const p{-1}n$.
\begin{lemma}\label{thm:decreasing-p-1}
For $n=2$ and $p\ge 22$, as well as for $n=3$ and $p\ge 2$, the function $\rt p{-1}n$ is strictly decreasing in $p$.
\end{lemma}
\begin{proof}
First note that $\rt p{-1}n$ is decreasing whenever $(\rt p{-1}n)^2$ is decreasing.
We now let $s=p+1$, compute the derivative of $(\rt {s-1}{-1}n)^2$ with respect to $s$, and show that it is negative.
$$
\begin{aligned}
\partial_s&( \rt {s-1}{-1}n)^2= \underbrace{\frac 4{ns-s+1}\Big(\frac 4{e\pi} \frac{ns}{ns-s+1}\Big )^{2s} }_{> 0}\\
&\Big( 1-2 s^2 (n-1) + \underbrace{(1+2s)(ns-s+1)}_{>0} \underbrace{\ln \big(\frac 4{\pi} \frac {ns}{ns-s+1} \big)}_{\leq L} \Big),
\end{aligned}
$$
where $L=\ln \big(\frac 4{\pi} \frac {n}{n-1} \big)<1$ is an upper bound on the logarithm.
It follows that $\partial_s \rt {s-1}{-1}n <0$ if
$$
2(n-1)(L-1)s^2 + (n+1)L s + (L+1)<0,
$$
i.e., for
$$
s> \frac { -(n+1)L -\sqrt{(n+1)^2L^2-8(L^2-1)(n-1)} }{4(n-1)(L-1)}.
$$
For $n=2$ we have $L=\ln\frac 8\pi < 0.935$ and $\rt p{-1}2$ is strictly decreasing for
$$
p=s-1\ge \frac {3L+\sqrt{L^2+8}}{4(1-L)}-1\approx 21.14\ldots.
$$
For $n=3$ we have $L=\ln\frac 6\pi < 0.648$ and $\rt p{-1}3$ is strictly decreasing for
$$
p=s-1\ge\frac12\frac{1+L}{1-L}-1\approx 1.33\ldots.
$$
\end{proof}
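The two monotonicity claims can be cross-checked numerically over a range of degrees; a minimal sketch:

```python
import math

def rho(p, k, n):
    base = 4 / (math.e * math.pi) * n * (p + 1) / ((p - k) * (n - 1) + 1)
    return base ** (p + 1) * math.sqrt(4 * p + 6)

# Strictly decreasing in p for (n = 2, p >= 22) and for (n = 3, p >= 2):
print(all(rho(p + 1, -1, 2) < rho(p, -1, 2) for p in range(22, 80)))  # True
print(all(rho(p + 1, -1, 3) < rho(p, -1, 3) for p in range(2, 80)))   # True
```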
\begin{theorem}[IGA-DG comparison]\label{thm:dg}
For
$$
\begin{aligned}
&n\ge 3 &&\text{and}&&p\in\{1,\dots,17\},
\\&n\ge 2 &&\text{and}&&p\ge 18,
\end{aligned}$$
we have
$$
\const p{p-1}m<\const p{-1}n.
$$
\end{theorem}
\begin{proof}
Using Corollary \ref{thm:decreasing-n} and Lemma \ref{thm:decreasing-p-1} it is enough to show that $\rt 1{-1}3$, $\rt 2{-1}3$ and $\rt {22}{-1}2$ are less than $1$ to cover all cases except $\rt {18}{-1}2$, $\rt {19}{-1}2$, $\rt {20}{-1}2$ and $\rt {21}{-1}2$; the latter are checked directly.
We have
\begin{align*}
\rt 1{-1}3&=\Big(\frac4{e\pi }\frac 65 \Big)^{2}\sqrt{10} = 0.9990\ldots\\
\rt 2{-1}3&=\Big(\frac {4}{e\pi }\frac 97 \Big)^{3}\sqrt{14} = 0.8172\ldots\\
\rt {18}{-1}2&=\Big(\frac4{e\pi }\frac {19}{10} \Big)^{19}\sqrt{ 78} = 0.9639 \ldots\\
\rt {19}{-1}2&=\Big(\frac4{e\pi }\frac {40}{21} \Big)^{20}\sqrt{ 82} = 0.9247 \ldots\\
\rt {20}{-1}2&=\Big(\frac4{e\pi }\frac {21}{11} \Big)^{21}\sqrt{ 86} = 0.8862 \ldots\\
\rt {21}{-1}2&=\Big(\frac4{e\pi }\frac {44}{23} \Big)^{22}\sqrt{ 90} = 0.8484 \ldots\\
\rt {22}{-1}2&=\Big(\frac4{e\pi }\frac {23}{12} \Big)^{23}\sqrt{ 94} = 0.8115 \ldots
\end{align*}
\end{proof}
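The seven numerical values in the proof can be reproduced in a few lines, using the closed form $\rt p{-1}n=\big(\tfrac4{e\pi}\tfrac{n(p+1)}{(p+1)(n-1)+1}\big)^{p+1}\sqrt{4p+6}$ that the displayed values instantiate (the function name `rho` is our restatement):

```python
import math

def rho(p, n):
    # our restatement of the ratio bound for the DG case k = -1
    B = 4 / (math.e * math.pi) * n * (p + 1) / ((p + 1) * (n - 1) + 1)
    return B ** (p + 1) * math.sqrt(4 * p + 6)

cases = {(1, 3): 0.9990, (2, 3): 0.8172, (18, 2): 0.9639, (19, 2): 0.9247,
         (20, 2): 0.8862, (21, 2): 0.8484, (22, 2): 0.8115}
for (p, n), v in cases.items():
    assert abs(rho(p, n) - v) < 1e-3 and rho(p, n) < 1
print("all seven values are below 1")
```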
Note that nothing can be concluded for $n=2$ and $p\in\{1,\dots,17\}$, since the estimate gives $\rt p{-1}2>1$ in these cases.
All cases are summarized in Fig.~\ref{fig:iga-dg}.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\fill[purple!15] (0,0)--(0,7)--(1,7)--(1,1)--(12,1)--(12,0);
\fill[blue!50] (1,2)--(6,2)--(6,1)--(12,1)--(12,7)--(1,7);
\fill[red!50] (1,1) rectangle(6,2);
\draw[gray] (0,0) grid (12,7);
\draw[->] (-.1,0)--(12.1,0) node[below] {$p$};
\draw[->] (0,-.1)--(0,7.1) node[left] {$n$};
\node at (2,.5) {\large SAME SPACE};
\node at (8.5,4) {\large $\displaystyle\frac {\const p{p-1}m}{\const p{-1}n}<1$};
\foreach \p[evaluate=\p as \l using \p+.5] in {0,1,...,3} {\draw (\l,.1)--(\l,-.1) node[below] {$\p$};}
\foreach \n[evaluate=\n as \l using \n-.5] in {1,2,...,6} {\draw (.1,\l)--(-.1,\l) node[left] {$\n$};}
\foreach \x[evaluate=\x as \xp using \x+.1] in {4.3,4.5,4.7} { \fill[white] (\x,-.1) rectangle (\xp,7.1); }
\foreach \p/\l in {17/5.5,18/6.5,19/7.5,20/8.5,21/9.5,22/10.5,23/11.5} {\draw (\l,.1)--(\l,-.1) node[below] {$\p$};}
\node at (1.5,2.5) {\Large $\color{green}\checkmark$};
\node at (2.5,2.5) {\Large $\color{green}\checkmark$};
\node at (6.5,1.5) {\Large $\color{green}\checkmark$};
\node at (7.5,1.5) {\Large $\color{green}\checkmark$};
\node at (8.5,1.5) {\Large $\color{green}\checkmark$};
\node at (9.5,1.5) {\Large $\color{green}\checkmark$};
\node at (10.5,1.5) {\Large $\color{green}\checkmark$};
\end{tikzpicture}
\end{center}
\caption{The blue area indicates for which $p$ and $n$ we can conclude that IGA approximation is better than DG approximation. The red area indicates where no conclusion can be obtained from the estimate. The two spaces coincide in the pink area.}\label{fig:iga-dg}
\end{figure}
\section{Lower order Sobolev spaces}
\label{sec:low}
In this section we consider an approximand $f$ in $H^{q+1}$ and compare the approximation by smooth splines of degree $p>q$, $\spline p{p-1}m$, with that by $\smooth k$ splines of degree $q$, $\spline qkn$.
Both spaces provide the same approximation order, but the smoother space has a degree higher than the regularity of the approximand.
In IGA the degree of the spline space is sometimes determined by the parametrisation of the domain, and not by the Sobolev regularity of the solution.
Our aim is to show that, for practical purposes, smooth spline spaces of degree higher than the Sobolev regularity have better approximation properties than lower smoothness spaces of the optimal degree.
Recalling that $\cf pknq$, $0\le q\le p$ is the best constant in
$$
\norm {f-\proj pkn f}\le \cf pknq \norm{\partial^{q+1}f},
$$
we compare $\cf p{p-1} m q$ with $\const q kn $ under the constraint $\dim \spline p{p-1}m = \dim \spline qkn,$
which corresponds to
\begin{equation}\label{eq:m-formula-low}
m=(q-k)(n-1)+1+q-p.
\end{equation}
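Formula \eqref{eq:m-formula-low} can be double-checked against the standard dimension count $\dim\spline pkn=(p-k)(n-1)+p+1$ (a quick sketch; the function names are ours):

```python
def dim_spline(p, k, n):
    # dimension of the degree-p, C^k spline space on n uniform elements
    return (p - k) * (n - 1) + p + 1

def m_low(p, q, k, n):
    # formula (eq:m-formula-low)
    return (q - k) * (n - 1) + 1 + q - p

for p in range(1, 9):
    for q in range(p + 1):
        for k in range(-1, q):
            for n in range(2, 12):
                m = m_low(p, q, k, n)
                if m >= 1:  # only meaningful partitions
                    assert dim_spline(p, p - 1, m) == dim_spline(q, k, n)
print("dimension constraint verified")
```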
\begin{theorem}\label{thm:lower-norm}
For all $0\le q \le p$, $k\in\{-1,0,\ldots,p-1\}$ and $n\ge 1$ we have
$$
\cf pknq \le \Big(\frac1{n\pi}\Big)^{q+1}.
$$
\end{theorem}
The proof is given only for $k=p-1$; since $\spline p{p-1}n\subseteq\spline pkn$ for all $k\le p-1$, and the projection error onto the larger space is smaller, this case implies the general one. We argue by induction starting from $q=0$.
The base case is proved in the following lemma.
\begin{lemma}\label{lem:ineqH1}
For all $p\geq 0$ and $n\geq 1$ we have
\begin{equation}\label{ineq:H1}
\cf p{p-1}n0 \le \frac1{n\pi}.
\end{equation}
\end{lemma}
\begin{proof}
If $p$ is even, then this follows directly from \cite[Theorem~2]{Floater:2018} (originally shown in \cite[Theorem~2]{Floater:2017}) where it is proved for a subspace of $\spline p{p-1}{n}$.
If $p$ is odd, \cite[Theorem~2]{Floater:2018} states approximation results for splines on a different partition.
We obtain the desired result by extending the domain to
$$
\widetilde I=(-\frac{1}{2n},1+\frac{1}{2n})
$$
and considering the spaces
\begin{align*}
\widetilde E&=\{ f\in \smooth {p-1}(\widetilde I) : \partial^s f(-\frac 1{2n} )= \partial^s f(1+\frac 1{2n} )=0, \ 1\le s\le p,\,s\text{ odd} \}\\
\widetilde{\mathcal S}&=\{f\in \widetilde E : f|_{[-\frac 1{2n},0)},\, f|_{[1,1+\frac 1{2n})},\,f|_{I_j}\in\poly p, \ j=0,\ldots,n-1 \}
\end{align*}
where we recall $I_j=[\frac jn,\frac{j+1}n)$.
Note that $\spline p{p-1}n$ is the restriction of $\widetilde{\mathcal S}$ to $[0,1]$ and that $\dim \widetilde {\mathcal S}=n+1$.
Furthermore let $\widetilde \Pi: L^2(\widetilde I) \rightarrow \widetilde {\mathcal S} $ be the orthogonal projection and $\mathcal{E}: H^1(I)\rightarrow H^1(\widetilde I)$ be the extension operator
$$
\mathcal{E} f(x) =\begin{cases}
f(0)& x\le 0,\\
f(x)& 0<x\le 1,\\
f(1)& x>1.
\end{cases}
$$
Then using \cite[Theorem~2]{Floater:2018} on $\widetilde I$ we get
\begin{align*}
\norm{f-\proj p{p-1}n f}&\le \norm{f- (\widetilde \Pi \circ \mathcal{E} f)|_{I}}
\le \norm{\mathcal{E} f- \widetilde \Pi \circ \mathcal{E} f}_{\widetilde I}\\
&\le \frac{n+1}{n}\frac 1{(n+1)\pi}\norm{\partial \mathcal{E} f}_{\widetilde I} = \frac{1}{n\pi}\norm{\partial f},
\end{align*}
where the factor $(n+1)/n$ is the length of $\widetilde I$, $n+1$ is $\dim \widetilde{\mathcal S}$ and $\norm{\cdot}_{\widetilde I}$ denotes the $L^2$ norm on the interval $\widetilde I$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:lower-norm}]
The proof is by induction. The cases $p\ge q=0$ are proved in Lemma \ref{lem:ineqH1}.
The case $(p,q)$ is proved assuming the result is true for $(p-1,q-1)$, namely that for $f\in H^q$ we have
$$\norm {f - \proj {p-1}{p-2}n f }\le \Big(\frac 1 {n\pi}\Big)^{q}\norm{\partial^q f}.$$
Let $Q:H^{1}\rightarrow \spline p{p-1}n$ be the projection defined by
\begin{equation}\label{eq:H1proj}
Q f (x) =c(f) + \int_0^x \proj {p-1}{p-2}n \partial f(z)\,dz
\end{equation}
where $c(f)\in\mathbb{R}$ is uniquely determined by requiring that $Q$ is a projection.
Since $\proj p{p-1}n$ is a projection and using Lemma \ref{lem:ineqH1}, we have for $f\in H^{q+1}$
\begin{align*}
\norm{f-\proj p{p-1}n f }&= \norm{ (f-Q f) -\proj p{p-1}n ( f-Q f) }
\le \Big(\frac 1 {n\pi}\Big) \norm{\partial (f-Q f)}.
\end{align*}
Using
\eqref{eq:H1proj} and the induction hypothesis on $\partial f\in H^q$ we obtain
\begin{align*}
\Big(\frac 1 {n\pi}\Big) \norm{\partial (f-Q f)} = \Big(\frac 1 {n\pi}\Big) \norm{\partial f - \proj {p-1}{p-2}n \partial f} \le \Big(\frac 1 {n\pi}\Big)^{q+1} \norm{\partial^{q+1}f}.
\end{align*}
\end{proof}
\begin{theorem}\label{thm:lower}
Let $q$ and $k<q-1$ be given. If $\rt qk{\bar n}<1$ for some $\bar n$, then for all $p\ge q$,
\begin{equation}\label{ineq:lower}
n\ge \frac{p-k-1}{q-k-1}\bar n
\end{equation}
and $m$ as in \eqref{eq:m-formula-low}, we have
$$
\cf p{p-1}mq<\const qkn.
$$
\end{theorem}
Observe that for fixed $n, k$ and $q$, the degree $p$ can only be increased until it reaches $p=(q-k)(n-1) +q $.
At that point $m=1$ and $\spline p{p-1}m=\poly p$.
\begin{proof}
Similar to Section \ref{sec:comparison-1d} we have
$$
\frac {\cf p{p-1}mq}{\const qkn}\le \rf pknq
$$
where
\begin{align*}
\rf pknq&= (\Bf pknq)^{q+1}\sqrt{4q+6}\quad\text{with}\\
\Bf pknq&=\frac{4}{e\pi} \frac{n (q+1)}{(q-k)(n-1)+1+q-p}.
\end{align*}
Moreover, $$\Bf pknq \le \B qk{\bar n}\quad \Rightarrow\quad\rf pknq\le \rt qk{\bar n}$$
and
$\Bf pknq \le \B qk{\bar n}$ is equivalent to \eqref{ineq:lower}.
\end{proof}
Comparing $\Bf pknq$ in the above proof with $\B pkn$ for the case $p=q$ in Section~\ref{sec:comparison-1d}, there is only an additional $q-p$ in the denominator.
\begin{example}
IGA-DG comparison in $H^2$. It follows from Theorem \ref{thm:dg} that for $k=-1$ and $q=1$ we can choose $\bar n=3$ in \eqref{ineq:lower}. Thus the IGA space of degree $p\geq 1$ gives better approximation in $H^2$ than the DG space of degree $1$ for all $n\geq 3p$.
\end{example}
\begin{example}
IGA-FEM comparison in $H^4$. It follows from Theorem \ref{thm:fem} that for $k=0$ and $q=3$ we can choose $\bar n=7$ in \eqref{ineq:lower}. Thus the IGA space of degree $p\geq 3$ gives better approximation in $H^4$ than the FEM space of degree $3$ for all $n\geq 7(p-1)/2$.
\end{example}
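Both thresholds are immediate instances of \eqref{ineq:lower}; a two-line check (names ours):

```python
def n_threshold(p, q, k, nbar):
    # right-hand side of (ineq:lower)
    return (p - k - 1) / (q - k - 1) * nbar

# IGA-DG in H^2: k = -1, q = 1, nbar = 3  gives  n >= 3p
assert all(n_threshold(p, 1, -1, 3) == 3 * p for p in range(1, 20))
# IGA-FEM in H^4: k = 0, q = 3, nbar = 7  gives  n >= 7(p-1)/2
assert all(n_threshold(p, 3, 0, 7) == 7 * (p - 1) / 2 for p in range(3, 20))
print("thresholds match the two examples")
```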
\section{Broken Sobolev spaces}
\label{sec:broken}
In numerical methods for PDEs, and especially in IGA \cite{Buffa:14}, it is common to consider broken Sobolev spaces, i.e., spaces of functions that are piecewise in $H^{p+1}$.
The problem of approximating functions in broken Sobolev spaces arises in DG, PDEs with discontinuous coefficients and in isoparametric methods where the parametrization is only piecewise regular.
The aim of this section is to show that smooth spline spaces have better approximation properties, provided the discontinuities are representable in the spline space and that the partitions are sufficiently fine.
Let $\Xi=\{\xi_1<\dots<\xi_T\}\subset (0,1)$ be a set of breakpoints and $S=(s_1,\dots,s_{T})$, $s_i\in\{-1,\dots,p-1\}$, be the corresponding smoothness parameters. For notational reasons let $\xi_0=0$ and $\xi_{T+1}=1$.
We consider the broken Sobolev space $\mathfrak H^{p+1}(\Xi,S)$ defined by
\begin{equation}
\mathfrak H^{p+1}(\Xi,S) = \left\{
\begin{aligned}f:\ &\partial^{p+1} f \in L^2(\xi_i,\xi_{i+1}),\, i=0,\dots,T
\\& \partial^{s_i+1} f \in L^2(\xi_{i-1},\xi_{i+1}),\,i=1,\dots,T
\end{aligned} \right\}.
\end{equation}
See Figure \ref{fig:broken} for an example.
We will consider error estimates of the type $$
\norm{f-\Pi f} \le C \norm{\partial^{p+1}f}_\Xi
$$
where $\norm{\cdot}_\Xi$ is the piecewise $L^2$ norm:
$$
\norm{g}_\Xi = \Big (\sum_{i=0}^T \norm{ g}^2_{L^2(\xi_i,\xi_{i+1})} \Big)^{\frac 12}.
$$
To achieve the expected approximation order it is necessary that the spline spaces can represent the discontinuities of the derivatives of the functions in $\mathfrak H^{p+1}(\Xi,S)$.
Because of this we enrich the spline space $\spline pkn$ by adding
$$
\mathfrak J^p_k(\Xi,S)= \{f: f|_{(\xi_i,\xi_{i+1})}\in\poly p,\, f\in \smooth {\min \{k, s_i\}}(\xi_{i-1},\xi_{i+1}),\ i=1,\dots,T \}
$$
and obtaining
$$
\brkS pkn(\Xi,S)= \spline pkn + \mathfrak J^p_k(\Xi,S).
$$
Thus $\brkS pkn(\Xi,S)$ is a spline space having varying degree of smoothness at the breakpoints. An example is shown in Figure~\ref{fig:broken}.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw (0,0) node[above] {$0$} -- (7,0) node[above] {$1$};
\draw (0,-.1)--(0,.1)(7,-.1)--(7,.1);
\node[red] (CR) at (6,-1.5) {$\smooth{s_i}$};
\foreach \x[count=\i] in {2.3, 4.3, 5.5}
{
\fill[red] (\x,0) circle (1.5pt) node (Q\i) {};
\draw[red,->] (CR)--(Q\i);
\node[above,red] at (Q\i) {$\xi_\i$};
}
\end{tikzpicture}
\begin{tikzpicture}
\draw (0,0) node[above] {$0$} -- (7,0) node[above] {$1$};
\draw (0,-.1)--(0,.1)(7,-.1)--(7,.1);
\node (CK) at (3,-1.5) {$\smooth k$};
\node[red] (CR) at (6,-1.5) {$\smooth {\min\{s_i,k\}}$};
\foreach \x[count=\i] in {1,...,6}
{
\fill (\x,0) circle (1.5pt) node (K\i){};
\draw[gray,->] (CK)--(K\i);
\node[above] at (K\x) {$\frac \i7$};
}
\foreach \x[count=\i] in {2.3, 4.3, 5.5}
{
\fill[red] (\x,0) circle (1.5pt) node (Q\i) {};
\draw[red,->] (CR)--(Q\i);
\node[above,red] at (Q\i) {$\xi_\i$};
}
\end{tikzpicture}
\end{center}
\caption{Above, the breakpoints and corresponding regularities defining $\mathfrak H^{p+1}(\Xi,S)$. Below, those defining $\brkS pk7(\Xi,S)$.}\label{fig:broken}
\end{figure}
Let $\brkC pkn(\Xi,S)$ be the smallest constant such that for all $f\in\mathfrak H^{p+1}(\Xi,S)$ we have
$$
\norm{f- \mathfrak P_{p,k,n} f} \le \brkC pkn(\Xi,S) \norm{\partial^{p+1}f}_\Xi,
$$
where $\mathfrak P_{p,k,n}$ is the orthogonal projection onto $\brkS pkn(\Xi,S)$.
As in the previous sections we compare $\brkC p{p-1}m(\Xi,S)$ with $\brkC pkn(\Xi,S)$.
In this case it is not always possible to choose $m$ such that $\dim \brkS p{p-1}m(\Xi,S)=\dim\brkS pkn(\Xi,S)$ because an increment of $1$ in $m$ does not necessarily correspond to an increment of $1$ in $\dim \brkS p{p-1}m(\Xi,S)$, e.g., when some of the breakpoints of $\spline p{p-1}m$ align with the points in $\Xi$.
The dimension of $\brkS pkn(\Xi,S)$ is
$$
\dim \brkS pkn(\Xi,S)= (p-k)(n-1) +p+1 + \sum_{i=1}^T \sigma_i
$$
where
$$\sigma_i=\begin{cases}
p-\min\{k, s_i\}& \xi_i\not\in\{j/n,j=1,\dots,n-1\}\\
\max\{k-s_i, 0\}&\xi_i\in \{j/n,j=1,\dots,n-1\}.
\end{cases}
$$
In particular for $k=p-1$ we have $s_i\le k$ and $k-s_i=(p-s_i)-1$, giving
$$
\dim \brkS p{p-1}m(\Xi,S)= m +p + \sum_{i=1}^T (p-s_i) - \#(M\cap \Xi)
$$
where $M=\{i/m:\,i=1,\dots,m-1 \}$.
Our choice of $m$ is
\begin{equation}\label{eq:brk-m}
m= (n-1)(p-k)+1+ \sum_{i=1}^T(\sigma_i+s_i-p)
\end{equation}
for which we have
$$\dim \brkS p{p-1}m(\Xi,S)=\dim \brkS pkn(\Xi,S)-\#(M\cap \Xi)\leq \dim \brkS pkn(\Xi,S).$$
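The bookkeeping behind \eqref{eq:brk-m} can be checked mechanically; in the sketch below (helper names ours) breakpoints are stored as exact fractions so that alignment with the uniform partitions is detected exactly:

```python
from fractions import Fraction as F

def sigma(p, k, s, xi, n):
    # the jump contribution sigma_i from the dimension formula
    aligned = any(xi == F(j, n) for j in range(1, n))
    return max(k - s, 0) if aligned else p - min(k, s)

def dim_brk(p, k, n, Xi, S):
    return (p - k) * (n - 1) + p + 1 + sum(sigma(p, k, s, xi, n)
                                           for xi, s in zip(Xi, S))

def check(p, k, n, Xi, S):
    sig = [sigma(p, k, s, xi, n) for xi, s in zip(Xi, S)]
    m = (n - 1) * (p - k) + 1 + sum(si + s - p for si, s in zip(sig, S))
    overlap = sum(any(xi == F(j, m) for j in range(1, m)) for xi in Xi)
    assert dim_brk(p, p - 1, m, Xi, S) == dim_brk(p, k, n, Xi, S) - overlap

check(3, 0, 8, [F(1, 3), F(1, 2)], [0, -1])
check(3, 0, 5, [F(1, 2)], [1])      # here 1/2 = 7/m with m = 14, so one point aligns
check(4, -1, 6, [F(1, 2)], [1])
print("dimension relation verified")
```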
\begin{lemma}
For all $\Xi$ and $S$ we have
\begin{equation}\label{eq:broken-bounds}
\frac{(p+1)!}{(2p+2)!\sqrt{2p+3}} (n+T)^{-p-1}\le \brkC pkn(\Xi,S) \le \const pkn.
\end{equation}
\end{lemma}
\begin{proof}
The lower bound is obtained by looking at $k=-1$. In this case $\brkS pkn(\Xi,S)$ is a space of discontinuous piecewise polynomials on the non-uniform partition containing the intersections $(\xi_i,\xi_{i+1})\cap I_j$. This partition has at most $n+T$ elements; moreover, for a given number of elements, the approximation error of $x^{p+1}$ is minimized by the uniform partition. We can thus use $\const p{-1}{n+T}$ as a lower bound.
Next we look at the upper bound.
Given any $f \in \mathfrak H^{p+1}(\Xi,S)$ we can choose a $g\in \mathfrak J^p_k(\Xi,S)$ such that $f-g\in H^{p+1}$ and
\begin{align*}
\Vert f -\mathfrak P_{p,k,n} f\Vert^2 &=\Vert (f-g) -\mathfrak P_{p,k,n} (f-g) \Vert^2\\
&\le\norm{(f-g) -\proj pkn (f-g)}^2\\
&\le (\const pkn \norm{\partial^{p+1}(f-g)})^2\\
&=(\const pkn)^2\sum_{i=0}^T \norm{\partial^{p+1}(f-g)}^2_{L^2(\xi_i,\xi_{i+1})}\\
&=(\const pkn \norm{\partial^{p+1}f}_\Xi)^2.
\end{align*}
The result then follows by taking the square-root of both sides.
\end{proof}
Similarly to Section \ref{sec:low} we deduce the following result:
\begin{theorem}\label{thm:broken}
Let $\Xi$ and $S$ be given. If $\rt pk{\bar n}<1$ for some $\bar n$, then for all
\begin{equation}\label{ineq:2}
n\ge \Big(1 + \frac{T(p-k)-\sum_{i=1}^T (\sigma_i+s_i-p)}{p-k-1} \Big)\bar n - T
\end{equation}
and $m$ as in \eqref{eq:brk-m} we have
$$
\brkC p{p-1}m(\Xi,S)<\brkC pkn(\Xi,S).
$$
\end{theorem}
\begin{proof}
Reasoning as in Section \ref{sec:comparison-1d} we have
$$
\frac {\brkC p{p-1}m}{\brkC pkn}\le \brkR pkn,
$$
where
\begin{align*}
\brkR pkn&= (\brkB pkn)^{p+1}\sqrt{4p+6}\quad\text{with}\\
\brkB pkn&=\frac{4}{e\pi} \frac{(n+T) (p+1)}{(p-k)(n-1)+1+\sum_{i=1}^T(\sigma_i+s_i-p)}.
\end{align*}
Moreover, $$\brkB pkn \le \B pk{\bar n}\quad \Rightarrow\quad \brkR pkn\le \rt pk{\bar n}$$
and
$\brkB pkn \le \B pk{\bar n}$ is equivalent to \eqref{ineq:2}.
\end{proof}
\section{The multivariate case}\label{sec:tens}
In this section we consider the unit hypercube $\Omega=(0,1)^d$ and a tensor product space
\begin{equation}\label{eq:tensor}
{\bf V}= V_1 \otimes\dots\otimes V_d.
\end{equation}
Let $\Pi_{\bf V}$ be the $L^2(\Omega)$ projection onto $\bf V$, $\Pi_{i}$ the $L^2(0,1)$ projection onto $V_i$ and $C_{i}$ be the smallest constant in the univariate estimate
$$
\Vert f-\Pi_{i} f\Vert \le C_i \Vert \partial^{q_i+1}f\Vert,\qquad\forall f\in H^{q_i+1}(0,1).
$$
\begin{theorem}\label{thm:multivariate}
For all $f\in L^2(\Omega)$ such that $\partial_i^{q_i+1} f \in L^2(\Omega)$ for $i=1,\dots,d$, we have
\begin{equation}\label{ineq:kd}
\norm{f-\Pi_{\bf V} f} \le \sum_{i=1}^d C_i \norm{\partial_i^{q_i+1}f}
\end{equation}
and the result is sharp.
\end{theorem}
\begin{proof}
First of all, we recall that $L^2(\Omega)=L^2(0,1)\otimes\dots\otimes L^2(0,1)$ and that $\Pi_{\bf V}$ factorizes as
$$
\Pi_{\bf V} = \Pi_1\otimes \dots \otimes \Pi_d.
$$
These factors commute as in
$\Pi_i\otimes \Pi_j=(\Pi_i\otimes \mathrm{I})\circ(\mathrm{I}\otimes \Pi_j)=(\mathrm{I}\otimes \Pi_j)\circ (\Pi_i\otimes \mathrm{I})$ where $\mathrm I$ is the identity operator.
The theorem is proved by induction on $d$. For $d=1$ it is the definition of $C_1$.
Now suppose the result is true for dimension $d-1$, i.e., that for $\Pi_*=\Pi_1\otimes\dots\otimes\Pi_{d-1}$ we have
$$
\norm{f-\Pi_* f}\le \sum_{i=1}^{d-1}C_i\norm{\partial_i^{q_i+1}f}.
$$
Using the triangle inequality and that $\norm{\mathrm{I}\otimes\Pi_d}=1$ we find
\begin{align*}
\norm{f- \Pi_1\otimes\dots\otimes \Pi_d f}&\le \norm{f-\mathrm{I}\otimes\Pi_d f} + \norm{\mathrm{I}\otimes\Pi_df- \Pi_*\otimes\Pi_d f}\\
&\le C_d \norm{\partial_{d}^{q_d+1}f} + \norm{\mathrm{I}\otimes\Pi_d} \norm{f - \Pi_*\otimes \mathrm{I} f}\\
&\le \sum_{i=1}^d C_i \norm{\partial_i^{q_i+1}f}.
\end{align*}
To see that the result is sharp we consider $f(x_1,\dots,x_d)=g(x_i)$ and notice that the statement is false for any constant smaller than $C_i$.
\end{proof}
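The factorization $\Pi_{\bf V}=\Pi_1\otimes\dots\otimes\Pi_d$ and the commutation of its factors can be illustrated in a finite-dimensional analogue, where projections become matrices and $\otimes$ the Kronecker product (a sketch with hand-picked projection matrices; all names are ours, and this is not the spline projection itself):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    # Kronecker product: row index (i,k), column index (j,l)
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

I2 = [[1.0, 0.0], [0.0, 1.0]]
P1 = [[1.0, 0.0], [0.0, 0.0]]   # orthogonal projection onto span{(1,0)}
P2 = [[0.5, 0.5], [0.5, 0.5]]   # orthogonal projection onto span{(1,1)}

# (P1 x I)(I x P2) = (I x P2)(P1 x I) = P1 x P2
left = matmul(kron(P1, I2), kron(I2, P2))
right = matmul(kron(I2, P2), kron(P1, I2))
full = kron(P1, P2)
assert left == right == full
print("factors commute and compose to the tensor-product projection")
```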
From Theorem \ref{thm:multivariate} we deduce that all conclusions obtained in the univariate comparisons extend to the tensor product setting by considering each direction separately.
\section{Conclusions}
In this paper we have provided a mathematical justification for the numerically observed phenomenon that $\smooth {p-1}$ splines of degree $p$, the so-called $k$-\emph{method} in IGA, provide better approximation per degree of freedom than splines of smoothness $\smooth {-1}$ (DG) and $\smooth {0}$ (FEM). Specifically, we have shown in Section~\ref{sec:comparison-1d} that for sufficiently refined uniform partitions, $\smooth{p-1}$ splines yield better a priori error estimates than $\smooth{-1}$ splines for $p\ge 2$, and $\smooth0$ splines for $p\ge 3$, when approximating functions in the Sobolev space $H^{p+1}$.
For $p=2$ it is an open problem whether $\smooth {1}$ spline spaces provide better a priori error estimates than $\smooth0$ spline spaces of the same dimension. Sharper estimates on the approximation constants could complete the result for this case. Since we have used the lower bound for discontinuous spline approximation also as the lower bound for continuous spline approximation, it seems reasonable to look for an improved lower bound on the approximation constants for $\smooth0$ splines.
It is worth mentioning that the combination of the techniques in Sections~\ref{sec:low} and \ref{sec:broken} also allows the comparison for lower order broken Sobolev spaces. This comparison has not been included.
\section*{Appendix}
\begin{lemma}\label{lem:poly-interp}
Let $p\geq 1$ be any odd number. Then
the interpolation problem: find $g \in \poly p$ such that for all $s=0,2,\ldots,p-1,$
\begin{equation}\label{poly-interp}
\left\{\begin{aligned}
\partial^s g(0)&=a_s\\
\partial^s g(1)&=b_s
\end{aligned}\right .\
\end{equation}
admits a solution for all $\{a_s\}$, $\{b_s\}$.
\end{lemma}
\begin{proof}
We proceed by induction on $p$. If $p=1$ then the linear interpolant $g(x)=a_0 + x (b_0-a_0)$ satisfies $g(0)=a_0$ and $g(1)=b_0$ and is a solution.
Now, let $p\geq 3$ be any odd number and assume the result is true for $p-2$. Let $f\in \poly {p-2}$ be the solution of
\begin{equation*}
\partial^s f(0)=a_{s+2},\quad \partial^s f(1)=b_{s+2}\quad s=0,2,\ldots,p-3,
\end{equation*}
which we know exists by the induction hypothesis. We then define the function $g$ by integrating $f$ twice, i.e.,
$$g(x)=cx+d+\int_0^x\int_0^yf(z)\,dz\,dy.$$
This function then satisfies the cases $s\geq2$ of \eqref{poly-interp} for all $c,d\in\mathbb{R}$. We finish the proof by picking the constants $c$ and $d$ such that the case $s=0$, meaning $g(0)=a_0$ and $g(1)=b_0$, is also satisfied.
\end{proof}
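Since the interpolation problem of Lemma \ref{lem:poly-interp} is square ($(p+1)/2$ even-order conditions at each endpoint against $\dim\poly p=p+1$ coefficients), solvability is equivalent to nonsingularity of the constraint matrix, which can be checked exactly in the monomial basis (a sketch; the helper names and the basis choice are ours):

```python
from fractions import Fraction
from math import factorial

def constraint_matrix(p):
    # rows: d^s g(0) and d^s g(1) for even s <= p-1;
    # columns: coefficients of the monomials 1, x, ..., x^p
    rows = []
    for s in range(0, p, 2):
        # d^s x^j at 0 equals s! if j == s, else 0
        rows.append([Fraction(factorial(s)) if j == s else Fraction(0)
                     for j in range(p + 1)])
        # d^s x^j at 1 equals j!/(j-s)! for j >= s, else 0
        rows.append([Fraction(factorial(j), factorial(j - s)) if j >= s
                     else Fraction(0) for j in range(p + 1)])
    return rows

def rank(M):
    # exact Gaussian elimination over the rationals
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

for p in [1, 3, 5, 7, 9]:
    M = constraint_matrix(p)
    assert len(M) == p + 1 and rank(M) == p + 1  # square and nonsingular
print("interpolation problem nonsingular for p = 1, 3, 5, 7, 9")
```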
\begin{lemma}\label{lem:poly-interp2}
Let $p\geq 0$ be any even number. Then
the interpolation problem: find $g \in \poly p$ such that for all $s=1,3,\ldots,p-1,$
\begin{equation}\label{poly-interp2}
\left\{\begin{aligned}
\partial^s g(0)&=a_s\\
\partial^s g(1)&=b_s
\end{aligned}\right .\
\end{equation}
admits a solution for all $\{a_s\}$, $\{b_s\}$.
\end{lemma}
\begin{proof}
For $p=0$ there is nothing to prove, and so we consider an even number $p\geq 2$. We then let $f\in\poly{p-1}$ be the solution of
\begin{equation*}
\partial^s f(0)=a_{s+1},\quad \partial^s f(1)=b_{s+1}\quad s=0,2,\ldots,p-2,
\end{equation*}
which we know exists by Lemma \ref{lem:poly-interp}. The function $g(x)=c+\int_0^xf(y)\,dy$ is then a solution of \eqref{poly-interp2} for any $c\in\mathbb{R}$.
\end{proof}
\section*{Acknowledgements}
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement 339643.
% Surprising symmetries in 132-avoiding permutations
% https://arxiv.org/abs/1202.2023
\begin{abstract}
We prove that the total number $S_{n,132}(q)$ of copies of the pattern $q$ in all 132-avoiding permutations of length $n$ is the same for $q=231$, $q=312$, or $q=213$. We provide a combinatorial proof for this unexpected threefold symmetry. We then significantly generalize this result to show an exponential number of different pairs of patterns $q$ and $q'$ of length $k$ for which $S_{n,132}(q)=S_{n,132}(q')$ and the equality is non-trivial.
\end{abstract}
\section{Introduction}
\subsection{Background and Definitions}
Let $q=q_1 q_2\ldots q_k$ be a permutation in the
symmetric group $S_k$.
We say that the permutation
$p=p_1 p_2 \ldots p_n\in
S_n$ {\it contains a $q$-pattern\/}
if and only if there is a subsequence
$p_{i_1}p_{i_2}\ldots p_{i_k}$ of $p$ whose elements are in the same
relative order as those in $q$, that is,
$$
p_{i_t}<p_{i_u} \mbox{ if and only if } q_t<q_u
$$
whenever $1\leq t,u\leq k$. If $p$ does not contain $q$, then we say that
$p$ {\em avoids} $q$. For instance, 214653 contains 231 (consider the
third, fourth, and sixth entries), but avoids 4321.
See Chapter 14 of \cite{bona}
for an introduction to pattern avoiding permutations, and Chapters 4 and 5
of \cite{combperm} for a somewhat more detailed treatment.
It is straightforward to compute, using the linearity of expectation,
that the average number of $q$-patterns in a randomly selected permutation
of length $n$ is $\frac{1}{k!}{n\choose k}$, where $k$ is the length of $q$.
Joshua Cooper \cite{cooper}
has raised the following interesting family of questions.
Let $r$ be a given permutation pattern. What can be said about the
average number of occurrences of $q$ in a randomly selected $r$-avoiding
permutation of a given length?
Equivalently, can we determine the {\em total number}
$S_{n,r}(q)$ of all $q$-patterns in all $r$-avoiding permutations of
length $n$?
\subsection{Earlier Results}
In \cite{occurrences}, the present author found formulae for the generating
functions of
the sequence $S_{n,132}(q)$ for the cases of monotone $q$, that is,
for $q=12\cdots k$ and $q=k(k-1)\cdots 1$, for any $k$. He also
proved that if $n$ is large enough, then for any fixed $k$,
among all patterns $q$ of length $k$, it is
the monotone decreasing pattern that maximizes $S_{n,132}(q)$, and it is
the monotone increasing pattern that minimizes $S_{n,132}(q)$.
\subsection{The Outline of our Paper}
In this paper, we first present a computational proof of the
surprising fact that for all $n$, the equalities
\begin{equation} \label{triple}
S_{n,132}(231)=S_{n,132}(312)=S_{n,132}(213)
\end{equation}
hold. The first equality is trivial, since taking the inverse
of a 132-avoiding permutation keeps that permutation 132-avoiding,
and turns 231-patterns into 312-patterns. However, the second
equality is non-trivial. (The reverse or complement of
a 132-avoiding permutation is not necessarily 132-avoiding.)
In particular, if $a(p)$ denotes the number of
213-copies in $p$, and $b(p)$ denotes the number of 231-copies in $p$, then
the statistics $a(p)$ and $b(p)$ are {\em not} equidistributed over the
set of all 132-avoiding permutations of length $n$, but their
average values are equal over that set.
In other words, we will prove that a randomly selected
non-monotonic pattern of length three in a 132-avoiding
permutation is equally likely to be a 231-pattern, a
312-pattern, or a 213-pattern. It is well-known (see Chapter 14 of \cite{bona})
that 132-avoiding permutations of length $n$ are
counted by the Catalan numbers $c_n={2n\choose n}/(n+1)$, and as such,
they are one of more than 150 distinct kinds of objects counted by those
numbers.
However, we do not know of any other example when a natural statistic
on objects counted by Catalan numbers shows a similar threefold symmetry.
In the next part of the paper we provide a bijective proof
of (\ref{triple}). Finally, we will significantly generalize this
result by showing more than $c_{h-2}$
pairs of patterns $q$ and $q'$ of length $h$ that
behave as 213 and 231, that is, for which $S_{n,132}(q)=
S_{n,132}(q')$, and the equality is non-trivial.
\section{Arguments Using Generating Functions}
Let $d_n$ be the total number of inversions (in other words, copies
of the pattern 21) in
all 132-avoiding $n$-permutations. It is proved in \cite{occurrences} that
\begin{equation}
\label{inversions} D(x)= \sum_{n\geq 1}d_nx^n=\frac{x}{1-4x} \cdot
\left (\frac{1}{\sqrt{1-4x}} - \frac{1-\sqrt{1-4x}}{2x} \right).
\end{equation}
\subsection{Counting Copies of 213}
Let $a_n$ be the total number of all 213-patterns in all 132-avoiding
permutations of length $n$. Clearly, then $a_0=a_1=a_2=0$.
There are three ways that a 132-avoiding permutation $p$ of length $n$
can contain
a 213-pattern $q$. Either $q$ is entirely on the left of the entry $n$, or
$q$ is entirely on the right of $n$, or $q$ ends in $n$.
For $n\geq 3$, this leads to the recurrence relation
\[a_n= \sum_{i=1}^n a_{i-1}c_{n-i} + \sum_{i=1}^{n} c_{i-1}a_{n-i} + \sum_{i=3}^n
d_{i-1} c_{n-i}.\]
Let $A(x)$ (resp. $C(x)$) be the ordinary generating function for the
sequence of the numbers $a_n$ (resp. $c_n$).
Then the last displayed formula
yields the functional equation
\[A(x)= 2xA(x)C(x)+xD(x)C(x) ,\]
which is equivalent to
\begin{equation}
\label{explicitA} A(x)=\frac{xD(x)C(x)}{1-2xC(x)}=\frac{x}{2(1-4x)^2}
+\frac{x-1}{2(1-4x)^{3/2}} + \frac{1}{2(1-4x)}.\end{equation}
From here, we get that if $n\geq 3$, then
\[a_n=\frac{n}{2}4^{n-1}+\frac{1}{2}4^n-(2n+1){2n-1\choose n-1}+
(2n-1){2n-3\choose n-2}, \]
which simplifies to
\begin{equation}
\label{exactfora} a_n=(n+4)\cdot 2^{2n-3}-(2n+1){2n-1\choose n-1}+
(2n-1){2n-3\choose n-2}. \end{equation}
\subsection{Counting Copies of 231}
Let $h_n$ be the total number of all non-inversions (in other words,
copies of the pattern 12) in all
132-avoiding permutations of length $n$. It is proved in \cite{occurrences} that
\begin{equation} \label{non-inversions}
H(x)=\sum_{n\geq 0}h_nx^n=\frac{1}{2(1-4x)} + \frac{1}{2x}
-\frac{1-x}{2x\sqrt{1-4x}}.
\end{equation}
Let $b_n$ be the total number of all 231-copies in all
132-avoiding permutations of length $n$, and let $B(x)=\sum_{n\geq 0}b_nx^n$.
Let \begin{equation}
\label{explicitZ} Z(x)=\sum_{n\geq 0}nc_nx^n=
\sum_{n\geq 0}{2n\choose n}\frac{n}{n+1}x^n=
\frac{1}{\sqrt{1-4x} }- \frac{1-\sqrt{1-4x}}{2x}.\end{equation}
Note that
$Z(x)$ is the generating function for the number of entries (which are
copies of the pattern 1)
in all 132-avoiding $n$-permutations.
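Since $[x^n](1-4x)^{-1/2}={2n\choose n}$ and $[x^n]C(x)=c_n$, the closed form \eqref{explicitZ} amounts to the identity $nc_n={2n\choose n}-c_n$, which is easy to verify (a sketch):

```python
from math import comb

def catalan(n):
    # c_n = C(2n, n)/(n+1), always an integer
    return comb(2 * n, n) // (n + 1)

for n in range(25):
    assert n * catalan(n) == comb(2 * n, n) - catalan(n)
print("n*c_n = C(2n,n) - c_n verified for n = 0..24")
```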
If $p$ is a 132-avoiding $n$-permutation, and $q$ is a 231-pattern contained
in $p$, then either $q$ is entirely on the left of the entry $n$, or $q$
is entirely on the right of the entry $n$, or the entry $n$ is the largest
entry of $q$, or the first and second entries of $q$ form a 12-pattern on
the left of $n$, while the third entry of $q$ is on the right of $n$.
For $n\geq 3$, this leads to the recurrence relation
\[b_n=\sum_{i=1}^n b_{i-1}c_{n-i} + \sum_{i=1}^{n} c_{i-1}b_{n-i} +
\sum_{i=2}^{n-1}(i-1)(n-i)c_{i-1}c_{n-i} + \sum_{i=3}^{n-1} h_{i-1}c_{n-i}(n-i).\]
In terms of generating functions, this yields
\[B(x)=2xB(x)C(x) + xZ^2(x)+xH(x)Z(x),\]
\[B(x)=\frac{ xZ^2(x)+xH(x)Z(x)}{1-2xC(x)}=
\frac{ xZ^2(x)+xH(x)Z(x)}{\sqrt{1-4x}}.\]
Given the explicit formulae (\ref{non-inversions}) and (\ref{explicitZ})
for $H(x)$ and $Z(x)$, the last displayed equation yields
the formula
\begin{equation} \label{explicitB}
B(x)=\frac{xZ^2(x)+xH(x)Z(x)}{\sqrt{1-4x}}=\frac{x}{2(1-4x)^2}
+\frac{x-1}{2(1-4x)^{3/2}} + \frac{1}{2(1-4x)}.
\end{equation}
The proof of the main result of this section is now immediate.
\begin{theorem}
For all positive integers $n$, the equalities
\[
S_{n,132}(231)=S_{n,132}(312)=S_{n,132}(213)\]
hold.
\end{theorem}
\begin{proof} As we mentioned in the Introduction, the first
equality is trivially true since there is a natural bijection
between the 231-copies of the 132-avoiding permutation $p$
and the 312-copies of the 132-avoiding permutation $p^{-1}$. Indeed, let
$p=p_1p_2\cdots p_n$ be a 132-avoiding permutation, and let
$1\leq i<j<k\leq n$.
Then $p_ip_jp_k$ is a 231-copy in $p$ if and only if
$ijk$ is a 312-copy in $p^{-1}$.
The equality $S_{n,132}(231)=S_{n,132}(213)$ holds since we have seen
in formulae (\ref{explicitA}) and (\ref{explicitB}) that the two
sides of this equality have identical generating functions.
\end{proof}
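The theorem can also be checked directly by exhaustive enumeration for small $n$ (a brute-force sketch; helper names are ours):

```python
from itertools import permutations

def avoids_132(p):
    n = len(p)
    return not any(p[i] < p[k] < p[j]
                   for i in range(n)
                   for j in range(i + 1, n)
                   for k in range(j + 1, n))

def order3(t):
    # relative order of a triple of distinct values, e.g. (5,2,7) -> (2,1,3)
    s = sorted(t)
    return tuple(s.index(x) + 1 for x in t)

def total(n, q):
    # total number of copies of the length-3 pattern q over all 132-avoiders
    tot = 0
    for p in permutations(range(1, n + 1)):
        if not avoids_132(p):
            continue
        m = len(p)
        tot += sum(1 for i in range(m)
                   for j in range(i + 1, m)
                   for k in range(j + 1, m)
                   if order3((p[i], p[j], p[k])) == q)
    return tot

for n in range(3, 8):
    assert total(n, (2, 3, 1)) == total(n, (3, 1, 2)) == total(n, (2, 1, 3))
print("threefold symmetry verified for n = 3..7")
```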
\section{A Bijective Proof}
In this section we provide a bijective proof for the surprising
identity $S_{n,132}(213)=S_{n,132}(231)$.
\subsection{Binary Plane Trees}
In our proof, we will identify a 132-avoiding permutation $p$ with its
{\em binary plane tree} $T(p)$ using a very well-known bijection.
We will briefly describe this bijection now. For more details,
the reader may consult Chapter 14 of \cite{bona}.
The tree $T(p)$ will be a binary plane tree, that is,
a rooted unlabeled
tree in which each vertex has at most two children, and each child
is a left child or a right child of its parent, even if it is the only
child of its parent.
The root of $T(p)$ corresponds to the entry $n$ of $p$, the left subtree
of the root corresponds to the string of entries of $p$ on the left of
$n$, and the right subtree of the root corresponds to the string of
entries of $p$ on the right of $n$. Both subtrees are constructed
recursively, by the same rule. Note that since $p$ is 132-avoiding,
the position of the entry $n$ of $p$ determines the set of entries
that are on the left (resp. on the right) of $n$. In fact, if $n$
is in the $i$th position,
the set of entries on the left of $n$ must be $\{n-i+1,n-i+2,\cdots ,n-1\}$,
and the set of
entries on the right of $n$ must be $\{1,2,\cdots ,n-i\}$.
We point out that in the process of constructing $T(p)$,
each vertex of $T(p)$ is associated to an entry of
$p$. Indeed, each vertex is added to $T(p)$ as the root of a subtree $S$,
and so
each vertex is associated to the entry that is the largest among the entries
that belong to $S$.
However, it is important to point out that
$T(p)$ is an {\em unlabeled tree} since the way in which the entries
of $p$ correspond to the vertices of $T(p)$ is completely determined
by the unlabeled tree $T(p)$ as long as $p$ is 132-avoiding.
See Figure \ref{binplane} for an illustration.
\begin{figure}[ht]
\begin{center}
\epsfig{file=binplane.eps}
\caption{The tree $T(p)$ for $p=67823415$, and the entries of $p$
associated to the vertices of $T(p)$.}
\label{binplane}
\end{center}
\end{figure}
Note that in order to get $p$ from $T(p)$, it suffices to read
the vertices of $T(p)$ {\em in-order}, that is, by first reading
the left subtree of the root, then the root itself, and
then the right subtree of the root. The respective subtrees are read
recursively, by this same rule. Therefore, it is meaningful to talk
about the first, second, etc, last vertex of $T(p)$, since that means
the first, second, etc, last vertex of $T(p)$ in the {\em in-order}
reading.
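The bijection and its in-order inverse can be sketched as follows (a minimal illustration with our own function names {\tt tree}, {\tt size}, {\tt read}; trees are nested tuples).

```python
def tree(p):
    """The binary plane tree T(p): the root corresponds to the largest
    entry of p, and the two subtrees are built recursively from the
    entries to its left and to its right."""
    if not p:
        return None
    i = p.index(max(p))
    return (tree(p[:i]), tree(p[i + 1:]))

def size(t):
    """Number of vertices of a tree given as nested (left, right) tuples."""
    return 0 if t is None else 1 + size(t[0]) + size(t[1])

def read(t, lo, hi):
    """Recover the permutation of {lo, ..., hi} encoded by the unlabeled
    tree t by reading it in-order: left subtree, root, right subtree.
    Valid only for 132-avoiding permutations, where the left subtree of
    the root carries the largest remaining entries."""
    if t is None:
        return []
    left, right = t
    nleft = size(left)
    return (read(left, hi - nleft, hi - 1) + [hi]
            + read(right, lo, hi - nleft - 1))
```

For instance, {\tt read(tree([6,7,8,2,3,4,1,5]), 1, 8)} recovers $p = 67823415$, the permutation of Figure~\ref{binplane}.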
A {\em left descendant} (resp. {\em right descendant}) of a vertex $x$
in a binary plane tree is a vertex in the left (resp. right) subtree
of $x$. The left (resp. right) subtree of $x$ does {\em not} contain $x$
itself.
It is straightforward to see that $p_ip_j$ is a 12-pattern in $p$
if and only if $p_i$ is a left-descendant of $p_j$ in $T(p)$. On the other
hand, $p_jp_i$ is a 21-pattern in $p$ if and only if either $p_i$ is a
right descendant of $p_j$ in $T(p)$ or there is a vertex $x$ in $T(p)$ so
that $p_j$ is a left descendant of $x$ and $p_i$ is a right descendant of $x$.
In the previous section we gave an exhaustive list of the ways in which
213-patterns and 231-patterns can occur in a 132-avoiding permutation.
The reader is invited to translate that list into the language of
binary plane trees.
\subsection{Our Bijection}
Let $p$ be a 132-avoiding $n$-permutation, and let $Q$ be an occurrence
of the pattern 213 in $p$. Let $Q_2,Q_1,Q_3$ be the three
vertices of $T(p)$ that correspond to $Q$, going left to right.
Let us
color these three vertices black. There are then two possibilities.
\begin{enumerate}
\item Either $Q_1$ is a right descendant of $Q_2$ and $Q_2$ is a left descendant
of $Q_3$, or
\item there exists a lowest left descendant $Q_x$ of $Q_3$ so that $Q_2$ is
a left descendant
of $Q_x$ and $Q_1$ is a right descendant of $Q_x$.
\end{enumerate}
Let $A_n$ be the set of all binary plane trees on $n$ vertices in which three
vertices forming a 213-pattern are colored black. Let $B_n$ be the set
of all binary plane trees on $n$ vertices in which three vertices forming
a 231-pattern are colored black.
Now we are going to define a map $f:A_n\rightarrow B_n$. We will then
prove that $f$ is a bijection.
The map $f$ will be defined differently in the two cases described above.
\begin{itemize}
\item {\em Case 1.} If $T\in A_n$ is in the first case, then let
$f(T)$ be the tree obtained by
interchanging the right subtree of $Q_2$ and the right subtree of $Q_3$.
Keep all three black vertices $Q_i$ black, even as $Q_1$ gets moved.
See Figure \ref{firstmove} for an illustration.
\begin{figure}[ht]
\begin{center}
\epsfig{file=firsttreemove.eps}
\caption{Interchanging the right subtrees of $Q_2$ and $Q_3$.}
\label{firstmove}
\end{center}
\end{figure}
Note that in $f(T)$, in the set of black vertices, there is one that is
an ancestor of the other two, namely $Q_3$.
\item {\em Case 2.} If $T\in A_n$ is in the second case, then let $f(T)$ be the
tree obtained by interchanging the right subtrees of the
vertices $Q_x$ and $Q_3$,
and coloring $Q_2$, $Q_x$ and $Q_1$ black. See Figure \ref{secondmove}
for an illustration.
\begin{figure}[ht]
\begin{center}
\epsfig{file=secondtreemove.eps}
\caption{Interchanging the right subtrees of $Q_x$ and $Q_3$.}
\label{secondmove}
\end{center}
\end{figure}
Note that in $f(T)$, there is no black vertex that is an ancestor of the
other two black vertices. Also note that in $f(T)$, the
lowest common ancestor of $Q_x$ and $Q_1$ is $Q_3$.
\end{itemize}
It is a direct consequence of our definitions that if $T\in A_n$, then
$f(T)\in B_n$.
Now we are in a position to prove the main result of this section.
\begin{theorem} \label{bijective}
The map $f:A_n\rightarrow B_n$ defined above is a bijection.
\end{theorem}
\begin{proof}
Let $U\in B_n$. We will show that there is exactly one $T\in A_n$ so that
$f(T)=U$ holds. This will show that $f$ has an inverse, proving that
$f$ is a bijection.
By definition, three nodes of $U$ are colored black, and the entries
of the permutation corresponding to $U$ form a 231-pattern.
Let $K_2$, $K_3$, and $K_1$ denote these three vertices, from left to right.
There are two possibilities for the location of the $K_i$ relative to each
other. We will show that in both cases, $U$ has a unique preimage under $f$,
essentially because swapping two subtrees is an involution.
\begin{enumerate}
\item If $K_3$ is an ancestor of both other black
vertices, then $f(T)=U$ implies that $T$ belongs to Case 1.
In this case, the unique $T\in A_n$ satisfying $f(T)=U$ is
obtained
by swapping the right subtrees of $K_3$ and $K_2$, and keeping all three black
vertices black, even as $K_1$ gets moved.
\item If $K_3$ is not an ancestor of both other black vertices, then
$f(T)=U$ implies that
$T$ belongs to Case 2.
In this case, let $K_x$ be the lowest common ancestor
of $K_3$ and $K_1$. Then the unique $T\in A_n$ satisfying $f(T)=U$ is obtained
by swapping the right subtrees of $K_3$ and $K_x$, and coloring $K_x$ black
instead of $K_3$, while keeping $K_1$ and $K_2$ black.
\end{enumerate}
This completes the proof.
\end{proof}
\section{A Generalization}
In this section, we will significantly generalize the result of the
previous section. The key observation is that in the proof of Theorem
\ref{bijective}, the {\em left} subtrees of $Q_1$ and $Q_2$ never changed.
In order to state our result, we need the following
definitions.
\begin{definition}
Let $q$ be a pattern of length $k$ and let $t$ be a pattern of length $m$.
Then $q\oplus t$ is the pattern of length $k+m$ defined by
\[ (q\oplus t)_i= \left\{ \begin{array}{l@{\ }l}
q_i \hbox{ if $i\leq k$},\\
\\
t_{i-k} +k \hbox{ if $i>k$. }
\end{array}\right.
\]
\end{definition}
In other words, $q\oplus t$ is the concatenation of $q$ and $t$ so that
all entries of $t$ are increased by the size of $q$.
\begin{example}
If $q=3142$ and $t=132$, then $q\oplus t=3142576$.
\end{example}
\begin{definition}
Let $q$ be a pattern of length $k$ and let $t$ be a pattern of length $m$.
Then $q\ominus t$ is the pattern of length $k+m$ defined by
\[ (q\ominus t)_i= \left\{ \begin{array}{l@{\ }l}
q_i + m \hbox{ if $i\leq k$},\\
\\
t_{i-k} \hbox{ if $i>k$. }
\end{array}\right.
\]
\end{definition}
In other words, $q\ominus t$ is the concatenation of $q$ and $t$ so that
all entries of $q$ are increased by the size of $t$.
\begin{example}
If $q=3142$ and $t=132$, then $q\ominus t=6475132$.
\end{example}
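The two operations are straightforward to implement; the following Python sketch (our names {\tt oplus} and {\tt ominus}, on permutations given as lists) reproduces the two examples above.

```python
def oplus(q, t):
    """Direct sum q (+) t: concatenate q and t, with every entry of t
    increased by the length of q."""
    return list(q) + [x + len(q) for x in t]

def ominus(q, t):
    """Skew sum q (-) t: concatenate q and t, with every entry of q
    increased by the length of t."""
    return [x + len(t) for x in q] + list(t)
```

For example, {\tt oplus([3,1,4,2], [1,3,2])} gives $3142576$ and {\tt ominus([3,1,4,2], [1,3,2])} gives $6475132$, as in the examples.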
Now we are ready to state and prove the most general result of this paper.
\begin{theorem} \label{general}
Let $q$ and $t$ be any non-empty patterns that end in their largest entry.
Let $i_u$ denote the increasing pattern $12\cdots u$.
Then for all positive integers $n$, we have
\[S_{n,132}((q\ominus t)\oplus i_u) = S_{n,132}((q\oplus i_u)\ominus t).\]
\end{theorem}
In particular, the result of the previous section is the special case
of Theorem \ref{general} in which $q=t=i_u=1$ (the one-entry pattern 1).
\begin{example} If $q=3124$, $t=213$, and $u=2$,
then Theorem \ref{general} says
that \[S_{n,132}(645721389)=S_{n,132}(645789213).\]
\end{example}
\begin{proof} (of Theorem \ref{general})
Note that we can assume that $q$ and $t$ are both 132-avoiding, since
otherwise the statement of Theorem \ref{general} is trivially true as both
sides are equal to 0.
Let $k$ denote the length of $q$, let $m$ denote the length of $t$, and let
$h=k+m+u$ denote the common length of the two patterns in question.
Similarly to the proof of Theorem \ref{bijective},
let $A_n$ be the set of all binary plane trees on $n$ vertices in which
$h$ vertices forming a $((q\ominus t)\oplus i_u)$-pattern are colored black,
and let $B_n$
be the set of all binary plane trees on $n$ vertices in which $h$ vertices
forming a $((q\oplus i_u)\ominus t)$-pattern are colored black.
Let $T\in A_n$. Let $Q_b$ be the $k$th black vertex of $T$ in the
in-order reading, let $Q_a$ be
the $(k+m)$th black vertex of $T$, and let $Q_c$ be the rightmost
black vertex of $T$. We are now going to construct a bijection
$F:A_n\rightarrow B_n$. The construction is analogous to the one that
we saw before Theorem \ref{bijective}.
\begin{enumerate}
\item If $Q_a$ is a right descendant of $Q_b$, then let $F(T)$ be
the tree obtained from $T$ by swapping the right subtree of $Q_b$
and the right subtree of $Q_c$. Note that in $F(T)$, the black vertices
form a $(q\oplus i_u)\ominus t$-pattern, and that $Q_c$ is an ancestor
of all other black vertices in $F(T)$.
See Figure \ref{firstg} for an illustration.
\begin{figure}[ht]
\begin{center}
\epsfig{file=firstg.eps}
\caption{Interchanging the right subtrees of $Q_b$ and $Q_c$, and
turning a copy of 341256 into a copy of 345612.}
\label{firstg}
\end{center}
\end{figure}
\item Otherwise, there exists a lowest vertex $Q_x\in T$ so that $Q_b$
is a left descendant
of $Q_x$ and $Q_a$ is a right descendant of $Q_x$. Note that in this
case, it follows that $Q_x$ is not black. Now let $F(T)$ be the
tree obtained from $T$ by swapping the right subtree of $Q_x$ and the right
subtree of $Q_c$, and by coloring $Q_x$ black, instead of $Q_c$. Note that
again, in $F(T)$, the black vertices
form a $(q\oplus i_u)\ominus t$-pattern. Also note that there is no black
vertex
in $F(T)$ that would be an ancestor of all other black vertices.
See Figure \ref{secondg} for an illustration.
\begin{figure}[ht]
\begin{center}
\epsfig{file=secondg.eps}
\caption{Interchanging the right subtrees of $Q_x$ and $Q_c$,
coloring $Q_x$ black instead of $Q_c$, and
turning a copy of 341256 into a copy of 345612.}
\label{secondg}
\end{center}
\end{figure}
\end{enumerate}
It is straightforward to show that $F:A_n \rightarrow B_n$ is a bijection.
Indeed, let $U\in B_n$. If there is a black vertex in $U$ that is an ancestor
of all other black vertices, then $U$ could only be obtained by the first
rule, otherwise $U$ could only be obtained by the second rule.
The unique preimage $F^{-1}(U)$ is then obtained by swapping the appropriate
right subtrees. In the first case, swap the right subtrees of $U_b$ and
$U_c$, where $U_b$ is the $k$th and $U_c$ is the $(k+u)$th black vertex
of $U$ in the in-order reading.
In the second case, let $U_x$ be the $(k+1)$st black vertex
of $U$, let $U_a$ be the last black vertex of $U$,
and let $U_c$ be the lowest common ancestor of $U_x$ and $U_a$. Then
the unique preimage $F^{-1}(U)$ is obtained by swapping the right
subtrees of $U_x$ and $U_c$, and coloring $U_c$ black instead of $U_x$.
\end{proof}
Note that by transitivity, Theorem \ref{general} implies the following.
\begin{corollary}
Let $q$, $t$, and $i_u$ be as in Theorem \ref{general}, and let
$1\leq v<u$. Then we have
\[S_{n,132}(((q\oplus i_v)\ominus t)\oplus i_{u-v})=S_{n,132}((q\ominus t)\oplus i_u).\]
\end{corollary}
\begin{proof}
Theorem \ref{general} shows that both
sides are equal to $S_{n,132}((q\oplus i_u)\ominus t)$.
\end{proof}
\section{Further Directions}
Formula (\ref{exactfora}) implies that $S_{n,132}(213)\sim C_1 4^n n$, while
the generating functions computed in \cite{occurrences} imply that
$S_{n,132}(321)\sim C_2 4^n n^{3/2}$ and $S_{n,132}(123)\sim C_3 4^n n^{1/2}$,
where the $C_i$ are positive constants.
So occurrences of non-monotone patterns of length three
are asymptotically negligible compared to
occurrences of 321, and asymptotically dominant compared to occurrences
of 123; the frequency of non-monotone patterns lies halfway between the two
extremes.
While precise formulae like the ones given in earlier sections of
this paper may not be obtainable
for longer patterns, comparative results like the ones described in the previous
paragraph may be possible to prove even for such patterns.
If we set $u=1$ and $h=k+m+1$, then Theorem \ref{general}
provides
$\sum_{i=2}^{h-1}c_{i-2}c_{h-i-1}=c_{h-2}$ non-trivial examples
of two patterns
$s$ and $s'$ for which $S_{n,132}(s)=S_{n,132}(s')$ for all $n$.
Other choices of $u$ provide additional such pairs. However,
it seems that there are other pairs of patterns whose total number
of copies in all 132-avoiding permutations agree. We hope to discuss
such pairs in an upcoming paper.
Are there any such pairs when 132 is replaced by another pattern $r$?
Are there any patterns $r$ and $r'$ for which $S_{n,r}(u)=S_{n,r'}(u')$ for all
$n$ and
the equality is non-trivial?
Finally, how do the permutation statistics studied in this paper
translate to the other 150 families of objects counted by the
Catalan numbers listed in \cite{stanley}?
| {
"timestamp": "2012-02-10T02:03:45",
"yymm": "1202",
"arxiv_id": "1202.2023",
"language": "en",
"url": "https://arxiv.org/abs/1202.2023",
"abstract": "We prove that the total number $S_{n,132}(q)$ of copies of the pattern $q$ in all 132-avoiding permutations of length $n$ is the same for $q=231$, $q=312$, or $q=213$. We provide a combinatorial proof for this unexpected threefold symmetry. We then significantly generalize this result to show an exponential number of different pairs of patterns $q$ and $q'$ of length $k$ for which $S_{n,132}(q)=S_{n,132}(q')$ and the equality is non-trivial.",
"subjects": "Combinatorics (math.CO)",
"title": "Surprising symmetries in 132-avoiding permutations"
} |
https://arxiv.org/abs/1911.12464 | Words With Few Palindromes, Revisited | In 2013, Fici and Zamboni proved a number of theorems about finite and infinite words having only a small number of factors that are palindromes. In this paper we rederive some of their results, and obtain some new ones, by a different method based on finite automata. |
\section{Introduction}
In this paper we are concerned with certain avoidance properties
of finite and infinite words.
Recall that a word $x$ is said to be a {\it factor} of a word
$w$ if there exist words $y,z$ such that $w = yxz$. For example,
the word {\tt act} is a factor of the English word
{\tt factor}. We sometimes say $w$ {\it contains\/} $x$.
Another term for {\it factor}
is {\it subword}, although this latter term sometimes refers to
a different concept entirely. We say a (finite or infinite)
word $x$ {\it avoids} a
set $S$ if no element of $S$ is a factor of $x$.
The reverse of a word $x$ is written $x^R$. Thus, for example,
${\tt (drawer)}^R = {\tt reward}$. A word $x$ is a {\it palindrome\/}
if $x = x^R$, such as the English word {\tt radar}. A
palindrome is called {\it even\/} if its length is even, and
{\it odd\/} if its length is odd. For example, the English
word {\tt noon} is even, while {\tt madam} is odd.
Fici and Zamboni \cite{Fici&Zamboni:2013} studied avoidance of
palindromes. In particular, they were interested in constructing
infinite words with the minimum possible number of distinct
palindromic factors, and
infinite words that minimize the length of the largest palindromic
factor. In both cases these minima crucially depend on the size of the
underlying alphabet.
In this paper we revisit their results using a different approach.
The crucial observation is in Section~\ref{two}: the set of
finite words over a finite alphabet containing at most
$n$ distinct palindromic factors (resp., whose largest palindromic
factor is of length at most $n$) is regular.
The companion paper to this one is \cite{Fleischer&Shallit:2019},
where some of the same ideas are used.
\section{Palindromes and regularity}
\label{two}
Let $x$ be a finite or infinite word. The set of all of its
factors is written $\Fac(x)$, and the set of its
factors that are palindromes is written $\PalFac(x)$.
Let $P_\ell (\Sigma)$ (resp., $P_{\leq \ell} (\Sigma)$)
be the set of all palindromes of length
$\ell$ (resp., length $\leq \ell$) over $\Sigma$. Of course, since
both of these sets are finite, they are regular.
\begin{theorem}
Let $S$ be a finite set of palindromes over an alphabet
$\Sigma$. Then the language
$$ C_\Sigma(S) := \{ x \in \Sigma^* \ : \ \PalFac(x) \subseteq S \} $$
is regular.
\label{one}
\end{theorem}
\begin{proof}
Let $\ell$ be the length of the longest palindrome in $S$.
We claim that $ \overline{C_\Sigma (S)} = L$, where
$$L = \bigcup_{t \in P_{\leq \ell+2} - S} \Sigma^* \, t \ \Sigma^* . $$
\noindent $\overline{C_\Sigma(S)} \subseteq L$: If $x \in
\overline{C_\Sigma(S)}$, then $x$ must have some palindromic factor $y$
such that $y \not\in S$. If $|y| \leq \ell+2$, then
$y \in P_{\leq \ell+2} - S$. If $|y| > \ell+2$, we can write
$y = uvu^R$ for some palindrome $v$ such that $|v| \in \{ \ell+1, \ell+2 \}$.
Hence $x$ has the palindromic factor $v$ and $v \in P_{\leq \ell+2} - S$.
In both cases $x \in L$.
\bigskip
\noindent $L \subseteq \overline{C_\Sigma(S)}$: Let $x \in L$.
Then $x \in \Sigma^* \, t \ \Sigma^* $ for some
$t \in P_{\leq \ell+2} - S$. Hence $x$ has a palindromic factor
outside the set $S$ and so $x \not\in C_\Sigma(S)$.
\bigskip
Thus we have written $\overline{C_\Sigma(S)}$ as the finite union of
regular languages, and so $C_\Sigma(S)$ is also regular.
\end{proof}
\begin{remark}
The set $P_{\leq \ell+2} (\Sigma) - S$ can be fairly large. However, because
$$ \Sigma^* \, x \, \Sigma^* \subseteq \Sigma^* \, y \, \Sigma^*$$
if $y$ is a factor of $x$,
we can replace $P_{\leq \ell+2} (\Sigma) - S$ in
Theorem~\ref{one} with the subset of its minimal elements
under the factor ordering.
(An element $x \in T$ is {\it minimal\/} if $x, y \in T$ with
$y$ a factor of $x$ implies that $x = y$.) This typically will
have many fewer elements.
\end{remark}
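This reduction can be carried out mechanically; the sketch below (our names {\tt palindromes\_upto} and {\tt minimal\_elements}, not from the paper) computes the minimal elements of $P_{\leq n}(\Sigma) - S$ under the factor ordering.

```python
from itertools import product

def palindromes_upto(sigma, n):
    """All palindromes of length <= n over the alphabet sigma, as strings
    (including the empty word)."""
    return {"".join(w) for m in range(n + 1)
            for w in product(sigma, repeat=m)
            if "".join(w) == "".join(w)[::-1]}

def minimal_elements(T):
    """Elements of T having no proper factor in T, i.e., the minimal
    elements of T under the factor (substring) ordering."""
    return {x for x in T
            if not any(y != x and y in x for y in T)}
```

For instance, over $\Sigma = \{0,1\}$ with $S = \{\varepsilon, 0, 1\}$, the minimal elements of $P_{\leq 3}(\Sigma) - S$ are $\{00, 11, 010, 101\}$: the palindromes $000$ and $111$ are discarded because they contain $00$ and $11$.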
\begin{corollary}
\leavevmode
\begin{itemize}
\item[(a)]
Let $D_{\ell} (\Sigma)$ be the set of finite words over $\Sigma$ containing
at most $\ell$ distinct palindromes as factors
(including the empty word). Then $D_{\ell} (\Sigma)$ is regular.
\item[(b)]
Let $E_{\ell} (\Sigma)$ be the set of finite words over
$\Sigma$ containing no palindrome of length $> \ell$ as a factor.
Then $E_{\ell} (\Sigma)$ is regular.
\item[(c)]
Let $R_{\ell,m} (\Sigma)$ be the set of finite words over
$\Sigma$ containing no even palindrome of length $>\ell$ nor
any odd palindrome of length $>m$ as factors. Then
$R_{\ell,m}(\Sigma)$ is regular.
\item[(d)]
Let $T_{\ell,m} (\Sigma)$ be the set of finite words over
$\Sigma$ containing at most $\ell$ even palindromes and
$m$ odd palindromes. Then $T_{\ell,m}(\Sigma)$ is regular.
\end{itemize}
\label{palreg}
\end{corollary}
\begin{proof}
\leavevmode
\begin{itemize}
\item[(a)] Note that no word in $D_{\ell} (\Sigma)$ can contain a palindrome of
length $r \geq 2\ell$ as a factor, because then it would also contain
palindromes of length $0, 2,\ldots, r-2$ as factors ($r$ even)
or length $1,3, \ldots, r-2$ as factors ($r$ odd). In both
cases this gives at least $\ell+1$ distinct palindromes.
Hence
$$ D_\ell (\Sigma) =
\bigcup_{
{|S| \leq \ell}
\atop
{S \subseteq P_{\leq 2\ell-1} (\Sigma)}
}
C_{\Sigma} (S) , $$
the union of a finite number of regular languages.
\item[(b)] We have $E_\ell (\Sigma) = C_{\Sigma} (P_{\leq \ell} (\Sigma)) $.
\item[(c)] We have $R_{\ell,m} (\Sigma) =
C_{\Sigma} \biggl( \bigl(P_{\leq \ell} (\Sigma)
\, \cap \, (\Sigma^2)^* \bigr) \ \cup \ \bigl(P_{\leq m} (\Sigma) \, \cap \, \Sigma(\Sigma^2)^* \bigr) \biggr) $.
\item[(d)] We have $$T_{\ell, m} (\Sigma) =
\bigcup_{{|S_1| \leq \ell, \ |S_2| \leq m} \atop {S_1 \subseteq \bigcup_{0 \leq i < \ell}
P_{2i} (\Sigma), \ S_2 \subseteq \bigcup_{0 \leq i < m}
P_{2i+1} (\Sigma)}} C_\Sigma (S_1 \cup S_2).$$
\end{itemize}
\end{proof}
Theorem~\ref{one} and Corollary~\ref{palreg} implicitly provide
an algorithm for actually finding the DFA's accepting the languages
$D_\ell (\Sigma)$, $E_\ell (\Sigma)$, $R_{\ell,m} (\Sigma)$,
and $T_{\ell,m} (\Sigma)$: namely,
construct automata for each term of the unions and intersections,
and combine them
using standard techniques (e.g., \cite[Sect.~3.2]{Hopcroft&Ullman:1979}),
possibly using minimization at each step. This can be
carried out, for example, using a software package such as
{\tt Grail} \cite{Raymond&Wood:1994,Campeanu:2019}.
However, our experience shows that the intermediate automata so generated
can be quite large. Instead, we use a different approach to
construct the automata directly, which
we now illustrate for the case of $D_\ell(\Sigma)$, as follows.
The states are of the form $\Sigma^{\leq 2\ell-1} \times 2^U$, where
$U$ is the set of the nonempty palindromes of length at most $2\ell-1$.
Given a state of the form $(x, S)$, upon reading the letter $a$,
we go to the new state $(y, T)$, where $y = xa$ (if $|xa| \leq 2\ell-1$)
or the suffix of length $2\ell-1$ of $xa$ (if $|xa| = 2\ell$), and
$T = S \ \cup \ \PalFac(xa)$. If $|T| > \ell$, it is labeled as
a rejecting state.
The resulting automaton, as described, can still be rather large. However,
many states will not be reachable from the start state. Instead,
we construct all reachable states
using a queue, in a breadth-first manner starting from the
initial state $(\varepsilon, \emptyset)$. As soon as we reach a state
$(x,S)$ with $|S| > \ell$, the state is labeled as a dead state and
we do not append it to the queue.
We implemented this idea in Dyalog APL. Our program creates an
automaton in {\tt Grail} format which can then be minimized using
{\tt Grail}.
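The breadth-first construction just described can be sketched in a few lines of Python (an illustrative reimplementation, not the authors' APL program; {\tt build\_dfa}, {\tt count\_accepted}, and {\tt brute\_count} are our names). A state is a pair (window, $S$): the suffix of length at most $2\ell-1$ read so far, together with the set of distinct nonempty palindromic factors seen. Since the budget $\ell$ includes the empty word, a state dies once more than $\ell - 1$ nonempty palindromes have occurred.

```python
from collections import Counter, deque
from itertools import product

def pal_suffixes(w):
    """Nonempty palindromic suffixes of the string w."""
    return {w[i:] for i in range(len(w)) if w[i:] == w[i:][::-1]}

def build_dfa(sigma, ell):
    """BFS construction of a DFA for D_ell(sigma): words having at most
    ell distinct palindromic factors, the empty word included."""
    start = ("", frozenset())
    trans, dead, seen = {}, set(), {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        x, S = state
        for a in sigma:
            xa = x + a
            # any palindrome created by reading a is a palindromic suffix of xa
            T = frozenset(S | pal_suffixes(xa))
            nxt = (xa[-(2 * ell - 1):], T)
            trans[(state, a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                if len(T) > ell - 1:
                    dead.add(nxt)      # too many palindromes: dead state
                else:
                    queue.append(nxt)
    return start, trans, dead

def count_accepted(sigma, ell, n):
    """Number of length-n words over sigma accepted by the DFA for D_ell.
    (The DFA is rebuilt on every call; a sketch, not optimized.)"""
    start, trans, dead = build_dfa(sigma, ell)
    cnt = Counter({start: 1})
    for _ in range(n):
        nxt = Counter()
        for st, c in cnt.items():
            if st not in dead:
                for a in sigma:
                    nxt[trans[(st, a)]] += c
        cnt = nxt
    return sum(c for st, c in cnt.items() if st not in dead)

def brute_count(sigma, ell, n):
    """Direct check: enumerate all length-n words and count palindromic factors."""
    def pals(w):
        return {w[i:j] for i in range(len(w))
                for j in range(i + 1, len(w) + 1) if w[i:j] == w[i:j][::-1]}
    return sum(1 for t in product(sigma, repeat=n)
               if len(pals("".join(t))) <= ell - 1)
```

For small $n$ the automaton count agrees with direct enumeration; for instance, {\tt count\_accepted("01", 11, 11)} reproduces the value $d_{2,11}(11) = 292$ reported later in the paper.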
Our approach allows us to recover many of the results
of Fici and Zamboni, and even more. For example, the DFA's
we compute give us
a complete description of {\it all\/} words, both finite
and infinite, containing at most $\ell$ distinct palindromic factors.
It provides an easy and efficient way to determine whether
or not there exist infinite words with a given avoidance property,
and if so, whether some of these words are aperiodic.
As corollaries, we can computably determine a linear recurrence
giving the number $a(n)$ of such words of length $n$,
and the asymptotic growth rate of the sequence $(a(n))_{n \geq 0}$.
Finally, our approach replaces a long case-based argument
that can be difficult to follow, and is prone to error,
with a machine computation that can be verified mechanically.
\section{Linear recurrences and automata}
\label{three}
We summarize some well-known techniques for enumerating the
number of length-$n$ words accepted by deterministic finite
automata that we use in this paper. For more details,
see, for example, \cite[Sect.~3.8]{Shallit:2009} and
\cite{Everest&vdp&Shparlinski&Ward:2003}.
We introduce some notation and terminology: if
$q(X) = q_t X^t + q_{t-1} X^{t-1} + \cdots + q_1X + q_0$
is a polynomial and ${\bf a} = (a(n))_{n \geq 0}$ is a sequence,
then $q \circ {\bf a}$ denotes the sequence
$(q_t a(t+i) + q_{t-1} a(t+i-1) + \cdots
+ q_1 a(i+1) + q_0 a(i))_{i \geq 0}$ obtained by taking the
dot product of the coefficients of $q$ with sliding ``windows''
of the sequence $\bf a$.
If $q \circ {\bf a}$ is the sequence $(0,0,0,\ldots)$, we
call $q$ an {\it annihilator\/} of $(a(n))_{n \geq 0}$.
It is now easy to verify that if $q, r$ are polynomials, then
$(qr) \circ {\bf a} = q \circ (r \circ {\bf a})$.
We also define $\Lead(q) = q_t$ to be the leading coefficient of $q$.
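These conventions are easy to make concrete; the following Python sketch (our names {\tt window} and {\tt polymul}, with polynomials as ascending coefficient lists) implements the window operator and checks the composition identity $(qr) \circ {\bf a} = q \circ (r \circ {\bf a})$.

```python
def window(q, a):
    """Apply the polynomial q (coefficients q[0], ..., q[t], ascending)
    to sliding windows of the sequence a:
    result[i] = q[t]*a(t+i) + ... + q[1]*a(i+1) + q[0]*a(i)."""
    t = len(q) - 1
    return [sum(q[j] * a[i + j] for j in range(t + 1))
            for i in range(len(a) - t)]

def polymul(q, r):
    """Product of two polynomials given as ascending coefficient lists."""
    out = [0] * (len(q) + len(r) - 1)
    for i, qi in enumerate(q):
        for j, rj in enumerate(r):
            out[i + j] += qi * rj
    return out
```

For example, $X - 2$ (the list {\tt [-2, 1]}) annihilates the sequence $(2^n)_{n \geq 0}$, and applying $X - 1$ first and then $X - 2$ gives the same result as applying $(X-1)(X-2)$ directly.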
Suppose $Q = \{ q_0, q_1, \ldots, q_{r-1} \}$ and
$A = (Q, \Sigma, \delta, q_0, F)$ is an $r$-state DFA.
From this we can compute an $r \times r$ matrix
$M$ such that $M[i,j] = \# \{ a \in \Sigma \ : \ \delta(q_i, a) = q_j \}$.
Let $v = [1 \ 0 \ 0 \ \cdots 0]$ be the row vector
with a $1$ in the first position and $0$'s elsewhere, and
let $w$ be the column vector with $1$'s in positions corresponding
to the final states $F$ and $0$'s corresponding to $Q-F$.
Then $a(n)$, the number of length-$n$ words accepted by $A$, is
$v M^n w$.
We can find a linear recurrence for the sequence $(a(n))_{n \geq 0}$ as follows:
first, we compute the minimal polynomial $p(X) =
X^t + p_{t-1} X^{t-1} + \cdots + p_1X + p_0$ of $M$ using standard
techniques. Then $p(M) = 0$, so
$M^t + p_{t-1} M^{t-1} + \cdots + p_1M + p_0I = 0$. By multiplying
by $M^i$, we get
$M^{t+i} + p_{t-1} M^{t+i-1} + \cdots + p_1M^{i+1} + p_0 M^i = 0$.
By premultiplication by $v$ and postmultiplication by $w$, we get
$v M^{t+i} w + p_{t-1} v M^{t+i-1} w + \cdots
+ p_1 v M^{i+1} w + p_0 vM^i w = 0$.
Hence $a(t+i) + p_{t-1} a(t+i-1) + \cdots + p_1 a(i+1) + p_0 a(i) = 0$,
and hence $(a(n))_{n \geq 0}$ satisfies a linear
recurrence with constant coefficients given by the $p_i$.
Using our terminology, the polynomial $p$ annihilates $(a(n))_{n \geq 0}$.
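As a small illustration (our example, not from the paper): for the 2-state DFA accepting binary words with no factor {\tt 11} (dead state omitted, as usual), the transfer-matrix computation $a(n) = v M^n w$ yields shifted Fibonacci numbers, and the minimal polynomial $X^2 - X - 1$ of $M$ gives the recurrence $a(n+2) = a(n+1) + a(n)$.

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def count_words(M, v, w, n):
    """a(n) = v M^n w: the number of length-n words accepted by the DFA
    with transfer matrix M, start vector v, and final vector w."""
    row = [list(v)]
    for _ in range(n):
        row = matmul(row, M)
    return sum(row[0][i] * w[i] for i in range(len(w)))

# DFA avoiding the factor 11: state 0 = last letter was 0 (or start),
# state 1 = last letter was 1; M[i][j] counts letters moving state i to j.
M = [[1, 1],
     [1, 0]]
v = [1, 0]      # start in state 0
w = [1, 1]      # both states are accepting
```

The first values $1, 2, 3, 5, 8, 13, \ldots$ satisfy the Fibonacci recurrence, as forced by the minimal polynomial.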
However, $p$ may not be the lowest-degree
annihilator of $(a(n))_{n \geq 0}$. A
lower degree annihilator will necessarily be a divisor of the polynomial
$p$. The lowest degree annihilator can be determined using an algorithm
based on the following theorem, which seems to be new.
\begin{theorem}
Suppose the polynomial $p(X)$, with leading coefficient nonzero,
annihilates the sequence $(a(n))_{n \geq 0}$, and suppose $q(X) \, | \, p(X)$.
If the polynomial ${p \over q}$ also
annihilates the sequence $(a(n))_{n \geq 0}$ for the
first $\deg q$ consecutive windows
of $(a(n))_{n \geq 0}$,
then it annihilates all of $(a(n))_{n \geq 0}$.
\end{theorem}
\begin{proof}
Suppose $p \over q$ annihilates ${\bf a} = (a(n))_{n \geq 0}$
for the first $s := \deg q$ consecutive windows of $\bf a$, but
not all of $\bf a$.
Write ${p \over q} = d_t X^t + \cdots + d_1 X + d_0$. Define
$(f(n))_{n \geq 0} = {p \over q} \circ (a(n))_{n \geq 0}$.
Thus $ f(n) = \sum_{0 \leq i \leq t} d_i a(n+i)$. Then by hypothesis
we have $f(n) = 0$ for $n = 0, 1, \ldots , s-1$. Now $p$
annihilates $(a(n))_{n \geq 0}$, so $q$ annihilates $(f(n))_{n \geq 0}$.
Let $r$ be the least index such that
$f(r) \not= 0$; since $f(n) = 0$ for $n < s$, we have $r \geq s$.
So $(f(r-s), f(r-s+1), \ldots, f(r)) = (0,0, \ldots ,0, e)$
for some $e \not= 0$. But $q$ annihilates $(f(n))_{n \geq 0}$,
so $q \circ ( \overbrace{0,0, \ldots , 0}^{s}, e) = 0$. But
$q \circ ( \overbrace{0,0,\ldots ,0}^{s} ,e) = e \Lead(q) \not= 0$,
a contradiction.
\end{proof}
This gives us the following algorithm for finding the lowest-degree
annihilator of a recurrence.
\bigskip\hrule
\begin{tabbing}
{\sc Algorithm LDA}$(p, {\bf a})$ \\
\ \\
Write $p := q_1 q_2 \cdots q_m$, the product of (not necessarily
distinct) irreducible factors. \\
For \= $i := 1$ to $m$ do \\
\> $r := p/q_i$ \\
\> If $r$ annihilates the first $\deg q_i$ windows of $\bf a$, set
$p := p/q_i$. \\
return($p$);
\end{tabbing}
\smallskip\hrule
\bigskip
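Algorithm LDA can be implemented in a few lines once the irreducible factors of $p$ are available; the sketch below (our names; polynomials as ascending integer coefficient lists; exact polynomial division) removes one factor at a time, testing each quotient on the initial windows of $\bf a$ as licensed by the theorem.

```python
def polymul(q, r):
    """Product of two polynomials (ascending coefficient lists)."""
    out = [0] * (len(q) + len(r) - 1)
    for i, qi in enumerate(q):
        for j, rj in enumerate(r):
            out[i + j] += qi * rj
    return out

def polydiv(p, q):
    """Exact division p / q for integer polynomials (ascending lists);
    assumes q divides p."""
    p = list(p)
    out = [0] * (len(p) - len(q) + 1)
    for i in range(len(out) - 1, -1, -1):
        c = p[i + len(q) - 1] // q[-1]
        out[i] = c
        for j, qj in enumerate(q):
            p[i + j] -= c * qj
    return out

def annihilates_first_windows(r, a, count):
    """Does r annihilate the first `count` windows of the sequence a?"""
    t = len(r) - 1
    return all(sum(r[j] * a[i + j] for j in range(t + 1)) == 0
               for i in range(count))

def lda(factors, a):
    """Algorithm LDA: given the irreducible factors of an annihilator p of
    a, return the lowest-degree annihilator (a divisor of p).  By the
    theorem above, testing deg(q_i) windows of the quotient suffices."""
    p = [1]
    for f in factors:
        p = polymul(p, f)
    for f in factors:
        r = polydiv(p, f)
        if annihilates_first_windows(r, a, len(f) - 1):
            p = r
    return p
```

For instance, the sequence $a(n) = 2^n + 1$ is annihilated by $(X-1)(X-2)(X-3)$, and the algorithm strips the spurious factor $X - 3$, returning $(X-1)(X-2) = X^2 - 3X + 2$.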
Factors of the form $X^j$ in an annihilator can be removed if one assumes
that the recurrence begins at $a(j)$ instead of $a(0)$. For this reason,
in this paper, we do not report such terms in our annihilators.
In our computations, we used {\tt Maple} to compute minimal polynomials
(via the {\tt LinearAlgebra} package) and factor them.
\section{Automata and infinite words}
We recall some material from the companion paper \cite{Fleischer&Shallit:2019}.
The DFA's generated in this paper are for regular
languages $L$ that are defined by avoidance of a
finite set $S$ of finite words. Such languages are
called {\it factorial}; that is,
every factor of a word of $L$ is also a word of $L$.
The minimal DFA $M = (Q,
\Sigma, \delta, q_0, F)$ for a factorial language $L \not= \Sigma^*$
has exactly one nonaccepting state,
which is the dead state. (A state is {\it dead\/} if it is nonaccepting
and transitions to itself on all letters of the alphabet $\Sigma$.)
{\it In this paper, we do not display
this dead state in our figures, nor count it in our
discussion of the cardinality of a DFA's states.}
The (one-sided) infinite words with the given avoidance property are
then given by the infinite paths through $M$, starting at the start
state $q_0$.
A state $q$ is called {\it recurrent} if there is a nonempty word
$w$ such that $\delta(q,w) = q$. A state $q$ is called
{\it birecurrent} if there are two noncommuting words $x_0, x_1$ such
that $\delta(q,x_0) = \delta(q,x_1) = q$.
As shown in \cite{Fleischer&Shallit:2019}, an infinite word having
the desired avoidance property exists iff $M$ has a recurrent state,
and aperiodic infinite words exist iff $M$ has a birecurrent state.
In this latter case, there are actually uncountably many such words.
As shown in \cite{Fleischer&Shallit:2019}, these correspond to
the image, under the morphism $h: 0 \rightarrow x_0$, $1 \rightarrow x_1$,
of an aperiodic binary word.
Furthermore, we can find infinite words avoiding $S$ that are
(a) uniformly recurrent and aperiodic; (b) linearly recurrent and
aperiodic; (c) $k$-automatic for any $k \geq 2$, uniformly
recurrent, and aperiodic;
and (d) the fixed point of a primitive uniform morphism, which is
uniformly recurrent.
To see this, note that the image under a nonerasing morphism of a
uniformly recurrent infinite word is uniformly recurrent. So it
suffices to apply $h$ to any uniformly recurrent binary word, such
as the Thue-Morse word $\bf t$ \cite{Allouche&Shallit:1999}.
Similarly, the image under a nonerasing morphism of a linearly
recurrent infinite word is linearly recurrent.
To see that we can find a $k$-automatic word with the desired
properties, note that we can start with any $k$-automatic word
that is uniformly recurrent and aperiodic (for example,
the fixed point of $0 \rightarrow 0^{k-1} 1$ and
$ 1 \rightarrow 1 0^{k-1}$) and apply the morphism $h$
to it.
Finally, assume that $x_0$ and $x_1$ are chosen such that
for some $a\in \{ 0, 1\}$ we have $x_a$ starts with $a$.
Let $b = \{ 0, 1 \} - \{a\}$. Write $g(a) = x_a x_b$ and
$g(b) = x_b x_a$.
Then $g^\omega(a)$, the infinite fixed point of $g$ starting with
$a$, is uniformly recurrent.
In what follows, we use the alphabet $\Sigma_k = \{ 0, 1, \ldots, k-1 \}$.
A-numbers in the paper refer to sequences from the {\it On-Line Encyclopedia of
Integer Sequences} \cite{Sloane:2019}.
\section{Minimizing the number of palindromes}
We define $d_{k,\ell} (n)$ to be the number of length-$n$ words
in $D_\ell(\Sigma_k)$.
\subsection{Alphabet size 2}
\begin{theorem}{(Fici-Zamboni)}
There are infinite binary words containing at most 9 palindromes.
All are periodic, and of the form $x^\omega$ for
$x$ a conjugate of either $001011$ or $001101$.
There are no infinite binary words containing at most 8 palindromes.
\label{thm2-9}
\end{theorem}
\begin{proof}
We construct the DFA for $D_9 (\Sigma_2)$
as in Section~\ref{two}. It has 611 states before
minimization and 98 after minimization, and we omit it
here. No state is birecurrent,
but there are 12 recurrent states. Examining the associated
paths easily gives the result.
To see the result for 8 palindromes, we can construct the
DFA for $D_8(\Sigma_2)$.
It has 259 states before minimization and 23 after minimization.
No state is recurrent. The longest word accepted is of length $8$.
Alternatively, one can prove this result using a simple breadth-first
search of the space of words.
\end{proof}
\begin{theorem}{(Restatement of Fici-Zamboni)}
There are exactly 40 infinite binary words containing exactly 10 palindromes.
All are ultimately periodic, and are of the following forms:
\begin{itemize}
\item $x^\omega$ for $x$ a conjugate of $0001011$, $0001101$, $0010111$,
or $0011101$;
\item $y (001011)^\omega$ for $y \in \{ 0, 01, 111, 0011, 11011, 101011 \}$;
\item $y (001101)^\omega$ for $y \in \{ 0, 11, 001, 0101, 11101, 101101 \}$.
\end{itemize}
\end{theorem}
\begin{proof}
We create the automaton for $D_{10} (\Sigma_2)$ as in Section~\ref{two}.
It has 1655 states before minimization and 280 after. None of these
states are birecurrent. By examining the possible infinite paths, we see
these include those of Theorem~\ref{thm2-9} and the ones listed above.
\end{proof}
\begin{theorem}
There are uncountably many aperiodic, uniformly
recurrent infinite binary words
containing exactly 11 palindromes.
\end{theorem}
\begin{proof}
Using the method in Section~\ref{two}, we can construct the DFA
for $D_{11} (\Sigma_2)$. It has 5253 states before
minimization and 810 states after.
We do not give the latter automaton here, as it is
too large to display in a reasonable way, but it can be downloaded
from the second author's website.
State 738 is birecurrent, with
two paths labeled $x_0 = 0001011001011$ and
$x_1 = 001011001011$.
\end{proof}
\begin{corollary}
The number of length-$n$ binary words containing at most 11 distinct
palindromic factors (including the empty word) is $d_{2,11} (n)$, where
\begin{displaymath}
\begin{split}
&(d_{2,11}(0), \ldots, d_{2,11}(41)) = (1,2,4,8,16,32,64,128,256,512,1024,292,270,268,
276,276,288, \\
& 320,340, 364, 388,404,428,476,512,560,610,644,692,768,840,924,
1020,1100,1190,1316, \\
& 1452,1612, 1786,1952,2134,2348)
\end{split}
\end{displaymath}
and
\begin{align*}
d_{2,11}(n) &= -d_{2,11}({n-1}) - d_{2,11}({n-2}) - d_{2,11}({n-3})
-d_{2,11}({n-4}) - d_{2,11}({n-5}) + 2d_{2,11}({n-6}) \\
&\quad + 4d_{2,11}({n-7}) + 5d_{2,11}({n-8}) + 5 d_{2,11}({n-9})
+ 5 d_{2,11}({n-10}) + 5 d_{2,11}({n-11}) \\
& \quad + 2 d_{2,11}({n-12}) - 3 d_{2,11}({n-13}) - 6 d_{2,11}({n-14})
-8 d_{2,11}({n-15}) -8 d_{2,11}({n-16}) \\
& \quad -8 d_{2,11}({n-17}) -7 d_{2,11}({n-18}) - 3 d_{2,11}({n-19}) +
3 d_{2,11}({n-21}) + 4d_{2,11}({n-22}) \\
& \quad + 4 d_{2,11}({n-23}) + 4 d_{2,11}({n-24})
+ 3 d_{2,11}({n-25}) +2 d_{2,11}({n-26}) + d_{2,11}({n-27})
\end{align*}
for $n \geq 42$.
Asymptotically, $d_{2,11}(n) \sim c \cdot \alpha^n$, where
$\alpha \doteq 1.1127756842787054706297$ is the largest positive
real zero of $X^7 - X - 1$ and
$c \doteq 20.665$.
\end{corollary}
\begin{proof}
Using {\tt Maple}, we computed the minimal polynomial for the
matrix of the 811-state DFA described above. It is
\begin{multline*}
X^{15} (X-1)(X-2)(X+1)(X^2 + 1)(X^2 + X + 1)(X^2 - X + 1)(X^7 - X - 1) \\
(X^4 + 1)(X^6 + X^5 + X^4 + X^3 + X^2 + X + 1)(X^8 - X^2 - 1) .
\end{multline*}
Next, using the procedure described in Section~\ref{three},
we can find the minimal annihilator of the recurrence.
It is
\begin{equation}
(X-1)(X+1)(X^2+X+1)(X^2-X+1)(X^7-X-1)(X^6 + X^5 + X^4 + X^3 + X^2 + X + 1)(X^8 - X^2 - 1).
\label{polys}
\end{equation}
When expanded, this gives the coefficients of the annihilator of
the sequence $(d_{2,11}(n))_{n \geq 0}$, which are given above.
To get the asymptotic behavior of the recurrence, we must find the largest
real zero of the polynomials given in \eqref{polys}. It is the largest
real zero of $X^7-X-1$, which is approximately $1.1127756842787054706297$.
\end{proof}
\begin{remark}
This is sequence \seqnum{A330127} in the OEIS.
\end{remark}
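The initial values of $d_{2,11}(n)$ can be checked independently by exhaustive enumeration; the following Python sketch (an illustration, not the authors' code) recomputes them for $n \leq 13$. Note that for $n \leq 10$ every binary word of length $n$ has at most $n+1 \leq 11$ palindromic factors, so all $2^n$ words are counted, and the first drop occurs at $n = 11$.

```python
from itertools import product

def count_pal_factors(w):
    # number of distinct palindromic factors, including the empty word
    return len({w[i:j] for i in range(len(w) + 1)
                for j in range(i, len(w) + 1) if w[i:j] == w[i:j][::-1]})

d = [sum(1 for t in product("01", repeat=n)
         if count_pal_factors("".join(t)) <= 11) for n in range(14)]
print(d)
```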
In their paper,
Fici and Zamboni constructed a uniformly recurrent aperiodic binary word
containing 13 palindromic factors, and
whose set of factors is closed under reversal. We achieve the same
result using a different construction and a different proof.
\begin{theorem}
Define $G_0 = 001101000110$ and $G_{n+1} = G_n 01 G_n^R$ for $n \geq 0$.
Then $G_{\infty} = \lim_{n \rightarrow \infty} G_n$
is uniformly recurrent, aperiodic, and has 13 palindromic factors.
\end{theorem}
\begin{proof}
We start by constructing the DFA for the language $D_{13}(\Sigma_2)$
using the method described in Section~\ref{two}.
This DFA $M$ has 93125 states before
minimization and 6522 states after minimization. The unique
dead state is numbered 3012.
Next, we look at the transformations $\tau_{G_n}$ of states induced by the
words $G_n$. We claim that
\begin{itemize}
\item $\tau_{G_n} = \tau_{G_{n+1}}$ for $n \geq 2$;
\item $\tau_{G_n} = \tau_{G_n^R}$ for $n \geq 1$.
\end{itemize}
These identities are easily verified by induction, using the transition
function of $M$.
The resulting transformations of states for $n \geq 2$ are as follows:
\begin{align*}
0 & \xrightarrow{\ G_n\ } 4882 \xrightarrow{\ 01 \ } 5058
\xrightarrow{\ G_n^R\ } 4882 \\
0 & \xrightarrow{\ G_n^R\ } 4882 \xrightarrow{\ 10 \ } 5059
\xrightarrow{\ G_n\ } 4882
\end{align*}
Since these paths do not end in the unique nonaccepting state, the
corresponding words contain at most $13$ palindromes.
It is easy to see that the word $G_\infty$ is uniformly recurrent and
closed under reversal. This is left to the reader.
The fact that $G_\infty$ is not ultimately periodic follows
from \cite[Thm.~4]{Shallit:1982b}.
\end{proof}
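The palindrome counts of the words $G_n$ can also be checked mechanically; the following Python sketch (our illustration) computes them for $n \leq 4$. Since each $G_n$ is a prefix of $G_{n+1}$, the counts are nondecreasing, and they stabilize at $13$, in agreement with the theorem.

```python
def palindromic_factors(w):
    return {w[i:j] for i in range(len(w) + 1)
            for j in range(i, len(w) + 1) if w[i:j] == w[i:j][::-1]}

G, counts = "001101000110", []           # G_0
for n in range(5):
    counts.append(len(palindromic_factors(G)))
    G = G + "01" + G[::-1]               # G_{n+1} = G_n 01 G_n^R
print(counts)
```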
\subsection{Alphabet size 3}
\begin{theorem}[Fici-Zamboni]
If a ternary infinite word contains at most $4$ palindromes (including
the empty word), it is necessarily of the form $(abc)^\omega$
for distinct letters $a,b, c$. No ternary infinite word can contain
$3$ or fewer palindromes.
\end{theorem}
\begin{proof}
We construct the DFA for $D_4(\Sigma_3)$
using the algorithm suggested in
Section~\ref{two}. It has 52 states and,
when minimized, has 18 states.
It is depicted below in Figure~\ref{pal3num4}.
Only the states numbered $12, 13, 14, 15, 16, 17$ are recurrent, and
none of them are birecurrent. The desired result now easily follows from
examining the possible paths through these states.
\begin{figure}[H]
\begin{center}
\includegraphics[width=5.5in]{pal3num4min.pdf}
\end{center}
\caption{Automaton for ternary words containing at most $4$ palindromes}
\label{pal3num4}
\end{figure}
To see that no ternary infinite word can contain $3$ or fewer palindromes,
we can perform the same construction as above, but for $3$ palindromes.
The resulting automaton has 13
states (3 when minimized) and no recurrent states. We omit it here.
Alternatively, one can prove this result with a simple breadth-first
search of the space of words.
\end{proof}
\begin{theorem}
There are uncountably many aperiodic ternary words containing at most
$5$ palindromic factors.
\end{theorem}
\begin{proof}
We can construct the automaton for $D_5(\Sigma_3)$
as described in Section~\ref{two}.
It has 319 states before minimization and 69 states after.
We do not depict it here, as it is too large to visualize
clearly. The state 39 is birecurrent, with paths labeled
$x_0 = 0012$ and $x_1 = 012$.
\end{proof}
\begin{corollary}
The number of length-$n$ ternary words containing at most $5$ palindromic
factors is $d_{3,5}(n)$, where
$(d_{3,5}(0), \ldots, d_{3,5}(8)) = (1,3,9,27,81,42,54,66,78)$ and
$d_{3,5}(n) = d_{3,5}(n-3) + d_{3,5}(n-4)$ for $n \geq 9$.
Asymptotically we have $d_{3,5}(n) \sim c \alpha^n$ where
$\alpha \doteq 1.2207440846$ and
$c \doteq 16.07007$.
\end{corollary}
\begin{proof}
The minimal polynomial of the corresponding matrix
is
$$ X^5 (X-1) (X-3) (X^2 + X + 1) (X^4 - X - 1) .$$
Using the method in Section~\ref{three}, we can find the minimal
annihilator of the sequence, which is $X^4 - X - 1$.
The result now follows.
\end{proof}
\begin{remark}
This is sequence \seqnum{A329023} in the OEIS.
We have $d_{3,5}(n) = 6 \cdot$\seqnum{A164317}$(n)$ for $n \geq 5$.
\end{remark}
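The initial values and the recurrence can again be confirmed by brute force for small $n$; the following Python sketch (our illustration, not the authors' code) does so.

```python
from itertools import product

def count_pal_factors(w):
    # number of distinct palindromic factors, including the empty word
    return len({w[i:j] for i in range(len(w) + 1)
                for j in range(i, len(w) + 1) if w[i:j] == w[i:j][::-1]})

d = [sum(1 for t in product("012", repeat=n)
         if count_pal_factors("".join(t)) <= 5) for n in range(10)]
print(d)
```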
\section{Lengths of palindromes}
Instead of minimizing the total number of palindromes,
Fici and Zamboni also considered minimizing the length of the
longest palindrome. We can also do that with our method.
We define $e_{k,\ell} (n)$ to be the number of length-$n$ words
in $E_\ell (\Sigma_k)$.
\subsection{Alphabet size 2}
\begin{theorem}[Restatement of Fici-Zamboni]
There are exactly 20 infinite binary words having no palindromes
of length $>4$, and all are ultimately periodic. They are
as follows:
\begin{itemize}
\item $x^\omega$ for $x$ a conjugate of $001011$;
\item $x^\omega$ for $x$ a conjugate of $001101$;
\item $(0+00+111+1111)(001011)^\omega$;
\item $(0+00+11101+111101)(001101)^\omega$.
\end{itemize}
\label{pal2len4-thm}
\end{theorem}
\begin{proof}
The automaton for $E_4(\Sigma_2)$ is depicted in Figure~\ref{pal2len4},
and the only infinite paths are those given. (There are no
birecurrent states.)
\end{proof}
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.5in]{pal2len4m.pdf}
\end{center}
\caption{Automaton for binary words containing no palindromes of
length $>4$}
\label{pal2len4}
\end{figure}
\begin{theorem}
There are uncountably many uniformly recurrent
binary words containing no palindromes of length $>5$.
They are the labels of the paths through the automaton
in Figure~\ref{pal2len5}.
\end{theorem}
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.5in]{pal2len5m.pdf}
\end{center}
\caption{Automaton for binary words containing no palindromes of
length $>5$}
\label{pal2len5}
\end{figure}
\begin{proof}
As before. There are 719 states in the unminimized automaton
for $E_5 (\Sigma_2)$
and 62 states in the minimized one.
State 44 is birecurrent, with paths
$x_0 = 01010110$ and $x_1 = 0010101110$.
\end{proof}
\begin{theorem}
The sequence $(e_{2,5} (n))_{n \geq 0}$ counting the number of binary words of length $n$
containing no palindromes of length $>5$ satisfies the recurrence
$$e_{2,5}(n) = 3 e_{2,5}(n-6) + 2 e_{2,5}(n-7) + 2 e_{2,5}(n-8) + 2 e_{2,5}(n-9) + e_{2,5}(n-10)$$
for $n \geq 20$. Asymptotically
$e_{2,5}(n) \sim c \alpha^n$ where $\alpha \doteq
1.36927381628918060784\cdots$ is the positive real zero of
$X^{10}-3X^4-2X^3-2X^2-2X-1$, and $c \doteq 9.8315779\cdots$.
\end{theorem}
\begin{proof}
The minimal polynomial of the corresponding matrix is
\begin{equation}
X^{10} (X-2) (X^{10} + X^4 - 2X^3 -2X^2 -2X - 1)(X^{10} -3X^4 -2X^3 -2X^2 -2X - 1).
\label{pal35}
\end{equation}
The technique described in Section~\ref{three} can be used to
find the minimal annihilator for the recurrence. It is the last term
in the factorization \eqref{pal35}.
\end{proof}
\begin{remark}
The sequence $e_{2,5} (n)$ is sequence \seqnum{A329824} in the OEIS.
\end{remark}
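The language $E_5(\Sigma_2)$ is small enough to enumerate directly: if a word has a palindromic factor of length $\ell \geq 8$, then deleting its first and last letters gives one of length $\ell - 2 \geq 6$ ending one position earlier, so a word avoids all palindromic factors of length $>5$ if and only if, as it is built up letter by letter, no suffix of length $6$ or $7$ is ever a palindrome. The following Python sketch (our illustration) uses this to check the recurrence.

```python
def extendable(w):
    # no suffix palindrome of length 6 or 7 may appear at any step
    return not (len(w) >= 6 and w[-6:] == w[-6:][::-1]) and \
           not (len(w) >= 7 and w[-7:] == w[-7:][::-1])

e, words = [], [""]
for n in range(31):
    e.append(len(words))          # e[n] = number of valid words of length n
    words = [w + c for w in words for c in "01" if extendable(w + c)]
print(e[:11])
```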
\subsection{Alphabet size 3}
\begin{theorem}[Fici-Zamboni]
The only infinite ternary words having no palindromes of length
$>1$ are those of the form $(abc)^\omega$ for distinct letters
$a,b,c$.
\end{theorem}
\begin{proof}
The automaton for $E_1(\Sigma_3)$
has 16 states before minimization and 10 states after.
We omit it here. There are no birecurrent states, and
the only infinite paths are those given.
\end{proof}
\begin{theorem}
There are uncountably many ternary words containing no palindromes
of length $>2$.
\label{pal3len2-thm}
\end{theorem}
\begin{proof}
We can construct the automaton for
$E_2(\Sigma_3)$ as in Section~\ref{two}. It has 67 states
unminimized and 19 states when minimized.
State $6$ is birecurrent, with paths labeled
$x_0 = 211002$ and $x_1 = 11002$.
\begin{figure}[H]
\begin{center}
\includegraphics[width=6in]{pal3len2m.pdf}
\end{center}
\caption{Automaton for ternary words containing no palindromes of
length $>2$}
\label{pal3len2}
\end{figure}
\end{proof}
The Fibonacci numbers $F_n$ are defined by $F_0 = 0$, $F_1 = 1$, and
$F_n = F_{n-1} + F_{n-2}$ for $n \geq 2$.
\begin{corollary}
The number $e_{3,2}(n)$ of length-$n$ ternary words containing no
palindromes of length $>2$ is $6F_{n+1}$ for $n \geq 3$.
\end{corollary}
\begin{proof}
The minimal polynomial of the matrix is $X^3 (X-3)(X^2 - X - 1)(X^4 +
X^3 + 2X^2 + 2X + 1)$. The minimal annihilator is $X^2 - X - 1$.
The result now follows easily.
\end{proof}
\begin{remark}
The sequence $e_{3,2}(n)$
is sequence \seqnum{A330010} in the OEIS.
\end{remark}
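As above, a word has no palindromic factor of length $>2$ if and only if no suffix of length $3$ or $4$ is ever a palindrome as the word is built up letter by letter. A short Python check (our illustration) against the Fibonacci formula:

```python
def extendable(w):
    # forbid suffix palindromes of length 3 (xyx) and 4 (xyyx)
    return not (len(w) >= 3 and w[-1] == w[-3]) and \
           not (len(w) >= 4 and w[-4:] == w[-4:][::-1])

e, words = [], [""]
for n in range(16):
    e.append(len(words))
    words = [w + c for w in words for c in "012" if extendable(w + c)]

fib = [0, 1]
while len(fib) < 18:
    fib.append(fib[-1] + fib[-2])
print(e)
```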
\subsection{Alphabet size 4}
Fici and Zamboni proved that, over the alphabet
$\Sigma_4$, there is an infinite aperiodic
uniformly recurrent word whose only palindromes
are $\varepsilon, 0, 1, 2, 3$.
We show how to handle this
using our method.
\begin{theorem}
There is an infinite aperiodic uniformly recurrent word over
$\Sigma_4$ whose only palindromes are
$\varepsilon, 0, 1, 2, 3$.
\end{theorem}
\begin{proof}
To find the words avoiding all palindromes as factors except
these $5$, we can use Theorem~\ref{one}. After computing
the minimal elements, it suffices to avoid the factors
$$ \{00,11,22,33,010,020,030,101,121,131,202,212,232,303,313,323\} .$$
The minimal DFA is depicted in Figure~\ref{fig1}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.5in]{auts4.pdf}
\end{center}
\caption{Automaton for $4$-letter alphabet. The dead state, numbered
5, is omitted.}
\label{fig1}
\end{figure}
The state numbered $6$ is birecurrent, with two paths labeled
$2301$ and $301$. Let ${\bf x}$ be an aperiodic uniformly recurrent
word over $\{ 0, 1\}$ and define the morphism
$h(0) = 2301$ and $h(1) = 301$.
For example, we can take $\bf x$ to be the Thue-Morse word.
Then $h({\bf x})$ has the desired properties.
\end{proof}
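The image of a prefix of the Thue-Morse word under $h$ can be inspected directly. The following Python sketch (our illustration) verifies that a long prefix of $h({\bf x})$ contains no palindromic factor of length $\geq 2$; a finite check is of course only a sanity test for the infinite statement.

```python
# Thue-Morse word: bit n is the parity of the number of 1s in binary n
x = "".join(str(bin(n).count("1") % 2) for n in range(120))
h = {"0": "2301", "1": "301"}
y = "".join(h[c] for c in x)

pals = {y[i:j] for i in range(len(y) + 1)
        for j in range(i, len(y) + 1) if y[i:j] == y[i:j][::-1]}
print(sorted(pals))
```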
\begin{corollary}
The number $e_{4,1}(n)$ of length-$n$ words over $\Sigma_4$ having all their palindromic
factors contained in $\{ \varepsilon, 0, 1, 2, 3 \}$ is $3 \cdot 2^n$
for $n \geq 2$.
\end{corollary}
\begin{proof}
The minimal polynomial of the matrix corresponding to the
automaton is
$X^2 (X-1) (X-2)(X-4)(X+1)(X^2 + X + 2)$.
Using the procedure in Section~\ref{three} we can determine
the minimal annihilator, which is $X-2$.
It follows that $e_{4,1} (n) = 3 \cdot 2^n$ for $n \geq 2$.
\end{proof}
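The count $3 \cdot 2^n$ can be confirmed by brute force for small $n$ (our illustration, not the authors' code); having all palindromic factors in $\{\varepsilon, 0, 1, 2, 3\}$ means precisely having no palindromic factor of length $\geq 2$.

```python
from itertools import product

def only_trivial_palindromes(w):
    # True iff w has no palindromic factor of length >= 2
    return all(w[i:j] != w[i:j][::-1]
               for i in range(len(w)) for j in range(i + 2, len(w) + 1))

e = [sum(1 for t in product("0123", repeat=n)
         if only_trivial_palindromes("".join(t))) for n in range(9)]
print(e)
```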
Berstel, Boasson, Carton, and Fagnot
\cite{Berstel&Boasson&Carton&Fagnot:2009}
constructed an infinite word over $\Sigma_4$ that is uniformly
recurrent, has exactly $5$ palindromic factors, and
whose set of factors is closed under reversal,
as follows: define $B_0 = 01$ and $B_{n+1} = B_n 23 B_n^R$. This
is an example of {\it perturbed symmetry}; see
\cite{Dekking&MendesFrance&vanderPoorten:1982} for more details.
We can verify their construction using our method. Consider the DFA
in Figure~\ref{fig1}; then each word $w$ induces a transformation $\tau_w$ of
the states given by $q \rightarrow \delta(q,w)$. We claim that
\begin{itemize}
\item[(a)] $\tau_{B_n} = \tau_{B_n^R} = (9, 5, 5, 9, 9, 5, 5, 5, 5, 5, 9, 9, 5, 5, 9, 5, 5, 9)$ for $n \geq 1$;
\item[(b)] $\tau_{23} = (17, 17, 17, 5, 5, 5, 17, 5, 5, 17, 5, 5, 17, 17, 5, 5, 5, 5)$.
\item[(c)] $\tau_{32} = (14, 14, 14, 5, 5, 5, 14, 5, 5, 14, 5, 5, 5, 5, 5, 14, 14, 5)$.
\end{itemize}
The claims about $\tau_{B_1}$, $\tau_{B_1^R}$, $\tau_{23}$, and
$\tau_{32}$ are easily verified. The claim about $B_n$ then follows
by induction: the reader can
check that $\tau_{B_{n+1}} = \tau_{B_n 23 B_n^R} = \tau_{B_n}$
and $\tau_{B_{n+1}^R} = \tau_{B_n 32 B_n^R} = \tau_{B_n}$.
Since $0$ is mapped to accepting state $9$ by $B_n$, it follows that
each $B_n$ has the desired properties.
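The construction itself is easy to test directly in Python (our illustration). Each $B_n$ is a prefix of $B_{n+1}$, so checking $B_5$ (of length $126$) covers all the shorter prefixes at once.

```python
def palindromic_factors(w):
    return {w[i:j] for i in range(len(w) + 1)
            for j in range(i, len(w) + 1) if w[i:j] == w[i:j][::-1]}

B = "01"                      # B_0
for n in range(5):
    B = B + "23" + B[::-1]    # B_{n+1} = B_n 23 B_n^R
print(sorted(palindromic_factors(B)))
```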
\section{Odd and even palindromes}
In order to illustrate that the technique in this paper has wider
applicability, we now turn to a topic not covered in the paper of
Fici and Zamboni.
Because an odd palindromic factor of length $\ell$ implies the
existence of odd palindromic factors of all shorter odd lengths,
and similarly for even palindromic factors, it makes sense to
consider minimizing the lengths of odd and even palindromic factors
separately. This is what we do in this section.
We define $r_{k,\ell, m} (n)$ to be the number of length-$n$ words
in $R_{\ell,m} (\Sigma_k)$.
\subsection{Alphabet size 2}
\begin{theorem}
There are uncountably many uniformly recurrent binary words
having longest even palindrome factor of length $\leq 2$
and longest odd palindrome of length $\leq 5$.
\end{theorem}
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.5in]{a2palm2-5.pdf}
\end{center}
\caption{Automaton for binary words with
longest even palindrome factor of length $\leq 2$
and longest odd palindrome of length $\leq 5$.}
\label{palm2-25}
\end{figure}
\begin{proof}
We construct the automaton for $R_{2,5}(\Sigma_2)$ as discussed above.
Before minimization it has 155 states.
After minimization it has 44 states.
State 18 is birecurrent, with cycles
labeled $x_0 = 10100011$ and $x_1 = 1010100011$.
\end{proof}
\begin{theorem}
For $n \geq 0$, let $r_{2,2,5}(n)$ denote the number of length-$n$
binary words whose
longest even palindromic factor has length $\leq 2$
and whose longest odd palindromic factor has length $\leq 5$.
Then
$r_{2,2,5}(n) = r_{2,2,5}({n-8}) + r_{2,2,5}({n-10})$ for $n \geq 16$.
Furthermore,
$r_{2,2,5}(n) \sim C_1 \alpha^n + C_2 (-\alpha)^n$,
$C_1 \doteq 15.991809$, $C_2 \doteq 0.023895$,
and $\alpha \doteq 1.0804184273981 $ is the largest
real zero of $X^{10} - X^2 - 1$.
\end{theorem}
\begin{proof}
The minimal polynomial of the corresponding matrix is
$$X^6(X-2)(X^{10} - X^2 - 1).$$
The minimal annihilator of the recurrence can be
determined by using the ideas in Section~\ref{three}; it is
$X^{10} - X^2 - 1$.
\end{proof}
\begin{remark}
The sequence $r_{2,2,5} (n)$ is sequence
\seqnum{A330130} in the OEIS.
\end{remark}
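Membership in $R_{2,5}(\Sigma_2)$ can be tested incrementally: trimming both ends of a palindrome preserves the parity of its length, so an even palindromic factor of length $\geq 4$ yields one of length exactly $4$ ending earlier, and an odd one of length $\geq 7$ yields one of length exactly $7$. Hence it suffices to forbid suffix palindromes of lengths $4$ and $7$. The following Python sketch (our illustration) checks the recurrence:

```python
def extendable(w):
    # forbid suffix palindromes of length 4 (even) and 7 (odd)
    return not (len(w) >= 4 and w[-4:] == w[-4:][::-1]) and \
           not (len(w) >= 7 and w[-7:] == w[-7:][::-1])

r, words = [], [""]
for n in range(41):
    r.append(len(words))
    words = [w + c for w in words for c in "01" if extendable(w + c)]
print(r[:12])
```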
The case of longest even palindrome factor of length $\leq 4$
and longest odd palindrome of length $\leq 3$ is already
covered in Theorem~\ref{pal2len4-thm}.
\begin{theorem}
There are uncountably many uniformly recurrent binary words
having
longest even palindromic factor of length $\leq 6$
and longest odd palindromic factor of length $\leq 3$.
\end{theorem}
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.5in]{a2palm6-3.pdf}
\end{center}
\caption{Automaton for binary words with
longest even palindrome factor of length $\leq 6$
and longest odd palindrome of length $\leq 3$.}
\label{palm2-63}
\end{figure}
\begin{proof}
We construct the automaton for $R_{6,3}(\Sigma_2)$ as discussed above.
Before minimization it has 477 states.
After minimization it has 60 states.
State 17 is birecurrent, with cycles
labeled $x_0 = 110010$ and $x_1 = 1111000010$.
\end{proof}
\begin{theorem}
For $n \geq 0$, let $r_{2,6,3}(n)$ denote the number of length-$n$
binary words whose
longest even palindromic factor has length $\leq 6$
and whose longest odd palindromic factor has length $\leq 3$.
Then
$r_{2,6,3}(n) = r_{2,6,3}({n-6}) + 2r_{2,6,3}({n-8}) + 3r_{2,6,3}({n-10})
+ r_{2,6,3}({n-14})$ for $n \geq 21$. Furthermore,
$r_{2,6,3}(n) \sim C_1 \alpha^{n} + C_2 (-\alpha)^n$,
where
$C_1 \doteq 11.58110542$,
$C_2 \doteq 0.00264754$,
and $\alpha \doteq 1.244528319539183$ is the largest
real zero of $X^{14} - X^8 -2X^6 - 3X^4 - 1$.
\end{theorem}
\begin{proof}
The minimal polynomial of the corresponding matrix is
$$X^7 (X-2)(X^2 + 1)(X^{14} - X^8 -2X^6 - 3X^4 - 1)(X^{12} - X^{10} + X^8 - 2X^6 + X^2 - 1).$$
The minimal annihilator of the recurrence can be
determined by using the ideas in Section~\ref{three}; it is
$X^{14} - X^8 -2X^6 - 3X^4 - 1$.
\end{proof}
\begin{remark}
The sequence $r_{2,6,3}$ is sequence \seqnum{A330131} in the OEIS.
\end{remark}
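The same parity-preserving trimming argument applies: here it suffices to forbid suffix palindromes of length $5$ (odd, since lengths $\leq 3$ are allowed) and length $8$ (even, since lengths $\leq 6$ are allowed). A Python check of the recurrence (our illustration):

```python
def extendable(w):
    # forbid suffix palindromes of length 5 (odd) and 8 (even)
    return not (len(w) >= 5 and w[-5:] == w[-5:][::-1]) and \
           not (len(w) >= 8 and w[-8:] == w[-8:][::-1])

r, words = [], [""]
for n in range(36):
    r.append(len(words))
    words = [w + c for w in words for c in "01" if extendable(w + c)]
print(r[:12])
```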
\subsection{Alphabet size 3}
\begin{theorem}
There are uncountably many uniformly recurrent words over
$\Sigma_3$ containing no (nonempty) even palindromic factors
and longest odd palindrome of length $\leq 3$.
\end{theorem}
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.5in]{palm3-03.pdf}
\end{center}
\caption{Automaton for ternary words with no even palindromic
factors and longest odd palindromic factor of length $\leq 3$}
\label{palm3-03}
\end{figure}
\begin{proof}
We construct the automaton for $R_{0,3} (\Sigma_3)$ as discussed
in Section~\ref{two}.
Before minimization it has 88 states.
After minimization it has 34 states.
State 16 is birecurrent, with cycles
labeled $x_0 = 021210102$ and $x_1 = 1210102$.
\end{proof}
\begin{theorem}
For $n \geq 0$, let $r_{3,0,3}(n)$ denote the number of length-$n$
ternary words containing no (nonempty) even palindromic factors
and no odd palindromic factor of length $>3$. Then
$$r_{3,0,3} (n) = r_{3,0,3}({n-1}) + r_{3,0,3} ({n-3})$$
for $n \geq 7$. Furthermore, $r_{3,0,3} (n) \sim C \alpha^n$, where
$C \doteq 5.37711043$
and $\alpha \doteq 1.465571231876768$ is the largest
real zero of $X^3 - X^2 - 1$.
\end{theorem}
\begin{proof}
The minimal polynomial of the corresponding matrix is
$$X^4 (X-3) (X^2 - X + 1)(X^3 - X^2 - 1)(X^4 + 2X^3 + 2X^2 + X + 1).$$
The minimal annihilator of the recurrence can be
determined by using the technique in Section~\ref{three}; it is
$X^3 - X^2 - 1$.
\end{proof}
\begin{remark}
The sequence is \seqnum{A330132} in the OEIS.
We have $r_{3,0,3} (n) = 6 \cdot$ \seqnum{A000930}$(n+1)$ for $n \geq 4$, where
\seqnum{A000930} is Narayana's cows sequence.
\end{remark}
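Here forbidding all nonempty even palindromic factors amounts to forbidding a repeated letter $aa$ (every even palindrome trims down to one of length $2$), and odd palindromic factors of length $\geq 5$ trim down to length exactly $5$. A Python check of the recurrence (our illustration):

```python
def extendable(w):
    # forbid equal adjacent letters (even palindromes) and
    # suffix palindromes of length 5 (odd palindromes of length > 3)
    return not (len(w) >= 2 and w[-1] == w[-2]) and \
           not (len(w) >= 5 and w[-5:] == w[-5:][::-1])

r, words = [], [""]
for n in range(26):
    r.append(len(words))
    words = [w + c for w in words for c in "012" if extendable(w + c)]
print(r[:10])
```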
The case of longest even palindromic factor of length $\leq 2$ and longest
odd palindromic factor of length $\leq 1$ is already covered in
Theorem~\ref{pal3len2-thm}.
\section{Number of odd and even palindromes}
Our final application is to infinite words containing a specified
number of even and odd palindromes. We define
$t_{k,\ell,m}(n)$ to be the number of length-$n$ words
in $T_{\ell,m} (\Sigma_k)$.
\subsection{Alphabet size 2}
Here, instead of providing the
details, we simply summarize our results in tabular form. The
minimal annihilators for the corresponding sequences can be computed
from this data.
The following cases have infinite words, but not aperiodic infinite
words.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|}
Max number of & Max number of & States & States & Example word \\
even palindromes & odd palindromes & (unminimized) & (minimized) & \\
\hline
3 & 9 & 10795 & 1468 & $01(00010111)^\omega$ \\
3 & 8 & 3911 & 799 & $1(00010111)^\omega$ \\
4 & 7 & 7505 & 1181 & $01(0001011)^\omega$ \\
4 & 6 & 2413 & 530 & $1(0001011)^\omega$ \\
5 & 5 & 1647 & 419 & $0 (001011)^\omega$ \\
5 & 4 & 461 & 136 & $(001011)^\omega$ \\
6 & 5 & 3141 & 604 & $(00001011)^\omega$ \\
6 & 4 & 699 & 177 & $0(011001)^\omega$ \\
7 & 4 & 1081 & 261 & $10(011001)^\omega$ \\
8 & 4 & 1729 & 375 & $1101(001011)^\omega$ \\
\end{tabular}
\end{table}
The following cases have examples of aperiodic infinite words.
\begin{table}[H]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|}
Max number of & Max number of & States & States & $x_0$ & $x_1$ & Birecurrent \\
even palindromes & odd palindromes & (unminimized) & (minimized) & & & state number \\
\hline
3 & 10 & 33685 & 3071 & 00011101 & 0100011101 & 1836 \\
4 & 8 & 26937 & 2830 & 0010111 & 00010111 & 2364 \\
5 & 6 & 7495 & 1269 & 001011 & 0001011 & 1035 \\
7 & 5 & 6741 & 955 & 001011 & 00001011 & 904 \\
9 & 4 & 2789 & 545 & 001011 & 0011001011 & 450 \\
\end{tabular}%
}
\end{table}
\subsection{Alphabet size 3}
The only interesting case is one even palindrome and
five odd palindromes. Here the automaton has 6208 states
(632 when minimized) and has a birecurrent state,
with paths labeled $x_0 = 01012$ and $x_1 = 012$.
\section{Conclusions}
We have reproved most of the theorems in \cite{Fici&Zamboni:2013} using
a unified approach based on finite automata. This is evidence for
the thesis, previously announced in \cite{Rajasekaran&Shallit&Smith:2019},
that long case-based arguments are good candidates for replacement by
algorithms and logical decision procedures.
All of the code referred to in this paper is available at \\
\centerline{\url{https://cs.uwaterloo.ca/~shallit/papers.html} \ .}
\newcommand{\noopsort}[1]{} \newcommand{\singleletter}[1]{#1}
\section{Introduction}\label{sec-intro}
Let $V$ be an $n$-dimensional vector space and $0< k_1 < \cdots < k_r <n$ an increasing sequence of integers. For convenience, set $k_0 = 0$ and $k_{r+1} =n$. The $r$-step partial flag variety $F(k_1, \dots, k_r; n)$ parameterizes partial flags $$W_1 \subset W_2 \subset \cdots \subset W_r \subset V,$$ where $W_i$ is a $k_i$-dimensional subspace of $V$ for $1 \leq i \leq r$. The variety $F(k_1, \dots, k_r; n)$ is endowed with a collection of tautological subbundles
$$0=T_0 \subset T_1 \subset T_2 \subset \cdots \subset T_r \subset T_{r+1}= \underline{V} = V \otimes \OO_{F(k_1, \dots, k_r; n)}, $$ where $T_i$ is a vector bundle of rank $k_i$ and $\underline{V}$ denotes the trivial bundle of rank $n$. Let $U_i = T_i/ T_{i-1}$. In this paper, we study the equivariant vector bundles $E_\lambda$ on $F(k_1, \dots, k_r; n)$ defined by tensor products of Schur functors of these tautological bundles $$E_\lambda = \mathbb{S}^{\lambda_1} U_1^* \otimes \mathbb{S}^{\lambda_2} U_2^* \otimes \cdots \otimes \mathbb{S}^{\lambda_{r+1}} U_{r+1}^*,$$ where the partition $\lambda = (\lambda_1|\cdots |\lambda_{r+1})$ is the concatenation of partitions $\lambda_i$ of length $k_i-k_{i-1}$. We call the bundles $E_\lambda$ on the flag variety \emph{Schur bundles}.
The question we study in this paper is to determine which Schur bundles on partial flag varieties are Ulrich bundles with respect to the minimal ample class. The Picard group of $F(k_1, \dots, k_r; n)$ is generated by the classes of Schubert divisors. The sum of the Schubert divisors corresponds to a line bundle $\cO(1)$ that defines a projectively normal embedding of $F(k_1, \dots, k_r; n)$ \cite{Ramanathan}. Unless we specify otherwise, we will always consider the partial flag varieties in this embedding.
Let $X \subset \PP^m$ be an arithmetically Cohen-Macaulay variety of dimension $d$. A vector bundle $\mathcal{E}$ of rank $r$ on $X$ is Ulrich if $H^i(X, \mathcal{E}(-i)) = 0$ for $i>0$ and $H^j(X, \mathcal{E}(-j-1))=0$ for $j < d$ (see \cite{BaHU}, \cite{BHU}, \cite{ESW}). These are the bundles whose Hilbert polynomials have $d$ zeros at the first $d$ negative integers. Their pushforwards to $\PP^m$ via the inclusion of $X$ have a minimal free resolution in which all the syzygies are linear and the $i$-th Betti number is $\deg(X) r {m-d \choose i}$ \cite[Proposition 2.1]{ESW}.
Ulrich bundles have important applications in liaison theory, singularity theory and Boij-S\"{o}derberg theory. For example, if $X$ admits an Ulrich bundle, then its cone of cohomology tables coincides with that of $\PP^n$ \cite{EisenbudSchreyer}. In view of their importance, Eisenbud, Schreyer and Weyman \cite{ESW} formulated the problem of determining which varieties admit Ulrich bundles. Many authors have considered this problem. We refer the reader to \cite{CasanellasHartshorne}, \cite{CKM}, \cite{CostaMiroRoig1}, \cite{CMP}, \cite{ESW}, \cite{Faenzi}, \cite{Miro}, \cite{MP1}, \cite{MP2} for references and further results. For example, Ulrich bundles exist on curves, linear determinantal varieties, hypersurfaces and more generally on complete intersections (see \cite{BaHU}, \cite{BHU}, \cite{ESW}, \cite{KP}).
The second and fourth authors have initiated the study of homogeneous Ulrich bundles on homogeneous varieties. In \cite{CostaMiroRoig1}, they classified all Schur bundles which are Ulrich on Grassmannians under the Pl\"{u}cker embedding. In this paper, we extend this study to flag varieties. Our first main theorem is the following.
\begin{theorem*}[Theorem \ref{thm-higherStep}]
If $r\geq 3$, then no flag variety $F(k_1, \dots, k_r; n)$ admits a Schur bundle $E_\lambda$ which is Ulrich with respect to $\OO(1)$.
\end{theorem*}
In view of Theorem \ref{thm-higherStep}, we concentrate on two-step flag varieties. We refer the reader to \S \ref{sec-overview2step} for a detailed description of our results for two-step flag varieties. We may summarize our results as follows.
\begin{theorem*}[Theorems \ref{thm-1n1}, \ref{thm-beta1intro}, \ref{thm-beta2intro}, \ref{thm-1n2intro}, \ref{thm-2n2intro}]
We classify all the Schur bundles $E_\lambda$ which are Ulrich with respect to $\cO(1)$ on
$$ F(1,n-1;n), \quad F(1, n-2; n), \quad F(2, n-2; n), \quad F(k, k+1; n), \quad \mbox{and} \quad F(k, k+2; n).$$
\end{theorem*}
The Borel-Weil-Bott Theorem computes the cohomology of Schur bundles on flag varieties. Let $N$ denote the dimension of $F(k_1, \dots, k_r; n)$. In order to determine whether a bundle $E_\lambda$ is Ulrich, we need to check that the cohomology of $N$ consecutive twists of $E_\lambda$ vanishes. Using the Borel-Weil-Bott Theorem, this reduces to an intricate combinatorial problem. In \S \ref{sec-prelim}, we explain the algebraic geometry and representation theory background needed to turn the problem into a combinatorial problem. Then for the rest of the paper we study the combinatorial problem, which may be stated as follows.
\begin{problem}
Let $$P= (a_1^1, \dots, a_{l_1}^1 | a_1^2, \dots, a^2_{l_2}| \cdots | a_1^{r+1}, \dots, a_{l_{r+1}}^{r+1})$$ be a strictly decreasing sequence of integers divided into $r+1$ subsequences. Let $P(t)$ denote the sequence $$(a_1^1 - tr , \dots, a_{l_1}^1 -tr | a^2_1-t(r-1) , \dots, a^2_{l_2} - t (r-1) | \cdots | a_1^{r+1} , \dots, a^{r+1}_{l_{r+1}})$$ obtained by subtracting $t (r-i+1)$ from $a^i_j$. Classify the sequences $P$ for which the sequences $P(t)$ have exactly two equal entries for $1 \leq t \leq \sum_{1\leq i<j \leq r+1} l_i l_j$.
\end{problem}
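For any given candidate $P$, the condition in the problem is straightforward to test by machine; the following Python sketch (our own illustration, with hypothetical example data) transcribes it directly. Here \texttt{blocks} lists the $r+1$ subsequences, so, counting blocks from $0$, block $i$ is shifted by $t(r-i)$.

```python
from itertools import combinations

def has_ulrich_pattern(blocks):
    """Check that P(t) has exactly two equal entries (one coincident
    pair) for every t = 1, ..., sum_{i<j} l_i l_j."""
    r = len(blocks) - 1
    lens = [len(b) for b in blocks]
    T = sum(lens[i] * lens[j] for i, j in combinations(range(r + 1), 2))
    for t in range(1, T + 1):
        shifted = [a - t * (r - i) for i, blk in enumerate(blocks)
                   for a in blk]
        if sum(x == y for x, y in combinations(shifted, 2)) != 1:
            return False
    return True

# Illustrative r = 1 (Grassmannian-type) examples:
print(has_ulrich_pattern([[3, 2], [1]]), has_ulrich_pattern([[2], [0]]))
```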
Theorem \ref{thm-higherStep} is equivalent to the statement that such sequences do not exist if $r \geq 3$. The combinatorial problem is most interesting when $r=2$. In this case, there are infinite families of examples, which makes their classification challenging. We conjecture that there do not exist such sequences with both $l_1$ and $l_2$ at least three. In terms of geometry, this conjecture translates to the following.
\begin{conjecture}\label{conj-main}
The two-step flag variety $F(k_1, k_2; n)$ does not admit a Schur bundle which is Ulrich with respect to $\OO(1)$ if $k_1 \geq 3$ and $k_2-k_1 \geq 3$.
\end{conjecture}
If Conjecture \ref{conj-main} is true, then our results give a complete classification of all Schur bundles which are Ulrich with respect to $\OO(1)$ on partial flag varieties. In particular, such bundles are very rare. Hence, in order to construct Ulrich bundles on partial flag varieties, one has to look elsewhere.
One may also consider Schur bundles which are Ulrich with respect to other embeddings of $F(k_1, \dots, k_r; n)$. Our constructions all scale. In particular, if $E_\lambda$ is Ulrich with respect to $\OO(1)$, then multiplying the integers in $\lambda$ by $m$ gives a bundle which is Ulrich with respect to $\OO(m)$. Our combinatorial techniques can in principle be applied to other polarizations, though we do not discuss any such results here.
\subsection*{The organization of the paper} In \S \ref{sec-prelim}, we explain the necessary background from algebraic geometry and representation theory needed to turn the classification of Schur bundles which are Ulrich into a combinatorial problem. In \S \ref{sec-comb}, we explain the combinatorial problem. In \S \ref{sec-higherStep}, we solve the combinatorial problem for flag varieties of at least three steps and show that they do not admit a Schur bundle which is Ulrich for $\cO(1)$. In \S \ref{sec-overview2step}, we explain our results for two-step flag varieties and classify Ulrich bundles on flag varieties of the form $F(1,n-1;n)$. In sections \S \ref{sec-sumset}, \S \ref{sec-beta2}, \S \ref{sec-1n2} and \S \ref{sec-2n2}, we classify Ulrich bundles on flag varieties of the form $F(k, k+1; n)$, $F(k, k+2; n)$, $F(2, n-1; n)$ and $F(2, n-2; n)$, respectively.
\section{Algebraic geometry background}\label{sec-prelim}
\subsection*{Partial flag varieties}
Let $0< k_1 < \cdots < k_r <n$ be an increasing sequence of integers. Set $k_0 = 0$ and $k_{r+1} = n$. The $r$-step partial flag variety $F(k_1, \dots, k_r; n)$ parameterizes partial flags $W_1 \subset W_2 \subset \cdots \subset W_r \subset V,$ where $\dim W_i = k_i$.
Given any set of indices $1 \leq i_1 < i_2 < \cdots < i_s \leq r$, there is a natural projection $$\pi_{i_1, \dots, i_s} : F(k_1, \dots, k_r; n) \rightarrow F(k_{i_1}, \dots, k_{i_s}; n).$$ In particular, the partial flag variety can be realized as an iterated Grassmannian bundle
$$F(k_1, \dots, k_r; n) \rightarrow F(k_2, \dots, k_r; n) \rightarrow \cdots \rightarrow G(k_r; n).$$ From this description, one immediately reads off that
$$N:=\dim(F(k_1, \dots, k_r; n))= \sum_{i=1}^r k_i (k_{i+1} - k_i).$$
The partial flag variety has $r$ projections to Grassmannians $$\pi_i : F(k_1, \dots, k_r; n) \rightarrow G(k_i; n).$$ The Picard group of $F(k_1, \dots, k_r; n)$ is generated by the line bundles $L_i= \pi_i^* \OO_{G(k_i; n)}(1)$. A line bundle $L$ is ample on $F(k_1, \dots, k_r; n)$ if and only if $L = L_1^{\otimes a_1} \otimes \cdots \otimes L_r^{\otimes a_r}$ with $a_i > 0$ for every $1 \leq i \leq r$.
\subsection*{The degree of partial flag varieties} Let $X \subset \PP^n$ be an equivariantly embedded projective homogeneous variety. The Weyl dimension formula implies that the Hilbert polynomial of $X$ factors as a product of linear polynomials. Consequently, one obtains a formula for the degree of $X$ \cite{GrossWallach}.
Let $\psi$ denote the dominant weight defining the embedding of $X$ in $\PP^n$. Let $\rho$ be half the sum of the positive roots and let $N$ be the dimension of $X$. Then the degree of $X$ is given by
\begin{equation}\label{eq-degree}
N! \prod_{\alpha} \frac{\langle \psi, \check{\alpha} \rangle }{\langle \rho, \check{\alpha} \rangle },
\end{equation}
where the product is over all positive roots $\alpha$ with $ \langle \psi, \check{\alpha} \rangle >0$ and $\check{\alpha}$ denotes the coroot \cite{GrossWallach}.
Now we specialize to the case of partial flag varieties. Let $a_1, \dots, a_r$ be a sequence of positive integers. Set $$b_{ij} = \sum_{k=i}^j a_k.$$ We would like to calculate the degree of $F(k_1, \dots, k_r; n)$ under the embedding defined by $$L_{a_1, \dots, a_r} = L_1^{\otimes a_1} \otimes \cdots \otimes L_r^{\otimes a_r}.$$ The ample line bundle $L_{a_1, \dots, a_r}$ corresponds to the dominant weight
$$ \psi_{a_1, \dots, a_r} = b_{1r} e_1 + \cdots + b_{1r}e_{k_1} + b_{2r} e_{k_1 + 1} + \cdots + b_{2r} e_{k_2} + \cdots + b_{rr} e_{k_r}.$$
We have that $$\rho = (n-1) e_1 + (n-2) e_2 + \cdots + e_{n-1}.$$ The coroots that have nonzero pairing with $\psi_{a_1, \dots, a_r}$ are precisely $e_i - e_j$, where $k_{s-1} < i \leq k_s$ for some $1 \leq s \leq r$ and $k_t < j \leq k_{t+1}$ for $s \leq t \leq r$. For such $e_i - e_j$, the numerator in Equation (\ref{eq-degree}) is $b_{st}$ and the denominator is $j-i$. We thus conclude the following proposition.
\begin{proposition}
The degree of the partial flag variety $F(k_1, \dots, k_r; n)$ embedded by the line bundle $L_{a_1, \dots, a_r}$ is
\begin{equation}\label{eq:partialflagdegree}
N! \frac{\prod_{1 \leq i \leq j \leq r} b_{ij}^{(k_i - k_{i-1})(k_{j+1} - k_j)}}{\prod_{s=1}^r \prod_{k_{s-1} < i \leq k_s} \frac{(n-i)!}{(k_s -i)!}}.
\end{equation}
\end{proposition}
\begin{remark}
We record a few special cases of the formula.
The degree of $G(k,n)$ under the $a$th power of the Pl\"{u}cker embedding is
$$(k(n-k))! a^{k(n-k)} \prod_{1 \leq i \leq k} \frac{(k-i)!}{(n-i)!}.$$
The degree of $F(k_1, \dots, k_r; n)$ under the line bundle $L_{1,\dots,1}$ is
$$N ! \frac{\prod_{1 \leq i \leq j \leq r} (j-i+1)^{(k_i - k_{i-1})(k_{j+1} - k_j)}}{\prod_{s=1}^r \prod_{k_{s-1} < i \leq k_s} \frac{(n-i)!}{(k_s -i)!}}.$$
In particular, specializing to the case $F(1, n-1; n)$, we obtain
$$(2n-3)! \frac{2}{(n-1)!(n-2)!}.$$
Specializing to the case $F(1, n-2; n)$, we obtain
$$(3n-7)! \frac{4}{(n-1)!(n-2)!(n-3)!}.$$
\end{remark}
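The degree formula of the proposition is straightforward to check numerically. The following Python sketch (the function name and calling conventions are our own) evaluates Formula (\ref{eq:partialflagdegree}) with the conventions $k_0 = 0$ and $k_{r+1} = n$, and confirms the special cases recorded in the remark:

```python
from math import factorial

def flag_degree(ks, a, n):
    """Degree of F(k_1,...,k_r; n) under L_{a_1,...,a_r}; ks = [k_1,...,k_r]."""
    r = len(ks)
    k = [0] + list(ks) + [n]                     # conventions k_0 = 0, k_{r+1} = n
    N = sum(k[i] * (k[i + 1] - k[i]) for i in range(1, r + 1))
    num = 1
    for i in range(1, r + 1):
        for j in range(i, r + 1):
            b_ij = sum(a[i - 1:j])               # b_ij = a_i + ... + a_j
            num *= b_ij ** ((k[i] - k[i - 1]) * (k[j + 1] - k[j]))
    den = 1
    for s in range(1, r + 1):
        for i in range(k[s - 1] + 1, k[s] + 1):
            den *= factorial(n - i) // factorial(k[s] - i)
    return factorial(N) * num // den

# Special cases from the remark:
assert flag_degree([2], [1], 5) == 5             # G(2,5) in the Pluecker embedding
n = 6
assert flag_degree([1, n - 1], [1, 1], n) == \
    factorial(2 * n - 3) * 2 // (factorial(n - 1) * factorial(n - 2))
assert flag_degree([1, n - 2], [1, 1], n) == \
    factorial(3 * n - 7) * 4 // (factorial(n - 1) * factorial(n - 2) * factorial(n - 3))
```

Here \texttt{ks} is the list $(k_1,\dots,k_r)$ and \texttt{a} the list $(a_1,\dots,a_r)$ defining the polarization.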
\subsection*{Borel-Weil-Bott Theorem and dimension of cohomology groups}
The partial flag variety $F(k_1, \dots, k_r; n)$ is endowed with a collection of tautological subbundles
$$0=T_0 \subset T_1 \subset T_2 \subset \cdots \subset T_{r+1} = \underline{V}, $$ where $T_i$ is a vector bundle of rank $k_i$ and $\underline{V}$ is the trivial bundle of rank $n$. Let $U_i = T_i/ T_{i-1}$.
Let $E_\lambda$ be a Schur bundle on $F(k_1, \dots, k_r; n)$. The partition $\lambda$ is a concatenation $\lambda = (\lambda_1|\cdots|\lambda_{r+1})$, where $\lambda_i$ has length $k_i - k_{i-1}$, and
$$E_\lambda = \mathbb{S}^{\lambda_1} U_1^* \otimes \mathbb{S}^{\lambda_2} U_2^* \otimes \cdots \otimes \mathbb{S}^{\lambda_{r+1}} U_{r+1}^* ,$$ where $\mathbb{S}^{\lambda_i}$ denotes the Schur functor of type $\lambda_i$. Such bundles can be characterized as the pushforwards of line bundles on complete flag varieties.
Since $U_1, \dots, U_{r+1}$ give a filtration of the trivial bundle $\underline{V}$, the tensor product of their determinants is trivial by the Whitney formula. Hence, adding a constant integer to all the entries of $\lambda$ corresponds to tensoring the bundle $E_{\lambda}$ with a power of the trivial line bundle and does not change its isomorphism class.
We will say two partitions $\lambda,\lambda'$ are equivalent if they differ by adding a constant to all the entries; equivalent partitions give isomorphic bundles. There is also no harm in allowing negative integers in $\lambda$, since adding a large constant will make all entries positive.
For studying Ulrich bundles, it is important to know the effect of tensoring $E_\lambda$ by an ample line bundle $L_{a_1,\ldots,a_r}$. By the Littlewood-Richardson rule, adding a constant sequence to $\lambda_i$ corresponds to tensoring the bundle by the determinant of $U_i$. Since the determinant of $U_i$ is $L_i \otimes L_{i-1}^{-1}$, we have
$$E_\lambda \otimes L_{a_1,\ldots,a_r} \cong E_{\lambda + \psi_{a_1,\ldots,a_r}},$$ where $ \psi_{a_1,\ldots,a_r}$ is the dominant weight corresponding to $L_{a_1,\ldots,a_r}$.
Let $\rho = (n-1, n-2, \dots, 1, 0)$. We say that $\lambda$ is {\em singular} if $\lambda + \rho$ has repeated entries. Otherwise, we say that $\lambda$ is {\em regular}. Let $s$ be the permutation in $\mathfrak{S}_n$ that lists the entries in $\lambda + \rho$ in decreasing order. Let $q(\lambda)$ denote the minimal length of the permutation $s$. The famous Borel-Weil-Bott Theorem computes the cohomology of line bundles on the complete flag variety, which allows us to calculate the cohomology of the Schur bundle $E_{\lambda}$ \cite{Weyman}.
\begin{theorem}[Borel-Weil-Bott]\label{thm-BWB}
Let $E_{\lambda}$ be a Schur bundle on $F(k_1, \dots, k_r; n)$.
\begin{enumerate}
\item If $\lambda$ is singular, then $H^i(E_{\lambda})=0$ for every $i$.
\item If $\lambda$ is regular, then $H^i(E_{\lambda}) = 0$ for $i \not= q(\lambda)$ and $H^{q(\lambda)}(E_{\lambda}) = \mathbb{S}^{s(\lambda+\rho) - \rho} V^*$.
\end{enumerate}
\end{theorem}
We recall how to compute the dimension of a representation $\mathbb{S}^{\mu} V$ of $GL(n)$. Let $Y(\mu)$ be the Young diagram corresponding to the partition $\mu$. The hook-length $\hook(i,j)$ is the number of boxes in the same row to the right of the box $(i,j)$ and in the same column below the box $(i,j)$ (including the box itself).
The dimension of the representation $\mathbb{S}^{\mu} V$ is given by
$$\prod_{(i,j) \in Y(\mu)} \frac{n+j -i}{\hook(i,j)},$$ which is always nonzero.
In particular, the rank of the vector bundle $E_{\lambda}$ is given by
$$\prod_{s=1}^{r+1} \prod_{(i,j) \in Y(\lambda_s)} \frac{k_s-k_{s-1} +j -i}{\hook (i,j)}. $$
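The hook-length products above can be evaluated directly. The following Python sketch (the function name is ours) computes $\dim \mathbb{S}^{\mu}V$ for $\dim V = n$ from a partition $\mu$ given as a list of row lengths:

```python
def hook_dim(mu, n):
    """dim S^mu(V) for dim V = n: product over boxes (i,j) of (n+j-i)/hook(i,j)."""
    num, den = 1, 1
    for i, row in enumerate(mu):
        for j in range(row):
            arm = row - j - 1                          # boxes strictly to the right
            leg = sum(1 for r in mu[i + 1:] if r > j)  # boxes strictly below
            num *= n + j - i
            den *= arm + leg + 1                       # hook(i,j): +1 for the box itself
    return num // den

assert hook_dim([2], 3) == 6        # Sym^2 of C^3
assert hook_dim([1, 1], 4) == 6     # wedge^2 of C^4
assert hook_dim([1, 1, 1], 3) == 1  # det of C^3
```

The rank of $E_\lambda$ is then the product of $\hook\mathtt{\_dim}(\lambda_s, k_s - k_{s-1})$ over the blocks $\lambda_s$.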
\section{Ulrich bundles and the combinatorial problem}\label{sec-comb}
In this section we recall basic facts on Ulrich bundles and phrase the condition that a Schur bundle $E_\lambda$ on the flag variety $F(k_1,\ldots,k_r;n)$ is Ulrich combinatorially.
\subsection{Ulrich bundles in general}Let $(X, \OO_X(1))$ be a polarized projective variety. A vector bundle $E$ on $X$ is {\em arithmetically Cohen-Macaulay} (ACM) if it is locally Cohen-Macaulay and $H_*^i (X, E) =0$ for every $1 \leq i \leq \dim(X) -1$. A vector bundle $E$ on $X$ is {\em initialized} if $H^0(X, E) \not= 0$ but $H^0(X, E(-1)) = 0$. If $E$ is ACM, then there exists a twist of $E$ which is initialized.
\begin{definition}
A vector bundle $E$ is an {\em Ulrich bundle} if $E$ is ACM and the initialized twist $E_{\init}$ of $E$ satisfies $h^0(X, E_{\init}) = \rk(E) \deg(X)$.
\end{definition}
\begin{remark}\label{rem-UlrichCrit}For our purposes, there is another useful criterion for determining when a bundle is Ulrich. Let $X$ be a smooth, $N$-dimensional arithmetically Cohen-Macaulay projective variety and suppose $E$ is an initialized bundle on $X$. Then, by \cite[Proposition 2.1, Corollary 2.2]{ESW}, $E$ is Ulrich if and only if $H^i(E(-t))=0$ for all $i$ and $t\in [N] = \{1,\ldots,N\}$ (see \cite[Remark 2.6]{CostaMiroRoig1}). It is well-known that flag varieties are arithmetically Cohen-Macaulay when embedded by any ample complete linear system (see \cite[Theorem 5]{Ramanathan}).
\end{remark}
\begin{remark}
Since for an initialized Ulrich bundle $h^0(E) = \rk(E) \deg(X)$, whenever there is a Schur bundle $E_\lambda$ which is Ulrich we obtain interesting combinatorial identities relating two hook-length formulas to the degree of the flag variety. In the other direction, one could potentially find Ulrich bundles by searching for such identities; however, this seems difficult.
\end{remark}
\subsection{The combinatorial problem} Let $E_\lambda$ be a Schur bundle on $F(k_1,\ldots,k_r;n)$ corresponding to the concatenated sequence $\lambda = (\lambda_1|\cdots|\lambda_{r+1})$ of partitions. The Borel-Weil-Bott Theorem \ref{thm-BWB} allows us to rephrase the condition that $E_\lambda$ is an initialized Ulrich bundle with respect to $\OO(1)$ purely in terms of the combinatorics of $\lambda$. Recall $\lambda_i$ is a partition of length $k_i-k_{i-1}$, so $\lambda$ has total length $n$.
First, we write $P = \lambda +\rho$, where $\rho = (n-1,n-2,\ldots,1,0)$. For concreteness, we explicitly write out the partition $P$ as $$P = (a_1^1, \dots, a_{l_1}^1 | a_1^2, \dots, a^2_{l_2}| \cdots | a_1^{r+1}, \dots, a_{l_{r+1}}^{r+1});$$ here $l_i = k_i-k_{i-1}$. Observe that $H^0(E_\lambda)\neq 0$ if and only if the sequence of entries of $P$ is strictly decreasing.
For any integer $t\geq 0$, we define a partition $$P(t) = (a_1^1 - tr,\ldots, a_{l_1}^1 - tr | a_1^2 - t(r-1),\ldots , a_{l_2}^2-t(r-1)|\cdots|a_1^{r+1},\ldots,a_{l_{r+1}}^{r+1})$$ by subtracting $t(r+1-i)$ from every entry in the $i$th part, and analogously define partitions $\lambda(t)$. It is also convenient to view the entries of $P(t)$ as being functions of $t$ and define $$a_k^i(t) = a_k^i-t(r+1-i),$$ so that $$P(t) = (a_1^1(t),\ldots,a_{l_1}^1(t)|\cdots|a_1^{r+1}(t),\ldots,a_{l_{r+1}}^{r+1}(t)).$$ Then $E_{\lambda}(-t) = E_{\lambda(t)}$ has $H^i(E_{\lambda}(-t))=0$ for all $i$ precisely when there is a repeated entry in $P(t)$. Therefore $E_\lambda$ is an initialized Ulrich bundle if and only if $P(t)$ has a repeated entry for all integers $t\in [N]$, where $N = \dim F(k_1,\ldots,k_r;n)$ is the dimension of the flag variety.
We now make the central combinatorial definitions of the paper.
\begin{definition}
Let the partition $$P = (a_1^1,\ldots,a_{l_1}^1|a_1^2,\ldots,a_{l_2}^2|\cdots|a_1^{r+1},\ldots, a_{l_{r+1}}^{r+1})$$ consist of a strictly decreasing sequence of integers arranged into $r+1$ blocks, with $l_i>0$ for all $i$. The \emph{type} of the partition is $(l_1,\ldots,l_{r+1})$. The \emph{dimension} of the partition is $N = \sum_{k<h} l_kl_h$.
The partition $P$ is \emph{Ulrich} if for all $t\in [N]=\{1,\ldots,N\}$ the partition $P(t)$ has a repeated entry.
Two partitions are \emph{equivalent} if they differ by adding a constant to all the entries. When we count Ulrich partitions or talk about uniqueness, it is understood that we always do so modulo equivalence.
\end{definition}
We make the correspondence between the algebro-geometric and combinatorial problems explicit.
\begin{proposition}
There is a bijective correspondence $\lambda \mapsto \lambda + \rho = P$ between equivalence classes of partitions $\lambda$ such that the Schur bundle $E_\lambda$ on $F(k_1,\ldots,k_r;n)$ is Ulrich with respect to $\OO(1)$ and equivalence classes of Ulrich partitions $P$ of type $(k_1,k_2-k_1,\ldots,k_r-k_{r-1},n-k_r)$.
\end{proposition}
The main question which we study from this point forward is to determine the possible types of Ulrich partitions, and, when there exists an Ulrich partition of a given type, to classify all such partitions up to equivalence.
\begin{remark}
The dimension $N$ of a partition $P$ is precisely the number of pairs of entries coming from different blocks. For there to be a repeated entry in $P(t)$ at every time $t\in [N]$, it is therefore necessary and sufficient that every pair of entries meets at some time $t\in [N]$ and that no two pairs of entries meet at the same time.
\end{remark}
\begin{remark}
There is no harm in considering lists $P$ where some of the entries are negative since the notion of Ulrich partition is invariant under equivalence. It will at times also be convenient to change the definition of $P(t)$ slightly by subtracting a different consecutive range of integers from each part. For example, in the two-step flag variety case where $P$ has $3$ parts it will be convenient to subtract $1,0,-1$ from the three parts, so that the middle block is constant with time. We will make our choice of normalization clear in each section as needed.
\end{remark}
We now provide a simple example illustrating the combinatorial definitions and the conversion to the algebro-geometric language.
\begin{example}\label{ex-combToAG}
Consider the partition $P = (5|3,-1,-2,-4|{-5})= (a|b_1,b_2,b_3,b_4|c)$ of type $(1,4,1)$. For convenience, we normalize $P(t)$ by subtracting $1,0,-1$ at each time. The dimension is $N=9$, and this partition is Ulrich. To visualize the Ulrich condition, we draw a \emph{time evolution diagram} showing the positions of the entries of $P(t)$ in Figure \ref{fig-141}. Along the first row we place the entries $a,b,b,b,b,c$ in the corresponding positions; we provide numerical positions above some of the entries for reference. For each time $t\in [N]$ we similarly place the entries of $P(t)$ in the corresponding row. When two entries intersect, we put a box around them to highlight the intersection. The Ulrich condition means that for every time $t\in [N]$ there is exactly one box in the corresponding row. We also display $P(N+1)$ at time $N+1$, which is relevant to the computation of the \emph{dual partition} $P^*$; see \S \ref{ssec-symmetryDuality}.
\begin{figure}[htbp]
\input{1n1Ex.pstex_t}
\caption{Time evolution diagram of the Ulrich partition $(5|3,-1,-2,-4|{-5})$ of type $(1,4,1)$. See Example \ref{ex-combToAG}.}\label{fig-141}
\end{figure}
Replacing $P$ by the equivalent partition $(11|9,5,4,2|1)$ and putting $\lambda = P - \rho$, we have $\lambda = (6|5,2,2,1|1) = (\lambda_1|\lambda_2|\lambda_3)$. The corresponding initialized Ulrich bundle on $F(1,5;6)$ is $$E_\lambda = \mathbb{S}^{\lambda_1} U_1^* \otimes \mathbb{S}^{\lambda_2} U_2^* \otimes \mathbb{S}^{\lambda_{3}} U_3^*.$$ A straightforward computation with the hook-length formula shows that $\rk(E_\lambda) = 70$ and $h^0(E_\lambda) = 17640$. This is consistent with the calculation $\deg F(1,5;6) = 252$ for the embedding of $F(1,5;6)$ by $\OO(1)$.
\end{example}
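The Ulrich condition for a partition such as the one in Example \ref{ex-combToAG} can be verified mechanically by evolving $P(t)$ and looking for a repeated entry at each time $t\in[N]$. Below is a brute-force Python sketch (names are ours); it uses the unnormalized evolution, subtracting $t(r+1-i)$ from the $i$th block:

```python
def is_ulrich(blocks):
    """Check: P(t) has a repeated entry for every t in [N].
    blocks[i] lists the entries of the (i+1)st block; block i is shifted
    by -t*(r1 - 1 - i) at time t, where r1 is the number of blocks."""
    r1 = len(blocks)
    lens = [len(b) for b in blocks]
    N = sum(lens[i] * lens[j] for i in range(r1) for j in range(i + 1, r1))
    for t in range(1, N + 1):
        shifted = [x - t * (r1 - 1 - i) for i, b in enumerate(blocks) for x in b]
        if len(set(shifted)) == len(shifted):   # no repeated entry at time t
            return False
    return True

assert is_ulrich([(5,), (3, -1, -2, -4), (-5,)])      # the partition of the example
assert not is_ulrich([(5,), (3, -1, -2, -3), (-5,)])  # a near miss
```

For the example, the nine collision times $t=1,\dots,9$ are each realized exactly once, matching the time evolution diagram in Figure \ref{fig-141}.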
\subsection{Symmetry and duality}\label{ssec-symmetryDuality} There are certain symmetries satisfied by Ulrich partitions which we will exploit throughout the paper. Suppose $P = (a_1^1,\ldots,a_{l_1}^1|\cdots | a_1^{r+1},\ldots,a_{l_{r+1}}^{r+1})$ is a partition of type $(l_1,\ldots,l_{r+1})$. Multiplying all the entries of $P$ by $-1$ and reversing the order in which they are written, we obtain the \emph{symmetric} partition $$P^s=(-a_{l_{r+1}}^{r+1},\ldots,-a_{1}^{r+1}|\cdots| {-a_{l_1}^1},\ldots,-a_1^1)$$ of the reversed type $(l_{r+1},\ldots,l_1)$. It is clear that if $P$ is Ulrich then $P^s$ is also Ulrich. The time evolution diagram of $P^s$ is flipped about a vertical axis. It is often convenient to add a constant to all the entries of the symmetric partition to keep some entries fixed; there is no harm in this since we only care about Ulrich partitions up to equivalence.
There is another slightly less trivial symmetry which we call \emph{duality}. Starting from an Ulrich partition $P$ we define the partition $$P^* = (a_1^{r+1}(N+1),\ldots,a_{l_{r+1}}^{r+1}(N+1)|\cdots | a_1^1(N+1),\ldots , a_{l_1}^1(N+1)).$$ The integers which appear in $P^*$ are precisely the integers in $P(N+1)$; we have just reordered them to be decreasing. In the time evolution diagram for $P$, $P^*$ can be read off from the final row below the horizontal line. Then $P^*$ is Ulrich of type $(l_{r+1},\ldots, l_1)$; its time evolution diagram is obtained from the diagram for $P$ by flipping about a horizontal axis $t = (N+1)/2$.
Both of the symmetries we have described reverse the type of the Ulrich partition, so they change the type unless the type is symmetric. The partition $(P^s)^*$ has the same type as $P$, and we call this partition the \emph{symmetric dual}. In particular, if there is a unique Ulrich partition of a given type, then it equals its symmetric dual.
\begin{example}
The symmetric partition obtained from $(5|3,-1,-2,-4|{-5})$ is $(5|4,2,1,-3|{-5})$. Both of these partitions are self-dual.
\end{example}
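The symmetry and duality operations are easy to implement. The following Python sketch (names are ours) computes $P^s$ and $P^*$ and confirms the claims of the example, comparing partitions up to equivalence via a common shift:

```python
def symmetric(blocks):
    """P^s: negate every entry and reverse both the block order and each block."""
    return [tuple(-x for x in reversed(b)) for b in reversed(blocks)]

def dual(blocks):
    """P^*: the entries of P(N+1), with the block order reversed."""
    r1 = len(blocks)
    lens = [len(b) for b in blocks]
    N = sum(lens[i] * lens[j] for i in range(r1) for j in range(i + 1, r1))
    evolved = [tuple(x - (N + 1) * (r1 - 1 - i) for x in b)
               for i, b in enumerate(blocks)]
    return list(reversed(evolved))

def normalize(blocks):
    """Shift all entries so the last one is 0 (comparison up to equivalence)."""
    shift = blocks[-1][-1]
    return [tuple(x - shift for x in b) for b in blocks]

P = [(5,), (3, -1, -2, -4), (-5,)]
assert symmetric(P) == [(5,), (4, 2, 1, -3), (-5,)]  # as in the example
assert normalize(dual(P)) == normalize(P)            # P is self-dual
```

Note that \texttt{symmetric} is an involution, and applying \texttt{dual} twice returns an equivalent partition.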
\begin{remark}
Suppose the flag variety $F(k_1,\ldots,k_r;n)$ carries an Ulrich Schur bundle $E_{\lambda}$, and let $P = \lambda +\rho$. Then $E_{P^s-\rho}$ and $E_{P^* -\rho}$ are Ulrich bundles on the dual flag variety $F(n-k_r,\ldots,n-k_1;n)$, while $E_{(P^{s})^*-\rho}$ is an Ulrich bundle on the original flag variety.
\end{remark}
\section{Ulrich bundles on flag varieties with three or more steps}\label{sec-higherStep}
In this section, we prove the following theorem.
\begin{theorem}\label{thm-higherStep}
A flag variety of three or more steps does not have a Schur bundle $E_\lambda$ which is Ulrich with respect to the line bundle $\cO(1)$.
\end{theorem}
As a special case, we obtain the following corollary.
\begin{corollary}
For $n \geq 4$, the complete flag variety $F(1, 2, 3, \dots, n-1; n)$ does not admit a Schur bundle which is Ulrich with respect to $\cO(1)$.
\end{corollary}
We will split the proof into three parts. We first analyze flag varieties with at least five steps. The three and four step flag varieties require separate (and more intricate) arguments. We begin by deriving simple congruences among the entries of an Ulrich partition.
\begin{lemma}\label{lem-congruence}
Let $(a^1_i|a^2_i|\cdots|a^{s}_i)$ be an Ulrich partition. Suppose $1 \leq j < k \leq s$. Then all the entries $a^j_i$ in the $j$-block are congruent to all the entries $a^k_i$ in the $k$-block modulo $k-j$.
\end{lemma}
\begin{proof}
We can normalize the evolution of the Ulrich partition so that we keep all the entries $a^j_i$ in the $j$-block fixed and add $k-j$ to all the entries $a^k_i$ in the $k$-block. This preserves the congruence classes of the $a^j_i$ and $a^k_i$ modulo $k-j$. In order for them to collide at some time, they must all have the same congruence class.
\end{proof}
\subsection{Flag varieties with at least five steps}
\begin{proposition}\label{prop-fiveStep}
There are no Ulrich partitions with $6$ or more blocks.
\end{proposition}
\begin{proof}
Suppose there is an Ulrich partition $(a^1_1,\ldots,a^1_{l_1}|\cdots|a^s_1,\ldots,a^s_{l_s})$ with $s \geq 6$. Since $s \geq 6$, we have that $\max\{k-1,s-k\} \geq 3$ for every $1 \leq k \leq s$. By Lemma \ref{lem-congruence}, we conclude that the entries $a^k_i$ in the $k$-block all have the same congruence class modulo 6. In particular, they all have the same parity.
At time $t=1$, two adjacent blocks must meet; say $a_{l_k}^k(1) = a_{1}^{k+1}(1)$. This forces $a_{l_k}^k$ and $a_1^{k+1}$ to have opposite parities.
If $k-j$ is even, then the entries $a^k_i(t)$ and $a^j_h(t)$ in the $k$- and $j$-blocks have the same parities at all times $t$. On the other hand, if $k-j$ is odd, then the entries in the $k$- and $j$-blocks have the same parity at odd times and opposite parities at even times $t$. Consequently, at even times the intersections must occur between entries in nonadjacent blocks.
Suppose that at time $t=2$, $a^j_{l_j}$ and $a^{j+2}_1$ collide. Normalize the evolution of the sequence so that $1,0,-1$ is subtracted from the $j$-, $(j+1)$-, and $(j+2)$-blocks, respectively.
Then, at times $t=1,3$, the collisions must be $a^j_{l_j}a^{j+1}_1$ and $a^{j+1}_1a^{j+2}_1$, in one of the two possible orders depending on the sequence at time $t=0$. We observe that the possible positions of the entries at time $t=0$ are
$$ \cdots \times \times \ a^j_{l_j} \ a^{j+1}_1 \ \times \times \ a^{j+2}_1 \ \cdots \quad \mbox{or} \quad \cdots \times \times \ a^j_{l_j} \ \times \times \ a^{j+1}_1 \ a^{j+2}_1 \cdots,$$ where we denote the positions unoccupied by an entry by $\times$. In particular, after normalizing the positions, the triple $(a_{l_j}^j,a_1^{j+1},a_1^{j+2})$ either equals $(1,0,-3)$ or $(3,0,-1)$. Furthermore, there is a unique entry in the $(j+1)$-part: $l_{j+1}=1$.
At time $t=4$, the collision again must be between entries from nonadjacent blocks. For $k \not= j$, if $a^k_{l_k}(4) = a^{k+2}_1(4)$, then $a^k_{l_k}$ passes through the entries in the $(k+1)$-block without intersecting them, contradicting that the partition is Ulrich. Hence, the only possible collisions at time $t=4$ are either $a^j_{l_j} a^{j+2}_2 $ or $a^j_{l_j-1} a^{j+2}_1$. However, these are both impossible since they would give a separation of only four between either $a^j_{l_j}$ and $a^{j}_{l_j-1}$ or $a^{j+2}_1$ and $a^{j+2}_2$. This contradicts the fact that entries in each block are congruent modulo 6.
We conclude that there cannot be any Ulrich partitions with $s \geq 6$.
\end{proof}
\subsection{Three-step flag varieties} In this subsection, we analyze Ulrich bundles on three-step flag varieties.
\begin{proposition}\label{prop-threeStep}
There are no Ulrich partitions with $4$ blocks.
\end{proposition}
\begin{proof}
Let $(a_1,\dots, a_j|b_1, \dots, b_k|c_1, \dots, c_l|d_1, \dots, d_m)$ be an Ulrich partition for a three-step flag variety. By Lemma \ref{lem-congruence}, the parities of $a_i(t)$ and $c_h(t)$ (respectively, $b_i(t)$ and $d_h(t)$) are equal at all times. On the other hand, by the same argument as in the proof of Proposition \ref{prop-fiveStep}, entries in adjacent blocks have the same parity at odd times $t$ and opposite parities at even times $t$. Consequently, when $t$ is even, the only collisions can be $ac$ or $bd$ collisions. Since the last intersection at time $t = N$ has to be between entries from adjacent blocks, we conclude that the dimension $N$ is odd.
By symmetry (see \S \ref{ssec-symmetryDuality}), we can assume that at time $t=2$, the collision is between $a_j$ and $c_1$.
We normalize the evolution of the partition so that we subtract $2,1,0,-1$, respectively, from the $A$-, $B$-, $C$-, and $D$-blocks. Consequently, the first three collisions are
$$a_jb_1,\ a_j c_1, \ b_1 c_1 \quad \mbox{or} \quad b_1 c_1,\ a_j c_1, \ a_j b_1.$$ The corresponding sequences at time $t=0$ look like
$$\cdots a_j \ b_1 \times \times \ c_1 \ \times \times \cdots \quad \mbox{respectively} \quad \cdots a_j \ \times \times \ b_1 \ c_1 \ \times \times \cdots $$
In either case, we conclude that the length $k$ of the $B$-block is one since by time $t=3$, $a_j$ must have passed through the entire $B$-block. From now on we denote the unique entry in the $B$-block by $b$.
Similarly, $t=N-1$ is an even time. Hence, the only possible collisions are $ac$ or $bd$ collisions. If the collision is between $b$ and $d_m$, then the last three collisions at times $N-2, N-1$ and $N$ are
$$ c_1 d_m,\ b d_m, \ b c_1 \quad \mbox{or} \quad b c_1, \ b d_m, \ d_m c_1.$$ In particular, we conclude that the length of the $C$-block is also one. Denote the lengths of the $A$- and $D$-blocks by $j$ and $m$, respectively. The dimension of the flag variety is $$N=2j + jm + 2m + 1.$$ Since the collisions at even times are $ac$ or $bd$ collisions, the dimension can be at most one more than twice the total number $j+m$ of possible $ac$ and $bd$ collisions. We, therefore, have the inequality
$$2j + jm + 2m \leq 2(j+m).$$ This implies that $jm \leq 0,$ which is absurd.
We conclude that at time $t= N-1$, the intersection must be between $a_1$ and $c_l$. Hence, the last three intersections at times $N-2,N-1,N$ are one of
$$a_1 b,\ a_1 c_l, \ b c_l \quad \mbox{or} \quad b c_l,\ a_1 c_l, \ a_1 b.$$ Normalize the position of the sequence so that $b=y$ and $d_1=-y$. Then $c_1$ equals either $y-1$ or $y -3$. Furthermore, there cannot be any $c$'s in positions $-y+1$, $-y+2$ or $-y+3$ since $d_1$ is at these positions at times $1,2,3$ when there are other collisions. In either case, $d_1$ intersects $c_1$ at a time $t_0 \geq 2y -3$. On the other hand, $b$ intersects $c_l$ at some time $t_1< 2y-3$. Hence, the intersection $bc_l$ happens before the intersection $c_1d_1$. This is a contradiction, since $bc_l$ is one of the last three intersections and $c_1d_1$ is not. We conclude that there are no Ulrich partitions for three step flag varieties with respect to $\cO(1)$.
\end{proof}
\subsection{Four-step flag varieties} Finally, we analyze Ulrich bundles on four-step flag varieties. This is the most intricate case.
\begin{proposition} \label{prop-fourStep}
There are no Ulrich partitions with $5$ blocks.
\end{proposition}
\begin{proof}
Let $(a_1, \dots, a_j|b_1, \dots, b_k|c_1, \dots, c_l|d_1, \dots, d_m|e_1, \dots, e_p)$ be an Ulrich partition for a four-step flag variety. We can normalize the evolution so that at each time we subtract $2,1,0,-1,-2$ from the entries in the $A$-, $B$-, $C$-, $D$-, and $E$-blocks, respectively. By parity, at time $t=2$ we must have an $ac$, $bd$ or $ce$ collision. Suppose we have a $bd$ collision. As before, the length $l$ of the $C$-block is $1$, so we denote its entry by $c$. Then the first three collisions are one of
$$b_k c, \ \ b_k d_1, \ \ c d_1 \quad \mbox{or} \quad c d_1, \ \ b_k d_1, \ \ b_k c$$ corresponding to the following sequences at time $t=0$
$$ \cdots b_k \ c \ \times \ \times \ d_1 \cdots \quad \mbox{or} \quad \cdots b_k \ \times\ \times \ c \ d_1 \cdots.$$
By Lemma \ref{lem-congruence}, the entries in the $B$- (respectively, $D$-) block are the same modulo 6, so the distances between consecutive entries are at least 6. Similarly, the consecutive entries in the $A$- (respectively, $E$-) blocks are at least 12 apart. At time $t=4$, the collision cannot be a $bd$ collision since then the distance between two consecutive entries in either the $B$- or $D$- blocks would be only 4 instead of 6. Hence, the collision at time $t=4$ must be either $a_jc$ or $ce_1$. However, neither of these are possible. In the first case, $a_j$ never collides with $d_1$ and in the latter case $e_1$ never collides with $b_k$. We conclude that the first collision cannot be a $bd$ collision.
The collision at time $t=2$ must be an $ac$ or $ce$ collision. Without loss of generality, we may assume it is $ac$. Therefore, there is only one $b$ and $b$ collides with $c_1$ at time $1$ or $3$. As in the case of three-step flag varieties, the dimension $N$ of the flag variety must be odd. Hence, at time $N-1$ we must have an $ac$, $bd$ or $ce$ collision. By symmetry, we see that the intersection at time $t=N-1$ cannot be a $bd$ collision, hence must be an $ac$ or $ce$ collision. If it were an $ac$ collision, then $b$ would have to intersect $c_l$ at time $N$ or $N-2$. Since the time it takes for $d_1$ to intersect $c_1$ is longer than the time it takes $b$ to intersect $c_l$ (as in the proof of Proposition \ref{prop-threeStep}), we conclude that the intersection at time $t = N-1$ must be $c_1e_p$ and there can be only one entry in the $D$-block.
We have so far concluded that the sequence looks like $(a_1, \dots, a_j|b|c_1, \dots, c_l|d|e_1, \dots, e_p)$. The first three intersections are one of
$$\mbox{Case I:} \ a_j b ,\ \ a_j c_1, \ \ b c_1 \quad \mbox{or} \quad \mbox{Case II:} \ b c_1, \ \ a_j c_1, \ \ a_j b$$ and the last three intersections are one of
$$\mbox{Case A:} \ d e_p, \ \ c_1 e_p, \ \ c_1 d \quad \mbox{or} \quad \mbox{Case B:} \ c_1 d, \ \ c_1 e_p, \ \ d e_p.$$
For the rest of the argument, we normalize positions so that $b=y$ and $d=-y$. Then at time $y$ we have the $bd$ collision at position $0$.
We first show that case IB is impossible. We compute $c_1 = y-3$, and the equality $$c_1 =c_1(N-2) = d(N-2)=d+N-2$$ gives $N = 2y-1$. Similarly, from $e_p(N) = d(N)$ we find $e_p = -3y+1$. As $a_j = y+1$, we conclude the intersection $a_je_p$ happens at time $y$. This contradicts that $b$ and $d$ already collide at time $y$. Symmetrically, case IIA is also impossible since the intersection $a_1e_1$ happens at time $y$.
We conclude that we must be in Case IA or Case IIB. These cases are the same under duality (see \S\ref{ssec-symmetryDuality}). Consequently, it suffices to analyze Case IA.
In Case IA, we first prove that at time $t=0$ the sequence must look like
\begin{equation}\tag{$\ast$} \cdots a_j \ b \ \times \times \ c_1 \ \times \times \times \ c_2 \ \times \times \times \dots \times \times \times \times \times \ d \ \times \times \times \times \times \cdots ,\end{equation}
where $\times$ denotes a position unoccupied by any entries $a,b,c,d,e$. At time $t=1,2,3$, the collisions are $a_j b, a_j c_1$ and $b c_1$. Since at these times $d$ is at positions $-y+1, -y+2, -y+3$, there cannot be any entries of the $C$-block in these positions. Furthermore, there cannot be any entries of the $C$-block in position $y-5$ because $a_j$ is at position $y-5$ at time $t=3$ while the collision $bc_1$ occurs. By Lemma \ref{lem-congruence}, the distance between the entries in the $C$-block are even. Hence, there cannot be any entries of the $C$-block at positions $y-2\alpha$ for any integer $\alpha$. At time $t=4$, a collision of the form $ac$, $bd$, or $ce$ must occur. Since $e_1$ has not passed through $d$ and $b(4)>a_j(4)>d(4)$, the only possible collision is $a_jc_2$. Hence, $c_2=y-7$.
As in case IB, we now compute that the dimension of the flag variety is $N=2y-3$. At times $N-2$, $N-1$, and $N$, the collisions are specified and do not involve $b$. At these times $b$ is at positions $-y+5, -y+4, -y+3$. Hence, we conclude that there cannot be any entries of the $C$-block at positions $-y+1, \ldots, -y+5$. Since $e_p(N-2) = d(N-2)$ we have $e_p = -3y+5$. The $c_2d$ collision happens at time $2y-7$, and $e_p(2y-7) = y-9$, so there cannot be a $c$ at position $y-9$ either. Combining all these claims, we recover the claimed shape of the sequence at time $t=0$.
Next consider the position of $e_1$. Since there are intersections for times $1\leq t \leq 4$, we find $e_1\leq -y-5$. To complete the claim that the entries of the partition are as in $(\ast)$, we only need to show that in fact $e_1< -y-5$. So suppose $e_1 = -y-5$. Since $c_2 = y-7$, the intersection $c_2e_1$ occurs at time $y-1$. However, since $a_j = y+1$ and $e_p = -3y+5$, the intersection $a_j e_p$ also occurs at time $y-1$. This contradiction shows $e_1< -y-5$, and establishes the pattern $(\ast)$ for the entries.
We finally obtain a contradiction by showing there is no possible collision at time $t=5$. The collision cannot be a $de$ collision since $e_1<-y-5$. Since no entry has passed through $d$ before time $5$, the collision at time $t=5$ cannot involve an $e$.
Since there are no $c$ entries at position $-y+5$, the collision cannot be of the form $cd$. Since there are no $c$ entries at position $y-5$, the collision cannot be of the form $bc$. Since $b$ has not passed through $c_2$, the collision cannot be of the form $bd$. Since there are no $c$ entries in position $y-9$, the collision cannot be $a_j c$. Since the entries in the $A$-block are at least 12 apart, the collision cannot be $a_{j-1} c_1$. We conclude that the collision is not of the form $ac$. For the same reason, the collision cannot be of the form $ab$.
This only leaves the possibility of the collision being between $a_1$ and $d$. This implies $y=7$, so $c_2=0$ and there is a triple intersection by $b$, $c_2$, and $d$ at time $7$. This concludes the proof that there cannot be an Ulrich partition for four-step flag varieties.
\end{proof}
Theorem \ref{thm-higherStep} follows immediately from Propositions \ref{prop-fiveStep}, \ref{prop-threeStep} and \ref{prop-fourStep}.
\section{Results on two-step flag varieties}\label{sec-overview2step}
The rest of the paper is entirely combinatorial and will focus on the classification of Ulrich partitions with three parts, corresponding to Ulrich bundles $E_\lambda$ on two-step flag varieties with respect to $\OO(1)$. In this section, we state our classification results, and outline the plan for the remainder of the paper.
From now on, $F(p,q;v)$ denotes a two-step flag variety.
Let $(\alpha,\beta,\gamma)= (p, q-p, v-q)$ be the type of a partition. We present two different flavors of classification results. We classify Ulrich partitions where $\beta \leq 2$ but $\alpha$ and $\gamma$ are arbitrary. We then also classify Ulrich partitions where $\alpha \leq 2$ and $\gamma \leq 2$ but $\beta$ is arbitrary. We begin by analyzing the simplest case of $(1,n,1)$ partitions, which correspond to Schur bundles on $F(1, n; n+1)$.
\begin{theorem}\label{thm-1n1}
There are $2^n$ Ulrich partitions of type $(1,n,1)$ for any $n\geq 1$ (up to equivalence).
\end{theorem}
\begin{proof}Indeed, suppose $(a|b_1,\ldots,b_n|c)$ is Ulrich. Since $a$ and $c$ have the same parity, we may normalize $a = y+1$, $c = -y-1$ for some integer $y\geq 0$. At time $y+1$, we have the $ac$ intersection. Before time $y+1$, every intersection is of type $ab$ or $bc$. Thus, for every position $p\in [y]$, exactly one of $p$ or $-p$ is in $B=\{b_1,\ldots,b_n\}$. We conclude $y = n$, and if $B'\subset [n]$ is any subset then there is a unique Ulrich partition $(n+1|B|{-n-1})$ with $B\cap [n] = B'$. Thus equivalence classes of Ulrich partitions of type $(1,n,1)$ are in bijective correspondence with subsets of $[n]$.
\end{proof}
\begin{example}
The Ulrich partition $(5|3,-1,-2,-4|{-5})$ corresponding to $B' = \{3\}\subset [4]$ in the previous construction was already studied in Example \ref{ex-combToAG}.
\end{example}
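The bookkeeping in the proof of Theorem \ref{thm-1n1} is purely mechanical, so it can be checked by a short script. The following is a minimal sketch (the helper name \texttt{is\_ulrich} is ours, not part of the paper); it assumes the meeting times $a-b$, $\tfrac12(a-c)$, and $b-c$ coming from the evolution $A(t)=A-t$, $B(t)=B$, $C(t)=C+t$, and that a partition of type $(\alpha,\beta,\gamma)$ has dimension $\alpha\beta+\alpha\gamma+\beta\gamma$.

```python
from itertools import combinations

def is_ulrich(A, B, C):
    """Check the Ulrich condition: all pairwise meeting times are exactly 1..N."""
    if any((a - c) % 2 for a in A for c in C):
        return False                                   # a's and c's must share a parity
    times = ([a - b for a in A for b in B]             # a(t) = a - t meets b
             + [(a - c) // 2 for a in A for c in C]    # a - t = c + t
             + [b - c for b in B for c in C])          # b = c + t
    N = len(A)*len(B) + len(A)*len(C) + len(B)*len(C)
    return sorted(times) == list(range(1, N + 1))

# Type (1,n,1): for each subset B' of [n], take exactly one of p, -p for each p.
n = 4
count = 0
for r in range(n + 1):
    for Bp in combinations(range(1, n + 1), r):
        B = sorted(set(Bp) | {-p for p in range(1, n + 1) if p not in Bp},
                   reverse=True)
        count += is_ulrich([n + 1], B, [-n - 1])
print(count)  # -> 16 = 2^4
```

Running it for $n=4$ confirms the count $2^4=16$ of Theorem \ref{thm-1n1}, and the partition $(5|3,-1,-2,-4|{-5})$ of Example \ref{ex-combToAG} is among the partitions found.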
We next describe the small $\beta$ case, where $\beta\leq 2$. There are new one- and two-parameter infinite families of examples; let $m_1,m_2\geq 0$ be integers and define $$k_i = \sum_{j=0}^{m_i} 4^j = \frac{4^{m_i+1}-1}{3}.$$ These sums of powers of $4$ will, somewhat surprisingly, appear in many of the examples. We prove the next result in \S \ref{sec-sumset} by recasting it as a problem about sumsets.
\begin{theorem}\label{thm-beta1intro}
For every $m_1\geq 0$, there is a unique Ulrich partition of type $(2,1,k_1)$. Up to symmetry, any Ulrich partition of type $(\alpha,1,\gamma)$ is of this form.
\end{theorem}
The $\beta = 2$ case is more difficult, and occupies \S \ref{sec-beta2}.
\begin{theorem}\label{thm-beta2intro}
There are Ulrich partitions of the following types:
\begin{enumerate}
\item\label{case222} $(2,2,2)$,
\item\label{case223} $(2,2,3)$,
\item\label{case12k} $(1,2,k_1)$ for any $m_1\geq 0$, and
\item\label{case12kk} $(1,2,k_1+k_2)$ for any $m_1,m_2\geq 0$.
\end{enumerate}
Unless the partition is of type $(1,2,2)$ or $(2,2,2)$, it is unique up to taking the symmetric dual.
Up to symmetry, any Ulrich partition of type $(\alpha,2,\gamma)$ is one of the above examples.
\end{theorem}
We next turn our attention to cases where $\alpha$ and $\gamma$ are small and $\beta$ is arbitrary. We first classify all partitions of type $(1,n,2)$ in \S\ref{sec-1n2}.
\begin{example}\label{example-1n2}
There is a fundamental example of this type, given by the partition $$(a|b_1,\ldots,b_n|c_1,c_2) = (2n+1|n-1,n-2,\ldots,1,0|{-1},-2n-3).$$
Then
\begin{itemize}
\item for times $t\in [1,n]$, the $c_1$ entry meets the $B$-block;
\item at time $t = n+1$, $a$ meets $c_1$;
\item for times $t\in [n+2,2n+1]$, $a$ meets the $B$-block;
\item at time $t=2n+2$, $a$ meets $c_2$; and
\item for times $t\in [2n+3,3n+2]$, $c_2$ meets the $B$-block.
\end{itemize}
Therefore the partition is Ulrich.
In Figure \ref{fig-132} we present the time evolution diagram of the Ulrich partition $(7|2,1,0|{-1},-9)$ of type $(1,3,2)$.
\begin{figure}[htbp]
\input{132Ex.pstex_t}
\caption{Time evolution diagram of the Ulrich partition $(7|2,1,0|{-1},-9)$ of type $(1,3,2)$.}\label{fig-132}
\end{figure}
\end{example}
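The intersection schedule above can be confirmed uniformly in $n$; a quick sketch (ours) under the same meeting-time conventions, namely that $a$ meets $b$ at time $a-b$, $a$ meets $c$ at time $\tfrac12(a-c)$, and $b$ meets $c$ at time $b-c$:

```python
# Check the fundamental type-(1,n,2) example for a range of n.
def times(a, B, C):
    return sorted([a - b for b in B]
                  + [(a - c) // 2 for c in C]
                  + [b - c for b in B for c in C])

for n in range(1, 20):
    a, B, C = 2*n + 1, list(range(n)), [-1, -2*n - 3]  # B = {n-1, ..., 1, 0}
    N = n + 2 + 2*n                                    # dimension for type (1,n,2)
    assert times(a, B, C) == list(range(1, N + 1))     # Ulrich: times are 1..N
print("fundamental (1,n,2) examples verified for n = 1..19")
```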
\begin{theorem}\label{thm-1n2intro}
There is a bijective correspondence between Ulrich partitions of type $(1,n,2)$ and decompositions $$n = 2km+m-1 \qquad (k\geq 0, m>0)$$ of the integer $n$.
\end{theorem}
The fundamental example of a type $(1,n,2)$ partition corresponds to the trivial decomposition of $n$ given by $k=0$ and $m=n+1$.
For example, when $n=4$, $k=2$, and $m=1$ we will obtain a partition of type $(1,4,2)$ different from the fundamental example.
Finally, we classify all partitions of type $(2,n,2)$ in \S \ref{sec-2n2}.
\begin{theorem}\label{thm-2n2intro}
If there is an Ulrich partition of type $(2,n,2)$, then $n$ is even. If $n$ is even, there are exactly two partitions of this type, and they are symmetric to each other.
\end{theorem}
\subsection{Conjectures}
Our classification of Ulrich partitions for two-step flag varieties remains incomplete. However, we conjecture that we have found all the examples:
\begin{conjecture}\label{conj-fullList}
If $(\alpha,\beta,\gamma)$ is the type of an Ulrich partition, then up to symmetry this type is one of
$$(1,n,1),\quad
(1,n,2),\quad
(2,2n,2),\quad
(2,2,3),\quad
(2,1,k_1),\quad
(1,2,k_1),\quad\mbox{or}\quad
(1,2,k_1+k_2)$$
where $n$ denotes a positive integer and $k_i$ is a number of the form $(4^{m_i+1}-1)/3$ for some $m_i\geq 0$.
\end{conjecture}
Our partial classification implies that Conjecture \ref{conj-fullList} is equivalent to the following a priori weaker statement.
\begin{conjecture}\label{conj-simple}
There are no Ulrich partitions of type $(\alpha,\beta,\gamma)$ if $\beta\geq 3$ and $\gamma \geq 3$.
\end{conjecture}
Indeed, suppose Conjecture \ref{conj-simple} is true. If $\beta\leq 2$, then Theorems \ref{thm-beta1intro} and \ref{thm-beta2intro} completely classify Ulrich partitions of type $(\alpha,\beta,\gamma)$. So suppose $\beta \geq 3$. If $\alpha \leq 2$ and $\gamma\leq 2$, then Theorems \ref{thm-1n1}, \ref{thm-1n2intro}, and \ref{thm-2n2intro} completely classify Ulrich partitions of type $(\alpha,\beta,\gamma)$. If instead either $\alpha \geq 3$ or $\gamma\geq 3$, then by symmetry we may assume $\gamma\geq 3$ and therefore there are no Ulrich partitions of type $(\alpha,\beta,\gamma)$. Therefore Conjecture \ref{conj-simple} implies Conjecture \ref{conj-fullList}.
\begin{remark}
As evidence for Conjecture \ref{conj-simple}, we performed a brute-force computer search to verify that if there is an Ulrich partition of type $(\alpha,\beta,\gamma)$ with $\beta\geq 3$ and $\gamma \geq 3$, then $\alpha + \beta +\gamma \geq 13$.
We note that Conjecture \ref{conj-simple} is open even in the special case $\alpha=1$.
\end{remark}
\section{Sumsets and Ulrich partitions of type $(\alpha,1,\gamma)$}\label{sec-sumset}
In this section we produce a new class of examples of Ulrich partitions and classify all the types $(\alpha,\beta,\gamma)$ of an Ulrich partition where $\beta = 1$.
\begin{theorem}\label{thm-sumset}
For every $m\geq 0$, there is a unique Ulrich partition of type $(2,1,k)$, where $k = (4^{m+1}-1)/3$. Up to symmetry, any Ulrich partition of type $(\alpha,1,\gamma)$ is of this form.
\end{theorem}
\begin{proof}
We first rephrase our combinatorial problem to make it more tractable in this case. Given a partition $$(a_1,\ldots,a_\alpha|b|c_1,\ldots,c_\gamma)$$ we first normalize $b=0$ (so the $c_j$ are all negative). The entry $a_i$ meets $b=0$ at time $t= a_i$ and meets $c_j$ at time $\frac{1}{2}(a_i-c_j)$, while $c_j$ meets $b$ at time $t=-c_j$. In order for the $a$'s to meet the $c$'s at integral times, they must all have the same parity. Furthermore, they must all be odd, for otherwise at time $t=1$ no two entries would meet.
Let $A =\{a_1,\ldots,a_\alpha\}$ and $C = \{-c_1,\ldots,-c_\gamma\}$. Then the Ulrich condition says precisely that the sumset $A+C:= \{a+c:a\in A,c\in C\}$ has size $\alpha\gamma$ and the set $[N]$ of times is a disjoint union $$ [N] = A\amalg C \amalg \tfrac{1}{2}(A+C).$$
Equivalently, by subtracting $1$ from every element of $A$ and $C$ and putting $N' = N-1$, we see that the existence of an Ulrich partition of type $(\alpha,1,\gamma)$ is equivalent to the existence of subsets of even numbers $A',C' \subset [0,N']$ of sizes $\alpha$ and $\gamma$ such that $[0,N']$ is a disjoint union $$[0,N'] = A'\amalg C' \amalg \tfrac{1}{2}(A'+C').$$ The numerics force $A'+C'$ to have size $\alpha\gamma$---no repetition can occur in the sumset. This decomposition is the problem we will study for the rest of the proof.
We now claim that if $(\alpha,1,\gamma)$ is the type of an Ulrich partition then at least one of $\alpha,\gamma$ is $2$. Suppose $A',C'\subset [0,N']$ correspond to such an Ulrich partition, and put $S' = \frac{1}{2}(A'+C')$. Since $A'$ and $C'$ are disjoint, we have $0\notin S'$, and therefore either $0\in A'$ or $0\in C'$. By symmetry we may assume $0\in A'$. We must have $1\in S'$ since $A',C'$ consist of even numbers, and this forces $2\in C'$.
Next, $N'$ itself must lie in $A'$ or $C'$: an element of $S'$ is the average of two distinct elements of $A'\cup C'$, each at most $N'$, and hence is at most $N'-1$. In particular, $N'$ is even. If $N'\in C'$ then since $N'-1$ must be in $S'$ we have to have $N'-2\in A'$. This is a contradiction since $0+N' = 2+(N'-2)$ gives a repetition in the sumset $A'+C'$. We conclude $N'\in A'$ and similarly $N'-2\in C'$.
We next show by induction that if $k$ is an integer with $k\geq 0$ and $4k+2<N'$ then $4k+2\in C'$. We already know $2\in C'$. Suppose $4k+2\in C'$ for all $k\in [0,k_0)$. Then $4k+2 \notin A'$ for all $k\in [0,k_0)$. Furthermore, $4k+4\notin A'$ for all $k\in [0,k_0)$, for if $4k+4\in A'$ then $(4k+4)+(N'-2) = N'+(4k+2)$ is a repetition in the sumset. Thus $A'$ contains no numbers in the interval $(0,4k_0]$. The odd number $2k_0+1$ must be in $S'$, but since every element of $C'$ is $\geq 2$ while the only nonzero elements of $A'$ are at least $4k_0+2$ the only possibility is that $4k_0+2\in C'$. By induction, $C'$ must contain all numbers in $(0,N')$ which are $2 \pmod{4}$. This argument also shows that $A' = \{0,N'\}$, i.e. $\alpha=2$.
Suppose $N'$ is not divisible by $4$. Reversing the argument in the previous paragraph, we see that $C'$ must consist precisely of all the even numbers in $(0,N')$. In particular, since $N'>4$ we have $4\in C'$, so $2\in S'$; this contradicts that $2\in C'$. Therefore $4$ divides $N'$.
At this point we conclude that the sets $A',C'\subset [0,N']$ must have a recursive structure. Let $$N'' = N'/4 \qquad A'' = \{0,N''\} \qquad C''=\{c:4c\in C'\} \qquad C''' = \{c\in [0,N']:c\equiv 2 \pmod 4\},$$ so that $C' = 4C'' \cup C'''$. Then $$C''' \cup \tfrac{1}{2}(A'+C''') = \{d\in [0,N']:d\not\equiv 0 \pmod 4\};$$ furthermore, this union is disjoint and there is no repetition in the sumset. It then follows that the sets $A',C'$ correspond to an Ulrich partition if and only if $$[0,N''] = A'' \amalg C'' \amalg \tfrac{1}{2}(A''+C'').$$ That is, either $A'',C''$ correspond to an Ulrich partition or $C''$ is empty, $N'' = 1$, and $A'' = \{0,1\}$, in which case we originally had $A' = \{0,4\}$ and $C' = \{2\}$.
Since $C''$ always has fewer elements than $C'$, we conclude by induction. We may assume $C''$ has $(4^m-1)/3$ elements for some $m\geq 1$. Then $$N'' +1=|A''|+|C''|+|A''||C''|=2+\frac{(4^{m}-1)}{3}+\frac{2(4^m-1)}{3}=4^m+1,$$ so $N'' = 4^m$ and $N' = 4^{m+1}$. From this we similarly conclude $|C'| = (4^{m+1}-1)/3$, completing the proof.
\end{proof}
\begin{example}
By studying the proof of Theorem \ref{thm-sumset}, it is straightforward to recover the Ulrich partitions from the sumset construction. As an example, let us find the Ulrich partition of type $(2,1,5)$, corresponding to $m=1$. Then $N = 17$, so $N' = 16$, $A' = \{0,16\}$, and $C' = \{2,6,8,10,14\}$. Correspondingly, the Ulrich partition is $(17,1|0|{-3},-7,-9,-11,-15)$. We give the time evolution diagram of this partition in Figure \ref{fig-215}.
\begin{figure}[htbp]
\input{215Ex.pstex_t}
\caption{Time evolution diagram of the Ulrich partition of type $(2,1,5)$.}\label{fig-215}
\end{figure}
\end{example}
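The recursion $N' = 4N''$, $C' = 4C''\cup C'''$ from the proof of Theorem \ref{thm-sumset} makes the sets $A',C'$ easy to generate. The following sketch (function name ours) builds $C'$ for small $m$ and verifies the disjoint-union decomposition $[0,N'] = A'\amalg C'\amalg \tfrac12(A'+C')$:

```python
def cprime(m):
    """N' and C' for the type (2,1,(4^(m+1)-1)/3) partition, via the recursion."""
    if m == 0:
        return 4, {2}                                  # base case: A' = {0,4}, C' = {2}
    Npp, Cpp = cprime(m - 1)
    Np = 4 * Npp
    return Np, {4 * c for c in Cpp} | set(range(2, Np, 4))

for m in range(5):
    Np, Cp = cprime(m)
    Ap = {0, Np}
    S = {(a + c) // 2 for a in Ap for c in Cp}         # (A' + C')/2
    assert len(Cp) == (4**(m + 1) - 1) // 3
    # [0, N'] is the disjoint union of A', C', and (A'+C')/2:
    assert Ap | Cp | S == set(range(Np + 1))
    assert len(Ap) + len(Cp) + len(S) == Np + 1
print("sumset decompositions verified for m = 0..4")
```

For $m=1$ this reproduces $N'=16$, $A'=\{0,16\}$, $C'=\{2,6,8,10,14\}$ as in the example above.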
\section{Ulrich partitions of type $(\alpha,2,\gamma)$}\label{sec-beta2}
In this section we completely classify Ulrich partitions of type $(\alpha,2,\gamma)$. There are many examples of such partitions, including a two-parameter infinite family and several sporadic examples. The existence of these examples complicates the classification.
\subsection{The greedy algorithm} Most of our arguments on 3-part Ulrich partitions will involve studying the following algorithm for the construction of Ulrich partitions. We normalize the time evolution so that at each step the entries of the three blocks change by $-1$, $0$, and $1$, respectively. Fix a finite set $B\subset \Z$, which we view as being the list of entries in the $B$-block of a hypothetical Ulrich partition. Let $A\subset \Z$ and $C\subset \Z$ be two finite sets such that $a>b>c$ holds for any $a\in A$, $b\in B$, $c\in C$. We write $$A(t) = A -t \qquad B(t) = B \qquad C(t) = C+t,$$ where the right-hand side is interpreted as a sumset, so that e.g. $A(t)$ gives the set of positions of the $A$-entries of $(A|B|C)$ at time $t$.
\begin{definition}
The partition $(A|B|C)$ is \emph{pre-Ulrich} if
\begin{enumerate}
\item all elements of $A$ and $C$ have the same parity, and
\item the set $T\subset \N_{>0}$ of times $t$ where two of $A(t), B(t),C(t)$ have a common entry has size $N$, the dimension of $(A|B|C)$.
\end{enumerate}
\end{definition}
\begin{remark} Write $(A'|B'|C')\subset (A|B|C)$ if $A'\subset A$, $B'\subset B$, and $C'\subset C$. Then if $(A|B|C)$ is pre-Ulrich, $(A'|B'|C')$ is pre-Ulrich. Clearly Ulrich partitions are pre-Ulrich; the Ulrich condition requires that in addition $T = [N]$ is a consecutive range of integers.\end{remark}
Given any pre-Ulrich triple $(A|B|C)$, we introduce two operations for enlarging the $A$ or $C$ part of the triple.
\begin{definition}
Let $(A|B|C)$ be a pre-Ulrich partition. Let $t_0\in \N_{>0}$ be the smallest time not in $T$.
\begin{enumerate}
\item Put $a_0(t_0) := \max(B(t_0) \cup C(t_0))$ and $a_0 := a_0(t_0)+t_0$. The triple $(A\cup \{a_0\} |B|C)$ is \emph{obtained by adding a new $a$}.
\item Put $c_0(t_0)=\min(A(t_0)\cup B(t_0))$ and $c_0 = c_0(t_0)-t_0$. The triple $(A|B|C\cup \{c_0\})$ is \emph{obtained by adding a new $c$}.
\end{enumerate}
\end{definition}
\begin{warning}
It is not generally the case that the triple obtained from a pre-Ulrich triple by adding a new $a$ or new $c$ is again pre-Ulrich, since the new entry may meet old entries at times after $t_0$ that already have intersections. See the proof of Lemma \ref{lem-BblockRestrict} for an example of this.
\end{warning}
The next result shows that an Ulrich partition $(A|B|C)$ can always be obtained from $(\emptyset|B|\emptyset)$ by greedily adding $a$'s and $c$'s.
\begin{proposition}
If $(A|B|C)$ is an Ulrich triple, then it can be obtained by repeatedly adding new $a$'s and $c$'s to the triple $(\emptyset|B|\emptyset)$. Furthermore, the sequence of $a$'s and $c$'s which are added is uniquely determined.
\end{proposition}
\begin{proof}
The sequence of $a$'s and $c$'s to add to $(\emptyset|B|\emptyset)$ is readily determined by the pattern of intersections of the Ulrich partition. Let $(A'|B|C') \subset (A|B|C)$ be any triple. Since $(A|B|C)$ is Ulrich, $(A'|B|C')$ is pre-Ulrich. Let $t_0\in \N_{>0}$ be the smallest time where $(A'|B|C')$ does not have an intersection, and consider the intersection occurring in $(A|B|C)$ at time $t_0$.
First suppose the intersection at time $t_0$ occurs between the $A$- and $B$- blocks, between an entry $a_0\in A$ and $b_0\in B$, so that $a_0(t_0)= b_0(t_0)$. Then $a_0\notin A'$, since otherwise $(A'|B|C')$ has an intersection at time $t_0$. No intersections between $a_0$ and an entry of $B$ or $C'$ have occurred before time $t_0$ since all these times already have intersections from $(A'|B|C')$. In order for the intersections between $a_0$ and the entries of $B$, $C'$ to occur at time $t_0$ or later, we must have $a_0(t_0) \geq b(t_0)$ and $a_0(t_0) \geq c(t_0)$ for all $b\in B$ and $c\in C'$, and we conclude $a_0(t_0) = \max(B(t_0)\cup C'(t_0))$. The triple obtained from $(A'|B|C')$ by adding a new $a$ is precisely $(A'\cup \{a_0\}|B|C')\subset (A|B|C)$.
If instead the intersection at time $t_0$ occurs between the $B$- and $C$-blocks, a symmetric argument shows that the triple $(A'|B|C'\cup \{c_0\})$ obtained by adding a new $c$ is contained in $(A|B|C)$.
Finally suppose the intersection at time $t_0$ occurs between the $A$- and $C$-blocks, and let $a_0\in A$ and $c_0\in C$ be such that $a_0(t_0) = c_0(t_0)$. By the choice of $t_0$, it cannot be the case that both $a_0\in A'$ and $c_0\in C'$; we claim that in fact exactly one of $a_0\in A'$ or $c_0\in C'$ holds. Suppose $a_0\notin A'$. As before, this implies that $a_0(t_0) \geq b(t_0)$ and $a_0(t_0) \geq c(t_0)$ for all $b\in B$ and $c\in C'$, so $a_0(t_0) = \max(B(t_0)\cup C'(t_0))$. In fact, $a_0(t_0)>b(t_0)$ for all $b\in B$ since $a_0$ has not yet met the $B$-block at time $t_0$; fix some $b_0\in B$. Since $c_0(t_0)=a_0(t_0) >b_0(t_0)$, the intersection between $c_0$ and $b_0$ must have occurred at a time before $t_0$. This implies $c_0\in C'$. Thus if $a_0\notin A'$, we conclude that the triple obtained from $(A'|B|C')$ by adding a new $a$ is $(A'\cup \{a_0\}|B|C')\subset (A|B|C)$; similarly, if $c_0\notin C'$, then the triple obtained from $(A'|B|C')$ by adding a new $c$ is $(A'|B|C'\cup \{c_0\})\subset (A|B|C)$.
Starting from $(\emptyset|B|\emptyset)$, we can now construct a chain
$$(\emptyset|B|\emptyset) \subset (A_1|B|C_1)\subset \cdots \subset (A_n|B|C_n) = (A|B|C)$$ of pre-Ulrich partitions where each triple differs from the previous one by adding an $a$ or a $c$. Uniqueness is clear.
\end{proof}
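The proof above is constructive, so the greedy reconstruction is easy to implement. The sketch below (helper names ours) assumes the meeting times $a-b$, $\tfrac12(a-c)$, and $b-c$, and recovers the word of added $a$'s and $c$'s for two Ulrich partitions appearing later in this section:

```python
def intersection_times(A, B, C):
    """Pairwise meeting times: A moves left, B is fixed, C moves right."""
    return ([a - b for a in A for b in B]
            + [(a - c) // 2 for a in A for c in C]
            + [b - c for b in B for c in C])

def greedy_word(A, B, C):
    """Recover the unique sequence of added a's and c's for an Ulrich (A|B|C)."""
    Ap, Cp, word = [], [], ""
    while len(Ap) + len(Cp) < len(A) + len(C):
        T = set(intersection_times(Ap, B, Cp))
        t0 = next(t for t in range(1, 10**4) if t not in T)  # smallest missing time
        a0 = max(B + [c + t0 for c in Cp]) + t0              # candidate new a
        c0 = min(B + [a - t0 for a in Ap]) - t0              # candidate new c
        if a0 in A:
            Ap.append(a0); word += "a"
        else:
            assert c0 in C, "not reachable greedily"
            Cp.append(c0); word += "c"
    return word

print(greedy_word([12, 4], [3, 0], [-2, -8]))       # -> acca   (type (2,2,2))
print(greedy_word([16, 10, 4], [3, 0], [-2, -12]))  # -> acaca  (type (3,2,2))
```

The two recovered words match the sequences described in Examples \ref{ex222} and \ref{ex322} below.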
\subsection{Duality and the trapezoid rule} The greedy algorithm will allow us to determine the structure of an Ulrich partition $(A|B|C)$ for early times $t$. Recall that the dual $$(A|B|C)^* := (A^*|B^*|C^*) := (C(N+1)|B(N+1)|A(N+1))$$ is also an Ulrich partition (see \S\ref{ssec-symmetryDuality}); this fact will allow us to determine the structure of an Ulrich partition at late (i.e. close to $N$) times. The trapezoid rule is a simple observation which will allow us to combine information about a partition and its dual to say something about middle (i.e. close to $N/2$) times. This is our principal tool for classifying Ulrich partitions of type $(\alpha,\beta,\gamma)$ with $\beta = 2$.
\begin{observation}[Trapezoid rule]\label{obsTrapezoid}
Let $(A|B|C)$ be an Ulrich partition and $(A^*|B^*|C^*)$ its dual. If there exist $a\in A$, $a^*\in A^*$, $c\in C$, and $c^*\in C^*$ such that $a^*-a = c-c^*$, then $c(N+1) = a^*$ and $a(N+1) = c^*$. In particular, $N+1=a^*-c=a-c^*$.
\end{observation}
\begin{proof}
The assumptions give that $c':=a^*-(N+1)\in C$ and $a':=c^*+(N+1)\in A$. Then $a$ meets $c'$ at time $\frac{1}{2}(a-c')=\frac{1}{2}(a-a^*+(N+1))$ and $a'$ meets $c$ at time $\frac{1}{2}(a'-c)=\frac{1}{2}(c^*-c+(N+1))$. The hypothesis is that these times are equal; thus since $(A|B|C)$ is Ulrich we must have $a=a'$ and $c=c'$.
\end{proof}
The name of the trapezoid rule comes from the following geometric interpretation using time evolution diagrams. Suppose we can find $a,a^*,c,c^*$ as in the trapezoid rule. View $a$ and $c$ as being entries of $(A|B|C)$ at time $0$, and view $a^*$ and $c^*$ as being entries of $(C|B|A)$ at time $N+1$. The plane trapezoid with vertices at $(a,0)$, $(c,0)$, $(c^*,N+1)$, and $(a^*,N+1)$ is horizontally symmetric by assumption. The conclusion of the trapezoid rule is that this trapezoid has diagonals which meet orthogonally. See Figure \ref{fig-trapezoid}.
\begin{figure}[htbp]
\input{trapezoid.pstex_t}
\caption{Graphical depiction of the trapezoid rule. If $(A|B|C)$ is Ulrich and $a\in A,$ $a^*\in A^*,$ $c\in C,$ and $c^*\in C^*$ can be chosen such that the trapezoid displayed above is horizontally symmetric, then the diagonals meet orthogonally. Equivalently, $N+1 = a^*-c = a-c^*$.}\label{fig-trapezoid}
\end{figure}
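As a quick numerical illustration (ours, not part of the argument), take the Ulrich partition $(8,6|5,0|{-2})$, which has $N=8$; its dual works out to $(7|5,0|{-1},-3)$, and the choice $a=6$, $a^*=7$, $c=-2$, $c^*=-3$ satisfies the symmetry hypothesis of the trapezoid rule:

```python
# Trapezoid rule on the Ulrich partition (8,6|5,0|-2), which has N = 8.
A, B, C, N = [8, 6], [5, 0], [-2], 8
Astar = [c + N + 1 for c in C]            # dual A-block: C(N+1) = [7]
Cstar = [a - (N + 1) for a in A]          # dual C-block: A(N+1) = [-1, -3]

a, astar, c, cstar = 6, 7, -2, -3
assert astar - a == c - cstar             # the trapezoid is horizontally symmetric
assert N + 1 == astar - c == a - cstar    # conclusion of the trapezoid rule
print("trapezoid rule verified: N + 1 =", astar - c)
```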
See \S\ref{diff5} for the first simple application of this rule; more intricate applications will be discussed throughout the rest of the section. We also point out one trivial rule which excludes many combinations of configurations of intersections for the early and late times.
\begin{observation}[Rectangle rule]\label{obsRectangle} Let $(A|B|C)$ be Ulrich, and let $(A^*|B^*|C^*)$ be its dual. It is not possible that $A$ and $A^*$ share (at least) two entries.
\end{observation}
\begin{proof}
The hypotheses imply that there are two entries in $A$ with the same gap between them as two entries in $C$; there will necessarily be a multiple intersection at some time.
\end{proof}
Of course, by symmetry, an analogous version of the rectangle rule holds for $C$ and $C^*$.
\subsection{Restricting the $B$-block} Suppose $(A|b_1,b_2|C)$ is an Ulrich partition. Our first result restricts the gap $b_1-b_2$ in the $B$-block.
\begin{lemma}\label{lem-BblockRestrict}
If $(A|b_1,b_2|C)$ is an Ulrich partition then $$b_1-b_2\in \{1,3,5\}.$$
\end{lemma}
\begin{proof}
First let us show that $b_1-b_2$ is odd. By way of contradiction, suppose $b_1=k$ and $b_2=-k$. We consider what happens when we add new $a$'s and $c$'s to the pre-Ulrich triple $(\emptyset|k,-k|\emptyset)$. Without loss of generality, we first add a new $a$, giving the triple $(k+1|k,-k|\emptyset)$. This triple has intersections at times $1$ and $2k+1$; in particular there is no intersection at time $2$. If we add a new $a$ to this triple we get $(k+2,k+1|k,-k|\emptyset)$, while if we add a new $c$ we get $(k+1|k,-k|{-k-2})$. Neither triple is pre-Ulrich, since they both violate the parity requirement.
Next suppose that $b_1-b_2$ is odd and $\geq 7$. We consider adding $a$'s and $c$'s to $(\emptyset|k,-k-1|\emptyset)$. Without loss of generality, we first add an $a$ at time $1$ to get $(k+1|k,-k-1|\emptyset)$. The triple is no longer pre-Ulrich if we add an $a$ at time $2$, so we add a $c$ at time $2$ and get $(k+1|k,-k-1|{-k-3})$. At times $3$ and $4$ (which do not have intersections yet since $k\geq 3$) the parity condition implies we must add an $a$ and then another $c$, yielding $$(k+3,k+1|k,-k-1|{-k-3},-k-5).$$ This partition is no longer pre-Ulrich since both the $A$- and $C$-blocks have entries which are $2$ apart; there is a multiple intersection at time $k+3$.
\end{proof}
The classification varies considerably according to the value of $b_1-b_2$. We consider each case separately in the next several subsections.
\subsection{Classification when $b_1-b_2=5$}\label{diff5} This is the easiest case to classify. For this subsection we fix $B = \{5,0\}$, let $(A|B|C)$ be an Ulrich partition, and assume that the intersection at time $t=1$ occurs between the $A$- and $B$-blocks, so $6\in A$. After these choices, we prove there is a unique such partition.
\begin{example}[The (2,2,1) partition]\label{ex-221}
The partition $(8,6|5,0|{-2})$ is Ulrich. It is obtained from $(\emptyset|5,0|\emptyset)$ by adding an $a$, then a $c$, then an $a$. See Figure \ref{fig-221}.
\begin{figure}[htbp]
\input{221Ex.pstex_t}
\caption{Time evolution diagram of the Ulrich partition $(8,6|5,0|{-2})$ of type $(2,2,1)$.}\label{fig-221}
\end{figure}
\end{example}
\begin{lemma}\label{diff5contain}
The partition $(A|B|C)$ contains $(8,6|5,0|{-2})$.
\end{lemma}
\begin{proof}
Consider the sequence of $a$'s and $c$'s which must be added to $(\emptyset|5,0|\emptyset)$ to produce $(A|B|C)$. By assumption, at time $t=1$ an $a$ must be added, giving $(6|5,0|\emptyset)$. By parity, at time $t=2$ a $c$ is added, giving $(6|5,0|{-2})$. Again by parity, at time $t=3$ an $a$ is added, yielding $(8,6|5,0|{-2})$ as required.
\end{proof}
The next proposition is now our first application of duality and the trapezoid rule.
\begin{proposition}
The partition $(A|B|C)$ equals $(8,6|5,0|{-2})$.
\end{proposition}
\begin{proof}
By Lemma \ref{diff5contain}, we have $(8,6|5,0|{-2})\subset (A|5,0|C)$. The dual partition $(A^*|B^*|C^*)$ is Ulrich. Thus, by Lemma \ref{diff5contain}, it contains either $(8,6|5,0|{-2})$ or the symmetric partition $(7|5,0|{-1},-3)$, according to whether the time $t=1$ intersection occurs between the first two or last two blocks. The first possibility is ruled out by the rectangle rule (Observation \ref{obsRectangle}). Thus it contains $(7|5,0|{-1},-3)$, and since $7-6 = -2-(-3)$ the trapezoid rule (Observation \ref{obsTrapezoid}) shows $N=8$. Since $(8,6|5,0|{-2})$ has dimension $8$, this implies $(A|B|C) = (8,6|5,0|{-2})$.
\end{proof}
\subsection{Classification when $b_1-b_2=3$}
The classification here is substantially more complicated than when $b_1-b_2=5$. We normalize $B=\{3,0\}$, and assume the first intersection occurs between the $A$- and $B$-blocks, so $4\in A$. There are three such examples of Ulrich partitions.
\begin{example}[The (1,2,1) partition]\label{ex121}
The partition $(4|3,0|{-2})$ is Ulrich. It is obtained from $(\emptyset|B|\emptyset)$ by adding an $a$ and then a $c$. This is an example of a partition obtained from Theorem \ref{thm-1n1}.
\end{example}
\begin{example}[The (2,2,2) partition]\label{ex222} The partition $(12,4|3,0|{-2},-8)$ is Ulrich. It is obtained from $(\emptyset|3,0|\emptyset)$ by adding $a,c,c,a$. See Figure \ref{fig-222}.
\begin{figure}[htbp]
\input{222Ex.pstex_t}
\caption{Time evolution diagram of the Ulrich partition $(12,4|3,0|{-2},-8)$ of type $(2,2,2)$.}\label{fig-222}
\end{figure}
\end{example}
\begin{example}[The (3,2,2) partition]\label{ex322}
The partition $(16,10,4|3,0|{-2},{-12})$ is Ulrich. It is obtained from $(\emptyset|3,0|\emptyset)$ by adding in order $a,c,a,c,a$. See Figure \ref{fig-322}.
\begin{figure}[htbp]
\input{322Ex.pstex_t}
\caption{Time evolution diagram of the Ulrich partition $(16,10,4|3,0|{-2},-12)$ of type $(3,2,2)$.}\label{fig-322}
\end{figure}
\end{example}
By comparison with the $b_1-b_2=5$ case, there is less forced structure to the early intersections for an Ulrich triple in this case. We let $\sigma\in \{a,c\}^*$ be the string of $a$'s and $c$'s which must be added to $(\emptyset|B|\emptyset)$ to yield the Ulrich triple $(A|B|C)$. We write $|\sigma|$ for the length of $\sigma$.
\begin{lemma}\label{lemInitialSegment}
If $\sigma \neq ac$ (i.e. if $(A|B|C)$ is not the type $(1,2,1)$ partition of Example \ref{ex121}), then $\sigma$ begins with one of the strings
\begin{enumerate}
\item $acaaa$, so that $(16,14,10,4|3,0|{-2})\subset (A|B|C)$,
\item $acaca$, so that $(16,10,4|3,0|{-2},-12)\subset (A|B|C)$,
\item $acca$, so that $(12,4|3,0|{-2},-8)\subset (A|B|C)$, or
\item $acccc$, so that $(4|3,0|{-2},-8,-10,-14)\subset (A|B|C)$.
\end{enumerate}
\end{lemma}
\begin{proof}
By assumption the first letter of $\sigma$ is $a$, and by parity the second letter must be $c$. Observe that the pre-Ulrich partitions $(10,4|3,0|{-2})$ and $(4|3,0|{-2},-8)$ corresponding to the strings $aca$ and $acc$ are both not Ulrich, so $|\sigma|\geq 4$. To prove the lemma, we must show that unless $\sigma$ begins with $acca$ then $|\sigma|\geq 5$ and the fifth letter in $\sigma$ is determined by the first four.
Suppose $\sigma$ begins with $acaa$. This gives $(14,10,4|3,0|{-2})\subset (A|B|C)$. The first time where $(14,10,4|3,0|{-2})$ has no intersection is time $t=9$, and this triple is not Ulrich so $|\sigma|\geq 5$. Adding a $c$ would yield $(14,10,4|3,0|{-2},-14)$, but this is not pre-Ulrich since there is a multiple intersection at time $t=14$. Thus $\sigma$ must begin with $acaaa$.
If instead $\sigma$ begins with $acac$, then $(10,4|3,0|{-2},-12)\subset (A|B|C)$, and this triple is not Ulrich so $|\sigma|\geq 5$. Adding a $c$ would yield $(10,4|3,0|{-2},-12,-14)$, which is not pre-Ulrich since there is a multiple intersection at time $t=12$. So, $\sigma$ begins with $acaca$.
Finally suppose $\sigma$ begins with $accc$, so $(4|3,0|{-2},-8,-10)\subset (A|B|C)$. This is not Ulrich, and adding an $a$ gives $(16,4|3,0|{-2},-8,-10)$. Considering time $13$ shows this is not pre-Ulrich, so $\sigma$ begins with $acccc$.
\end{proof}
We now treat each case of Lemma \ref{lemInitialSegment} in further detail.
\begin{proposition}\label{propaccc}
Case (4) of Lemma \ref{lemInitialSegment} never actually arises: if $(A|B|C)$ is an Ulrich partition normalized as in this subsection, then the word $\sigma$ cannot begin with $acccc$.
\end{proposition}
\begin{proof}
If $\sigma$ begins with $acccc$ then $(4|3,0|{-2},-8,-10,-14)\subset (A|B|C)$, so $N \geq 17$ since there is an intersection at time $17$ in this subpartition. We first study the dual Ulrich partition $(A^*|B^*|C^*)$. By Lemma \ref{lemInitialSegment}, this partition contains one of the following 8 partitions, of which the first 7 are easily ruled out.
\begin{enumerate}
\item $(16,14,10,4|3,0|{-2})$. In this case the equality $4-4=-2-(-2)$ and the trapezoid rule implies $N=5$, which is absurd.
\item $(5|3,0|{-1},-7,-11,-13)$. Here the equality $5-4=-10-(-11)$ and the trapezoid rule gives $N=14$, a contradiction.
\item $(16,10,4|3,0|{-2},-12)$. The same logic as in (1) applies.
\item $(15,5|3,0|{-1},-7,-13)$. This time the equality $15-4 = -2-(-13)$ and the trapezoid rule gives $N = 16$.
\item $(12,4|3,0|{-2},-8)$. Same as (1).
\item $(11,5|3,0|{-1},-9)$. The equality $5-4=-8-(-9)$ and the trapezoid rule gives $N=12$.
\item $(4|3,0|{-2},-8,-10,-14)$. Same as (1).
\item $(17,13,11,5|3,0|{-1}).$
\end{enumerate}
We conclude that $(A^*|B^*|C^*)$ must contain $(17,13,11,5|3,0|{-1})$. Note that the sequence of $a$'s and $c$'s which must be added to $(\emptyset|B|\emptyset)$ to arrive at this subpartition is the sequence $caaaa$.
As a first observation, we find $|A|\geq 2$: if $|A|=1$, then $C^* = A(N+1)$ consists of the single entry $4-(N+1)$, while $-1\in C^*$, forcing $N=4$, which is absurd. Thus there is some $k\geq 4$ such that $\sigma$ begins with $ac^ka$. By way of contradiction, let $k\geq 4$ be minimal such that there is an Ulrich partition $(A|B|C)$ such that the corresponding word $\sigma$ begins with $ac^ka$. It follows from minimality and symmetry that the word $\sigma^*$ corresponding to the dual partition $(A^*|B^*|C^*)$ at least begins with $ca^k$.
We now compute the partition $(4|3,0|C_k)$ ($k\geq 1$) obtained from $(\emptyset|B|\emptyset)$ by adding the letters of the word $ac^k$. We label the elements of $C_k$ in decreasing order as $C_k = \{c_1,\ldots,c_k\}$, so $C_{k+1} = C_k \cup \{c_{k+1}\}$. We have $c_1 = -2$. Let $T_k\subset \N_{>0}$ be the set of times which are \emph{not} intersection times for the partition $(4|3,0|C_k)$, so $T_1 = [6,\infty)$. We let $t_{k+1}=\min T_k$. For $k\geq 1$, when a $c$ is added to $(4|3,0|C_k)$ it is added at time $t_{k+1}$ and meets the $A$-entry at that time, so $c_{k+1}(t_{k+1}) = 4-t_{k+1}$ and $c_{k+1} = 4-2t_{k+1}$. Computing the sequence of sets $C_k$ is therefore equivalent to computing the sequence of times $\{t_k\}_k$.
The computation of the sequence of times $\{t_k\}_k$ when new $c$'s are added is best explained in terms of a sieve. Adding a new $c$ at time $t_{k+1}$ means we include $c_{k+1} = 4-2t_{k+1}$ in $C_{k+1}$. Then $c_{k+1}(2t_{k+1}-4) = 0$ and $c_{k+1}(2t_{k+1}-1) = 3$, so $c_{k+1}$ meets the $B$-block at times $2t_{k+1}-4$ and $2t_{k+1}-1$. Therefore $T_{k+1} = T_{k} \setminus \{t_{k+1},2t_{k+1}-4,2t_{k+1}-1\}$.
We now make this computation explicit. To get started, we have $t_2 = 6$, which sieves out times $8$ and $11$. We must then include $t_3 = 7$, which sieves times $10$ and $13$, etc. We include the result of this sieve computation for small times below; $\times$'s denote times which are sieved out.
$$\begin{array}{ccccccccccccccccccccccc}6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26\\
t_2 & t_3 &\times& t_4 & \times & \times &t_5& \times &\times &t_6&t_7&\times&t_8&t_9&\times&t_{10}&t_{11}&\times&t_{12}&t_{13}&\times
\\
\\
27 & 28 & 29 & 30 & 31 & 32 & 33 & 34 & 35 & 36 & 37 & 38 & 39 & 40 & 41 & 42 & 43 & 44 & 45 & 46 & 47 \\
t_{14}&\times&\times&t_{15}&\times&\times&t_{16} &\times&\times & t_{17} &\times&\times& t_{18} & \times & \times& t_{19} & \times & \times & t_{20} & \times& \times
\\
\\
48 &49 & 50 & 51 & 52 & 53 & 54 & 55 & 56 & 57 & 58 & 59 & \cdots & 95 & 96 & 97 & 98 & 99 & 100 \\
t_{21} & \times& \times & t_{22} & t_{23} & \times & t_{24} & t_{25} & \times & t_{26} & t_{27} & \times & \cdots & \times & t_{52} & t_{53} & \times & t_{54} & \times
\end{array}$$
The result of the computation is easy to describe. First, every time $t\geq 6$ with $t\equiv 0 \pmod{3}$ appears in $\{t_k\}$. No times with $t \equiv 2 \pmod{3}$ appear. The times congruent to $1 \pmod{3}$ may or may not appear, but the pattern with which they appear is simple. The first such time ($7$) appears, the next $2$ such times ($10,13$) do not, the next $4$ ($16,19,22,25$) do, the next $8$ do not, the next $16$ do, and so on. This description is straightforward to prove, and it completely specifies the sequence $\{t_k\}$.
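This sieve is easy to reproduce mechanically. The following sketch is our own sanity check (the function name \texttt{sieve\_times} is ours, not the paper's); it regenerates the sequence $\{t_k\}$ from the rule $T_{k+1} = T_k\setminus\{t_{k+1},2t_{k+1}-4,2t_{k+1}-1\}$, and its output can be compared with the table and the description above.

```python
# Regenerate the sieve of times t_k.  Start from T_1 = {6, 7, 8, ...};
# at each step t_{k+1} = min T_k, and the three times
# t_{k+1}, 2*t_{k+1} - 4, 2*t_{k+1} - 1 are sieved out of T_{k+1}.
# Indexing starts at t_2 = 6, matching the table.
def sieve_times(t_max):
    """Return a dictionary k -> t_k for all times t_k <= t_max."""
    removed = set()
    times = {}
    k = 1
    for t in range(6, t_max + 1):
        if t not in removed:
            k += 1
            times[k] = t
            # the two times at which the new c meets the B-block are sieved
            removed.update({2 * t - 4, 2 * t - 1})
    return times

times = sieve_times(100)
```

One can confirm, for instance, that $t_5 = 12$, $t_{21} = 48$, and $t_{54} = 99$, that every $t\equiv 0\pmod 3$ with $t\geq 6$ survives, and that no $t\equiv 2\pmod 3$ does.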
Now suppose that an $a$, call it $a_2$, is added to $(4|3,0|C_k)$. Unless $k$ is very special, it turns out that the resulting partition is not pre-Ulrich. We have $a_2(t_{k+1}) = c_1(t_{k+1})$, so $a_2 = 2t_{k+1}-2$. Since $\{c_1,c_2,c_3\} = \{-2,-8,-10\}$, we find that $a_2$ will also meet the $C$-block at times $t_{k+1}+3$ and $t_{k+1}+4$. For the resulting partition to be pre-Ulrich, it is therefore necessary that these times are not sieved out. Additionally, if $t_{k+2} = t_{k+1} + 1$ then no intersection with $a_2$ will occur at time $t_{k+2}$, so it is necessary to add another $a$ or $c$ at this time; it must be a $c$ that is added, for otherwise we will have two $a$'s which are $2$ apart from one another, a contradiction since $c_2$ and $c_3$ are already $2$ apart from one another. But then the $c$ which is added at time $t_{k+2}$, call it $c_{k+1}$, will satisfy $c_{k+1} = 4-2t_{k+2} = 2-2t_{k+1}$. Then $a_2$, $0\in B$, and $c_{k+1}$ all coincide at time $2t_{k+1}-2$. We finally conclude that, in order for the partition to have a chance of being pre-Ulrich, the time $t_{k+1}+1$ must be sieved out while the times $t_{k+1}+3$ and $t_{k+1}+4$ must not be. Inspecting the sieve, the only way for this to occur is for $t_{k+1}$ to be one of the times immediately before the $t_k$'s start occurring in pairs, e.g. $t_5 = 12$, $t_{21} = 48$, or $t_{85} = 192$. A straightforward computation shows that these times are precisely the numbers $t_{k+1}$ of the form $3\cdot 4^m$ with $m\geq 1$, and $k+1 = \sum_{i=0}^m 4^i$.
Finally, suppose $t_{k+1} = 3\cdot 4^m$ and $a_2 = 2t_{k+1}-2$. We use the trapezoid rule to show that $(a_2,4|3,0|C_k)$ is not contained in any Ulrich partition $(A|B|C)$. Observe that a $c$ was added at time $\frac{1}{2}t_{k+1}+1$; this time is the last time that was added in the block of pairs of times preceding $t_{k+1}$. Thus $4-2(\frac{1}{2}t_{k+1}+1)=2-t_{k+1}\in C$. Since the dual partition $(A^*|B^*|C^*)$ has corresponding word $\sigma^*$ beginning with $ca^k$ we symmetrically have $-(2-t_{k+1})+3=t_{k+1}+1\in A^*$ and $-1\in C^*$. The equality $$(t_{k+1}+1)-(2t_{k+1}-2)=(2-t_{k+1})-(-1)$$ and the trapezoid rule prove $a_2(N+1) = -1$, from which it follows that $|A|=2$. We also have $N=a_2 = 2t_{k+1}-2$. At time $N$ we have $a_2(N) = 0\in B$, so at time $N-1$ the smallest entry $c_\gamma \in C$ has $c_\gamma(N-1) = 3$. Then $c_\gamma = 3-(N-1)= 4-2(t_{k+1}-1)$. This means that $c_\gamma$ had to be added at time $t_{k+1}-1$, contradicting that this time was sieved out since it is congruent to $2 \pmod{3}$.
\end{proof}
The next result is proved by exactly the same technique as Proposition \ref{propaccc}, so we omit the proof.
\begin{proposition}\label{propacaa}
Case (1) of Lemma \ref{lemInitialSegment} never actually arises: if $(A|B|C)$ is an Ulrich partition normalized as in this subsection, then the word $\sigma$ cannot begin with $acaaa$.
\end{proposition}
Fortunately, the positive classification results are easier.
\begin{proposition}\label{prop322}
If $\sigma$ begins with $acaca$ then $\sigma = acaca$ and $(A|B|C)$ is the $(3,2,2)$ partition of Example \ref{ex322}.
\end{proposition}
\begin{proof}
Since $\sigma$ begins with $acaca$, $(A|B|C)$ contains $(16,10,4|3,0|{-2},-12)$, and therefore $N\geq 16$ with equality iff $\sigma = acaca$. By duality, the partition $(A^*|B^*|C^*)$ is Ulrich. By Lemma \ref{lemInitialSegment} and Propositions \ref{propaccc} and \ref{propacaa} it contains one of the following partitions: \begin{enumerate}
\item $(16,10,4|3,0|{-2},-12)$; this is impossible by the rectangle rule.
\item $(15,5|3,0|{-1},-7,-13)$; in this case, the equality $15-10=-2-(-7)$ and the trapezoid rule give $N=16$, so in fact $\sigma = acaca$.
\item $(12,4|3,0|{-2},-8)$; the equality $4-4=-2-(-2)$ and the trapezoid rule give $N = 5$, a contradiction.
\item $(11,5|3,0|{-1},-9)$; in this case, the equality $11-4=-2-(-9)$ and the trapezoid rule give $N=12$, again a contradiction.
\end{enumerate}
Thus the only possibility is that $\sigma = acaca$.
\end{proof}
Our final result in this subsection completes the classification when $B = \{3,0\}$.
\begin{proposition}\label{prop222}
If $\sigma$ begins with $acca$ then $\sigma = acca$ and $(A|B|C)$ is the $(2,2,2)$ partition of Example \ref{ex222}.
\end{proposition}
\begin{proof}
The Ulrich partition $(A|B|C)$ contains $(12,4|3,0|{-2},-8)$, so $N\geq 12$ with equality iff $\sigma = acca$. As in the proof of Proposition \ref{prop322}, the dual $(A^*|B^*|C^*)$ contains one of the following partitions:
\begin{enumerate}
\item $(16,10,4|3,0|{-2},-12)$; this is impossible by the trapezoid rule applied to the equality $4-4 = -2 - (-2)$.
\item $(15,5|3,0|{-1},-7,-13)$; the equality $5-12= -8-(-1)$ and the trapezoid rule give $N=12$, so $\sigma = acca$ (in fact this case cannot occur, since the dual partition has the wrong structure).
\item $(12,4|3,0|{-2},-8)$; this contradicts the rectangle rule.
\item $(11,5|3,0|{-1},-9)$; the equality $11-4=-2-(-9)$ and the trapezoid rule give $N=12$, so $\sigma = acca$.
\end{enumerate}
We conclude that $\sigma = acca$.
\end{proof}
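The two partitions appearing in Propositions \ref{prop322} and \ref{prop222} can also be checked directly against the defining condition: exactly one intersection at each time $t\in[1,N]$, where $N$ is the dimension of the corresponding type. The sketch below is our own sanity check, not part of the argument; the evolution convention (entries of $A$ move left at unit speed, entries of $C$ move right, $B$ is fixed) is the one used throughout this section.

```python
# Test the Ulrich condition: entries of A move left one step per unit time,
# entries of C move right, B is fixed, and every time t in [1, N] must see
# exactly one intersection, where N = dim of the type (|A|, |B|, |C|).
def is_ulrich(A, B, C):
    N = len(A)*len(B) + len(B)*len(C) + len(C)*len(A)
    for t in range(1, N + 1):
        hits  = sum(1 for a in A for b in B if a - t == b)      # A meets B
        hits += sum(1 for b in B for c in C if c + t == b)      # C meets B
        hits += sum(1 for a in A for c in C if a - t == c + t)  # A meets C
        if hits != 1:
            return False
    return True
```

Both $(16,10,4|3,0|{-2},-12)$ (the $(3,2,2)$ partition, $\sigma = acaca$) and $(12,4|3,0|{-2},-8)$ (the $(2,2,2)$ partition, $\sigma = acca$) pass this check.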
\subsection{Classification when $b_1-b_2=1$}
In this final case we normalize $B = \{1,0\}$. We first classify an infinite family of examples which serve as primitive building blocks for a further family of examples.
\begin{proposition}\label{propn21}
Let $(2|1,0|C_k)$ be the partition obtained from $(2|1,0|\emptyset)$ by adding $k$ $c$'s. This partition is Ulrich if and only if $k$ is a number of the form $k = \sum_{i=0}^m 4^i = \frac{1}{3}(4^{m+1}-1)$ for some $m\geq 0$.
\end{proposition}
\begin{proof}
Write $C_k = \{c_1,\ldots,c_k\}$ in decreasing order. We prove the result by induction on $k$; clearly $(2|1,0|C_1)=(2|1,0|{-4})$ is Ulrich. Fix some $m_0\geq 0$ and let $k_0 = \frac{1}{3}(4^{m_0+1}-1)$, and assume $(2|1,0|C_{k_0})$ is Ulrich. The dimension of the flag variety corresponding to the type $(1,2,k_0)$ is $4^{m_0+1}+1$, so the set of times where $(2|1,0|C_{k_0})$ has an intersection is $[1,4^{m_0+1}+1]$.
At time $4^{m_0+1}+2$ the $a$ entry is at position $-4^{m_0+1}$, so adding a new $c$ to $(2|1,0|C_{k_0})$ gives $c_{k_0+1}(4^{m_0+1}+2) = -4^{m_0+1}$, i.e. $c_{k_0+1} = -2\cdot 4^{m_0+1}-2.$ This entry intersects the $B$-block at times $2\cdot 4^{m_0+1}+2$ and $2\cdot 4^{m_0+1}+3$. Continuing inductively, if $\ell$ $c$'s are added to $(2|1,0|C_{k_0})$ and $\ell\leq 4^{m_0+1}$, then $c_{k_0+\ell}(4^{m_0+1}+1+\ell) = -4^{m_0+1}+1-\ell$ so $c_{k_0+\ell} = -2\cdot 4^{m_0+1} - 2\ell$, which meets the $B$-block at times $2\cdot 4^{m_0+1}+2\ell$ and $2\cdot 4^{m_0+1}+1+2\ell$. Then if $\ell <4^{m_0+1}$, the partition $(2|1,0|C_{k_0+\ell})$ is not Ulrich since there is no intersection at time $2\cdot 4^{m_0+1}+1$. When $\ell = 4^{m_0+1}$, there are intersections between some $c_{k_0+\ell}$ and the $a$ for all $t\in [4^{m_0+1}+2,2\cdot 4^{m_0+1}+1]$ and between some $c_{k_0+\ell}$ and the $B$-block for all $t\in [2\cdot 4^{m_0+1}+2,4^{m_0+2}+1]$. Therefore $(2|1,0|C_{k_0+4^{m_0+1}})$ is Ulrich, and since $k_0+4^{m_0+1} = \frac{1}{3}(4^{m_0+2}-1)$, the induction is complete.
\end{proof}
\begin{remark}
Analyzing the proof of Proposition \ref{propn21}, the sets $C_k$ (equivalently, the numbers $c_k$) which appear are easily computable. Here are the first several terms:
$$-4, \qquad -10,-12,-14,-16, \qquad -34,-36,\ldots,-64, \qquad -130,-132,\ldots,-256, \qquad \ldots$$
A decreasing block of $4^k$ even integers ending in $-4^{k+1}$ is followed by a gap of size $4^{k+1} + 2$. In Figure \ref{fig-125} we give the time evolution diagram for the partition $(2|1,0|C_5)$ of type $(1,2,5)$.
\begin{figure}[htbp]
\input{125Ex.pstex_t}
\caption{Time evolution diagram of the Ulrich partition $(2|1,0|C_5)$ of type $(1,2,5)$.}\label{fig-125}
\end{figure}
\end{remark}
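The greedy construction underlying Proposition \ref{propn21} and this remark is easy to simulate: whenever a time sees no intersection, a new $c$ must be added meeting the $a$-entry at that time, so $c = 2-2t$. The following sketch is our own verification (helper names are ours); it rebuilds the sets $C_k$ and confirms which of the partitions $(2|1,0|C_k)$ are Ulrich.

```python
# is_ulrich tests the defining condition: A-entries move left one step per
# unit time, C-entries move right, B is fixed, and each time t in [1, N]
# must see exactly one intersection.
def is_ulrich(A, B, C):
    N = len(A)*len(B) + len(B)*len(C) + len(C)*len(A)
    return all(
        sum(1 for a in A for b in B if a - t == b)
        + sum(1 for b in B for c in C if c + t == b)
        + sum(1 for a in A for c in C if a - t == c + t) == 1
        for t in range(1, N + 1))

# Build C_k greedily for A = {2}, B = {1, 0}: whenever time t has no
# intersection scheduled, the new c must satisfy c + t = 2 - t.
def build_C(k_max):
    C, snapshots, t = [], {}, 0
    while len(C) < k_max:
        t += 1
        busy = (2 - t in (1, 0)) or any(c + t in (1, 0) for c in C) \
               or any(2 - t == c + t for c in C)
        if not busy:
            C.append(2 - 2 * t)
            snapshots[len(C)] = list(C)
    return snapshots

Cs = build_C(21)
```

This reproduces the sequence $-4$; $-10,\ldots,-16$; $-34,\ldots,-64$ displayed above, and among $k\leq 21$ exactly $k = 1, 5, 21$, that is $k = \frac{1}{3}(4^{m+1}-1)$ for $m = 0,1,2$, give Ulrich partitions.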
A more careful analysis proves the following result.
\begin{lemma}\label{lem-stupid}
Suppose $(A|1,0|C)$ is an Ulrich partition and that the word $\sigma\in \{a,c\}^*$ generating it from $(\emptyset|B|\emptyset)$ begins with $acc$. Then $|A|=1$ and $2\in A$, so $(A|B|C)$ is one of the examples of Proposition \ref{propn21}.
\end{lemma}
\begin{proof}
We will be brief, as the proof is very similar to other arguments in this section. Consider the partition obtained from $(2|1,0|C_k)$ by adding an $a$, with $k\geq 2$. It is easy to show that unless $k$ is of the form $k = (4^{m+1}-1)/3$ (so $m\geq 1$ and the original partition is Ulrich) then the resulting partition is not even pre-Ulrich. On the other hand, if $k$ is of this form, then adding a new $a_2$ at time $t_0 = 4^{m+1}+2$ will give $a_2 = 2\cdot 4^{m+1}$. One shows that we will be forced to add new $c$'s at positions $-3\cdot 4^{m+1}$ and $-4^{m+2}$. Then the $c$ at position $-3\cdot 4^{m+1}$ meets $0\in B$ at the same time as $a_2$ meets the $c$ at position $-4^{m+2}$, a contradiction.
\end{proof}
Proposition \ref{propn21} and Lemma \ref{lem-stupid} allow us to focus on classifying Ulrich partitions $(A|1,0|C)$ with $2\in A$ and $|A|\geq 2$, since they completely classify Ulrich partitions where $A = \{2\}$. The next lemma explains the importance of the examples of Proposition \ref{propn21} to more general Ulrich partitions.
\begin{lemma}\label{lemNoMorecs}
Let $(A|1,0|C) $ be an Ulrich partition, and suppose $2\in A$. Let $c_1 = \max C$, so that $c_1$ meets the $B$-block at time $t_0:=-c_1$. Let $A'\subset A$ be those $a$'s which have met the $B$-block before time $t_0$. Then the partition $(A'|1,0|c_1)\subset (A|1,0|C)$ is Ulrich of dimension $t_0+1$, so the dual $(A'|1,0|c_1)^*$ is one of the Ulrich partitions of Proposition \ref{propn21}.
\end{lemma}
\begin{proof}
Write $A = \{a_\alpha,\ldots, a_1\}$ and $C = \{c_1,\ldots, c_\gamma\}$ in decreasing order, and consider how $(A|1,0|C)$ is built from $(\emptyset|1,0|\emptyset)$ by adding $a$'s and $c$'s; let $\sigma\in \{a,c\}^*$ be the corresponding word. Since the time $t=1$ intersection is between $A$ and $B$, $\sigma$ begins with an $a$. We can then write $\sigma = a^k c \sigma'$ for some $k\geq 1$ and some word $\sigma'$. Then $A$ contains the first $k$ even integers $\{2,\ldots,2k\}$ and the intersections at times $t\in [1,2k-1]$ occur between the $A$- and $B$-blocks. At time $2k$ the entry $a_1$ is at position $2-2k$, so $c_1(2k) = -2k+2$ and $c_1 = -4k+2$.
Suppose $c_2$ meets $a_1$ at time $t_1$. We claim $t_1>t_0$. Indeed, first notice that $c_1$ meets $a_i$ $(1\leq i\leq k)$ at time $2k-1+i$, so $t_0 \geq 3k$. Since $c_2$ meets $a_1$ at time $t_1$, it meets $a_i$ $(1\leq i\leq k)$ at time $t_1-1+i$. But then assuming $3k\leq t_1 \leq t_0=4k-2$, we find that $c_2$ meets some $a_i$ at the same time as $c_1$ meets $0\in B$. We conclude $t_1>t_0$.
Therefore, if $\sigma$ contains at least $2$ $c$'s, then the second $c$ is added after time $t_0$. It follows that $\sigma$ begins with the initial segment $a^k c a^\ell$, where $\ell$ is the number of additional $a$'s added before time $t_0$; the corresponding subpartition is $(A'|1,0|c_1)$, and this partition is Ulrich. The last intersection in this partition occurs between $c_1$ and $1\in B$, so its dual is $(2|1,0|A'^*)$ and Proposition \ref{propn21} applies.
\end{proof}
\begin{example}[A two-parameter family of Ulrich partitions]\label{ex2param}
Let $m_1,m_2\geq 0$, and let $k_i = \frac{1}{3}(4^{m_i+1}-1)$. Let $(2|1,0|C_{k_1})$ be the Ulrich partition of Proposition \ref{propn21}, and let $(A_{k_2}|1,0|{-1})$ be the partition symmetric to $(2|1,0|C_{k_2})$. There is an Ulrich partition $(A|B|C)$ uniquely specified by the requirements that the type is $(k_1+k_2,2,1)$ and \begin{align*}(2|1,0|C_{k_1})^* &\subset (A|B|C) \\ (A_{k_2}|1,0|{-1})^* &\subset (A|B|C)^*.\end{align*} The dimension of the type $(k_1+k_2,2,1)$ is $N = 4^{m_1+1}+4^{m_2+1}$.
Let $t_0 = 4^{m_1+1}$ be as in Lemma \ref{lemNoMorecs}. For times $t\in [1,t_0]$, the pattern of intersections in $(A|B|C)$ is the same as that of $(2|1,0|C_{k_1})^*$. Applying Lemma \ref{lemNoMorecs} to the dual $(A|B|C)^*$, the corresponding time is $t_0^* = 4^{m_2+1}$, and the pattern of intersections in $(A|B|C)$ for times $t\in [t_0+1,N]$ is the same as that of $(A_{k_2}|1,0|{-1})$ for times $t\in [1,t_0^*]$. Thus every time $t\in [1,N]$ has an intersection.
Observe that if $m_1\neq m_2$ then the partitions corresponding to $(m_1,m_2)$ and $(m_2,m_1)$ are distinct but related by the symmetric dual. The partition corresponding to $(m_1,m_1)$ is its own symmetric dual.
For example, consider the case $m_1=0$ and $m_2= 1$.
Then we compute \begin{align*} N&= 20\\C_{k_1} &= \{-4\} \\ A_{k_2} &= \{17,15,13,11,5\}, \\(2|1,0|C_{k_1})^* &= (2|1,0|{-4})\\ (A_{k_2}|1,0|{-1})^*&= (17|1,0|{-1},-3,-5,-7,-13)\\
(A|B|C) &= (20,18,16,14,8,2|1,0|{-4}).
\end{align*}
See Figure \ref{fig-621} for the time evolution diagram. Note that the examples of Proposition \ref{propn21} can be regarded as (duals of) the degenerate case where $m_2 = -1$.
\begin{figure}[htbp]
\input{621Ex.pstex_t}
\caption{Time evolution diagram of the partition $(20,18,16,14,8,2|1,0|{-4})$ of type $(6,2,1)$.}\label{fig-621}
\end{figure}
\end{example}
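The partition produced in this example can be checked directly against the definition; the sketch below is our own check, with \texttt{is\_ulrich} testing for exactly one intersection at each time $t\in[1,N]$ under the usual evolution (entries of $A$ move left, entries of $C$ move right, $B$ fixed).

```python
def is_ulrich(A, B, C):
    N = len(A)*len(B) + len(B)*len(C) + len(C)*len(A)
    for t in range(1, N + 1):
        hits  = sum(1 for a in A for b in B if a - t == b)      # A meets B
        hits += sum(1 for b in B for c in C if c + t == b)      # C meets B
        hits += sum(1 for a in A for c in C if a - t == c + t)  # A meets C
        if hits != 1:
            return False
    return True

# The partition of type (6,2,1) built in this example; its type has
# dimension 6*2 + 2*1 + 1*6 = 20 = 4^{m_1+1} + 4^{m_2+1} with m_1=0, m_2=1.
assert is_ulrich([20, 18, 16, 14, 8, 2], [1, 0], [-4])
```

For comparison, running the same recipe with $(m_1,m_2)=(0,0)$ produces the partition $(8,2|1,0|{-4})$ of type $(2,2,1)$ (our computation, not displayed in the text), which also passes the check.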
\begin{theorem}
If $(A|1,0|C)$ is an Ulrich partition with $2\in A$ and $|A|\geq 2$ then it is either the dual of one of the examples from Proposition \ref{propn21} or there exist $m_1,m_2\geq 0$ such that $(A|1,0|C)$ is the partition corresponding to $(m_1,m_2)$ in Example \ref{ex2param}.
\end{theorem}
\begin{proof}Applying Lemma \ref{lemNoMorecs} to $(A|B|C)$ and its dual, we find that there are $m_1,m_2\geq 0$ and $k_i = \frac{1}{3}(4^{m_i+1}-1)$ such that $(2|1,0|C_{k_1})^*\subset (A|B|C)$ and either $(2|1,0|C_{k_2})^*\subset (A|B|C)^*$ or $(A_{k_2}|1,0|{-1})^*\subset (A|B|C)^*$, according to whether $2\in A^*$ or $-1\in C^*$. Without loss of generality, assume $k_1\leq k_2$.
If $2\in A^*$ then $(2|1,0|C_{k_2})^*\subset (A|B|C)^*$. Computing the dual of the Ulrich partition $(2|1,0|C_{k_2})$ of dimension $4^{m_2+1}+1$, we find $-4^{m_2+1}\in C^*$ and $4^{m_2+1}-2\in A^*$ (since $-4\in C_{k_2}$). Using the containment $(2|1,0|C_{k_1})^*\subset (A|B|C)$, we also have that $-4^{m_1+1}\in C$ and $4^{m_1+1}-2\in A$. Then the equality $$(4^{m_2+1}-2) - (4^{m_1+1}-2) = (-4^{m_1+1})-(-4^{m_2+1})$$ and the trapezoid rule give that $N = 4^{m_1+1}+4^{m_2+1}-1$ and $|C|=1$. This is impossible: the dimension of the type $(k_1+k_2,2,1)$ is $4^{m_1+1}+4^{m_2+1}$, so no type $(k_3,2,1)$ can have dimension $4^{m_1+1}+4^{m_2+1}-1$ by considering congruences mod $3$. Therefore it must be the case that $-1\in C^*$.
We now know that $(2|1,0|C_{k_1})^*\subset (A|B|C)$ and $(A_{k_2}|1,0|{-1})^*\subset (A|B|C)^*$. Assume both of these containments are proper, since otherwise we are in the case of Proposition \ref{propn21}. Think of building $(A|B|C)$ from $(\emptyset|B|\emptyset)$ by adding $a$'s and $c$'s. After the word corresponding to $(2|1,0|C_{k_1})^*$ has been added, we must either add an $a$ or a $c$.
\emph{Case 1:} Suppose the next letter which is added is an $a$. We claim that the trapezoid rule implies that $(A|B|C)$ is the partition of Example \ref{ex2param} corresponding to the integers $m_1,m_2$. The Ulrich subpartition $(2|1,0|C_{k_1})^*$ has dimension $4^{m_1+1}+1$, so the new $a$ is added at time $4^{m_1+1}+2$. The element of $C$ arising from the inclusion $(2|1,0|C_{k_1})^*\subset (A|B|C)$ is $-4^{m_1+1}\in C$, so the new $a$ is at position $2$ at time $4^{m_1+1}+2$, and thus $4^{m_1+1}+4\in A$. On the other hand, the inclusion $(A_{k_2}|1,0|{-1})^* \subset (A|B|C)^*$ gives $4^{m_2+1}+1 \in A^*$ and $5-(4^{m_2+1}+2)=-4^{m_2+1}+3\in C^*$. We have $$(4^{m_2+1}+1)-(4^{m_1+1}+4)=(-4^{m_1+1})-(-4^{m_2+1}+3)$$ so by the trapezoid rule $N = 4^{m_1+1}+4^{m_2+1}$ and $|C|=1$. Therefore $(A|B|C)$ is the partition of Example \ref{ex2param}.
\emph{Case 2:} Suppose the next letter which is added is a $c$. We claim that this is impossible: there are no such Ulrich partitions. We focus solely on the Ulrich partition $(A|B|C) := (2|1,0|C_{k_1})^*,$ as the contradiction arises from this initial segment and not from ``global'' considerations given by the trapezoid rule. If $m_1 = 0$ then Lemma \ref{lem-stupid} gives $|A| = 1$, a contradiction, so we assume $m_1\geq 1$. Let $(A|B|C\cup \{c_2\})$ be the partition obtained by adding $c_2$ at time $t_0:=4^{m_1+1} + 2$. Writing $A = \{a_1,\ldots,a_{k_1}\}$ in increasing order, we have $c_2(t_0) =a_1 ( t_0) = -4^{m_1+1}$ since $a_1 = 2$. For $1\leq i\leq 4^{m_1}$ we have $a_i = 2i$, so $c_2$ meets the $A$-block for all times $t\in [t_0,t_0+4^{m_1}-1]$.
At time $t_0+4^{m_1}$ there is no intersection yet. It is not possible to add a new $c$ at this time. If we were to add some $c_3$ at time $t_0+4^{m_1}$ then this would provide intersections between $c_3$ and the $A$-block for the next $4^{m_1}$ times. However, $a_{4^{m_1}+1} = a_{4^{m_1}}+4^{m_1}+2$ meets $c_2$ at time $t_0+4^{m_1}+2\cdot 4^{m_1-1}$, which is a time excluded by the intersection of $c_3$ with the $A$-block. Thus, at time $t_0+4^{m_1}$ we must add a new $a$, call it $a_{k_1+1}$. It has $$a_{k_1+1}(t_0+4^{m_1}) = c_1(t_0+4^{m_1})$$ so $$a_{k_1+1} = -4^{m_1+1}+2(t_0+4^{m_1})=t_0+2\cdot 4^{m_1}+2.$$
By the same argument as in the last paragraph, no new $c$'s can be added before the time $t_1$ when $c_2$ and $a_{k_1}$ meet. At each time in $[t_0,t_1]$ where $c_2$ does not meet the $A$-block, a new $a$ \emph{must} be added. This statement amounts to the claim that $a_{k_1+1}$ does not meet the $B$-block before time $t_1$. Since $a_{k_1}(t_0)=-4$ and $c_2(t_0) = -4^{m_1+1}$, we have $t_1 =t_0+2\cdot 4^{m_1}-2$. Thus $a_{k_1+1}$ meets the $B$-block at times $t_1+3$ and $t_1+4$.
Finally, we obtain our contradiction at time $t_1+1$. No intersection has been scheduled yet. We cannot add some $c_3$ at this time, since it would still meet the $A$-block at time $t_1+3$ when $a_{k_1+1}$ meets $1\in B$. Thus we must add an $a$, call it $a'$, at time $t_1+1$. We have $a'(t_1+1)=c_1(t_1+1)$, so $$a'(t_0) = a'(t_1+1)+(t_1+1-t_0)= -4^{m_1+1}+2(t_1+1)-t_0= 4^{m_1+1}$$ while $c_2(t_0) = -4^{m_1+1}$. Thus, at time $t_0+4^{m_1+1}$, all three of $a'$, $0\in B$, and $c_2$ coincide. Therefore, the partition $(A|B|C\cup \{c_2\})$ cannot be extended to an Ulrich partition by adding $a$'s and $c$'s.
\end{proof}
\section{Ulrich partitions of type $(2,n,1)$}\label{sec-1n2}
In this section, we classify Ulrich partitions of type $(2,n,1)$. Throughout the section, we consider Ulrich partitions of the form $P = (a_1,a_2|b_1,\ldots,b_n|c)$, and normalize the evolution of the partition to subtract $1,0,-1$ from the blocks, as in \S\ref{sec-beta2}. We further normalize the positions $$a_2 = y, \quad a_1 = y+2m,\quad \mbox{and}\quad c=-y,$$ so that the intersection $a_2c$ happens at time $y$ and the gap between the $a$-entries is $2m$. In our classification we will view $m$ as being a fixed parameter, similarly to how $b_1-b_2$ was a fixed parameter in the classification of Ulrich partitions of type $(\alpha,2,\gamma)$.
\begin{definition}
For any $m\geq 2$, the {\em fundamental pattern $F_{m}$ of type $m$} is the partition of type $(2, m-1,1)$ given by $$F_{m}=(3m, m | m-1, m-2, \dots, 2, 1 | {-m}).$$ The fundamental pattern is Ulrich by Example \ref{example-1n2}.
\end{definition}
\begin{definition}
The {\em elongation} of a partition $$P=(a_1, a_2|b_1, \dots, b_n|c) = (y+2m,y|b_1,\ldots,b_n|{-y})$$ of type $(2,n,1)$ is the partition $E(P)$ of type $(2, n + 2m, 1)$ obtained by adding two contiguous blocks of length $m$ at the beginning and end of the $b$ sequence and shifting the $a$ and $c$ entries as follows: $$(y + 5m, y + 3m | y + 3m-1, \dots, y + 2m, b_1, \dots, b_n, {-y-m}, \dots, {-y-2m+1} |{-y-3m}).$$ The $k$th elongation of $P$ is defined inductively by $E^k(P) := E(E^{k-1}(P))$ and $E^0(P)=P$; it has type $(2,n+2mk,1)$.
\end{definition}
\begin{example}
The fundamental pattern of type 2 is the partition $F_2= (6,2|1|{-2})$. Its first and second elongations are $$E(F_2)= (12, 8|7, 6, 1, -4,-5|{-8}) \quad E^2(F_2) = (18, 14|13,12, 7,6,1,-4,-5, -10,-11 |{ -14}).$$
The fundamental pattern of type 3 is the partition $F_3 = ( 9,3|2,1|{-3})$. Its first elongation is $$E(F_3)= (18, 12|11,10,9,2, 1, -6,-7,-8|{-12}).$$ See Figure \ref{fig-251} for the time evolution diagram of $E(F_2)$. \begin{figure}[htbp]
\input{251Ex.pstex_t}
\caption{Time evolution diagram of the partition $E(F_2)$ of type $(2,5,1)$.}\label{fig-251}
\end{figure}
\end{example}
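The fundamental patterns and their elongations can be generated and tested mechanically. The sketch below is our own check (function names are ours): \texttt{F} builds $F_m$, \texttt{elongate} implements the definition of $E(P)$, and \texttt{is\_ulrich} tests the defining intersection condition.

```python
# is_ulrich: A-entries move left one step per unit time, C-entries move
# right, B is fixed; each time t in [1, N] must see exactly one intersection.
def is_ulrich(A, B, C):
    N = len(A)*len(B) + len(B)*len(C) + len(C)*len(A)
    return all(
        sum(1 for a in A for b in B if a - t == b)
        + sum(1 for b in B for c in C if c + t == b)
        + sum(1 for a in A for c in C if a - t == c + t) == 1
        for t in range(1, N + 1))

def F(m):
    """Fundamental pattern F_m = (3m, m | m-1, ..., 1 | -m) of type (2, m-1, 1)."""
    return [3 * m, m], list(range(m - 1, 0, -1)), [-m]

def elongate(A, B, C):
    """E(P) for P = (y+2m, y | b_1, ..., b_n | -y), per the definition above."""
    y, m = A[1], (A[0] - A[1]) // 2
    newB = (list(range(y + 3*m - 1, y + 2*m - 1, -1)) + B
            + list(range(-y - m, -y - 2*m, -1)))
    return [y + 5*m, y + 3*m], newB, [-y - 3*m]
```

This reproduces $E(F_2)$, $E^2(F_2)$, and $E(F_3)$ exactly as displayed above, and confirms that each is Ulrich.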
\begin{remark}
We will also need a degenerate case of the previous definitions. We define the fundamental pattern $F_1$ to be the partition $(3,1|\emptyset|{-1})$. Its elongation $E(F_1) = (6,4|3,-2|{-4})$ still makes sense. Observe that $E(F_1)$ is the Ulrich partition of Example \ref{ex-221}. To avoid discussing trivialities in the arguments that follow we generally focus on the $m\geq 2$ case and assure the reader that appropriate arguments work in the $m=1$ case.
\end{remark}
The main theorem in this section is the following.
\begin{theorem}\label{thm-2n1}
A partition $P$ of type $(2,n,1)$ is Ulrich if and only if there exist $k \geq 0$ and $m >0$ such that $n = 2mk + m -1$ and $P=E^k(F_{m})$.
\end{theorem}
The partitions $E^k(F_m)$ are clearly all distinct, so Theorem \ref{thm-2n1} implies Theorem \ref{thm-1n2intro}. First we observe that the partitions $E^k(F_m)$ are indeed Ulrich.
\begin{lemma}\label{lem-2n1}
If $P$ is an Ulrich partition of type $(2,n,1)$ and $P$ has dimension $2y+m-1$ then $E(P)$ is an Ulrich partition of dimension $2y'+m-1$, where $y' = y+3m$. In particular, the partition $E^k(F_m)$ is Ulrich of type $(2,2mk+m-1,1)$.\end{lemma}
\begin{proof}
The beginning and ending intersections in $E(P)$ all occur between $a$'s or $c$'s and the new contiguous blocks of $b$'s as follows.
\begin{itemize}
\item For $t\in [1,m]$, $a_2$ meets the left new $B$-block.
\item For $t\in (m,2m]$, $c$ meets the right new $B$-block.
\item For $t\in (2m,3m]$, $a_1$ meets the left new $B$-block.
\item For $t\in [2y+4m,2y+5m)$, $a_2$ meets the right new $B$-block.
\item For $t\in [2y+5m,2y+6m)$, $c$ meets the left new $B$-block.
\item For $t\in [2y+6m,2y+7m)$, $a_1$ meets the right new $B$-block.
\end{itemize}
Note that $P$ can be obtained from $E(P)$ by shifting to the time $3m$ position $(E(P))(3m)$ and throwing out the two new $B$ blocks. Since $P$ is Ulrich of dimension $2y+m-1$, we conclude that there are unique intersections in $E(P)$ at all times $t\in (3m,2y+4m)$. Clearly $\dim E(P) = \dim P + 6m=2y+7m-1$, so $E(P)$ is Ulrich.
For the second statement, it suffices to observe that the fundamental partition $F_m$ satisfies the equality $\dim F_m = 2y+m-1=3m-1$, which is clear.
\end{proof}
\begin{observation}\label{obs-2n1}
By Lemma \ref{lem-2n1}, if $P=(y+2m,y|b_1,\ldots,b_n|{-y})$ is of the form $E^k(F_m)$ then it satisfies the formula $$\dim P = 2y+m-1.$$ For any $P$, we say that it satisfies the \emph{dimension formula} if the above equality holds. Theorem \ref{thm-2n1} in particular claims that the dimension formula holds for any Ulrich partition of type $(2,n,1)$.
We also observe that Theorem \ref{thm-2n1} implies that for any Ulrich $P$ the sequence $b_1,\ldots,b_n$ begins with a contiguous block $y-1,\ldots,y-l$ of length exactly $l$, where $l$ is either $m$ or $m-1$ depending on whether $k>0$ or $k=0$ in the equality $P = E^k(F_m)$.
\end{observation}
The next lemma will form the base of an induction to prove Theorem \ref{thm-2n1}.
\begin{lemma}\label{lem-2n1base}
Let $P=(a_1,a_2|b_1,\ldots,b_n|c) = (y+2m,y|b_1,\ldots,b_n|{-y})$ be an Ulrich partition. If $y\leq 2m$ then $P = F_m$.
\end{lemma}
\begin{proof}
The intersection $a_2c$ occurs before the intersection $a_1b_1$. It follows that the partition $(y|b_1,\ldots,b_n|{-y})$ is Ulrich of type $(1,n,1)$. Recalling the classification of such partitions, the only possibility is that $n=y-1$, $a_1$ meets $c$ at time $2y$, and $(b_1,\ldots,b_n) = (y-1,\ldots,1)$. Since $a_1$ meets $c$ at time $\frac{1}{2}(a_1-c)=y+m$, the condition $y+m=2y$ forces $y=m$, and so $P=F_m$.
\end{proof}
On the other hand, if $P$ is too large to be treated by Lemma \ref{lem-2n1base} then we show that it is an elongation of a smaller partition. The next lemma completes the proof of Theorem \ref{thm-2n1}.
\begin{lemma}
Let $P = (a_1,a_2|b_1,\ldots,b_n|c) = (y+2m,y|b_1,\ldots,b_n|{-y})$ be an Ulrich partition. If $y>2m$ then there is some Ulrich partition $P'$ of type $(2,n',1)$ with $E(P')=P$.
\end{lemma}
\begin{proof}
We induct on $n$. By Lemma \ref{lem-2n1base} and the inductive hypothesis, we may assume that any Ulrich partition of the form $$P'=(y'+2m,y'|b'_1,\ldots,b'_{n'}|{-y'})$$ with $n'<n$ is equal to $E^k(F_m)$ for some $k$. In particular, $P'$ satisfies the dimension formula $$\dim P' = 2y' + m-1$$ and the $b'_i$'s begin with a contiguous block $y'-1,\ldots,y'-l$ of length exactly $l\in\{m,m-1\}$.
\emph{Claim 1:} in the partition $P$ the time $t=1$ intersection is $a_2b_1$, so $b_1 = y-1$. Suppose this is not the case. Then $b_n = -y+1$, and $a_1$ meets $b_n$ at time $2y+2m-1$. By time $2y$ the $a_2$ and $c$ entries have already passed through the $B$-block, so all intersections for times $t\in [2y,2y+2m)$ must occur between $a_1$ and the $B$-block. It follows that a contiguous block $B'=\{-y+2m,\ldots,-y+1\}\subset B$ of length $2m$ occurs in the $B$-block. Since $a_2$ meets $B'$ for times in $[2y-2m,2y)$ and $c$ meets $B'$ for times in $[1,2m]$, it follows that the partition $$P'=(y+m,y-m|B\setminus B'|{-y+m})$$ is Ulrich. By induction, $B\sm B'$ starts with a contiguous block $\{y-m-1,\ldots,y-m-l\}$ of length exactly $l \in \{m,m-1\}$. In $P'$ the intersection at time $l+1$ must be between $c$ and some $b_0\in B\setminus B'$. This is a contradiction, since in $P$ we find that $a_1$ meets $b_0$ at the same time as $a_2$ meets an entry of $B'$. Therefore $b_1=y-1$.
Having established the claim, let $m_1\geq 1$ be the largest integer such that the contiguous block $B' = \{y-1,\ldots,y-m_1\}\subset B$. Clearly $m_1\leq 2m$, since otherwise $a_1$ and $a_2$ would both intersect $B'$ at the same time. An argument similar to the previous paragraph shows that in fact we must have $m_1<2m$. At time $m_1+1$ the intersection must be $b_nc$; let $m_2\geq 1$ be the largest integer such that the contiguous block $B''=\{-y+m_1+m_2,\ldots,-y+m_1+1\}\subset B$. Observe that $$\dim P = 2y+2m-m_1-1$$ since the last intersection is $a_1b_n$, so $P$ satisfies the dimension formula if and only if $m_1=m$; our eventual goal is to show that $m_1=m_2 = m$.
\emph{Claim 2: $m_1+m_2 = 2m$.} Let $t\in (m_1,2m]$. When $a_1$ is at position $-y+t$, both $a_2$ and $c$ have finished intersecting the $B$-block. Thus $-y+t\in B''$ for all $t\in (m_1,2m]$, and so $m_1+m_2\geq 2m$. On the other hand, at time $t=2m+1$ we have an intersection $a_1b_1$, so $-y+2m+1\notin B''$. Thus $m_1+m_2= 2m$.
\emph{Claim 3: $y> 2m+m_1$.} By assumption $y>2m$. If $t\in (2m,2m+m_1]$ then $a_1$ meets $B'$ at time $t$, so it is not possible for the intersection $a_2c$ to happen at such a time. Thus $y>2m+m_1$.
\emph{Claim 4: $m_1=m_2=m$.} Consider the partition $$P' = (y-m_1,y-2m-m_1|B\sm(B'\cup B'')|{-y+2m+m_1})$$ obtained by looking at the time $2m+m_1$ evolution $P(2m+m_1)$ and removing the contiguous blocks $B',B''$. This makes sense since $y>2m+m_1$, and $P'$ is Ulrich because $P$ is Ulrich. By induction, $P'$ satisfies the dimension formula $$\dim P' = 2(y-2m-m_1)+m-1.$$ On the other hand, the type of $P'$ is $(2,n-2m,1)$, so $$\dim P = \dim P'+6m = 2y+3m-2m_1-1.$$ Comparing this with our earlier expression for $\dim P$ gives $m_1=m$ and so $m_2=m$ as well. This implies $P = E(P')$.
\end{proof}
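Since all quantities involved are small, Theorem \ref{thm-2n1} can also be confronted with a brute-force search for small $n$. The sketch below is entirely our own sanity check; the window bounds are generous, since every pair of entries must meet at a time in $[1,N]$ (which forces $y\leq 4$, $m\leq 7$, and $|b|\leq 7$ when $n\leq 2$).

```python
from itertools import combinations

# is_ulrich: exactly one intersection at each time t in [1, N], with
# A-entries moving left, C-entries moving right, and B fixed.
def is_ulrich(A, B, C):
    N = len(A)*len(B) + len(B)*len(C) + len(C)*len(A)
    return all(
        sum(1 for a in A for b in B if a - t == b)
        + sum(1 for b in B for c in C if c + t == b)
        + sum(1 for a in A for c in C if a - t == c + t) == 1
        for t in range(1, N + 1))

def search_2n1(n):
    """All Ulrich partitions (y+2m, y | b_1..b_n | -y) in a window that is
    large enough to be exhaustive for n <= 2."""
    found = set()
    for y in range(1, 9):
        for m in range(1, 8):
            for Bs in combinations(range(8, -9, -1), n):
                if is_ulrich([y + 2*m, y], list(Bs), [-y]):
                    found.add(((y + 2*m, y), Bs, (-y,)))
    return found
```

For $n=1$ the search returns only $F_2 = (6,2|1|{-2})$; for $n=2$ it returns exactly $F_3 = (9,3|2,1|{-3})$ and $E(F_1) = (6,4|3,-2|{-4})$, matching the solutions of $n = 2mk+m-1$.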
\section{Ulrich partitions of type $(2,n,2)$}\label{sec-2n2}
In this section, we classify Ulrich partitions of type $(2,n,2)$. Throughout the section, we consider Ulrich partitions of the form $(a_1, a_2| b_1, \dots, b_n| c_1 , c_2)$, and normalize the evolution of the partition to subtract $1, 0, -1$ from the blocks. By symmetry, we may as well assume $a_1-a_2>c_1-c_2$. We further normalize the positions
$$a_1 = y + s + r, \quad a_2 = y, \quad c_1 = - y, \quad \mbox{and} \quad c_2 = -y -s.$$
The intersection $a_2 c_1$ occurs at time $y$, the gap between the $a$-entries is $s+r$, and the gap between the $c$-entries is $s$.
\begin{definition}
Let $P_{u}$ be the partition of type $(2, 2u,2)$ given by
$$P_{u} := (6 u +5, 2u+1 | 2 u, 2 u -2, \dots, 4, 2, -1, -3, \dots, - 2 u +3, - 2u +1 | {- 2u -1}, - 6 u -3).$$ We observe that the subpartition $(a_2|B|c_1)$ is the Ulrich partition of type $(1,2u,1)$ corresponding to the subset of $[2u]$ consisting of even numbers in the proof of Theorem \ref{thm-1n1}.
\end{definition}
\begin{example}
We have $$P_1 = (11, 3|2, -1|{-3}, -9),$$
$$P_2 = (17, 5|4,2, -1, -3|{-5}, -15).$$ The time evolution diagram of $P_1$ was given in Example \ref{ex222}. The time diagram for $P_2$ is Figure \ref{fig-242}.
\begin{figure}[htbp]
\input{242Ex.pstex_t}
\caption{Time evolution diagram of the partition $P_2$ of type $(2,4,2)$.}\label{fig-242}
\end{figure}
\end{example}
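The partitions $P_u$ can be generated and tested directly from the displayed formula. The sketch below is our own check (the function names are ours); \texttt{is\_ulrich} tests the defining condition of exactly one intersection at each time $t\in[1,N]$, with $N = 8u+4$.

```python
# is_ulrich: A-entries move left one step per unit time, C-entries move
# right, B is fixed; each time t in [1, N] must see exactly one intersection.
def is_ulrich(A, B, C):
    N = len(A)*len(B) + len(B)*len(C) + len(C)*len(A)
    return all(
        sum(1 for a in A for b in B if a - t == b)
        + sum(1 for b in B for c in C if c + t == b)
        + sum(1 for a in A for c in C if a - t == c + t) == 1
        for t in range(1, N + 1))

def P(u):
    """P_u = (6u+5, 2u+1 | 2u, 2u-2, ..., 2, -1, -3, ..., -2u+1 | -2u-1, -6u-3)."""
    A = [6*u + 5, 2*u + 1]
    B = list(range(2*u, 0, -2)) + list(range(-1, -2*u, -2))
    C = [-2*u - 1, -6*u - 3]
    return A, B, C
```

This reproduces $P_1$ and $P_2$ exactly as displayed, and confirms that $P_u$ is Ulrich for small $u$.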
The main theorem of this section asserts these are the only examples.
\begin{theorem}\label{thm-2n2}
If $P$ is an Ulrich partition of type $(2, n, 2)$, then $n= 2 u$ is even and up to symmetry $P= P_{u}$.
\end{theorem}
We first show that these examples are in fact Ulrich.
\begin{lemma}
The partition $P_{u}$ is Ulrich. In particular, every flag variety $F(2, 2u+2; 2u+4)$ admits a Schur bundle which is Ulrich for $\cO(1)$.
\end{lemma}
\begin{proof}
The dimension of $P_u$ is $N = 8u+4$. For times $t\in[1,4u+1]$, there are intersections from the Ulrich subpartition $(a_2|B|c_1)$ of type $(1,2u,1)$. At time $4u+2$ we have the intersection $a_2 c_2$. As $P_u$ is its own symmetric dual, times in $[4u+3,8u+4]$ also have intersections.
\end{proof}
The plan of the proof of Theorem \ref{thm-2n2} is similar to the classification of partitions of type $(2,n,1)$ in the previous section. We will first show that if $P$ is an Ulrich partition with $y\leq s$ then $P$ is a known example. We next show that if instead $y>s$ then $P$ can be obtained from a shorter example by a process of elongation. However, we will finally show that elongations of the known examples are never Ulrich; this final step is the primary difference from the strategy in the $(2,n,1)$ case, where such elongations were possible.
Before beginning the proof in earnest, we establish a couple of lemmas which are true for arbitrary Ulrich partitions of type $(2,n,2)$. Let $P$ be an Ulrich partition of this type, normalized as in the first paragraph of the section. (In particular, recall that $s+r = a_1-a_2>c_1-c_2 = s$.)
\begin{lemma}\label{lem-afirst}
In the Ulrich partition $P$, the intersection at time $t=1$ is $a_2 b_1$.
\end{lemma}
\begin{proof}
If not, then the intersection at time $t=1$ is of the form $b c_1$. Let $m\geq 1$ be the maximal number so that $\{-y+1,\ldots,-y+m\}\subset B$, so $-y+m+1\notin B$. This implies that $y-i\notin B$ for $i\in [m]$. Note that $m\leq s$. At the time $t_0 = 2y+s+r-m-1$ when $a_1(t_0) = -y+m+1$ we have $c_2(t_0)=y-m-1+r\geq y-m$, so $c_2(t_0)\notin B$. Furthermore, $c_1(t_0)=y+s+r-m-1>y$ and $a_2(t_0) = -y-s-r+m+1<-y$. We conclude that there is no intersection at time $t_0$ even though $a_1$ has not yet passed through the $B$-block.
\end{proof}
Lemma \ref{lem-afirst} applied to the symmetric dual implies that the last intersection must be $a_1 b_n$. From now on, we let $m\geq 1$ be the largest number such that $\{y-1,\ldots,y-m\}\subset B$.
\begin{lemma}\label{lem-boundm}
We have $m<\min\{r,y-1\}$.
\end{lemma}
\begin{proof}
First we show that $m<y-1$. Clearly $m\leq y-1$ since the intersection $a_2c_1$ happens at time $y$. If $m= y-1$ then $B$ is the contiguous block $\{y-1,\ldots,1\}$. By Lemma \ref{lem-afirst} the last intersection in $P$ is between $a_1$ and $1\in B$, so $\dim P = s+r+y-1 = 4y$. At time $2y$ we have $a_2(2y) = -y$ and $c_1(2y) = y$, so the intersection must be of the form $ac$; since $a_2-c_2< a_1-c_2$, it must be $a_2c_2$. Thus $s=2y$, $r = y+1$, and $$P = (4y+1,y|y-1,\ldots,1|{-y},-3y).$$ But then $a_1$ and $c_2$ meet at position $(y+1)/2$, which is in $B$ if $y\geq 3$. Parity is violated if $y=2$, so we conclude $m<y-1$.
Next we show $m<r$. If $m>r$ then the last intersection in $P$ is $b_1c_2$, contradicting Lemma \ref{lem-afirst}. Suppose $m=r$. At time $m+1$ the intersection is one of $bc_1$, $a_1b$, or $a_2c_1$. If it is $a_2c_1$ then $m=y-1$ and we are done by the previous paragraph. If the intersection is $bc_1$ then $-y+r+1\in B$ and $c_2$ meets $y-1\in B$ when $a_1$ meets $-y+r+1\in B$, a contradiction. Finally, if the intersection is $a_1b$ then again the last intersection is of type $bc$, violating Lemma \ref{lem-afirst}.
\end{proof}
\begin{observation}\label{obs-dim2n2}
Combining Lemmas \ref{lem-afirst} and \ref{lem-boundm}, we have $b_n = -y+m+1$ for any Ulrich partition $P$ of type $(2,n,2)$, and thus we have the dimension formula $$\dim P =2y+s+r-m-1.$$
\end{observation}
Next we establish that if the intersection $a_2c_1$ happens before $a_1$ or $c_2$ meets the $B$-block, then $P$ is a known example.
\begin{lemma}\label{lem-fundamental2n2}
Let $P$ be an Ulrich partition of type $(2,n,2)$ normalized as in this section. If $y\leq s+m$ then $P = P_{\frac{s-2}{4}}$.
\end{lemma}
\begin{proof}
If $y \leq s+m$, then every intersection until time $y$ is of the form $a_2 b$ or $b c_1$. Hence, there must be a $b$ either at position $p$ or $-p$ for every $1 \leq p < y$. Consequently, the total number of $b$ entries is $y-1$ and for $t \in (y, 2y)$ every intersection is also of the form $a_2 b$ or $b c_1$. By the dimension formula, $4y = 2y+s+r-m-1$, so $2y = s+r-m-1$.
At time $t=2y$, $a_2$ and $c_1$ are at positions $-y$ and $y$, respectively. They cannot intersect a $b$. The intersection $a_2 c_2$ happens before $a_1 c_1$, so at time $t = 2 y$ the intersection is $a_2 c_2$ and $s=2y$. At time $2y+1$, $c_2$ cannot intersect a $b$, hence the intersection must be $a_1 c_1$. We conclude that $r=2$, so $m = 1$.
We now inductively determine the $B$-block. We have $y-1,-y+2\in B$. When $a_2$ is at position $-y+k$ with $3\leq k<y$ the entry $c_1$ is at position $y-k+2$, while $c_2$ and $a_1$ have already intersected all other entries. By induction we may assume $y-k+2\in B$ iff $k$ is odd; therefore $-y+k\notin B$ iff $k$ is odd and $y-k\in B$ iff $k$ is odd.
Finally, note that $y$ is odd. Equivalently, $2\in B$. Otherwise, at time $t = 3y$, $c_2$ is at position $0$ and $a_2$ is at position $2$ and there is no intersection. Therefore $s\equiv 2 \pmod 4$, and $P = P_{\frac{s-2}{4}}$.
\end{proof}
We next argue that any Ulrich partition of type $(2,n,2)$ must be obtained from some $P_u$ by a process of elongation. Finally, we will see that $P_u$ cannot be elongated.
Given a partition $P$ at time $t=0$, we will view it as three sequences $ABC$, where $A$ is the sequence of $a$'s and blank spaces $a_1 \times \cdots \times a_2$, $B$ is the sequence of $b$'s and blank spaces between $a_2$ and $c_1$, and $C$ is the sequence of $c$'s and blank spaces $c_1 \times \cdots \times c_2$. The partition $P$ is the concatenation of these three. Given a contiguous pattern $\Psi$ of $b$'s and blank spaces at positive positions, let $\Psi^c$ be the complementary pattern which has a $b$ at position $p$ if $\Psi$ does not have a $b$ at position $-p$ and vice versa. Let $\ell(\Psi)$ be the length of $\Psi$. Let $X_\ell$ denote a contiguous sequence of blank spaces of length $\ell$.
\begin{example}
If $\Psi = b \times b \ b \ \times$, then $\Psi^c = b \times \times \ b \ \times $. Also, $X_3 = \times \times \times$.
\end{example}
\begin{definition}
Let $P=ABC$ be a partition and $\Psi$ a pattern of $b$'s and blanks. The \emph{$\Psi$-elongation of $P$} is the partition corresponding to the concatenation $$A \Psi X_{\ell(\Psi)} B X_{\ell(\Psi)} \Psi^c C$$ of patterns.
\end{definition}
\begin{example}
Continuing the previous example, if $$P = a_1 \times \times \times \ a_2 \ b_1 \ b_2 \ \times \times \ b_3 \times \times \ c_1 \times c_2,$$ then the $\Psi$-elongation of $P$ is
$$ a_1 \times \times \times a_2 \ b \times b \ b \times \ \times \times \times \times \times
\ b_1 \ b_2 \times \times \ b_3 \times \times \ \times \times \times \times \times \ b \times \times \ b \times \ c_1 \times c_2.$$
\end{example}
For integers $q, r, m>0$ with $m\leq r$, we let $K(q, r,m)$ be the pattern of $b$'s consisting of $q$ iterations of a contiguous block of $m$ $b$'s followed by $r-m$ blanks.
\begin{example}
$K(2, 6, 2) = b \ b \times \times \times \times \ b \ b \times \times \times \times $.
\end{example}
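In code (again encoding a pattern as a string of `b`'s and `x`'s purely for illustration), $K(q,r,m)$ is a repeated block:

```python
def K(q, r, m):
    # q iterations of a contiguous block of m b's followed by r - m blanks
    return ('b' * m + 'x' * (r - m)) * q

print(K(2, 6, 2))  # 'bbxxxxbbxxxx', matching the example
```

In particular $\ell(K(q,r,m)) = qr$, which is used in the proof of Lemma \ref{lem-mainMeat}.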
Our final lemma easily implies Theorem \ref{thm-2n2}.
\begin{lemma}\label{lem-mainMeat}
Let $P$ be an Ulrich partition of type $(2,n,2)$, normalized as above. If $y > s+m$, then $r$ divides $s+m$; let the quotient be $q$. Then $P$ is the $K(q, r,m)$-elongation of an Ulrich partition $P'$ of type $(2, n- s-m, 2)$. Furthermore, the initial block of $b$'s in $P'$ also has length $m$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm-2n2} assuming Lemma \ref{lem-mainMeat}]
We have already classified the Ulrich partitions of type $(2,2,2)$ in \S \ref{sec-beta2}. By induction on $n$, suppose that for $n < n_0$ the only Ulrich partitions of type $(2, n, 2)$ are $P_{u}$ with $n=2u$. Let $P$ be an Ulrich partition of type $(2,n_0,2)$. By Lemma \ref{lem-fundamental2n2} we may assume $y>s+m$. By Lemma \ref{lem-mainMeat}, $P$ is the $K(q, r,m)$-elongation of a smaller Ulrich partition $P'$. By induction and Lemma \ref{lem-mainMeat}, we must have $r=2$ and $m=1$. However, $s$ is even, so $r=2$ does not divide $s+m= s+1$. This contradiction proves the theorem.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem-mainMeat}]
We can determine the pattern of intersections inductively until the time $s+m<y$ immediately before $c_2$ first meets the $B$-block. Until this time, all intersections are of the form $a_1b$ or $bc_2$.
For the base of the induction, we first recall $\{y-1,\ldots,y-m\}\subset B$. We know $-y+m+1\in B$ by Lemma \ref{lem-boundm}, and $m<r$. Suppose $v\in (m,r]$. The only possible intersection at time $t_0$ when $a_1(t_0) = -y+v$ is $a_1b$ since $a_2(t_0)=-y+v-r-s < -y$ and $c_2(t_0)=y-v+r\geq y$. Therefore $-y+v\in B$ for $v\in (m,r]$.
We now continue by induction until we reach time $s+m$. The positions $y - (h-1)r -v$ have a $b$ for $1 \leq v \leq m$ by induction. Consequently, the positions $-y+ hr + v$ cannot have $b$'s for $1 \leq v \leq m$. Otherwise, when $c_2$ is at position $y- (h-1)r -v$, $a_1$ would be at position $-y + hr + v$ giving a coincident intersection. Thus there is a contiguous $B$-block of length $m$ at positions $y - hr - v$ for $1 \leq v \leq m$. Similarly, there are no $b$ entries in positions $y - (h-1)r - v$ for $m+1 \leq v \leq r$ by induction. When $c_2$ is at these positions $a_1$ is at the positions $-y + hr + v$. Since $a_1b$ is the only possible intersection, there must be a contiguous $B$-block of length $r-m$ at these positions.
\emph{Claim: $r|s+m$.} Write $s+m = qr+j$ with $0\leq j<r$ as in the division algorithm. What we have shown so far is that the pattern of $b$'s and blanks in the interval $B \cap (y,y-s-m]$ is the truncation of $K(q,r,m)$ to a sequence of length $s+m$. Similarly, the pattern of $b$'s and blanks in $B\cap [-y+s+m,-y)$ is the truncation of $K(q,r,m)^c$ to a sequence of length $s+m$. As $\ell(K(q,r,m)) = qr$, the claim is that no truncation actually takes place. There are two cases to consider depending on the remainder $j$.
\emph{Case 1: $1\leq j\leq m$}. In this case, $y-s-m\in B$. When $c_2$ is at position $y-s-m$, we find that $c_1$ is at position $y-m\in B$, a contradiction.
\emph{Case 2: $m< j <r$}. Consider the time $t_0$ when $a_1(t_0) = -y+s+m+1$. Then $c_2(t_0) = y-s-m-1+r\notin B$, and $c_1(t_0)\geq y$, $a_2(t_0)\leq -y$ hold. Furthermore, $-y+s+m+1\notin B$, since $c_1$ is at this position when $c_2$ is at $-y+m+1\in B$. Thus there is no intersection at time $t_0$.
Therefore $r|s+m$. Let us analyze the known intersections. For times $t\in [1,s+m]$, the intersections are of type $a_2b$ or $bc_1$. When $t\in (s+m,2s+2m]$, the intersections are all $a_1b$ or $bc_2$. This gives $y-t,-y+t\notin B$ for all such times and $y>2s+2m$. Dually, a symmetric description holds for the last $2s+2(r-m)$ times. Evolving $P$ to time $2s+2m$ and throwing out all the $b$'s which have already met $a$'s and $c$'s, we arrive at an Ulrich partition $$P'=(y-s+r-2m,y-2s-2m|B'|{-y+2s+2m},-y+s+2m)$$ such that $P$ is the $K(q,r,m)$-elongation of $P'$; it is easy to see that $B'$ is nonempty.
Finally, we analyze the length $m'$ of the initial block of $b$'s in $P'$. The dimension formula Observation \ref{obs-dim2n2} gives equalities \begin{align*}
\dim P' &= 2(y-2s-2m)+r+s-m'-1\\
\dim P &= 2y+r+s-m-1\\
\dim P &= \dim P' + 4(s+m),
\end{align*}
from which $m=m'$ follows immediately.
\end{proof}
\bibliographystyle{plain}
| {
"timestamp": "2015-12-22T02:05:06",
"yymm": "1512",
"arxiv_id": "1512.06193",
"language": "en",
"url": "https://arxiv.org/abs/1512.06193",
"abstract": "In this paper, we study equivariant vector bundles on partial flag varieties arising from Schur functors. We show that a partial flag variety with three or more steps does not admit an Ulrich bundle of this form with respect to the minimal ample class. We classify Ulrich bundles of this form on two-step flag varieties F(1,n-1;n), F(2,n-1;n), F(2,n-2;n), F(k,k+1;n) and F(k,k+2;n). We give a conjectural description of the two-step flag varieties which admit such Ulrich bundles.",
"subjects": "Algebraic Geometry (math.AG); Commutative Algebra (math.AC); Combinatorics (math.CO)",
"title": "Ulrich Schur bundles on flag varieties",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357195106374,
"lm_q2_score": 0.8175744761936437,
"lm_q1q2_score": 0.8026420666394992
} |
https://arxiv.org/abs/1502.03733 | Approximation error estimates and inverse inequalities for B-splines of maximum smoothness | In this paper, we develop approximation error estimates as well as corresponding inverse inequalities for B-splines of maximum smoothness, where both the function to be approximated and the approximation error are measured in standard Sobolev norms and semi-norms. The presented approximation error estimates do not depend on the polynomial degree of the splines but only on the grid size. We will see that the approximation lives in a subspace of the classical B-spline space. We show that for this subspace, there is an inverse inequality which is also independent of the polynomial degree. As the approximation error estimate and the inverse inequality show complementary behavior, the results shown in this paper can be used to construct fast iterative methods for solving problems arising from isogeometric discretizations of partial differential equations. | \section{Introduction}\label{sec:intro}
The objective of this paper is to prove approximation error estimates
as well as corresponding inverse estimates for B-splines of maximum smoothness.
The presented approximation error estimates do not depend on the degree of the splines but only
on the grid size. All bounds are given in terms of classical Sobolev norms and semi-norms.
In approximation theory, B-splines have been studied for a long time and many properties
are already well known. We do not go into the details of the existing results but present
the results of importance for our study throughout this paper.
The emergence of Isogeometric Analysis, introduced in~\cite{Hughes:2005}, sparked new interest
in the theoretical properties of B-splines. Since isogeometric Galerkin methods are aimed at
solving variational formulations of differential equations, approximation properties
measured in Sobolev norms need to be studied.
The results presented in this paper improve the results given in
\cite{Schumaker:1981,devore:1993,Bazilevs:2006} by explicitly studying the dependence
on the polynomial degree~$p$. Such an analysis was done in~\cite{daVeiga:2011}. However,
the results there do not cover (for~$p\geq 2$) the most important case of B-splines
of maximum smoothness~$k=p$. It turns out that the methods established in~\cite{daVeiga:2011}
for proving those bounds are not suitable in that case. Therefore, we develop a framework based on
Fourier analysis to prove rigorous bounds for $k=p$, which has the limitation that it is
only applicable for uniform grids.
Unlike the aforementioned papers we only consider approximation with B-splines
in the parameter domain within the framework of Isogeometric Analysis.
A generalization of the results to NURBS as well as the
introduction of a geometry mapping, as
presented in \cite{Bazilevs:2006}, is straightforward and does not lead to any additional
insight.
Note that a detailed study of direct and inverse estimates may lead to a deeper
understanding of isogeometric multigrid methods and give insight to suitable
preconditioning methods. We refer to \cite{Garoni:2014,Donatelli:2015}, where
similar techniques were used.
\subsection{The main results}
We now go through the main results of this paper.
For simplicity, we consider the case of one dimension first, where $\Omega = (a,b)$
with $a<b$ is the open \emph{parameter domain}.
For this domain we can introduce a \emph{uniform grid} by subdividing
$\Omega$ into \emph{elements} (subintervals) of length $\hn$. The setup of a
uniform grid is only possible if
\begin{equation*}
\nh := \hn^{-1}(b-a)\in \mathbb{N},
\end{equation*}
where $\mathbb{N}:= \{1,2,3,\ldots\}$.
In other words, the grid size $h$ has to be chosen such that $\nh$,
the number of subintervals, is an integer. We will assume this
implicitly throughout the paper.
On these grids we can introduce spaces of spline functions.
\begin{definition}
The space of spline functions on the domain $\Omega$ of degree $p\in \mathbb{N}_0:=\{0,1,2,\ldots\}$ and
continuity $k\in\{-1,0,1,2,\ldots\}$ over the uniform grid
of size $\hn$ is given by
\begin{equation*}
S_{p,k,h}(\Omega) := \left\{ u \in H^k(\Omega): \; u |_{(a+hj,a+h(j+1)]} \in \mathbb{P}^p \mbox{ for all } j=0,\ldots,\nh-1 \right\},
\end{equation*}
where $\mathbb{P}^p$ is the space of polynomials of degree $p$.
\end{definition}
Here and in what follows, $L^2(\Omega)$ and $H^r(\Omega)$ denote the standard Lebesgue and
Sobolev spaces with norms $\|\cdot\|_{L^2(\Omega)}$, $\|\cdot\|_{H^r(\Omega)}$ and
semi-norms $|\cdot|_{H^r(\Omega)}$.
Moreover, let $(\cdot,\cdot)_{L^2(\Omega)}$ be the standard scalar product for $L^2(\Omega)$ and
\begin{equation*}
(u,v)_{H^r(\Omega)} := \left(\frac{\partial^r}{\partial x^r} u,\frac{\partial^r}{\partial x^r} v\right)_{L^2(\Omega)}
\end{equation*}
be the scalar product for $H^r(\Omega)$, where $\frac{\partial^r}{\partial x^r}$ denotes the $r$-th derivative. We then have $|u|^2_{H^r(\Omega)} := (u,u)_{H^r(\Omega)}$ as well as
\begin{equation*}
\|u\|^2_{H^r(\Omega)} := \|u\|^2_{L^2(\Omega)} + \sum^r_{s=1} |u|^2_{H^s(\Omega)}
\end{equation*}
for all $r\in\mathbb{N}_0:=\{0,1,2,\ldots\}$.
Using standard trace theorems, we obtain that for $k>0$ the
space $S_{p,k,h}(\Omega)$ is the space of all $k-1$ times continuously differentiable
functions ($C^{k-1}(\Omega)$-functions), which are polynomials of degree $p$ on each
element of the uniform grid on $\Omega$. For $k=0$, there is no continuity condition, i.e.,
the space $S_{p,0,h}(\Omega)$ is the space of piecewise polynomials of degree $p$.
For $k>p$, the spline spaces reduce to spaces
of global polynomials. So, the largest possible choice for $k$ without having this
effect is $k=p$. Therefore we call B-splines with $k=p$ B-splines of \emph{maximum smoothness}.
As we are mostly interested in this case, here and in what follows,
we will use $S_{p,h}(\Omega):=S_{p,p,h}(\Omega)$.
The main result of this paper is the following.
\newcommand{\citeThrmApprox}{1}
\begin{theorem}\label{thrm:approx}
For all $u\in H^1(\Omega)$, all grid sizes $h$
and each degree $p\in\mathbb{N}$ with ${h\,p < |\Omega| = b-a}$,
there is a spline approximation $u_{p,h}\in S_{p,h}(\Omega)$ such that
\begin{equation}\label{eq:thrm:approx}
\|u-u_{p,h}\|_{L^2(\Omega)} \le \sqrt{2}\; \hn |u|_{H^1(\Omega)}
\end{equation}
is satisfied.
\end{theorem}
Note that, in contrast to the existing results presented in the next subsection, this theorem achieves
two goals: it covers the case of maximum smoothness and gives a uniform estimate for all polynomial degrees~$p$.
\begin{remark}
Obviously $S_{p,k,h}(\Omega) \supseteq S_{p,h}(\Omega)$ for all $0\le k < p$. So,
Theorem~\ref{thrm:approx} is also valid in that case. However, for this case there might
be better estimates for these larger B-spline spaces.
Moreover, Theorem~\ref{thrm:approx} is also satisfied in
the case of having repeated knots, as this is just a local
reduction of the continuity (which enlarges the corresponding space of spline
functions).
\end{remark}
In Section~\ref{sec:reduced}, we will introduce a
subspace $\widetilde{S}_{p,h}(\Omega) \subseteq S_{p,h}(\Omega)$ (cf. Definition~\ref{defi:Ssymm}) and
show that the spline approximation
is even in that subspace (cf. Corollary~\ref{cor:approx:nonper}).
Moreover, we also show a corresponding \emph{inverse inequality} for $\widetilde{S}_{p,h}(\Omega)$ (cf. Theorem~\ref{thrm:inverse}
in Section~\ref{sec:inverse}),
i.e., we will show that
\begin{equation*}
|u_{p,h}|_{H^1(\Omega)} \le 2 \sqrt{3} \hn^{-1} \|u_{p,h}\|_{L^2(\Omega)}
\end{equation*}
is satisfied for all grid sizes $h$, each $p\in \mathbb{N}$ and all $u_{p,h}\in \widetilde{S}_{p,h}(\Omega)$.
We will moreover show that both the approximation error estimate and the
inverse inequality are \emph{sharp up to constants}
(Corollaries~\ref{corr:sharp1} and~\ref{corr:sharp2}).
\begin{remark}\label{rem:counterexample}
This inverse inequality does not extend to the whole space $S_{p,h}(\Omega)$.
Here it is easy to find a counterexample: Let $\Omega = (0,1)$.
The function $u_{p,h}$, given by
\begin{equation*}
u_{p,h}(x) = \left\{
\begin{array}{ll}
(1-x/\hn)^p & \mbox{\qquad for $x \in [0,\hn)$}\\
0 & \mbox{\qquad for $x\in [\hn,1]$},
\end{array}
\right.
\end{equation*}
is a member of the space $S_{p,h}(0,1)$. Straightforward computations yield
\begin{equation*}
\frac{|u_{p,h}|_{H^1(0,1)}}{\|u_{p,h}\|_{L^2(0,1)}} = \sqrt{\frac{2p+1}{2p-1}} \;p \;\hn^{-1},
\end{equation*}
which cannot be bounded from above by a constant times $\hn^{-1}$ uniformly in $p$.
Using a standard scaling argument, this counterexample can be extended to any
$\Omega=(a,b)$.
\end{remark}
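The quotient in the remark can be checked numerically. The following sketch (our own illustration; it replaces the exact integrals by a composite midpoint rule, and the parameter values are arbitrary) compares the quadrature value of $|u_{p,h}|_{H^1}/\|u_{p,h}\|_{L^2}$ with the closed-form expression.

```python
import math

def ratio(p, h, n=20000):
    # u(x) = (1 - x/h)^p on [0, h] and zero elsewhere; only [0, h] contributes.
    u  = lambda x: (1 - x / h) ** p
    du = lambda x: -(p / h) * (1 - x / h) ** (p - 1)
    w = h / n
    l2 = sum(u(w * (i + 0.5)) ** 2 for i in range(n)) * w   # ||u||^2_{L^2}
    h1 = sum(du(w * (i + 0.5)) ** 2 for i in range(n)) * w  # |u|^2_{H^1}
    return math.sqrt(h1 / l2)

h = 0.25
for p in (2, 5, 10):
    exact = math.sqrt((2 * p + 1) / (2 * p - 1)) * p / h
    print(p, ratio(p, h), exact)  # quadrature value vs. closed form
```

The ratio grows linearly in $p$, so no bound of the form $C\hn^{-1}$ with $C$ independent of $p$ can hold on all of $S_{p,h}(\Omega)$.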
The approximation error estimate and the inverse inequality are extended to
higher Sobolev indices in Section~\ref{sec:sobolev}. Corresponding results for two
and more dimensions are given in Section~\ref{sec:dim}. There, also the extension
to Isogeometric Analysis is discussed.
\subsection{Known approximation error estimates}
Before proving the main theorems, we start with recalling two important pre-existing
approximation error estimates.
The first result is well-known in literature, cf.~\cite{Schumaker:1981}, Theorem~6.25 or
\cite{devore:1993}, Theorem~7.3. In the framework of Isogeometric Analysis,
such results have been used, e.g., in \cite{Bazilevs:2006}, Lemma~3.3.
\begin{theorem}\label{thrm:known}
For each $r\in\mathbb{N}_0$, each $k\in\mathbb{N}$, each $q\in\mathbb{N}$ and each
$p\in\mathbb{N}$, with $0\le r\le q\le p+1$ and $r\le k \le p$,
there is a constant $C(p,k,r,q)$ such that the following approximation error estimate
holds.
For all $u\in H^q(\Omega)$ and all grid sizes $h$, there is a spline approximation
$u_{p,k,h} \in S_{p,k,h}(\Omega)$ such that
\begin{equation*}
|u-u_{p,k,h}|_{H^r(\Omega)} \le C(p,k,r,q) \hn^{q-r} |u|_{H^q(\Omega)}
\end{equation*}
is satisfied.
\end{theorem}
This theorem is valid for tensor-product spaces in any dimension and
gives a local bound for locally quasi-uniform knot vectors. However, the dependence of the constant on the
polynomial degree has not been derived.
A major step towards estimates with explicit $p$-dependence was presented in \cite{daVeiga:2011},
Theorem~2, where an estimate with an explicit dependence on $p$, $k$, $r$ and $q$ was given. However,
there the continuity $k$ is limited by the upper bound $\tfrac12(p+1)$. In our notation, the theorem reads
as follows.
\begin{theorem}\label{thrm:known:2}
There is a constant $C>0$ such that for each $r\in\mathbb{N}_0$, each $k\in\mathbb{N}$, each
$q\in\mathbb{N}$ and each $p\in\mathbb{N}$ with $0\le r\le k\le q\le p+1$
and $k \le \tfrac12(p+1)$ and all grid sizes $h$, the following approximation error estimate
holds. For all $u\in H^q(\Omega)$, there is a spline
approximation $u_{p,k,h}\in S_{p,k,h}(\Omega)$ such that
\begin{equation}\nonumber
|u-u_{p,k,h}|_{H^r(\Omega)} \le C \hn^{q-r} (p-k+1)^{-(q-r)} |u|_{H^q(\Omega)}
\end{equation}
is satisfied.
\end{theorem}
Again, the original result was stated for locally quasi-uniform knots.
For any $p\geq2$ the relevant case $k=p$, which we consider, is not covered by this theorem.
Similar results to Theorem~\ref{thrm:approx} are known in approximation theory, cf.~\cite{Korneichuk:1991}.
There, however, different norms have been discussed. Hence we do not go into the details.
In~\cite{Evans:2009}, it was suggested and confirmed by numerical experiments that
Theorem~\ref{thrm:approx} is satisfied; a proof, however, was not given.
\subsection{Organization of this paper}
This paper is organized as follows.
In Section~\ref{sec:prelim}, we present
the main steps of the proof of Theorem~\ref{thrm:approx} and give some preliminaries.
In the following two sections, the details of the proof are worked out.
In Section~\ref{sec:reduced}, we introduce the reduced spline space $\widetilde{S}_{p,h}(\Omega)$,
discuss its properties and extend Theorem~\ref{thrm:approx} to that space.
In the following section, Section~\ref{sec:inverse}, we give an inverse inequality
for $\widetilde{S}_{p,h}(\Omega)$. In the remainder of the paper, we generalize
those results: In Section~\ref{sec:sobolev} we consider higher
Sobolev indices and in Section~\ref{sec:dim}, the results are generalized
to two or more dimensions.
\section{Concept of the proof of Theorem~\citeThrmApprox{} and Preliminaries}\label{sec:prelim}
The proof of Theorem~\ref{thrm:approx} is based on an estimate for periodic splines, which is
formulated as Lemma~\ref{lem:approx:per}. The proof of Lemma~\ref{lem:approx:per} is
based on a telescoping argument over a hierarchy of grids. For the proof, we require
\begin{itemize}
\item an estimate for the difference of the spline approximations of a given function on two consecutive grids,
cf.~\eqref{eq:whattolfa}, and
\item an estimate for the difference between the spline approximation on
some finest grid and the given function, cf.~Lemma~\ref{lem:non:robust}.
\end{itemize}
As the size of the finest grid approaches $0$, the constant in Lemma~\ref{lem:non:robust}
or its dependence on the spline degree $p$ does not matter, whereas the constant
in~\eqref{eq:whattolfa} directly affects the constant in the final result.
The estimate~\eqref{eq:whattolfa} is shown in Section~\ref{sec:twogrid}, cf. Lemma~\ref{lem:lfa}. There, the proof
is done by means of Fourier analysis, which causes the restriction of the analysis to equidistant
grids. The Fourier analysis follows a classical line: first, a matrix-vector formulation is
introduced, cf. Lemma~\ref{lemma:decomp}, then the symbols of the involved matrices
are derived, cf. Subsections~\ref{subsec:symbols} and~\ref{subsec:symbols2}. A closed
form for the symbol of the mass matrix is not available, so some statements on that
matrix are derived (Lemmas~\ref{lem:mass} and~\ref{lem:mass:estim}), which are used
in the proof of Lemma~\ref{lem:lfa}.
Having the result for two consecutive grids in the periodic case, we use the aforementioned
telescoping argument to give an approximation error estimate for approximating a general
periodic $H^1$-function. The extension to the non-periodic case is done by means of a
periodic extension.
\subsection{Periodic splines}
To establish the theory within this paper, we need to introduce spaces of periodic splines,
which we define as follows.
\begin{definition}\label{defi:Speriodic}
Given a spline space $S_{p,h}(\Omega)$ over $\Omega=(a,b)$, the \emph{periodic spline space}
$\widehat{S}_{p,h}(\Omega)$ contains all functions $u_{p,h}\in S_{p,h}(\Omega)$ that satisfy the linear periodicity condition
\begin{equation}\label{eq:sym:cond}
\frac{\partial^{l}}{\partial x^{l}} u_{p,h}(a)=\frac{\partial^{l}}{\partial x^{l}} u_{p,h}(b)
\mbox{ for all } l \in \mathbb{N}_0 \mbox{ with } l < p.
\end{equation}
\end{definition}
The next step is to introduce a B-spline-like basis for this space. First, we introduce the
cardinal B-splines. On $\mathbb{R}$, the cardinal B-splines are defined
as follows, cf.~\cite{Schumaker:1981}, (4.22).
\begin{definition}
The cardinal B-splines of degree $p=0$, $\psi^{(i)}_{0}: \;\mathbb{R}\rightarrow\mathbb{R}$ coincide
with the characteristic function, i.e.,
\begin{equation*}
\psi^{(i)}_{0}(x) := \left\{
\begin{array}{ll}
1 & \mbox{ for } x \in ( i , i+1 ],\\
0 & \mbox{ else,}
\end{array}
\right.
\end{equation*}
where $i \in \mathbb{Z}$.
The cardinal B-splines $\psi^{(i)}_{p}: \;\mathbb{R}\rightarrow\mathbb{R}$ of degree $p\in\mathbb{N}$ are
given by the recurrence formula
\begin{equation}\label{eq:recur:bspline}
\psi^{(i)}_{p}(x) := \frac{x-i}{p} \psi^{(i)}_{p-1}(x) + \frac{(p+i+1)-x}{p} \psi^{(i+1)}_{p-1}(x),
\end{equation}
where $i \in\mathbb{Z}$.
\end{definition}
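The recurrence~\eqref{eq:recur:bspline} can be evaluated directly; the following Python sketch (function name and test values are ours) implements it and checks two known properties: the degree-$1$ cardinal B-spline is the hat function peaking at $x=i+1$, and the cardinal B-splines of any fixed degree sum to one.

```python
def psi(p, i, x):
    # Cardinal B-spline of degree p shifted by i; support is (i, i + p + 1].
    if p == 0:
        return 1.0 if i < x <= i + 1 else 0.0
    return ((x - i) / p) * psi(p - 1, i, x) \
         + ((p + i + 1 - x) / p) * psi(p - 1, i + 1, x)

print(psi(1, 0, 1.0))                              # 1.0: peak of the hat function
print(sum(psi(3, i, 0.7) for i in range(-5, 6)))   # partition of unity, up to rounding
```

The recursion has depth $p$, which is fine for the moderate degrees considered here; for large $p$ one would memoize or use a triangular-scheme evaluation instead.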
From the cardinal B-splines $\psi^{(i)}_{p}$, we derive the B-splines $\varphi_{p,h}^{(i)}$ on $\Omega$ over
a uniform grid of size $\hn$ by a suitable scaling and shifting.
\begin{definition}
For $i\in\mathbb{Z}$ the uniform B-spline $\varphi_{p,h}^{(i)}: \;\Omega= (a,b)\rightarrow\mathbb{R}$ of degree $p\in\mathbb{N}_0$ and
grid size $\hn$ is given by
\begin{equation}
\varphi_{p,h}^{(i)}(x) := \psi^{(i)}_{p}\left(\frac{x-a}{h}\right).
\end{equation}
\end{definition}
We obtain by construction that $\mbox{supp}(\varphi_{p,h}^{(i)}) \subset [i \hn+a, (i+p+1) \hn+a ]$. Hence, $-p$ and $\nh-1$ with $\nh = \hn^{-1}(b-a)$
are the first and last indices of the B-splines supported in $\Omega$, respectively,
i.e. $\textnormal{supp} (\varphi_{p,h}^{(i)}) \cap \Omega \neq \emptyset$ is equivalent to $-p \leq i \leq \nh-1$.
Moreover, $\{\varphi_{p,h}^{(i)}\}^{\nh-1}_{i=-p}$ forms a basis for $S_{p,h}$, see, e.g., \cite{Schumaker:1981}.
Note that both $\nh$ and the basis functions depend implicitly on the choice of $\Omega$, i.e., the
values $a$ and $b$. Throughout the paper, it is clear from the context which
$\Omega$ is chosen.
For the construction of the basis for the periodic spline space~$\widehat{S}_{p,h}(\Omega)$, we assume that
\begin{equation}\label{eq:condition-grid-size}
hp < |\Omega| = b-a,
\end{equation}
i.e., that the grid is fine enough not to have basis functions that are non-zero at both end points
of the grid, cf.~\cite{Schumaker:1981}.
\begin{definition}\label{defi:basis-per}
For $\widehat{S}_{p,h}(\Omega)$, the \emph{B-spline-like basis} $\{\widehat{\varphi}_{p,h}^{(i)}\}^{\nh-1}_{i=0}$ is given by
\begin{align*}
& \widehat{\varphi}_{p,h}^{(i)}:= \varphi_{p,h}^{(i)}&&\mbox{ if } i<\nh-p, \mbox{ and} \\
& \widehat{\varphi}_{p,h}^{(i)}:= \varphi_{p,h}^{(i)}+\varphi_{p,h}^{(i-\nh)}&&\mbox{ if } i \geq \nh-p.
\end{align*}
\end{definition}
Up to indexing, this definition coincides with~(8.6) and (8.7) in~\cite{Schumaker:1981}.
Theorem~8.2 in~\cite{Schumaker:1981} states that the set $\{\widehat{\varphi}_{p,h}^{(i)}\}^{\nh-1}_{i=0}$ is actually a basis.
As $\varphi_{p,h}^{(i)}$ vanishes on $\Omega$ for all $i\not\in \{-p,\ldots,\nh-1\}$, we have
\begin{equation}\label{eq:basis:varphi}
\widehat{\varphi}_{p,h}^{(i)}=\sum_{j\in\mathbb{Z}} \varphi_{p,h}^{(i+j \nh)},
\end{equation}
where $\mathbb{Z}$ is the set of integers, for all $i=0,\ldots, \nh-1$. Using this definition,
we directly obtain that also $\widehat{\varphi}_{p,h}^{(i)} = \widehat{\varphi}_{p,h}^{(i+j\nh)}$ for any $j\in \mathbb{Z}$,
which we will use for ease of notation throughout this paper.
We call this basis B-spline-like, as each basis function is a non-negative linear combination of B-splines
and the basis forms a partition of unity on $\Omega$.
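As a sanity check of this construction, one can verify numerically that the periodic functions $\widehat{\varphi}_{p,h}^{(i)}$ still sum to one on $\Omega$. The sketch below (our own illustration; it works in the scaled coordinate $t=(x-a)/\hn$, and the parameter values are arbitrary) wraps the cardinal B-splines around the grid as in~\eqref{eq:basis:varphi}; under~\eqref{eq:condition-grid-size} only the shifts $j\in\{-1,0,1\}$ can contribute.

```python
def psi(p, i, x):
    # cardinal B-spline of degree p shifted by i, support (i, i + p + 1]
    if p == 0:
        return 1.0 if i < x <= i + 1 else 0.0
    return ((x - i) / p) * psi(p - 1, i, x) + ((p + i + 1 - x) / p) * psi(p - 1, i + 1, x)

def phi_hat(p, n, i, t):
    # periodic basis function in the scaled coordinate t = (x - a)/h,
    # on a uniform grid with n = (b - a)/h elements
    return sum(psi(p, i + j * n, t) for j in (-1, 0, 1))

p, n = 3, 8                 # needs p < n, i.e. h*p < b - a
for t in (0.25, 3.7, 7.9):  # scaled coordinates in (0, n]
    print(sum(phi_hat(p, n, i, t) for i in range(n)))  # 1.0 up to rounding
```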
\subsection{A non-robust approximation error estimate in the periodic case}
We can extend Theorem~\ref{thrm:known} for $k=p$ to the following Lemma \ref{lem:non:robust} stating
that the approximation error estimate is still satisfied if we approximate
periodic functions with periodic splines. First, we introduce the spaces
of periodic functions as follows.
\begin{definition}
For $\Omega=(a,b)$, the space $\widehat{H}^q(\Omega)$ is the space of all
$u\in H^q(\Omega)$ that satisfy the periodicity condition
\begin{equation}\label{eq:pc}
\frac{\partial^{l}}{\partial x^{l}} u(a)=\frac{\partial^{l}}{\partial x^{l}} u(b)
\mbox{ for all } l \in \mathbb{N}_0 \mbox{ with } l < q.
\end{equation}
\end{definition}
Note that standard trace theorems guarantee that
the periodicity condition~\eqref{eq:pc} is well-defined.
For this space, the following lemma holds.
\begin{lemma}\label{lem:non:robust}
For each $r\in\mathbb{N}_0$, each $q\in\mathbb{N}$ and each
$p\in\mathbb{N}$ with $0\le r\le q\le p+1$, there is a constant $C(p,r,q)$
such that the following approximation error estimate holds.
For all $u \in \widehat{H}^{q}(\Omega)$ and all grid sizes $h$, there is
a spline approximation $u_{p,h} \in \widehat{S}_{p,h}(\Omega)$ such that
\begin{equation*}
|u-u_{p,h} |_{H^r(\Omega)} \le C(p,r,q) \hn^{q-r} |u|_{H^q(\Omega)}
\end{equation*}
is satisfied.
\end{lemma}
\begin{proof}
In the following, we assume without loss of generality that $\Omega=(0,1)$. The extension
to any other $\Omega=(a,b)$ follows using a standard scaling argument.
Let $w$ be the periodic extension of the function $u$ to $\mathbb{R}$, i.e., $w(x):=u(x-\lfloor x \rfloor)$.
Note that the restriction of $w$ to any finite interval is again a function in the Sobolev space~$H^q$.
The remainder of the proof is based on the proof in \S~6.4 in~\cite{Schumaker:1981}.
We make use of the fact that the proof uses local projections. Let $Q_{p,h}: H^q(\mathbb{R})
\rightarrow S_{p,h}(\mathbb{R})$ be the projection operator, as introduced in (6.40) in~\cite{Schumaker:1981}.
The value of the approximation $Q_{p,h}w$ of a function $w$ at a certain subinterval
$I_i:=(i\;h,(i+1)\;h)\subseteq \Omega$ only depends on the values of the function to be approximated in
a certain neighborhood $\widetilde{I}_i:=((i-p)\;h,(i+p+1)\;h)$. So, from the periodicity of $w$, the
periodicity of $Q_{p,h}w$ follows immediately. Hence its restriction to $(0,1)$ is a periodic spline,
i.e. $Q_{p,h}w|_{(0,1)}\in\widehat{S}_{p,h}(0,1)$. We define $u_{p,h}$ to be the restriction of $Q_{p,h}w$ to $(0,1)$.
Due to \cite{Schumaker:1981}, Theorem~6.24, the local estimate
\begin{equation}\nonumber
|w-Q_{p,h}w|_{H^r(I_i)} \le \widetilde{C}(p,r,q) \hn^{q-r} |w|_{H^q(\widetilde{I}_i)}
\end{equation}
is satisfied for the projector $Q_{p,h}$ and
a constant $\widetilde{C}(p,r,q)$, which is independent of~$\hn$. By summing
over all elements, we obtain
\begin{align*}
&|u-u_{p,h} |_{H^r(0,1)}^2
= |w-Q_{p,h}w |_{H^r(0,1)}^2
= \sum_{i=0}^{\nh-1} |w-Q_{p,h}w|_{H^r(I_i)}^2 \\
& \quad \le \widetilde{C}^2(p,r,q) \hn^{2(q-r)} \sum_{i=0}^{\nh-1} |w|_{H^q(\widetilde{I}_i)}^2
=\widetilde{C}^2(p,r,q) \hn^{2(q-r)} \sum_{i=0}^{\nh-1}\sum_{j=-p}^{p} |w|_{H^q(I_{i+j})}^2.
\end{align*}
Using the periodicity of $w$, we can express the last term using~$|u|_{H^q(I_{l})}$
for~$l\in\{0,\ldots,\nh-1\}$ only. By counting the occurrences of the
summands~$|u|_{H^q(I_{l})}$, we obtain
\begin{equation}\nonumber
\sum_{i=0}^{\nh-1}\sum_{j=-p}^{p} |w|_{H^q(I_{i+j})}^2
= (2p+1) \sum_{i=0}^{\nh-1} |u|_{H^q(I_{i})}^2 = (2p+1) |u|_{H^q(0,1)}^2,
\end{equation}
which finishes the proof for $C(p,r,q) = (2p+1)^{1/2} \widetilde{C}(p,r,q)$.\qed
\end{proof}
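The convergence rate asserted by the lemma can be observed numerically. The following sketch is an illustration only: it uses periodic spline \emph{interpolation} (SciPy's \texttt{make\_interp\_spline}) rather than the quasi-interpolant $Q_{p,h}$ from the proof, and checks that halving $h$ reduces the $L^2$ error of a smooth periodic function by roughly $2^{q-r}=2^{p+1}$:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def l2_err(n, p=3):
    # interpolate u(x) = sin(2*pi*x) on a uniform periodic grid with n cells
    x = np.linspace(0.0, 1.0, n + 1)
    y = np.sin(2 * np.pi * x)
    y[-1] = y[0]                                # enforce exact periodicity of the data
    s = make_interp_spline(x, y, k=p, bc_type='periodic')
    t = np.linspace(0.0, 1.0, 4001)
    e = np.sin(2 * np.pi * t) - s(t)
    return np.sqrt(np.mean(e * e))              # discrete L2 error

ratio = l2_err(16) / l2_err(32)
assert 10.0 < ratio < 25.0                      # ~ 2^(q-r) = 2^4 = 16 for p = 3, q = 4, r = 0
```

The observed ratio clusters around $16$, matching the predicted order $\hn^{q-r}$ with $q=p+1$ and $r=0$.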
\section{A robust approximation error estimate for two consecutive grids in the periodic case}\label{sec:twogrid}
In this section we analyze the case of approximating a periodic spline function on a fine grid
by a periodic spline function on a coarser grid. In the next section,
we extend these results to the approximation of general functions and
to the non-periodic case. The extension to the non-periodic case is done by
extending functions in $H^1(0,1)$ to $(-1,1)$ by reflecting them on the $y$-axis. So,
without loss of generality, we will restrict ourselves to $\Omega=(-1,1)$ throughout this
section. Moreover, for the construction of~\eqref{eq:our:mass:formula}, we will
need that $hp<1$, which is stronger than the requirement $hp<b-a$, cf. Theorem~\ref{thrm:approx}.
So, throughout this section, we will use the following assumptions.
\begin{assumption}
The domain is given by $\Omega=(-1,1)$ and the grid size is small enough such that
$hp<1$ holds.
\end{assumption}
In the next section, we will make use of a telescoping argument. For this purpose,
we have to analyze a fixed interpolation operator. So, within this section, we will show
that
\begin{equation}\label{eq:whattolfa}
\|(I- \widehat{\Pi}_{p,\hn} ) u_{p,h}\|_{L^2(-1,1)} \le \frac{1}{\sqrt{2}}\; \hn |u_{p,h}|_{H^1(-1,1)}
\end{equation}
holds for all $u_{p,h}\in \widehat{S}_{p,\frac{h}{2}}(-1,1)$, where
$I$ is the identity and $\widehat{\Pi}_{p,\hn}$ is the $H^1$-orthogonal projection operator,
given by the following definition.
\begin{definition}\label{def:H1projection}
The projection $\widehat{\Pi}_{p,\hn}:\widehat{H}^1(-1,1)\rightarrow \widehat{S}_{p,h}(-1,1)$
maps every $u\in \widehat{H}^1(-1,1)$ to the function $u_{p,h} \in \widehat{S}_{p,h}(-1,1)$ satisfying
\begin{equation}\label{eq:def:projection}
(u_{p,h}, v_{p,h})_{H^1_{\circ}(-1,1)} = (u, v_{p,h})_{H^1_{\circ}(-1,1)}
\end{equation}
for all $v_{p,h}\in \widehat{S}_{p,h}(-1,1)$, where
\begin{equation*}
(u,v)_{H^1_{\circ}(-1,1)}:=(u,v)_{H^1(-1,1)} + \left(\int_{-1}^1 u(x)\textnormal{d} x\right)\left(\int_{-1}^1 v(x)\textnormal{d} x\right).
\end{equation*}
\end{definition}
Within the next subsections, we will prove~\eqref{eq:whattolfa}. This will be done by
a rigorous version of Fourier analysis. Fourier analysis is a well-known tool for
analyzing convergence properties of numerical methods, cf. the work by A. Brandt, like~\cite{Brandt:1977},
and many others. It provides a framework to determine sharp bounds for the convergence
rates of multigrid methods and other iterative solvers for problems arising from partial differential equations.
This is different from classical analysis, which typically yields qualitative statements only.
For a detailed introduction into Fourier analysis, see, e.g.,~\cite{Trottenberg:2001}.
Recently, it has also been applied in the area of Isogeometric Analysis, cf.~\cite{Garoni:2014}.
Typically, Fourier analysis is done under simplifying assumptions, like assuming uniform
grids and neglecting the boundary. In this case, one refers to \emph{local} Fourier analysis
(or local mode analysis). This analysis can be understood as a heuristic method to study
methods of interest. In a recent work, cf.~\cite{Garoni:2014}, it was also shown
to yield rigorous statements in a limit case.
We, however, are interested in a completely rigorous analysis. As we restrict ourselves
to periodic spline spaces, the Fourier modes are the exact eigenvectors of the matrices
of interest, which will allow us to diagonalize these matrices using a similarity
transformation. Based on such a diagonalization, we will be able to
prove~\eqref{eq:whattolfa}.
As a first step, we introduce a matrix-vector formulation of~\eqref{eq:whattolfa}.
\subsection{A matrix-vector formulation of the estimate}
Having fixed the B-spline like basis $\{\widehat{\varphi}_{p,h}^{(i)}\}_{i=0}^{\nh-1}$, we can write
any function $u_{p,h}\in \widehat{S}_{p,h}(-1,1)$ as a linear combination of these basis functions:
\begin{equation*}
u_{p,h} = \sum_{i=0}^{\nh-1} u_{p,h}^{(i)} \widehat{\varphi}_{p,h}^{(i)}.
\end{equation*}
The coefficients $u_{p,h}^{(i)}$ can be collected in a coefficient vector: We define
$\ul{u}_{p,h}:=(u_{p,h}^{(i)})_{i=0}^{\nh-1}$. So, the vector $\ul{u}_{p,h}$ is
the representation of the function $u_{p,h}$ with respect to the B-spline like basis.
Here and in what follows, we will always assume underlined quantities to be the
basis representation of the corresponding function with respect to the basis $\{\widehat{\varphi}_{p,h}^{(i)}\}_{i=0}^{\nh-1}$.
By plugging such a decomposition into the standard $L^2$-scalar product $(\cdot,\cdot)_{L^2(-1,1)}$,
we obtain
\begin{equation*}
(u_{p,h},v_{p,h})_{L^2(-1,1)} = \sum_{i=0}^{\nh-1}\sum_{j=0}^{\nh-1}u_{p,h}^{(i)}\;v_{p,h}^{(j)}
\; (\widehat{\varphi}_{p,h}^{(i)},\widehat{\varphi}_{p,h}^{(j)})_{L^2(-1,1)}.
\end{equation*}
As the grid is equidistant and the splines are periodic, we obtain that for all $i$ and $j$ the relation
$(\widehat{\varphi}_{p,h}^{(i)},\widehat{\varphi}_{p,h}^{(j)})_{L^2(-1,1)}=m_{p,h}^{(i-j)}$ holds with coefficients
$m_{p,h}^{(i)}:=(\widehat{\varphi}_{p,h}^{(i)},\widehat{\varphi}_{p,h}^{(0)})_{L^2(-1,1)}$. Those coefficients
form a circulant matrix $M_{p,h}:=(m_{p,h}^{(i-j)})_{i=0,\ldots,\nh-1}^{j=0,\ldots,\nh-1}$, which is called the
mass matrix. We immediately obtain
\begin{equation*}
(u_{p,h},v_{p,h})_{L^2(-1,1)} = (\ul{u}_{p,h},\ul{v}_{p,h})_{M_{p,h}} := \ul{v}_{p,h}^T M_{p,h}\ul{u}_{p,h}
\end{equation*}
and
\begin{equation*}
\|u_{p,h}\|_{L^2(-1,1)}^2 = \|\ul{u}_{p,h}\|_{M_{p,h}}^2 := \ul{u}_{p,h}^T M_{p,h} \ul{u}_{p,h}.
\end{equation*}
Considering the support of the basis functions, we see that the bandwidth
of the mass matrix is $2p+1$, i.e., $m_{p,h}^{(i-j)} = 0$ for all $i,j$ with $|i-j|>p$.
Analogously to the definition of the mass matrix, we can introduce the stiffness matrix, representing the $H^1_{\circ}$-scalar product.
The stiffness matrix is given by $K_{p,h}:=(k_{p,h}^{(i-j)})_{i=0,\ldots,\nh-1}^{j=0,\ldots,\nh-1}$, where the coefficients are given by
\begin{equation*}
k_{p,h}^{(i)} := \left( \widehat{\varphi}_{p,h}^{(i)}, \widehat{\varphi}_{p,h}^{(0)} \right)_{H^1_{\circ}(-1,1)}.
\end{equation*}
Since the basis functions $\widehat{\varphi}_{p,h}^{(i)}$ form a partition of unity on $\Omega=(-1,1)$ and all have
the same integral, we have $\int_{-1}^1 \widehat{\varphi}_{p,h}^{(i)}(x) \textnormal{d} x=\hn$ and further
\begin{equation}\label{eq:def:stiff}
k_{p,h}^{(i)} = \left( \widehat{\varphi}_{p,h}^{(i)}, \widehat{\varphi}_{p,h}^{(0)} \right)_{H^1(-1,1)}
+ \hn^2.
\end{equation}
Note that for uniform knot vectors the identity
\begin{equation*}
\frac{\partial}{\partial x} \varphi_{p,h}^{(j)}(x) = \frac{1}{\hn}\left(\varphi_{p-1,h}^{(j-1)}(x)- \varphi_{p-1,h}^{(j)}(x)\right)
\end{equation*}
holds, see e.g. (5.36) in \cite{Schumaker:1981}. This statement directly carries over to the periodic splines using relation \eqref{eq:basis:varphi}, i.e.,
\begin{equation*}
\frac{\partial}{\partial x} \widehat{\varphi}_{p,h}^{(j)}(x) = \frac{1}{\hn}\left(\widehat{\varphi}_{p-1,h}^{(j-1)}(x)- \widehat{\varphi}_{p-1,h}^{(j)}(x)\right)
\end{equation*}
also holds. By plugging this into~\eqref{eq:def:stiff}, the entries of
the stiffness matrix can be derived directly using the entries of the mass matrix for splines of order $p-1$. Straight-forward
calculations show that
\begin{equation}\label{eq:k:decomp}
K_{p,h} = D_{h} M_{p-1,h} D_{h}^T + E_{h},
\end{equation}
where the gradient matrix $D_{h}:=(d_{h}^{(i-j)})_{i=0,\ldots,\nh-1}^{j=0,\ldots,\nh-1}$ is given by the coefficients
\begin{equation*}
d_{h}^{(i)} := \frac{1}{\hn} \left\{
\begin{array}{ll}
1 & \mbox{ for } i\in \nh \,\mathbb{Z} \\
-1 & \mbox{ for } i\in \nh \,\mathbb{Z}-1 \\
0 & \mbox{ else}
\end{array}
\right.,
\end{equation*}
the rank-one matrix $E_h$ is given by
$E_h := \hn^2 \underline{\bf{1}}_{h} \underline{\bf{1}}_{h}^T$, where $\underline{\bf{1}}_{h}:=(1,\ldots,1)^T\in\mathbb{R}^{\nh}$
is a vector consisting only of ones, representing the constant function.
Note that $D_h$, $E_h$ and, consequently, $K_{p,h}$ are also circulant matrices.
To derive a matrix-vector formulation of \eqref{eq:whattolfa}, we have to introduce a matrix that represents
the canonical embedding from $\widehat{S}_{p,h}(-1,1)$ into $\widehat{S}_{p,\frac{h}{2}}(-1,1)$.
The following lemma is rather well-known in literature, cf. \cite{Chui:1992} equation (4.3.4), and can be easily
shown by induction in $p$.
\begin{lemma}
For all $p\in\mathbb{N}$, all grid sizes $h$ and all $x\in\mathbb{R}$,
\begin{equation}\nonumber
\varphi_{p,h}^{(j)}(x) = 2^{-p} \sum_{l=0}^{p+1} \left(\begin{array}{c}p+1\\l\end{array}\right)
\varphi_{p,\tfrac{h}{2}}^{(2j+l)}(x)
\end{equation}
is satisfied for all $j=-p,\ldots,\nh-p-1$.
\end{lemma}
This directly carries over to the periodic splines, i.e., we obtain
\begin{equation}\label{eq:intergrid:per}
\widehat{\varphi}_{p,h}^{(j)}(x) = 2^{-p} \sum_{l=0}^{p+1} \left(\begin{array}{c}p+1\\l\end{array}\right)
\widehat{\varphi}_{p,\tfrac{h}{2}}^{(2j+l)}(x)
= \sum_{i\in\mathbb{Z}}
\underbrace{2^{-p}\left(\begin{array}{c}p+1\\ i-2j \end{array}\right)}
_{\displaystyle p_{p,\tfrac{h}{2}}^{(i,j)}:=}
\widehat{\varphi}_{p,\tfrac{h}{2}}^{(i)}(x).
\end{equation}
Here, we use equation \eqref{eq:basis:varphi} and that
the binomial coefficient $\left(\begin{array}{c}a\\b \end{array}\right)$ vanishes for $b\not\in \{0,\ldots,a\}$.
Again, we define the matrix $P_{p,\tfrac{h}{2}}:=(p_{p,\tfrac{h}{2}}^{(i,j)})_{i=0,\ldots, 2\nh-1}^{j=0,\ldots,\nh-1}$.
Here and in what follows, we make use of $n_{\frac{h}{2}} = 2 \nh$.
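As a numerical sanity check of the preceding two-scale relation, the following snippet (an illustration only, assuming the cardinal-B-spline representation $\varphi_{p,h}^{(j)}(x)=N_p(x/h-j)$ and SciPy's \texttt{BSpline.basis\_element}) verifies the refinement identity for $p=3$:

```python
import numpy as np
from math import comb
from scipy.interpolate import BSpline

p = 3
# cardinal B-spline N_p of degree p, supported on [0, p+1]
N = BSpline.basis_element(np.arange(p + 2), extrapolate=False)
Np = lambda x: np.nan_to_num(N(np.asarray(x, dtype=float)))  # zero outside the support
x = np.linspace(-1.0, p + 2.0, 500)
lhs = Np(x)
# refinement relation: N_p(x) = 2^{-p} sum_l binom(p+1, l) N_p(2x - l)
rhs = sum(2.0**(-p) * comb(p + 1, l) * Np(2 * x - l) for l in range(p + 2))
assert np.allclose(lhs, rhs, atol=1e-12)
```

After rescaling and shifting, this is exactly the statement of the lemma for the basis functions $\varphi_{p,h}^{(j)}$.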
\begin{lemma}\label{lemma:decomp}
The inequality~\eqref{eq:whattolfa} is equivalent to
\begin{equation}\label{eq:whattolfa2}
\|M_{p,\frac{\hn}{2}}^{1/2} (I-P_{p,\frac{\hn}{2}} K_{p,\hn}^{-1} P_{p,\frac{\hn}{2}}^T K_{p,\frac{\hn}{2}}) K_{p,\frac{\hn}{2}}^{-1/2}\|
\le \frac{1}{\sqrt{2}} h ,
\end{equation}
which is a consequence of the combination of
\begin{align}
& \|M_{p,\frac{\hn}{2}}^{1/2}M_{p-1,\frac{\hn}{2}}^{-1/2}\|\le 1\qquad \mbox{and}\label{eq:whattolfa3}\\
& \|M_{p-1,\frac{\hn}{2}}^{1/2} (I-P_{p,\frac{\hn}{2}} K_{p,\hn}^{-1} P_{p,\frac{\hn}{2}}^T K_{p,\frac{\hn}{2}}) K_{p,\frac{\hn}{2}}^{-1/2}\|
\le \frac{1}{\sqrt{2}} h. \label{eq:whattolfa4}
\end{align}
\end{lemma}
Here and in what follows, $\|\cdot\|$ is the Euclidean norm and the square
root $A^{1/2}$ of a symmetric and positive definite matrix $A$ is that symmetric
and positive definite matrix that satisfies $A^{1/2}A^{1/2} = A$.
\begin{proof}[of Lemma~\ref{lemma:decomp}]
Using the introduced matrices $K_{p,h}$ and $P_{p,\tfrac{h}{2}}$, we can rewrite~\eqref{eq:def:projection} for
the choice $u := u_{p,\frac{h}{2}} \in \widehat{S}_{p,\frac{h}{2}}$ in matrix-vector form as
\begin{equation}\nonumber
(P_{p,\frac{\hn}{2}} \ul{u}_{p,h}, P_{p,\frac{\hn}{2}} \ul{v}_{p,h})_{K_{p,\frac{\hn}{2}}}
= (\ul{u}_{p,\frac{h}{2}}, P_{p,\frac{\hn}{2}}\ul{v}_{p,h})_{K_{p,\frac{\hn}{2}}},
\end{equation}
which is equivalent to
\begin{equation}\nonumber
P_{p,\frac{\hn}{2}}^TK_{p,\frac{\hn}{2}}P_{p,\frac{\hn}{2}} \ul{u}_{p,h} =
P_{p,\frac{\hn}{2}}^T K_{p,\frac{\hn}{2}} \ul{u}_{p,\frac{h}{2}}.
\end{equation}
This yields, using the Galerkin principle ($P_{p,\frac{\hn}{2}}^TK_{p,\frac{\hn}{2}}P_{p,\frac{\hn}{2}}=K_{p,\hn}$),
that the coarse-grid approximation $\ul{u}_{p,h}$ is given by
\begin{equation}\nonumber
\ul{u}_{p,h} = K_{p,\hn}^{-1} P_{p,\frac{\hn}{2}}^T K_{p,\frac{\hn}{2}} \ul{u}_{p,\frac{h}{2}}.
\end{equation}
By plugging this into~\eqref{eq:whattolfa}, we see that we have to show
\begin{equation*}
\|(I-P_{p,\frac{\hn}{2}} K_{p,\hn}^{-1} P_{p,\frac{\hn}{2}}^T K_{p,\frac{\hn}{2}}) \ul{u}_{p,\frac{h}{2}}\|_{M_{p,\frac{\hn}{2}}}
\le \frac{1}{\sqrt{2}} \hn \|\ul{u}_{p,\frac{h}{2}}\|_{K_{p,\frac{\hn}{2}}}
\end{equation*}
for all $\ul{u}_{p,\frac{h}{2}} \in \mathbb{R}^{2\nh}$. By rewriting this using a standard matrix norm, we obtain
\eqref{eq:whattolfa2}.
Using the semi-multiplicativity of matrix norms, we obtain that~\eqref{eq:whattolfa2} is
a consequence of \eqref{eq:whattolfa3} and \eqref{eq:whattolfa4}.
\qed\end{proof}
Note that the stiffness matrix for some degree $p$
depends implicitly on the mass matrix for the degree $p-1$. So, analyzing~\eqref{eq:whattolfa4} is more convenient
than analyzing~\eqref{eq:whattolfa2} as the inequality~\eqref{eq:whattolfa4} depends just on the one mass matrix $M_{p-1,\frac{\hn}{2}}$,
whereas~\eqref{eq:whattolfa2} depends on two mass matrices: $M_{p-1,\frac{\hn}{2}}$ and $M_{p,\frac{\hn}{2}}$.
We will show~\eqref{eq:whattolfa3} in the next subsection and~\eqref{eq:whattolfa4} in the remainder of this section.
\subsection{A lemma relating the mass matrices for different polynomial degrees}
The estimate~\eqref{eq:whattolfa3} is a direct consequence of the following lemma.
\begin{lemma}\label{lem:mass}
For all $p\in \mathbb{N}$, grid sizes $h$ and vectors
$\underline{u}_h \in \mathbb{R}^{\nh}$, the inequality
\begin{equation}\nonumber
\|\underline{u}_h\|_{M_{p,h}} \le \|\underline{u}_h\|_{M_{p-1,h}}
\end{equation}
is satisfied.
\end{lemma}
\begin{proof}
First we observe that the convolution formula for cardinal B-splines, cf. equation~(13) in~\cite{Garoni:2014},
can be carried over to the functions $\widehat{\varphi}_{p,h}^{(i)}$, i.e., that
\begin{equation}\label{eq:rel1}
\widehat{\varphi}_{p,h}^{(i)}(x) = h^{-1} \int_0^{h} \widehat{\varphi}_{p-1,h}^{(i)}(x-t) \textnormal{d} t
\end{equation}
holds. Let $\underline{u}_h=(u_h^{(i)})_{i=0}^{\nh-1}$. Then, using~\eqref{eq:rel1}, we have that
\begin{align*}
\|\underline{u}_h\|_{M_{p,h}}^2
& = \int_{-1}^1 \left( \sum_{i=0}^{\nh-1} u_h^{(i)} \widehat{\varphi}_{p,h}^{(i)}(x) \right)^2 \textnormal{d} x \\
& = \int_{-1}^1 \left( \sum_{i=0}^{\nh-1} u_h^{(i)} h^{-1} \int_0^h \widehat{\varphi}_{p-1,h}^{(i)}(x-t) \textnormal{d} t \right)^2 \textnormal{d} x \\
& = h^{-2} \int_{-1}^1 \left( \int_0^{h} \left( \sum_{i=0}^{\nh-1} u_h^{(i)} \widehat{\varphi}_{p-1,h}^{(i)}(x-t) \right) \textnormal{d} t \right)^2 \textnormal{d} x \\
& = h^{-2} \int_{-1}^1 \left( \int_0^{h} 1 \, s(x-t) \textnormal{d} t \right)^2 \textnormal{d} x
\end{align*}
holds, where $s(x):= \sum_{i=0}^{\nh-1} u_h^{(i)} \widehat{\varphi}_{p-1,h}^{(i)}(x)$.
Now, we apply the Cauchy-Schwarz inequality to the inner integral and obtain
\begin{align*}
\|\underline{u}_h\|_{M_{p,h}}^2
& \leq h^{-2} \int_{-1}^1 \left( \int_0^{h} 1^2 \textnormal{d} t \right)\left( \int_0^{h}s^2(x-t) \textnormal{d} t \right) \textnormal{d} x \\
& = h^{-1} \int_{-1}^1 \int_0^{h} s^2(x-t) \textnormal{d} t\,\textnormal{d} x = h^{-1} \int_0^{h} \int_{-1}^1 s^2(x-t) \textnormal{d} x\,\textnormal{d} t.
\end{align*}
Observe that due to periodicity, $\int_{-1}^1 s^2(x-t) \textnormal{d} x =
\int_{-1}^1 s^2(\xi) \textnormal{d} \xi$ for all $t\in[0,h]$, which implies
\begin{align*}
\|\underline{u}_h\|_{M_{p,h}}^2
& \le h^{-1} \int_0^{h} \int_{-1}^1 s^2(\xi) \textnormal{d} \xi\,\textnormal{d} t
= h^{-1} \left(\int_{0}^h 1\textnormal{d} t\right)\left( \int_{-1}^1 s^2(\xi) \textnormal{d} \xi \right)\\
& = \int_{-1}^1 \left( \sum_{i=0}^{\nh-1} u_h^{(i)} \widehat{\varphi}_{p-1,h}^{(i)}(\xi) \right)^2 \textnormal{d} \xi = \|\underline{u}_h\|_{M_{p-1,h}}^2,
\end{align*}
which finishes the proof.
\qed\end{proof}
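Lemma~\ref{lem:mass} states that $M_{p-1,h}-M_{p,h}$ is positive semidefinite. For circulant matrices this can be checked by comparing symbols, i.e., the DFTs of the stencils. The following sketch (an illustration only; the stencils are computed by quadrature of cardinal B-spline products, normalized to $h=1$) confirms this for small $p$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import BSpline

def mass_stencil(p, n):
    # m^{(j)} = \int N_p(x) N_p(x - j) dx for the cardinal B-spline N_p of degree p,
    # arranged as the first column of an n-by-n circulant matrix (h = 1)
    N = BSpline.basis_element(np.arange(p + 2), extrapolate=False)
    val = lambda x: float(np.nan_to_num(N(x)))      # N_p vanishes outside [0, p+1]
    c = np.zeros(n)
    for j in range(-p, p + 1):
        c[j % n], _ = quad(lambda x: val(x) * val(x - j), 0, p + 1)
    return c

n = 8
for p in (2, 3):
    # eigenvalues of a symmetric circulant matrix are the (real) DFT of its first
    # column; the lemma asserts that the symbol of M_{p-1} - M_p is non-negative
    d = np.fft.fft(mass_stencil(p - 1, n)).real - np.fft.fft(mass_stencil(p, n)).real
    assert d.min() > -1e-6
```

The minimal symbol difference is attained at the zero frequency, where both symbols equal the common row sum and the difference vanishes.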
\subsection{Symbols of mass matrix and stiffness matrix}\label{subsec:symbols}
As the matrices $M_{p,h}$ and $K_{p,h}$ are circulant matrices,
we can analyze them using Fourier analysis. So, we consider the Fourier vectors
\begin{equation*}
\underline{f}_{h,j}:=(\textnormal{\bf e}^{ 2ij h\pi \textnormal{\bf i}})_{i=0}^{\nh-1} \qquad \mbox{ for } j = 0,\ldots, \nh-1,
\end{equation*}
where $\textnormal{\bf i}$ is the imaginary unit.
We observe (using that the bandwidth of the mass matrix is $2p+1$) that
\begin{align*}
( M_{p,h} \underline{f}_{h,j} )_i
&= \sum_{l=-p}^p m_{p,h}^{(l)} \textnormal{\bf e}^{2(i+l)jh\pi \textnormal{\bf i} }
= \sum_{l=-p}^p m_{p,h}^{(l)} \textnormal{\bf e}^{2ljh\pi \textnormal{\bf i} } \textnormal{\bf e}^{2ijh\pi \textnormal{\bf i} }\\
&= \underbr{\sum_{l=-p}^p m_{p,h}^{(l)} \textnormal{\bf e}^{2ljh\pi \textnormal{\bf i} }}{ \widehat{m}_{p,h}^{(j)}:= } ( \underline{f}_{h,j} )_i
\end{align*}
for all $i=0,\ldots,\nh-1$ and $j=0,\ldots,\nh-1$ and consequently
\begin{equation}\nonumber
M_{p,h} \underline{f}_{h,j} = \widehat{m}_{p,h}^{(j)} \underline{f}_{h,j}
\end{equation}
is satisfied for all $j=0,\ldots,\nh-1$, i.e., $\underline{f}_{h,j}$ is an eigenvector of $M_{p,h}$ with corresponding eigenvalue
$\widehat{m}_{p,h}^{(j)}$. As the $\nh$ Fourier vectors are pairwise orthogonal, they
define a basis of $\mathbb{C}^{\nh}$. Therefore, the matrix $\mathbb{F}_{h}$, obtained by collecting
the vectors $\underline{f}_{h,j}$, i.e.,
\begin{equation*}
\mathbb{F}_{h} := \left( \begin{array}{cccc} \underline{f}_{h,0} & \underline{f}_{h,1} & \cdots & \underline{f}_{h,\nh-1} \end{array}\right)
= (\textnormal{\bf e}^{ 2ij h\pi \textnormal{\bf i}})_{i=0,\ldots,\nh-1}^{j=0,\ldots,\nh-1},
\end{equation*}
is a non-singular matrix. As~$\mathbb{F}_{h}$ is the matrix built from the eigenvectors, it diagonalizes
the matrix $M_{p,h}$, i.e.,
\begin{equation}\label{eq:mhat}
\mathbb{F}_{h}^{-1} M_{p,h} \mathbb{F}_{h} = \widehat{M}_{p,h},
\end{equation}
where $\widehat{M}_{p,h}:=\textnormal{diag}(\widehat{m}_{p,h}^{(0)},\ldots,\widehat{m}_{p,h}^{(\nh-1)})$. Analogously, we obtain
\begin{equation}\label{eq:dhat}
\mathbb{F}_{h}^{-1} D_{h} \mathbb{F}_{h} = \widehat{D}_{h},
\end{equation}
where $\widehat{D}_{h}:=\textnormal{diag}(\widehat{d}_{h}^{(0)},\ldots,\widehat{d}_{h}^{(\nh-1)})$ with
\begin{equation}\label{eq:dhatcoef}
\widehat{d}_{h}^{(j)}:=\hn^{-1}(1-\textnormal{\bf e}^{2jh\pi\textnormal{\bf i}}).
\end{equation}
Using the same construction we obtain that further
\begin{equation}\label{eq:dhat:star}
\mathbb{F}_{h}^{-1} D_{h}^T \mathbb{F}_{h} = \widehat{D}_{h}^*.
\end{equation}
With $\widehat{D}_{h}^*$ we denote the adjoint (the conjugate transpose) of the matrix $\widehat{D}_{h}$. Note that $E_h=\hn^2 \underline{\bf{1}}_h \underline{\bf{1}}_h^T$
is a circulant matrix with rank $1$. The only non-zero eigenvalue is $\hn$, with corresponding eigenvector
$\underline{\bf{1}}_h = \underline{f}_{h, 0}$. So, we obtain
\begin{equation}\label{eq:ehat}
\mathbb{F}_{h}^{-1} E_h \mathbb{F}_{h} = \widehat{E}_h
\end{equation}
where $\widehat{E}_h:=\textnormal{diag}(\widehat{e}_{h}^{(0)},\ldots,\widehat{e}_{h}^{(\nh-1)})$ with
\begin{equation}\label{eq:ehatcoef}
\widehat{e}_{h}^{(j)}:=\left\{\begin{array}{ll}\hn &\mbox{ for } j=0\\0&\mbox{ otherwise.}\end{array}\right.
\end{equation}
So, we can determine $\widehat{K}_{p,h}$, the symbol of the stiffness matrix. Using~\eqref{eq:k:decomp},
\eqref{eq:mhat}, \eqref{eq:dhat}, \eqref{eq:dhat:star} and~\eqref{eq:ehat}, we obtain that
\begin{equation}\label{eq:khat}
\mathbb{F}_{h}^{-1} K_{p,h} \mathbb{F}_{h}= \widehat{K}_{p,h},
\end{equation}
where $\widehat{K}_{p,h}:=\textnormal{diag}(\widehat{k}_{p,h}^{(0)},\ldots,\widehat{k}_{p,h}^{(\nh-1)})$ with
\begin{equation}\label{eq:khatcoef}
\widehat{k}_{p,h}^{(j)} := \widehat{d}_{h}^{(j)}\widehat{m}_{p-1,h}^{(j)} (\widehat{d}_{h}^{(j)})^*+\widehat{e}_{h}^{(j)}.
\end{equation}
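The decomposition of the symbol of the stiffness matrix can be verified numerically. The following sketch (an illustration only, assuming the normalization $\nh h=1$ and the degree-$1$ mass stencil $\hn\,(1/6,2/3,1/6)$) builds the stiffness matrix for $p=2$ via~\eqref{eq:k:decomp} and checks that the Fourier matrix diagonalizes it with the entries~\eqref{eq:khatcoef}:

```python
import numpy as np
from scipy.linalg import circulant

n = 8
h = 1.0 / n                                # normalization n*h = 1 assumed for this check
m = h * np.array([2/3, 1/6, 0, 0, 0, 0, 0, 1/6])   # degree-1 mass stencil (p - 1 = 1)
d = np.zeros(n)
d[0], d[-1] = 1/h, -1/h                    # gradient stencil: d^{(0)} = 1/h, d^{(-1)} = -1/h
M, D = circulant(m), circulant(d)
E = h**2 * np.ones((n, n))                 # rank-one matrix h^2 * ones * ones^T
K = D @ M @ D.T + E                        # stiffness matrix via the decomposition
F = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)  # Fourier matrix
Khat = np.linalg.solve(F, K @ F)
dhat = (1 - np.exp(2j * np.pi * np.arange(n) / n)) / h
ehat = np.zeros(n)
ehat[0] = h
pred = dhat * np.fft.fft(m).real * dhat.conj() + ehat   # claimed symbol of K
assert np.allclose(Khat - np.diag(pred), 0)             # K is diagonalized as claimed
```

The off-diagonal part of $\mathbb{F}_h^{-1}K_{p,h}\mathbb{F}_h$ vanishes to machine precision, and the diagonal matches $\widehat{d}\,\widehat{m}_{p-1}\,\widehat{d}^*+\widehat{e}$.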
\subsection{Symbol of the intergrid transfer}\label{subsec:symbols2}
The following lemma characterizes the symbol of the intergrid transfer.
\begin{lemma}\label{lem:phat}
We have
\begin{equation}\label{eq:phat}
\mathbb{F}_{\frac{\hn}{2}}^{-1} P_{p,\frac{\hn}{2}}\mathbb{F}_{\hn} = \widehat{P}_{p,\frac{\hn}{2}},
\end{equation}
where $\widehat{P}_{p,\frac{\hn}{2}}:=(\widehat{p}_{p,\frac{\hn}{2}}^{(i,j)})_{i=0,\ldots,2\nh-1}^{j=0,\ldots,\nh-1}$ with
\begin{equation}\label{eq:phatcoef}
\widehat{p}_{p,\frac{\hn}{2}}^{(i,j)} :=
2^{-p-1}\left\{
\begin{array}{ll}
\left(1+\textnormal{\bf e}^{- 2i\frac{h}{2}\pi\textnormal{\bf i}} \right)^{p+1}
& \mbox{ for } i - j \in \{0,\nh\}\\
0
& \mbox{ otherwise }
\end{array}
\right.
\end{equation}
for all $i=0,\ldots,2\nh-1$ and all $j=0,\ldots,\nh-1$.
\end{lemma}
\begin{proof}
The equation~\eqref{eq:phat} is equivalent to $P_{\frac{\hn}{2}}\mathbb{F}_{\hn} = \mathbb{F}_{\frac{\hn}{2}} \widehat{P}_{\frac{\hn}{2}}$.
We obtain using~\eqref{eq:intergrid:per} and the definition of $\mathbb{F}_{\hn}$ for any unit vector
$\underline{\textbf{I}}_{\hn}^{(j)}$ with $j=0,\ldots,\nh-1$ that
\begin{align*}
&P_{\frac{\hn}{2}}\mathbb{F}_{\hn}\underline{\textbf{I}}_{\hn}^{(j)}
= P_{\frac{\hn}{2}}\underline{f}_{\hn,j}
= 2^{-p} \left( \sum_{r\in \mathbb{Z} } \left(\begin{array}{c}p+1\\ i-2r \end{array}\right)
\textnormal{\bf e}^{ 2jr h \pi \textnormal{\bf i}} \right)_{i=0}^{2\nh-1}.
\end{align*}
Because $\tfrac12(1+\textnormal{\bf e}^{ t \pi \textnormal{\bf i}})$ takes the value $0$ for odd $t$ and $1$ for even $t$, we
can replace $2r$ by $t$, extend the sum over all integers $t$, and obtain
\begin{align*}
&P_{\frac{\hn}{2}}\mathbb{F}_{\hn}\underline{\textbf{I}}_{\hn}^{(j)}
= 2^{-p-1} \left( \sum_{t\in \mathbb{Z} } \left(\begin{array}{c}p+1\\ i-t \end{array}\right)
\textnormal{\bf e}^{ 2 jt \frac{h}{2} \pi \textnormal{\bf i}} ( 1+\textnormal{\bf e}^{ t \pi \textnormal{\bf i}})\right)_{i=0}^{2\nh-1} \\
&\quad = 2^{-p-1} \left( \sum_{k\in \mathbb{Z} } \left(\begin{array}{c}p+1\\ k \end{array}\right)
\textnormal{\bf e}^{ 2 j(i-k) \frac{h}{2} \pi \textnormal{\bf i}} ( 1+\textnormal{\bf e}^{ (i-k) \pi \textnormal{\bf i}})\right)_{i=0}^{2\nh-1} \\
&\quad = 2^{-p-1} \sum_{k\in \mathbb{Z} } \left(\begin{array}{c}p+1\\k\end{array}\right)
\left( \textnormal{\bf e}^{- 2 jk \frac{h}{2} \pi\textnormal{\bf i}}
\underline{f}_{\frac{\hn}{2},j}
+ \textnormal{\bf e}^{-2(j+\nh)k\frac{h}{2} \pi\textnormal{\bf i}}
\underline{f}_{\frac{\hn}{2},j+\nh} \right) \\
&\quad = 2^{-p-1}\left(1+\textnormal{\bf e}^{- 2j\frac{h}{2}\pi\textnormal{\bf i}} \right)^{p+1} \underline{f}_{\frac{\hn}{2},j}
+ 2^{-p-1}\left(1+\textnormal{\bf e}^{-2(j+\nh)\frac{h}{2}\pi\textnormal{\bf i}} \right)^{p+1} \underline{f}_{\frac{\hn}{2},j+\nh}.
\end{align*}
This shows that the $j$-th column of $P_{\frac{\hn}{2}}\mathbb{F}_{\hn}$ is just a combination
of two columns of $\mathbb{F}_{\frac{\hn}{2}}$. Therefore, the matrix
$\widehat{P}_{\frac{\hn}{2}}$ has just two non-zero
entries in the $j$-th column: those which we have claimed in~\eqref{eq:phatcoef}.
\qed\end{proof}
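The structure claimed in Lemma~\ref{lem:phat} is easy to confirm numerically. The following sketch (an illustration only, assuming the normalization $\nh h=1$) assembles the prolongation matrix from the binomial weights in~\eqref{eq:intergrid:per} and checks that each column of $\mathbb{F}_{\frac{\hn}{2}}^{-1} P_{p,\frac{\hn}{2}}\mathbb{F}_{\hn}$ has exactly the two non-zero entries of~\eqref{eq:phatcoef}:

```python
import numpy as np
from math import comb

p, n = 2, 8                                     # degree p, n coarse-grid basis functions
P = np.zeros((2 * n, n))                        # prolongation from eq. (intergrid:per)
for col in range(n):
    for l in range(p + 2):
        P[(2 * col + l) % (2 * n), col] += 2.0**(-p) * comb(p + 1, l)
F = lambda m: np.exp(2j * np.pi * np.outer(np.arange(m), np.arange(m)) / m)
Phat = np.linalg.solve(F(2 * n), P @ F(n))
for k in range(n):
    v = Phat[:, k].copy()
    # column k is supported on rows k and k + n, with the claimed binomial symbol
    for r in (k, k + n):
        claimed = 2.0**(-p - 1) * (1 + np.exp(-2 * np.pi * 1j * r / (2 * n)))**(p + 1)
        assert abs(v[r] - claimed) < 1e-10
        v[r] = 0
    assert np.abs(v).max() < 1e-10
```

In particular, for $k=0$ the two entries are $1$ and $0$, reflecting that the constant function is reproduced exactly by the prolongation.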
For determining the symbol of $P_{p,\frac{h}{2}}^T$, we argue as follows.
As the Fourier modes $\underline{f}_{\hn,j}$ are pairwise orthogonal and $\underline{f}_{\hn,j}^*\underline{f}_{\hn,j} = \nh$, we immediately
obtain $\mathbb{F}_{\hn}^*\mathbb{F}_{\hn} = \nh I$ and, consequently, $\mathbb{F}_{\hn}^{-1} = \tfrac{1}{\nh} \mathbb{F}_{\hn}^*$.
So, we obtain using~\eqref{eq:phat} that
\begin{equation}\label{eq:phattranspose}
\mathbb{F}_{h}^{-1} P_{p,\frac{h}{2}}^T \mathbb{F}_{\frac{h}{2}}
= ( \mathbb{F}_{\frac{h}{2}}^* P_{p,\frac{h}{2}} \mathbb{F}_{h}^{-*} )^*
= ( 2 \mathbb{F}_{\frac{h}{2}}^{-1} P_{p,\frac{h}{2}} \mathbb{F}_{h} )^*
= 2 \widehat{P}_{p,\frac{h}{2}}^*.
\end{equation}
\subsection{Some statements on the symbol of the mass matrix}
A closed form for the symbol of the mass matrix is not known.
Within this subsection we will show a few statements characterizing the symbol,
which we will need later on.
Due to \cite{Chui:1992,Wang:2010}, we have
\begin{equation}\label{eq:our:mass:formula}
m_{p,\hn}^{(j)} = \hn \frac{ E(2 p + 1, p + j)}{(2 p + 1)!},
\end{equation}
where $j\in\{-p,\ldots,p\}$. Here, $E(n,k)$ are the Eulerian numbers, which satisfy the recurrence relation
\begin{equation*}
E(n, k) = (n - k) E(n - 1, k - 1) + (k + 1) E(n - 1, k)
\end{equation*}
and the initial condition
\begin{equation*}
E(0, j) =\left\{
\begin{array}{ll}
1 & \mbox{for } j= 0\\
0 & \mbox{for } j\not= 0
\end{array}
\right..
\end{equation*}
A similar result was also stated in~\cite{Garoni:2014}. There, the entries of the mass
matrix, i.e., the $L^2$-products of two B-splines of order $p$ have been shown to be
equal to the function value of one B-spline of order $p+1$. Using the recurrence
relation~\eqref{eq:recur:bspline}, one obtains that the result in~\cite{Garoni:2014}
is equivalent to~\eqref{eq:our:mass:formula}.
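Formula~\eqref{eq:our:mass:formula} and the recurrence for the Eulerian numbers are straightforward to evaluate; the following sketch (an illustration only, normalized to $h=1$) reproduces the classical stencil for $p=1$ and the row-sum identity $\sum_j m_{p,h}^{(j)}=\hn$ implied by the partition of unity:

```python
from math import factorial

def eulerian(n, k):
    # E(n, k) via the recurrence E(n, k) = (n-k) E(n-1, k-1) + (k+1) E(n-1, k)
    if k < 0 or (n > 0 and k > n - 1) or (n == 0 and k != 0):
        return 0
    if n == 0:
        return 1
    return (n - k) * eulerian(n - 1, k - 1) + (k + 1) * eulerian(n - 1, k)

def mass_entry(p, j, h=1.0):
    # m_{p,h}^{(j)} = h E(2p+1, p+j) / (2p+1)!
    return h * eulerian(2 * p + 1, p + j) / factorial(2 * p + 1)

# p = 1 reproduces the classical linear-spline mass stencil h (1/6, 2/3, 1/6)
assert all(abs(mass_entry(1, j) - v) < 1e-15
           for j, v in zip((-1, 0, 1), (1/6, 2/3, 1/6)))
# for every p, the row sum equals h, since sum_k E(n, k) = n!
assert abs(sum(mass_entry(4, j) for j in range(-4, 5)) - 1.0) < 1e-14
```

For $p=2$ the stencil is $\hn\,(1,26,66,26,1)/120$, which is the quadratic-spline mass stencil known from the literature.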
As $m_{p,\hn}^{(j)}=m_{p,\hn}^{(-j)}$ and $\textnormal{\bf e}^{\theta \textnormal{\bf i}} +\textnormal{\bf e}^{-\theta \textnormal{\bf i}} = 2 \cos\theta$,
we obtain
\begin{equation}\nonumber
\widehat{m}_{p,\hn}^{(j)} = \hn \sum_{l=-p}^p\frac{ E(2 p + 1, p + l)}{(2p + 1)!}\cos(2ljh\pi).
\end{equation}
The symbol is better characterized by the following lemma.
\begin{lemma}\label{lem:mass:estim}
The following two statements hold:
\begin{itemize}
\item $\widehat{m}_{p,\hn}^{(j)}> 0$ for all $j=0,\ldots,\nh-1$ and
\item $\widehat{m}_{p,\hn}^{(j)}\le \widehat{m}_{p,\hn}^{(k)}$ for all $j,k=0,\ldots,\nh-1$ with
$\cos(2jh\pi) \le \cos(2kh\pi)$.
\end{itemize}
\end{lemma}
\begin{proof}
For $c\in[0,2]$, we define
\begin{equation*}
g_{p}(c) := \sum_{l=-p}^p\frac{ E(2 p + 1, p + l)}{(2p + 1)!}\cos(l\arccos (c-1))
\end{equation*}
and observe $g_{p}(c) = \hn^{-1} \widehat{m}_{p,\hn}^{(\eta(c))}$, where $\eta(c):=\frac{1}{2h\pi}\arccos (c-1)$.
The statement of the lemma is now equivalent to the combination of the following two statements:
\begin{itemize}
\item $\hn^{-1}\widehat{m}_{p,\hn}^{(\eta(0))}=g_{p}(0)>0$ and
\item $\hn^{-1}\widehat{m}_{p,\hn}^{(\eta(c))}=g_{p}(c)$ is monotonically increasing for $c>0$.
\end{itemize}
Since we can express $\cos(l\arccos (c-1))$ as the $l$-th Chebyshev polynomial,
$g_{p}$ is a polynomial function in $c$. Using the recurrence relation for the Eulerian numbers, we can
derive the following recurrence formula for $g_{p}$:
\begin{align*}
g_{p}(c)=\frac{1+c p}{1+2 p} g_{p-1}(c)+\frac{(2-c) (1+c (2 p-1))}{p (1+2 p)} g_{p-1}'(c)+\frac{(c-2)^2 c}{p (1+2 p)} g_{p-1}''(c).
\end{align*}
We can make an ansatz
\begin{equation*}
g_{p}(c) = \sum_{j=0}^p a_{p,j} c^j,
\end{equation*}
where we use $0^0 = 1$, and derive the recurrence formula
\begin{equation*}
a_{p,j}=\underbr{\frac{(1-j+p)^2}{p+2 p^2}}{A_{p,j}:=} a_{p-1,j-1}
+\underbr{\frac{4j (p-j)+j+p}{p+2 p^2}}{B_{p,j}:=} a_{p-1,j}
+\underbr{\frac{2+6 j+4 j^2}{p+2 p^2}}{C_{p,j}:=} a_{p-1,j+1}
\end{equation*}
for the coefficients $a_{p,j}$.
For $p=1$, we obtain
\begin{equation*}
a_{1,j} = \left\{
\begin{array}{ll}
\tfrac13 & \mbox{ for $j\in\{0,1\}$ } \\
0 & \mbox{ otherwise.}
\end{array}
\right.
\end{equation*}
As $A_{p,j}>0$, $B_{p,j}>0$ and $C_{p,j}>0$ for $0\le j \le p$, one can show using
induction in $p$ that for all $p\ge 1$:
\begin{equation*}
\left\{
\begin{array}{ll}
a_{p,j} > 0 & \mbox{ for $j\in\{0,1,\ldots,p\}$ } \\
a_{p,j} = 0 & \mbox{ otherwise.}
\end{array}
\right.
\end{equation*}
This immediately implies that $g_{p}(0)>0$ and that
$g_{p}(c)$ is monotonically increasing for $c>0$, which concludes the proof.\qed
\end{proof}
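The recurrence for the coefficients $a_{p,j}$ can be evaluated directly; the following sketch (an illustration only) checks the positivity claimed in the proof and, as a consistency test, the identity $g_p(2)=\sum_{l}E(2p+1,p+l)/(2p+1)!=1$:

```python
import numpy as np

def coeffs(p):
    # coefficients a_{p,j} of g_p(c) = sum_j a_{p,j} c^j via the recurrence above
    a = np.zeros(p + 2)
    a[0] = a[1] = 1/3                        # initial values for p = 1
    for q in range(2, p + 1):
        b = np.zeros(p + 2)
        for j in range(q + 1):
            A = (1 - j + q)**2 / (q + 2 * q**2)
            B = (4 * j * (q - j) + j + q) / (q + 2 * q**2)
            C = (2 + 6 * j + 4 * j**2) / (q + 2 * q**2)
            b[j] = A * (a[j - 1] if j > 0 else 0) + B * a[j] + C * a[j + 1]
        a = b
    return a[:p + 1]

for p in range(1, 6):
    a = coeffs(p)
    assert np.all(a > 0)                     # g_p(0) > 0 and g_p increasing on (0, 2]
    # consistency: g_p(2) = sum_j a_{p,j} 2^j = 1 (Eulerian numbers sum to (2p+1)!)
    assert abs(np.polyval(a[::-1], 2.0) - 1.0) < 1e-12
```

Since all coefficients are strictly positive, $g_p$ is positive at $0$ and monotonically increasing on $(0,2]$, exactly as used in the proof.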
\subsection{An estimate for the projection operator}
Now, we are able to prove the following lemma.
\begin{lemma}\label{lem:lfa0}
The inequality~\eqref{eq:whattolfa4} holds.
\end{lemma}
\begin{proof}
The inequality~\eqref{eq:whattolfa4} is equivalent to
\begin{equation*}
\underbr{\hn^{-1} \|M_{p-1,\frac{h}{2}}^{1/2}(I-P_{p,\frac{h}{2}} K_{p,h}^{-1} P_{p,\frac{h}{2}}^T K_{p,\frac{h}{2}}) K_{p,\frac{h}{2}}^{-1/2}\|}{q:=}
\le \frac{1}{\sqrt{2}} .
\end{equation*}
Using Galerkin orthogonality, we obtain $K_{p,h} = P_{p,\frac{h}{2}}^T K_{p,\frac{h}{2}}P_{p,\frac{h}{2}}$. Note that
$\mathcal{H}:=I-P_{p,\frac{h}{2}} K_{p,h}^{-1} P_{p,\frac{h}{2}}^T K_{p,\frac{h}{2}}$ is a projection operator, so
$\mathcal{H}\mathcal{H} = \mathcal{H}$. Moreover, observe that $\mathcal{H} K_{p,\frac{h}{2}}^{-1} = K_{p,\frac{h}{2}}^{-1}\mathcal{H}^T$.
Using these identities and $\|W\|^2 = \rho(WW^T)$, where $\rho$ denotes the spectral radius,
we obtain
\begin{align*}\nonumber
q^2 &= \hn^{-2}\rho( M_{p-1,\frac{h}{2}}^{1/2}\mathcal{H} K_{p,\frac{h}{2}}^{-1}\mathcal{H}^T M_{p-1,\frac{h}{2}}^{1/2})
= \hn^{-2}\rho( K_{p,\frac{h}{2}}^{-1}M_{p-1,\frac{h}{2}}\mathcal{H} )
\\&=
\hn^{-2}\rho(
K_{p,\frac{h}{2}}^{-1}M_{p-1,\frac{h}{2}} (I-P_{p,\frac{h}{2}} (P_{p,\frac{h}{2}}^TK_{p,\frac{h}{2}}P_{p,\frac{h}{2}})^{-1} P_{p,\frac{h}{2}}^T K_{p,\frac{h}{2}})
).
\end{align*}
Using~\eqref{eq:mhat}, \eqref{eq:dhat}, \eqref{eq:dhat:star}, \eqref{eq:khat}, \eqref{eq:phat} and~\eqref{eq:phattranspose}, we obtain further
\begin{align*}\nonumber
q^2 = \hn^{-2}\rho(
\underbr{\widehat{K}_{p,\frac{h}{2}}^{-1}\widehat{M}_{p-1,\frac{h}{2}}
(I-2 \widehat{P}_{p,\frac{h}{2}} (2\widehat{P}_{p,\frac{h}{2}}^*\widehat{K}_{p,\frac{h}{2}}\widehat{P}_{p,\frac{h}{2}})^{-1}
\widehat{P}_{p,\frac{h}{2}}^* \widehat{K}_{p,\frac{h}{2}})
}{ \widehat{T}_{p,\frac{h}{2}}:=}
).
\end{align*}
Lemma~\ref{lem:mass:estim} states that all diagonal entries of $\widehat{M}_{p-1,\frac{h}{2}}$ are non-zero.
It is straight-forward to see that also the diagonal entries of
$\widehat{K}_{p,\frac{h}{2}}$ and $\widehat{K}_{p,h}=\widehat{P}_{p,\frac{h}{2}}^*\widehat{K}_{p,\frac{h}{2}}\widehat{P}_{p,\frac{h}{2}}$
are non-zero. So, $\widehat{T}_{p,\frac{h}{2}}$ is well-defined.
Recall that
Lemma~\ref{lem:phat} states that the matrix $\widehat{P}_{p,\frac{h}{2}}=(\widehat{p}_{p,\frac{h}{2}}^{(i,j)})_{i=0,\ldots,2\nh-1}^{j=0,\ldots,\nh-1}$
has a block-structure, given by
\begin{equation*}
\widehat{p}_{p,\frac{h}{2}}^{(i,j)} = 0 \mbox{ for all } i-j \not\in\{0,\nh\}.
\end{equation*}
Therefore and because the matrices $\widehat{M}_{p-1,\frac{h}{2}}$ and
$\widehat{K}_{p,\frac{h}{2}}$ are diagonal,
the matrix $\widehat{T}_{p,\frac{h}{2}} = (\widehat{t}_{p,\frac{h}{2}}^{(i,j)})_{i=0,\ldots,2\nh-1}^{j=0,\ldots,2\nh-1}$
has a block-structure, given by
\begin{equation*}
\widehat{t}_{p,\frac{h}{2}}^{(i,j)} = 0 \mbox{ for all } i-j \not\in\{-\nh,0,\nh\}.
\end{equation*}
By reordering the coefficients of the matrix $\widehat{T}_{p,\frac{h}{2}}$, we obtain a block-diagonal matrix with
blocks
\begin{equation*}
\mathcal{T}_{p,\frac{h}{2}}^{(l)} = \left(\begin{array}{cc}
\widehat{t}_{p,\frac{h}{2}}^{(l,l)} & \widehat{t}_{p,\frac{h}{2}}^{(l,l+\nh)} \\
\widehat{t}_{p,\frac{h}{2}}^{(l+\nh,l)} & \widehat{t}_{p,\frac{h}{2}}^{(l+\nh,l+\nh)}
\end{array}\right).
\end{equation*}
As this block-diagonal matrix results from $\widehat{T}_{p,\frac{h}{2}}$ by a permutation (similarity) transformation, it has the same
spectrum; since the spectral radius of a block-diagonal matrix is just the maximum over the spectral radii of the blocks, we obtain
\begin{equation*}
q^2 = \rho(\widehat{T}_{p,\frac{h}{2}}) = \max_{l=0,\ldots,\nh-1} \rho( \mathcal{T}_{p,\frac{h}{2}}^{(l)} ).
\end{equation*}
So, in the following, we derive the spectral radius of $\mathcal{T}_{p,\frac{h}{2}}^{(l)}$ for any particular $l$. Straight-forward
computation yields that for $l\in\{0,\ldots,\nh-1\}$, $i\in\{l,l+\nh\}$ and $j\in\{l,l+\nh\}$, we have
\begin{equation}\label{eq:xxxy}
\widehat{t}_{p,\frac{h}{2}}^{(i,j)} =
\frac{\widehat{m}_{p-1,\frac{h}{2}}^{(i)}}{\widehat{k}_{p,\frac{h}{2}}^{(i)}}
\left (\delta_{i,j} - \frac{\widehat{p}_{p,\frac{h}{2}}^{(i,l)}(\widehat{p}_{p,\frac{h}{2}}^{(j,l)})^* }{\sum_{r=0}^1
%
(\widehat{p}_{p,\frac{h}{2}}^{(l+r\nh,l)})^* \widehat{k}_{p,\frac{h}{2}}^{(l+r\nh)} \widehat{p}_{p,\frac{h}{2}}^{(l+r\nh,l)}
%
} \widehat{k}_{p,\frac{h}{2}}^{(j)} \right),
\end{equation}
where $\delta_{i,j}$ is the Kronecker-delta, i.e., $\delta_{i,j}=1$ for $i=j$ and $\delta_{i,j}=0$ for $i\not=j$.
Now, consider \emph{case A}: $l\in\{1,\ldots,\nh-1\}$. Here, we plug the values of $\widehat{k}_{p,\frac{h}{2}}^{(j)}$,
$\widehat{d}_{\frac{h}{2}}^{(j)}$, $\widehat{e}_{\frac{h}{2}}^{(j)}$ (which takes the value $0$ for $j\in\{l,l+\nh\}$)
and $\widehat{p}_{p,\frac{h}{2}}^{(i,j)}$, as given by \eqref{eq:khatcoef}, \eqref{eq:dhatcoef},
\eqref{eq:ehatcoef} and~\eqref{eq:phatcoef}, into~\eqref{eq:xxxy} and replace $\widehat{m}_{p-1,\frac{h}{2}}^{(l+\nh)}$ by
$\xi \widehat{m}_{p-1,\frac{h}{2}}^{(l)}$. Doing so, the term $\widehat{m}_{p-1,\frac{h}{2}}^{(l)}$ cancels out
and we obtain by a straightforward computation
\begin{equation}\nonumber
\mathcal{T}_{p,\frac{h}{2}}^{(l)} =
\frac{1}{\delta}
\left(
\begin{array}{c}
-z(1 - z)^{p-3} \xi
\\ z(1 + z)^{p-3}
\end{array}
\right)
\left(
\begin{array}{c}
(-1)^{p} (1 - z)^{p+1} \\ (1 + z)^{p+1}
\end{array}
\right)^T,
\end{equation}
where $\delta:=(1 + z)^{2 p} + (-1)^p (1 - z)^{2 p} \xi$
and $z:=\textnormal{\bf e}^{ 2 l \frac{h}{2} \pi\textnormal{\bf i}}$. Note that these computations are valid since, in case~A, none of the symbols involved (except
$\widehat{e}_{\frac{h}{2}}^{(j)}$) takes the value $0$; moreover, $z\not\in\{-1,1\}$ holds in case~A.
Observe that $\mathcal{T}_{p,\frac{h}{2}}^{(l)}$ has rank $1$, so its only nonzero eigenvalue is its trace.
As the trace turns out to be real and nonnegative, the spectral radius equals the trace, and a
straightforward computation yields that
\begin{align*}
\rho( \mathcal{T}_{p,\frac{h}{2}}^{(l)} ) &=
\frac{
z (1+z)^{2p-2} - (-1)^p z (1-z)^{2p-2} \xi
}{
(1+z)^{2p} + (-1)^p (1-z)^{2p} \xi
} \\
&=
\frac{
z^{-p+1} (1+2z+z^2)^{p-1} - (-1)^p z^{-p+1} (1-2z+z^2)^{p-1} \xi
}{
z^{-p} (1+2z+z^2)^{p} + (-1)^p z^{-p} (1-2z+z^2)^{p} \xi
} \\
&=
\frac{
(z^{-1}+2+z)^{p-1} - (-1)^p (z^{-1}-2+z)^{p-1} \xi
}{
(z^{-1}+2+z)^{p} + (-1)^p (z^{-1}-2+z)^{p} \xi
} \\
&=
\frac{
(2+2 c)^{p-1} - (-1)^p (-2+2 c)^{p-1} \xi
}{
(2+2 c)^{p} + (-1)^p (-2+2 c)^{p} \xi
}
=
\underbr{\frac{
(1+ c)^{p-1} + (1- c)^{p-1} \xi
}{
2 ( (1+ c)^{p} + (1- c)^{p} \xi )
}}{\Psi_p(c,\xi):=}
\end{align*}
holds, where $c:=\cos(2 l \frac{h}{2} \pi)$ and, as defined above,
$\xi=\widehat{m}_{p-1,\frac{h}{2}}^{(l+\nh)}/\widehat{m}_{p-1,\frac{h}{2}}^{(l)}$. Note
that $c\in(-1,1)$ holds as we have restricted ourselves to $l\in\{1,\ldots,\nh-1\}$.
Observe that Lemma~\ref{lem:mass:estim} implies that $\xi>0$. Now, consider two cases:
\begin{itemize}
\item If $c=\cos(2l\frac{h}{2}\pi)> 0$, then
$\cos(2(l+\nh)\frac{h}{2}\pi)=\cos(2l\frac{h}{2}\pi+\pi)\le 0$.
For this case Lemma~\ref{lem:mass:estim} states that $\widehat{m}_{p-1,\frac{h}{2}}^{(l+\nh)}\le\widehat{m}_{p-1,\frac{h}{2}}^{(l)}$,
so $\xi\le 1$ holds.
\item Analogously, $\xi\ge 1$ holds if $c\le0$.
\end{itemize}
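The algebraic simplification above can be cross-checked numerically. The following sketch (the function names \texttt{rho\_z} and \texttt{psi} are ad-hoc choices) compares the trace in its $z$-form with the closed form $\Psi_p(c,\xi)$:

```python
import numpy as np

def rho_z(z, xi, p):
    # trace of the rank-1 block, expressed in z = exp(i*theta)
    num = z * (1 + z)**(2*p - 2) - (-1)**p * z * (1 - z)**(2*p - 2) * xi
    den = (1 + z)**(2*p) + (-1)**p * (1 - z)**(2*p) * xi
    return num / den

def psi(c, xi, p):
    # the closed form Psi_p(c, xi) derived above
    return ((1 + c)**(p - 1) + (1 - c)**(p - 1) * xi) / (2 * ((1 + c)**p + (1 - c)**p * xi))

for p in range(1, 8):
    for theta in np.linspace(0.1, np.pi - 0.1, 13):
        for xi in (0.3, 1.0, 2.5):
            z = np.exp(1j * theta)
            assert abs(rho_z(z, xi, p) - psi(np.cos(theta), xi, p)) < 1e-10
```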
To finalize the proof of case~A, we need to show
\begin{equation*}
\Psi_p\left(\cos\left(2l\frac{h}{2}\pi \right),\frac{\widehat{m}_{p-1,\frac{h}{2}}^{(l+\nh)}}{\widehat{m}_{p-1,\frac{h}{2}}^{(l)}} \right) \le \frac{1}{2}
\end{equation*}
for all $l=1,\ldots,\nh-1$. It suffices to show
\begin{equation}\label{eq:cond1}
\Psi_p(c,\xi) \le \frac{1}{2}
\end{equation}
for all $(c,\xi) \in \big([0,1)\times(0,1]\big)\cup \big((-1,0] \times[1,\infty)\big)$ and all $p \in \mathbb{N}$, i.e., to show the
inequality for the whole range of $c$ and $\xi$, ignoring their dependence on $l$.
As a next step, we observe that $\Psi_p(c,\xi)= \Psi_p(-c,\xi^{-1})$, so it suffices to
show~\eqref{eq:cond1}
for all $(c,\xi) \in [0,1)\times(0,1]$ and all $p \in \mathbb{N}$. We observe that
\begin{equation}\nonumber
\Psi_p(c,\xi) = \frac{
1 + \left(\frac{1-c}{1+ c}\right)^{p-1} \xi
}{
2 \left( (1+ c) + (1-c)\left(\frac{1-c}{1+ c}\right)^{p-1} \xi \right)
}
\end{equation}
and
$ \omega :=\left(\frac{1-c}{1+c}\right)^{p-1} \in [0,1]$ for $c\in[0,1]$.
So, it suffices to show that
\begin{equation}\label{eq:cond2}
\frac{
1 + \omega \xi
}{
2 ( (1+ c) + (1-c) \omega \xi )
} \le \frac12
\end{equation}
for all $(c,\xi,\omega)\in [0,1)\times(0,1]\times[0,1]$ and all $p \in \mathbb{N}$, again
ignoring the dependence of $\omega$ on $p$ and $c$.
As the denominator is always positive,~\eqref{eq:cond2} is equivalent to
\begin{equation*}
1+ \omega \xi \le 1+\omega\xi + c (1- \omega \xi),
\end{equation*}
which is obviously true for all $(c,\xi,\omega)\in [0,1)\times(0,1]\times[0,1]$.
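This chain of reductions can be sanity-checked numerically. The sketch below (grids and tolerances are arbitrary choices) evaluates $\Psi_p$ on the reduced quadrant and also verifies the symmetry $\Psi_p(c,\xi)=\Psi_p(-c,\xi^{-1})$:

```python
import numpy as np

def psi(c, xi, p):
    # Psi_p(c, xi) as derived in case A
    return ((1 + c)**(p - 1) + (1 - c)**(p - 1) * xi) / (2 * ((1 + c)**p + (1 - c)**p * xi))

cs  = np.linspace(0.0, 0.99, 100)     # c in [0, 1)
xis = np.linspace(0.01, 1.0, 100)     # xi in (0, 1]
C, X = np.meshgrid(cs, xis)
for p in range(1, 10):
    vals = psi(C, X, p)
    assert vals.max() <= 0.5 + 1e-12          # the bound Psi_p <= 1/2
    # the symmetry that reduces the remaining quadrant to this one
    assert np.allclose(psi(C, X, p), psi(-C, 1.0 / X, p))
```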
Now, we consider \emph{case B}: $l = 0$. Here, we have to use that
$\widehat{e}_{\frac{h}{2}}^{(0)}\not=0$ and obtain -- by a straightforward computation -- that
\begin{equation*}
\mathcal{T}_{p,\frac{h}{2}}^{(0)} = \left(
\begin{array}{cc} 0&0\\0&\tfrac14\end{array}
\right)
\end{equation*}
and consequently
$\rho( \mathcal{T}_{p,\frac{h}{2}}^{(0)} ) = \tfrac14$. This, too, is bounded from above by $\tfrac12$,
which finishes the proof.
\qed
\end{proof}
\subsection{The approximation error estimate}
Now, we are able to show the approximation error estimate~\eqref{eq:whattolfa}.
\begin{lemma}\label{lem:lfa}
The inequality~\eqref{eq:whattolfa}, i.e.,
\begin{equation}\nonumber
\|(I- \widehat{\Pi}_{p,\hn} ) u_{p,\frac{h}{2}}\|_{L^2(-1,1)} \le \frac{1}{\sqrt{2}}\; \hn |u_{p,\frac{h}{2}}|_{H^1(-1,1)},
\end{equation}
holds for all $u_{p,\frac{h}{2}}\in \widehat{S}_{p,\frac{h}{2}}(-1,1)$.
\end{lemma}
\begin{proof}
Lemma~\ref{lemma:decomp} states that~\eqref{eq:whattolfa} is a consequence of
\eqref{eq:whattolfa3} and \eqref{eq:whattolfa4}. As
Lemma~\ref{lem:mass} shows~\eqref{eq:whattolfa3} and
Lemma~\ref{lem:lfa0} shows~\eqref{eq:whattolfa4}, this finishes the proof.\qed
\end{proof}
\section{The proof of Theorem~\citeThrmApprox{}}\label{sec:thrm1}
In the previous section, we have proven an approximation error estimate between two consecutive
grids for functions that are themselves splines on the finer grid. Using a telescoping argument,
we can extend this result to an approximation error estimate for general functions. As in the last
section, we first consider the periodic case.
\begin{lemma}\label{lem:approx:per}
For all $u \in \widehat{H}^{1}(-1,1)$, all grid sizes $h$ and each $p\in \mathbb{N}$, with $hp<1$,
\begin{equation*}
\|(I-\widehat{\Pi}_{p,h})u\|_{L^2(-1,1)} \le \sqrt{2}\; \hn |u|_{H^1(-1,1)}
\end{equation*}
is satisfied, where $\widehat{\Pi}_{p,h}$ is given as in Definition~\ref{def:H1projection}.
\end{lemma}
\begin{proof}
Using a telescoping argument, i.e., iteratively applying the triangle inequality, and the relation $\widehat{\Pi}_{p,2h} \widehat{\Pi}_{p,h} = \widehat{\Pi}_{p,2h}$ for the projectors, we obtain for any $q\in\mathbb{N}$
\begin{align*}
\|(I-\widehat{\Pi}_{p,h})u\|_{L^2(-1,1)} & \le \|(I-\widehat{\Pi}_{p,2^{-q}h})u \|_{L^2(-1,1)} \\
&\qquad+ \sum_{l=0}^{q-1} \|(I-\widehat{\Pi}_{p,2^{-l}h} )\widehat{\Pi}_{p,2^{-l-1}h} u \|_{L^2(-1,1)}.
\end{align*}
We use Lemma~\ref{lem:non:robust} and a standard Aubin-Nitsche duality argument
to estimate $\|(I-\widehat{\Pi}_{p,2^{-q}h})u\|_{L^2(-1,1)}$ from above.
Using \cite{Braess:1997}, Lemma~7.6, and Lemma~\ref{lem:non:robust} for $r=1$ and $q=2$, we immediately
obtain
\begin{equation}\label{eq:aubin}
\|(I-\widehat{\Pi}_{p,2^{-q}h})u\|_{L^2(-1,1)} \le \widetilde{C}(p) 2^{-q} h \|u\|_{H^1(-1,1)},
\end{equation}
where $\widetilde{C}(p)$ is independent of the grid size.
Using~\eqref{eq:aubin} and Lemma~\ref{lem:lfa}, we obtain
\begin{align*}
\|(I-\widehat{\Pi}_{p,h})u \|_{L^2(-1,1)} &\le \widetilde{C}(p) \; 2^{-q}\hn \|u \|_{H^1(-1,1)} \\
&\qquad + \sum_{l=0}^{q-1} \frac{1}{\sqrt{2}} \;2^{-l} \hn |\widehat{\Pi}_{p,2^{-l-1}h} u|_{H^1(-1,1)}.
\end{align*}
Because $\widehat{\Pi}_{p,h}$ is $H^1$-orthogonal, we obtain
$|\widehat{\Pi}_{p,2^{-l-1}h} u|_{H^1(-1,1)} \leq |u |_{H^1(-1,1)}$ and further
\begin{equation*}
\|(I-\widehat{\Pi}_{p,h})u \|_{L^2(-1,1)} \le \widetilde{C}(p) \; 2^{-q} \hn \|u \|_{H^1(-1,1)} + \sum_{l=0}^{q-1}
\frac{1}{\sqrt{2}} \;2^{-l} \hn |u |_{H^1(-1,1)}.
\end{equation*}
The summation formula for the infinite geometric series gives
\begin{equation*}
\|(I-\widehat{\Pi}_{p,h})u \|_{L^2(-1,1)} \le \widetilde{C}(p) \; 2^{-q} \hn \|u \|_{H^1(-1,1)} +
2 \frac{1}{\sqrt{2}} \hn |u|_{H^1(-1,1)}.
\end{equation*}
As this is true for all $q\in \mathbb{N}$, we can take the limit $q\rightarrow \infty$ and obtain the desired
result.\qed
\end{proof}
Having this result, we note that
Theorem~\ref{thrm:approx} is just the extension of Lemma~\ref{lem:approx:per} to
the non-periodic case. So, we can easily prove Theorem~\ref{thrm:approx}.
\begin{proof}{\em of Theorem~\ref{thrm:approx}}
In the following, we assume without loss of generality that $\Omega=(0,1)$. The extension
to any other $\Omega=(a,b)$ follows using a standard scaling argument.
Observe that any $u\in H^{1}(0,1)$ can be extended to a $w\in \widehat{H}^1(-1,1)$
by defining $w(x):=u(|x|)$. The assumption $hp<1$ in Theorem~\ref{thrm:approx} guarantees that
Lemma~\ref{lem:approx:per} can be applied.
We set $w_{p,h}:= \widehat{\Pi}_{p,h} w \in \widehat{S}_{p,h}(-1,1)$ as in
Lemma~\ref{lem:approx:per}, such that
\begin{equation*}
\|w-w_{p,h}\|_{L^2(-1,1)} \le \sqrt{2}\; \hn |w|_{H^1(-1,1)}.
\end{equation*}
The function $w_{p,h}$ is symmetric, i.e., $w_{p,h}(x)=w_{p,h}(-x)$ holds. This can be seen by
the following argument: As $w$ satisfies $w(x)=w(-x)$, we have for $\widetilde{w}_{p,h}(x):=w_{p,h}(-x)$
\begin{equation*}
\|w-w_{p,h}\|_{L^2(-1,1)} = \|w-\widetilde{w}_{p,h}\|_{L^2(-1,1)}
\end{equation*}
and as $w_{p,h}$ is the unique minimizer, $w_{p,h}(x) = \widetilde{w}_{p,h}(x)=w_{p,h}(-x)$ holds.
By restricting $w_{p,h}$ to $(0,1)$, we obtain a function $u_{p,h}\in S_{p,h}(0,1)$.
This function satisfies the desired approximation error estimate since
$|w|_{H^1(-1,1)} = \sqrt{2}\, |u|_{H^1(0,1)}$ and
$\|w-w_{p,h}\|_{L^2(-1,1)} = \sqrt{2}\, \|u-u_{p,h}\|_{L^2(0,1)}$ hold due to
the symmetry of $w$.\qed
\end{proof}
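The estimate of Theorem~\ref{thrm:approx} can be illustrated numerically. The following sketch computes the $L^2$-projection of $u(x)=\sin(2\pi x)$ onto $S_{3,1/8}(0,1)$ with SciPy's B-spline machinery and checks the bound $\sqrt{2}\,h\,|u|_{H^1}$ (the test function, degree, grid and quadrature setup are illustrative choices, not part of the theorem; the $L^2$-projection realizes the infimum, so it must satisfy the bound):

```python
import numpy as np
from scipy.interpolate import BSpline

p, n = 3, 8                      # degree and number of elements on (0,1); h*p < 1
h = 1.0 / n
t = np.concatenate([np.zeros(p), np.linspace(0, 1, n + 1), np.ones(p)])  # open knot vector
dim = n + p                      # dim S_{p,h}(0,1)

def bas(i, x):
    c = np.zeros(dim); c[i] = 1.0
    return BSpline(t, c, p)(x)

# elementwise Gauss-Legendre quadrature, exact for products of splines
xg, wg = np.polynomial.legendre.leggauss(p + 2)
xs = np.concatenate([(xg + 1) / 2 * h + k * h for k in range(n)])
ws = np.concatenate([wg / 2 * h for _ in range(n)])

u = lambda x: np.sin(2 * np.pi * x)
B = np.array([bas(i, xs) for i in range(dim)])   # basis values at quadrature points
G = (B * ws) @ B.T                               # Gram (mass) matrix
coef = np.linalg.solve(G, (B * ws) @ u(xs))      # L2 projection of u

err = np.sqrt(np.sum(ws * (u(xs) - coef @ B) ** 2))
seminorm = np.sqrt(2) * np.pi                    # |sin(2 pi x)|_{H^1(0,1)}
assert err <= np.sqrt(2) * h * seminorm          # the bound of the theorem
```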
\section{Approximation error estimate for a reduced spline space}\label{sec:reduced}
In the proof of Theorem~\ref{thrm:approx} we have defined $u_{p,h}$ to be the restriction of a symmetric and
periodic spline $w_{p,h} \in \widehat{S}_{p,h}(-1,1)$ to $(0,1)$. So, we know more about $u_{p,h}$ than just $u_{p,h}\in S_{p,h}(0,1)$.
Throughout this section we again assume $hp<|\Omega|$.
As we have shown in the proof of Theorem~\ref{thrm:approx}, the spline~$w_{p,h}$ is symmetric,
i.e., $w_{p,h}(x)=w_{p,h}(-x)$, so we have
\begin{equation*}
\frac{\partial^{l}}{\partial x^{l}} w_{p,h}(x)= (-1)^l \frac{\partial^{l}}{\partial x^{l}} w_{p,h}(-x)
\mbox{ for all } l \in \mathbb{N}_0 .
\end{equation*}
By plugging~$x=0$ into this relation, we obtain that all odd derivatives vanish
at~$x=0$. By plugging~$x=1$ into the relation, we obtain together with~\eqref{eq:sym:cond}
that all odd derivatives vanish at~$x=1$ as well.
So, we have shown that the approximation error estimate~\eqref{eq:thrm:approx} is still satisfied
if we restrict the approximating spline $u_{p,h}$ to be in the space $\widetilde{S}_{p,h}(0,1)$, defined
as follows.
\begin{definition}\label{defi:Ssymm}
Given a spline space $S_{p,h}(\Omega)$ over $\Omega=(a,b)$, the \emph{space of splines
with vanishing odd derivatives} $\widetilde{S}_{p,h}(\Omega)$ is the space of all $u_{p,h}\in S_{p,h}(\Omega)$
that satisfy the following condition:
\begin{equation*}
\frac{\partial^{2l+1}}{\partial x^{2l+1}} u_{p,h}(a)=\frac{\partial^{2l+1}}{\partial x^{2l+1}} u_{p,h}(b) = 0
\mbox{ for all } l \in \mathbb{N}_0 \mbox{ with } 2l+1 < p.
\end{equation*}
\end{definition}
Using a standard scaling argument, we can again extend the result for $\Omega=(0,1)$
to any $\Omega=(a,b)$ and obtain the following corollary.
\begin{corollary}\label{cor:approx:nonper}
For all $u\in H^1(\Omega)$, all grid sizes $h$ and all $p\in\mathbb{N}$, with $hp<|\Omega|$,
there is a spline approximation $u_{p,h}\in \widetilde{S}_{p,h}(\Omega)$ such that
\begin{equation*}
\|u-u_{p,h}\|_{L^2(\Omega)} \le \sqrt{2}\; \hn |u|_{H^1(\Omega)}
\end{equation*}
is satisfied.
\end{corollary}
In the Appendix, we will introduce a basis for the space~$\widetilde{S}_{p,h}(\Omega)$. Based on the bases
of those spaces, we obtain that their dimensions are as given in Table~\ref{tab:dof}.
\begin{table}
\begin{center}
\begin{tabular}{cccc}
\hline\noalign{\smallskip}
& dim $S_{p,h}(0,1)$ & dim $\widehat{S}_{p,h}(0,1)$ & dim $\widetilde{S}_{p,h}(0,1)$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$p$ even & $n+p$ & $n$ & $n$ \\
$p$ odd & $n+p$ & $n$ & $n+1$ \\
\noalign{\smallskip}\hline
\end{tabular}
\caption{Degrees of freedom, where $n$ is the number of elements in $(0,1)$.}
\label{tab:dof}
\end{center}
\end{table}
\section{An inverse inequality for the reduced spline space}\label{sec:inverse}
For the space~$\widetilde{S}_{p,h}(\Omega)$, a robust inverse inequality holds. Note that an
extension to~$S_{p,h}(\Omega)$ is not possible (cf. Remark~\ref{rem:counterexample}).
\begin{theorem}\label{thrm:inverse}
For all grid sizes $h$ and each $p\in \mathbb{N}$,
\begin{equation}\label{eq:inv2}
|u_{p,h}|_{H^1(\Omega)} \le 2 \sqrt{3} \hn^{-1} \|u_{p,h}\|_{L^2(\Omega)}
\end{equation}
is satisfied for all $u_{p,h}\in \widetilde{S}_{p,h}(\Omega)$.
\end{theorem}
\begin{proof}
In the following, we assume without loss of generality that $\Omega=(0,1)$. The extension
to any other $\Omega=(a,b)$ follows directly using a standard scaling argument.
We can extend every $u_{p,h}\in\widetilde{S}_{p,h}(0,1)$ to $(-1,1)$ by defining $w_{p,h}(x):=u_{p,h}(|x|)$
and obtain $w_{p,h} \in \widehat{S}_{p,h}(-1,1)$. \eqref{eq:inv2} is equivalent to
\begin{equation}\label{eq:inv2aa}
|w_{p,h}|_{H^1(-1,1)} \le 2 \sqrt{3} \hn^{-1} \|w_{p,h}\|_{L^2(-1,1)}.
\end{equation}
This is shown by induction in $p$ for all $w_{p,h}\in \widehat{S}_{p,h}(-1,1)$.
For $p=1$, \eqref{eq:inv2aa} is known, cf.~\cite{schwab:1998}, Theorem~3.91.
Now, we show that the constant does not increase for larger $p$. So, assume a fixed $p>1$.
Due to the periodicity and due to the Cauchy-Schwarz inequality,
\begin{align*}
|w_{p,h}|_{H^1(-1,1)}^2 &= \int_{-1}^1 (w_{p,h}')^2 dx = -\int_{-1}^1 w_{p,h}'' w_{p,h} dx \\
&\le \|w_{p,h}''\|_{L^2(-1,1)} \|w_{p,h}\|_{L^2(-1,1)}= |w_{p,h}'|_{H^1(-1,1)} \|w_{p,h}\|_{L^2(-1,1)}
\end{align*}
is satisfied.
Using the induction assumption (and~$w_{p,h}'\in \widehat{S}_{p-1,h}(-1,1)$, cf. \cite{Schumaker:1981}, Theorem~5.9), we know that
\begin{equation*}
|w_{p,h}'|_{H^1(-1,1)} \le 2 \sqrt{3} \hn^{-1} \|w_{p,h}'\|_{L^2(-1,1)} = 2 \sqrt{3} \hn^{-1} |w_{p,h}|_{H^1(-1,1)}.
\end{equation*}
Combining these results, we obtain
\begin{equation*}
|w_{p,h}|_{H^1(-1,1)}^2 \le 2 \sqrt{3} \hn^{-1} |w_{p,h}|_{H^1(-1,1)}\|w_{p,h}\|_{L^2(-1,1)}
\end{equation*}
and further
\begin{equation*}
|w_{p,h}|_{H^1(-1,1)} \le 2 \sqrt{3} \hn^{-1} \|w_{p,h}\|_{L^2(-1,1)}.
\end{equation*}
This shows \eqref{eq:inv2aa}, which concludes the proof.\qed
\end{proof}
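For $p=1$ the condition in Definition~\ref{defi:Ssymm} is empty, so $\widetilde{S}_{1,h}(\Omega)=S_{1,h}(\Omega)$, and the base case of the induction can be verified directly: for piecewise linear splines both norms are available in closed form per element. The random nodal values below are an arbitrary test set:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
h = 1.0 / n
for _ in range(200):
    v = rng.normal(size=n + 1)                     # nodal values of a p=1 spline on (0,1)
    a, b = v[:-1], v[1:]                           # endpoint values per element
    h1 = np.sum((b - a) ** 2 / h)                  # |u|_{H^1}^2, exact per element
    l2 = np.sum(h * (a * a + a * b + b * b) / 3)   # ||u||_{L^2}^2, exact per element
    # the inverse inequality with constant 2*sqrt(3)/h
    assert np.sqrt(h1) <= 2 * np.sqrt(3) / h * np.sqrt(l2)
```

Per element, the ratio is maximized for $b=-a$, which yields exactly $12/h^2$; this is where the constant $2\sqrt{3}$ comes from.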
\begin{remark}
Neither Theorem~3.91 in~\cite{schwab:1998} nor any of the arguments in the proof
of Theorem~\ref{thrm:inverse} requires the grid to be equidistant. So, also on a
general grid, the estimate
\begin{equation}\nonumber
|u_{p,h}|_{H^1(\Omega)} \le 2 \sqrt{3} \; h_{\min}^{-1} \|u_{p,h}\|_{L^2(\Omega)}
\end{equation}
is satisfied for all splines $u_{p,h}$ on $\Omega=(a,b)$ with vanishing odd derivatives at
the boundary. Here, as in any standard inverse inequality, $h_{\min}$ is the size of
the \emph{smallest} element.
\end{remark}
As we have proven both an approximation error estimate and a corresponding inverse inequality,
we can conclude that both are sharp (up to constants independent of $p$ and $\hn$). First, we show
a lower bound for the approximation error. As~\eqref{eq:corr:sharp1} is trivially
satisfied by constant functions, we show that there also exist non-constant functions satisfying this inequality.
\begin{corollary}\label{corr:sharp1}
For all grid sizes $h$ and each $p\in\mathbb{N}$,
there is a non-constant function $u\in H^1(\Omega)$ such that
\begin{equation}\label{eq:corr:sharp1}
\inf_{u_{p,h}\in \widetilde{S}_{p,h}(\Omega)} \|u-u_{p,h}\|_{L^2(\Omega)} \ge \frac{1}{4\sqrt{3}}\; \hn |u|_{H^1(\Omega)}.
\end{equation}
\end{corollary}
\begin{proof}
Let $u\in \widetilde{S}_{p,\frac{\hn}{2}}(\Omega)\backslash\{0\}$ be such that $(u,\tilde{u}_{p,\hn})_{L^2(\Omega)}=0$ for
all $\tilde{u}_{p,\hn}\in S_{p,h}(\Omega)$. As the constant functions are in $S_{p,h}(\Omega)$, this
orthogonality implies that $u$ is non-constant.
Using this orthogonality, we know that the infimum in~\eqref{eq:corr:sharp1} is attained at $u_{p,h}=0$.
So, using Theorem~\ref{thrm:inverse} (on the grid of size $\frac{\hn}{2}$), we obtain $\inf_{u_{p,h}\in \widetilde{S}_{p,h}(\Omega)} \|u-u_{p,h}\|_{L^2(\Omega)}
= \|u\|_{L^2(\Omega)} \ge \frac{1}{2\sqrt{3}}\; \frac{\hn}{2} |u|_{H^1(\Omega)}$,
which finishes the proof.
\qed\end{proof}
Similarly, we can give a lower bound for the inverse inequality. Again, we show the existence of a non-trivial function.
\begin{corollary}\label{corr:sharp2}
For all grid sizes $h$ with $2hp<|\Omega|$ and each $p\in\mathbb{N}$,
there is a non-constant function $u_{p,h}\in \widetilde{S}_{p,h}(\Omega)$ such that
\begin{equation}\nonumber
|u_{p,h}|_{H^1(\Omega)} \ge \frac{1}{2\sqrt{2}}\; \hn^{-1} \|u_{p,h}\|_{L^2(\Omega)}.
\end{equation}
\end{corollary}
\begin{proof}
Let $u_{p,h}\in S_{p,h}(\Omega)\backslash\{0\}$ be such that $(u_{p,h},\tilde{u}_{p,2\hn})_{L^2(\Omega)}=0$ for
all $\tilde{u}_{p,2\hn}\in S_{p,2\hn}(\Omega)$. As the constant functions are in $S_{p,2\hn}(\Omega)$,
this orthogonality implies that $u_{p,h}$ is non-constant.
Using this orthogonality and Corollary~\ref{cor:approx:nonper}, we obtain
$\|u_{p,h}\|_{L^2(\Omega)} = \inf_{u_{p,2\hn}\in \widetilde{S}_{p,2\hn}(\Omega)} \|u_{p,h}-u_{p,2\hn}\|_{L^2(\Omega)} \le
\sqrt{2}\, (2\hn) |u_{p,h}|_{H^1(\Omega)}$, which finishes the proof.
\qed\end{proof}
\section{An extension to higher Sobolev indices}\label{sec:sobolev}
We can easily lift the statement of Theorem~\ref{thrm:approx}
(and also Corollary~\ref{cor:approx:nonper}) up to higher Sobolev indices.
\begin{theorem}\label{thrm:approx:sob}
For all grid sizes $h$, each $q \in \mathbb{N}$ and each $p\in \mathbb{N}$
with $0< q\le p+1$ and with $h(p-q+1)<|\Omega|$,
there is for each $u\in H^q(\Omega)$,
a spline approximation $u_{p,h}\in \widetilde{S}_{p,h}^{(q)}(\Omega)$ such that
\begin{equation*}
|u-u_{p,h}|_{H^{q-1}(\Omega)} \le \sqrt{2} \; \hn |u|_{H^q(\Omega)},
\end{equation*}
where $\widetilde{S}_{p,h}^{(q)}(\Omega)$ is the space of all $u_{p,h} \in S_{p,h}(\Omega)$ that
satisfy the following symmetry condition:
\begin{equation*}
\frac{\partial^{2l+q}}{\partial x^{2l+q}} u_{p,h}(a)=\frac{\partial^{2l+q}}{\partial x^{2l+q}} u_{p,h}(b) = 0
\mbox{ for all } l \in \mathbb{N}_0 \mbox{ with } 2l+q < p.
\end{equation*}
\end{theorem}
\begin{proof}
Let again $\Omega=(0,1)$ without loss of generality.
The proof is done by induction. From Corollary~\ref{cor:approx:nonper}, we know the estimate for $q=1$
(as $\widetilde{S}_{p,h}^{(1)}(0,1)=\widetilde{S}_{p,h}(0,1)$) and all $p> q-1=0$. For $q=1$ and $p=q-1=0$, the estimate
is a well-known result, cf. \cite{Schumaker:1981}, Theorem~6.1,~(6.7), where (in our notation)
$|u-u_{0,h}|_{L^2(0,1)} \le \hn |u|_{H^1(0,1)}$ has been shown.
Now assume that the estimate holds for some $q-1$; we show it for $q$.
As $u\in H^q(0,1)$, we know that $u'\in H^{q-1}(0,1)$, so we can apply the induction hypothesis and
obtain that there is some $u_{p-1,h}\in \widetilde{S}_{p-1,h}^{(q-1)}(0,1)$ with
\begin{equation*}
|u'-u_{p-1,h}|_{H^{q-2}(0,1)} \le \sqrt{2} \; \hn |u'|_{H^{q-1}(0,1)}.
\end{equation*}
Define
\begin{equation}\label{eq:thrm:approx:sob}
u_{p,h}(x):=c+\int_0^x u_{p-1,h}(\xi)\,d\xi.
\end{equation}
Note that $u_{p,h}\in S_{p,h}(0,1)$, as integrating increases
both the polynomial degree and the differentiability by $1$, cf.~\cite{Schumaker:1981}, Theorem~5.16.
After integrating, the boundary conditions on the $l$-th derivative
become conditions on the $(l+1)$-st derivative; therefore, we further have $u_{p,h}\in \widetilde{S}_{p,h}^{(q)}(0,1)$.
Therefore, we have
\begin{equation*}
|u'-u_{p,h}'|_{H^{q-2}(0,1)} \le \sqrt{2} \; \hn |u'|_{H^{q-1}(0,1)},
\end{equation*}
which is the same as
\begin{equation*}
|u-u_{p,h}|_{H^{q-1}(0,1)} \le \sqrt{2} \; \hn |u|_{H^{q}(0,1)}.
\end{equation*}
The bound on the grid size with respect to the degree, i.e., $h(p-q+1)<|\Omega|$, is sufficient, as the degree of $\partial^{q-1} u_{p,h}/\partial x^{q-1}$ equals $p-q+1$.
This finishes the proof.\qed
\end{proof}
\begin{remark}
The integration constant (integration constants for $q>2$) in~\eqref{eq:thrm:approx:sob} can be used
to guarantee that
\begin{equation*}
\int_{\Omega} \frac{\partial^l}{\partial x^l}(u(x)-u_{p,h}(x)) \textnormal{d} x= 0
\end{equation*}
for all $l\in\{0,1,\ldots,q-1\}$.
\end{remark}
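A minimal sketch of this normalization for $q=2$ (the spline \texttt{s} below is an arbitrary stand-in for $u_{p-1,h}$, not the one constructed in the proof; only the role of the integration constant is illustrated):

```python
import numpy as np
from scipy.interpolate import BSpline

p, n = 3, 8
h = 1.0 / n
t = np.concatenate([np.zeros(p - 1), np.linspace(0, 1, n + 1), np.ones(p - 1)])
rng = np.random.default_rng(0)
s = BSpline(t, rng.normal(size=n + p - 1), p - 1)   # stands in for u_{p-1,h}

u = lambda x: np.sin(np.pi * x)
U = s.antiderivative()               # degree-p spline, cf. the integral in the proof

xs = np.linspace(0.0, 1.0, 4001)
c = float(np.mean(u(xs) - U(xs)))    # choose c so that the mean error vanishes
u_ph = lambda x: c + U(x)
assert abs(np.mean(u(xs) - u_ph(xs))) < 1e-12
```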
For the spaces $\widetilde{S}_{p,h}^{(q)}(\Omega)$ there is again an inverse inequality.
\begin{theorem}\label{thrm:inverse:sob}
For all grid sizes $h$, each $q\in \mathbb{N}$ and each $p\in \mathbb{N}$ with $0< q \le p+1$,
\begin{equation}\label{eq:inv2:sob}
|u_{p,h}|_{H^q(\Omega)} \le 2 \sqrt{3} \hn^{-1} |u_{p,h}|_{H^{q-1}(\Omega)}
\end{equation}
is satisfied for all $u_{p,h}\in \widetilde{S}_{p,h}^{(q)}(\Omega)$, where
$\widetilde{S}_{p,h}^{(q)}(\Omega)$ is as defined in Theorem~\ref{thrm:approx:sob}.
\end{theorem}
\begin{proof}
First note that~\eqref{eq:inv2:sob} is equivalent to
\begin{equation}\label{eq:inv2:sob:2}
\left|\frac{\partial^{q-1}}{\partial x^{q-1}} u_{p,h}\right|_{H^1(\Omega)} \le 2 \sqrt{3} \hn^{-1}
\left\|\frac{\partial^{q-1}}{\partial x^{q-1}} u_{p,h}\right\|_{L^2(\Omega)}.
\end{equation}
As $\frac{\partial^{q-1}}{\partial x^{q-1}}u_{p,h}\in \widetilde{S}_{p-q+1,h}^{(1)}(\Omega) = \widetilde{S}_{p-q+1,h}(\Omega)$,
cf. \cite{Schumaker:1981}, Theorem~5.9,
the estimate~\eqref{eq:inv2:sob:2} follows directly from Theorem~\ref{thrm:inverse}.\qed
\end{proof}
Again, as we have both an approximation error estimate and an inverse inequality, we know that
both of them are sharp (cf. Corollaries~\ref{corr:sharp1} and~\ref{corr:sharp2}).
The following theorem is directly obtained from telescoping.
\begin{theorem}\label{thrm:approx:sob:2}
For all grid sizes $h$, each $q\in\mathbb{N}_0$, each $p\in\mathbb{N}$, each $r\in\mathbb{N}$
with $0\le r\le q\le p+1$ and $h(p-r)<|\Omega|$, there is for each $u\in H^q(\Omega)$
a spline approximation $u_{p,h}\in S_{p,h}(\Omega)$ such that
\begin{equation*}
|u-u_{p,h}|_{H^r(\Omega)} \le (\sqrt{2}\; \hn)^{q-r} |u|_{H^q(\Omega)}
\end{equation*}
is satisfied.
\end{theorem}
\begin{proof}
Theorem~\ref{thrm:approx:sob} states the desired result for $r=q-1$. For $r<q-1$, the
statement is shown by induction in $r$. So, we assume that the desired result holds
for some $r$, i.e., there is a spline approximation $w_{p,h}\in S_{p,h}(\Omega)$ such that
\begin{equation}\label{eq:thrm:0:ia}
|u-w_{p,h}|_{H^r(\Omega)} \le (\sqrt{2}\; \hn)^{q-r} |u|_{H^q(\Omega)}.
\end{equation}
Now, we show that there is some $u_{p,h}\in S_{p,h}(\Omega)$ such that
\begin{equation}\label{eq:thrm:0:ih}
|u-u_{p,h}|_{H^{r-1}(\Omega)} \le (\sqrt{2}\; \hn)^{q-(r-1)} |u|_{H^q(\Omega)}.
\end{equation}
As $u-w_{p,h}\in H^r(\Omega)$,
Theorem~\ref{thrm:approx:sob} states that there is a function $u_{p,h}\in S_{p,h}(\Omega)$ such
that
\begin{equation*}
|u-u_{p,h}|_{H^{r-1}(\Omega)} \le \sqrt{2}\; \hn |u-w_{p,h}|_{H^r(\Omega)},
\end{equation*}
which, together with the induction assumption~\eqref{eq:thrm:0:ia}, shows the induction
hypothesis~\eqref{eq:thrm:0:ih}. Again, the bound on the grid size, $h(p-r)<|\Omega|$, follows directly from the bounds in Theorem~\ref{thrm:approx:sob}. \qed
\end{proof}
It is not known to the authors how to choose a proper subspace of~$S_{p,h}(\Omega)$
such that a complementary inverse inequality can be shown.
\section{Extension to two and more dimensions and application in Isogeometric Analysis}\label{sec:dim}
Without loss of generality and to simplify the notation, we restrict ourselves to $\Omega:=(0,1)^d$ throughout this section.
We can extend Theorem~\ref{thrm:approx} (and also Corollary~\ref{cor:approx:nonper}) to the following theorem for a tensor-product
structured grid on $\Omega$.
Here, we introduce $\widetilde{W}_{p,h}(\Omega) = \otimes_{l=1}^d \widetilde{S}_{p,h}(0,1)$. Let $n=\nh$ for even $p$ and $n=\nh+1$ for odd $p$, cf. Table~\ref{tab:dof}.
Assuming that $(\widetilde{\varphi}_{p,h}^{(0)},\ldots , \widetilde{\varphi}_{p,h}^{(n-1)})$ is a basis of $\widetilde{S}_{p,h}(0,1)$, the space
$\widetilde{W}_{p,h}(\Omega)$ is given by
\begin{equation*}
\widetilde{W}_{p,h}(\Omega)=\left\{w\,:\,w(x_1,\ldots,x_d)=\hspace{-2mm}\sum_{i_1,\ldots,i_d=0}^{n-1}\hspace{-2mm} w_{i_1,\ldots,i_d} \widetilde{\varphi}_{p,h}^{(i_1)}(x_1) \cdots \widetilde{\varphi}_{p,h}^{(i_d)}(x_d) \right\}.
\end{equation*}
\begin{theorem}\label{eq:approx2d}
For all $u\in H^1(\Omega)$, all grid sizes $h$ and each $p\in\mathbb{N}$, with $hp<1$,
there is a spline approximation $w_{p,h}\in \widetilde{W}_{p,h}(\Omega)$ such that
\begin{equation*}
\|u-w_{p,h}\|_{L^2(\Omega)} \le \sqrt{2d}\; \hn |u|_{H^1(\Omega)}
\end{equation*}
is satisfied.
\end{theorem}
The proof is similar to the proof in~\cite{Beirao:2012}, Section 4, for the two-dimensional case. To
keep the paper self-contained, we give a proof of this theorem.
\begin{proof}{\em of Theorem~\ref{eq:approx2d}}
For the sake of simplicity, we restrict ourselves to $d=2$. The extension to more dimensions
is completely analogous. Here,
\begin{equation*}
\widetilde{W}_{p,h}(\Omega)=\widetilde{S}_{p,h}(0,1)\otimes\widetilde{S}_{p,h}(0,1)=\left\{w\;:\;w(x,y)=\sum_{i,j=0}^{n-1} w_{i,j} \widetilde{\varphi}_{p,h}^{(i)}(x) \widetilde{\varphi}_{p,h}^{(j)}(y) \right\}.
\end{equation*}
We assume $u\in C^\infty(\Omega)$; the general case then follows by a standard
density argument.
Using Corollary~\ref{cor:approx:nonper}, we can introduce for each $x\in(0,1)$ a function
$v(x,\cdot)\in \widetilde{S}_{p,h}(0,1)$ with
\begin{equation*}
\|u(x,\cdot)-v(x,\cdot)\|_{L^2(0,1)} \le \sqrt{2}\; \hn |u(x,\cdot)|_{H^1(0,1)}.
\end{equation*}
By squaring this estimate, integrating over $x$ and taking the square root, we obtain
\begin{equation}\label{eq:2d:1}
\|u-v\|_{L^2(\Omega)} \le \sqrt{2}\; \hn \left\|\frac{\partial}{\partial y} u\right\|_{L^2(\Omega)}.
\end{equation}
By choosing $v(x,\cdot)$ to be the $L^2$-orthogonal projection, we also have
\begin{equation*}
\|v(x,\cdot)\|_{L^2(0,1)} \le \|u(x,\cdot)\|_{L^2(0,1)}
\end{equation*}
for all $x\in(0,1)$ and consequently
\begin{equation}\label{eq:2d:x}
\left\|\frac{\partial}{\partial x}v(x,\cdot)\right\|_{L^2(0,1)} \le \left\|\frac{\partial}{\partial x}u(x,\cdot)\right\|_{L^2(0,1)}.
\end{equation}
As $v(x,\cdot)\in \widetilde{S}_{p,h}(0,1)$, there are coefficients $v_j(x)$ such that
\begin{equation*}
v(x,y) = \sum_{j=0}^{n-1} v_j(x) \widetilde{\varphi}_{p,h}^{(j)}(y).
\end{equation*}
Using Corollary~\ref{cor:approx:nonper}, we can introduce for each $j\in\{0,\ldots,n-1\}$ a function
$w_j\in \widetilde{S}_{p,h}(0,1)$ with
\begin{equation}\label{eq:2d:2a}
\|v_j-w_j\|_{L^2(0,1)} \le \sqrt{2}\; \hn |v_j|_{H^1(0,1)}.
\end{equation}
Next, we introduce a function $w$ by defining
\begin{equation*}
w(x,y):=\sum_{j=0}^{n-1} w_{j}(x) \widetilde{\varphi}_{p,h}^{(j)}(y),
\end{equation*}
which is obviously a member of the space $\widetilde{W}_{p,h}(\Omega)$.
By squaring \eqref{eq:2d:2a}, multiplying it with $\widetilde{\varphi}_{p,h}^{(j)}(y)^2$, summing over $j$ and
taking the integral, we obtain
\begin{equation}\nonumber
\int_0^1 \sum_{j=0}^{n-1}\|v_j-w_j\|_{L^2(0,1)}^2\widetilde{\varphi}_{p,h}^{(j)}(y)^2\textnormal{d} y
\le 2\; \hn^2 \int_0^1\sum_{j=0}^{n-1}|v_j|_{H^1(0,1)}^2 \widetilde{\varphi}_{p,h}^{(j)}(y)^2\textnormal{d} y.
\end{equation}
Using the definition of the norms, we obtain
\begin{equation}\nonumber
\int_0^1\int_0^1 \sum_{j=0}^{n-1} (v_j(x)-w_j(x))^2\widetilde{\varphi}_{p,h}^{(j)}(y)^2\textnormal{d} x\textnormal{d} y
\le 2\; \hn^2 \int_0^1\int_0^1\sum_{j=0}^{n-1} v_j'(x)^2 \widetilde{\varphi}_{p,h}^{(j)}(y)^2\textnormal{d} x\textnormal{d} y
\end{equation}
and further
\begin{equation}\nonumber
\|v-w\|_{L^2(\Omega)} \le \sqrt{2}\; \hn \left\|\frac{\partial}{\partial x} v\right\|_{L^2(\Omega)}.
\end{equation}
Using~\eqref{eq:2d:x}, we obtain
\begin{equation}\label{eq:2d:2}
\|v-w\|_{L^2(\Omega)} \le \sqrt{2}\; \hn \left\|\frac{\partial}{\partial y} u\right\|_{L^2(\Omega)}.
\end{equation}
Using~\eqref{eq:2d:1} and~\eqref{eq:2d:2}, we obtain
\begin{align}
\|u-w\|_{L^2(\Omega)} &\le \|u-v\|_{L^2(\Omega)} + \|v-w\|_{L^2(\Omega)} \nonumber \\
& \le \sqrt{2}\; \hn \left\|\frac{\partial}{\partial y} u\right\|_{L^2(\Omega)} +
\sqrt{2}\; \hn \left\|\frac{\partial}{\partial x} u\right\|_{L^2(\Omega)}\label{eq:anisotropic-estimate}\\
& \le 2\; \hn |u|_{H^1(\Omega)},\nonumber
\end{align}
which finishes the proof.\qed
\end{proof}
The extension of Theorem~\ref{thrm:inverse} to two or more dimensions is straightforward.
\begin{theorem}
For all grid sizes $h$ and each $p\in \mathbb{N}$, the inequality
\begin{equation}\nonumber
|u_{p,h}|_{H^1(\Omega)} \le 2 \;\sqrt{3d} \;\hn^{-1}\;\|u_{p,h}\|_{L^2(\Omega)}
\end{equation}
is satisfied for all $u_{p,h}\in \widetilde{W}_{p,h}(\Omega)$.
\end{theorem}
\begin{proof}
For the sake of simplicity, we restrict ourselves to $d=2$. The generalization to more
dimensions is completely analogous.
We obviously have
\begin{align*}
|u_{p,h}|_{H^1(\Omega)}^2 &= \left\|\frac{\partial}{\partial x} u_{p,h}\right\|_{L^2(\Omega)}^2 + \left\|\frac{\partial}{\partial y} u_{p,h}\right\|_{L^2(\Omega)}^2\\
& = \int_0^1 |u_{p,h}(\cdot,y)|_{H^1(0,1)}^2 \textnormal{d} y
+ \int_0^1 |u_{p,h}(x,\cdot)|_{H^1(0,1)}^2 \textnormal{d} x.
\end{align*}
This can be bounded from above using Theorem~\ref{thrm:inverse} via
\begin{align*}\nonumber
|u_{p,h}|_{H^1(\Omega)}^2 \leq& 12 \hn^{-2} \left( \int_0^1 \|u_{p,h}(\cdot,y)\|_{L^2(0,1)}^2 \textnormal{d} y
+ \int_0^1 \|u_{p,h}(x,\cdot)\|_{L^2(0,1)}^2 \textnormal{d} x\right) \\ =& 24 \hn^{-2} \|u_{p,h}\|_{L^2(\Omega)}^2,
\end{align*}
which finishes the proof.\qed
\end{proof}
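For $p=1$ (where again $\widetilde{S}_{1,h}=S_{1,h}$), the two-dimensional bound can be verified via the Kronecker structure of the tensor-product mass and stiffness matrices. The matrices below are the standard ones for hat functions on $(0,1)$ and are an illustrative choice:

```python
import numpy as np
from scipy.linalg import eigh

n = 8
h = 1.0 / n
N = n + 1
# 1D mass (M) and stiffness (K) matrices for p=1 hat functions on (0,1)
M = (h / 6) * (np.diag(np.r_[2, 4 * np.ones(N - 2), 2])
               + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
K = (1 / h) * (np.diag(np.r_[1, 2 * np.ones(N - 2), 1])
               - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1))

# 1D: the largest generalized eigenvalue of K x = lam M x is at most 12/h^2
lam1d = eigh(K, M, eigvals_only=True).max()
assert lam1d <= 12 / h**2 + 1e-6

# 2D: |u|_{H1}^2 = x'(K⊗M + M⊗K)x, ||u||^2 = x'(M⊗M)x; eigenvalues add up
A = np.kron(K, M) + np.kron(M, K)
B = np.kron(M, M)
lam2d = eigh(A, B, eigvals_only=True).max()
assert lam2d <= 24 / h**2 + 1e-6               # (2*sqrt(3*2)/h)^2 = 24/h^2
assert abs(lam2d - 2 * lam1d) <= 1e-6 * lam1d  # Kronecker sum: eigenvalues add
```

The last assertion reflects the proof above: the squared $H^1$-seminorm splits direction by direction, so the two-dimensional constant is $\sqrt{2}$ times the one-dimensional one.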
The extension to isogeometric spaces can be done following the approach presented in \cite{Bazilevs:2006}, Section 3.3.
In Isogeometric Analysis, we have a geometry parameterization $\mathbf{F}:(0,1)^d \rightarrow \hat\Omega$. An isogeometric function on $\hat\Omega$
is then given as the composition of a B-spline on $(0,1)^d$ with the inverse of $\mathbf{F}$.
The following result can be shown using a standard chain rule argument: there exists a constant $C=C(\mathbf{F},q)$ such that
\begin{equation}\label{eq:norm:equivalence}
C^{-1} \left\| f \right\|_{H^{q}(\hat\Omega)}
\leq \left\| f \circ \mathbf{F} \right\|_{H^{q}((0,1)^d)}
\leq C \left\| f \right\|_{H^{q}(\hat\Omega)}
\end{equation}
for all $f\in H^{q}(\hat\Omega)$.
See \cite{Bazilevs:2006}, Lemma 3.5, or \cite{Beirao:2012}, Corollary 5.1, for related results. In both papers the statements are slightly more general: \cite{Bazilevs:2006} gives a more detailed dependence on the parameterization $\mathbf{F}$, whereas \cite{Beirao:2012} establishes bounds for anisotropic grids.
Obviously, an extension to anisotropic grids can be achieved directly using the estimate \eqref{eq:anisotropic-estimate}. Note that the degree and the grid size are not necessarily equal in each parameter direction.
Using this equivalence of norms, we can transfer all results from the parameter domain $(0,1)^d$ to the physical domain $\hat\Omega$. However, we need to point out that this equivalence is not valid for seminorms. Hence, in Theorem~\ref{thrm:approx} (and the follow-up Theorems \ref{thrm:approx:sob}, \ref{thrm:approx:sob:2} and \ref{eq:approx2d}) the seminorms on the right-hand side of the estimates need to be replaced by the full norms. Moreover, the bounds depend on the geometry parameterization via the constant $C$ in \eqref{eq:norm:equivalence}.
A similar strategy can be followed when extending the results to NURBS. We do not go into the details here but refer to \cite{Bazilevs:2006,Beirao:2012} for a more detailed study. In the case of NURBS the seminorms again have to be replaced by the full norms due to the quotient rule of differentiation. In that case the constants of the bounds additionally depend on the given denominator of the NURBS parameterization.
\section*{Acknowledgements} The authors want to thank Walter Zulehner for his suggestions, which helped to improve the
presentation of the results in this paper.
\section*{Appendix}
At this point, we want to give a basis for $\widetilde{S}_{p,h}(\Omega)$ to make the reader more familiar with
that space and to demonstrate that it is possible to work with it. The basis we introduce
is directly related to the (scaled) cardinal B-splines $\{\varphi_{p,h}^{(i)}\}^{\nh-1}_{i=-p}$.
\begin{lemma}\label{lem:basis-tilde}
The set
$\{ \widetilde{\varphi}_{p,h}^{(i)} \}^{}_{i=-\left\lceil\frac p2\right\rceil , \ldots, \nh-\left\lfloor\frac p2\right\rfloor-1}$
with
\begin{equation}\label{eq:basis-tilde}
\widetilde{\varphi}_{p,h}^{(i)} := \sum_{l \in \{-i-p-1,i,2\nh -i -p -1\}} \varphi_{p,h}^{(l)}
\end{equation}
is a basis of $\widetilde{S}_{p,h}(\Omega)$.
\end{lemma}
Before we prove Lemma~\ref{lem:basis-tilde} we give a more practical representation of the basis functions
by removing all contributions vanishing in $\Omega$. We obtain for odd $p$ that
\begin{align*}
&\widetilde{\varphi}_{p,h}^{(i)} = \varphi_{p,h}^{(i)} && i = -(p+1)/2\\
&\widetilde{\varphi}_{p,h}^{(i)} = \varphi_{p,h}^{(i)} + \varphi_{p,h}^{(-i-p-1)} && i = -(p-1)/2,\ldots,-1 \\
&\widetilde{\varphi}_{p,h}^{(i)} = \varphi_{p,h}^{(i)} && i=0,\ldots,\nh -p-1 \\
&\widetilde{\varphi}_{p,h}^{(i)} = \varphi_{p,h}^{(i)} + \varphi_{p,h}^{(2\nh-i-p-1)} && i=\nh-p,\ldots, \nh-(p+3)/2 \\
&\widetilde{\varphi}_{p,h}^{(i)} = \varphi_{p,h}^{(i)} && i=\nh-(p+1)/2
\end{align*}
and for even $p$ that
\begin{align*}
&\widetilde{\varphi}_{p,h}^{(i)} = \varphi_{p,h}^{(i)} + \varphi_{p,h}^{(-i-p-1)} && i = -p/2,\ldots,-1 \\
&\widetilde{\varphi}_{p,h}^{(i)} = \varphi_{p,h}^{(i)} && i=0,\ldots,\nh -p-1 \\
&\widetilde{\varphi}_{p,h}^{(i)} = \varphi_{p,h}^{(i)} + \varphi_{p,h}^{(2\nh-i-p-1)} && i=\nh-p,\ldots, \nh-p/2-1.
\end{align*}
Note that here we need that $0 \leq \nh-p-1$, which is equivalent to $hp<1$.
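To make the index bookkeeping in \eqref{eq:basis-tilde} concrete, the following Python sketch (our own illustration, not part of the construction; the helper name \texttt{tilde\_basis\_indices} is made up) tabulates, for each basis index $i$, the cardinal B-spline indices $l\in\{-p,\ldots,\nh-1\}$ that actually contribute on $\Omega=(0,1)$. With it one can check that every such $l$ occurs in exactly one basis function, in line with the partition-of-unity property noted after the proof.

```python
import math

def tilde_basis_indices(p, n):
    """For each basis index i of tilde(S)_{p,h}(0,1), with n = 1/h, return
    the set of cardinal B-spline indices l appearing in
    tilde(phi)^(i) = sum_{l in {-i-p-1, i, 2n-i-p-1}} phi^(l),
    restricted to the indices l = -p, ..., n-1 of the splines that do not
    vanish identically on (0,1)."""
    lo = -math.ceil(p / 2)
    hi = n - math.floor(p / 2) - 1
    return {i: {l for l in {-i - p - 1, i, 2 * n - i - p - 1}
                if -p <= l <= n - 1}
            for i in range(lo, hi + 1)}
```

For example, `tilde_basis_indices(3, 4)` returns `{-2: {-2}, -1: {-3, -1}, 0: {0}, 1: {1, 3}, 2: {2}}`, so every index in $\{-3,\ldots,3\}$ is used exactly once.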
\begin{proof}{\em of Lemma~\ref{lem:basis-tilde}}
For the sake of simplicity, we consider the case $\Omega=(0,1)$ only.
We show first that
every function in~\eqref{eq:basis-tilde} is in $\widetilde{S}_{p,h}(0,1)$. Note that we have constructed
$\widetilde{S}_{p,h}(0,1)$ such that the restriction of any symmetric function in $\widehat{S}_{p,h}(-1,1)$
to $(0,1)$ is a member of $\widetilde{S}_{p,h}(0,1)$. Let $n=1/h$ and consider the functions
$\{\widehat{\varphi}_{p,h}^{(j)}\}_{j=-n}^{n-1}$,
which form a basis of $\widehat{S}_{p,h}(-1,1)$; here we use the shifted indexing $j=i-n$. Defining
$$s_{j}(x) := \widehat{\varphi}_{p,h}^{(j)}(x)+\widehat{\varphi}_{p,h}^{(j)}(-x)= \widehat{\varphi}_{p,h}^{(j)}(x)+\widehat{\varphi}_{p,h}^{(-j-p-1)}(x),$$
for $j=-n,\ldots,n-1$, we obtain symmetric functions in $\widehat{S}_{p,h}(-1,1)$. Using the relation
$$\widehat{\varphi}_{p,h}^{(j)}|_{(0,1)} = \sum_{k\in\mathbb{Z}} \varphi_{p,h}^{(j+2nk)},$$
we obtain that the restriction of $s_j$ to $(0,1)$ fulfills
$$s_j|_{(0,1)} = \sum_{k\in\mathbb{Z}} \varphi_{p,h}^{(j+2nk)} + \sum_{k\in\mathbb{Z}} \varphi_{p,h}^{(-j-p-1+2nk)}
= \varphi_{p,h}^{(j)} + \sum_{k\in\mathbb{Z}} \varphi_{p,h}^{(-j-p-1+2nk)},$$
which is
\begin{align*}
&s_j|_{(0,1)} = \varphi_{p,h}^{(j)} + \varphi_{p,h}^{(-j-p-1)}
&&\mbox{ for } j\in \{ -n,\ldots,-1\},\\
&s_j|_{(0,1)} = \varphi_{p,h}^{(j)}
&&\mbox{ for } j\in \{ 0,\ldots,n-p-1\}, \mbox{ or}\\
&s_j|_{(0,1)} = \varphi_{p,h}^{(j)} + \varphi_{p,h}^{(-j-p-1+2n)}
&&\mbox{ for } j\in \{ n-p,\ldots,n-1\}.
\end{align*}
In all three cases $s_j$ equals $\widetilde{\varphi}_{p,h}^{(j)}$ or $2\widetilde{\varphi}_{p,h}^{(j)}$.
This shows that
$\widetilde{\varphi}_{p,h}^{(i)}\in \widetilde{S}_{p,h}(0,1)$.
It is easy to see that the functions in~\eqref{eq:basis-tilde} are linearly independent
for $i=-\left\lceil\frac p2\right\rceil , \ldots, n-\left\lfloor\frac p2\right\rfloor-1$.
So, it remains to show that every function $u_{p,h} \in \widetilde{S}_{p,h}(0,1)$ can be expressed as a linear
combination of the functions in~\eqref{eq:basis-tilde}. As we have already noticed, by construction
the function $u_{p,h}$ can be extended to $(-1,1)$ by defining $w_{p,h}(x):=u_{p,h}(|x|)$. Note that
$w_{p,h} \in \widehat{S}_{p,h}(-1,1)$. So, we can express it as a linear combination of basis functions
of the basis given in~\eqref{eq:basis:varphi} via
\begin{equation*}
w_{p,h} = \sum_{j=-n}^{n-1} w_j \widehat{\varphi}_{p,h}^{(j)}.
\end{equation*}
By construction, $w_{p,h}(x)=w_{p,h}(-x)$, so we obtain
\begin{align*}
w_{p,h}(x) & = \frac12 (w_{p,h}(x)+w_{p,h}(-x))
= \frac12 \sum_{j=-n}^{n-1} w_j (\widehat{\varphi}_{p,h}^{(j)}(x)+\widehat{\varphi}_{p,h}^{(j)}(-x) ) \\
& = \frac12 \sum_{j=-n}^{n-1} w_j (\widehat{\varphi}_{p,h}^{(j)}(x)+\widehat{\varphi}_{p,h}^{(-j-p-1)}(x) )\\
& = \frac12 \sum_{j=-n}^{n-1}\sum_{k\in\mathbb{Z}} w_j (\varphi_{p,h}^{(j+2nk)}(x)+\varphi_{p,h}^{(-j-p-1+2nk)}(x) )\\
& = \frac12 \sum_{j=-n}^{n-1}w_j (\varphi_{p,h}^{(-j-p-1)}(x)+\varphi_{p,h}^{(j)}(x)+\varphi_{p,h}^{(2n-j-p-1)}(x) ).
\end{align*}
Again, it can easily be checked that for all $j,n\in\mathbb{Z}$ the term
$$
\varphi_{p,h}^{(-j-p-1)}(x)+\varphi_{p,h}^{(j)}(x)+\varphi_{p,h}^{(2n-j-p-1)}(x)
$$
is in the span of $\{ \widetilde{\varphi}_{p,h}^{(i)} \}^{}_{i=-\left\lceil\frac p2\right\rceil , \ldots, n-\left\lfloor\frac p2\right\rfloor-1}$, which concludes the proof.
\qed\end{proof}
We observe that the basis forms a partition of unity. Moreover, all basis functions are obviously
non-negative linear combinations of B-splines. Hence we call it a \emph{B-spline-like basis}.
Fig.~\ref{fig:p2} and~\ref{fig:p4} depict the B-spline basis functions that span $\widetilde{S}_{p,h}(0,1)$.
Here, the basis functions that have an influence at the boundary
are plotted with solid lines. The basis functions that have zero derivatives up to order $p-1$
at the boundary coincide with standard B-spline functions. They are plotted
with dashed lines.
If we compare the pictures of the B-spline basis functions in $\widetilde{S}_{p,h}(0,1)$ (Fig.~\ref{fig:p2}
and~\ref{fig:p4}) with the standard B-spline basis functions for $S_{p,h}(0,1)$ (Fig.~\ref{fig:p2a}
and~\ref{fig:p4a}) obtained from a classical open knot vector, we see that the latter
have more basis functions associated with the boundary. This can also be seen by counting
the number of degrees of freedom, cf. Table~\ref{tab:dof}.
\begin{figure}
\begin{center}
\includegraphics[scale=.55]{spline1.pdf}
\includegraphics[scale=.55]{spline2.pdf}
\caption{B-spline-like basis functions for $\widetilde{S}_{1,h}(0,1)$ and $\widetilde{S}_{2,h}(0,1)$}
\label{fig:p2}
\end{center}
\begin{center}
\includegraphics[scale=.55]{spline3.pdf}
\includegraphics[scale=.55]{spline4.pdf}
\caption{B-spline-like basis functions for $\widetilde{S}_{3,h}(0,1)$ and $\widetilde{S}_{4,h}(0,1)$}
\label{fig:p4}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=.55]{spline1a.pdf}
\includegraphics[scale=.55]{spline2a.pdf}
\caption{B-spline basis functions for ${S}_{1,h}(0,1)$ and ${S}_{2,h}(0,1)$}
\label{fig:p2a}
\end{center}
\begin{center}
\includegraphics[scale=.55]{spline3a.pdf}
\includegraphics[scale=.55]{spline4a.pdf}
\caption{B-spline basis functions for ${S}_{3,h}(0,1)$ and ${S}_{4,h}(0,1)$}
\label{fig:p4a}
\end{center}
\end{figure}
\bibliographystyle{amsplain}
| {
"timestamp": "2015-11-16T02:08:37",
"yymm": "1502",
"arxiv_id": "1502.03733",
"language": "en",
"url": "https://arxiv.org/abs/1502.03733",
"abstract": "In this paper, we develop approximation error estimates as well as corresponding inverse inequalities for B-splines of maximum smoothness, where both the function to be approximated and the approximation error are measured in standard Sobolev norms and semi-norms. The presented approximation error estimates do not depend on the polynomial degree of the splines but only on the grid size.We will see that the approximation lives in a subspace of the classical B-spline space. We show that for this subspace, there is an inverse inequality which is also independent of the polynomial degree. As the approximation error estimate and the inverse inequality show complementary behavior, the results shown in this paper can be used to construct fast iterative methods for solving problems arising from isogeometric discretizations of partial differential equations.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Approximation error estimates and inverse inequalities for B-splines of maximum smoothness"
} |
https://arxiv.org/abs/1904.05322 | Convex envelopes on Trees | We introduce two notions of convexity for an infinite regular tree. For these two notions we show that given a continuous boundary datum there exists a unique convex envelope on the tree and characterize the equation that this envelope satisfies. We also relate the equation with two versions of the Laplacian on the tree. Moreover, for a function defined on the tree, the convex envelope turns out to be the solution to the obstacle problem for this equation. | \section{Introduction}
In mathematics, a real-valued function defined on an interval is called convex if the line segment between any two points on the graph of the function lies above or on the graph. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. For a twice differentiable function of a single variable, if the second derivative is always greater than or equal to zero in the entire interval then the function is convex.
Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems where they are distinguished by a number of convenient properties. For instance, a (strictly) convex function has no more than one minimum. Even in infinite-dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties and as a result, they are the most well-understood functionals in the calculus of variations. In probability theory, a convex function applied to the expected value of a random variable is always less than or equal to the expected value of the convex function of the random variable.
On the other hand, linear and nonlinear equations (coming mainly from mean value properties) on trees are models that are close (and related) to
linear and nonlinear PDEs in the unit ball of $\mathbb{R}^N$; therefore, it seems natural to look for convex functions
on trees. This is our main goal in this paper.
For other analytical issues on discrete structures (including graphs such
as trees) we refer to \cite{ary,BBGS,DPMR1,DPMR2,DPMR3,KLW,KW,Ober,s-tree,s-tree1} and references therein.
Let us begin by making precise the well-known notion of convexity in $\mathbb{R}^N.$
We fix a bounded smooth domain $\Omega \subset {\mathbb{R}}^N$.
A function $u\colon\Omega \to {\mathbb{R}}$ is said to be a convex function
if for any two points $x,y \in \Omega$ such that the segment $[x,y]$ is contained in $\Omega$,
it holds that
\begin{equation} \label{convexo-usual}
u(tx+(1-t)y) \leq tu(x) + (1-t) u(y), \qquad \forall t \in (0,1).
\end{equation}
With this definition one can define the convex envelope of a boundary
datum $g \colon \partial \Omega \to {\mathbb{R}}$ as
\begin{equation} \label{convex-envelope-usual}
u^* (x) \coloneqq \sup \left\{u(x)\colon u \text{ is convex and verifies } u|_{\partial \Omega} \leq g \right\}.
\end{equation}
Here by $u|_{\partial \Omega} \leq g,$ we understand
\[
\limsup_{\Omega \ni x\to z}u(x)\le g(z)
\quad\forall z\in\partial\Omega.
\]
This convex envelope turns out to be the largest solution to
\begin{equation} \label{convex-envelope-usual-eq}
\begin{array}{ll}
\lambda_1 (D^2 u) (x) = 0 \qquad &x\in \Omega,
\end{array}
\end{equation}
(the equation has to be
interpreted in viscosity sense) with
\[
u(x) \leq g (x) \qquad x\in \partial \Omega.
\]
Here $\lambda_1\leq \lambda_2\leq\cdots\leq\lambda_N$ are the ordered eigenvalues of
the Hessian matrix, $D^2u$. We refer to \cite{BlancRossi,HL1,OS,Ober33}.
Notice that the equation
\[
\lambda_1 (D^2 u) (x) = 0
\]
is equivalent to
\[
\min \left\{\langle D^2 u(x) v, v \rangle\colon v\in\mathbb{R}^N
\text{ such that } \|v\|=1\right\} =0.
\]
This says that the equation that governs the convex envelope
is just the minimum among all possible directions of
the second derivative of the function at
$x$ equal to zero. Here we notice that we have existence of a continuous-up-to-the-boundary
convex envelope for every continuous datum if and only if the domain is strictly convex, see \cite{BlancRossi,HL1,OS}.
In this paper, our main goal is to develop similar ideas and concepts on a tree.
When one wants to extend the notion of convexity to an ambient space beyond
the Euclidean setting, the two key steps are to introduce what a ``segment''
is in our space and, once this is done, to understand what a ``midpoint'' of the segment is.
We introduce two different definitions of segments and midpoints and study the associated
notion of convexity linked to each of them. In particular, for both definitions we are able to
characterize the related equation that governs the convex envelope of a given boundary
datum.
Closely related to our results is \cite{BKND} where the authors considered convex functions
on finite trees and showed that some relevant functions that are naturally related to spectral
problems on the tree are convex.
Before starting with our main goal, we need to introduce our
ambient space.
Given $m\in\mathbb{N}_{\ge2},$ a tree $\mathbb{T}_m$ with regular
$m-$branching is an infinite graph that consists of
the empty set $\emptyset$ and all finite sequences
$(a_1,a_2,\dots,a_k)$ with $k\in{\mathbb N},$
whose coordinates $a_i$ are chosen from $\{0,1,\dots,m-1\}.$
\begin{center}
\pgfkeys{/pgf/inner sep=0.19em}
\begin{forest}
[$\emptyset$,
[0
[0
[0 [,edge=dotted]]
[1 [,edge=dotted]]
[2 [,edge=dotted]]
]
[1
[0 [,edge=dotted]]
[1 [,edge=dotted]]
[2 [,edge=dotted]]
]
[2
[0 [,edge=dotted]]
[1 [,edge=dotted]]
[2 [,edge=dotted]]
]
]
[1
[0
[0 [,edge=dotted]]
[1 [,edge=dotted]]
[2 [,edge=dotted]]
]
[1
[0 [,edge=dotted]]
[1 [,edge=dotted]]
[2 [,edge=dotted]]
]
[2
[0 [,edge=dotted]]
[1 [,edge=dotted]]
[2 [,edge=dotted]]
]
]
[2
[0
[0 [,edge=dotted]]
[1 [,edge=dotted]]
[2 [,edge=dotted]]
]
[1
[0 [,edge=dotted]]
[1 [,edge=dotted]]
[2 [,edge=dotted]]
]
[2
[0 [,edge=dotted]]
[1 [,edge=dotted]]
[2 [,edge=dotted]]
]
]
]
\end{forest}
A tree with $3-$branching.
\end{center}
The elements in $\mathbb{T}_m$ are called vertices.
Each vertex $x$ has $m$ successors, obtained by adding
another coordinate. We will denote by
\[
\S(x)\coloneqq\{(x,i)\colon i\in\{0,1,\dots,m-1\}\}
\]
the set of successors of the vertex $x.$
If $x$ is not the root then $x$ has a unique
immediate predecessor, which we denote by $\hat{x}.$
The segment connecting a vertex $x$ with $\hat{x}$ is called an
edge and denoted by $(\hat{x},x).$
A vertex $x\in\mathbb{T}_m$ has level $k\in\mathbb{N}$ if $x=(a_1,a_2,\dots,a_k)$.
The level of $x$ is denoted by $|x|$ and
the set of all $k-$level vertices is denoted by $\mathbb{T}_m\!\!\!^k.$
We say that the edge $e=(\hat{x},x)$ has level $k$ if
$x\in \mathbb{T}_m\!\!\!^k.$
A branch of $\mathbb{T}_m$ is an infinite sequence of vertices, where each one of them is followed
by one of its immediate successors.
The collection of all branches forms the boundary of $\mathbb{T}_m$, denoted
by $\partial\mathbb{T}_m$.
Observe that the mapping $\psi:\partial\mathbb{T}_m\to[0,1]$ defined as
\[
\psi(\pi)\coloneqq\sum_{k=1}^{+\infty} \frac{a_k}{m^{k}}
\]
is surjective, where $\pi=(a_1,\dots, a_k,\dots)\in\partial\mathbb{T}_m$ and
$a_k\in\{0,1,\dots,m-1\}$ for all $k\in\mathbb{N}.$ Whenever
$x=(a_1,\dots,a_k)$ is a vertex, we set
\[
\psi(x)\coloneqq\psi(a_1,\dots,a_k,0,\dots,0,\dots).
\]
At this point, recall that, as we have mentioned,
the definition of a convex function depends on what
we understand by a segment and by the midpoint of a segment.
A path from a vertex $x$ to a vertex $y$ in $\mathbb{T}_m$ is a sequence of
vertices $x_0, x_1, x_2, \dots, x_k$ such that
$x_0= x$, $x_k=y$ and for any $i= 1, 2, \dots, k$ we have that
either $\hat{x}_{i-1}=x_i$ or $x_i\in\mathcal{S}({x}_{i-1})$, that is, $x_{i-1}$ and $x_i$ are adjacent
(connected)
in the graph $\mathbb{T}_m$.
A path is called a minimal path if and only if all the vertices
on the path are different. Observe that for any $x,y\in\mathbb{T}_m$ there is
a unique minimal path from $x$ to $y.$ This minimal path is
denoted by $[x,y].$ This is our first idea of what is a segment of $\mathbb{T}_m$.
\begin{center}
\pgfkeys{/pgf/inner sep=0.19em}
\begin{forest}
[$\emptyset$,circle,fill=yellow,draw=blue,
[0,circle,fill=yellow,draw=blue[0[0][1][2]][1[0][1][2]]
[2,circle,fill=yellow,draw=blue[0]
[$x$,circle,fill=yellow,draw=blue][2]]]
[1,circle,fill=yellow,draw=blue[0,circle,fill=yellow,draw=blue[$y$,circle,fill=yellow,draw=blue][1][2]][1[0][1][2]][2[0][1][2]]]
[2[0[0][1][2]][1[0][1][2]][2[0][1][2]]]
]
\end{forest}
A path from a vertex $x$ to a vertex $y$.
\end{center}
In a slight abuse of notation, we say that an edge $e$
belongs to a path $\gamma$ if there is a vertex $x\in\gamma$
such that $e=(\hat{x},x).$
We now define the length of an edge $e$ as follows:
\[
\text{length} (e) \coloneqq \frac{1}{m^k}
\qquad \text{ if } e \text{ has level } k.
\]
With this length we can define the total length of a path $\gamma$ as the sum
of the lengths of the edges involved in $\gamma$, that is,
\[
\text{length} (\gamma) \coloneqq
\sum_{e\in\gamma} \text{length} (e).
\]
Now, with this notion of length of an edge and then of a path, let us
consider the distance between nodes given by the length
of the minimal path.
That is, given two nodes $x$, $y$ we let
\[
d(x,y) \coloneqq
\text{length}([x,y]).
\]
Remark that $d$ is a genuine metric: $d(x,y)\ge0,$
$d(x,y)=0$ iff $x=y$, $d$ is symmetric, and the triangle inequality holds (since
the infimum of the lengths of the paths that join $x$ with $y$
is less than or equal to the infimum of the lengths of the paths
that join $x$ with $z$ plus the infimum of the lengths of the paths
that join $z$ with $y$).
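The distance $d$ is easy to compute from the digit representation of the vertices: the minimal path goes from $x$ up to the longest common prefix of $x$ and $y$, and then down to $y$. The following Python sketch (an illustration; the tuple-of-digits encoding of vertices is our assumption) implements this and reproduces, for instance, the values $d(y,z)=2/m^{|x|+1}$ for distinct $y,z\in\S(x)$ and $d(\hat{x},y)=(m+1)/m^{|x|+1}$ for $y\in\S(x)$ used later in the proof of Lemma \ref{lema:convexeq}.

```python
from fractions import Fraction

def dist(x, y, m):
    """Length of the minimal path [x, y] in T_m, where an edge whose lower
    endpoint has level k contributes 1/m^k.  Vertices are tuples of digits;
    the root is the empty tuple."""
    # the minimal path turns around at the longest common prefix of x and y
    j = 0
    while j < min(len(x), len(y)) and x[j] == y[j]:
        j += 1
    # edges from x up to the common ancestor, plus edges down to y
    return (sum(Fraction(1, m ** k) for k in range(j + 1, len(x) + 1))
            + sum(Fraction(1, m ** k) for k in range(j + 1, len(y) + 1)))
```

For example, in $\mathbb{T}_3$ the siblings $(0,2,0)$ and $(0,2,1)$ are at distance $2/27$, while $d\big((0),(0,2,1)\big)=1/9+1/27=4/27$.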
We are now ready to introduce our first notion of convex function. We say that a function $u:\mathbb{T}_m \to \mathbb{R}$ is convex if
for any $x,y,z\in \mathbb{T}_m$ with $z\in[x,y],$ it holds that
\[
u(z) \leq \frac{d(y,z)}{d(x,y)} u(x) +
\frac{d(x,z)}{d(x,y)} u(y).
\]
Using this definition, we can look for the convex envelope
of a function defined on $\partial \mathbb{T}_m$. Given a function $g\colon[0,1]\to\mathbb{R}$
(we identify $\partial \mathbb{T}_m$ with $[0,1]$ via the mapping $\psi$), we define
the convex envelope of $g$ on $\mathbb{T}_m$ as
\begin{equation} \label{convex-envelope-arbol}
u^*_g (x) \coloneqq
\sup \Big\{u(x) \colon u\in\mathcal{C}(g) \Big\},
\end{equation}
where
\[
\mathcal{C}(g)\coloneqq
\left\{ u\colon\mathbb{T}_m\to\mathbb{R}\colon
u \text{ is a convex function} \text{ and }
\limsup_{x\to \pi\in \partial \mathbb{T}_m} u(x)\leq g (\psi(\pi))
\right\}.
\]
The convex envelope is unique (this follows easily since the maximum
of two convex functions is also convex). Moreover,
associated to this convex envelope we have a nonlinear equation that is verified on the whole tree.
\begin{teo} \label{teo.convex.arbol}
The convex envelope of a continuous function $g\colon[0,1]\to\mathbb{R}$
is the largest solution to
\begin{equation} \label{eq.tree.convex}
u (x) = \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m \, u(y)}{m+1}
\right\}\quad\text{on }\mathbb{T}_m,
\end{equation}
that verifies
\begin{equation}\label{eq:bordecond}
\limsup_{x\to \pi\in \partial \mathbb{T}_m}u(x) \leq g (\psi(\pi)).
\end{equation}
\end{teo}
Let us clarify that in the case $x=\emptyset$, relation
\eqref{eq.tree.convex} is understood as
\[
u (x)=
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2,
\]
since $\emptyset$ does not have a predecessor because it is the
root of $\mathbb{T}_m.$
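Although no numerics are needed for our results, equation \eqref{eq.tree.convex} can be explored computationally on a truncated tree. The following Python sketch (purely illustrative; truncating at a finite level and freezing $u=g\circ\psi$ there is our own crude approximation of the boundary condition) iterates the mean value relation as a fixed-point sweep.

```python
from itertools import product

def convex_envelope(g, m, depth, sweeps=200):
    """Iterate u(x) = min{ min_{y != z in S(x)} (u(y)+u(z))/2,
    min_{y in S(x)} (u(x^) + m u(y))/(m+1) } on T_m truncated at `depth`,
    with u fixed to g(psi(x)) at the last level.  Purely illustrative."""
    def psi(x):
        return sum(a / m ** (k + 1) for k, a in enumerate(x))
    nodes = [x for d in range(depth + 1) for x in product(range(m), repeat=d)]
    u = {x: g(psi(x)) for x in nodes}            # initial guess from the datum
    interior = [x for x in nodes if len(x) < depth]
    for _ in range(sweeps):
        for x in interior:
            succ = [u[x + (i,)] for i in range(m)]
            s = sorted(succ)
            best = (s[0] + s[1]) / 2             # best pair of successors
            if x:                                 # the root has no predecessor
                best = min(best, (u[x[:-1]] + m * min(succ)) / (m + 1))
            u[x] = best
    return u
```

As a sanity check, a constant datum $g\equiv c$ yields the constant solution $u\equiv c$, in agreement with the fact that both averages in \eqref{eq.tree.convex} are convex combinations.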
Notice that \eqref{eq.tree.convex} is a nonlinear mean value property at the nodes of the tree.
Once we have characterized the convex envelope as the largest solution
to \eqref{eq.tree.convex} that is below the datum on $\partial \mathbb{T}_m$, we want to study the
associated Dirichlet problem; that is, given a datum $g$ on $\partial \mathbb{T}_m$, we want to solve the equation
in $\mathbb{T}_m$ and find a solution that attains the datum continuously, in the sense that
\begin{equation}\label{eq:bordecond.77}
\lim_{x\to \pi\in \partial \mathbb{T}_m}u(x) = g (\psi(\pi)).
\end{equation}
For this Dirichlet problem, we can show existence and uniqueness for continuous data.
\begin{teo} \label{teo.convex.arbol.eq}
Given a continuous boundary datum $g$, there is a unique
solution to \eqref{eq.tree.convex} that verifies
\eqref{eq:bordecond.77}.
\end{teo}
Therefore, from Theorems \ref{teo.convex.arbol} and \ref{teo.convex.arbol.eq}, we have that for every continuous datum on $\partial \mathbb{T}_m$ the unique
convex envelope attains this datum with continuity, that is, \eqref{eq:bordecond.77} holds.
Recall that in the Euclidean case we have to ask the domain $\Omega$ to be strictly convex for
the validity of this
continuity up to the boundary of the convex envelope.
Notice that the equation \eqref{eq.tree.convex} can be rewritten as
\begin{equation} \label{eq.tree.convex.derivadassegundas}
0 = \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) + u(z) -2u(x)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{u(\hat{x}) + m u(y)
-(m+1) u(x)}{m+1}
\right\}
\end{equation}
and hence we identify one possible analogue of the eigenvalues of
the Hessian in the Euclidean case, but now on the tree,
\begin{equation} \label{uu}
\left\{
\frac{u(x,i) + u(x,j) -2u(x)}2
\right\}_{i < j}
\text{ and }
\left\{
\frac{u(\hat{x}) + m u(y)
-(m+1) u(x)}{m+1}
\right\}_{y\in\S(x)}.
\end{equation}
Recall that for the convex envelope
in the Euclidean space the associated equation reads as
\[
\min_{1\le j\le N} \lambda_j (D^2 u)=0,
\]
and compare it with \eqref{eq.tree.convex.derivadassegundas}.
Then, recalling that in the Euclidean setting the sum of the
eigenvalues of the Hessian gives the Laplacian, by adding
the expressions found in \eqref{uu} we obtain the
following version of a Laplacian on the tree,
\begin{equation} \label{eq.Laplaciano.tree}
u(x)=
\frac{2}{(m+1)^2} u(\hat{x} ) +
\frac{ m^2 +2m -1}{(m+1)^2} \frac1{m}\sum_{y\in\S(x)} u(y).
\end{equation}
Notice that this is a special case of the equations (given by mean value properties) that we previously studied
in \cite{DPFR-DtoN}, where existence and uniqueness for the Dirichlet problem were shown.
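The algebra behind \eqref{eq.Laplaciano.tree} can be double-checked mechanically: the sum of all the expressions in \eqref{uu} is linear in $u(x)$, and solving the resulting equation for $u(x)$ gives exactly the mean value formula. The following Python sketch (an independent verification in exact rational arithmetic; the function name is ours) performs this check for given data.

```python
from fractions import Fraction
from itertools import combinations

def laplacian_identity_holds(m, up, succ):
    """Check that summing the 'eigenvalues' in (uu), equating to zero and
    solving for u(x) gives the mean value formula (eq.Laplaciano.tree).
    `up` is u(x^); `succ` lists the m values u(y), y in S(x)."""
    # the sum of the eigenvalues is linear in u(x): total = a - b*u(x)
    a = (sum(Fraction(ui + uj, 2) for ui, uj in combinations(succ, 2))
         + sum(Fraction(up + m * uy, m + 1) for uy in succ))
    b = Fraction(m * (m - 1), 2) + m              # coefficient of u(x)
    u_x = a / b                                   # root of the linear equation
    rhs = (Fraction(2, (m + 1) ** 2) * up
           + Fraction(m * m + 2 * m - 1, (m + 1) ** 2)
           * Fraction(sum(succ), m))
    return u_x == rhs
```

For example, with $m=3$, $u(\hat{x})=5$ and successor values $1,2,3$, both sides evaluate to $19/8$.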
Finally, we also study the convex envelope of a function defined on $\mathbb{T}_m$. That is, given a bounded function
$f:\mathbb{T}_m \to \mathbb{R}$ (not necessarily convex), we look for its convex envelope, which is given by
\begin{equation} \label{convex-envelope-arbol.f}
u^*_f (x) \coloneqq
\sup \big\{u(x) \colon u\in\mathcal{C}(f) \big\},
\end{equation}
where
\[
\mathcal{C}(f)\coloneqq
\Big\{ u\colon\mathbb{T}_m\to\mathbb{R}\colon
u \text{ is a convex function and }
u(x)\leq f (x) \ \forall x\in \mathbb{T}_m
\Big\}.
\]
The convex envelope is unique (this follows easily since the maximum
of two convex functions is also convex). Notice that when $f$ attains a minimum this minimum coincides
with the minimum of the convex envelope (and it is attained at the same vertices).
This convex envelope turns out to be the solution to
the obstacle problem for the equation \eqref{eq.tree.convex}. For the analogous property in the Euclidean
setting, we refer to \cite{Ober33}.
Recall that for the obstacle problem one important set is the coincidence set, that is given by the
set of points where $f$ and its convex envelope $u^*_f$ coincide,
\[
CS(f) =\left\{ x\in \mathbb{T}_m \, : \, f(x) = u^*_f (x) \right\}.
\]
\begin{teo} \label{teo.convex.arbol.f}
The convex envelope of a function $f\colon \mathbb{T}_m \to\mathbb{R}$
is the solution to the obstacle problem for the equation \eqref{eq.tree.convex}.
That is, $u^*_f$
is the largest solution to
\begin{equation} \label{eq.tree.convex.ffff}
u (x) \leq \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m u(y)}{m+1}
\right\}\quad\text{on }\mathbb{T}_m,
\end{equation}
that verifies
\[
u(x) \leq f(x) \qquad \forall x \in \mathbb{T}_m.
\]
In the coincidence set, the function $f$ verifies an inequality
\begin{equation} \label{eq.tree.convex.ffff.latengo}
f (x) \leq \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{f(y) +f(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ f(\hat{x}) + m f(y)}{m+1}
\right\}\quad\text{on } CS(f),
\end{equation}
while outside the coincidence set the convex envelope, $u^*_f$, is a solution of the equation, i.e., it holds
\begin{equation} \label{eq.tree.convex.ffff.latengo.mmmm}
u^*_f (x) = \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u^*_f(y) +u^*_f(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u^*_f(\hat{x}) + m u^*_f(y)}{m+1}
\right\}\quad\text{on }\mathbb{T}_m\setminus CS(f).
\end{equation}
\end{teo}
On the other hand, in the arborescence (directed) tree (i.e., the tree defined as before
but adding a direction to the edges in such a way that no two edges are directed to
the same vertex and the root is the unique vertex that has no edge directed into it), the
Laplacian is defined as the mean value over the successors minus the value at the vertex, that is,
\[
\Delta u (x) \coloneqq \frac1m \sum_{y\in\S(x)} u(y) - u(x)
\quad\forall x\in\mathbb{T}_m,
\]
see \cite{KLW}.
Now, to obtain an interpretation of this Laplacian
as the sum of eigenvalues of the Hessian as we did before,
we just have to change the notion of convexity.
We now need to introduce some extra notation.
Given $x\in\mathbb{T}_m,$ $\mathbb{T}_{2}^{x}$ denotes
the set of all subgraphs $\mathbb{B}$ that are formed from a finite subset
of the vertices of $\mathbb{T}_m$ and such that $x\in\mathbb{B},$
$\S(x)\cap\mathbb{B}$ has two elements and
for any $y\in\mathbb{B}\setminus\{x\},$
\begin{itemize}
\item $|y|>|x|;$
\item either $\S(y)\cap\mathbb{B}=\emptyset$
or $\S(y)\cap\mathbb{B}$ has exactly two elements.
\end{itemize}
We say that $y\in \mathbb{B}$ is an endpoint
of $\mathbb{B}$ if $\S(y)\cap\mathbb{B}=\emptyset.$ The set of
all endpoints of $\mathbb{B}$ is denoted $\mathcal{E}(\mathbb{B}).$
That is, $\mathbb{B}$ is just a finite binary subtree of $\mathbb{T}_m$.
\begin{center}
\pgfkeys{/pgf/inner sep=0.18em}
\begin{forest}
[$\emptyset$,
[$x$, circle,fill=green,draw =orange
[0[0][1][2]]
[1, circle,fill=green,draw =orange [0][1][2]]
[2, circle,fill=green,draw =orange
[0, circle,fill=green,draw =orange]
[1, circle,fill=green,draw =orange][2]]]
[1[0[0][1][2]][1[0][1][2]][2[0][1][2]]]
[2[0[0][1][2]][1[0][1][2]][2[0][1][2]]]
]
\end{forest}
An element of $\mathbb{T}_{2}^{x}.$ Root: $x$. Endpoints: $(x,1),$ $(x,2,0),$ and
$(x,2,1).$
\end{center}
Our second notion of convexity is the following: a function $u:\mathbb{T}_m \to \mathbb{R}$ is called binary convex if
for any $x\in\mathbb{T}_m$
\begin{equation}\label{convex.II}
u(x) \leq \sum_{y\in\mathcal{E}(\mathbb{B})}\dfrac1{2^{|y|-|x|}} u(y)
\quad \forall \mathbb{B}\in\mathbb{T}_{2}^{x}.
\end{equation}
In this notion of convexity, a segment is $\mathbb{B}$, a finite binary subgraph of $\mathbb{T}_m$; a midpoint
is the root of this subgraph $\mathbb{B}$; and the convexity property says that the value of the function
at the midpoint is less than or equal to the weighted mean of the values at the endpoints.
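Note that the weights $1/2^{|y|-|x|}$ in \eqref{convex.II} always sum to $1$ over the endpoints of $\mathbb{B}$, since replacing an endpoint by its two chosen successors splits its weight in half. The following Python sketch (an illustration; the nested-pair encoding of $\mathbb{B}$ is our assumption) collects the endpoint weights of a binary subtree.

```python
from fractions import Fraction

def endpoint_weights(tree):
    """Weights 1/2^(|y|-|x|) of the endpoints of a finite binary subtree B
    rooted at x.  `tree` is a nested pair: None marks an endpoint, and a
    tuple (left, right) marks a vertex whose two chosen successors are
    expanded further.  The root itself is always a pair."""
    def walk(node, depth):
        if node is None:                        # endpoint at relative depth
            return [Fraction(1, 2 ** depth)]
        left, right = node                      # two chosen successors
        return walk(left, depth + 1) + walk(right, depth + 1)
    return walk(tree, 0)
```

For instance, the subtree that expands the root, then one grandchild, has endpoint weights $1/4,1/8,1/8,1/2$, which sum to $1$.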
Associated to this new version of convexity, we have a convex envelope
of a bounded boundary datum $g$, defined as in \eqref{convex-envelope-arbol}: that is,
we define the binary convex envelope of $g$ on $\mathbb{T}_m$ as
\[
\tilde{u}_g(x) \coloneqq
\sup \left\{u(x) \colon u\in\mathcal{B}(g)
\right\}
\]
where
\[
\mathcal{B}(g)\coloneqq \left\{
u\colon \mathbb{T}_m\to\mathbb{R}\colon
u \text{ is a binary convex function on } \mathbb{T}_m,
\limsup_{x\to \pi\in \partial \mathbb{T}_m} u(x)\leq g (\psi(\pi)) \right\}.
\]
For this notion of binary convex envelope, we also have an equation
(simpler than with the previous notion).
\begin{teo} \label{teo.convex.arbol.II}
The binary convex envelope of a bounded boundary datum $g$
is the largest solution to
\begin{equation} \label{eq.tree.convex.II}
u(x) =
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) + u(z)}2 \quad\text{on }\mathbb{T}_m,
\end{equation}
that verifies \eqref{eq:bordecond}.
\end{teo}
Again, when $g$ is continuous we have a unique solution
to the equation that attains the boundary datum continuously.
\begin{teo} \label{teo.convex.arbol.eq.II}
Given a continuous boundary datum $g$, there exists a unique solution to
\eqref{eq.tree.convex.II} that verifies \eqref{eq:bordecond.77}.
\end{teo}
In this case, writing \eqref{eq.tree.convex.II} as
\[
0 = \min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\left\{ \frac12
u(y) + \frac12 u(z) - u(x) \right\},
\]
we can identify the analogue of the eigenvalues of the Hessian, which in
this case are given by
\begin{equation} \label{uu.II}
\left\{
\frac12 u(x,i) + \frac12 u(x,j) - u(x)
\right\}_{i < j} .
\end{equation}
Then, adding the eigenvalues in \eqref{uu.II}, we obtain
\begin{equation} \label{eq.Laplaciano.tree.II}
0= \frac1{m}\sum_{y\in \S(x)} u(y) - u(x).
\end{equation}
Notice that this is the usual Laplacian in the arborescence tree studied in \cite{KLW}.
For this notion of convexity we can also deal with the problem of the convex envelope
of a function $f:\mathbb{T}_m \to \mathbb{R}$ defined as in \eqref{convex-envelope-arbol.f}. In this case
we also find that this convex envelope can be characterized as the solution to the obstacle problem
for the associated equation, \eqref{eq.tree.convex.II}, and prove a theorem analogous to
Theorem \ref{teo.convex.arbol.f} for this case. Once we have proved Theorem \ref{teo.convex.arbol.II}, the proof of this result is similar
to the one of Theorem \ref{teo.convex.arbol.f} and hence we leave the statement and the proof to the reader.
To end this introduction, let us give a natural generalization of
the notion of binary convexity. Given $k\in\{2,\dots,m-2\}$ and $x\in\mathbb{T}_m,$
$\mathbb{T}_{2}^{x k}$ denotes the set of all subgraphs $\mathbb{B}$ that are formed from a finite subset
of vertices in $\mathbb{T}_m$ and such that, $x\in\mathbb{B},$
$\S(x)\cap\mathbb{B}$ has exactly $k$ elements and
for any $y\in\mathbb{B}\setminus\{x\},$
\begin{itemize}
\item $|y|>|x|;$
\item either $\S(y)\cap\mathbb{B}=\emptyset$
or $\S(y)\cap\mathbb{B}$ has exactly $k$ elements.
\end{itemize}
Observe that $\mathbb{T}_{2}^{x 2}=\mathbb{T}_{2}^{x}.$
As before, we denote by $\mathcal{E}(\mathbb{B})$ the set of endpoints, i.e., the set of points
$y\in \mathbb{B}$ such that $\S(y)\cap\mathbb{B}=\emptyset.$
A function $u:\mathbb{T}_m \to \mathbb{R}$ is called $k-$convex if
for any $x\in\mathbb{T}_m$
\[
u(x) \leq \sum_{y\in\mathcal{E}(\mathbb{B})}\dfrac1{k^{|y|-|x|}} u(y)
\quad \forall \mathbb{B}\in\mathbb{T}_{2}^{x k}.
\]
As before, associated to this version of convexity, we have a convex envelope
of a bounded boundary datum $g$ that we will call the $k-$convex envelope of
$g.$ Following the proof of Theorems \ref{teo.convex.arbol.II} and
\ref{teo.convex.arbol.eq.II}, we can show that the $k-$convex envelope
of $g$ is the largest solution to
\begin{equation}\label{eq.tree.convex.III}
u(x) =
\min_{\substack{x_1,\dots,x_k\in\mathcal{S}(x)\\ x_i\neq x_j}}
\frac1k\sum_{i=1}^k u(x_i) \quad\text{on }\mathbb{T}_m,
\end{equation}
that verifies \eqref{eq:bordecond}.
Moreover, if $g$ is a continuous function then the $k-$convex envelope
of $g$ is the unique solution to \eqref{eq.tree.convex.III} that verifies
\eqref{eq:bordecond.77}.
In this case, writing \eqref{eq.tree.convex.III} as
\[
0 = \min_{\substack{x_1,\dots,x_k\in\mathcal{S}(x)\\ x_i\neq x_j}}
\left\{ \frac1k\sum_{i=1}^k
u(x_i) - u(x) \right\},
\]
we identify the analogue of the eigenvalues of the Hessian,
\begin{equation} \label{uu.III}
\left\{
\frac1k\sum_{i=1}^k
u(x,j_i) - u(x),
\right\}_{j_i < j_{i+1}}.
\end{equation}
Adding the eigenvalues in \eqref{uu.III}, we obtain again
\eqref{eq.Laplaciano.tree.II}, the usual Laplacian on the arborescence tree.
\bigskip
{\bf Organization of the paper.}
In Section \ref{sect-convex},
we prove general results for convex functions;
in Section \ref{sect-convex-envelopes}, we prove
our main result for the convex envelope; in Sections
\ref{section.biconvfunction} and \ref{sect-biconvex-envelopes},
we extend the results of the previous sections to the notion of binary convexity.
\section{Convex functions}\label{sect-convex}
We begin this section by showing a different characterization of convex functions.
\begin{lem}\label{lema:convexeq}
A function $u$ on the tree is convex
if and only if $u$ satisfies
\begin{equation} \label{subsol1}
u (x) \le \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m u(y)}{m+1}
\right\} \qquad \forall x \in \mathbb{T}_m.
\end{equation}
\end{lem}
\begin{proof}
Let $u$ be a convex function. Our goal is to show that \eqref{subsol1} holds.
Given $x$ for any $y,z\in\S(x)$ with $y\neq z$
we have that
\[
u(x)\le\dfrac12 u(y)+\dfrac12 u(z)
\]
due to the fact that $u$ is a convex function (just take $x\in[y,z],$
$d(y,z)=\tfrac2{m^{|x|+1}},$ and
$d(y,x)=d(z,x)=\tfrac1{m^{|x|+1}}$ in the definition of convexity).
Then, we get
\[
u (x) \le \min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2
\]
for any $x\in\mathbb{T}_m.$
Now, given $x\in \mathbb{T}_m$ for any $y\in \S(x)$
\[
u(x)\le\dfrac{m}{m+1} u(y)+\dfrac{1}{m+1} u(\hat{x})
\]
again due to the fact that $u$ is a convex function (in this case, take $x\in[\hat{x},y],$
$d(\hat{x},y)=\tfrac{m+1}{m^{|x|+1}},$
$d(\hat{x},x)=\tfrac{1}{m^{|x|}},$
and
$d(y,x)=\tfrac1{m^{|x|+1}}$). Thus
\[
u (x) \le
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m u(y)}{m+1}
\]
for any $x\in\mathbb{T}_m.$
Therefore, we have that if $u$ is a convex function then
$u$ satisfies \eqref{subsol1}.
To see the converse, let $u$ be a function defined on the tree that verifies \eqref{subsol1} at every node, and let $x,y\in \mathbb{T}_m.$
We begin by analyzing the case that $[x,y]$ is a ``straight
line", that is the vertices $x_0,\dots,x_N$ of $[x,y]$
are such that $x_0=x,$ $x_N=y,$ $\hat{x}_i=x_{i+1}$
for any $i\in\{0,\dots,N-1\}.$ More precisely, first
we show that if $[x,y]$ is a ``straight
line" then
\begin{equation}
\label{eq:auxenvol1}
u(x_i)\le \dfrac{d(x_i,x_0)}{d(x_N,x_0)} u(x_N) +
\dfrac{d(x_i,x_N)}{d(x_N,x_0)} u(x_0) \quad
\forall i\in\{0,\dots,N\}.
\end{equation}
We proceed by induction on $N$. When $N=2,$ by \eqref{subsol1}, we
have
\[
u(x_1)\le\dfrac{u(x_2)+mu(x_0)}{m+1}=
\dfrac{d(x_1,x_0)}{d(x_2,x_0)} u(x_2) +
\dfrac{d(x_1,x_2)}{d(x_2,x_0)} u(x_0)
\]
since $d(x_1,x_2)=\tfrac{1}{m^{|x_1|}},$
$d(x_1,x_0)=\tfrac{1}{m^{|x_0|}}=\tfrac{1}{m^{|x_1|+1}}$
and $d(x_2,x_0)=\tfrac{m+1}{m^{|x_1|+1}}.$
Suppose now that \eqref{eq:auxenvol1} holds, for $N>2,$ for every
straight line with vertices $x_0,\dots,x_{N-1}.$
Then,
\[
u(x_1)\le
\dfrac{d(x_1,x_{N-1})}{d(x_{N-1},x_0)} u(x_0)
+\dfrac{d(x_1,x_{0})}{d(x_{N-1},x_0)} u(x_{N-1})
\]
and
\[
u(x_{N-1})\le
\dfrac{d(x_1,x_{N-1})}{d(x_{N},x_1)} u(x_N)
+\dfrac{d(x_{N},x_{N-1})}{d(x_{N},x_1)} u(x_{1}).
\]
Thus,
\begin{align*}
u(x_1)\le \dfrac{d(x_1,x_{N-1})}{d(x_{N-1},x_0)}u(x_0)
&+\dfrac{d(x_1,x_{0})}{d(x_{N-1},x_0)}
\dfrac{d(x_1,x_{N-1})}{d(x_{N},x_1)} u(x_N)\\
& \qquad + \dfrac{d(x_1,x_{0})}{d(x_{N-1},x_0)}
\dfrac{d(x_{N},x_{N-1})}{d(x_{N},x_1)} u(x_{1}).
\end{align*}
Therefore,
\begin{align*}
&d(x_1,x_{N-1})\left[d(x_1,x_{N})u(x_0)
+d(x_1,x_0)u(x_N)\right]\\[10pt] & \ge \left[d(x_{N-1},x_0)d(x_1,x_N)-d(x_1,x_0)d(x_{N-1},x_N)
\right]u(x_1)\\[10pt]
&\ge\left[
\left\{
d(x_{N},x_0)-d(x_N,x_{N-1})
\right\}
\left\{
d(x_N,x_0)-d(x_1,x_0)
\right\}
-d(x_1,x_0)d(x_{N-1},x_N)
\right]u(x_1)\\[10pt]
&\ge d(x_{N},x_0)
\left[ d(x_{N},x_0)
-d(x_N,x_{N-1})-d(x_1,x_0)
\right]u(x_1)\\[10pt]
&\ge d(x_{N},x_0)d(x_{1},x_{N-1})u(x_1).
\end{align*}
Then
\[
u(x_1)\le \dfrac{d(x_1,x_0)}{d(x_N,x_0)} u(x_N) +
\dfrac{d(x_1,x_N)}{d(x_N,x_0)} u(x_0).
\]
In a similar manner, we get
\begin{equation}
\label{eq:auxenvol2}
u(x_{N-1})\le \dfrac{d(x_{N-1},x_0)}{d(x_{N},x_0)}
u(x_N) +\dfrac{d(x_{N-1},x_N)}{d(x_N,x_0)} u(x_0).
\end{equation}
If $z\in [x,y]\setminus\{x_0,x_1,x_{N-1},x_N\},$ by
the inductive hypothesis and \eqref{eq:auxenvol2}, we have
\begin{align*}
u(z)&\le\dfrac{d(z,x_{N-1})}{d(x_{N-1},x_0)} u(x_0)
+\dfrac{d(z,x_{0})}{d(x_{N-1},x_0)} u(x_{N-1})\\[10pt]
&\le \dfrac{d(z,x_0)}{d(x_N,x_0)}u(x_N)
+\dfrac{d(z,x_{N-1})d(x_N,x_0)+d(z,x_0)d(x_{N-1},x_N)}
{d(x_{N-1},x_0)d(x_N,x_0)}u(x_0)\\[10pt]
&\le \dfrac{d(z,x_0)}{d(x_N,x_0)}u(x_N)\\[10pt]
&\quad +\dfrac{\left[d(z,x_{N})-d(x_{N-1},x_N)\right]d(x_N,x_0)+
\left[d(x_N,x_0)-d(z,x_N)\right]d(x_{N-1},x_N)}
{d(x_{N-1},x_0)d(x_N,x_0)}u(x_0)\\[10pt]
&\le \dfrac{d(z,x_0)}{d(x_N,x_0)}u(x_N)
+d(z,x_N)
\dfrac{d(x_N,x_0)-d(x_{N-1},x_N)}
{d(x_{N-1},x_0)d(x_N,x_0)}u(x_0)\\[10pt]
&\le
\dfrac{d(z,x_0)}{d(x_N,x_0)}u(x_N)
+\dfrac{d(z,x_{N})}{d(x_N,x_0)}u(x_0),
\end{align*}
showing that indeed \eqref{eq:auxenvol1} holds
when $[x,y]$ is a straight line.
When $[x,y]$ is not a straight line, there is
a $z\in[x,y]$ such that $[x,z]$ and $[z,y]$ are
straight lines. Observe that $[x,y]=[x,z]\cup[z,y]$ and
$\S(z)\cap[x,y]=\{w_1,w_2\}.$ We can assume that
$w_1\in [x,z]$ and $w_2\in[z,y].$
Thus, from \eqref{subsol1} and \eqref{eq:auxenvol1}, we have
\begin{align*}
2u(z)&\le u(w_1)+ u(w_2)\\
&\le \left[\dfrac{d(w_1,x)}{d(x,z)}
+\dfrac{d(w_2,y)}{d(y,z)}
\right]u(z)+\dfrac{d(w_1,z)}{d(x,z)}u(x)
+\dfrac{d(w_2,z)}{d(y,z)} u(y).
\end{align*}
Then,
\[
\dfrac{2d(x,z)d(y,z)-d(w_1,x)d(z,y)-d(w_2,y)d(z,x)}
{d(z,x)d(z,y)}u(z)\le
\dfrac{d(w_1,z)}{d(x,z)}u(x)
+\dfrac{d(w_2,z)}{d(y,z)} u(y).
\]
Since $d(w_1,z)=d(w_2,z),$ we get
\begin{align*}
d(w_1,z)&\left[d(y,z)u(x)+d(x,z) u(y)\right]\\[10pt]
&\ge
\left[
2d(x,z)d(y,z)-d(w_1,x)d(z,y)-d(w_2,y)d(z,x)
\right]u(z)\\[10pt]
&\ge
\left\{
2d(x,z)d(y,z)-\left[d(x,z)-d(w_1,z)\right]d(z,y)-
\left[d(y,z)-d(w_2,z)\right]d(z,x)
\right\}u(z)\\[10pt]
&\ge d(w_1,z)\left[d(z,y)+d(z,x)\right] u(z)\\
&\ge d(w_1,z)d(x,y) u(z).
\end{align*}
Therefore, we obtain
\[
u(z)\le \dfrac{d(x,z)}{d(x,y)}u(y)+\dfrac{d(y,z)}{d(x,y)}u(x).
\]
If $w\in[x,y]\setminus\{x,z,y\}$ then $w\in[x,z]$ or
$w\in[z,y].$ In the case that $w\in[x,z],$ since
$[x,z]$ is a straight line we have
\begin{align*}
u(w)&\le \dfrac{d(x,w)}{d(x,z)}u(z)+\dfrac{d(z,w)}{d(x,z)}u(x)\\[10pt]
&\le \dfrac{d(x,w)}{d(x,y)}u(y)+
\left[\dfrac{d(x,w)d(y,z)}{d(x,z)d(x,y)}+\dfrac{d(z,w)}{d(x,z)}
\right]u(x)\\[10pt]
&\le \dfrac{d(x,w)}{d(x,y)}u(y)+
\dfrac{d(x,w)d(y,z)+d(z,w)d(x,y)}{d(x,z)d(x,y)}u(x)\\[10pt]
&\le \dfrac{d(x,w)}{d(x,y)}u(y)+
\dfrac{\left[d(x,y)-d(y,w)\right]d(y,z)+
d(z,w)d(x,y)}{d(x,z)d(x,y)}u(x)\\[10pt]
&\le \dfrac{d(x,w)}{d(x,y)}u(y)+
\dfrac{\left[d(y,z)+d(z,w)\right]d(x,y)-
d(y,w)d(y,z)}{d(x,z)d(x,y)}u(x)\\[10pt]
&\le \dfrac{d(x,w)}{d(x,y)}u(y)+
\dfrac{d(y,w)\left[d(x,y)-d(y,z)\right]}{d(x,z)d(x,y)}u(x)\\[10pt]
&\le \dfrac{d(x,w)}{d(x,y)}u(y)+
\dfrac{d(y,w)}{d(x,y)}u(x).
\end{align*}
In the case that $w\in [z,y]$ the proof is similar.
Therefore, we conclude that a function $u$ that verifies
\eqref{subsol1} is a convex function in $\mathbb{T}_m$.
\end{proof}
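The characterization \eqref{subsol1} is easy to test numerically on a truncated tree. A minimal Python sketch under our own conventions (vertices encoded as digit tuples, root the empty tuple, and the predecessor term skipped at the root, where $\hat{x}$ is undefined; none of this is prescribed by the text):

```python
from itertools import combinations, product

def satisfies_subsol1(u, m, depth, tol=1e-12):
    """Check u(x) <= min{ min pair-average of successors ;
    min_y (u(parent) + m*u(y))/(m+1) } at every node of the truncation
    whose successors still lie inside the truncation."""
    for d in range(depth):
        for x in product(range(m), repeat=d):
            succ = [x + (j,) for j in range(m)]
            best = min((u(y) + u(z)) / 2 for y, z in combinations(succ, 2))
            if x:  # the root has no predecessor, so skip that term there
                best = min(best, min((u(x[:-1]) + m * u(y)) / (m + 1)
                                     for y in succ))
            if u(x) > best + tol:
                return False
    return True
```

For instance, $u(x)=|x|$ passes both terms of the test, while $u(x)=-|x|$ already violates the pair-average term at the root.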
Our second result shows that the sum of two convex functions is also
a convex function.
\begin{co}\label{co:sumconv}
Let $u,v\colon \mathbb{T}_m\to\mathbb{R}$ be convex functions. Then
$u+v$ is a convex function.
\end{co}
\begin{proof}
Since $u$ and $v$ are convex functions, by Lemma \ref{lema:convexeq},
for any $x\in\mathbb{T}_m$ we have that
\begin{align*}
&u (x) + v(x) \le
\min \left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m u(y)}{m+1}
\right\}\\
&\hspace{5cm}+\min \left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{v(y) +v(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ v(\hat{x}) + m v(y)}{m+1}
\right\}\\
&\le\min \left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2+\frac{v(y) +v(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m u(y)}{m+1}+
\frac{ v(\hat{x}) + m v(y)}{m+1}
\right\}.
\end{align*}
Therefore, by Lemma \ref{lema:convexeq}, $u+v$ is a convex function.
\end{proof}
It is immediate to check that the constant function $u=1$ is a convex
function such that
\[
\lim_{x\to\pi}u(x)=\mbox{\Large$\chi$}_{[0,1]}(\psi(\pi)) \quad
\forall\pi\in\partial\mathbb{T}_m.
\]
We now show that for any $x_0\in\mathbb{T}_m\setminus\{\emptyset\}$
there is a convex function $u$ such that
\[
\limsup_{x\to\pi}u(x)\le \mbox{\Large$\chi$}_{I_{x_0}}
(\psi(\pi)) \quad\forall\pi\in\partial\mathbb{T}_m.
\]
Here $I_{x_0}$ is the interval associated to the vertex $x_0$ of length $\tfrac{1}{m^{|x_0|}}$
given by
\[
I_{x_0}\coloneqq\left[\psi(x_0),\psi(x_0)+\frac1{m^{|x_0|}}\right].
\]
Observe that for $x_0\in \mathbb{T}_m$, $I_{x_0} \cap \partial\mathbb{T}_m$ is the
subset of $\partial\mathbb{T}_m$ consisting of all branches that pass through $x_0$.
To find such a convex function we introduce the following set: given $x_0\in\mathbb{T}_m$, let us consider
\[
\mathbb{T}_m^{x_{0}}\coloneqq
\{x\in\mathbb{T}_m \colon |x|\ge |x_0|,I_x\subset I_{x_0}\}.
\]
\begin{lem}\label{lema:caract}
Let $x_0\in\mathbb{T}_m\setminus\{\emptyset\}.$
Then the function $u_{x_0}\colon\mathbb{T}_m\to\mathbb{R}$
\[
u_{x_0}(x)\coloneqq \frac{m-1}m
\begin{cases}
0 &\text{if } x\not\in\mathbb{T}_m^{x_0},\\
\displaystyle\sum_{i=0}^{|x|-|x_0|}\frac1{m^i}
&\text{if } x\in\mathbb{T}_m^{x_0},
\end{cases}
\]
is a convex function such that
\[
\limsup_{x\to\pi}u_{x_0}(x)
\le
\mbox{\Large$\chi$}_{I_{x_0}}(\psi(\pi)) \quad\forall
\pi\in\partial\mathbb{T}_m.
\]
Moreover,
\[
u_{x_0}(x)=\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u_{x_0}(y) +u_{x_0}(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u_{x_0}(\hat{x}) + m u_{x_0}(y)}{m+1}
\right\} \quad\forall x\in\mathbb{T}_m ,
\]
and for any $\pi\in\partial\mathbb{T}_m$ such that $\psi(\pi)$ is not one of the two
endpoints of $I_{x_0}$ we have
\[
\lim_{x\to\pi}u_{x_0}(x)
=\mbox{\Large$\chi$}_{I_{x_0}}(\psi(\pi)).
\]
\end{lem}
\begin{proof}
Let us start by showing that the function
$u_{x_0}$ is convex. By Lemma \ref{lema:convexeq},
it is enough to show that $u_{x_0}$ satisfies \eqref{subsol1}.
If $x\in \mathbb{T}_m\setminus \mathbb{T}_m^{x_0}$ then there exist
$y,z\in \S(x)$ with $y\neq z$ such that $u_{x_0}(y)=u_{x_0}(z)=0$.
So, we have
\begin{equation}
\label{eq:caract1}
\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u_{x_0}(y) +u_{x_0}(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u_{x_0}(\hat{x}) + m u_{x_0}(y)}{m+1}
\right\}=0=u_{x_0}(x).
\end{equation}
If $x=x_0$, then $u_{x_0}(\hat{x}_0)=0$ and
$u_{x_0}(y)=\tfrac{m-1}m(1+\tfrac1m)$
for any $y\in\S(x_0).$ Therefore
\begin{equation}
\label{eq:caract2}
\begin{aligned}
& \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x_0)\\ y\neq z}}
\frac{u_{x_0}(y) +u_{x_0}(z)}2 ;
\min_{y\in\mathcal{S}(x_0)}
\frac{ u_{x_0}(\hat{x}) + m u_{x_0}(y)}{m+1}
\right\}
\\[10pt]
& \qquad =\frac{m-1}m\min\left\{
1+\frac1m;
1\right\}\\[10pt]
&\qquad =\frac{m-1}m\\[10pt]
&\qquad =u_{x_0}(x_0).
\end{aligned}
\end{equation}
Now, suppose that $x\in\mathbb{T}_m^{x_0}\setminus\{x_0\}$, and so,
\[
u_{x_0}(\hat{x})=\frac{m-1}m \sum_{i=0}^{|x|-1-|x_0|}\dfrac1{m^i}
\]
and
\[
u_{x_0}(y)=\frac{m-1}m \sum_{i=0}^{|x|+1-|x_0|}\dfrac1{m^i}.
\]
Hence, we obtain
\begin{equation}
\label{eq:caract3}
\begin{aligned}
\min
\Bigg\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u_{x_0}(y) +u_{x_0}(z)}2 &;
\min_{y\in\mathcal{S}(x)}
\frac{ u_{x_0}(\hat{x}) + m u_{x_0}(y)}{m+1}
\Bigg\}\\[10pt]
&=\frac{m-1}m\min\left\{
\sum_{i=0}^{|x|+1-|x_0|}\dfrac1{m^i};
\sum_{i=0}^{|x|-|x_0|}\dfrac1{m^i}\right\}\\[10pt]
&=\frac{m-1}m \sum_{i=0}^{|x|-|x_0|}\dfrac1{m^i}\\[10pt]
&=u_{x_0}(x).
\end{aligned}
\end{equation}
Therefore, by \eqref{eq:caract1}, \eqref{eq:caract2} and \eqref{eq:caract3} we get that
\[
u_{x_0}(x)=\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u_{x_0}(y) +u_{x_0}(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u_{x_0}(\hat{x}) + m u_{x_0}(y)}{m+1}
\right\} \quad\forall x\in\mathbb{T}_m.
\]
Thus, $u_{x_0}$ is a convex function.
Finally, we have to show that
\[
\limsup_{x\to\pi}u_{x_0}(x)
\le
\mbox{\Large$\chi$}_{I_{x_0}}(\psi(\pi)) \quad\forall
\pi\in\partial\mathbb{T}_m.
\]
{\it Case 1}. If $\pi\in\partial\mathbb{T}_m,$ $\psi(\pi)\in I_{x_0}$
and $\psi(\pi)$ is not an endpoint of $I_{x_0}$ then
for any sequence $\{x_k\}_{k\in\mathbb{N}}$ in $\mathbb{T}_m$ such that
$\pi=(x_1,\dots,x_k,\dots)$, there is $k_0\in\mathbb{N}$ such that
$x_k\in \mathbb{T}_m^{x_0}$ for all $k\ge k_0.$
Then
\[
u_{x_0}(x_k)=\frac{m-1}m
\sum_{i=0}^{|x_k|-|x_0|}\frac1{m^i}\quad \forall k\ge k_0.
\]
Thus, as $k\to\infty$ we have
\[
u_{x_0}(x_k)\to 1=\mbox{\Large$\chi$}_{I_{x_0}}(\psi(\pi)).
\]
{\it Case 2}.
Similarly, if $\pi\in\partial\mathbb{T}_m,$ $\psi(\pi)\not\in I_{x_0}$
then for any sequence $\{x_k\}_{k\in\mathbb{N}}$ on $\mathbb{T}_m$ such that
$\pi=(x_1,\dots,x_k,\dots)$, we get
\[
u_{x_0}(x_k)\to 0=\mbox{\Large$\chi$}_{I_{x_0}}(\psi(\pi))
\]
as $k\to\infty.$
{\it Case 3}.
Finally suppose that $\pi\in\partial\mathbb{T}_m,$ $\psi(\pi)\in I_{x_0}$
and $\psi(\pi)$ is an endpoint of $I_{x_0}.$
In the case that $\psi(\pi)=0$ or $\psi(\pi)=1$
for any sequence $\{x_k\}_{k\in\mathbb{N}}$ on $\mathbb{T}_m$ such that
$\pi=(x_1,\dots,x_k,\dots)$, there is $k_0\in\mathbb{N}$ such that
$x_k\in \mathbb{T}_m^{x_0}$ for all $k\ge k_0.$ Therefore
\[
u_{x_0}(x_k)\to 1=\mbox{\Large$\chi$}_{I_{x_0}}(\psi(\pi))
\]
as $k\to\infty.$
In the case that $\psi(\pi)\not\in\{0,1\},$ there exist two branches
$\pi=(x_1,\dots,x_k,\dots)$ and $\pi'=(y_1,\dots,y_k,\dots)$ with
$\psi(\pi)=\psi(\pi'),$ and $k_0\in\mathbb{N}$ such that
$x_k\in \mathbb{T}_m^{x_0}$ and $y_k\not\in \mathbb{T}_m^{x_0}$
for all $k\ge k_0.$ Therefore,
\begin{align*}
&u_{x_0}(x_k)\to 1=\mbox{\Large$\chi$}_{I_{x_0}}(\psi(\pi)),\\
&u_{x_0}(y_k)\to 0\le\mbox{\Large$\chi$}_{I_{x_0}}(\psi(\pi)).
\end{align*}
This fact, together with the previous cases 1 and 2, completes the proof.
\end{proof}
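The explicit function $u_{x_0}$ of Lemma \ref{lema:caract} is straightforward to implement; a short Python sketch under the same tuple encoding of vertices (our choice, not from the text), where $x\in\mathbb{T}_m^{x_0}$ exactly when $x_0$ is a prefix of $x$:

```python
def u_x0(x, x0, m):
    """The explicit convex function of the lemma: 0 off the subtree below
    x0, and ((m-1)/m) * sum_{i=0}^{|x|-|x0|} m**(-i) on it.  Vertices x, x0
    are tuples of digits in range(m)."""
    if len(x) < len(x0) or x[:len(x0)] != x0:
        return 0.0  # x is not in T_m^{x0}
    return (m - 1) / m * sum(m ** -i for i in range(len(x) - len(x0) + 1))
```

At $x=x_0$ the value is $\tfrac{m-1}m$, and along any branch through $x_0$ the geometric partial sums drive the value to $1$, matching the limit stated in the lemma.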
\section{Convex envelopes}\label{sect-convex-envelopes}
\setcounter{equation}{0}
In this section we deal with convex envelopes on the tree.
Let us start by showing that the convex envelope $u^*_g$
of a function $g\colon[0,1]\to\mathbb{R}$, defined in \eqref{convex-envelope-arbol}, is a convex function.
\begin{lem}
\label{lema:convenv1}
For any function $g\colon[0,1]\to\mathbb{R},$
the convex envelope $u_g^*$ is a convex function.
\end{lem}
\begin{proof} This follows easily from the fact that the supremum of convex functions is also convex.
Given $g\colon[0,1]\to\mathbb{R},$ for every function $u\in\mathcal{C}(g)$
it holds that
\[
u(z) \leq \frac{d(y,z)}{d(x,y)} u(x) +
\frac{d(x,z)}{d(x,y)} u(y)\le
\frac{d(y,z)}{d(x,y)} u^*_g(x) +
\frac{d(x,z)}{d(x,y)} u^*_g(y)
\]
for any $x,y,z\in \mathbb{T}_m$ with $z\in[x,y].$ Hence we get
\[
u^*_g(z) \le \frac{d(y,z)}{d(x,y)} u^*_g(x) +
\frac{d(x,z)}{d(x,y)} u^*_g(y)
\]
for any $x,y,z\in \mathbb{T}_m$ with $z\in[x,y].$
Thus $u^*_g$ is a convex function.
\end{proof}
Our second aim is to show that if $g$ is a continuous
function then
\begin{equation}
\label{eq:limb}
\lim_{x\to \pi\in \partial \mathbb{T}_m} u_g^* (x) =
g(\psi(\pi))\quad\forall\pi\in\partial\mathbb{T}_m.
\end{equation}
To prove this property, we need to show a comparison principle.
\begin{lem} \label{lema.compar.convexas}
Let $u$ and $v$ satisfy
\begin{align}
\label{eq.envolvente.44.compar.u}
u(x) &\ge
\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m u(y)}{m+1}
\right\} \quad\forall x\in\mathbb{T}_m,\\
\label{eq.envolvente.44.compar.v}
v(x) &\le
\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{v(y) +v(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ v(\hat{x}) + m v(y)}{m+1}
\right\}\quad\forall x\in\mathbb{T}_m,
\end{align}
with
\[
\lim_{x\to \pi\in \partial \mathbb{T}_m}
u (x) \geq \lim_{x\to \pi\in \partial \mathbb{T}_m} v (x) ,
\]
for every $\pi\in \partial \mathbb{T}_m$. Then,
\[
u(x) \geq v(x) \qquad \forall x\in \mathbb{T}_m.
\]
\end{lem}
\begin{proof}
Adding a positive constant $c$ to $u,$
we may assume that
\begin{equation} \label{estric}
\lim_{x\to \pi\in \partial \mathbb{T}_m} u (x) >
\lim_{x\to \pi\in \partial \mathbb{T}_m} v (x) .
\end{equation}
Our goal is to show that in this case we have
\[
u(x) \geq v(x) \quad \forall x\in\mathbb{T}_m
\]
(and then we obtain the result
just by letting $c\to 0$).
We argue by contradiction and assume that
\[
M = \max_{x\in \mathbb{T}_m} ( v(x)-u(x) ) >0.
\]
Notice that the maximum is attained thanks to
\eqref{estric}. Also thanks to \eqref{estric},
we have that $M$ is attained only in a finite set of nodes. Let $x$ be one such node.
From \eqref{eq.envolvente.44.compar.u} and \eqref{eq.envolvente.44.compar.v} we obtain
\begin{align*}
M= v(x) - u(x) \le&
\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{v(y) +v(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ v(\hat{x}) + m v(y)}{m+1}
\right\}\\
&-\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m u(y)}{m+1}
\right\}.
\end{align*}
From this inequality, using that
$$
M\geq \left(\frac{v(y) +v(z)}2 \right) - \left( \frac{u(y) +u(z)}2 \right) \qquad \forall y,z\in\mathcal{S}(x),\ y\neq z,
$$
and
$$
M\geq \left(\frac{ v(\hat{x}) + m v(y)}{m+1} \right) - \left( \frac{ u(\hat{x}) + m u(y)}{m+1} \right) \qquad \forall
y\in\mathcal{S}(x),
$$
we get
\begin{align*}
M \le&
\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{v(y) +v(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ v(\hat{x}) + m v(y)}{m+1}
\right\}\\
&-\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m u(y)}{m+1}
\right\} \leq M.
\end{align*}
Hence, we obtain that there are two nodes $x_1$ and
$x_2$ connected with $x$ (one of them can be the predecessor)
such that
\[
v(x_1)-u(x_1) = M,\qquad \text{and} \qquad
v(x_2)-u(x_2) = M.
\]
Since this happens for every $x$ in the set of maxima of $v-u$
and this set is finite, we obtain a contradiction that shows that
\[
u(x) \geq v(x),
\]
and proves the result.
\end{proof}
Now we will prove \eqref{eq:limb}.
\begin{teo}\label{teo:limb}
Let $g\colon[0,1]\to\mathbb{R}$ be a continuous function.
Then
\[
\lim_{x\to \pi\in \partial \mathbb{T}_m} u_g^* (x) =
g(\psi(\pi))
\]
for any $\pi\in\partial\mathbb{T}_m.$
\end{teo}
\begin{proof}
Let us start by observing that, for any constant $c$,
$u\in\mathcal{C}(g)$ if and only if
$u+c\in\mathcal{C}(g+c).$ Therefore,
without loss of generality, we may
assume that $g$ is a nonnegative function.
Let $\pi_0=(y_1,\dots,y_k,\dots)\in\partial\mathbb{T}_m.$
For any $n\in\mathbb{N},$ there exist $z_n\in\mathbb{T}_m$ with $|z_n|=n$ and
$k_0$ such that $\psi(y_k)\in I_{z_n}$
for all $k\ge k_0.$
Now taking $c=\min\{g(t)\colon t\in I_{z_n}\}$
and $w_{n}=cu_{z_n}$ where $u_{z_n}$ is given by
Lemma \ref{lema:caract}, we have that
$w_{n}$ is a convex function such that
\[
\lim_{x\to\pi}w_{n}(x)\le g(\psi(\pi)),
\qquad \forall\pi
\in\partial\mathbb{T}_m.
\]
Here, we are using that $g\ge 0$.
Then, $w_{n}\in\mathcal{C}(g),$ and
therefore $w_{n}(x)\le u^*_g(x)$ for any $x\in\mathbb{T}_m.$
In particular, $w_{n}(y_k)\le u^*_g(y_k)$
for any $k.$ Therefore,
\[
\min\{g(t)\colon t\in I_{z_n}\} =
\lim_{k\to\infty} w_{n}(y_k)\le
\liminf_{k\to\infty}u_g^*(y_k).
\]
Taking the limit as $n\to \infty,$
we have
\[
g(\psi(\pi_0))\le \liminf_{k\to\infty}u_g^*(y_k)
\]
since $g$ is a continuous function.
Moreover, taking
\[w^{n}(x)=a(1-u_{z_n}(x))+bu_{z_n}(x)=a+(b-a)u_{z_n}(x)\] where
$a=2\max\{g(t)\colon t\in[0,1]\}$ and
$b=\max\{g(t)\colon t\in I_{z_n}\},$ we have that
\begin{align*}
&w^{n}(x)=a+(b-a)u_{z_n}(x)\\
&=a+(b-a)\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u_{z_n}(y) +u_{z_n}(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u_{z_n}(\hat{x}) + m u_{z_n}(y)}{m+1}
\right\}\\
&=\max
\left\{
\max_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
a+\frac{(b-a)(u_{z_n}(y) +u_{z_n}(z))}2 ;
\max_{y\in\mathcal{S}(x)}
a+\frac{(b-a) (u_{z_n}(\hat{x}) + m u_{z_n}(y))}{m+1}
\right\}\\
&=\max
\left\{
\max_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{w^n(y) +w^n(z)}2 ;
\max_{y\in\mathcal{S}(x)}
\frac{w^n(\hat{x}) + m w^n(y)}{m+1}
\right\}\\
&\ge\min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{w^n(y) +w^n(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{w^n(\hat{x}) + m w^n(y)}{m+1}
\right\}
\end{align*}
for any $x\in\mathbb{T}_m$ and
\[
g(\psi(\pi))\le \liminf_{x\to\pi}w^{n}(x) ,
\qquad \forall\pi\in\partial\mathbb{T}_m.
\]
Thus, by Lemma \ref{lema.compar.convexas},
for any $u\in\mathcal{C}(g)$ we have that
$u(x)\le w^{n}(x)$ for any $x\in\mathbb{T}_m.$
Therefore $u_g^*(x)\le w^{n}(x)$ for any $x\in\mathbb{T}_m.$
In particular, $u_g^*(y_k)\le w^{n}(y_k) $
for any $k.$ Then
\[
\limsup_{k\to\infty}u_g^*(y_k)\le
\lim_{k\to\infty} w^{n}(y_k)
=\max\{g(t)\colon t\in I_{z_n}\}.
\]
Again, taking the limit as $n\to \infty,$
we have
\[
\limsup_{k\to\infty}u_g^*(y_k)\le g(\psi(\pi_0)).
\]
Therefore, we conclude that
\[
\lim_{k\to\infty}u_g^*(y_k)= g(\psi(\pi_0)).
\]
As $\pi_0\in\partial\mathbb{T}_m$ was arbitrary, we conclude
\[
\lim_{x\to\pi_0}u_g^*(x)= g(\psi(\pi_0))
\]
for any $\pi_0\in\partial\mathbb{T}_m.$
\end{proof}
Now our next goal is to find the equation that $u^*_g$ verifies on $\mathbb{T}_m$.
\begin{teo}\label{teo:largesol}
Let $g\colon[0,1]\to\mathbb{R}$ be a continuous
function. The convex envelope $u^*_g$ is characterized as the largest solution
to
\begin{equation} \label{eq.envolvente}
u (x) = \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m u(y)}{m+1}
\right\}\quad\text{on }\mathbb{T}_m,
\end{equation}
that verifies
\[
\limsup_{x\to \pi\in \partial \mathbb{T}_m} u (x) \leq
g(\psi(\pi)).
\]
\end{teo}
\begin{proof}
Given $g\colon[0,1]\to\mathbb{R},$ by Lemmas \ref{lema:convenv1} and
\ref{lema:convexeq} we get that $u^*_g$ verifies \eqref{subsol1}.
Now, to see that we have an equality,
we argue by contradiction. Assume that at some node $x_0\in\mathbb{T}_m,$
we have
\[
u_g^* (x_0) < \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x_0)\\ y\neq z}}
\frac{u_g^*(y) +u_g^*(z)}2 ;
\min_{y\in\mathcal{S}(x_0)}
\frac{ u_g^*(\hat{x}_0) + m u_g^*(y)}{m+1}
\right\}.
\]
Take $\delta>0$ such that
\[
u_g^* (x_0) + \delta < \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x_0)\\ y\neq z}}
\frac{u_g^*(y) +u_g^*(z)}2 ;
\min_{y\in\mathcal{S}(x_0)}
\frac{ u_g^*(\hat{x}_0) + m u_g^*(y)}{m+1}
\right\}
\]
and consider
\[
v(x) =
\begin{cases}
u^*_g (x) &\text{if }x\neq x_0,\\
u_g^* (x_0) + \delta&\text{if }x= x_0.
\end{cases}
\]
Observe that $v$ verifies \eqref{subsol1}. Thus, by
Lemma \ref{lema:convexeq}, $v$ is convex. In addition, we have
that $v\in\mathcal{C}(g).$ Therefore
\[
v(x)\le u_g^*(x)\quad\forall x\in \mathbb{T}_m,
\]
leading to a contradiction. This proves that $u^*_g$ is a solution
to \eqref{eq.envolvente}.
Finally, to see that $u^*_g$ is the largest solution
to \eqref{eq.envolvente} that verifies
\[
\limsup_{x\to \pi\in \partial \mathbb{T}_m} u^*_g (x) \leq g(\psi(\pi)),
\]
it is enough to define
\[
\overline{u} (x) = \sup
\left\{ u(x)\colon u\text{ verifies \eqref{eq.envolvente} and } \limsup_{x\to \pi\in \partial \mathbb{T}_m} u (x)
\leq g(\psi(\pi))
\right\}.
\]
This function $\overline{u}$ trivially verifies
\[
\overline{u} (x) \geq u^*_g (x) \quad \forall x \in \mathbb{T}_m,
\]
just notice that $u^*_g$ belongs to the set defining $\overline{u}$.
On the other hand, since $\overline{u}$ is a solution to
\eqref{eq.envolvente}, by Lemma \ref{lema:convexeq}, we have that
$\overline{u}$ is convex and therefore $\overline{u}\in\mathcal{C}(g).$
Then
\[
\overline{u} (x) \leq u^*_g (x) \quad\forall x \in \mathbb{T}_m.
\]
We conclude that
\[
u^*_g(x) = \overline{u} (x) =
\sup \left\{ u(x) \colon
u \text{ verifies \eqref{eq.envolvente} and }
\limsup_{x\to \pi\in \partial \mathbb{T}_m} u (x) \leq g(\psi(\pi))
\right\}.
\]
\end{proof}
Observe that by Lemma \ref{lema.compar.convexas} and
Theorem \ref{teo:largesol},
for any continuous function $g\colon [0,1] \mapsto \mathbb{R}$, the equation defining the convex envelope
has a unique solution that attains the datum $g$ continuously.
\begin{teo}
Let $g\colon[0,1] \mapsto \mathbb{R}$ be a continuous function.
There exists a unique solution to \eqref{eq.envolvente} such that
\[
\lim_{x\to \pi \in \partial \mathbb{T}_m} u (x) = g(\psi(\pi))
\]
for any $\pi\in\partial\mathbb{T}_m.$
\end{teo}
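Since \eqref{eq.envolvente} couples each node with both its successors and its predecessor, a single backward pass no longer solves it. On a truncated tree, however, the largest solution can be approximated by relaxing a constant upper bound downwards. A Python sketch under our own discretization choices (leaf data $g(\psi(x))$ on the cut level, a fixed number of sweeps, predecessor term skipped at the root; none of this is prescribed by the text):

```python
from itertools import combinations, product

def convex_envelope_root(g, m=3, depth=5, sweeps=300):
    """Approximate the largest solution of
    u(x) = min{ min pair-average of successors ;
                min_y (u(parent) + m*u(y))/(m+1) }
    on a truncated m-ary tree with u = g(psi(x)) on the cut level,
    by monotone relaxation from a constant upper bound."""
    interior = [x for d in range(depth) for x in product(range(m), repeat=d)]
    top = max(g(j / m ** depth) for j in range(m ** depth))
    u = {x: top for x in interior}
    for x in product(range(m), repeat=depth):  # boundary datum on the cut
        u[x] = g(sum(a / m ** (i + 1) for i, a in enumerate(x)))
    for _ in range(sweeps):
        for x in interior:
            succ = [x + (j,) for j in range(m)]
            val = min((u[y] + u[z]) / 2 for y, z in combinations(succ, 2))
            if x:  # the root has no predecessor, so skip that term there
                val = min(val, min((u[x[:-1]] + m * u[y]) / (m + 1)
                                   for y in succ))
            u[x] = min(u[x], val)  # monotone decreasing updates
    return u[()]
```

Starting above every solution, each sweep applies the monotone min-operator, so the values decrease towards a fixed point; this approximates the largest solution, in the spirit of Theorem \ref{teo:largesol}.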
To end this section we prove Theorem \ref{teo.convex.arbol.f}
that deals with the convex envelope of a function $f:\mathbb{T}_m \mapsto \mathbb{R}$ given by \eqref{convex-envelope-arbol.f}.
\begin{teo} \label{teo.convex.arbol.f.sec}
The convex envelope of a function $f\colon \mathbb{T}_m \to\mathbb{R}$
is the solution to the obstacle problem for the equation \eqref{eq.tree.convex}.
\end{teo}
\begin{proof}
Let us denote by $v^*$ the largest solution to
\begin{equation} \label{eq.tree.convex.ffffhhh}
u (x) \leq \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u(\hat{x}) + m u(y)}{m+1}
\right\}\quad\text{on }\mathbb{T}_m,
\end{equation}
that verifies
\[
u(x) \leq f(x) \qquad \forall x \in \mathbb{T}_m.
\]
We have to prove that the convex envelope of $f$, $u^*_f$, verifies
\[
u^*_f (x) = v^* (x), \qquad \forall x \in \mathbb{T}_m.
\]
Since $u^*_f$ is convex, from Lemma \ref{lema:convexeq} we obtain that it is a solution to
\eqref{eq.tree.convex.ffffhhh} that verifies $u^*_f \leq f$ on $\mathbb{T}_m$ and then we obtain
\[
u^*_f (x) \leq v^* (x), \qquad \forall x \in \mathbb{T}_m.
\]
Also from Lemma \ref{lema:convexeq}, $v^*$, being a solution to
\eqref{eq.tree.convex.ffffhhh}, is a convex function and verifies $v^* \leq f$ on $\mathbb{T}_m$. Hence,
\[
v^* (x) \leq u^*_f (x), \qquad \forall x \in \mathbb{T}_m.
\]
We conclude that
\[
u^*_f (x) = v^* (x), \qquad \forall x \in \mathbb{T}_m.
\]
In the coincidence set, the function $f$ verifies an inequality. From the fact that $u^*_f$ is convex and smaller
than $f$ we obtain for $x \in CS(f)$,
\begin{align} \label{eq.tree.convex.ffff.latengo.kkk}
f (x) & = u^*_f(x) \\
& \leq \displaystyle \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u^*_f(y) + u^*_f(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ u^*_f(\hat{x}) + m \, u^*_f (y)}{m+1}
\right\} \\[10pt]
& \displaystyle \leq \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{f(y) + f(z)}2 ;
\min_{y\in\mathcal{S}(x)}
\frac{ f(\hat{x}) + m f (y)}{m+1}
\right\}.
\end{align}
Finally, outside of the coincidence set the convex envelope, $u^*_f$, is a solution to the equation.
In fact, arguing by contradiction, assume that for some $x_0 \not\in CS(f)$ it holds
\begin{equation} \label{eq.tree.convex.ffff.latengo.mmmm.llll}
u^*_f (x_0) < \min
\left\{
\min_{\substack{y,z\in\mathcal{S}(x_0)\\ y\neq z}}
\frac{u^*_f(y) +u^*_f(z)}2 ;
\min_{y\in\mathcal{S}(x_0)}
\frac{ u^*_f(\hat{x}_0) + m u^*_f(y)}{m+1}
\right\}.
\end{equation}
Then, since $x_0 \not\in CS(f)$ and we have a strict inequality in \eqref{eq.tree.convex.ffff.latengo.mmmm.llll}
there exists $\delta>0$ such that the function
\[
v(x) =
\begin{cases}
u^*_f (x) &\text{if }x\neq x_0,\\
u_f^* (x_0) + \delta&\text{if }x= x_0
\end{cases}
\]
is convex and still verifies $v\leq f$ on $\mathbb{T}_m$ contradicting the maximality of the convex envelope $u^*_f$.
This contradiction shows that we have an equality in \eqref{eq.tree.convex.ffff.latengo.mmmm.llll}.
\end{proof}
\section{Binary convex functions}
\label{section.biconvfunction}
As in Section \ref{sect-convex}, we begin by showing a different characterization of binary convex functions.
\begin{lem}\label{lema:biconvexeq}
A function $u$ on the tree is binary convex
if and only if $u$ satisfies
\begin{equation} \label{subsol2}
u (x) \le
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2
\qquad \forall x\in \mathbb{T}_m.
\end{equation}
\end{lem}
\begin{proof}
Let us start the proof by observing that if $x\in \mathbb{T}_m,$
$y,z\in \S(x)$ and $y\neq z$ then $[y,z]\in\mathbb{T}_{2}^{x}$ and $\mathcal{E}([y,z])=\{y,z\}.$
Therefore if $u$ is a binary convex function,
$x\in\mathbb{T}_m$ and $y,z\in \S(x)$ are such that $y\neq z$
then
\[
u(x)\le \dfrac{u(y)+u(z)}{2}.
\]
Thus, $u$ satisfies \eqref{subsol2} in $\mathbb{T}_m.$
Now assume that $u$ satisfies \eqref{subsol2}. Our aim is to prove that $u$ is a binary convex function,
that is, we aim to show that
\begin{equation}\label{eq:biconvdef}
u(x)\le \sum_{y\in\mathcal{E}(\mathbb{B})}
\dfrac{u(y)}{2^{|y|-|x|}}\quad \forall
\mathbb{B}\in\mathbb{T}_{2}^{x}.
\end{equation}
Fix $x\in \mathbb{T}_m.$ Given $\mathbb{B}\in\mathbb{T}_{2}^{x},$ we
define
\[
|\mathbb{B}|\coloneqq
\max \big\{|z|-|x|\colon z\in\mathcal{E}(\mathbb{B})\big\}\in\mathbb{N}
\]
and
\[
\mathbb{T}_{2}^{x n} \coloneqq
\big\{\mathbb{B}\in\mathbb{T}_{2}^{x}\colon |\mathbb{B}|=n\big\}\subset\mathbb{T}_{2}^{x}.
\]
The proof of \eqref{eq:biconvdef} runs by induction on $n.$
Observe that in the case $|\mathbb{B}|=1$ there exist
$y,z\in\S(x)$ such that $\mathbb{B}=[y,z]$ and
obviously $\mathcal{E}(\mathbb{B})=\{y,z\}.$ Then,
since $u$ satisfies \eqref{subsol2}, we get
\[
u(x)\le\dfrac{u(y)+u(z)}2=
\sum_{y\in\mathcal{E}(\mathbb{B})}
\dfrac{u(y)}{2^{|y|-|x|}}.
\]
That is \eqref{eq:biconvdef} holds for any
$\mathbb{B}\in\mathbb{T}_{2}^{x 1}.$
Now we assume that \eqref{eq:biconvdef} holds for any
$\mathbb{B}\in\mathbb{T}_{2}^{x n},$ and we will show that it also holds for any $\mathbb{B}\in\mathbb{T}_{2}^{x (n+1)}.$
If $\mathbb{B}\in\mathbb{T}_{2}^{x (n+1)}$ then
$\mathbb{B}'=\mathbb{B}\setminus
\{y\in\mathcal{E}(\mathbb{B})\colon |y|-|x|=n+1\}\in\mathbb{T}_{2}^{x n}.$ Then,
by the inductive hypothesis, we get
\begin{equation}\label{eq:biconaux1}
u(x)\le\sum_{y\in\mathcal{E}(\mathbb{B}')}
\dfrac{u(y)}{2^{|y|-|x|}}.
\end{equation}
On the other hand, for any
$y\in\mathcal{E}(\mathbb{B})$ we have that $y\in\mathcal{E}(\mathbb{B}')$ or there are
$w\in \mathcal{E}(\mathbb{B}')$ and
$z\in\mathcal{E}(\mathbb{B})\setminus\{y\}$
such that $y,z\in \S(w).$ Thus, since
$u$ satisfies \eqref{subsol2}, from
\eqref{eq:biconaux1}, we have that
\[
u(x)\le\sum_{y\in\mathcal{E}(\mathbb{B})}
\dfrac{u(y)}{2^{|y|-|x|}}.
\]
Finally, since $x$ is arbitrary, we conclude that $u$ is a binary convex function.
\end{proof}
\begin{re}\label{re:cfisbcf}
Now, by Lemmas \ref{lema:convexeq} and
\ref{lema:biconvexeq}, it is easy to check
that a convex function is also a binary convex
function.
\end{re}
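Remark \ref{re:cfisbcf} can be illustrated numerically: checking \eqref{subsol2} only involves the pair-average term of \eqref{subsol1}. A Python sketch with the same tuple encoding of vertices (our convention, not part of the text):

```python
from itertools import combinations, product

def is_binary_convex(u, m, depth, tol=1e-12):
    """Characterization (subsol2): u(x) <= min over distinct pairs of
    successors of their average, at every node of the truncated tree."""
    for d in range(depth):
        for x in product(range(m), repeat=d):
            succ = [x + (j,) for j in range(m)]
            if u(x) > min((u(y) + u(z)) / 2
                          for y, z in combinations(succ, 2)) + tol:
                return False
    return True
```

Functions such as $u(x)=|x|$ or constants pass, while $u(x)=-|x|$ fails. Note that $u(x)=\min(|x|,1)$ is binary convex but violates the predecessor term in \eqref{subsol1}, so the converse of the remark does not hold.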
Proceeding as in the proof of Corollary
\ref{co:sumconv} we can prove the following result.
\begin{co}\label{co:sumbiconv}
Let $u,v\colon \mathbb{T}_m\to\mathbb{R}$ be binary
convex functions. Then $u+v$ is a
binary convex function.
\end{co}
Now, we obtain the following result, whose proof is similar to that of Lemma
\ref{lema:caract}.
\begin{lem}\label{lema:bicaract}
Let $x_0\in\mathbb{T}_m\setminus\{\emptyset\}.$
Then the function $u_{x_0}\colon\mathbb{T}_m\to\mathbb{R}$ defined by
\[
u_{x_0}(x)\coloneqq
\begin{cases}
0 &\text{if } x\not\in\mathbb{T}_m^{x_0},\\
1&\text{if } x\in\mathbb{T}_m^{x_0},
\end{cases}
\]
is a binary convex function such that
\[
\limsup_{x\to\pi}u_{x_0}(x)
\le
\mbox{\Large$\chi$}_{I_{x_0}}(\psi(\pi)) \quad\forall
\pi\in\partial\mathbb{T}_m.
\]
Moreover,
\[
u_{x_0}(x)=
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u_{x_0}(y) +u_{x_0}(z)}2 \quad\forall x\in\mathbb{T}_m,
\]
and for any $\pi\in\partial\mathbb{T}_m$ such that $\psi(\pi)$ is not an
endpoint of $I_{x_0}$ we have
\[
\lim_{x\to\pi}u_{x_0}(x)
=\mbox{\Large$\chi$}_{I_{x_0}}(\psi(\pi)).
\]
\end{lem}
\section{Binary Convex envelopes}
\label{sect-biconvex-envelopes}
Proceeding as in the proof of Lemma \ref{lema:convenv1},
we can show that the binary convex envelope is a binary convex function.
\begin{lem}
\label{lema:biconvenv1}
For any function $g\colon[0,1]\to\mathbb{R},$
the binary convex envelope $\tilde{u}_g$ is a
binary convex function.
\end{lem}
In a similar way to Section \ref{sect-convex-envelopes},
we will show that if $g$ is a continuous function, then
\begin{equation}
\label{eq:bilimb}
\lim_{x\to \pi\in \partial \mathbb{T}_m} \tilde{u}_g (x) =
g(\psi(\pi))\quad\forall\pi\in\partial\mathbb{T}_m.
\end{equation}
As before, to prove this claim we need a comparison principle.
\begin{lem} \label{lema.bicompar.convexas}
Let $u$ and $v$ satisfy
\begin{align}
\label{eq.bienvolvente.44.compar.u}
u(x) &\ge
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}\frac{u(y) +u(z)}2 \quad\forall x\in\mathbb{T}_m,\\
\label{eq.bienvolvente.44.compar.v}
v(x) &\le
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{v(y) +v(z)}2 \quad\forall x\in\mathbb{T}_m,
\end{align}
with
\[
\lim_{x\to \pi\in \partial \mathbb{T}_m}
u (x) \geq \lim_{x\to \pi\in \partial \mathbb{T}_m} v (x) ,
\]
for every $\pi\in \partial \mathbb{T}_m$. Then,
\[
u(x) \geq v(x) \qquad \forall x\in \mathbb{T}_m.
\]
\end{lem}
\begin{proof}
Adding a positive constant $c$ to $u,$
we may assume that
\begin{equation} \label{biestric}
\lim_{x\to \pi\in \partial \mathbb{T}_m} u (x) >
\lim_{x\to \pi\in \partial \mathbb{T}_m} v (x) .
\end{equation}
We argue by contradiction, so, assume that
\[
M = \max_{x\in \mathbb{T}_m} (v(x)-u(x)) >0.
\]
Notice that the maximum is attained thanks to
\eqref{biestric}. Also by \eqref{biestric},
we have that $M$ is attained only on a finite set of vertices. Let $x$ be one such vertex.
From \eqref{eq.bienvolvente.44.compar.u} and
\eqref{eq.bienvolvente.44.compar.v} we obtain
\[
M= v(x) - u(x) \le
\min_{\substack{y,z\in\mathcal{S}(x)\\
y\neq z}}\frac{v(y) +v(z)}2
-\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2.
\]
Now, using that
$$
M\geq \left(\frac{v(y) +v(z)}2 \right) - \left( \frac{u(y) +u(z)}2 \right) \qquad \forall y,z\in\mathcal{S}(x),\ y\neq z,
$$
we get
\[
M \le
\min_{\substack{y,z\in\mathcal{S}(x)\\
y\neq z}}\frac{v(y) +v(z)}2
-\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 \leq M.
\]
Then, there exist two nodes $x_1, x_2\in\S(x)$ such that
\[
v(x_1)-u(x_1) = M,\qquad \text{and}
\qquad v(x_2)-u(x_2) = M.
\]
Since this happens for every $x$ in the set of maxima of
$v-u$ and this set is finite, we obtain a contradiction, which
shows that
\[
u(x) \geq v(x)
\]
and proves the result.
\end{proof}
Now we will show \eqref{eq:bilimb}.
\begin{teo}\label{teo:bilimb}
Let $g\colon[0,1]\to\mathbb{R}$ be a continuous function.
Then, for any $\pi\in\partial\mathbb{T}_m$
\[
\lim_{x\to \pi\in \partial \mathbb{T}_m} \tilde{u}_g (x) =
g(\psi(\pi)).
\]
\end{teo}
\begin{proof}
Let us start by observing that, for any constant $c$,
$u\in\mathcal{B}(g)$ if and only if
$u+c\in\mathcal{B}(g+c).$ Therefore,
without loss of generality, we may
assume that $g$ is nonnegative.
Consequently, by Remark \ref{re:cfisbcf}
and Theorem \ref{teo:limb}, we have
\[
\liminf_{x\to \pi\in \partial \mathbb{T}_m} \tilde{u}_g (x) \ge
\lim_{x\to \pi\in \partial \mathbb{T}_m} u^*_g (x)=
g(\psi(\pi))
\]
for any $\pi\in\partial\mathbb{T}_m.$
To complete the proof, we proceed as in the end of the proof
of Theorem \ref{teo:limb}, using Lemmas \ref{lema:bicaract} and \ref{lema.bicompar.convexas} instead of Lemmas
\ref{lema:caract} and \ref{lema.compar.convexas}.
\end{proof}
Finally, with arguments analogous to those of Section \ref{sect-convex-envelopes}, we obtain the following results.
\begin{teo}\label{teo:bilargesol}
Let $g\colon[0,1]\to\mathbb{R}$ be a continuous
function. The binary convex envelope $\tilde{u}_g$
is characterized as the largest solution
to
\begin{equation} \label{eq.bienvolvente}
u (x) =
\min_{\substack{y,z\in\mathcal{S}(x)\\ y\neq z}}
\frac{u(y) +u(z)}2 \quad\text{on }\mathbb{T}_m,
\end{equation}
that verifies
\[
\limsup_{x\to \pi\in \partial \mathbb{T}_m} u (x) \leq
g(\psi(\pi)).
\]
\end{teo}
\begin{teo}
Let $g\colon [0,1] \to \mathbb{R}$ be a continuous function. Then, there exists a unique solution to \eqref{eq.bienvolvente} such that for any $\pi\in\partial\mathbb{T}_m$
\[
\lim_{x\to \pi \in \partial \mathbb{T}_m} u (x) = g(\psi(\pi)).
\]
\end{teo}
\medskip
{\bf Acknowledgements.} \
Supported by CONICET grant PIP GI 11220150100036CO (Argentina), by UBACyT grant 20020160100155BA (Argentina) and by MINECO MTM2015-70227-P (Spain).
% Source: https://arxiv.org/abs/1904.05322 -- "Convex envelopes on Trees" (math.AP)
% Source: https://arxiv.org/abs/1601.05525
% Title: On Drury's solution of Bhatia \& Kittaneh's question
% Abstract: Let $A, B$ be $n\times n$ positive semidefinite matrices. Bhatia and
% Kittaneh asked whether $\sqrt{\sigma_j(AB)}\le \frac{1}{2}\lambda_j(A+B)$ for
% $j=1,\ldots,n$, where $\sigma_j(\cdot)$ and $\lambda_j(\cdot)$ denote the $j$-th
% largest singular value and eigenvalue, respectively. The question was recently
% solved by Drury in the affirmative. This article revisits Drury's solution and
% simplifies the proof of a key auxiliary result.
\section{Introduction}
Bhatia has made many fundamental contributions to Matrix Analysis \cite{Bha97}. One of his favorite topics is matrix inequalities. Roughly speaking, matrix inequalities are noncommutative versions of the corresponding scalar inequalities. To get a glimpse of this topic, let us start with a simple example. The simplest AM-GM inequality says that
$$a, b>0 \implies \frac{a+b}{2}\ge \sqrt{ab}.$$
Now it is known that \cite[p. 107]{Bha07} its most ``direct" noncommutative version is
\begin{eqnarray}\label{am-gm}
A, B ~ \hbox{are $n\times n$ positive definite matrices} \implies \frac{A+B}{2}\ge A\sharp B,
\end{eqnarray}
where $A\sharp B:=A^{\frac{1}{2}}(A^{-\frac{1}{2}}BA^{-\frac{1}{2}})^{\frac{1}{2}}A^{\frac{1}{2}}$ is called the geometric mean of $A$ and $B$. For two Hermitian matrices $A$ and $B$ of the same size, in this article, we write $A\ge B$ (or $B\le A$) to mean that $A-B$ is positive semidefinite.
If we denote $S:=A\sharp B$, then $B=SA^{-1}S$. Thus a variant of (\ref{am-gm}) is the following
\begin{eqnarray}\label{am-gm1}
A, S ~ \hbox{are $n\times n$ positive definite matrices} \implies A+SA^{-1}S\ge 2S.
\end{eqnarray}
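For readers who wish to experiment, here is a quick numerical spot-check of (\ref{am-gm1}) (a sketch of ours, not part of the argument), using random symmetric positive definite matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pd(n):
    # random symmetric positive definite matrix: G G^T + I
    G = rng.standard_normal((n, n))
    return G @ G.T + np.eye(n)

n = 5
A, S = random_pd(n), random_pd(n)
M = A + S @ np.linalg.inv(A) @ S - 2 * S   # should be positive semidefinite
assert np.min(np.linalg.eigvalsh(M)) > -1e-9
print("A + S A^{-1} S - 2S is positive semidefinite")
```

The check reflects the identity $A + SA^{-1}S - 2S = (A^{1/2}-A^{-1/2}S)^*(A^{1/2}-A^{-1/2}S)\ge 0$.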
There is a long tradition in matrix analysis of comparing eigenvalues or singular values. To proceed, let us fix some notation. The $j$-th largest singular value of a complex matrix $A$ is denoted by $\sigma_j(A)$. If all the eigenvalues of $A$ are real, then we denote its $j$-th largest one by $\lambda_j(A)$. By Weyl's Monotonicity Theorem \cite[p. 63]{Bha97}, (\ref{am-gm}) readily implies
\begin{eqnarray*} \lambda_j(A+B)\ge 2 \lambda_j(A\sharp B), \qquad j=1, \ldots, n.
\end{eqnarray*}
As far as the eigenvalues or singular values are considered, there are other versions of ``geometric mean". Bhatia and Kittaneh studied this kind of inequalities over a twenty year period \cite{BK90, BK00, BK08}. Their elegant results include the following: If $A, B$ are $n\times n$ positive semidefinite matrices, then
\begin{eqnarray}\label{bk1} && \lambda_j(A+B)\ge 2\sqrt{\lambda_j(AB)}=2\sigma_j(A^{\frac{1}{2}}B^{\frac{1}{2}}); \\&&
\label{bk2} \lambda_j(A+B)\ge 2\lambda_j(A^{\frac{1}{2}}B^{\frac{1}{2}})
\end{eqnarray} for $j=1, \ldots, n$.
To complete the picture in (\ref{bk1})-(\ref{bk2}), they asked whether it is true
\begin{eqnarray*}\lambda_j(A+B)\ge 2\sqrt{\sigma_j(AB)}, \qquad j=1, \ldots, n?
\end{eqnarray*}
This question was recently answered in the affirmative by Drury in his very brilliant work \cite{Dru12}. The purpose of this expository article is to revisit Drury's solution. Hopefully, some of our arguments will shed new light on the beautiful result, which is now a theorem.
\begin{thm}\cite{Dru12} If $A, B$ are $n\times n$ positive semidefinite matrices, then
\begin{eqnarray}
\label{bkd} \lambda_j(A+B)\ge 2\sqrt{\sigma_j(AB)}, \qquad j=1, \ldots, n.
\end{eqnarray}
\end{thm}
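The theorem is easy to spot-check numerically (our illustration of the statement only, of course, not evidence for the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
for _ in range(200):
    GA, GB = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    A, B = GA @ GA.T, GB @ GB.T                     # random PSD matrices
    lam = np.sort(np.linalg.eigvalsh(A + B))[::-1]  # eigenvalues, decreasing
    sig = np.linalg.svd(A @ B, compute_uv=False)    # singular values, decreasing
    assert np.all(lam + 1e-8 >= 2 * np.sqrt(sig))
print("lambda_j(A+B) >= 2 sqrt(sigma_j(AB)) held in all trials")
```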
\section{Drury's reduction in proving (\ref{bkd})}
Our presentation here is just slightly different from that in \cite{Dru12}.
Assume without loss of generality that $A, B$ are positive definite (the general case follows by a standard perturbation argument). Fix $r$ in the range $1\le r\le n$ and normalize so that
$\sigma_r(AB)=1$. Our goal is to show that $\lambda_r(A+B)\ge 2$.
Note that $\sigma_r(AB)=1$ is the same as $\lambda_r(AB^2A)=1$. Consider the spectral decomposition $$AB^2A=\sum_{k=1}^n\lambda_k(AB^2A)P_k,$$ where $P_1, P_2, \ldots, P_n$, are orthogonal projections. Then
$\lambda_k(AB^2A)\ge 1$ for $k=1, \ldots, r$. Define a positive semidefinite $$B_1:=\left(A^{-1}\left(\sum_{k=1}^r P_k\right)A^{-1}\right)^{1/2}.$$ It is easy to see (indeed, from $B^2\ge B_1^2$) that
$$B=\left(A^{-1}\left(\sum_{k=1}^n\lambda_k(AB^2A) P_k\right)A^{-1}\right)^{1/2}\ge B_1.$$ So we are done if we can show \begin{eqnarray}\label{reduction1}
\lambda_r(A+B_1)\ge 2.
\end{eqnarray}
As $B_1$ has rank $r$, splitting the underlying space as the direct sum of the image and the kernel of $B_1$, we may partition $B_1$ and $A$ conformally in the following form
$$B_1=\begin{pmatrix}
X & 0 \\0& 0
\end{pmatrix}, \ A=\begin{pmatrix}
A_{11} & A_{12}\\ A_{12}^*& A_{22}
\end{pmatrix}.$$
Note that $AB_1^2A$ is an orthogonal projection of rank $r$; the same is true for $B_1A^2B_1$. Therefore, $$B_1A^2B_1=\begin{pmatrix}
X(A_{11}^2+A_{12}A_{12}^*)X & 0\\ 0& 0
\end{pmatrix}\implies X(A_{11}^2+A_{12}A_{12}^*)X=I_r$$
where $I_r$ is the $r\times r$ identity matrix.
Finally, observe that $$A\ge A_1:=\begin{pmatrix}
A_{11} & A_{12}\\ A_{12}^*& A_{12}^*A_{11}^{-1}A_{12}
\end{pmatrix}.$$
Therefore, (\ref{reduction1}) would follow from
\begin{eqnarray}\label{reduction2}
\lambda_r(A_1+B_1)\ge 2.
\end{eqnarray}
Thus, the remaining effort is made to show (\ref{reduction2}), which we formulate as a proposition.
\begin{proposition}\label{p1}
Let $A_{11}$ and $X$ be $r\times r$ positive definite matrices and let $A_{12}$ be an $r\times (n-r)$ matrix such that $X(A_{11}^2+A_{12}A_{12}^*)X=I_r$. Then \begin{eqnarray}\label{reduction3}
\lambda_r\begin{pmatrix}
A_{11}+X & A_{12}\\ A_{12}^*& A_{12}^*A_{11}^{-1}A_{12}
\end{pmatrix}\ge 2.
\end{eqnarray}
\end{proposition}
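Proposition \ref{p1} can also be tested numerically by sampling $A_{11}$ and $A_{12}$ and then solving the constraint for $X$ (a sketch of ours; the helper `inv_sqrt` is an assumed utility computing the inverse square root via an eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(2)
r, n = 3, 7

def inv_sqrt(M):                       # M^{-1/2} for symmetric positive definite M
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** -0.5) @ V.T

G = rng.standard_normal((r, r))
A11 = G @ G.T + np.eye(r)              # r x r positive definite block
A12 = rng.standard_normal((r, n - r))
X = inv_sqrt(A11 @ A11 + A12 @ A12.T)  # enforces X(A11^2 + A12 A12^*)X = I_r

top = np.hstack([A11 + X, A12])
bot = np.hstack([A12.T, A12.T @ np.linalg.inv(A11) @ A12])
M = np.vstack([top, bot])
lam = np.sort(np.linalg.eigvalsh(M))[::-1]
assert lam[r - 1] >= 2 - 1e-8          # lambda_r >= 2
print("lambda_r =", lam[r - 1])
```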
\section{The mysterious part}
In order to prove (\ref{reduction3}), Drury made the following key observations.
\begin{proposition}\label{p2}\cite[Proposition 2]{Dru12} Let $M$ and $N$ be $r\times r$ positive definite matrices. Then
\begin{eqnarray*}
\lambda_r\begin{pmatrix}
M &(M\sharp N)^{-1}\\ (M\sharp N)^{-1}& N
\end{pmatrix}\ge 2.
\end{eqnarray*} \end{proposition}
\begin{proposition}\label{p3}\cite[Theorem 7]{Dru12} Let $L$ and $M$ be $r\times r$ positive definite matrices, and let $Z$ be an $r\times r$ matrix such that $ML(I+ZZ^*)LM=I_r$. Then
\begin{eqnarray} \label{reduction4} \lambda_r\begin{pmatrix}
L+M & LZ\\ Z^*L& Z^*LZ
\end{pmatrix}\ge 2.
\end{eqnarray} \end{proposition}
The way that Drury proved (\ref{reduction4}) is by showing that $T:=\begin{pmatrix}
L+M & LZ\\ Z^*L& Z^*LZ
\end{pmatrix}$ and $R:=\begin{pmatrix}
M &(M\sharp N)^{-1}\\ (M\sharp N)^{-1}& N
\end{pmatrix}$ have the same characteristic polynomial, and so the eigenvalues of $R$ and $T$ coincide. As explained in \cite{Dru12a}, this connection (between $R$ and $T$) is mysterious. Formally, the mystery also comes from $R$ and $T$ themselves: indeed, $T$ is always positive semidefinite while $R$ need not be!
In order to apply Proposition \ref{p3} to Proposition \ref{p1}, Drury discussed three possible relations between the sizes $n$ and $r$. Our proof of Proposition \ref{p1} in the next section allows us to skip this discussion on the sizes.
\section{Proof of Proposition \ref{p1}}
The following lemma slightly generalizes Proposition \ref{p2} in form.
\begin{lemma} \label{lem1} Let $X$ be an $r\times r$ positive definite matrix and let $S$ be an $r\times r$ nonsingular matrix. Then
\begin{eqnarray*}
\lambda_r\begin{pmatrix}
SX^{-1}S^*& (S^{-1})^*\\S^{-1}& X
\end{pmatrix}\ge 2.
\end{eqnarray*}
\end{lemma}
\begin{proof}
Consider the polar decomposition of $S$, $S=U|S|$, where $U$ is unitary and $|S|=(S^*S)^{\frac{1}{2}}$. The matrix $\begin{pmatrix}
SX^{-1}S^*& (S^{-1})^*\\S^{-1}& X
\end{pmatrix}$ is unitarily similar to
$$\begin{pmatrix}U^*
SX^{-1}S^*U& U^*(S^{-1})^*\\S^{-1}U& X
\end{pmatrix}=\begin{pmatrix}
|S|X^{-1}|S|& |S|^{-1}\\|S|^{-1}& X
\end{pmatrix}.$$
As $P:=\frac{1}{\sqrt{2}}\begin{pmatrix}
I_r\\ I_r
\end{pmatrix}$ is a partial isometry,
\begin{eqnarray*}
\lambda_r\begin{pmatrix}
SX^{-1}S^*& (S^{-1})^*\\S^{-1}& X
\end{pmatrix}&=&\lambda_r\begin{pmatrix}
|S|X^{-1}|S|& |S|^{-1}\\|S|^{-1}& X
\end{pmatrix}\\&\ge& \lambda_r \left(P^*\begin{pmatrix}
|S|X^{-1}|S|& |S|^{-1}\\|S|^{-1}& X
\end{pmatrix}P\right)\\&=&\lambda_r\left(\frac{X+|S|X^{-1}|S|}{2}+|S|^{-1}\right)\\&\ge& \lambda_r(|S|+|S|^{-1}) \ge 2. \qquad \hbox{by (\ref{am-gm1})}
\end{eqnarray*}
The required result follows.
\end{proof}
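A numerical illustration of Lemma \ref{lem1} (ours, for experimentation only), using real matrices so that $S^*=S^T$:

```python
import numpy as np

rng = np.random.default_rng(3)
r = 4
G = rng.standard_normal((r, r))
X = G @ G.T + np.eye(r)                 # positive definite
S = rng.standard_normal((r, r))         # generically nonsingular
Si = np.linalg.inv(S)
M = np.block([[S @ np.linalg.inv(X) @ S.T, Si.T],
              [Si, X]])
lam = np.sort(np.linalg.eigvalsh(M))[::-1]
assert lam[r - 1] >= 2 - 1e-8           # lambda_r >= 2, as the lemma asserts
print("lambda_r =", lam[r - 1])
```

Note that equality can occur: for $S=X=I_r$ the block matrix has eigenvalues $2$ and $0$, each with multiplicity $r$.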
Now we are ready to give a simpler proof of Proposition \ref{p1}.
~
\noindent
{\it Proof. } Consider the factorization $$\begin{pmatrix}
A_{11}+X & A_{12}\\ A_{12}^*& A_{12}^*A_{11}^{-1}A_{12}
\end{pmatrix}=\begin{pmatrix}
A_{11}^{\frac{1}{2}}&X^{\frac{1}{2}}\\ A_{12}^*A_{11}^{-\frac{1}{2}}& 0
\end{pmatrix}\begin{pmatrix}
A_{11}^{\frac{1}{2}}&X^{\frac{1}{2}}\\ A_{12}^*A_{11}^{-\frac{1}{2}}& 0
\end{pmatrix}^*.$$
Clearly, $\begin{pmatrix}
A_{11}+X & A_{12}\\ A_{12}^*& A_{12}^*A_{11}^{-1}A_{12}
\end{pmatrix}$ is unitarily similar to \begin{eqnarray*}
\begin{pmatrix}
A_{11}^{\frac{1}{2}}&X^{\frac{1}{2}}\\ A_{12}^*A_{11}^{-\frac{1}{2}}& 0
\end{pmatrix}^*\begin{pmatrix}
A_{11}^{\frac{1}{2}}&X^{\frac{1}{2}}\\ A_{12}^*A_{11}^{-\frac{1}{2}}& 0
\end{pmatrix}&=&\begin{pmatrix}
A_{11}+A_{11}^{-\frac{1}{2}}A_{12}A_{12}^*A_{11}^{-\frac{1}{2}}& A_{11}^{\frac{1}{2}}X^{\frac{1}{2}}\\ X^{\frac{1}{2}}A_{11}^{\frac{1}{2}}& X
\end{pmatrix}\\&=&\begin{pmatrix}
A_{11}^{-\frac{1}{2}}X^{-2}A_{11}^{-\frac{1}{2}}& A_{11}^{\frac{1}{2}}X^{\frac{1}{2}}\\ X^{\frac{1}{2}}A_{11}^{\frac{1}{2}}& X
\end{pmatrix}.
\end{eqnarray*}
Now setting $S=A_{11}^{-\frac{1}{2}}X^{-\frac{1}{2}}$ in Lemma \ref{lem1} yields the desired result. \qed
\section{A conjecture}
A weighted version of (\ref{bk1}) is known. That is, if $A, B$ are $n\times n$ positive semidefinite matrices, then for any $t\in [0, 1]$ and $j=1, \ldots, n$
\begin{eqnarray}\label{ando} && \lambda_j((1-t)A+tB)\ge \sqrt{\lambda_j(A^{2(1-t)}B^{2t})}=\sigma_j(A^{1-t}B^{t}).
\end{eqnarray} Inequality (\ref{ando}) is due to Ando \cite{Ando95}. With (\ref{ando}), it is not hard to present a weighted version of (\ref{bk2}).
\begin{proposition}\label{p4} If $A, B$ are $n\times n$ positive semidefinite matrices, then for any $t\in [0, 1]$ and $j=1, \ldots, n$
\begin{eqnarray}\label{lin} \lambda_j((1-t)A+tB)\ge \lambda_j(A^{1-t}B^{t}).
\end{eqnarray}
\end{proposition}
\begin{proof} By (\ref{ando}) and the matrix convexity of the square function,
\begin{eqnarray*} \lambda_j(A^{1-t}B^{t})&=&\sigma_j^2(A^{(1-t)/2}B^{t/2})\\&\le&\lambda_j((1-t)A^{1/2}+tB^{1/2})^2 \\ &\le& \lambda_j((1-t)A+tB).
\end{eqnarray*}
\end{proof}
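A quick numerical check of (\ref{lin}) (ours; we use the similarity of $A^{1-t}B^t$ to the symmetric matrix $A^{(1-t)/2}B^tA^{(1-t)/2}$ to obtain its eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(4)

def mpow(M, p):                       # M^p for symmetric positive definite M
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** p) @ V.T

n, t = 5, 0.3
GA, GB = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = GA @ GA.T + np.eye(n), GB @ GB.T + np.eye(n)
lhs = np.sort(np.linalg.eigvalsh((1 - t) * A + t * B))[::-1]
# A^{1-t} B^t is similar to A^{(1-t)/2} B^t A^{(1-t)/2}, which is symmetric
C = mpow(A, (1 - t) / 2) @ mpow(B, t) @ mpow(A, (1 - t) / 2)
rhs = np.sort(np.linalg.eigvalsh(C))[::-1]
assert np.all(lhs + 1e-8 >= rhs)
print("lambda_j((1-t)A + tB) >= lambda_j(A^{1-t} B^t) verified")
```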
We conclude the paper with the following conjecture.
\begin{conj}
If $A, B$ are $n\times n$ positive semidefinite matrices, then for any $t\in [0, 1]$
\begin{eqnarray*}
\lambda_j((1-t)A+tB)\ge \sqrt{\sigma_j(A^{2(1-t)}B^{2t})}, \qquad j=1, \ldots, n.
\end{eqnarray*}
\end{conj}
The present method of proof does not seem to lead to a solution of this conjecture.
\section*{Acknowledgement} The author thanks T. Ando and P. van den Driessche for helpful conversations.
% Source: https://arxiv.org/abs/math/0508396
% Title: Group actions in number theory
\section{Introduction}
Students having had a semester course in abstract algebra are exposed to the elegant way in which finite group theory leads to proofs of familiar facts in elementary number theory. In this note we
offer two examples of such group theoretical proofs using the action of a group on a set. The first is Fermat's little theorem and the second concerns a well known identity involving the famous Euler phi function. The tools that we use to establish both results are sometimes seen in a second semester algebra course in which group actions are studied. Specifically, we will use the class equation of a group action and Burnside's theorem.
\section{Fermat's Little Theorem}
A well known consequence of the class equation of a group action asserts
that if $G$ is a $p$-group (that is, $G$ is a finite group of order $p^n$
for some integer $n\ge 1$ and a prime integer $p$), and $G$ acts on a
finite set $S$, then the number of elements in $S$ is congruent to the
number of fixed points of the action modulo $p$. Recall that an element
$s\in S$ is a fixed point of the action if $gs=s$ for all $g\in G$. The
set of fixed points is usually denoted by $S^G$, and so using this
notation, the aforementioned theorem asserts that
\begin{equation}
\label{fact} |S|\equiv |S^G|\pmod p.
\end{equation}
This seemingly obscure result appears to have limitless utility in group
theory! A line of argumentation due to R.~J.~Nunke (cf.~\cite{Fraleigh,Hungerford}) uses
(\ref{fact}) repeatedly to establish the three famous Sylow theorems in
elementary group theory. In
this section, we use (\ref{fact}) to obtain a new proof of the following
well known result in number theory.
\begin{theorem}[Fermat's little theorem]\label{fermat}
If $a\ge 1$ is any integer and $p$ is a prime, then $a^p\equiv a \pmod p$.
\end{theorem}
{\bf Proof.} Let $a\ge 1$ be an integer and let $A=\{1,\cdots,a\}$. Let
$S=A^p$ and let a cyclic group $G$ of order $p$ act on
$S$ by cyclic permutation of the entries of an element in $S$. Easily, this is
a well defined action and $(a_1,\cdots,a_p)\in S$ is a fixed point if
and only if $a_1=\cdots = a_p$. Therefore $|S^G|=a$. Since
$|S|=a^p$, an application of (\ref{fact}) completes the proof. {\hfill\vrule height 5pt width 5pt depth 0pt}
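The counting argument is easy to replay by brute force (an illustrative sketch of ours, not part of the proof): the fixed points of the cyclic-shift action on $A^p$ are exactly the constant tuples, and (\ref{fact}) reduces to comparing counts modulo $p$.

```python
from itertools import product

def fixed_and_total(a, p):
    # S = A^p with A = {1, ..., a}; a cyclic group of order p acts by shifts
    S = list(product(range(1, a + 1), repeat=p))
    shift = lambda s: s[1:] + s[:1]          # generator of the action
    fixed = [s for s in S if shift(s) == s]  # exactly the constant tuples
    return len(fixed), len(S)

for a in range(1, 6):
    for p in (2, 3, 5):
        f, tot = fixed_and_total(a, p)
        assert f == a and tot == a ** p
        assert (tot - f) % p == 0            # |S| = |S^G| (mod p), i.e. a^p = a
print("a^p = a (mod p) recovered from the fixed-point count")
```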
Of course, Fermat's little theorem holds for all integers $a$, but the construction in our proof is not valid for $a\le 0$. The case $a=0$ is trivial, and if $a\le -1$, then $-a\ge 1$ and what we have proved so far shows that
$(-a)^p\equiv (-a)\pmod p$. But $(-a)^p=-a^p$ when $p$ is odd (and the case $p=2$ is immediate, since $a^2\equiv a\pmod 2$ for every integer $a$), hence $a^p\equiv a\pmod
p$. We remark here that we do not believe the above proof of Theorem~\ref{fermat} is better than the standard group theoretic proof.
Indeed, the ideas in it are seldom encountered in a first semester undergraduate level course in algebra. However, the non-zero elements of $\mathbb {Z}_{p^j}$ do not form a group under multiplication if $j>1$, so that the standard argument does not generalize to the case of a power of a prime. On the other hand, our method immediately gives a proof of this case as well.
\begin{theorem} If $a\ge 1$ is any integer and $p$ is a prime, then
$a^{p^j}\equiv a \pmod p$ for all $j\ge 1$. \end{theorem}
{\bf Proof.} We note that a cyclic group $G$ of order $p^j$ acts on the set $A^{p^j}$ cyclically, and there are still precisely $a$ fixed points. This time, however, there are $a^{p^j}$ total elements in the set. {\hfill\vrule height 5pt width 5pt depth 0pt}
\section{The Euler Phi Function}
If $n\ge 1$ is an integer, we denote by $\varphi(n)$ the number of elements in the set $\{1,\dots, n\}$ that are relatively prime to $n$. The function $\varphi$ is called the {\it Euler phi function}. In this section, we use group actions to give a proof of the following well known result.
\begin{theorem}\label{second}
If $n\ge 1$ is an integer, then $\displaystyle \sum_{d|n}\varphi(d)=n$.
\end{theorem}
In contrast to Fermat's little theorem, this result is not typically discussed in algebra classes. The usual proof given in number theory exploits the fact that the function $\varphi$ is {\it multiplicative} (that is, satisfies $\varphi(ab)=\varphi(a)\varphi(b)$ whenever $a$ and $b$ are relatively prime integers). Our method here will employ Burnside's theorem as well as the structure of the lattice of subgroups of a finite cyclic group. If a finite group $G$ acts on a finite set $S$, then for each $g\in G$, we let $S^g=\{s\in S : gs=s\}$ denote the set of elements in $S$ left fixed by $g$. If $r$ denotes the number of orbits in $S$ under the action of $G$, Burnside's theorem states that
\begin{equation}
\label{burnside}
r\cdot |G|=\sum_{g\in G} |S^g|.
\end{equation}
We refer the reader to \cite{Fraleigh} for an excellent account of the details. To establish Theorem~\ref{second}, we consider the problem of counting the number of distinguishable ways of coloring the edges of a regular $n$-gon ($n\ge 3$) with $q$ colors ($q\ge 1$). The dihedral group $D_n$ of order $2n$ has a natural action on a regular $n$-gon as the group of symmetries. Under this action, two colorings of the $n$-gon are indistinguishable if and only if they belong to the same orbit under the action. Therefore the solution to our counting problem is the number of distinct orbits under this action.
For notation, we let $D_n=\langle a,b | a^n=1, b^2=1, ba=a^{-1}b\rangle$. We refer to an element of the cyclic subgroup $\langle a\rangle$ as a {\it rotation} and an element of the coset $b\langle a\rangle$ as a {\it flip}. To use Burnside's theorem, we must compute $|S^g|$ for all $g\in D_n$ where $S$ is the set of all $q^n$ possible colorings of the $n$-gon. If $g$ is a flip and $n$ is odd, then the line of reflection for $g$ must pass through a vertex of the $n$-gon and the midpoint of the edge opposite this vertex. If a coloring $s$ is fixed under $g$, this opposite edge may be colored any one of the $q$ colors, but the remaining $(n-1)/2$ edges must be colored the same as their image under $g$. Therefore there are $qq^{(n-1)/2}=q^{(n+1)/2}$ colorings fixed by $g$. It follows that if $n$ is odd, then
\[\sum_{g\in b\langle a\rangle} |S^g|=n\left (q^{(n+1)/2}\right).\]
If $n$ is even, then there are $q^{n/2}$ colorings fixed by $g$ if $g$ is a flip in a line through opposite vertices and there are $q^{(n+2)/2}$ colorings fixed by $g$ if $g$ is a flip in a line through the midpoints of opposite edges. Since there are exactly $n/2$ of each of these types of flips, if $n$ is even we have
\[\sum_{g\in b\langle a\rangle} |S^g|=\frac{n}{2}\left (q^{n/2}\right)+\frac{n}{2}q^{(n+2)/2}=\frac{n}{2}q^{n/2}(q+1).\]
Now we turn our attention toward the rotations. For every positive divisor $d$ of $n=|\langle a\rangle|$, there is a unique subgroup of $\langle a\rangle$ of order $d$, and this subgroup has precisely $\varphi(d)$ generators. For each of these generators $g$, if we choose an edge of the $n$-gon, each of the images of this edge under the $d$ distinct powers of $g$ must be colored the same color if $g$ leaves the coloring fixed. Therefore there are $q^{n/d}$ colorings left fixed by each of the $\varphi(d)$ elements of order $d$ and hence we have
\[\sum_{g\in \langle a\rangle} |S^g|=\sum_{d|n}\varphi(d)q^{n/d}.\]
Combining this with our results for the flips and Burnside's theorem, we have shown that the number of orbits in $S$ under the action of $D_n$ is given by
\begin{equation*}
r=\left\{\begin{array}{ll} \frac{1}{2n}\left
(nq^{(n+1)/2}+{\displaystyle\sum_{d|n}\varphi(d)q^{n/d}}\right ) & \mbox {if
$n$ is odd,}\\ \frac{1}{2n}\left
(\frac{n}{2}q^{n/2}(q+1)+{\displaystyle\sum_{d|n}\varphi(d)q^{n/d}}\right ) &
\mbox {if $n$ is even.}\\ \end{array}\right. \end{equation*}
Now, Theorem~\ref{second} is easily verified for $n=1$ and $n=2$. If $n\ge 3$, then setting $q=1$ above and noting that we must have $r=1$, we see (whether $n$ is even or odd) that
\[n+\sum_{d|n}\varphi(d)=2n,\]
which completes the proof.
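Both the brute-force orbit count and the closed formula above are easy to compare by computer (an illustrative check of ours; `phi` is Euler's function computed naively):

```python
from itertools import product
from math import gcd

def phi(d):                           # Euler's phi function, computed naively
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

def orbit_count(n, q):                # brute-force D_n-orbits of edge colorings
    seen, orbits = set(), 0
    for c in product(range(q), repeat=n):
        if c in seen:
            continue
        orbits += 1
        for k in range(n):            # all rotations, and rotations of the flip
            rot = c[k:] + c[:k]
            seen.add(rot)
            seen.add(rot[::-1])
    return orbits

def burnside_formula(n, q):           # the closed form derived above
    flips = n * q ** ((n + 1) // 2) if n % 2 else (n // 2) * q ** (n // 2) * (q + 1)
    rots = sum(phi(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return (flips + rots) // (2 * n)

for n in (3, 4, 5, 6):
    for q in (1, 2, 3):
        assert orbit_count(n, q) == burnside_formula(n, q)
assert all(sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n
           for n in range(1, 50))     # the identity of Theorem 3
print("Burnside count matches brute force; sum of phi(d) over d|n equals n")
```

For instance, with $n=3$ and $q=2$ both counts give $4$, and with $n=4$, $q=2$ both give $6$.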
% Source: https://arxiv.org/abs/math/0703504
% Title: Ubiquity of simplices in subsets of vector spaces over finite fields
% Abstract: We prove that a sufficiently large subset of the $d$-dimensional
% vector space over a finite field with $q$ elements, ${\Bbb F}_q^d$, contains a
% copy of every $k$-simplex. Fourier analytic methods, Kloosterman sums, and
% bootstrapping play an important role.
\section{Introduction}
\vskip.125in
Many problems in combinatorial geometry ask, in one form or another,
whether a certain structure must be present in a set of sufficiently
large size. Perhaps the most celebrated result of this type is
Szemeredi's theorem (\cite{SO75}) which says that if a subset of the
integers has positive density, then it contains an arbitrary large
arithmetic progression. The conclusion has recently been extended to
the subsets of prime numbers by Green and Tao (\cite{GT07}). In
Euclidean space, a result due to Katznelson and Weiss (\cite{FKB90})
says that a subset of Euclidean space of positive Lebesgue upper
density contains every sufficiently large distance. A subsequent
result by Bourgain (\cite{BK86}), says that a subset of $\Bbb R^k$ of
positive Lebesgue upper density contains an isometric copy of all
large dilates of a set of $k$ points spanning a $(k-1)$-dimensional
hyperplane. Ergodic theory has been used to show that positive upper
density implies that the set contains a copy of a sufficiently large
dilate of every convex polygon with finitely many sides. See, for
example, a recent survey by Bryna Kra (\cite{K06}).
Let $\Bbb F_q^d$ be a $d$-dimensional vector space over a finite
field $\Bbb F_q$ of odd characteristic. A plausible analogy to Bourgain's result (\cite
{BK86}) in this context would be to consider whether a subset of
positive density contains an isometric copy of a set of $k$ points
spanning a $(k-1)$-dimensional hyperplane. It turns out however, that
the positive density condition is much too strong in the context of vector spaces over finite fields and the same
conclusion follows from a much weaker assumption on the size of the
underlying set.
\begin{definition}
Let a $k$-simplex be a set of $k+1$ points in general position, which
means that
no $n + 1$ of these points, $n \leq k$, lie in an $(n - 1)$-dimensional
subspace of ${\Bbb F}_q^d$.
\end{definition}
\begin{definition} We say that a linear transformation $T$ on ${\Bbb
F}_q^d$ is an isometry if
$$ ||Tx||=||x||,$$ where
$$ ||x||=x_1^2+x_2^2+\dots+x_d^2,$$ an element of ${\Bbb F}_q$.
\end{definition}
The question we ask in this paper is how large does $E \subset {\Bbb
F}_q^d$ need to be in order to be sure that it contains a copy of
every $k$-simplex.
Our main result is the following.
\begin{theorem} \label{main} Let $E \subset {\Bbb F}_q^d$, $d>{k+1 \choose 2}$, such that
$|E| \ge C q^{\tfrac{k}{k+1}d}q^{\tfrac{k}{2}}$ with a
sufficiently large constant $C>0$. Then $E$ contains an isometric
copy of every $k$-simplex.
\end{theorem}
Note that we obtain non-trivial results only when $k \ll \sqrt{d}$.
Nevertheless, in that range we are able to dip considerably below the
positive density condition on the underlying set $E$.
The method of proof relies on the fact that orthogonal
transformations on ${\Bbb F}_q^d$ are isometries.
A "distance representation" of a simplex is then used to reduce
Theorem \ref{main} to an appropriate weighted incidence theorem for
spheres and points. Weil's estimate (\cite{We48}) for classical
Kloosterman sums is used to control the size of the Fourier transform
of spheres of non-zero radius. The key idea in the proof is to show
at each step of an inductive argument that a collection of distances
among vertices of a given simplex can not only be realized, but
actually occur a "statistically correct" number of times.
\vskip.125in
\section{Preliminaries and Definitions}
Let $\Bbb F _q^d$ be the $d$-dimensional vector space over the finite
field ${\Bbb F}_q$. The Fourier transform of a function
$$f: \Bbb F_q^d \rightarrow \Bbb C$$ is given by
\begin{equation*}
\widehat{f}(m) := q^{-d} \sum_{x \in \Bbb F_q^d} f(x) \chi(-x \cdot m),
\end{equation*}
where $\chi$ is an additive character on $\Bbb F_q$.
The orthogonality property of the Fourier transform, which says that
$$ q^{-d}\sum_{x \in \Bbb F_q^d} \chi(-x \cdot m)=1$$ for $m=(0,
\dots, 0)$ and $0$
otherwise, yields many standard properties of the Fourier transform.
We summarize some of the properties of the Fourier Transform as follows.
\begin{lemma}[The Fourier Transform]
Let
$$f,g:\Bbb F_q^d \rightarrow \Bbb C.$$
\begin{align*}
& \hat f (0, \dots, 0) =q^{-d} \sum_{x \in \Bbb F_q^d} f(x),
\\
& (\text{Plancherel})\ q^{-d} \sum_{x \in \Bbb F_q^d} f(x) \overline{g
(x)} =\sum_{m \in \Bbb F_q^d} \hat{f}(m) \overline{\hat{g}(m)},
\\
& (\text{Inversion})\ \ \ f(x) =\sum_{m \in \Bbb F_q^d} \hat{f}(m)
\chi(x \cdot m).
\end{align*}
\end{lemma}
\subsection{Notation} Throughout the paper $X \lesssim Y$ means that
there exists $C>0$ such that $X\leq CY$, $X \gtrsim Y$ means $Y
\lesssim X$, and $X \approx Y$ if both $X\lesssim Y$ and
$X\gtrsim Y$. Along the same lines, $X \ll Y$ means that $\tfrac{X}{Y}\rightarrow 0$ as $q \to \infty$, $X \gg Y$ means $Y\ll X$, and $X
\sim Y$ means that $\tfrac{X}{Y}\rightarrow 1$ as $q \rightarrow \infty$.
\vskip.125in
\section{Proof of the main result}
Even though a finite field with $q$ elements, ${\Bbb F}_q$, is not a
metric space, we define the ``distance'' between two points $x$ and $y$
in ${\Bbb F}_q^d$ by the formula
$$ ||x-y||={(x_1-y_1)}^2+{(x_2-y_2)}^2+\dots+{(x_d-y_d)}^2.$$
The same notion of ``distance'' was used by Bourgain, Katz and Tao
(\cite{BKT04}), and by Iosevich and Rudnev (\cite{IR07}) in their study of the
Erd\H os distance problem in vector spaces over finite fields. As we
noted above, a geometric justification of this notion of distance is
that an orthogonal transformation on ${\Bbb F}_q^d$, a matrix $O$
such that $O^t \cdot O=I$, preserves this notion of a ``distance''.
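This invariance is easy to confirm by direct computation. A minimal Python sketch follows; the prime $p=7$ and the particular orthogonal matrix (built from the Pythagorean triple $3,4,5$ reduced mod $7$) are arbitrary choices for illustration.

```python
from itertools import product

p = 7                                   # a small prime, chosen for illustration
# a non-trivial orthogonal matrix over F_7: O = 5^{-1} [[3, 4], [-4, 3]] mod 7
inv5 = pow(5, -1, p)
O = [[(inv5 * 3) % p, (inv5 * 4) % p],
     [(inv5 * -4) % p, (inv5 * 3) % p]]

def mat_vec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) % p for i in range(2))

def dist(x, y):                         # ||x - y|| = sum (x_i - y_i)^2 mod p
    return sum((a - b) ** 2 for a, b in zip(x, y)) % p

# O^t O = I mod p
OtO = [[sum(O[k][i] * O[k][j] for k in range(2)) % p for j in range(2)]
       for i in range(2)]
# ||Ox - Oy|| = ||x - y|| for every pair of points in F_7^2
preserved = all(dist(mat_vec(O, x), mat_vec(O, y)) == dist(x, y)
                for x in product(range(p), repeat=2)
                for y in product(range(p), repeat=2))
```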
Represent a $k$-simplex in a subset $E \subset \Bbb F_q^d$ on $k+1$
points recursively by setting
$$ {\cal T}_{l_k}=\{(x_0,\dots,x_{k-1},x_k) \in {\cal T}_{l_{k-1}}
\times E: ||x_0-x_k||=t_{1,k}, ||x_1-x_k||=t_{2,k},\dots,||x_{k-1}-x_k||=t_{k,k}\},$$
for $l_k=l_{k-1}\cup\{t_{1,k},\dots,t_{k,k}\}$ with $t_{i,j}\in \mathbb F_q^*$, where
$$ {\cal T}_{l_1}=\{(x_0,x_1) \in E^2: ||x_0-x_1||=t_{1,1}\}.$$
This representation does not, in general, always embody a simplex, as
the points of a tuple in ${\cal T}_{l_k}$ are not guaranteed to be in
general position. However, as we show below, ``legitimate'' $k$-simplices
are equivalent up to an orthogonal transformation.
\begin{theorem} \label{simplexlemma} Let $E \subset {\Bbb F}_q^d$, $d>
{k+1 \choose 2}$, such that
$|E| \ge Cq^{\tfrac{k}{k+1}d}q^{\tfrac{k}{2}}$, with a
sufficiently large constant $C$. Then for every side-length set $l_k\in (\Bbb F_q^*)^{k+1 \choose 2}$ we have that $|{\cal T}_{l_k}|>0$. Furthermore,
$$ |{\cal T}_{l_k}| \sim |E|^{k+1}q^{-{k+1 \choose 2}}.$$
\end{theorem}
Using this theorem we recover the main result of the paper using the
following linear algebraic observation.
\begin{lemma}\label{uptocrap} Let $P$ be a simplex with vertices
$V_0, V_1, \dots, V_k$, $V_j \in {\Bbb F}_q^d$. Let $P'$ be another
simplex with vertices $V'_0, V'_1, \dots, V'_k$. Suppose that
\begin{equation} \label{equalnorm} ||V_i-V_j||=||V'_i-V'_j|| \end{equation}
for all $i,j$.
Then there exists an orthogonal, affine transformation $O$ on ${\Bbb F}_q^d$ such
that $O(P)=P'$.
\end{lemma}
\vskip.125in
\subsection{Proof of Theorem \ref{simplexlemma}---the main result
reformulated in terms of ``distances''}
The proof proceeds by induction. The first step is the case $k=2$.
For a set $E$ we write $E(x)$ for its characteristic (indicator)
function. Now define the sphere of radius $t_{1,1} \in \Bbb F_q^*$
to be
$$S_{t_{1,1}}=\{x \in {\Bbb F}_q^d: ||x||=t_{1,1}\},$$ then
\begin{align*}
|{\cal T}_{l_1}|&=|\{(x_0,x_1) \in E \times E: ||x_0-x_1||=t_{1,1}\}|
\\
&=\sum_{x_0,x_1} E(x_0)E(x_1)S_{t_{1,1}}(x_0-x_1).
\end{align*}
In order to extract information from this quantity, the behavior of
incidences between spheres and points of $E$ will be critical. The
following classical fact, whose proof will be given in a subsequent
section, states that the sphere has optimal Fourier decay away
from the origin.
\begin{lemma} \label{sphere} Let $S_{t}$, $t \in \Bbb F_q^*
$ be defined as above. If $m \not=(0, \dots, 0)$ then
$$ |\widehat{S}_{t}(m)| \lesssim q^{-\frac{d+1}{2}}, $$ and
$$ \widehat{S}_{t}(0, \dots, 0)=q^{-d} |S_{t}| \approx q^{-1}.$$ \end{lemma}
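Lemma \ref{sphere} can be checked by brute force for small parameters. The Python sketch below (the choices $q=7$, $d=2$, $t=1$ are arbitrary, and $2$ is used as the implicit Weil constant) computes $\widehat{S}_t$ directly from the definition.

```python
import cmath
from itertools import product

q, d, t = 7, 2, 1                       # small illustrative parameters
points = list(product(range(q), repeat=d))
chi = lambda a: cmath.exp(2j * cmath.pi * (a % q) / q)
sphere = [x for x in points if sum(c * c for c in x) % q == t]

def S_hat(m):
    # \hat S_t(m) = q^{-d} sum_{x in S_t} chi(-x . m)
    return sum(chi(-sum(xi * mi for xi, mi in zip(x, m))) for x in sphere) / q**d

zero_gap = abs(S_hat((0, 0)) - len(sphere) / q**d)          # = q^{-d} |S_t|
worst = max(abs(S_hat(m)) for m in points if m != (0, 0))   # decay off the origin
```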
Applying Fourier inversion to the sphere,
\begin{align*}
|{\cal T}_{l_1}|&=\sum_{x_0,x_1} E(x_0)E(x_1)\sum_m \widehat{S}_{t_{1,1}}(m)\chi(m\cdot(x_0-x_1))
\\
&=q^{2d} \sum_m {|\widehat{E}(m)|}^2 \widehat{S}_{t_{1,1}}(m)
\\
&={|E|}^2 \cdot q^{-d} \cdot |S_{t_{1,1}}|+q^{2d} \sum_{m \not=(0,
\dots, 0)} {|\widehat{E}(m)|}^2 \widehat{S}_{t_{1,1}}(m)
\\
&=M+R.
\end{align*}
By Lemma \ref{sphere},
$$ M \approx \frac{{|E|}^2}{q},$$ and using Lemma \ref{sphere} once
again,
\begin{align*}
|R| &\lesssim q^{2d} \cdot q^{-\frac{d+1}{2}} \cdot \sum_m {|\widehat
{E}(m)|}^2
\\
&=q^{\frac{d-1}{2}} \cdot |E|,
\end{align*}
which is smaller than $M$ if $|E| \ge Cq^{\frac{d+1}{2}}$ with a
sufficiently large constant $C$ and thus
${\cal T}_{l_1}$ is non-empty. Moreover, if $|E| \gg q^{\frac{d+1}{2}}$, we get the ``statistically expected'' number of distances,
$$|{\cal T}_{l_1}| \sim \frac{{|E|}^2}{q}.$$
Assuming the $(k-1)$st case, we count the number of $k$-simplices
in $E$ as extensions of the $(k-1)$-simplices in $E$:
$$|{\cal T}_{l_{k}}|= \sum_{x_0,\dots,x_k} {\cal T}_{l_{k-1}}(x_0,\dots,x_{k-1})E(x_k)S_{t_{1,k}}(x_0-x_k)\dots S_{t_{k,k}}(x_{k-1}-x_k).$$
By Fourier inversion, the expression equals
$$ \sum_{x_0,\dots,x_k} \sum_{m_0,\dots,m_{k-1}} \prod_{i=1}^{k}\chi
((x_{i-1}-x_k) \cdot m_{i-1})\widehat{S}_{t_{i,k}}(m_{i-1}) {\cal T}_{l_{k-1}}(x_0,\dots,x_{k-1})E(x_k)$$
$$=q^{(k+1)d} \sum_{m_0,\dots,m_{k-1}} \widehat{{\cal T}}_{l_{k-1}}
(-m_0,\dots,-m_{k-1}) \widehat{E}(m_0+\dots+m_{k-1}) \widehat{S}_
{t_{1,k}}(m_0)\dots\widehat{S}_{t_{k,k}}(m_{k-1}),$$ where the
Fourier transform of ${\cal T}_{l_{k-1}}$ is actually the
Fourier transform on ${\Bbb F}_q^d \times\dots\times {\Bbb F}_q^d$,
$k$ times.
Extracting the zero term and breaking the remaining sum into pieces
on which we may apply Lemma \ref{sphere}, this expression equals
$$q^{(k+1)d} \cdot q^{-(k+1)d} \cdot |{\cal T}_{l_{k-1}}| \cdot |E|
\cdot |S_{t_{1,k}}| \cdot q^{-d}\cdot\dots \cdot |S_{t_{k,k}}|
\cdot q^{-d}$$
$$+q^{(k+1)d} \sum_{\substack{\mathcal{I} \cup \mathcal{I}'= \{ 0, \dots ,k-1 \} \\ m_i=0\ (i\in \mathcal{I})\\ m_i\neq 0\ (i \notin \mathcal{I})}}
\widehat{{\cal T}}_{l_{k-1}}(-m_0,\dots,-m_{k-1}) \widehat{E}(m_0+\dots+m_{k-1})\widehat{S}_{t_{1,k}}(m_0)\dots\widehat{S}_{t_{k,k}}(m_{k-1})$$
$$=M+R,$$ where the sum defining $R$ runs over all the partitions of $\{0,\dots,k-1\}$ with the case $\mathcal{I}'=\emptyset$
extracted and used as the main term $M$ above.
By Lemma \ref{sphere} and the induction hypothesis,
$$ M \sim {|E|}^{k+1}q^{-{k+1 \choose 2}}.$$
By Lemma \ref{sphere} we have that
$$|R|\lesssim q^{(k+1)d}\sum_{\substack{\mathcal{I} \cup \mathcal{I}'= \{ 0, \dots ,k-1 \} \\ m_i=0\ (i\in \mathcal{I})\\ m_i\neq 0\ (i \notin \mathcal{I})}}
q^{-|\mathcal{I}'|(d+1)/2-|\mathcal{I}|}
|\widehat{{\cal T}}_{l_{k-1}}(-m_0,\dots,-m_{k-1})| |\widehat{E}(m_0+\dots+m_{k-1})|.$$
Then for each term in the sum corresponding to a partition $\mathcal{I} \cup \mathcal{I}'$ we apply the Cauchy--Schwarz inequality,
$$\sum_{\substack{m_i=0\ (i\in \mathcal{I})\\ m_i\neq 0\ (i \notin \mathcal{I})}}
|\widehat{{\cal T}}_{l_{k-1}}(-m_0,\dots,-m_{k-1})| |\widehat{E}(m_0+\dots+m_{k-1})|
\lesssim A^{1/2}B^{1/2}.$$
Applying Plancherel and the induction hypothesis,
$$A\leq\sum_{m_0,\dots,m_{k-1}} |\widehat{{\cal T}}_{l_{k-1}}(-m_0,
\dots,-m_{k-1})|^2 =q^{-kd}|{\cal T}_{l_{k-1}}|\sim q^{-kd}q^{-{k \choose 2}} {|E|}^{k}.$$
Now
$$B=\sum_{m_i(i\in \mathcal{I}')}
\left| \widehat{E}\left(\sum_{i\in \mathcal{I}'}m_i \right)\right|^2=q^{|\mathcal{I}'|d}q^{-2d}|E|.$$
This implies that
$$|R|\lesssim q^{\tfrac{kd}{2}}q^{-\tfrac{k(k-1)}{4}} {|E|}^{\tfrac{k+1}{2}}\sum_{\mathcal{I} \cup \mathcal{I}'= \{ 0, \dots ,k-1 \}}
q^{-|\mathcal{I}'|(d+1)/2-|\mathcal{I}|} q^{|\mathcal{I}'|d/2}.$$
The largest term in the sum occurs when $\mathcal{I}=\emptyset$. We
conclude that
$$|R| \lesssim q^{\tfrac{kd}{2}} q^{-\tfrac{k(k+1)}{4}} {|E|}^
{\tfrac{k+1}{2}}.$$
The term $R$ is smaller than, say, $\frac{M}{2}$ if
$$q^{\tfrac{kd}{2}} q^{-\tfrac{k(k+1)}{4}} {|E|}^{\tfrac{k+1}{2}}
\leq c\, {|E|}^{k+1}q^{-{k+1 \choose 2}},$$ with a sufficiently small
constant $c$, which happens if
$$ |E| \ge C'q^{\tfrac{k}{k+1}d}q^{\tfrac{k}{2}},$$ with a
sufficiently large constant $C'$ depending on the constants implicit
in the estimates above. This completes the proof.
\vskip.125in
\section{Proof of Lemma \ref{uptocrap}}
To prove Lemma \ref{uptocrap}, let $\pi_r(x)$ denote the $r$th coordinate of
$x$. There is no harm in assuming that $V_0=(0, \dots, 0)$. Since the
vertices of a simplex are in general position, we may also assume that
$V_1, \dots, V_k$ are linearly independent and contained in ${\Bbb F}_q^k$. The condition
(\ref{equalnorm}) implies that
\begin{equation} \label{dotproduct} \sum_{r=1}^k \pi_r(V_i) \pi_r(V_j)=
\sum_{r=1}^k \pi_r(V'_i) \pi_r(V'_j). \end{equation}
Let $T$ be the linear transformation uniquely determined by the condition
$$ T(V_i)=V'_i.$$
In order to prove that $T$ is orthogonal, it suffices to show that
$$ ||Tx||=||x||$$ for any $x \not=(0, \dots, 0)$.
Since the $V_j$ form a basis of ${\Bbb F}_q^k$, we may write
$$ x=\sum_i t_i V_i, $$ so it suffices to show that
$$ ||x||=\sum_r \sum_{i,j} t_i t_j \pi_r(V_i) \pi_r(V_j)$$
$$=\sum_r \sum_{i,j} t_i t_j \pi_r(V'_i) \pi_r(V'_j)=||Tx||,$$ which follows
immediately from (\ref{dotproduct}).
Observe that we used the fact that orthogonality of $T$, i.e., the
condition $T^t \cdot T=I$, is equivalent to the condition that
$||Tx||=||x||$ for all $x$. To see this, observe that to show that
$T^t \cdot T=I$ it suffices to show that $T^tTx=x$ for all non-zero
$x$. This, in turn, is equivalent to the statement that
$$ \langle T^tTx,x \rangle=||x||,$$ where
$$ \langle x,y \rangle=\sum_{i=1}^k x_iy_i.$$
Now,
$$ \langle T^tTx,x \rangle=\langle Tx, Tx \rangle=||Tx||$$ by the definition of the transpose, so the
stated equivalence is established. This completes the proof of Lemma
\ref{uptocrap}.
\vskip.125in
\section{Estimation of the Fourier transform of the sphere: proof of Lemma \ref{sphere}}
The proof of Lemma \ref{sphere} is fairly standard, but we outline
the argument for the reader's convenience. For any $m\in{\mathbb
F}^d_q$, we have
\begin{equation} \label{sphereparade}
\begin{array}{llllll} \widehat{S}_t(m)&=&
q^{-d} \sum_{x \in {\mathbb F}^d_q} q^{-1} \sum_{j \in {\mathbb F}_q} \chi(
j(\|x\|-t)) \chi( - x \cdot m)\\ \hfill \\&=&q^{-1}\delta(m) +
q^{-d-1} \sum_{j \in {\mathbb F}^{*}_q} \chi(-jt) \sum_{x}
\chi( j\|x\|) \chi(- x \cdot m)\\ \hfill
\\&=&q^{-1}\delta(m)+ Q^d q^{-\frac{d+2}{2}} \sum_{j \in {\mathbb
F}^{*}_q}
\chi\left(\frac{\|m\|}{4j}+jt\right)\eta^d(-j),\end{array}\end{equation}
where $\delta(m)=1$ if $m=(0,\ldots,0)$ and $\delta(m)=0$
otherwise.
In the last line we have completed the square, changed $j$ to
$-j$, and used $d$ times the Gauss sum equality
\begin{equation}
\sum_{c\in {\mathbb F}_q} \chi(jc^2) = \eta(j)\sum_{c\in{\mathbb
F}_q}\eta(c)\chi(c)=\eta(j)\sum_{c\in{\mathbb F}_q^*}\eta(c)\chi(c) =
Q\sqrt{q}\,\eta(j),
\label{gauss}\end{equation} where the constant $Q$ equals $\pm1$ or $\pm i$, depending on
$q$, and $\eta$ is the quadratic multiplicative character (or the
Legendre symbol) of ${\mathbb F}_q^*$.
The conclusion now follows from the following classical estimate due to
A. Weil (\cite{We48}).
\begin{theorem} \label{kloosterman} Let
$$ K(a)=\sum_{s \not=0} \chi(as+s^{-1}) \psi(s), $$ where
$\psi$ is a multiplicative character on ${\Bbb F}_q^{*}$. Then
$$ |K(a)| \leq 2 \sqrt{q}$$ if $a \not=0$.
\end{theorem}
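Both the Gauss-sum identity (\ref{gauss}) and Weil's bound are easy to test numerically for a small prime. The sketch below takes $q=11$ (an arbitrary choice) and, for simplicity, the trivial character in place of $\psi$.

```python
import cmath
from math import sqrt

q = 11                                          # a small odd prime, for illustration
chi = lambda a: cmath.exp(2j * cmath.pi * (a % q) / q)

# |sum_c chi(j c^2)| = sqrt(q) for every j != 0, as in the Gauss sum identity
gauss_ok = all(abs(abs(sum(chi(j * c * c) for c in range(q))) - sqrt(q)) < 1e-9
               for j in range(1, q))

# Weil: |K(a)| = |sum_{s != 0} chi(a s + s^{-1})| <= 2 sqrt(q) for a != 0
# (taking the trivial multiplicative character)
kloosterman_ok = all(
    abs(sum(chi(a * s + pow(s, -1, q)) for s in range(1, q))) <= 2 * sqrt(q) + 1e-9
    for a in range(1, q))
```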
\vskip.125in
\newpage
https://arxiv.org/abs/2010.05415 | Unweighted linear congruences with distinct coordinates and the Varshamov--Tenengolts codes | In this paper, we first give explicit formulas for the number of solutions of unweighted linear congruences with distinct coordinates. Our main tools are properties of Ramanujan sums and of the discrete Fourier transform of arithmetic functions. Then, as an application, we derive an explicit formula for the number of codewords in the Varshamov--Tenengolts code $VT_b(n)$ with Hamming weight $k$, that is, with exactly $k$ $1$'s. The Varshamov--Tenengolts codes are an important class of codes that are capable of correcting asymmetric errors on a $Z$-channel. As another application, we derive Ginzburg's formula for the number of codewords in $VT_b(n)$, that is, $|VT_b(n)|$. We even go further and discuss connections to several other combinatorial problems, some of which have appeared in seemingly unrelated contexts. This provides a general framework and gives new insight into all these problems which might lead to further work. |
\section{Introduction}\label{Sec_1}
A \textit{$Z$-channel} (also called a \textit{binary asymmetric channel}) is a channel with binary input and binary output where a transmitted $0$ is always received correctly but a transmitted $1$ may be received as either $1$ or $0$. These channels have many applications, for example, some data storage systems and optical communication systems can be modelled using these channels. In 1965, Varshamov and Tenengolts \cite{VATE} introduced an important class of codes, known as the Varshamov--Tenengolts codes or VT-codes, that are capable of correcting asymmetric errors on a $Z$-channel (see also \cite{STYO, VAR}). Levenshtein \cite{LEV1, LEV2}, by giving an elegant decoding algorithm, showed that these codes could also be used for correcting a single deletion or insertion. Using the Varshamov--Tenengolts codes, Gevorkyan and Kabatiansky \cite{GEKA} constructed a class of binary codes of a specific length correcting single localized errors whose cardinality attains the ordinary Hamming bound.
\begin{definition}
Let $n$ be a positive integer and $0\leq b\leq n$ be a fixed integer. The Varshamov--Tenengolts code $VT_b(n)$ is the set of all binary vectors $\langle y_1,\ldots,y_n \rangle$ such that
$$
\sum_{i=1}^{n}iy_i \equiv b \pmod{n+1}.
$$
\end{definition}
For example, $VT_0(5)=\lbrace 00000, 10001, 01010, 11100, 00111, 11011 \rbrace$, where we have shown vectors as strings. So, $|VT_0(5)|=6$. The \textit{Hamming weight} of a string over an alphabet is defined as the number of non-zero symbols in the string. Equivalently, the Hamming weight of a string is the Hamming distance between that string and the all-zero string of the same length. For example, the Hamming weight of $01010$ is $2$, and the number of codewords in $VT_0(5)$ with Hamming weight $2$ is $2$.
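The example above is easy to reproduce by exhaustive enumeration; a short Python sketch follows (positions are 1-indexed, as in the definition).

```python
from itertools import product

def vt_code(n, b):
    # all binary vectors <y_1,...,y_n> with sum_i i*y_i = b (mod n+1)
    return [y for y in product((0, 1), repeat=n)
            if sum((i + 1) * yi for i, yi in enumerate(y)) % (n + 1) == b]

code = vt_code(5, 0)
size = len(code)                                   # |VT_0(5)|
weight2 = sum(1 for y in code if sum(y) == 2)      # codewords of Hamming weight 2
```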
Varshamov in his fundamental paper ``On an arithmetic function with an application in the theory of coding'' (\cite{VAR2}) proved that the maximum number of codewords in the Varshamov--Tenengolts code $VT_b(n)$ is achieved when $b=0$, that is, $|VT_0(n)| \geq |VT_b(n)|$ for all $b$. Several natural questions arise: What is the number of codewords in the Varshamov--Tenengolts code $VT_b(n)$, that is, $|VT_b(n)|$? Given a positive integer $k$, what is the number of codewords in $VT_b(n)$ with Hamming weight $k$, that is, with exactly $k$ $1$'s? Ginzburg \cite{GIN} in 1967 considered the first question and proved an explicit formula for $|VT_b(n)|$. In this paper, we deal with both questions and obtain explicit formulas for them via a novel approach, namely, \textit{connecting the Varshamov--Tenengolts codes to linear congruences with distinct coordinates}. We even go further and show that the number of solutions of these congruences is related to several other combinatorial problems, some of which have appeared in seemingly unrelated contexts. (For example, as we will discuss in Section~\ref{Sec_4}, Razen, Seberry, and Wehrhahn \cite{RSW} considered two special cases of a function considered in this paper and gave an application in coding theory in finding the complete weight enumerator of a code generated by a circulant matrix.) This provides a general framework and gives new insight into all these problems which might lead to further work. Let us now describe these congruences.
Throughout the paper, we use $(a_1,\ldots,a_k)$ to denote the greatest common divisor (gcd) of the integers $a_1,\ldots,a_k$, and write $\langle a_1,\ldots,a_k\rangle$ for an ordered $k$-tuple of integers. Let $a_1,\ldots,a_k,b,n\in \mathbb{Z}$, $n\geq 1$. A linear congruence in $k$ unknowns $x_1,\ldots,x_k$ is of the form
\begin{align} \label{cong form}
a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}.
\end{align}
By a solution of (\ref{cong form}), we mean an $\mathbf{x}=\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_n^k$ that satisfies (\ref{cong form}). The following result, proved by D. N. Lehmer \cite{LEH2}, gives the number of solutions of the above linear congruence:
\begin{proposition}\label{Prop: lin cong}
Let $a_1,\ldots,a_k,b,n\in \mathbb{Z}$, $n\geq 1$. The linear congruence $a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$ has a solution $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ if and only if $\ell \mid b$, where
$\ell=(a_1, \ldots, a_k, n)$. Furthermore, if this condition is satisfied, then there are $\ell n^{k-1}$ solutions.
\end{proposition}
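Proposition \ref{Prop: lin cong} can be verified by brute force for small moduli. The Python sketch below compares Lehmer's count $\ell\, n^{k-1}$ with direct enumeration for a few arbitrarily chosen congruences.

```python
from functools import reduce
from itertools import product
from math import gcd

def count_brute(a, b, n):
    # direct count of solutions of a_1 x_1 + ... + a_k x_k = b (mod n) in Z_n^k
    return sum(1 for x in product(range(n), repeat=len(a))
               if sum(ai * xi for ai, xi in zip(a, x)) % n == b % n)

def count_lehmer(a, b, n):
    ell = reduce(gcd, list(a) + [n])        # ell = (a_1, ..., a_k, n)
    return ell * n ** (len(a) - 1) if b % ell == 0 else 0

cases = [((2, 4), 2, 6), ((2, 4), 1, 6), ((3, 5, 7), 4, 10), ((6, 9), 3, 12)]
agree = all(count_brute(a, b, n) == count_lehmer(a, b, n) for a, b, n in cases)
```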
Counting the number of solutions of the above congruence with some restrictions on the solutions is also a problem of great interest. As an important example, one can mention the restrictions $(x_i,n)=t_i$ ($1\leq i\leq k$), where $t_1,\ldots,t_k$ are given positive divisors of $n$. The number of solutions of the linear congruences with the above restrictions, which we called {\it restricted linear congruences} in \cite{BKSTT}, was first considered by Rademacher \cite{Rad1925} in 1925 and Brauer \cite{Bra1926} in 1926, in the special case of $a_i=t_i=1$ $(1\leq i \leq k)$. Since then, this problem has been studied, in several other special cases, in many papers (very recently, we studied it in its `most general case' in \cite{BKSTT}) and has found very interesting applications in number theory, combinatorics, geometry, physics, computer science, and cryptography; see \cite{BKS2, BKSTT, BKSTT2, JAWILL} for a detailed discussion about this problem and a comprehensive list of references.
Another restriction of potential interest is imposing the condition that all $x_i$ are {\it distinct} modulo $n$. Unlike the first problem, there seems to be very little published on the second problem. Recently, Grynkiewicz et al. \cite{GPP}, using tools from additive combinatorics and group theory, proved necessary and sufficient conditions under which the linear congruence $a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$, where $a_1,\ldots,a_k,b,n$ ($n\geq 1$) are arbitrary integers, has a solution $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ with all $x_i$ distinct modulo $n$; see also \cite{ADP, GPP} for connections to zero-sum theory. So, it would be an interesting problem to give an explicit formula for the number of such solutions. Quite surprisingly, this problem was first considered, in a special case, by Sch\"{o}nemann \cite{SCH} almost two centuries ago(!) but his result seems to have been forgotten. Sch\"{o}nemann \cite{SCH} proved an explicit formula for the number of such solutions when $b=0$, $n=p$ a prime, and $\sum_{i=1}^k a_i \equiv 0 \pmod{p}$ but $\sum_{i \in I} a_i \not\equiv 0 \pmod{p}$ for all $I\varsubsetneq \lbrace 1, \ldots, k\rbrace$. Very recently, the authors \cite{BKS6} generalized Sch\"{o}nemann's theorem using Proposition~\ref{Prop: lin cong} and a result on graph enumeration recently obtained by Ardila et al. \cite{ACH}. Specifically, we obtained an explicit formula for the number of solutions of the linear congruence $a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$, with all $x_i$ distinct modulo $n$, when $(\sum_{i \in I} a_i, n)=1$ for all $I\varsubsetneq \lbrace 1, \ldots, k\rbrace$, where $a_1,\ldots,a_k,b,n$ $(n\geq 1)$ are arbitrary integers. Clearly, this result does not resolve the problem in its full generality; for example, it does not cover the important case of $a_i=1$ ($1\leq i\leq k$) and this is what we consider in this paper with an entirely different approach. 
Specifically, we give an explicit formula for the number $N_n(k,b)$ of such solutions when $a_i=1$ ($1\leq i\leq k$), and do the same when in addition all $x_i$ are \textit{positive} modulo $n$.
Our main tools in this paper are properties of Ramanujan sums and of the discrete Fourier transform of arithmetic functions which are reviewed in the next section. In Section~\ref{Sec_3}, we derive the explicit formulas, and discuss applications to the Varshamov--Tenengolts codes. In Section~\ref{Sec_4}, we discuss connections to several other combinatorial contexts.
\section{Ramanujan sums and discrete Fourier transform} \label{Sec_2}
Let $e(x)=\exp(2\pi ix)$ be the complex exponential with period 1. For integers $m$ and $n$ with $n \geq 1$ the quantity
\begin{align}\label{def1}
c_n(m) = \mathlarger{\sum}_{\substack{j=1 \\ (j,n)=1}}^{n}
e\!\left(\frac{jm}{n}\right)
\end{align}
is called a {\it Ramanujan sum}. It is the sum of the $m$-th powers of the primitive $n$-th roots of unity, and is also denoted by $c(m,n)$ in the literature. From (\ref{def1}), it is clear that $c_n(-m) = c_n(m)$. Clearly, $c_n(0)=\varphi (n)$, where $\varphi (n)$ is {\it Euler's totient function}. Also, $c_n(1)=\mu (n)$, where $\mu (n)$ is the {\it M\"{o}bius function}. The following theorem, attributed to Kluyver~\cite{KLU}, gives an explicit formula for $c_n(m)$:
\begin{theorem} \label{thm:Ram Mob}
For integers $m$ and $n$, with $n \geq 1$,
\begin{align}\label{for:Ram Mob}
c_n(m) = \mathlarger{\sum}_{d\, \mid\, (m,n)} \mu
\!\left(\frac{n}{d}\right)d.
\end{align}
\end{theorem}
By applying the M\"{o}bius inversion formula, Theorem~\ref{thm:Ram Mob} yields the following property: For $m,n\geq 1$,
\begin{align} \label{Orth1 for cons}
\sum_{d\, \mid\, n} c_{d}(m)&=
\begin{cases}
n, & \text{if $n\mid m$},\\
0, & \text{if $n\nmid m$}.
\end{cases}
\end{align}
The {\it von Sterneck number} (\cite{von}) is defined by
\begin{align}\label{def3}
\Phi(m,n)=\frac{\varphi (n)}{\varphi \left(\frac{n}{\left(m,n\right)}\right)}\mu \!\left(\frac{n}{\left(m,n\right)} \right).
\end{align}
A crucial fact in studying Ramanujan sums and their applications is that they coincide with the von Sterneck number. This result is attributed to Kluyver~\cite{KLU}:
\begin{theorem} \label{thm:von rama}
For integers $m$ and $n$, with $n \geq 1$, we have
\begin{align}\label{von rama for}
\Phi(m,n)=c_n(m).
\end{align}
\end{theorem}
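The three descriptions of $c_n(m)$, namely the defining sum (\ref{def1}), Kluyver's formula (\ref{for:Ram Mob}), and the von Sterneck quotient (\ref{def3}), can be compared numerically; a Python sketch (the ranges tested are arbitrary):

```python
import cmath
from math import gcd

def mobius(n):
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0                 # a squared prime factor
            res = -res
        p += 1
    return -res if n > 1 else res

def phi(n):
    return sum(1 for j in range(1, n + 1) if gcd(j, n) == 1)

def c_def(n, m):                         # the defining exponential sum
    s = sum(cmath.exp(2j * cmath.pi * j * m / n)
            for j in range(1, n + 1) if gcd(j, n) == 1)
    return round(s.real)                 # Ramanujan sums are integers

def c_kluyver(n, m):                     # sum_{d | (m,n)} mu(n/d) d
    return sum(mobius(n // d) * d
               for d in range(1, n + 1) if n % d == 0 and m % d == 0)

def c_sterneck(n, m):                    # phi(n)/phi(n/(m,n)) * mu(n/(m,n))
    g = gcd(m, n)
    return phi(n) // phi(n // g) * mobius(n // g)

agree = all(c_def(n, m) == c_kluyver(n, m) == c_sterneck(n, m)
            for n in range(1, 21) for m in range(0, 21))
```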
A function $f:\mathbb{Z} \to \mathbb{C}$ is called {\it periodic} with period $n$ (also called {\it $n$-periodic} or {\it periodic} modulo $n$) if $f(m + n) = f(m)$, for every $m\in \mathbb{Z}$. In this case $f$ is determined by the finite vector $(f(1),\ldots,f(n))$. From (\ref{def1}) it is clear that $c_n(m)$ is a periodic function of $m$ with period $n$.
We define the {\it discrete Fourier transform} (DFT) of an $n$-periodic function $f$ as the function
$\widehat{f}={\cal F}(f)$, given by
\begin{align}\label{FFT1}
\widehat{f}(b)=\mathlarger{\sum}_{j=1}^{n}f(j)e\! \left(\frac{-bj}{n}\right)\quad (b\in \mathbb{Z}).
\end{align}
The standard representation of $f$ is obtained from the Fourier representation $\widehat{f}$ by
\begin{align}\label{FFT2}
f(b)=\frac1{n} \mathlarger{\sum}_{j=1}^{n}\widehat{f}(j)e\!\left(\frac{bj}{n}\right) \quad (b\in \mathbb{Z}),
\end{align}
which is the {\it inverse discrete Fourier transform} (IDFT); see, e.g., \cite[p.\ 109]{MOVA}.
\section{Solutions with distinct coordinates}\label{Sec_3}
In this section, we obtain an explicit formula for the number of solutions $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ of the linear congruence $x_1+\cdots +x_k\equiv b \pmod{n}$, with all $x_i$ distinct modulo $n$. First, we need some preliminary results.
\begin{lemma}\label{lem: cyclo 1}
Let $n$ be a positive integer and $m$ be a non-negative integer. We have
$$
\mathlarger{\prod}_{j=1}^{n}\left(1-ze^{2\pi ijm/n}\right)=(1-z^{\frac{n}{d}})^d,
$$
where $d=(m,n)$.
\end{lemma}
\begin{proof}
It is well-known that (see, e.g., \cite[p. 167]{STAN})
$$
1-z^n=\mathlarger{\prod}_{j=1}^{n}\left(1-ze^{2\pi ij/n}\right).
$$
Now, letting $d=(m,n)$, we obtain
\begin{align*}
\mathlarger{\prod}_{j=1}^{n}\left(1-ze^{2\pi ijm/n}\right) &= \mathlarger{\prod}_{j=1}^{n} \left(1-ze^{2\pi ij\frac{m/d}{n/d}}\right)\\
&= \left(\mathlarger{\prod}_{j=1}^{n/d}\left(1-ze^{2\pi ij\frac{m/d}{n/d}}\right)\right)^d\\
&{\stackrel{(\frac{m}{d},\frac{n}{d})=1}{=}} \left(\mathlarger{\prod}_{j=1}^{n/d}\left(1-ze^{\frac{2\pi ij}{n/d}}\right)\right)^d=(1-z^{\frac{n}{d}})^d.
\end{align*}
\end{proof}
Similarly, we can prove that:
\begin{lemma}\label{lem: cyclo 2}
Let $n$ be a positive integer and $m$ be a non-negative integer. We have
$$
\mathlarger{\prod}_{j=1}^{n}\left(z-e^{2\pi ijm/n}\right)=(z^{\frac{n}{d}}-1)^d,
$$
where $d=(m,n)$.
\end{lemma}
Now, we simply get:
\begin{corollary}\label{cor: cyclo 1}
Let $n$ be a positive integer and $m$, $k$ be non-negative integers. The coefficient of $z^k$ in
$$
\mathlarger{\prod}_{j=1}^{n}\left(1+ze^{2\pi ijm/n}\right),
$$
is $(-1)^{k+\frac{kd}{n}}\binom{d}{\frac{kd}{n}}$, where $d=(m,n)$. Note that the binomial coefficient $\binom{d}{\frac{kd}{n}}$ equals zero if $\frac{kd}{n}$ is not an integer.
\end{corollary}
Now, we are ready to obtain an explicit formula for the number of solutions of the linear congruence.
\begin{theorem} \label{main thm dist ai=1}
Let $n$ be a positive integer and $b \in \mathbb{Z}_n$. The number $N_n(k,b)$ of solutions $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ of the linear congruence $x_1+\cdots +x_k\equiv b \pmod{n}$, with all $x_i$ distinct modulo $n$, is
\begin{align} \label{main thm dist ai=1: for}
N_n(k,b)=\frac{(-1)^k k!}{n}\mathlarger{\sum}_{d\, \mid \, (n,\;k)}(-1)^{\frac{k}{d}}c_{d}(b)\binom{\frac{n}{d}}{\frac{k}{d}}.
\end{align}
\end{theorem}
\begin{proof}
It is well-known that (see, e.g., \cite[pp. 3-4]{GUP}) the number of partitions of $b$ into exactly $k$ \textit{distinct} parts each taken from the given set $A$, is the coefficient of $q^bz^k$ in
$$
\mathlarger{\prod}_{j \in A}\left(1+zq^j\right).
$$
Now, take $A=\mathbb{Z}_n$ and $q=e^{2\pi im/n}$, where $m$ is a non-negative integer. Then, the number $P_n(k,b)$ of partitions of $b$ into exactly $k$ \textit{distinct} parts each taken from $\mathbb{Z}_n$ (that is, the number of solutions of the above linear congruence, with all $x_i$ distinct modulo $n$, if order does not matter), is the coefficient of $e^{2\pi ibm/n}z^k$ in
$$
\mathlarger{\prod}_{j=1}^{n}\left(1+ze^{2\pi ijm/n}\right).
$$
This in turn implies that
$$
\mathlarger{\sum}_{b=1}^{n}P_n(k,b)e^{2\pi ibm/n} = \text{the coefficient of $z^k$ in $\mathlarger{\prod}_{j=1}^{n}\left(1+ze^{2\pi ijm/n}\right)$}.
$$
Let $e(x)=\exp(2\pi ix)$. Note that $N_n(k,b)=k!P_n(k,b)$. Now, using Corollary~\ref{cor: cyclo 1}, we get
$$
\mathlarger{\sum}_{b=1}^{n}N_n(k,b)e\left(\frac{bm}{n}\right) = (-1)^{k+\frac{kd}{n}}k!\binom{d}{\frac{kd}{n}},
$$
where $d=(m,n)$. Now, by (\ref{FFT1}) and (\ref{FFT2}), we obtain
\begin{align*}
N_n(k,b) &= \frac{(-1)^{k}k!}{n}\mathlarger{\sum}_{m=1}^{n}(-1)^{\frac{kd}{n}}e\left(\frac{-bm}{n}\right)\binom{d}{\frac{kd}{n}}\\
&= \frac{(-1)^{k}k!}{n}\mathlarger{\sum}_{d\, \mid \, n}\mathlarger{\sum}_{\substack{m=1 \\ (m,\;n)=d}}^{n}(-1)^{\frac{kd}{n}}e\left(\frac{-bm}{n}\right)\binom{d}{\frac{kd}{n}}\\
&{\stackrel{m'=m/d}{=}} \;\; \frac{(-1)^{k}k!}{n}\mathlarger{\sum}_{d\, \mid \, n}\mathlarger{\sum}_{\substack{m'=1 \\ (m',\;n/d)=1}}^{n/d}(-1)^{\frac{kd}{n}}e\left(\frac{-bm'}{n/d}\right)\binom{d}{\frac{kd}{n}}\\
&= \frac{(-1)^{k}k!}{n}\mathlarger{\sum}_{d\, \mid \, n}(-1)^{\frac{kd}{n}}c_{n/d}(-b)\binom{d}{\frac{kd}{n}}\\
&= \frac{(-1)^{k}k!}{n}\mathlarger{\sum}_{d\, \mid \, n}(-1)^{\frac{kd}{n}}c_{n/d}(b)\binom{d}{\frac{kd}{n}}\\
&= \frac{(-1)^{k}k!}{n}\mathlarger{\sum}_{d\, \mid \, n}(-1)^{\frac{k}{d}}c_{d}(b)\binom{\frac{n}{d}}{\frac{k}{d}}\\
&= \frac{(-1)^{k}k!}{n}\mathlarger{\sum}_{d\, \mid \, (n,\;k)}(-1)^{\frac{k}{d}}c_{d}(b)\binom{\frac{n}{d}}{\frac{k}{d}}.
\end{align*}
\end{proof}
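As a sanity check, the formula of Theorem \ref{main thm dist ai=1} can be compared with direct enumeration for small $n$; a Python sketch (using Kluyver's formula for $c_d(b)$, with arbitrarily chosen test ranges):

```python
from itertools import permutations
from math import comb, factorial, gcd

def mobius(n):
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def ramanujan(n, m):                     # c_n(m) via Kluyver's formula
    return sum(mobius(n // d) * d
               for d in range(1, n + 1) if n % d == 0 and m % d == 0)

def N_formula(n, k, b):
    g = gcd(n, k)
    s = sum((-1) ** (k // d) * ramanujan(d, b) * comb(n // d, k // d)
            for d in range(1, g + 1) if g % d == 0)
    return (-1) ** k * factorial(k) * s // n

def N_brute(n, k, b):                    # tuples with distinct coordinates
    return sum(1 for x in permutations(range(n), k) if sum(x) % n == b % n)

agree = all(N_formula(n, k, b) == N_brute(n, k, b)
            for n in (4, 5, 6) for k in range(n + 1) for b in range(n))
```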
\begin{corollary} \label{special cases: b=0,1}
If $n$ or $k$ is odd then from (\ref{main thm dist ai=1: for}) we obtain the following important special cases of the function $P_n(k,b)=\frac{1}{k!}N_n(k,b)$:
\begin{align} \label{special case: b=0}
P_n(k,0)= \frac{1}{n}\mathlarger{\sum}_{d\, \mid \, (n,\;k)}\varphi(d)\binom{\frac{n}{d}}{\frac{k}{d}},
\end{align}
\begin{align} \label{special case: b=1}
P_n(k,1)= \frac{1}{n}\mathlarger{\sum}_{d\, \mid \, (n,\;k)}\mu(d)\binom{\frac{n}{d}}{\frac{k}{d}}.
\end{align}
\end{corollary}
\begin{corollary}
If $(n,k)=1$ then (\ref{main thm dist ai=1: for}) is independent of $b$ and simplifies to
$$N_n(k)=\frac{k!}{n}\binom{n}{k}.$$
(Of course, this can also be proved directly.) If in addition we have $n=2k+1$ then
$$
P_n(k)=\frac{1}{k!}N_n(k)=\frac{1}{2k+1}\binom{2k+1}{k}=\frac{1}{k+1}\binom{2k}{k},
$$
which is the $k$th Catalan number.
\end{corollary}
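The Catalan-number case is easy to see experimentally: for $n=2k+1$ the $k$-subsets of $\mathbb{Z}_n$ distribute evenly over the residue classes of their sums. A Python sketch with the arbitrary choice $k=3$:

```python
from itertools import combinations
from math import comb

k = 3
n = 2 * k + 1                            # here (n, k) = 1
counts = [0] * n
for s in combinations(range(n), k):      # k-subsets of Z_n
    counts[sum(s) % n] += 1
catalan = comb(2 * k, k) // (k + 1)      # the k-th Catalan number
```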
\begin{rema}
Using (\ref{Orth1 for cons}), it is easy to see that (\ref{main thm dist ai=1: for}) also works when $k=0$.
\end{rema}
Now, we introduce the important function $T_n(b)$, which is the sum of $P_n(k,b)$ over $k$. The function $T_n(b)$ has several interpretations; for example, it counts the number of subsets of the set $\lbrace 1, 2, \ldots, n \rbrace$ which sum to $b$ modulo $n$.
\begin{corollary}\label{nice function}
Let $T_n(b):=\sum_{k=0}^{n}\frac{1}{k!}N_n(k,b)=\sum_{k=0}^{n}P_n(k,b)$. Then we have
\begin{align} \label{nice function: for}
T_n(b)=\frac{1}{n}\mathlarger{\sum}_{\substack{d\, \mid \, n \\ d \; \textnormal{odd}}}c_{d}(b)2^{\frac{n}{d}}.
\end{align}
\end{corollary}
\begin{proof} We have
\begin{align*}
T_n(b)&= \mathlarger{\sum}_{k=0}^{n} \frac{(-1)^{k}}{n}\mathlarger{\sum}_{d\, \mid \, (n,\;k)}(-1)^{\frac{k}{d}}\binom{\frac{n}{d}}{\frac{k}{d}}c_{d}(b)\\
&= \frac{1}{n}\mathlarger{\sum}_{d\, \mid \, n}c_{d}(b)\mathlarger{\sum}_{\substack{k=0 \\ d\, \mid \,k}}^{n}(-1)^{k+\frac{k}{d}}\binom{\frac{n}{d}}{\frac{k}{d}}\\
&= \frac{1}{n}\mathlarger{\sum}_{\substack{d\, \mid \, n \\ d \; \textnormal{odd}}}c_{d}(b)\mathlarger{\sum}_{\substack{k=0 \\ d\, \mid \,k}}^{n}(-1)^{k+\frac{k}{d}}\binom{\frac{n}{d}}{\frac{k}{d}}
+\frac{1}{n}\mathlarger{\sum}_{\substack{d\, \mid \, n \\ d \; \textnormal{even}}}c_{d}(b)\mathlarger{\sum}_{\substack{k=0 \\ d\, \mid \,k}}^{n}(-1)^{k+\frac{k}{d}}\binom{\frac{n}{d}}{\frac{k}{d}}
\\
&=\frac{1}{n}\mathlarger{\sum}_{\substack{d\, \mid \, n \\ d \; \textnormal{odd}}}c_{d}(b)2^{\frac{n}{d}}.
\end{align*}
Note that in the last equality we have used the fact that if $d \mid n$ and $d$ is even then
$$
\mathlarger{\sum}_{\substack{k=0 \\ d\, \mid \,k}}^{n}(-1)^{k+\frac{k}{d}}\binom{\frac{n}{d}}{\frac{k}{d}}
= \mathlarger{\sum}_{\substack{k=0 \\ d\, \mid \,k}}^{n}(-1)^{\frac{k}{d}}\binom{\frac{n}{d}}{\frac{k}{d}}= 0.
$$
\end{proof}
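Formula (\ref{nice function: for}) can likewise be tested against a direct subset count; a short Python sketch (using Kluyver's formula for $c_d(b)$, with arbitrarily chosen test ranges):

```python
def mobius(n):
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def ramanujan(n, m):                     # c_n(m) via Kluyver's formula
    return sum(mobius(n // d) * d
               for d in range(1, n + 1) if n % d == 0 and m % d == 0)

def T_formula(n, b):                     # (1/n) sum over odd d | n of c_d(b) 2^{n/d}
    return sum(ramanujan(d, b) * 2 ** (n // d)
               for d in range(1, n + 1, 2) if n % d == 0) // n

def T_brute(n, b):                       # subsets of {1,...,n} summing to b mod n
    return sum(1 for mask in range(1 << n)
               if sum(i + 1 for i in range(n) if mask >> i & 1) % n == b % n)

agree = all(T_formula(n, b) == T_brute(n, b)
            for n in range(1, 9) for b in range(n))
```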
What is the number of subsets of the set $\lbrace 1, 2, \ldots, n-1 \rbrace$ which sum to $b$ modulo $n$? Using Corollary~\ref{nice function}, we can obtain an explicit formula for the number of such subsets (see also \cite{MAZ}).
\begin{corollary}\label{nice function 2}
The number $T'_n(b)$ of subsets of the set $\lbrace 1, 2, \ldots, n-1 \rbrace$ which sum to $b$ modulo $n$ is
\begin{align} \label{nice function 2: for}
T'_n(b)=\frac{1}{2}T_n(b)=\frac{1}{2n}\mathlarger{\sum}_{\substack{d\, \mid \, n \\ d \; \textnormal{odd}}}c_{d}(b)2^{\frac{n}{d}}.
\end{align}
\end{corollary}
\begin{proof}
Let $A$ be a subset of the set $\lbrace 1, 2, \ldots, n-1 \rbrace$ which sums to $b$ modulo $n$. Then $A$ and $A\cup \lbrace n \rbrace$ are both subsets of the set $\lbrace 1, 2, \ldots, n \rbrace$ and both sum to $b$ modulo $n$; conversely, every subset of $\lbrace 1, 2, \ldots, n \rbrace$ which sums to $b$ modulo $n$ arises in exactly one of these two ways, since removing the element $n$ does not change the sum modulo $n$. Therefore, $T'_n(b)=\frac{1}{2}T_n(b)$.
\end{proof}
Ginzburg \cite{GIN} in 1967 proved an explicit formula for the number of codewords in the $q$-ary Varshamov--Tenengolts codes, where $q$ is an arbitrary positive integer. This result was later rediscovered by Stanley and Yoder \cite{STYO} in 1973, and in the binary case (that is, when $q=2$) by Sloane \cite{SLO} in 2002. Now, we give a short proof for the binary case which we derive as a consequence of our results.
\begin{corollary}\label{VT exa tot}
The number $|VT_b(n)|$ of codewords in the Varshamov--Tenengolts code $VT_b(n)$ is
\begin{align} \label{VT exa tot: for}
|VT_b(n)|=\frac{1}{2(n+1)}\mathlarger{\sum}_{\substack{d\, \mid \, n+1 \\ d \; \textnormal{odd}}}c_{d}(b)2^{\frac{n+1}{d}}.
\end{align}
\end{corollary}
\begin{proof}
Let $\langle y_1,\ldots,y_n \rangle$ be a codeword in $VT_b(n)$. Note that $\sum_{i=1}^{n}iy_i$ is just the sum of some elements of the set $\lbrace 1, 2, \ldots, n \rbrace$. Therefore, finding the number of codewords in $VT_b(n)$ boils down to finding the number of subsets of the set $\lbrace 1, 2, \ldots, n \rbrace$ which sum to $b$ modulo $n+1$. The result now follows by a direct application of Corollary~\ref{nice function 2}.
\end{proof}
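The cardinality formula \eqref{VT exa tot: for} can likewise be checked against a direct enumeration of codewords for small lengths. A minimal sketch (helper names are ours):

```python
from math import gcd, cos, pi
from itertools import product

def ramanujan_sum(d, b):
    # c_d(b) computed from its defining cosine sum
    return round(sum(cos(2*pi*k*b/d) for k in range(1, d + 1) if gcd(k, d) == 1))

def vt_size_formula(n, b):
    # |VT_b(n)| = (1/(2(n+1))) * sum over odd d | n+1 of c_d(b) * 2^((n+1)/d)
    m = n + 1
    s = sum(ramanujan_sum(d, b) * 2**(m // d)
            for d in range(1, m + 1) if m % d == 0 and d % 2 == 1)
    return s // (2 * m)

def vt_size_bruteforce(n, b):
    # enumerate all binary words y of length n with sum of i*y_i = b mod n+1
    return sum(1 for y in product((0, 1), repeat=n)
               if sum((i + 1) * yi for i, yi in enumerate(y)) % (n + 1) == b % (n + 1))
```

For $n=4$, $b=0$ both functions give $4$: the codewords $0000$, $1001$, $0110$ and $1111$.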
In some applications (for example, in coding theory) we also need to consider the case in which all $x_i$ are \textit{positive} and \textit{distinct} modulo $n$. We now obtain an explicit formula for the number of such solutions.
\begin{theorem} \label{main thm dist pos ai=1}
Let $n$ be a positive integer and $b \in \mathbb{Z}_n$. The number $N_n^{>0}(k,b)$ of solutions $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ of the linear congruence $x_1+\cdots +x_k\equiv b \pmod{n}$, with all $x_i$ positive and distinct modulo $n$, is
\begin{align} \label{main thm dist pos ai=1: for}
N_n^{>0}(k,b)=\frac{(-1)^k k!}{n}\mathlarger{\sum}_{d\, \mid \, n}(-1)^{\lfloor\frac{k}{d}\rfloor}c_{d}(b)\binom{\frac{n}{d}-1}{\lfloor\frac{k}{d}\rfloor}.
\end{align}
\end{theorem}
\begin{proof}
Clearly, $N_n^{>0}(k,b)=N_n(k,b)-N_n^{0}(k,b)$, where $N_n^{0}(k,b)$ denotes the number of solutions $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ with all $x_i$ distinct modulo $n$ and one of the $x_i$ zero modulo $n$. Also, clearly, $N_n^{0}(k,b)=kN_n^{>0}(k-1,b)$. Thus,
\begin{align}\label{relation bet N and N>0}
N_n(k,b)=N_n^{>0}(k,b)+kN_n^{>0}(k-1,b).
\end{align}
Now, we use Theorem~\ref{main thm dist ai=1}. We have
\begin{align*}
N_n(k,b)&= \frac{(-1)^k k!}{n}\mathlarger{\sum}_{d\, \mid \, (n,\;k)}(-1)^{\frac{k}{d}}c_{d}(b)\binom{\frac{n}{d}}{\frac{k}{d}}\\
&= \frac{(-1)^k k!}{n}\mathlarger{\sum}_{d\, \mid \, (n,\;k)}(-1)^{\frac{k}{d}}c_{d}(b)\left(\binom{\frac{n}{d}-1}{\frac{k}{d}}+\binom{\frac{n}{d}-1}{\frac{k}{d}-1}\right)\\
&= \frac{(-1)^k k!}{n}\mathlarger{\sum}_{d\, \mid \, n}c_{d}(b)\left((-1)^{\frac{k}{d}}\binom{\frac{n}{d}-1}{\frac{k}{d}}-(-1)^{\frac{k}{d}-1}\binom{\frac{n}{d}-1}{\frac{k}{d}-1}\right)\\
&= \frac{(-1)^k k!}{n}\mathlarger{\sum}_{d\, \mid \, n}c_{d}(b)\left((-1)^{\lfloor\frac{k}{d}\rfloor}\binom{\frac{n}{d}-1}{\lfloor\frac{k}{d}\rfloor}-(-1)^{\lfloor\frac{k-1}{d}\rfloor}\binom{\frac{n}{d}-1}{\lfloor\frac{k-1}{d}\rfloor}\right)\\
&= \frac{(-1)^k k!}{n}\mathlarger{\sum}_{d\, \mid \, n}(-1)^{\lfloor\frac{k}{d}\rfloor}c_{d}(b)\binom{\frac{n}{d}-1}{\lfloor\frac{k}{d}\rfloor}\\
&+k\frac{(-1)^{k-1} (k-1)!}{n}\mathlarger{\sum}_{d\, \mid \, n}(-1)^{\lfloor\frac{k-1}{d}\rfloor}c_{d}(b)\binom{\frac{n}{d}-1}{\lfloor\frac{k-1}{d}\rfloor}.
\end{align*}
Note that in the fourth equality above we have used the fact that $\lfloor\frac{k}{d}\rfloor=\lfloor\frac{k-1}{d}\rfloor+1$ if $d \mid k$, and $\lfloor\frac{k}{d}\rfloor=\lfloor\frac{k-1}{d}\rfloor$ if $d\nmid k$. Now, recalling (\ref{relation bet N and N>0}) we obtain the desired result.
\end{proof}
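Theorem~\ref{main thm dist pos ai=1} is also straightforward to verify by brute force for small parameters; the sketch below (with helper names of our own choosing) counts ordered $k$-tuples of distinct nonzero residues directly.

```python
from math import gcd, cos, pi, comb, factorial
from itertools import permutations

def ramanujan_sum(d, b):
    # c_d(b) computed from its defining cosine sum
    return round(sum(cos(2*pi*k*b/d) for k in range(1, d + 1) if gcd(k, d) == 1))

def N_pos_formula(n, k, b):
    # ((-1)^k k!/n) * sum over d | n of (-1)^floor(k/d) c_d(b) C(n/d - 1, floor(k/d))
    s = sum((-1)**(k // d) * ramanujan_sum(d, b) * comb(n // d - 1, k // d)
            for d in range(1, n + 1) if n % d == 0)
    return ((-1)**k * factorial(k) * s) // n

def N_pos_bruteforce(n, k, b):
    # ordered k-tuples of distinct elements of {1, ..., n-1} summing to b mod n
    return sum(1 for t in permutations(range(1, n), k) if sum(t) % n == b % n)
```

For example, for $n=5$, $k=2$, $b=0$ both functions return $4$, counting $(1,4)$, $(4,1)$, $(2,3)$ and $(3,2)$.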
We believe that Theorem~\ref{main thm dist pos ai=1} is also a strong tool and might lead to interesting applications. Denote by $VT_b^{1,k}(n)$ the set of codewords in the Varshamov--Tenengolts code $VT_b(n)$ with Hamming weight $k$. Theorem~\ref{main thm dist pos ai=1} immediately gives an explicit formula for the number of such codewords. This result is useful in the study of a class of binary codes that are immune to single repetitions \cite{DOAN}.
\begin{corollary}\label{VT exa k1s}
The number $|VT_b^{1,k}(n)|$ of codewords in the Varshamov--Tenengolts code $VT_b(n)$ with Hamming weight $k$ is
\begin{align} \label{VT exa k1s: for}
|VT_b^{1,k}(n)|=\frac{(-1)^k}{n+1}\mathlarger{\sum}_{d\, \mid \, n+1}(-1)^{\lfloor\frac{k}{d}\rfloor}c_{d}(b)\binom{\frac{n+1}{d}-1}{\lfloor\frac{k}{d}\rfloor}.
\end{align}
\end{corollary}
\begin{proof}
Let $\langle y_1,\ldots,y_n \rangle$ be a codeword in $VT_b(n)$ with Hamming weight $k$, that is, with exactly $k$ $1$'s. Denote by $x_j$ the position of the $j$th one. Note that $1\leq j \leq k$ and $1 \leq x_1 < x_2 < \cdots < x_k \leq n$. Now, we have
$$
\sum_{i=1}^{n}iy_i \equiv b \pmod{n+1} \Longleftrightarrow x_1+\cdots +x_k\equiv b \pmod{n+1}.
$$
Therefore, finding the number of codewords in $VT_b(n)$ with Hamming weight $k$ boils down to finding the number of solutions $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n+1}^k$ of the linear congruence $x_1+\cdots +x_k\equiv b \pmod{n+1}$, with all $x_j$ positive and distinct modulo $n+1$, disregarding the order of the coordinates (which removes the factor $k!$). The result now follows by a direct application of Theorem~\ref{main thm dist pos ai=1}.
\end{proof}
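Again, \eqref{VT exa k1s: for} can be checked directly against an enumeration of the weight-$k$ codewords, i.e., of the $k$-subsets of positions; a short sketch (helper names ours):

```python
from math import gcd, cos, pi, comb
from itertools import combinations

def ramanujan_sum(d, b):
    # c_d(b) computed from its defining cosine sum
    return round(sum(cos(2*pi*k*b/d) for k in range(1, d + 1) if gcd(k, d) == 1))

def vt_weight_formula(n, k, b):
    # |VT_b^{1,k}(n)| per the corollary, with m = n + 1
    m = n + 1
    s = sum((-1)**(k // d) * ramanujan_sum(d, b) * comb(m // d - 1, k // d)
            for d in range(1, m + 1) if m % d == 0)
    return ((-1)**k * s) // m

def vt_weight_bruteforce(n, k, b):
    # k-subsets of {1, ..., n} (the positions of the ones) summing to b mod n+1
    return sum(1 for S in combinations(range(1, n + 1), k)
               if sum(S) % (n + 1) == b % (n + 1))
```

For $n=4$, $k=2$, $b=0$ both give $2$, corresponding to the codewords $1001$ and $0110$.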
\begin{rema}
There is an earlier interesting result of Dolecek and Anantharam \cite{DOAN} which gives the formula (\ref{VT exa k1s: for}) in a special case where the Hamming weight depends on the modulus, but here we give a more general treatment in which the Hamming weight is \textit{arbitrary}. Of course, the expression (3.7) in their paper is exactly the same as our formula (\ref{main thm dist ai=1: for}), so it is an interesting problem to prove a 1-1 correspondence between these two results.
\end{rema}
\section{More connections}\label{Sec_4}
Interestingly, some special cases of the functions $P_n(k,b)$, $N_n(k,b)$, $T_n(b)$, and $T'_n(b)$ that we studied in this paper have appeared in a wide range of combinatorial problems, sometimes in seemingly unrelated contexts. Here we briefly mention some of these connections. It would be interesting to prove 1-1 correspondences between these interpretations.
\bigskip
\textbf{Ordered partitions acted upon by cyclic permutations.} Consider the set of all ordered partitions of a positive integer $n$ into $k$ parts acted upon by the cyclic permutation $(1 2 \ldots k)$. Razen, Seberry, and Wehrhahn \cite{RSW} obtained explicit formulas for the cardinality of the resulting family of orbits and for the number of orbits in this family having exactly $k$ elements. These formulas coincide with the expressions for $P_n(k,0)$ and $P_n(k,1)$, respectively, when $n$ or $k$ is odd (see Corollary~\ref{special cases: b=0,1}). Razen et al. \cite{RSW} also discussed an application in coding theory in finding the complete weight enumerator of a code generated by a circulant matrix.
\bigskip
\textbf{Permutations with given cycle structure and descent set.} Gessel and Reutenauer \cite{GERE} counted permutations in the symmetric group $S_n$ with a given cycle structure and descent set. One of their results gives an explicit formula for the number of $n$-cycles with descent set $\lbrace k \rbrace$, which coincides with the expression for $P_n(k,1)$ when $n$ or $k$ is odd.
\bigskip
\textbf{Fixed-density necklaces and Lyndon words.} If $n$ or $k$ is odd then the expressions for $P_n(k,0)$ and $P_n(k,1)$ give, respectively, the number of fixed-density binary necklaces and fixed-density binary Lyndon words of length $n$ and density $k$, as described by Gilbert and Riordan \cite{GIRI}, and Ruskey and Sawada \cite{RUSA}.
\bigskip
\textbf{Necklace polynomial.} The function $T_n(b)$ is closely related to the polynomial
$$
M(q, n)= \frac{1}{n}\sum_{d\, \mid \, n}\mu(d)q^{\frac{n}{d}},
$$
which is called the \textit{necklace polynomial} of degree $n$ (it is easy to see that $M(q, n)$ is integer-valued for all $q \in \mathbb{Z}$). In fact, if $n$ is odd then $M(2, n)=T_n(1)$. The necklace polynomials turn up in various contexts in combinatorics and algebra.
\bigskip
\textbf{Quasi-necklace polynomial.} The function $T'_n(b)$ is also closely related to the polynomial
$$
M'(q, n)= \frac{1}{2n}\sum_{d\, \mid \, n}\mu(d)q^{\frac{n}{d}},
$$
that we call the \textit{quasi-necklace polynomial} of degree $n$. In fact, if $n$ is odd then $M'(2, n)=T'_n(1)$. The quasi-necklace polynomials also turn up in various contexts in combinatorics. For example, they appear as:
\begin{itemize}
\item the number of transitive unimodal cyclic permutations obtained by Weiss and Rogers \cite{WERO} (motivated by problems related to the structure of the set of periodic orbits of one-dimensional dynamical systems) using methods related to the work of Milnor and Thurston \cite{MITH}. See also \cite{THIB} which gives a generating function for the number of unimodal permutations with a given cycle structure;
\item the number of periodic patterns of the tent map \cite{AREL}.
\end{itemize}
\section*{Acknowledgements}
The authors would like to thank the anonymous referees for a careful reading of the paper and helpful suggestions. During the preparation of this work the first author was supported by a Fellowship from the University of Victoria (UVic Fellowship).
% arXiv:2010.05415 (cs.IT): ``Unweighted linear congruences with distinct coordinates and the Varshamov--Tenengolts codes''
% arXiv:1910.08182: ``Approximate solutions of one dimensional systems with fractional derivative''
\begin{abstract}
The fractional calculus is useful to model non-local phenomena. We construct a method to evaluate the fractional Caputo derivative by means of a simple explicit quadratic segmentary interpolation. This method leads to the numerical resolution of ordinary fractional differential equations. Due to the non-locality of the fractional derivative, we may establish an equivalence between fractional oscillators and ordinary oscillators with a dissipative term.
\end{abstract}
\section{Introduction}
The study of fractional derivatives and their applications in classical and quantum physics has lately received a lot of attention \cite{KST,HER,UCH}. Needless to say, one of the simplest and most studied of those systems is the one dimensional harmonic oscillator; it is therefore a natural point of departure in the study of systems with fractional derivative, a task which has been carried out in \cite{MAI}. The damped oscillator with fractional derivative has also been the object of some studies, see \cite{OO}. Some extensions of the theory to other classical systems have been proposed, see for instance \cite{OO1} and references therein, or \cite{CEY}.
In many of these papers, an analogy was noted between a fractional oscillator and a classical oscillator with a damping term. This idea could be exploited in order to model quantum systems with dissipation, in which the second derivative of the wave function in the Schr\"odinger equation is replaced by a fractional derivative, see \cite{OO2}.
The present work has been inspired by the article by Narahari et al. \cite{AHEC}, where, in addition to the study of the one dimensional harmonic oscillator with fractional derivative, a comparison is given with an equivalent dissipative oscillator described on the phase plane, and the stability of the solution is analyzed.
Throughout the present manuscript, we show that it may be possible to determine a time interval in which the solution of a fractional one dimensional oscillator may be approximated by the solution of a one dimensional ordinary equation with a dissipative term. The idea can be illustrated by a very simple example. Let us consider the Caputo derivative $D^\alpha_0$, defined in \eqref{1} below, and the fractional differential equation $D^\alpha_0\,x(t)=0$, with $1<\alpha\le 2$ and initial conditions $x(0)=0$ and $\dot x(0)=-1$. The solution is $x(t)=-t$. Then, let us consider the equation $\ddot z(t)=-p\dot z(t)$, $p>0$. The goal is to determine a value of $p$ such that the solution of this equation with initial conditions $z(0)=0$ and $\dot z(0)=-1$, i.e., the same initial conditions imposed on the fractional equation, is approximated by the solution of the fractional equation over a finite time interval. This is clear, since the solution of the equation for $z(t)$ is
$$
z(t)=\frac 1p (-1+e^{-pt})\,.
$$
Therefore, on a time interval $0<t<\tau$, with $\tau=1/p$, we have $z(t)=-t+O(t^2)$. In this sense of having similar approximate solutions on a finite interval, we say that the fractional and the dissipative equations are {\it equivalent}. Here, we want to extend this idea.
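This elementary comparison is immediate to verify numerically; a minimal sketch (the value $p=2$ is an arbitrary illustrative choice):

```python
import math

p = 2.0
for i in range(101):
    t = (i / 100) * (1 / p)              # sample the interval 0 <= t <= 1/p
    x = -t                               # solution of the fractional equation
    z = (math.exp(-p * t) - 1) / p       # solution of z'' = -p z'
    # |x - z| = p t^2/2 - p^2 t^3/6 + ... is bounded by its first term
    assert abs(x - z) <= p * t * t / 2 + 1e-12
```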
Observe that in our notion of equivalence, we have discarded the asymptotic regime. This is essentially due to two reasons: i.) for large values of time, the fractional oscillator does not show oscillations; ii.) from a strictly physical point of view, the behaviour of the oscillator, whether linear or non-linear but particularly the latter, is of interest for finite times only. Its asymptotic behaviour is not measurable, has a mathematical interest only, and is not the object of our study.
The present paper is organized as follows: In Section 2, we construct a method to obtain approximate solutions of fractional differential equations, with the fractional Caputo derivative to be defined there, based on segmentary interpolation. This kind of interpolation has been used successfully to obtain approximate solutions of ordinary differential equations \cite{GL}. In Section 3, we apply this method to the fractional linear oscillator and to some other simple examples, and we estimate its precision. We compare the results with those obtained by replacing the fractional oscillator by the ordinary oscillator with a dissipative term. We present a similar analysis, replacing the equation of the oscillator by the van der Pol equation, in Section 4. We close the paper with some concluding remarks.
\section{Caputo fractional derivative and its evaluation by segmentary interpolation}
Let $\alpha$ be a real positive number and denote by $n=\lceil \alpha \rceil$ the smallest integer greater than or equal to $\alpha$. Let us define the Caputo fractional derivative, $D^\alpha_a$, of an $n$ times differentiable function of a real variable, $x(t)$, as \cite{DIE}
\begin{equation}\label{1}
D^\alpha_a \,x(t) = \frac 1{\Gamma(n-\alpha)} \int_a^t \frac{x^{(n)}(s)}{(t-s)^{\alpha-n+1}} \,ds\,,
\end{equation}
where $x^{(n)}(s)$ denotes the $n$-th derivative of the function $x(s)$. Our objective, as announced in the title of the present section, is the evaluation of \eqref{1} using segmentary interpolation. Here, we consider $0<\alpha<1$, so that the only choice for $n$ is $n=1$; this will be the case for some of our applications. Segmentary interpolation is a standard tool of wide use in the approximation of solutions of differential equations \cite{HNW}. Let us sketch the method here for completeness, using an approach that has been employed in previous articles by our group \cite{GL,FLS}.
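Before constructing the interpolator, it is useful to have an independent way of evaluating \eqref{1} on test functions. The sketch below is not the quadratic segmentary scheme of this section: it is a simpler piecewise-constant discretization of $x'(s)$, in which the kernel $(t-s)^{-\alpha}$ is integrated exactly on each sub-interval so that the integrable singularity at $s=t$ causes no trouble. We use it only as a sanity check against the known value $D^\alpha_0\, t^m = \frac{\Gamma(m+1)}{\Gamma(m+1-\alpha)}\,t^{m-\alpha}$.

```python
import math

def caputo(x_dot, t, alpha, N=2000):
    """Approximate D^alpha_0 x(t) for 0 < alpha < 1, given x'(s).

    x'(s) is frozen at the midpoint of each sub-interval, while the
    kernel (t - s)^(-alpha) is integrated exactly there.
    """
    h = t / N
    total = 0.0
    for j in range(N):
        s0, s1 = j * h, (j + 1) * h
        # exact integral of (t - s)^(-alpha) over [s0, s1]
        w = ((t - s0)**(1 - alpha) - (t - s1)**(1 - alpha)) / (1 - alpha)
        total += x_dot((s0 + s1) / 2) * w
    return total / math.gamma(1 - alpha)

# sanity check: for x(t) = t, D^alpha x(t) = t^(1-alpha)/Gamma(2-alpha)
alpha = 0.5
approx = caputo(lambda s: 1.0, 1.0, alpha)
exact = 1.0 / math.gamma(2 - alpha)
```

The same check works for higher monomials, e.g. $x(t)=t^2$ with $x'(s)=2s$.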
Let $[a,b]$ be a compact interval on the real axis $\mathbb R$. We select $n+1$ equally spaced nodes, $a= t_0<t_1<\dots<t_n=b$, with $t_k-t_{k-1}=h$ for all $k=1,2,\dots,n$, so that $nh=b-a$. Let $x(t):[a,b] \longmapsto\mathbb C$ be a continuous function and use the notation $x_k:=x(t_k)$ and $I_k:=[t_{k-1},t_k]$\,, for all $k=1,2,\dots,n$.
Then, a quadratic segmentary interpolator $S(t)$ for the function $x(t)$ is a continuous function $S(t):[a,b]\longmapsto \mathbb C$, with a continuous first derivative, such that
1.- On each interval of the form $I_k=[t_{k-1},t_k]$, $k=1,2,\dots,n$, we have that $S(t)\equiv P_k(t)$, where $P_k(t)$ is a polynomial of order two, depending on the given interval.
2.- The function $S(t)$ interpolates $x(t)$, in the sense that for any of the nodes $\{t_k\}$, one has that
\begin{equation}\label{2}
P_k(t_{k-1})=x_{k-1}\,,\qquad P_k(t_k)=x_k\,, \qquad k=1,2,\dots,n\,.
\end{equation}
The condition on the continuity of the derivative $S'(t)$ implies that
\begin{equation}\label{3}
P'_k(t_k) =P'_{k+1}(t_k)\,, \qquad k=1,2,\dots,n-1\,.
\end{equation}
Thus, the construction of the segmentary interpolator $S(t)$ relies on the construction of the interpolating polynomials $P_k(t)$. We propose the following form for the interpolating polynomials: for each of the intervals $I_k$, let us define,
\begin{equation}\label{4}
P_k(t) =p_k(t)+a_k(t-t_{k-1})(t-t_k)\,,
\end{equation}
with
\begin{equation}\label{5}
p_k(t) =\frac{t-t_{k-1}}{h}\,x_k-\frac{t-t_k}{h}\,x_{k-1}\,,
\end{equation}
and the complex coefficients $a_k$ are given by
\begin{equation}\label{6}
a_k= \sum_{j=0}^n c_{k,j}\,x_j\,.
\end{equation}
We still need to determine the values of the $c_{k,j}$, which are
\begin{equation}\label{7}
c_{k,j} = \left\{ \begin{array}{ll} \frac{(-1)^k}{h^2}\,\eta_1\,, & {\rm if}\; j=0\,, \\[2ex]
\frac{(-1)^{k+1}}{h^2}\,(2\eta_1+\eta_2)\,, & {\rm if}\; j=1\,, \\[2ex]
\frac{(-1)^{k+j}}{h^2}\,(\eta_{j-1}+2\eta_j+\eta_{j+1}) \,, & {\rm if}\; 1<j<n-1\,, \\[2ex]
\frac{(-1)^{k+n-1}}{h^2}\,(2\eta_{n-2}+\eta_{n-1}) \,, & {\rm if}\; j=n-1\,, \\[2ex]
\frac{(-1)^{k+n}}{h^2}\, \eta_{n-1} \,, & {\rm if}\; j=n \,,
\end{array} \right.
\end{equation}
where $\eta_j=j/n$ if $j\le k-1$ and $\eta_j=j/n-1$ if $j>k-1$.
Taking into account that $S(t)$ is an approximation of $x(t)$, on each of the nodes $t_k$ the Caputo fractional derivative \eqref{1} is approximated by
\begin{equation}\label{8}
D^\alpha_{a} x(t_k) \approx \frac{1}{\Gamma(1-\alpha)} \sum_{j=1}^k \int_{t_{j-1}}^{t_j} \frac{P'_j(s)}{(t_k-s)^\alpha}\,ds\,,\qquad k=1,\dots,n\,.
\end{equation}
Then, on each of the intervals $I_k$, we have that $P'_k(t)=\alpha_k\,t+\beta_k$, with
\begin{equation}\label{9}
\alpha_k=2a_k\,,\qquad \beta_k= \frac{x_k-x_{k-1}}{h} - a_k(2t_{k-1}+h)\,,
\end{equation}
and, consequently, equation \eqref{8}, takes the following form:
\begin{equation}\label{10}
D^\alpha_{a} x(t_k) \approx \frac{1}{\Gamma(1-\alpha)} \sum_{j=1}^k \widetilde c_{k,j}\,\alpha_j + \widetilde d_{k,j}\,\beta_j\,.
\end{equation}
The new coefficients $\widetilde c_{k,j}$ and $\widetilde d_{k,j}$ are given by
\begin{equation}\label{11}
\widetilde c_{k,j} = \int_{t_{j-1}}^{t_j} \frac{s\,ds}{(t_k-s)^\alpha}\,,\qquad \widetilde d_{k,j} = \int_{t_{j-1}}^{t_j} \frac{ds}{(t_k-s)^\alpha}\,,
\end{equation}
which obviously depend solely on the partition.
It is customary to choose $a=t_0=0$, which obviously does not restrict generality. Since the integrals in \eqref{11} are easily solvable and we know the expressions for $\alpha_j$ and $\beta_j$, we can write the right hand side of \eqref{10} as
\begin{equation}\label{12}
D^\alpha_{a} x(t_k) \approx \frac{1}{(-1+\alpha)(-2+\alpha)\,\Gamma(1-\alpha)} \, \sum_{j=1}^k \gamma_{k,j}\,\alpha_j\,,
\end{equation}
where,
\begin{eqnarray}\label{13}
\gamma_{k,j} = [h(-j+k)]^{-\alpha} \,[h(1-j+k)]^{-\alpha} \, \Big\{ h(-1+j-k) \,[h(-j+k)]^\alpha \nonumber\\[2ex] \times
[-2h(-2+j+k) +h (-3+2j)\alpha-2(-2+\alpha)t_{j-1}] \nonumber\\[2ex] -h(j-k)\,[h(1-j+k)]^\alpha \, [-2h(-1+j+k)+h(-1+2j)\alpha-2(-2+\alpha) t_{j-1}] \Big\}\,.
\end{eqnarray}
This is a quite simple and workable recipe to obtain, once $x(t)$ is given, the values of its Caputo fractional derivative at the nodes $t_k$, so that we have an estimation of this fractional derivative.
It is interesting to remark that, due to the linear dependence on the nodal values $\{x_j\}$ of the coefficients $\alpha_j=2a_j$ given in \eqref{6}, the derivative $D^\alpha_{a} x(t_k)$ in \eqref{12} can be explicitly determined from $x(t)$.
\subsection{A type of differential equations with fractional derivative}
Let $x(t):[a,b]\longmapsto \mathbb R^m$ be a differentiable real function of the real variable $t$ and $f(t,x):[a,b]\times\mathbb R^m\longmapsto \mathbb R^m$. In addition, we assume that $x(t^*)=x^*$, where $t^*$ is one of the nodes $\{t_k\}$, $a\le t^*\le b$, and $0<\alpha<1$. Then, let us consider the following fractional differential equation:
\begin{equation}\label{14}
D^\alpha_a x(t) = f(t,x(t))\,.
\end{equation}
The objective is to obtain an approximation of the solution of equation \eqref{14} under the condition $x(t^*)=x^*$. We already know how to evaluate the left hand side of \eqref{14} at the nodes $t_k$. Take these nodes with the exception of $t^*$. Then, \eqref{14} provides an algebraic system of equations whose unknowns are $\{x_{j,k}\}_{j=1,k=0}^{m,n}$, with $x_{j,k}:=x_j(t_k)$ and $x_{j,k}\ne x^*_j=x_j(t^*)$. This algebraic system may or may not be linear, depending on the form of $f(t,x(t))$, and is of order $(mn)\times(mn)$. A numerical solution of this system can be obtained by any standard method, which yields a segmentary solution $S(t)$, determined once one has obtained the coefficients $a_k$ defined in \eqref{6}.
In the particular case in which $f(t,x(t))$ contains an eigenvalue $\lambda$ and $f$ is linear with respect to $(\lambda,x)$, this algebraic system is linear and homogeneous. The eigenvalue is then determined in the usual way.
As the reader may easily understand, this method is more general than the usual way to obtain a solution knowing an initial value, since now $t^*$ could be any node. In particular, the restriction to the solution that replaces the initial value condition could be imposed at $t^*$, and this represents a great advantage when compared with the shooting method worked out in \cite{DIE1,DB}.
\section{The fractional linear oscillator}
A simple example of an equation of the type \eqref{14} is the linear oscillator with the fractional derivative, which is defined as
\begin{equation}\label{15}
D_0^\alpha\,x(t) =-\omega^2\,x(t)\,.
\end{equation}
As in the standard harmonic oscillator, the constant is $\omega^2=k/m$, where $m$ is the oscillator mass and $k$ a constant; $\alpha$ is the order of derivation, which in the present case we assume to satisfy $1<\alpha \le 2$. Using the definition \eqref{1}, taking into account that for a differentiable function $f(t)$ (in our case $f(t)=x(t)$ or $f(t)=\dot x(t)$, where the dot means first derivative) we have that
\begin{equation}\label{16}
\lim_{\alpha\to 0^+} \frac1{\Gamma(\alpha+1)}\, t^\alpha\,f(0)=f(0)\,,
\end{equation}
and that here $n=2$, we may integrate by parts \eqref{15} using \eqref{1}, which gives the following integral version of \eqref{15}:
\begin{equation}\label{17}
x(t) = x(0)+\dot x(0)\,t -\frac{\omega^2}{\Gamma(\alpha)}\int_0^t (t-s)^{\alpha-1}\,x(s)\,ds\,.
\end{equation}
The general solution has the form
\begin{equation}\label{18}
x(t) = c_1\, E_{\alpha,1}(-\omega^2 t^{\alpha}) + c_2\,t\, E_{\alpha,2} (-\omega^2 t^{\alpha})\,,
\end{equation}
where $E_{\alpha,\beta}(z)$ is the so called Mittag-Leffler function
\begin{equation}\label{19}
E_{\alpha,\beta}(z) = \sum_{k=0}^\infty \frac{z^k}{\Gamma(\alpha k+\beta)}\,.
\end{equation}
Thus, in order to obtain a particular solution, we have to impose some initial conditions. For instance, if we choose $x(0)=1$ and $\dot x(0)=0$, the solution to \eqref{17} with these initial conditions is given by
\begin{equation}\label{20}
x(t)= E_{\alpha,1} (-\omega^2 t^{\alpha})\,.
\end{equation}
Let us find a particular numerical solution to the fractional linear oscillator, using the method introduced in Section 2.1. We have to choose a particular value for $\omega$ and the simplest possibility is $\omega=1$. This is developed in the forthcoming subsection.
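Before doing so, it is useful to have an independent numerical reference for the solution \eqref{20}. The sketch below integrates $D^\alpha_0 x=-x$, $x(0)=1$, with $0<\alpha<1$, using a standard product-integration (L1) scheme; this is not the segmentary method of Section 2, but it gives a second, independent approximation against which the Mittag-Leffler series can be compared.

```python
import math

def mittag_leffler(alpha, z, terms=100):
    # E_{alpha,1}(z) via its defining power series
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

def solve_l1(alpha, T=1.0, n=200):
    """L1 scheme for D^alpha x = -x, x(0) = 1, on [0, T], with 0 < alpha < 1."""
    h = T / n
    c = h**alpha * math.gamma(2 - alpha)
    # L1 weights b_j = (j+1)^(1-alpha) - j^(1-alpha), with b_0 = 1
    b = [(j + 1)**(1 - alpha) - j**(1 - alpha) for j in range(n)]
    x = [1.0]
    for k in range(1, n + 1):
        # memory term: history of increments weighted by the L1 coefficients
        S = sum(b[j] * (x[k - j] - x[k - j - 1]) for j in range(1, k))
        x.append((x[-1] - S) / (1 + c))
    return x

x = solve_l1(0.5)
# x[-1] approximates E_{1/2,1}(-1) = exp(1) erfc(1), about 0.4276
```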
\subsection{Some numerical estimations.}
First of all, it is not the objective here to give explicit expressions for the approximate solutions for the studied examples. It is not difficult to plot these solutions for different values of $n$.
Let us start with equation \eqref{15} with $\omega=1$ on the interval $0\le t\le 1$, with $0<\alpha<1$ and initial condition $x(0)=1$. As seen above, this equation has the exact solution $x_{\rm exact}(t)=E_{\alpha,1}(-t^\alpha)$ \cite{DIE}. The objective is now an estimation of the precision of the proposed method. As customary, this precision is measured by the following parameter:
\begin{equation}\label{21}
e_n(\alpha)= \int_0^1 [x_{\rm exact}(t)-x_n(t)]^2\,dt\,.
\end{equation}
Here, $n$ is the number of sub-intervals $I_k$ into which we partition $[0,1]$, the number of nodes being $n+1$. The dependence of this parameter on $\alpha$ shows that the smaller the value of $\alpha$, i.e., the closer $\alpha$ is to zero, the lower the precision and, therefore, the slower the convergence to the exact value. On the other hand, we observe a steady improvement of the precision when we increase the value of $n$, i.e., as we make the sub-intervals smaller and smaller.
\vskip1cm
\centerline{$
\begin{array}
[c]{cccc}
n & e_{n}(0.1) & e_{n}(0.5) & e_{n}(0.9)\\[2ex]
5 & 7.4\times 10^{-3} & 1.7\times 10^{-3} & 2.0\times 10^{-4}\\
10 & 3.0\times 10^{-3} & 4.9\times 10^{-4} & 2.8\times 10^{-5}\\
20 & 1.1\times 10^{-3} & 1.4\times 10^{-4} & 5.9\times 10^{-6}\\
40 & 3.0\times 10^{-4} & 3.7\times 10^{-5} & 1.3\times 10^{-6}
\end{array}
$}
\medskip
\centerline{Table 1.- Values of the precision in terms of $n$ and $\alpha$.}
\vskip1cm
This can be seen in Table 1, where we have chosen values of $n$ ranging from 5 to 40. The values of $\alpha$ studied are 0.1, 0.5 and 0.9.
Let us study the precision of the method with another example, different from the fractional oscillator yet still an equation of the form \eqref{14}. Here, we have chosen,
\begin{equation}\label{22}
D^{1/2}_0 x(t) =\sin x(t)\,,
\end{equation}
on the interval $[0,1]$, with the initial condition $x(1)=5/2$, which was already studied in \cite{DIE1}, where the integration was performed by means of an iterative shooting method. In contrast to the previous example, here we do not know an exact solution. The way out is to define the precision as
\begin{equation}\label{23}
e_n= \int_0^1 [D^{1/2}_0 x_n(t) -\sin x_n(t)]^2\, dt\,,
\end{equation}
where $n$ is again the number of intervals and $x_n(t)$ is the interpolating function for the studied case. After integration and using the boundary condition, we obtain $x(0)$. Along with \eqref{23}, we introduce another parameter that measures the convergence, which we denote by $e_r\%$. It represents the relative variation between the value of $x(0)$ obtained for a given value of $n$ and the value obtained for the preceding value of $n$, as listed in Table 2.
Table 2 is just a sample of the numerous numerical experiments we have performed. This sample is significant, as it manifests a clear convergence and shows that the result obtained with a small number of nodes is already satisfactory.
\vskip1cm
\centerline{$
\begin{array}
[c]{cccc}
n & x(0) & e_{r}\% & e_{n}\\[2ex]
5 & 1.74895 & -- & 1.8\times 10^{-2}\\
10 & 1.73812 & 0.5 & 7.4\times 10^{-3}\\
20 & 1.73326 & 0.3 & 2.6\times 10^{-3}\\
30 & 1.73166 & 0.09 & 1.1\times 10^{-3}\\
40 & 1.73085 & 0.05 & 5.6\times 10^{-4}
\end{array}
$}
\medskip
\centerline{Table 2: Values of $x(0)$, $e_r\%$ and $e_n$ for a given value of $n$.}
\vskip1cm
Finally, we have performed another reliability test: we used the value of $x(0)$ obtained numerically as initial value and evaluated $x(1)$. In all cases, we have recovered the value $x(1)=5/2$.
\subsection{Damped oscillator with integer-order derivative.}
As is well known, the damped oscillator with integer-order derivative is given by
\begin{equation}\label{24}
m \ddot y(t)+ p\dot y(t)+k y(t)=0\,,
\end{equation}
where $m$, $p$ and $k$ are constants. Here, we assume that $p>0$.
For $p=0$, the limit $\alpha\to 2^-$ in \eqref{15} should give the solution $y(t)$ of \eqref{24}, which we denote here as $\lim_{\alpha\to 2^-}x(t)=y(t)$. The solution $x(t)$ is damped oscillatory in the transitory regime only \cite{AHEC,DIE,KST}. Based on these notions, we propose the following Ansatz:
{\it For each given $1<\alpha\le 2$, there exists $p>0$ such that the solution $x(t)$ of \eqref{15} with $\alpha$ is a good approximation of the solution $y(t)$ of \eqref{24} with $p$, in the transitory regime}.
Using this Ansatz, let us obtain an approximate solution $y(t)$ for \eqref{24} such that this and its corresponding solution $x(t)$ for \eqref{15} fulfil the conditions $y(0)=x(0)$ and $\dot y(0)=\dot x(0)$. This is:
\begin{equation}\label{25}
y(t) = \exp\left( -\frac{p}{2m}\,t \right) \left(c_-\exp\left(-\frac{\Delta}{2}\,t\right) +c_+ \exp\left(\frac{\Delta}{2}\,t\right)\right)\,,
\end{equation}
where,
\begin{equation}\label{26}
\Delta= \sqrt{(p/m)^2-4\omega^2}\,,\quad \lambda_\pm =\frac{-p/m\pm\Delta}{2}\,,\quad c_\pm =\mp \frac{\lambda_\mp}{\Delta}\,,
\end{equation}
where $\omega^2$ was given in \eqref{15}; the coefficients $c_\pm$ correspond to the initial conditions $y(0)=1$ and $\dot y(0)=0$ used for \eqref{20}.
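This closed form is easy to validate numerically. The sketch below rebuilds the solution of $m\ddot y+p\dot y+ky=0$ with $y(0)=1$, $\dot y(0)=0$ directly from the characteristic roots (the parameter values are illustrative, chosen overdamped so the roots are real) and checks the residual of the equation by finite differences:

```python
import math

m, p, k = 1.0, 3.0, 1.0                  # overdamped: p^2 > 4 m k
disc = math.sqrt(p * p - 4 * m * k)
lam_minus = (-p - disc) / (2 * m)        # roots of m L^2 + p L + k = 0
lam_plus = (-p + disc) / (2 * m)

def y(t):
    # combination of exponentials with y(0) = 1 and y'(0) = 0
    return (lam_plus * math.exp(lam_minus * t)
            - lam_minus * math.exp(lam_plus * t)) / (lam_plus - lam_minus)

# residual of m y'' + p y' + k y at a sample point, by central differences
h, t = 1e-4, 0.7
ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
yp = (y(t + h) - y(t - h)) / (2 * h)
residual = m * ypp + p * yp + k * y(t)
```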
Then, the point is the determination of the value of $p$ for a given value of $\alpha$, or equivalently the determination of a function $p=p(\alpha)$, in accordance with our Ansatz. This is an optimal control problem: we have to find the optimal solution, which minimizes the following functional:
\begin{equation}\label{27}
E(\alpha):= \frac 1T \int_0^T [x(t)-y(t)]^2\,dt\,,
\end{equation}
where $T$ is some time scale over which the amplitude of the oscillations is reduced by a factor of $1/T$. On the interval $[0,T]$, the transitory regime, we compare the solution of the fractional equation, $x(t)$, and that of the damped oscillator, $y(t)$. The functional $E(\alpha)$ measures the deviation between $x(t)$ and $y(t)$. Then, going back to \eqref{20}, note that the function $E_{\alpha,1} (-\omega^2 t^{\alpha})$ is not asymptotically oscillating as $t\to\infty$. This permits us to choose a value of $T$ which, although not small, is not very large either. Numerical experiments have shown that the choice $T=20$ is appropriate.
Let us give some numerical results. In Table 3, we give the dependence between the values of $\alpha$, $p(\alpha)$ and $E(\alpha)$ for $k=m=1$ and $n=50$.
\vskip1cm
\centerline{$
\begin{array}
[c]{ccc}
\alpha & p & E\\[2ex]
1.10 & 1.140 & 6.4\times 10^{-3}\\
1.30 & 0.891 & 5.7\times 10^{-3}\\
1.50 & 0.668 & 4.7\times 10^{-3}\\
1.70 & 0.433 & 3.3\times 10^{-3}\\
1.90 & 0.152 & 1.1\times 10^{-3}\\
1.95 & 0.0754 & 4.3\times 10^{-4}\\
2 & 0 & 0
\end{array}
$}
\medskip
\centerline{Table 3: Comparison between the values of $\alpha$, $p$ and $E$,}
\centerline{for $T=20$, $k=m=1$ and $n=50$.}
\vskip1cm
An explicit expression for the function $p(\alpha)$ may be obtained by the least-squares method, which gives $p(\alpha)= 1.49409 +0.056127 \,\alpha - 0.401446\,\alpha^2$. This is depicted in Figure 2.
In Figure 1, we represent the usual behaviour in the transitory regime of $x(t)$ and $y(t)$, when we choose $\alpha=1.7$ and $n=50$.
It is interesting to remark that numerical experiments show that $p(\alpha)$ does not depend on the choice of initial values.
\subsection{Non-linear oscillator}
Following the discussion on the damped oscillator, we present a similar problem given by the following non-linear oscillator:
\begin{equation}\label{28}
D_0^\alpha\,x(t) =y(t)\,, \qquad D_0^\alpha\,y(t) =-\sin x(t)\,,
\end{equation}
with $0<\alpha\le 1$. Let us choose the initial values $x(0)=1$ and $y(0)=0$. Clearly, for small oscillations equation \eqref{28} reduces to equation \eqref{15} with the replacement $\alpha \rightarrow 2\alpha$. Again, this is a non-linear problem with no analytic solution for non-integer $\alpha$. We then proceed by analogy with the damped oscillator. In this case, we consider the following system, involving integer-order derivatives only:
\begin{equation}\label{29}
\dot z(t)=w(t)\,,\qquad \dot w(t)=-p w(t)-\sin z(t)\,,\qquad p>0\,.
\end{equation}
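For readers who want a quick baseline to experiment with systems like \eqref{28} and \eqref{29}, the following is a minimal sketch of the explicit L1 discretization of the Caputo derivative for a system $D_0^\alpha u = f(u)$, $0<\alpha\le 1$. This is a standard scheme included only for illustration; it is \emph{not} the quadratic segmentary interpolation method developed in this paper.

```python
import math

def solve_caputo_l1(f, u0, alpha, h, n_steps):
    """Explicit L1 scheme for the system D^alpha u = f(u), 0 < alpha <= 1.
    Illustrative sketch only; not the quadratic-spline method of the paper."""
    g = math.gamma(2.0 - alpha)
    u = [list(u0)]
    for n in range(1, n_steps + 1):
        rhs = f(u[n - 1])
        new = []
        for k in range(len(u0)):
            # memory term with L1 weights b_j = (j+1)^(1-alpha) - j^(1-alpha)
            hist = sum(((j + 1) ** (1 - alpha) - j ** (1 - alpha))
                       * (u[n - j][k] - u[n - j - 1][k])
                       for j in range(1, n))
            new.append(u[n - 1][k] - hist + g * h ** alpha * rhs[k])
        u.append(new)
    return u

# Example: D^alpha x = -x.  For alpha = 1 all history weights vanish and the
# scheme reduces to the Euler method for x' = -x, so x(1) is close to exp(-1).
traj = solve_caputo_l1(lambda u: [-u[0]], [1.0], 1.0, 0.01, 100)
print(traj[-1][0])
```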
In the transitory regime, we compare the solutions of systems \eqref{28} and \eqref{29} under the conditions $z(0)=x(0)=1$ and $w(0)=y(0)=0$. To do this, we first determine $p(\alpha)$ as the value of $p$ that minimizes the following quadratic dispersion:
\begin{equation}\label{30}
E(\alpha)=\frac 1T \int_0^T \{ [x_n(t)-z(t)]^2 +[y_n(t)-w(t)]^2 \}\,dt\,.
\end{equation}
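In practice, \eqref{30} is evaluated on a grid of sample times; a minimal sketch of this quadrature (trapezoidal rule, plain Python) is:

```python
def dispersion(t, x, y, z, w):
    """Quadratic dispersion (30) between sampled trajectories (x, y) and
    (z, w), by the trapezoidal rule on the grid t; T = t[-1] - t[0]."""
    g = [(x[i] - z[i]) ** 2 + (y[i] - w[i]) ** 2 for i in range(len(t))]
    integral = sum((t[i + 1] - t[i]) * (g[i] + g[i + 1]) / 2.0
                   for i in range(len(t) - 1))
    return integral / (t[-1] - t[0])

# identical trajectories give zero dispersion
ts = [0.1 * i for i in range(201)]
xs = [1.0] * 201
print(dispersion(ts, xs, xs, xs, xs))
```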
Obviously, expression \eqref{30} generalizes \eqref{27}. Again, we adjust the value of $T$ by numerical experiments, which show that $T=20$ is again a convenient choice. In Table 4, we give some values of the dependence between $\alpha$, $p(\alpha)$ and $E(\alpha)$ for $T=20$ and $n=50$.
\vskip1cm
\centerline{$
\begin{array}
[c]{ccc}
\alpha & p & E\\[2ex]
0.50 & 1.203 & 5.5\,10^{-3}\\
0.70 & 0.757 & 3.9\,10^{-3}\\
0.90 & 0.294 & 2.1\,10^{-3}\\
0.95 & 0.148 & 1.4\,10^{-3}\\
1.00 & 0 & 0
\end{array}
$}
\medskip
\centerline{Table 4: Comparison between the values of $\alpha$, $p$ and $E$,}
\centerline{for the values $T=20$ and $n=50$.}
\vskip1cm
The above examples manifest an analogous behaviour, on some time interval, between a fractional linear oscillator and a damped, or even non-linear, oscillator: the solutions of the fractional and the integer-order equations are very similar on some time scale. This could be a rather general situation, so we conjecture that in many practical cases, inside a suitable time interval, a fractional operator might be replaced by an additional frictional term in the classical oscillator. The behaviour of the solutions is similar to that shown in Figure 1.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{OLNL}
\caption{\small
{The continuous line represents the solution $x(t)$ of the fractional equation \eqref{15}, which is given by \eqref{25}. The dashed line gives the solution $y(t)$ of the equation with integer-order derivative.}
}
\label{}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{PALF}
\caption{\small
Function $p(\alpha)$.
}
\label{}
\end{figure}
\section{On the fractional van der Pol equation}
The van der Pol equation is a second order non-linear ordinary differential equation \cite{VDP,FAR}. It has the following form:
\begin{equation}\label{31}
\ddot x(t) - \mu(1-x^2(t))\,\dot x(t) + x(t)=0\,,
\end{equation}
where $\mu\ge 0$ is a constant. When $\mu=0$, \eqref{31} is the equation of the ordinary harmonic oscillator. The van der Pol equation may be written in terms of a first order system as
\begin{equation}\label{32}
\dot x(t)= z(t)\,,\qquad \dot z(t)=-\mu (x^2(t)-1) z(t) -x(t)\,, \qquad \mu\ge 0\,.
\end{equation}
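System \eqref{32} is straightforward to integrate numerically; a self-contained sketch (fixed-step RK4 with illustrative parameter values of our choosing, not taken from the paper) that exhibits its well-known limit cycle is:

```python
def vdp_rhs(x, z, mu):
    # system (32): x' = z,  z' = -mu (x^2 - 1) z - x
    return z, -mu * (x * x - 1.0) * z - x

def rk4_vdp(mu, x0, z0, h, n):
    """Integrate (32) with classical fixed-step RK4; returns the x-samples."""
    xs, x, z = [], x0, z0
    for _ in range(n):
        k1x, k1z = vdp_rhs(x, z, mu)
        k2x, k2z = vdp_rhs(x + h / 2 * k1x, z + h / 2 * k1z, mu)
        k3x, k3z = vdp_rhs(x + h / 2 * k2x, z + h / 2 * k2z, mu)
        k4x, k4z = vdp_rhs(x + h * k3x, z + h * k3z, mu)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        z += h / 6 * (k1z + 2 * k2z + 2 * k3z + k4z)
        xs.append(x)
    return xs

# A small initial condition is attracted to the limit cycle of amplitude ~2.
xs = rk4_vdp(0.5, 0.1, 0.0, 0.01, 6000)
print(max(abs(v) for v in xs[4000:]))
```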
Equation \eqref{32} has a unique limit cycle for $\mu\ne 0$, by Li\'enard's theorem \cite{STR}. This suggested investigating the possible existence of a limit cycle for the fractional system analogous to \eqref{32}, given by
\begin{equation}\label{33}
D_0^\alpha\, x(t)=z(t)\,,\qquad D_0^\alpha \,z(t)= -\mu z(t) (x^2(t)-1)-x(t)\,,\qquad 0<\alpha\le 1.
\end{equation}
A fractional van der Pol equation of the type
\begin{equation}\label{34}
D_0^{\alpha+1}\, x(t) +\mu (x^2(t)-1) D_0^\alpha \,x(t) +x(t)=0\,,
\end{equation}
has been studied in \cite{GUO}, where a relation between the parameters $\alpha$ and $\mu$ was given as a sufficient condition for the existence of a limit cycle, using the harmonic balance method \cite{DEU}.
We have studied the system \eqref{33} through numerical as well as analytic methods. We have performed a large number of numerical experiments, which show the existence of a critical value of the parameter $\mu$, here called $\mu_c$, where the subindex $c$ stands for {\it critical}, depending on $\alpha$ with $\mu_c(\alpha)>0$, such that
\begin{itemize}
\item{For values of $\mu$ with $0<\mu<\mu_c$, there is a fixed point $(x^*,z^*)=(0,0)$, which remains stable in the limit $t\to\infty$: $\lim_{t\to\infty}(x(t),z(t))=(0,0)$. Therefore, there is no stable limit cycle. In addition, there is no evidence of the existence of an unstable limit cycle.}
\item{For values $\mu>\mu_c$, the fixed point $(x^*,z^*)=(0,0)$ is unstable. We found a unique stable limit cycle. In this case a Hopf bifurcation emerges with $\mu_c$ as critical parameter.}
\item{As shown in Figure 3, $\mu_c(\alpha)$ decreases with $\alpha$ and $\lim_{\alpha \to1}\mu_c(\alpha)=0$.}
\end{itemize}
The point here is to show the existence of the critical value $\mu_c(\alpha)$ of the parameter $\mu$ for a given value of $0<\alpha<1$. This existence has been manifested by the numerical estimations above. Nevertheless, it may also be shown analytically. To this end, we use the following result \cite{AEE}:
Let us consider the following system, where $D_0^\alpha$ represents the Caputo fractional derivative:
\begin{equation}\label{35}
D^\alpha_0 \, x(t)=f(x,z)\,,\qquad D^\alpha_0\,z(t)= g(x,z)\,,\qquad 0<\alpha<1\,.
\end{equation}
A solution $(x^*(t),z^*(t))$ is in equilibrium if $f(x^*(t),z^*(t))=g(x^*(t),z^*(t))=0$. It is asymptotically stable if the eigenvalues, $\lambda$, of the Jacobian matrix
\begin{equation}\label{36}
J(x,z):= \left(\begin{array}{cc} \partial f/\partial x & \partial f/\partial z \\[2ex] \partial g/\partial x & \partial g/\partial z \end{array} \right)\,,
\end{equation}
when evaluated at the equilibrium point, satisfy
\begin{equation}\label{37}
|\arg(\lambda)|>\alpha\,\frac \pi 2\,.
\end{equation}
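Condition \eqref{37} is easy to apply mechanically. The following small sketch (a helper written for this note, not code from the paper) checks it for a $2\times 2$ Jacobian:

```python
import cmath
import math

def asymptotically_stable(jac, alpha):
    """Condition (37): every eigenvalue of the 2x2 Jacobian jac must
    satisfy |arg(lambda)| > alpha*pi/2."""
    (a, b), (c, d) = jac
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(complex(tr * tr - 4 * det))
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    return all(abs(cmath.phase(lam)) > alpha * math.pi / 2 for lam in eigs)

# Example with a Jacobian of the form [[0, 1], [-1, mu]] and alpha = 0.5,
# for which the threshold turns out to be mu_c = sqrt(2):
print(asymptotically_stable([[0.0, 1.0], [-1.0, 1.0]], 0.5))   # mu below mu_c
print(asymptotically_stable([[0.0, 1.0], [-1.0, 1.5]], 0.5))   # mu above mu_c
```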
A comparison between \eqref{33} and \eqref{35} identifies $f(x,z)$ and $g(x,z)$ for our particular case, so that the Jacobian \eqref{36} becomes
\begin{equation}\label{38}
J(x,z):= \left(\begin{array}{cc} 0 & 1 \\[2ex] -1-2\mu\,z(t)x(t) & -\mu(x^2(t)-1) \end{array} \right)\,.
\end{equation}
Taking into account that the fixed point is located at $(x^*,z^*)=(0,0)$, we have that
\begin{equation}\label{39}
J(0,0)=\left(\begin{array}{cc} 0 & 1\\[2ex] -1 & \mu \end{array} \right)\,,
\end{equation}
which has the following eigenvalues:
\begin{equation}\label{40}
\lambda_\pm= \frac 12 (\mu\pm \sqrt{\mu^2-4})\,.
\end{equation}
Obviously, if $\mu \ge 2$, then $\arg(\lambda_\pm)=0$. On the other hand, if $\mu<2$, one has
\begin{equation}\label{41}
\arg(\lambda_\pm)= \arctan \left(\pm\sqrt{\left( \frac 2\mu\right)^2-1} \right)\,.
\end{equation}
From \eqref{37}, the critical value, $\mu_c$, of $\mu$ should obey the following relation:
\begin{equation}\label{42}
\arctan \left(\pm\sqrt{\left( \frac 2\mu_c\right)^2-1} \right) = \alpha\,\frac \pi 2\,,
\end{equation}
which gives
\begin{equation}\label{43}
\mu_c= \frac{2}{\displaystyle \sqrt{1+\tan^2\left( \frac{\alpha\pi}{2}\right)}}\,.
\end{equation}
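Since $1/\sqrt{1+\tan^2\theta}=\cos\theta$ for $0<\theta<\pi/2$, formula \eqref{43} is equivalent to $\mu_c=2\cos(\alpha\pi/2)$, which makes the limits $\mu_c\to 2$ as $\alpha\to 0$ and $\mu_c\to 0$ as $\alpha\to 1$ transparent. A quick numerical check of this identity, and of the fact that at $\mu=\mu_c$ the eigenvalue argument equals exactly $\alpha\pi/2$:

```python
import cmath
import math

for alpha in (0.1, 0.3, 0.5, 0.7, 0.9):
    theta = alpha * math.pi / 2
    mu_c = 2.0 / math.sqrt(1.0 + math.tan(theta) ** 2)   # formula (43)
    print(alpha, mu_c, 2.0 * math.cos(theta))
    # at mu = mu_c the eigenvalues of J(0,0) are cos(theta) +/- i sin(theta)
    lam = (mu_c + cmath.sqrt(complex(mu_c ** 2 - 4))) / 2
    print(abs(cmath.phase(lam)) - theta)
```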
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{MUC}
\caption{\small
Function $\mu_c(\alpha)$.
}
\label{}
\end{figure}
That is to say, if we fix $\alpha$ and start from $\mu\approx 0$, then as we increase $\mu$ we go from a situation with an asymptotically stable fixed point to one with an unstable fixed point. This happens when $\mu>\mu_c$. The transition from stability to instability leads to the emergence of a limit cycle. We may qualitatively interpret the onset of the limit cycle as follows: let us consider $\mu\approx 0$ in \eqref{33}, which may then be approximated by
\begin{equation}\label{44}
D_0^\alpha \,x(t) = z(t)\,,\qquad D_0^\alpha \,z(t)=-x(t)\,.
\end{equation}
This is the same as equation \eqref{28} with the small-angle approximation $\sin x(t)\approx x(t)$. Note that \eqref{44} does not show a limit cycle and, further, the trivial solution $(0,0)$ is an attractor. The second equation in \eqref{33} contains the term $-\mu\,z(t)(x^2(t)-1)$, which for $\mu>\mu_c$ is not negligible. This term counterbalances the dissipation, and this is precisely what makes the existence of a limit cycle possible.
In Figure 4 we show, in the phase plane, the continuous and dashed curves representing the solutions with integer-order and fractional derivative, respectively. Both trajectories are determined with the same initial values and the same parameter $\mu$. In all cases, the fractional limit cycle is enclosed by the limit cycle of the equation with integer-order derivative.
In Figure 3, we show the relation $\mu_c=\mu_c(\alpha)$. In the region above the curve, there exists a stable limit cycle and, furthermore, the fixed point $(0,0)$ is unstable. Below the curve the limit cycle does not exist and the fixed point is asymptotically stable. There is an obvious difference with the results obtained in \cite{GUO}, which is due to the fact that the fractional equations \eqref{33} and \eqref{34} are {\it not} equivalent.
\subsection{Equivalence between the fractional van der Pol equation and the same equation with integer-order derivative and dissipation}
In the previous section, we compared the approximate solutions of a dissipative oscillator with integer-order derivative with those of the linear oscillator with fractional derivative. Now, we want to carry out a similar analysis between the fractional van der Pol equation and a van der Pol equation with integer-order derivative and a dissipative term of the form $\beta z(t)$, $\beta>0$. This system has the form
\begin{equation}\label{45}
x'(t)=z(t)\,,\qquad z'(t)=-z(t)(\beta+\mu(x^2(t)-1))-x(t)\,, \qquad \mu\,,\beta>0\,.
\end{equation}
Here, the fixed point is $(x^*,z^*)=(0,0)$. To check its stability, we consider again the Jacobian \eqref{36}, which at the fixed point of the present system gives the following expression
\begin{equation}\label{46}
J(0,0)= \left(\begin{array}{cc} 0 & 1 \\[2ex] -1 & \mu-\beta \end{array}\right)\,,
\end{equation}
which has the eigenvalues
\begin{equation}\label{47}
\lambda_\pm = \frac{(\mu-\beta) \pm \sqrt{(\mu-\beta)^2-4}}2\,.
\end{equation}
Therefore, the fixed point is stable if ${\rm Re}(\lambda_\pm)<0$ and unstable if ${\rm Re}(\lambda_\pm)>0$, that is, if $\mu-\beta<0$ and $\mu-\beta>0$, respectively. Then, for each $\mu$, there exists a critical value $\beta_c=\mu$ at which a Hopf bifurcation of the fixed point appears and, consequently, the limit cycle is destroyed.
In any case, according to Li\'enard's theorem \cite{STR,FAR}, we may show that there exists a unique stable limit cycle if $\mu-\beta>0$. In consequence, the van der Pol equation with integer-order derivative and dissipation and the fractional van der Pol equation have qualitatively the same properties.
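This threshold at $\beta=\mu$ is easy to confirm numerically; a self-contained sketch (fixed-step RK4 on the dissipative system, with illustrative parameter values of our choosing) is:

```python
def damped_vdp_rhs(x, z, mu, beta):
    # x' = z,  z' = -z (beta + mu (x^2 - 1)) - x
    return z, -z * (beta + mu * (x * x - 1.0)) - x

def rk4(mu, beta, x0, z0, h, n):
    """Fixed-step RK4 integration of the damped van der Pol system."""
    xs, x, z = [], x0, z0
    for _ in range(n):
        k1x, k1z = damped_vdp_rhs(x, z, mu, beta)
        k2x, k2z = damped_vdp_rhs(x + h / 2 * k1x, z + h / 2 * k1z, mu, beta)
        k3x, k3z = damped_vdp_rhs(x + h / 2 * k2x, z + h / 2 * k2z, mu, beta)
        k4x, k4z = damped_vdp_rhs(x + h * k3x, z + h * k3z, mu, beta)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        z += h / 6 * (k1z + 2 * k2z + 2 * k3z + k4z)
        xs.append(x)
    return xs

# mu - beta < 0: trajectories decay to the stable fixed point
decay = rk4(1.0, 1.5, 0.5, 0.0, 0.01, 6000)
# mu - beta > 0: trajectories settle on a limit cycle
cycle = rk4(1.0, 0.5, 0.5, 0.0, 0.01, 6000)
print(max(abs(v) for v in decay[4000:]), max(abs(v) for v in cycle[4000:]))
```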
Through a substantial number of numerical experiments, we have checked a qualitative equivalence, in the sense of having approximately the same solution, between equations \eqref{33} (with fractional derivative) and \eqref{45} (with integer-order derivative). Thus, by trial and error, we have determined a value of $\beta$ giving the same cycle in both cases. For instance, for the values $\alpha=0.9$ and $\mu=0.1$, we obtained $\beta\cong 0.315$. In an analogous manner, we tried values for which $\mu<\mu_c$ and obtained similar conclusions.
In Figure 5, we represent the limit cycles of the fractional and the dissipative integer-order van der Pol equations.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{CLEF}
\caption{\small
Comparison between the integer-order (continuous curve) and the fractional (dashed curve) van der Pol solutions, with the same value of $\mu$.
}
\label{}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{CERF}
\caption{\small
Comparison between the damped (continuous curve) and the fractional (dashed curve) van der Pol solutions.
}
\label{}
\end{figure}
\section{Concluding remarks}
We have applied a quadratic spline method in order to obtain functions that approximate the result of applying a fractional derivative to a given function. This is suitable for obtaining solutions of some differential equations with initial values or mixed conditions of potential interest in Physics or Engineering. We have tested our method on the fractional linear oscillator, where exact solutions are known, and checked its degree of precision. The results of numerous numerical experiments show that for values of $\alpha$ in the range $0<\alpha<1$, the higher $\alpha$ is, the better the precision of our method, where $\alpha$ is the order of the fractional derivative $D^\alpha$. However, there are no substantial differences when we increase the number of nodes in the interval under consideration. Similar results have been obtained for non-linear oscillators.
Based on previous studies on the approximation of solutions of the fractional linear oscillator by solutions of a damped oscillator, we have used our method to confirm these results. We have shown that there exists a time interval on which the solutions of both equations are similar to a high degree of accuracy, provided that a suitable relation holds between the coefficient $p$ of the dissipative term of the damped oscillator and the order $\alpha$ of the fractional derivative.
A similar study compares a fractional and a damped van der Pol equation, written as a system of two equations in phase space, with similar results. In addition, we have considered the behaviour of limit cycles and fixed points in terms of $\alpha$ and a parameter $\mu$ characteristic of the van der Pol equation. Using analytic as well as numerical arguments, we show the existence of a critical value $\mu_c$ of the parameter $\mu$ such that the origin of phase space is stable if $\mu<\mu_c$ and unstable if $\mu>\mu_c$. This critical value $\mu_c$ depends on $\alpha$ and we give the exact relation.
\section*{Acknowledgements}
This research has been financed by the Projects No. ING 19/i 402 and ING 80020180100064 of the Universidad Nacional de Rosario, the Spanish MINECO (Project No. MTM2014-57129) and the Junta de Castilla y Le\'on (Project Nos. BU229P18 and VA137G18).
\title{Equal Sum Sequences and Imbalance Sets of Tournaments}
\begin{abstract}
Reid conjectured that any finite set of non-negative integers is the score set of some tournament and Yao gave a non-constructive proof of Reid's conjecture using arithmetic arguments. No constructive proof has been found since. In this paper, we investigate a related problem, namely, which sets of integers are imbalance sets of tournaments. We completely solve the tournament imbalance set problem (TIS) and also estimate the minimal order of a tournament realizing an imbalance set. Our proofs are constructive and provide a pseudo-polynomial time algorithm to realize any imbalance set. Along the way, we generalize the well-known equal sum subsets problem (ESS) to define the equal sum sequences problem (ESSeq) and show it to be NP-complete. We then prove that ESSeq reduces to TIS and so, due to the pseudo-polynomial time complexity, TIS is weakly NP-complete.
\end{abstract}
\section{Introduction}
A tournament is an orientation of a complete simple graph. In a tournament, the \emph{score} $s_{i}$ of a vertex $v_{i}$ is the number of arcs directed away from that vertex, that is, the outdegree of $v_{i}$. The \emph{score sequence} of a tournament is formed by listing the scores in nondecreasing order.
Let us write $[x_{i}]_{1}^{n}$ to denote a sequence with $n$ terms. Landau \cite{landau1} gave a simple characterization of the score sequences of tournaments.
\begin{theorem}\label{landau}
A sequence $[s_{i}]_{1}^{n}$ of non-negative integers in nondecreasing order is the score sequence of a tournament if and only if for every $I\subseteq\{1,2,\ldots,n\}$,
\begin{equation}\label{land}
\sum_{i\in I}{s_{i}}\geq{\left|I\right|\choose 2},
\end{equation}
\noindent with equality when $\left|I\right|=n$, where $\left|I\right|$ is the cardinality of the set $I$.
\end{theorem}
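Because the scores are in nondecreasing order, a prefix of the sequence minimizes the left-hand side of \eqref{land} among all index sets of the same size, so Landau's condition can be verified by checking prefix sums only. A small illustrative sketch (added here; not from the paper):

```python
from math import comb

def is_score_sequence(s):
    """Landau's test: for s sorted in nondecreasing order it suffices to
    check the prefix sums, with equality for the full sequence."""
    s = sorted(s)
    total = 0
    for k, v in enumerate(s, start=1):
        total += v
        if total < comb(k, 2):
            return False
    return total == comb(len(s), 2)

print(is_score_sequence([1, 1, 1]))      # 3-cycle
print(is_score_sequence([0, 1, 2, 3]))   # transitive tournament
print(is_score_sequence([0, 0, 3, 3]))   # fails for the prefix of size 2
```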
Several proofs of Landau's theorem have appeared over the years \cite{bang1, brualdi1, griggs1, landau1, thomassen1} and it continues to play a central role in the theory of tournaments and their generalizations. Brualdi and Shen \cite{brualdi2} strengthened Landau's theorem by deriving a set of inequalities that are individually stronger than inequalities (\ref{land}) but are collectively equivalent to these inequalities.
The set of scores of vertices in a tournament is called the \textit{score set} of the tournament. Reid \cite{reid1} conjectured that any finite nonempty set $S$ of non-negative integers is the score set of some tournament. He gave a constructive proof of the conjecture for the cases $\left|S\right|=1,2,3$, while Hager \cite{hager1} settled the cases $\left|S\right|=4,5$. In 1986, Yao announced a nonconstructive proof of Reid's theorem by arithmetic arguments \cite{yao1}. Pirzada and Naikoo \cite{pirzada1} obtained the construction of a tournament with a given score set in the special case when the score increments are increasing. However, so far no constructive proof has been found for Reid's theorem in general.
In a digraph, the \emph{imbalance} of a vertex $v_{i}$ is defined as $t_{i} = d_{i}^{+} -d_{i}^{-}$, where $d_{i}^{+}$ and $d_{i}^{-}$ are respectively the outdegree and indegree of $v_{i}$. The \emph{imbalance sequence} of a digraph is formed by listing the vertex imbalances in nonincreasing order. If $T$ is a tournament with imbalance sequence $[t_{i}]_{1}^{n}$, we say that $T$ \emph{realizes} $[t_{i}]_{1}^{n}$. Mubayi, Will and West \cite{west1} gave necessary and sufficient conditions for a sequence of integers to be the imbalance sequence of a simple digraph.
\begin{theorem}\label{west}
A sequence of integers $[t_{i}]_{1}^{n}$ with $t_{1}\geq\cdots\geq t_{n}$ is an imbalance sequence of a simple digraph if and only if $\sum_{i=1}^{j}t_i\leq j(n-j)$, for $1\leq j\leq n$ with equality when $j=n$.
\end{theorem}
On rearranging the imbalances in nondecreasing order, we obtain the equivalent inequalities $\sum_{i=1}^{j}t_i\geq j(j-n)$, for $1\leq j\leq n$ with equality when $j=n$.
Koh and Ree \cite{koh1} showed that if an additional parity condition is satisfied the sequence $[t_{i}]_{1}^{n}$ can be realized by a tournament. In fact, they proved the result in the more general setting of hypertournaments. The following corollary of Theorem 6 in \cite{koh1} provides a characterization of imbalance sequences of tournaments.
\begin{theorem}\label{seq}
A nonincreasing sequence $[t_{i}]_{1}^{n}$ of integers is the imbalance sequence of a tournament if and only if $n-1, t_{1}, \ldots, t_{n}$ have the same parity and
\begin{equation}\label{charac}
\sum_{i=1}^{j}{t_{i}} \leq j(n-j),
\end{equation}
\noindent for $j = 1, \ldots , n$ with equality when $j = n$.
\end{theorem}
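The characterization in Theorem \ref{seq} is likewise a finite check: shared parity with $n-1$ plus the prefix inequalities \eqref{charac}. A small sketch (added for illustration):

```python
def is_tournament_imbalance_sequence(t):
    """Parity + prefix test of the tournament imbalance-sequence
    characterization, applied to the nonincreasing rearrangement of t."""
    t = sorted(t, reverse=True)
    n = len(t)
    if any((x - (n - 1)) % 2 for x in t):
        return False            # parity condition fails
    total = 0
    for j, x in enumerate(t, start=1):
        total += x
        if total > j * (n - j):
            return False
    return total == 0           # equality at j = n

print(is_tournament_imbalance_sequence([3, -1, -1, -1]))   # transitive, n = 4
print(is_tournament_imbalance_sequence([0, 0, 0]))         # regular, n = 3
```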
In a digraph, the set of imbalances of the vertices is called its \textit{imbalance set} \cite{pirzada2}. In \cite{pirzada2} the following result regarding the imbalance sets of oriented graphs is proved.
\begin{theorem}\label{pir}
Let $P = \{p_1 , \ldots, p_m \}$ and $Q = \{-q_1 , \ldots, -q_n \}$, where $p_1 < \cdots < p_m$ and $q_1 < \cdots < q_n$ are positive integers. Then there exists an oriented graph with imbalance set $P \cup Q$.
\end{theorem}
Due to the interest in Reid's score set theorem, it is natural to ask if a similar result holds for imbalance sets of tournaments. Furthermore, since a constructive proof of Reid's theorem has not yet been found, it would be interesting to look for an algorithm that generates a tournament from its imbalance set. In this paper we address both questions. We study the following decision problem and its search version.
\begin{definition}[\textbf{Tournament Imbalance Set Problem (TIS)}]\label{TIS}
Given a set $Z$ of integers, decide if $Z$ is the imbalance set of a tournament.
\end{definition}
In Section \ref{odd case}, we first show that the obvious necessary conditions for the existence of tournament imbalance sets are not sufficient. We then completely characterize the sets of odd integers that are imbalance sets of tournaments. In Section \ref{even case}, we treat the case of even integers, which is more involved. We show that any set of even integers that either contains at least one positive and at least one negative integer or consists of the single element 0 is the imbalance set of a partial tournament in which each vertex is joined to every other vertex except one. However, not all such sets are imbalance sets of tournaments. This is followed by necessary and sufficient conditions for a set of even integers to be a tournament imbalance set. In Section \ref{algorithm}, we define a new variant of the \textit{equal sum subsets problem} (ESS) called the \textit{equal sum sequences problem} (ESSeq). We show that ESSeq is NP-hard and that ESSeq reduces to TIS in polynomial time. Furthermore, we propose a pseudo-polynomial time algorithm that determines if a set of integers is a tournament imbalance set and, in addition, generates a tournament realizing any such set. Thus TIS is shown to be weakly NP-complete. We also consider extremal cases and determine upper bounds for the minimal order of a tournament realizing an imbalance set.
\section{Characterizing odd imbalance sets}
\label{odd case}
Consider a tournament of order (number of vertices) $n$. Let $v_{i}$ be a vertex with score $s_{i}$ and imbalance $t_{i}$. Then $t_{i}=d_{i}^{+}-d_{i}^{-}=s_{i}-(n-1-s_{i})=2s_{i}-(n-1)$, or by rearranging, $s_{i} = \frac{n-1+t_{i}}{2}$. Conversely, assume that $v_{i}$ is a vertex of a tournament with $n$ vertices and $t_{i}$ is the imbalance of $v_{i}$. If $s_{i} = \frac{n-1+t_{i}}{2}$, then $s_{i}$ is the score of $v_{i}$. Thus we have
\begin{lemma}\label{aux}
Let $t_{i}$ be the imbalance of a vertex $v_{i}$ in a tournament. Then $s_{i}$ is the score of $v_{i}$ if and only if
\begin{equation}\label{inter1}
s_{i} = \frac{n-1+t_{i}}{2},
\end{equation}
\noindent where $n$ is the order of the tournament.
\end{lemma}
A tournament is said to be \emph{regular} if all the vertices have the same score \cite{chartrand1}. Clearly, there exists a regular tournament on $n$ vertices with score $s_{i}$ if and only if $n$ is odd and $s_{i}=\frac{n-1}{2}$. Therefore, the imbalance of any vertex $v_{i}$ of a regular tournament is $t_{i}=2s_{i}-(n-1)=0$ and the imbalance set of any regular tournament is $\{0\}$.
The following is a set of obvious necessary conditions for an imbalance set of a tournament.
\begin{theorem}\label{necessary}
If a finite nonempty set $Z$ of integers is the imbalance set of a tournament of order $n$ then all the elements of $Z$ have the same parity as $n-1$ and it either contains at least one positive and at least one negative integer or contains only a single element 0.
\end{theorem}
\begin{proof}
If $Z$ is the imbalance set of a tournament with $n$ vertices then by Theorem \ref{seq}, the elements of $Z$ must have the same parity as $n-1$. Furthermore, either the tournament is regular and $Z=\{0\}$ or $Z$ must contain at least one positive and at least one negative integer so the corresponding imbalance sequence sums to zero.
\end{proof}
The question is whether these conditions are also sufficient. The answer is `no' as can be seen from the following example.
\begin{example}
Let $Z=\{6, -10\}$. Then $Z$ satisfies the necessary conditions given in Theorem \ref{necessary} and can potentially be the imbalance set of a tournament with an odd number of vertices. However, a sequence with $a$ terms equal to $6$ and $b$ terms equal to $-10$ sums to zero only if $6a=10b$, that is, $(a,b)=(5k,3k)$ for some positive integer $k$; hence the sequence has $a+b=8k$ elements, an even number (e.g., $6$, $6$, $6$, $6$, $6$, $-10$, $-10$, $-10$). Since the parity condition of Theorem \ref{seq} requires a tournament with even imbalances to have an odd number of vertices, we cannot construct a tournament imbalance sequence from $Z$ and so $Z$ is not a tournament imbalance set.
\end{example}
Although the conditions given in Theorem \ref{necessary} are not sufficient in general, they are sufficient if $Z$ consists of odd integers. We first show that
\begin{theorem}\label{odd}
Let $X=\{x_{1}, \ldots, x_{l}\}$ and $Y=\{-y_{1},\ldots,-y_{m}\}$ be disjoint nonempty sets of odd integers, where $x_{1}>\cdots> x_{l}$ are positive odd integers and $y_{1}<\cdots< y_{m}$ are also positive odd integers. Let $L=\sum_{i=1}^{l}{x_{i}}$, $M=\sum_{i=1}^{m}{y_{i}}$ and $n=lM+mL$. Then there exists a tournament of order $n$ with imbalance set $X\cup Y$.
\end{theorem}
\begin{proof} We observe that $n$ is even and all the elements of $X\cup Y$ have the same parity as $n-1$. Let $x^{(p)}$ denote that $x$ is appearing in $p$ consecutive terms of a sequence. We use Theorem \ref{seq} to prove that the $n$-term sequence
\[
[t_{i}]_{1}^{n}={x_{1}}^{(M)},\ldots,{x_{l}}^{(M)},{-y_{1}}^{(L)},\ldots,{-y_{m}}^{(L)}
\]
\noindent (which is arranged in nonincreasing order) is the imbalance sequence of a tournament. First note that
\[
\sum_{i=1}^{M}{t_{i}}= Mt_{1}=Mx_{1}\leq M((l-1)M+mL)=M(n-M),
\]
\[
\sum_{i=1}^{2M}{t_{i}}= M(t_{1}+t_{2})=M(x_{1}+x_{2})\leq 2M((l-2)M+mL)=2M(n-2M),
\]
\[\ldots \ \ \ \ \ldots \ \ \ \ \ldots \ \ \ \ \ldots \ \ \ \ \ldots
\]
\[
\sum_{i=1}^{lM}{t_{i}}= M\sum_{i=1}^{l}x_{i}=LM\leq lM(mL)=lM(n-lM),
\]
\[
\sum_{i=1}^{lM+L}{t_{i}}= LM-Ly_{1}\leq (lM+L)(m-1)L=(lM+L)(n-lM-L),
\]
\[
\sum_{i=1}^{lM+2L}{t_{i}}= LM-L(y_{1}+y_{2})\leq (lM+L)(m-2)L=(lM+2L)(n-lM-2L),
\]
\[
\ldots \ \ \ \ \ldots \ \ \ \ \ldots \ \ \ \ \ldots \ \ \ \ \ldots
\]
and
\[
\sum_{i=1}^{n}{t_{i}}=M\sum_{i=1}^{l}{x_{i}}-L\sum_{i=1}^{m}{y_{i}}=0=n(n-lM-mL).
\]
So inequality (\ref{charac}) holds for $j=M,2M,\ldots,lM,lM+L,lM+2L,\ldots,lM+mL(=n)$ with equality when $j=n$. Now suppose that for some other value $j=j_{0}$ we have $\sum_{i=1}^{j_{0}}{t_{i}} > j_{0}(n-j_{0})$ and $j_{0}$ is the smallest such integer. But then $t_{j_{0}}>n-2j_{0}+1$ and as $j_{0}\neq M,2M,\ldots,lM,lM+L,lM+2L,\ldots,n$, we have $t_{j_{0}+1}= t_{j_{0}}>n-2j_{0}+1>n-2j_{0}-1=n-2(j_{0}+1)+1$. Thus $\sum_{i=1}^{j_{0}+1}{t_{i}} > (j_{0}+1)(n-j_{0}-1)$, showing that $j_{0}+1\neq M,2M,\ldots,lM,lM+L,lM+2L,\ldots,n$. Continuing in this way leads to a contradiction as we must reach one of $M,2M,\ldots,lM,lM+L,lM+2L,\ldots,n$ in finitely many steps.
\end{proof}
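The construction in the proof can be exercised numerically: build the $n$-term sequence from small illustrative sets $X$ and $Y$ of our choosing and verify the conditions of Theorem \ref{seq} directly (the checker is inlined to keep the sketch self-contained):

```python
def theorem_sequence(X, Y):
    """Sequence used in the proof of the odd-set theorem: each x in X
    repeated M times, each -y in Y repeated L times, with L = sum(X)
    and M = -sum(Y)."""
    L, M = sum(X), -sum(Y)
    return [x for x in X for _ in range(M)] + [y for y in Y for _ in range(L)]

def is_imbalance_sequence(t):
    # parity + prefix test of the imbalance-sequence characterization
    n, total = len(t), 0
    if any((x - (n - 1)) % 2 for x in t):
        return False
    for j, x in enumerate(sorted(t, reverse=True), start=1):
        total += x
        if total > j * (n - j):
            return False
    return total == 0

# X = {5, 3}, Y = {-1, -7}: L = 8, M = 8, n = lM + mL = 32
seq = theorem_sequence([5, 3], [-1, -7])
print(len(seq), is_imbalance_sequence(seq))
```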
Together Theorems \ref{necessary} and \ref{odd} immediately give the following necessary and sufficient conditions for odd tournament imbalance sets.
\begin{theorem}\label{oddsuff}
A finite nonempty set of odd integers is the imbalance set of a tournament if and only if it contains at least one positive and at least one negative integer.
\end{theorem}
\section{The case of even imbalances}
\label{even case}
Mubayi, Will and West \cite{west1} considered simple digraphs with maximum number of arcs that realize imbalance sequences.
\begin{lemma}\cite{west1}\label{aux2}
Let $D$ be a simple digraph with maximum number of arcs realizing the imbalance sequence $[t_{i}]_{1}^{n}$. Then any vertex in $D$ has at most one non-neighbour and the number of arcs in $D$ equals $\sum_{i=1}^{n}\left\lfloor \frac{n-1+t_{i}}{2}\right\rfloor$.
\end{lemma}
A \textit{partial tournament} is a simple digraph obtained by removing one or more arcs from a tournament \cite{brualdi1}. We say that a partial tournament of order $n$ is a \emph{near tournament} if each vertex is joined to all the other vertices except exactly one. Clearly, every near tournament has even order.
In this section, we characterize the sets of even integers that are imbalance sets of tournaments. Recall that $\{0\}$ is the imbalance set of every regular tournament. Therefore, in the remainder of this section we focus on nonzero sets of even integers. Example 1 shows that not every set of even integers that satisfies the necessary conditions of Theorem \ref{necessary} is the imbalance set of a tournament. We can nevertheless prove that any such set is the imbalance set of a near tournament.
\begin{theorem}\label{even}
Let $X=\{x_{1}, \ldots, x_{l}\}$ and $Y=\{-y_{1},\ldots,-y_{m}\}$ be disjoint nonempty sets of even integers, where $x_{1}>\cdots> x_{l}$ are non-negative even integers and $y_{1}<\cdots< y_{m}$ are positive even integers. Suppose that $X\cup Y\neq \{0\}$. Let $L=\sum_{i=1}^{l}{x_{i}}$, $M=\sum_{i=1}^{m}{y_{i}}$ and $n=lM+mL$. Then there exists a near tournament of order $n$ with imbalance set $X\cup Y$.
\end{theorem}
\begin{proof}
Since $L$ and $M$ are even, $n$ is even, so $n-1$ is odd and, by Theorem \ref{seq}, we cannot construct a tournament of order $n$ with the even imbalance set $X\cup Y$. By mirroring the proof of Theorem \ref{odd}, we can show that the $n$-term sequence
\[
[t_{i}]_{1}^{n}={x_{1}}^{(M)},\ldots,{x_{l}}^{(M)},{-y_{1}}^{(L)},\ldots,{-y_{m}}^{(L)}
\]
\noindent is the imbalance sequence of a simple digraph. Let $D$ be a realization of $[t_{i}]_{1}^{n}$ with maximum number of arcs. Since all the imbalances $t_{i}$ are even while $n-1$ is odd, by Lemma \ref{aux2} the number of arcs in $D$ is
\[
\sum_{i=1}^{n}\left\lfloor \frac{n-1+t_{i}}{2}\right\rfloor =\sum_{i=1}^{n}\frac{n-2+t_{i}}{2}=\frac{n(n-2)}{2},
\]
\noindent which is $\frac{n}{2}$ less than the number of arcs of a tournament of order $n$. Therefore, Lemma \ref{aux2} implies that every vertex of $D$ has exactly one non-neighbour and $D$ must be a near tournament.
\end{proof}
The following result shows that under certain conditions we can transform $D$ into a tournament by adding a suitable number of vertices.
\begin{theorem}\label{evensuff}
Let $X$, $Y$, $l$, $m$, $L$, $M$ and $n$ be as defined in Theorem \ref{even}.
\begin{itemize}
\item[(i)] If $0\in X\cup Y$ then there exists a tournament of order $n+1$ with imbalance set $X\cup Y$.
\item[(ii)] If there exist an $x_{p}\in X$ and (not necessarily distinct) $-y_{q},-y_{r}\in Y$ such that $x_{p}=y_{q}+y_{r}$ then there exists a tournament of order $n+3$ with imbalance set $X\cup Y$.
\item[(iii)] If there exist a $-y_{p}\in Y$ and (not necessarily distinct) $x_{q},x_{r}\in X$ such that $y_{p}=x_{q}+x_{r}$ then there exists a tournament of order $n+3$ with imbalance set $X\cup Y$.
\end{itemize}
\end{theorem}
\begin{proof} Let $T$ be a near tournament realizing the imbalance sequence $[t_{i}]_{1}^{n}$ as defined in the proof of Theorem \ref{even}. For a vertex $v_{i}$ in $T$ let $v'_{i}$ denote the unique non-neighbour of $v_{i}$. Also let $(u,v)$ denote an arc directed from vertex $u$ to vertex $v$. In the three cases we can transform $T$ into a tournament as follows.
\noindent \textit{(i)} Add a vertex $v$ to $T$ in such a way that for every pair of non-adjacent vertices $v_{i}$ and $v'_{i}$ we insert the arcs $(v_{i},v'_{i})$, $(v'_{i},v)$ and $(v,v_{i})$. Thus the imbalance of all the vertices of $T$ is preserved and the new vertex $v$ has imbalance 0. Since every vertex of $T$ has been linked with every other vertex of $T$ as well as the new vertex $v$, the resulting digraph is a tournament.
\begin{figure}[htb]
\centering
\includegraphics[scale=1.3]{3-3-2-ink.pdf}
\caption{Construction of the tournament in case (ii). Figure (a) represents steps 1--2 and figure (b) step 3.}
\label{tournamentex}
\end{figure}
\noindent \textit{(ii)} Add three new vertices $u_{1}$, $u_{2}$ and $u_{3}$ to $T$ and insert arcs in the following manner:
\begin{enumerate}
\item Insert $(u_{1},u_{2})$, $(u_{2},u_{3})$ and $(u_{3},u_{1})$.
\item Choose any $\frac{x_{p}}{2}=\frac{y_{q}+y_{r}}{2}$ pairs $\{v_{i},v'_{i}\}$ of non-neighbouring vertices in $T$ and insert $(v_{i},v'_{i})$. Out of these choose any $\frac{y_{q}}{2}$ pairs. For each of these pairs insert the arcs $(u_{1},v_{i})$, $(u_{1},v'_{i})$, $(v_{i},u_{2})$, $(v'_{i},u_{2})$, $(v'_{i},u_{3})$ and $(u_{3},v_{i})$. For the other $\frac{y_{r}}{2}$ pairs insert the arcs $(u_{1},v_{i})$, $(u_{1},v'_{i})$, $(v_{i},u_{3})$, $(v'_{i},u_{3})$, $(v'_{i},u_{2})$ and $(u_{2},v_{i})$.
\item For the remaining $\frac{n-x_{p}}{2}$ pairs $\{v_{i},v'_{i}\}$ of non-neighbours, insert the arcs $(u_{1},v_{i})$, $(v'_{i},u_{1})$, $(v_{i},v'_{i})$, $(v_{i},u_{2})$, $(u_{2},v'_{i})$, $(u_{3},v_{i})$ and $(v'_{i},u_{3})$.
\end{enumerate}
Since every vertex is joined with every other vertex, the resulting digraph is a tournament. Furthermore, the imbalance of each vertex of $T$ is preserved, while the new vertices $u_{1}$, $u_{2}$ and $u_{3}$ have imbalances $x_{p}$, $-y_{q}$ and $-y_{r}$ respectively.
\noindent \textit{(iii)} The proof is essentially the same as that of case (ii).
\end{proof}
Theorem \ref{evensuff} can be generalized and it is the generalized version that is of interest to us as it leads to the characterization of even imbalance sets. However, we stated and proved Theorem \ref{evensuff} to provide the reader a concrete perspective of what is happening in the more abstract setting of Theorem \ref{evensuffgen}.
\begin{theorem}\label{evensuffgen}
Let $X$, $Y$, $l$, $m$, $L$, $M$ and $n$ be as defined in Theorem \ref{even}. The set $X\cup Y$ is the imbalance set of a tournament if any one of the following conditions is satisfied:
\item (i) $0\in X\cup Y$,
\item(ii) there exist an odd number of (not necessarily distinct) $x_{p_1},\ldots,x_{p_{2r+1}}\in X$ and an even number of (not necessarily distinct) $-y_{q_1},\ldots,-y_{q_{2s}}\in Y$ such that $\sum_{j=1}^{2r+1}x_{p_j}=\sum_{j=1}^{2s}y_{q_j}$,
\item(iii) there exist an odd number of (not necessarily distinct) $-y_{p_1},\ldots,-y_{p_{2r+1}}\in Y$ and an even number of (not necessarily distinct) $x_{q_1},\ldots,x_{q_{2s}}\in X$ such that $\sum_{j=1}^{2r+1}y_{p_j}=\sum_{j=1}^{2s}x_{q_j}$.
\end{theorem}
\begin{proof}
Let $T$ be a near tournament realizing the imbalance sequence $[t_{i}]_{1}^{n}$ as defined in the proof of Theorem \ref{even}.
\noindent \textit{(i)} The proof is exactly the same as part (i) of Theorem \ref{evensuff}.
\noindent \textit{(ii)} Add $2r+2s+1$ new vertices labelled $u_{p_1},\ldots,u_{p_{2r+1}}, u_{q_1}, \ldots,u_{q_{2s}}$ to $T$. Note that in the construction that follows we will relabel them in different ways, such as $u_{1},\ldots,u_{2r+2s+1}$, for the sake of convenience. We insert arcs in $T$ using the following procedure.
\begin{figure}[htb]
\centering
\includegraphics[scale=1.2]{3-4-1-ink.pdf}
\caption{Figure (a) represents steps 1-4 and figure (b) step 5 of \textsc{Add Arcs}.}
\label{addarcsfig1}
\end{figure}
\vspace{2mm}
\textsc{Add Arcs}:
\begin{enumerate}
\item Insert arcs so that the newly added vertices induce a regular tournament of order $2r+2s+1$.
\item Choose any $\frac{\sum_{j=1}^{2r+1}x_{p_j}}{2}=\frac{\sum_{j=1}^{2s}y_{q_j}}{2}$ pairs $\{v_{i},v'_{i}\}$ of non-neighbouring vertices in $T$ and order them arbitrarily. For each $i=1,\ldots,\frac{\sum_{j=1}^{2r+1}x_{p_j}}{2}$ insert the arc $(v_{i}, v'_{i})$. Therefore, the imbalances of $v_{i}$ and $v'_{i}$ change by $+1$ and $-1$ respectively, for all $i$.
\item For each $j=1,\ldots,2r+1$ choose $i=\frac{\sum_{h=1}^{j-1}x_{p_h}}{2}+1,\ldots,\frac{\sum_{h=1}^{j}x_{p_h}}{2}$ (with the empty sum taken as $0$) and insert the $x_{p_{j}}$ arcs $(u_{p_j},v_{i})$ and $(u_{p_j},v'_{i})$. This gives $u_{p_{j}}$ the imbalance $x_{p_{j}}$.
\item For each $j=1,\ldots,2s$ choose $i=\frac{\sum_{h=1}^{j-1}y_{q_h}}{2}+1,\ldots,\frac{\sum_{h=1}^{j}y_{q_h}}{2}$ and insert the arcs $(v_{i},u_{q_j})$, $(v'_{i},u_{q_j})$. Thus the imbalance of $u_{q_{j}}$ is $-y_{q_{j}}$. But the imbalances of $v_{i}$ and $v'_{i}$ are still perturbed by $+1$ and $-1$ respectively, for $i=1,\ldots,\frac{\sum_{j=1}^{2r+1}x_{p_j}}{2}$.
\item For every $i=1,\ldots,\frac{\sum_{j=1}^{2r+1}x_{p_j}}{2}$ list the $u$'s that are not already linked with $v_{i}$ and $v'_{i}$. There are exactly $2r+2s-1$ such $u$'s, for each $i$. Label them arbitrarily from $1,\ldots,2r+2s-1$. For $j=1,\ldots,\frac{2r+2s-2}{2}$ insert the arcs $(u_{j},v_{i})$ and $(v'_{i},u_{j})$. For $j=\frac{2r+2s-2}{2}+1,\ldots,2r+2s-2$ insert the arcs $(v_{i},u_{j})$ and $(u_{j},v'_{i})$. Finally, insert the arcs $(u_{2r+2s-1},v_{i})$ and $(v'_{i}, u_{2r+2s-1})$. This cancels the $\pm 1$ perturbations and preserves all the imbalances.
\item For the remaining $\frac{n-\sum_{j=1}^{2r+1}x_{p_j}}{2}$ pairs $\{v_{i}, v'_{i}\}$ of non-neighbours insert the arc $(v_{i},v'_{i})$. Label all the $u$'s arbitrarily from $1,\ldots,2r+2s+1$. For $j=1,\ldots,\frac{2r+2s}{2}$ insert the arcs $(u_{j},v_{i})$ and $(v'_{i},u_{j})$. For $j=\frac{2r+2s}{2}+1,\ldots,2r+2s$ insert the arcs $(v_{i},u_{j})$ and $(u_{j},v'_{i})$. Finally, insert the arcs $(u_{2r+2s+1},v_{i})$ and $(v'_{i}, u_{2r+2s+1})$. This preserves all the imbalances.
\begin{figure}[htb]
\centering
\includegraphics[scale=1.5]{3-4-2-ink.pdf}
\caption{Inserting the remaining arcs in step 6 of \textsc{Add Arcs}.}
\label{addarcsfig2}
\end{figure}
\end{enumerate}
\vspace{2mm}
Since every vertex is joined with every other vertex, the resulting digraph is a tournament. Furthermore, the imbalance of each vertex of $T$ is preserved, while the new vertices $u_{p_1},\ldots,u_{p_{2r+1}}, u_{q_1},\ldots,u_{q_{2s}}$ have imbalances $x_{p_1},\ldots,x_{p_{2r+1}}$, $-y_{q_1},\ldots,-y_{q_{2s}}$ respectively.
\noindent \textit{(iii)} The proof is essentially the same as that of case (ii).
\end{proof}
The reader can easily draw parallels between the proofs of Theorems \ref{evensuff}(ii) and \ref{evensuffgen}(ii). For instance, the first and the last steps of both proofs are essentially achieving the same target, while steps 2-5 of the latter are similar to, but more complicated than, step 2 of the former.
Analyzing the above proof leads to a couple of simpler sufficient conditions for tournament imbalance sets. The first one is a fairly straightforward consequence of Theorem \ref{evensuff}(i).
\begin{corr}\label{zero}
If $Z$ is the empty set or it contains at least one positive and at least one negative even integer then $Z\cup \{0\}$ is the imbalance set of a tournament.
\end{corr}
The second condition is not as obvious and is more of an arithmetic result than a combinatorial one. First, we note that for any positive integer $p\geq 1$, the set $\{2^{p}, -2^{p}\}$ is not a tournament imbalance set as any zero sum sequence formed by the elements of this set necessarily consists of an even number of elements. However, the following sufficient condition shows that any other set of positive and negative even integers containing a power of $2$ is a tournament imbalance set.
\begin{corr}\label{arithmetic}
Let $Z$ be a finite nonempty set of even integers containing at least one positive and at least one negative integer. Suppose $Z$ contains an element of the form $2^{p}$ or $-2^{p}$, for some positive integer $p\geq 1$, and $Z\neq \{2^{p}, -2^{p}\}$. Then $Z$ is the imbalance set of a tournament.
\end{corr}
\begin{proof} Let us assume that for some positive integer $p\geq 1$, $2^{p}$ is an element of $Z$. Choose any negative element $-y\in Z$. Then $y=r2^{q}$, where $q\geq 1$ is a positive integer and $r\geq 1$ is an odd positive integer such that if $r=1$ then $q\neq p$. (If this is not possible, we can start with $-2^{p}\in Z$ and choose an $x=r2^{q}$ from $Z$ with $q\neq p$.) Without loss of generality, let $p=\max\{p,q\}$. We have
\[\underbrace{2^{p}+\cdots+2^{p}}_{r \textnormal{ terms}}=\underbrace{y+\cdots+y}_{2^{p-q} \textnormal{ terms}},
\]
\noindent and by Theorem \ref{evensuffgen}, $Z$ is a tournament imbalance set.
\end{proof}
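The equal-sum identity at the heart of this proof can be checked numerically. The following sketch (illustrative only; the function name and parameters are ours, not the paper's) factors $y=r2^{q}$ with $r$ odd and verifies that $r$ copies of $2^{p}$ and $2^{p-q}$ copies of $y$ have equal sums and an odd total number of terms, as Theorem \ref{evensuffgen} requires.

```python
# Sanity check of the identity used in the corollary: with y = r * 2**q
# (r odd) and p = max(p, q), summing r copies of 2**p equals summing
# 2**(p - q) copies of y, and the total number of terms is odd.

def equal_sum_witness(p, y):
    """Return (terms_of_2p, terms_of_y) realizing the equal-sum identity
    for a positive even y = r * 2**q with r odd and q <= p."""
    q, r = 0, y
    while r % 2 == 0:           # factor y = r * 2**q with r odd
        r //= 2
        q += 1
    assert p >= q, "the corollary's proof takes p = max(p, q)"
    left = [2 ** p] * r          # r copies of 2**p (r is odd)
    right = [y] * 2 ** (p - q)   # 2**(p-q) copies of y (an even count when p > q)
    return left, right

left, right = equal_sum_witness(p=3, y=6)    # e.g. Z contains 8 and -6
assert sum(left) == sum(right)               # 3*8 == 4*6 == 24
assert (len(left) + len(right)) % 2 == 1     # odd total length, as required
```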
After deriving a number of sufficient conditions for tournament imbalance sets of even integers, the natural question is whether the sufficient conditions given in Theorem \ref{evensuffgen} are also necessary. The answer is positive as seen from the following result.
\begin{theorem}\label{evennecess}
Let $Z=X\cup Y$ be a finite nonempty set of even integers, where $X$ is the set of non-negative integers and $Y$ is the set of negative integers in $Z$. Then $Z$ is the imbalance set of a tournament if and only if either $Z=\{0\}$ or both $X$ and $Y$ are nonempty and satisfy one of the conditions (i), (ii) or (iii) of Theorem \ref{evensuffgen}.
\end{theorem}
\begin{proof}
The sufficiency follows from Theorem \ref{evensuffgen}.
To prove the necessity, suppose that $0\notin X\cup Y$ and let $X\cup Y$ be the imbalance set of a tournament of order $k$. This implies that we can form a sequence $[t_{i}]_{1}^{k}$ of not necessarily distinct terms from the elements of $X\cup Y$ that sums to zero. Since every imbalance has the parity of $k-1$ and all elements of $X\cup Y$ are even, $k$ is odd. Hence either the number of terms from $X$ is odd or the number of terms from $Y$ is odd, but not both. Thus we have an odd (respectively, even) number of terms $x\in X$ and an even (respectively, odd) number of terms $-y\in Y$ such that $\sum{x}=\sum{y}$.
\end{proof}
\section{Algorithmic aspects}
\label{algorithm}
The aim of this section is to study the \textit{tournament imbalance set problem} (TIS) and present an algorithm that generates a tournament realizing any tournament imbalance set. We begin by proving a theorem on the lengths of equal sum sequences chosen from two sets of non-negative integers that will play a crucial role in developing the algorithm.
\begin{theorem}\label{zerosumsequence}
Let $X$, $Y$, $l$, $m$, $L$, $M$ and $n$ be as defined in Theorem \ref{even}. If $k=p+q$ is the minimum odd number such that there exists a $p$-term sequence from $X$ and a $q$-term sequence from $-Y=\{y:-y\in Y\}$ having the same sum, then $k<n$.
\end{theorem}
\begin{proof}
We observe that $k$ equals the minimal odd length of a zero sum sequence from $X\cup Y$. We prove the result by induction on $n$. Note that, according to the conditions of Theorem \ref{even}, $X\cup Y\neq \{0\}$ and so the minimum possible value of $n$ is 4, which corresponds only to the sets $X=\{2\}$ and $Y=\{-2\}$; this case is vacuous, since it is not possible to form a zero sum sequence with an odd number of terms from $\{2, -2\}$. The next smallest value of $n$ is $6$, which corresponds to the sets $\{2,0,-2\}$, $\{2,-4\}$ and $\{4,-2\}$. Each of these sets admits a zero sum sequence of length $k=3$, and so $k<n$.
Now we aim to show that the result holds for any $n>6$ by assuming that it holds for all smaller values. Let $X$ and $Y$ be any two sets of integers corresponding to $n$ and let $k\geq 3$ be the minimum odd number such that there exists a $k$-term zero sum sequence $a_{1},\ldots, a_{p},-b_{1},\ldots,-b_{q}$, where $k=p+q$, $a_{1}\geq\cdots\geq a_{p}$ with each $a_{i}\in X$, and $-b_{1}\geq\cdots\geq -b_{q}$ with each $-b_{j}\in Y$. Assume that $k>n$; then $k\geq 5$. The sequence $a_{1}+a_{2},a_{3},\ldots,a_{p},-b_{1},\ldots,-b_{q-2},-b_{q-1}-b_{q}$ is a zero sum sequence of minimal odd length $k'=k-2$ corresponding to the sets $X'=X-\{a_{1},a_{2}\}\cup \{a_{1}+a_{2}\}$ and $Y'=Y-\{-b_{q-1},-b_{q}\}\cup\{-b_{q-1}-b_{q}\}$. For the sets $X'$ and $Y'$ we have $n'<n-2$ and so $k'>n'$, contradicting the induction hypothesis.
\end{proof}
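The bound $k<n$ can be probed by brute force on small sets. The sketch below (ours, not from the paper; `min_k` and `n_of` are illustrative names) searches for the minimal odd $k=p+q$ with equal-sum sequences from $X$ and $-Y$, and compares it with $n=lM+mL$ for the base cases discussed in the proof.

```python
from itertools import combinations_with_replacement as cwr

def min_k(X, Y, cap=15):
    """Brute-force the minimal odd k = p + q such that some p-term sequence
    from X and some q-term sequence from -Y = {y : -y in Y} have equal sums."""
    negY = [-y for y in Y]
    for k in range(3, cap + 1, 2):
        for p in range(1, k):
            sums_p = {sum(c) for c in cwr(X, p)}
            if any(sum(c) in sums_p for c in cwr(negY, k - p)):
                return k
    return None  # no odd-length witness up to the cap

def n_of(X, Y):
    """n = l*M + m*L, as in Theorem `even`."""
    L, M = sum(X), -sum(Y)
    return len(X) * M + len(Y) * L

# Base cases from the proof, plus one larger set:
assert min_k([2], [-2]) is None                       # n = 4: vacuous case
for X, Y in [([2, 0], [-2]), ([2], [-4]), ([4], [-2]), ([4, 2], [-2])]:
    assert min_k(X, Y) == 3 and 3 < n_of(X, Y)
```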
Given a set of non-negative integers, we call the search problem of finding two disjoint nonempty subsets that have identical sums the \textit{equal sum subsets problem} (ESS). Several authors \cite{bazgan1,woeginger1} have considered the corresponding decision and optimization problems. Additionally, many variants of ESS have been studied in the literature. For instance, if we require the subsets to be found from two different sets of positive integers, the problem is called \textit{equal sum subsets from two sets} (ESST). It is known that ESS and ESST are weakly NP-hard, as they admit pseudo-polynomial time algorithms \cite{cieliebak1,woeginger1}. The best known algorithm for ESS is the dynamic programming procedure by Bazgan, Santha and Tuza \cite{bazgan1}, which runs in $O(\left|I\right|\times{Sum}^{2})$ time and determines all possible solutions of an ESS instance. Here $\left|I\right|$ and $Sum$ respectively denote the number of elements and the sum of elements of the input set. This procedure can be easily adapted to solve ESST \cite{cieliebak1}. Here we are interested in the following variation of ESS.
\begin{definition}[\textbf{Equal Sum Sequences Problem (ESSeq)}]\label{ESSeq}
Given two sets $X$ and $Y$ of non-negative integers and a positive integer $k$, find two nonempty finite sequences $[x]$ and $[y]$ consisting of elements from $X$ and $Y$ respectively, with each element allowed to repeat at most $k$ times, such that $\sum x = \sum y$.
\end{definition}
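For intuition, an ESSeq instance can be solved exhaustively on small inputs. The following sketch (ours; `esseq` and its `max_len` cutoff are illustrative assumptions, not the pseudo-polynomial DP of \cite{bazgan1}) enumerates bounded-length sequences with at most $k$ repetitions per element and matches sums.

```python
from itertools import combinations_with_replacement

def esseq(X, Y, k, max_len=6):
    """Brute-force sketch of ESSeq(X, Y, k): find equal-sum sequences from X
    and Y, with each element repeating at most k times per sequence."""
    def seqs(S):
        for length in range(1, max_len + 1):
            for c in combinations_with_replacement(sorted(S), length):
                if all(c.count(e) <= k for e in set(c)):
                    yield c
    sums_x = {}
    for cx in seqs(X):
        sums_x.setdefault(sum(cx), cx)   # record one sequence per attainable sum
    for cy in seqs(Y):
        if sum(cy) in sums_x:
            return sums_x[sum(cy)], cy
    return None

sol = esseq({3, 5}, {4, 7}, k=2)         # e.g. 3 + 5 = 4 + 4
assert sol is not None
x_seq, y_seq = sol
assert sum(x_seq) == sum(y_seq)
assert esseq({2}, {5}, k=1) is None      # no equal sums with single use
```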
We now study the complexity of ESSeq. Clearly, ESSeq is in class NP. Furthermore, ESST corresponds to the special case $k=1$ of ESSeq. Thus ESSeq is NP-hard. In fact, ESSeq is weakly NP-hard as we can solve any instance $ESSeq(X,Y,k)$ by using the multisets $X^{(k)}$ and $Y^{(k)}$, in which each element is repeated $k$ times, as input for the pseudo-polynomial ESST algorithm. Let us call the resulting algorithm, which finds all possible solutions to an ESSeq instance, \textsc{Equal Seq}. We have shown the following.
\begin{theorem}\label{esseq np}
The ESSeq decision (search) problem is weakly NP-complete (weakly NP-hard).
\end{theorem}
On the other hand, any algorithm that solves the even case of TIS must be able to check the existence of equal sum sequences $[x]_{1}^{a}$ and $[y]_{1}^{b}$ from any given nonempty finite sets $X$ and $Y$ of even integers such that $a$ and $b$ have different parity. Let $hX$ denote the set obtained by multiplying every element of a set $X$ by $h$ and $X+\{h\}$ denote the set obtained by adding a number $h$ to every element of $X$. We can solve an instance $ESSeq(X,Y,k)$ of the ESSeq decision problem by using any TIS algorithm to solve $\left|X\right|+1$ even instances $TIS(2X,2Y)$, $TIS(2(X+\{x\}),2Y)_{x\in X}$ of TIS.
\begin{theorem}\label{np}
The odd case of TIS can be solved in linear time. On the other hand, the even case of TIS is NP-complete. Hence in general TIS is NP-complete.
\end{theorem}
We now present a pseudo-polynomial time algorithm that not only solves TIS but also generates a tournament realizing any tournament imbalance set. Our algorithm is based on the proofs of Theorems \ref{odd}, \ref{evensuffgen} and \ref{zerosumsequence}. First we form a suitable $n$-term imbalance sequence and then realize it as a tournament. Lemma \ref{aux2} can be used to construct a simple digraph, with maximum possible number of arcs, realizing an imbalance sequence $[t_{i}]_{1}^{n}$. The idea is to start with an arbitrary vertex $v$ having imbalance $t_{i}$ and attach it to $\left\lfloor \frac{n-1+t_{i}}{2}\right\rfloor$ vertices by arcs directed away from $v$. If $t_{i}$ has the same parity as $n-1$ then it is joined with $n-1-\left\lfloor \frac{n-1+t_{i}}{2}\right\rfloor$ other vertices by arcs directed towards $v$. Otherwise, it is joined with $n-2-\left\lfloor \frac{n-1+t_{i}}{2}\right\rfloor$ other vertices by arcs directed towards $v$. Thus $v$ is joined to every vertex except possibly one. These steps are then repeated for every vertex without attaching any new arcs to the preprocessed vertices. We name this $O(n^{2})$ procedure \textsc{Max Realization}.
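The degree bookkeeping behind \textsc{Max Realization} can be sketched as follows (illustrative only; the function name is ours, and the sketch computes the out/in-degree targets rather than the arcs themselves). It is checked against the sequence produced in step 3 of the algorithm for $Z=\{4,2,-2\}$.

```python
def max_realization_degrees(t):
    """Out/in-degree targets assigned by Max Realization: vertex i gets
    floor((n-1+t_i)/2) out-arcs; it is joined to all n-1 other vertices
    when t_i has the parity of n-1, and to n-2 of them otherwise."""
    n = len(t)
    degs = []
    for ti in t:
        dplus = (n - 1 + ti) // 2
        joined = n - 1 if (ti - (n - 1)) % 2 == 0 else n - 2
        degs.append((dplus, joined - dplus))
    return degs

# Sequence from step 3 of the algorithm for Z = {4, 2, -2} (n = 10):
t = [4, 4, 2, 2, -2, -2, -2, -2, -2, -2]
degs = max_realization_degrees(t)
assert all(dp - dm == ti for (dp, dm), ti in zip(degs, t))   # imbalances realized
assert all(dp + dm == len(t) - 2 for dp, dm in degs)         # near tournament: one
                                                             # non-neighbour each
```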
Now suppose that $Z$ is a finite nonempty set of integers arranged in decreasing order. Form the sets $X=\{z\in Z:z\geq 0\}=\{x_{1}, \ldots, x_{l}\}$ and $Y=\{z\in Z:z<0\}=\{-y_{1}, \ldots, -y_{m}\}$ arranged in decreasing order. Let $L=\sum_{i=1}^{l}{x_{i}}$, $M=\sum_{i=1}^{m}{y_{i}}$ and $n=lM+mL$ as in the earlier proofs. The following algorithm outputs a tournament that realizes $Z$, whenever such a tournament exists.
\begin{algo}[\textsc{Imbalance Set}]\label{generate}
\begin{enumerate}
\item If either $X$ or $Y$ is empty, then $Z$ is not a tournament imbalance set. Stop.
\item If elements of $Z$ have different parity, $Z$ is not a tournament imbalance set. Stop.
\item Form the sequence $[t_{i}]_{1}^{n}={x_{1}}^{(M)},\ldots,{x_{l}}^{(M)},{-y_{1}}^{(L)},\ldots,{-y_{m}}^{(L)}.$
\item Call the procedure \textsc{Max Realization} to realize $[t_{i}]_{1}^{n}$ as a simple digraph $D$ with maximum number of arcs.
\item If elements of $Z$ have odd parity, output $D$. End.
\item If elements of $Z$ have even parity, call \textsc{Equal Seq} with the input $(X^{(n)}, (-Y)^{(n)}, n)$ to find sequences $[x]_{1}^{a}$ and $[y]_{1}^{b}$, with $a$ and $b$ having different parity and $\sum x=\sum y$. If no such sequences exist then $Z$ is not a tournament imbalance set. End.
\item Add $a+b$ isolated vertices to $D$.
\item Call \textsc{Add Arcs} to insert the required arcs in $D$ and form a tournament $T$. Return $T$.
\end{enumerate}
\end{algo}
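The output of \textsc{Imbalance Set} can be checked mechanically. The following helper functions (ours, for illustration) compute the imbalance set of a digraph given as a set of arcs and test the tournament property, here exercised on the two tournaments of order 3.

```python
def imbalance_set(n, arcs):
    """Imbalance set of a digraph on vertices 0..n-1 given as arcs (u, v)."""
    out = [0] * n
    inn = [0] * n
    for u, v in arcs:
        out[u] += 1
        inn[v] += 1
    return {out[i] - inn[i] for i in range(n)}

def is_tournament(n, arcs):
    """True iff every unordered pair of vertices is joined by exactly one arc."""
    seen = set()
    for u, v in arcs:
        pair = frozenset((u, v))
        if u == v or pair in seen:
            return False
        seen.add(pair)
    return len(seen) == n * (n - 1) // 2

cyclic = {(0, 1), (1, 2), (2, 0)}        # 3-cycle: the regular tournament
transitive = {(0, 1), (0, 2), (1, 2)}    # transitive triangle
assert is_tournament(3, cyclic) and imbalance_set(3, cyclic) == {0}
assert is_tournament(3, transitive) and imbalance_set(3, transitive) == {2, 0, -2}
```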
The following result shows that the procedure \textsc{Imbalance Set} runs in pseudo-polynomial time and hence TIS is weakly NP-hard.
\begin{theorem}\label{correct}
Algorithm \ref{generate} is correct and runs in pseudo-polynomial time.
\end{theorem}
\begin{proof}
The correctness follows immediately from Theorems \ref{odd}, \ref{even}, \ref{evensuffgen} and \ref{zerosumsequence}. In particular, Theorem \ref{zerosumsequence} guarantees that in the case when $Z$ is a set of even integers, step 6 of Algorithm \ref{generate} necessarily finds the required sequences if they exist. Now note that the computational complexity of Algorithm \ref{generate} is dominated by steps 4 and 6. Step 4 can be performed in $O(n^{2})=O((lM+mL)^{2})$ time, whereas step 6 takes $O(( n\left| X \right| + n\left| Y \right|)\times(n\sum_{x\in X}x + n\sum_{-y\in Y}y)^{2})=O(n^{3}(l+m)\times(L+M)^{2})=O(n^{3}\left|Z\right|\times (L+M)^{2})$ time. The overall complexity is therefore $O(\left|Z\right|\times n^{5})$, which is pseudo-polynomial since $n$ depends on the numeric value of the input.
\end{proof}
Thus we can use Algorithm \ref{generate} to check if a given set of integers is the imbalance set of a tournament and moreover, to construct a tournament realizing the set, if it exists. We now illustrate Algorithm \ref{generate} by showing how it generates a tournament realizing the imbalance set $\{4,2,-2\}$.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.7]{tournament1.pdf}
\caption{A tournament realizing the imbalance set $\{4, 2, -2\}$ obtained from Algorithm 1.}
\label{tournamentex}
\end{figure}
\begin{example}
Consider the set $Z=\{4, 2, -2\}$. Since $Z$ satisfies the conditions in the first two steps of Algorithm 1, the algorithm goes to step 3 and forms the sequence $4, 4, 2, 2, -2, -2, -2, -2, -2, -2$. Step 4 calls the procedure \textsc{Max Realization} to output a simple digraph of order 10 realizing $Z$. However, this simple digraph is only a near tournament and not a tournament (see the digraph induced by the black vertices in Figure \ref{tournamentex}). Since the elements of $Z$ have even parity, the algorithm proceeds to step 6 and finds the sequences $[x_{i}]_{1}^{1}=4$ and $[y_{i}]_{1}^{2}=-2,-2$ with odd and even number of terms respectively, such that $\sum{x_{i}} = -\sum{y_{i}}$. Then step 7 adds 3 new vertices (colored white in Figure \ref{tournamentex}) to the near tournament obtained in step 4. In the end, step 8 adds the arcs (dashed arcs in Figure \ref{tournamentex}) necessary to form a tournament of order 13 in such a way that the imbalances of the old vertices are preserved and the new vertices have imbalances $4$, $-2$, and $-2$. The output of Algorithm 1 is a tournament with imbalance sequence $4, 4, 4, 2, 2, -2, -2, -2, -2, -2, -2, -2, -2$ as shown in Figure \ref{tournamentex}.
\end{example}
The results presented in Sections 2, 3 and 4 can be used to estimate the order of a tournament realizing an imbalance set. Let us denote by $ord(Z)$ the minimal order of a tournament realizing an imbalance set $Z$.
\begin{theorem}\label{extremal}
Let $Z$ be a tournament imbalance set (i.e., it satisfies the conditions of Theorem 2.4 or Theorem 3.6). Define $X$, $Y$, $l$, $m$, $L$, $M$ and $n$ as in the earlier results.
\item(i) If $Z$ consists of odd integers then $ord(Z)\leq n=lM+mL$.
\item(ii) If $Z$ consists of even integers and $0\in Z$ then $ord(Z)\leq n+1=lM+mL+1$.
\item(iii) If $Z$ consists of even integers and $0\notin Z$ then $ord(Z)< 2n=2(lM+mL)$.
\end{theorem}
\begin{proof}
The bound in (i) follows from Theorem \ref{odd} and the bound in (ii) from Theorem \ref{evensuff}(i). For (iii), observe that the order of the tournament constructed in the proof of Theorem \ref{evensuffgen} is $n+2r+2s+1$. From Theorem \ref{zerosumsequence}, $2r+2s+1< n$. As a result, the constructed tournament has order at most $2n-1$.
\end{proof}
\bigskip
\noindent \textbf{Acknowledgements}
\noindent The author would like to thank Professor K\'{a}roly Bezdek for having many useful discussions on the topic and helping in improving this manuscript. The author is also thankful to Dr. Shariefuddin Pirzada for drawing his attention to the imbalance set problem. This research was performed at the Center for Computational and Discrete Geometry, Department of Mathematics and Statistics, University of Calgary.
% --- arXiv:1402.2456, "Equal Sum Sequences and Imbalance Sets of Tournaments" (math.CO), ends here. ---
% --- The text below is arXiv:2009.07768, "Duality Mapping for Schatten Matrix Norms". ---

\bigskip

\noindent\textbf{Duality Mapping for Schatten Matrix Norms}

\medskip

\noindent\textit{Abstract.} In this paper, we fully characterize the duality mapping over the space of matrices that are equipped with Schatten norms. Our approach is based on the analysis of the saturation of the Hölder inequality for Schatten norms. We prove in our main result that, for $p\in (1,\infty)$, the duality mapping over the space of real-valued matrices with Schatten-$p$ norm is a continuous and single-valued function and provide an explicit form for its computation. For the special case $p = 1$, the mapping is set-valued; by adding a rank constraint, we show that it can be reduced to a Borel-measurable single-valued function for which we also provide a closed-form expression.

\section{Introduction}
\label{Sec:intro}
In linear algebra and matrix analysis, Schatten norms are a family of spectral matrix norms that are defined via the singular-value decomposition \cite{bhatia2013matrix}. They have appeared in many applications such as image reconstruction \cite{lefki2013Poisson,lefki2013HS}, image denoising \cite{xie2016weighted}, and tensor decomposition \cite{gao2020robust}, to name a few.
Generally, the Schatten-$p$ norm of a matrix is the $\ell_p$ norm of its singular values. The family contains some well-known matrix norms: The Frobenius and the spectral (operator) norms are special cases in the family, with $p=2$ and $p=\infty$, respectively. The case $p=1$ (trace or nuclear norm) is of particular interest for applications as it can be used to recover low-rank matrices \cite{davenport2016overview}. This is the current paradigm in matrix completion, where the goal is to recover an unknown matrix given some of its entries \cite{candes2009exact}. Prominent examples of applications that can be reduced to low-rank matrix-recovery problems are phase retrieval \cite{candes2015phase}, sensor-array processing \cite{davies2012rank}, system identification \cite{fazel2013hankel}, and index coding \cite{asadi2017fast,esfahanizadeh2014matrix}.
In addition to their many applications in data science, Schatten norms have been extensively studied from a theoretical point of view. Various inequalities concerning Schatten norms have been proven \cite{kittaneh1985inequalities,kittaneh1987inequalities2,kittaneh1986inequalities3,kittaneh1986inequalities4,kittaneh1987inequalities5,bourin2006matrix,Hirzallah2010Schatten,moslehian2011schatten,conde2016norm};
sharp bounds for commutators in Schatten spaces have been given \cite{wenzel2010impressions,cheng2015schatten}; moreover, facial structure \cite{so1990facial}, Fr\'echet differentiablity \cite{potapov2014frechet}, and various other aspects \cite{kittaneh1989continuity,bhatia2000cartesian} have been studied already.
Our objective in this paper is to investigate the duality mapping in spaces of matrices that are equipped with Schatten norms. The duality mapping is a powerful tool to understand the topological structure of Banach spaces \cite{beurling1962theorem,cioranescu2012geometry}. It has been used to derive powerful characterizations of the solution of variational problems in function spaces \cite{de1976best,unser2019unifying} and also to determine generalized linear inverse operators \cite{liu2007best}. Here, we prove that the duality mapping over Schatten-$p$ spaces with $p\in (1,+\infty)$ is a single-valued and continuous function which, in fact, highlights the strict convexity of these spaces. For the special case $p=1$, the mapping is set-valued. However, we prove that, by adding a rank constraint, it reduces to a single-valued Borel-measurable function. In both cases, we also derive closed-form expressions that allow one to compute them explicitly.
The paper is organized as follows: In Section \ref{Sec:Prelim}, we present relevant mathematical tools and concepts that are used in this paper. We study the duality mapping of Schatten spaces and propose our main result in Section \ref{Sec:DualMap}. We provide further discussions regarding the introduced mappings in Section \ref{Sec:discuss}.
\section{Preliminaries}
\label{Sec:Prelim}
\subsection{Dual Norms, H\"older Inequality, and Duality Mapping}
\label{Sec:Holder}
Let $V$ be a finite-dimensional vector space that is equipped with an inner-product $\langle \cdot,\cdot \rangle:V \times V \rightarrow \mathbb{R}$ and let $\|\cdot\|_{X}:V\rightarrow \mathbb{R}_{\geq 0}$ be an arbitrary norm on $V$. We then denote by $X$ the space $V$ equipped with $\|\cdot\|_X$. Clearly, $X$ is a Banach space, because all finite-dimensional normed spaces are complete. The dual norm of $X$, denoted by $\|\cdot\|_{X'}: V\rightarrow \mathbb{R}_{\geq 0}$, is defined as
\begin{equation}\label{Eq:DualNorm}
\|{\bf v}\|_{X'} = \sup_{{\bf u} \in V\backslash \{\boldsymbol{0}\} } \frac{\langle {\bf v} , {\bf u} \rangle}{\|{\bf u}\|_{X}},
\end{equation}
for any ${\bf v} \in V$. Following this definition, one would directly obtain the generic duality bound
\begin{equation}\label{Eq:DualityBound}
\langle {\bf v}, {\bf u} \rangle \leq \|{\bf v}\|_{X'} \|{\bf u}\|_{X},
\end{equation}
for any ${\bf v}, {\bf u} \in V$. Saturation of Inequality \eqref{Eq:DualityBound} is the key concept of dual conjugates that is formulated in the following definition.
\begin{definition}\label{Def:DualConj}
Let $V$ be a finite-dimensional vector space and let $(\|\cdot\|_X,\|\cdot\|_{X'})$ be a pair of dual norms that are defined over $V$. The pair $({\bf u},{\bf v}) \in V\times V$ is said to be a $(X,X')$-conjugate, if
\begin{itemize}
\item $\langle {\bf v}, {\bf u}\rangle = \|{\bf v}\|_{X'} \|{\bf u}\|_{X}$,
\item $\|{\bf v}\|_{X'} = \|{\bf u}\|_{X}$.
\end{itemize}
For any ${\bf u}\in V$, the set of all elements $ {\bf v} \in V$ such that $({\bf u},{\bf v})$ forms an $(X,X')$-conjugate is denoted by $\mathcal{J}_{X}({\bf u})\subseteq V$. We refer to the set-valued mapping $\mathcal{J}_{X}: V \rightarrow 2^V$ as the duality mapping. If, for all ${\bf u}\in V$, the set $\mathcal{J}_{X}({\bf u})$ is a singleton, then we indicate the duality mapping for the $X$-norm via the single-valued function ${\rm J}_X:V\rightarrow V$ with $\mathcal{J}_{X}({\bf u}) = \{{\rm J}_X({\bf u})\}$.
\end{definition}
It is worth mentioning that, for any ${\bf u}\in V$, the set $\mathcal{J}_{X}({\bf u})$ is nonempty. Indeed, the set $B= \{ {\bf v}\in V : \|{\bf v} \|_{X'}= \|{\bf u} \|_{X} \}$ is compact and, hence, the continuous function ${\bf v}\mapsto \langle {\bf v} , {\bf u}\rangle$ attains its maximum value at some ${\bf v}^*\in B$. Now, following Definition \ref{Def:DualConj}, one readily verifies that $({\bf u},{\bf v}^*)$ is an $({X},{X}')$-conjugate.
We conclude this part by providing a classical and illustrative example. Let $V= \mathbb{R}^n$ for some $n\in\mathbb{N}$. For any $p \in [1,+\infty]$, the $\ell_p$-norm of a vector ${\bf u} =(u_i)\in \mathbb{R}^n$ is defined as
\begin{equation}\label{Eq:lp}
\|{\bf u}\|_{p} = \begin{cases} \left(\sum_{i=1}^n |u_i|^p\right)^{\frac{1}{p}}, & p< +\infty \\ \max_{i} |u_i|, & p= +\infty. \end{cases}
\end{equation}
It is widely known that the dual norm of $\ell_p$ is the $\ell_q$-norm, where $(p,q)$ are H\"older conjugates ({\it i.e.}, $1/p+1/q=1$) \cite{rudin1991functional}. This stems from the H\"older inequality which states that
\begin{equation}\label{Eq:HolderVec}
\langle {\bf v}, {\bf u} \rangle \leq \| {\bf u}\|_{p} \|{\bf v}\|_q,
\end{equation}
for all ${\bf u}=(u_i),{\bf v}=(v_i) \in \mathbb{R}^n$. In the sequel, we exclude the trivial cases ${\bf u}=\boldsymbol{0}$ and ${\bf v}=\boldsymbol{0}$ to avoid unnecessary complexities in our statements.
When $1<p<+\infty$, Inequality \eqref{Eq:HolderVec} is saturated if and only if $u_i v_i \geq 0$ for $i=1,\ldots,n$ and there exists a constant $c >0$ such that $|{\bf u}|^p = c |{\bf v}|^q$, where $|{\bf u}|^p = (|u_i|^p)$. This ensures that the duality mapping is single-valued and also yields the map
\begin{equation}\label{Eq:DualMapLp}
\mathrm{J}_p({\bf u})= {\rm sign}({\bf u}) \frac{|{\bf u}|^{p-1}}{\|{\bf u}\|_p^{p-2}}.
\end{equation}
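The closed form \eqref{Eq:DualMapLp} can be verified numerically against Definition \ref{Def:DualConj}. The sketch below (illustrative; the name `J_p` mirrors the notation $\mathrm{J}_p$) checks that $({\bf u},\mathrm{J}_p({\bf u}))$ has equal $\ell_p$/$\ell_q$ norms and saturates the H\"older bound.

```python
import numpy as np

def J_p(u, p):
    """Duality mapping over l_p for 1 < p < infinity, per Eq. (J_p)."""
    return np.sign(u) * np.abs(u) ** (p - 1) / np.linalg.norm(u, p) ** (p - 2)

rng = np.random.default_rng(0)
u = rng.standard_normal(5)
p, q = 3.0, 1.5                       # Holder conjugates: 1/p + 1/q = 1
v = J_p(u, p)
# (u, v) is an (l_p, l_q)-conjugate: equal norms, saturated Holder inequality.
assert np.isclose(np.linalg.norm(v, q), np.linalg.norm(u, p))
assert np.isclose(v @ u, np.linalg.norm(u, p) * np.linalg.norm(v, q))
```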
For $p=1$, one can verify that the equality happens if and only if, for any index $i=1,\ldots,n$ with $u_i \neq 0$, one has that
\begin{equation}\label{Eq:HolderEq1}
v_i = {\rm sign} (u_i)\|{\bf v}\|_{\infty}.
\end{equation}
In other words, the vector ${\bf v}$ should attain its extreme values at places where ${\bf u}$ has nonzero values, with the sign being determined by the corresponding element in ${\bf u}$.
Due to \eqref{Eq:HolderEq1}, the set $\mathcal{J}_1({\bf u})$ is not necessarily a singleton. However, if we add an additional sparsity constraint, then the mapping becomes single-valued. This leads us to introduce the new notion of {\it sparse duality mapping} in Definition \ref{Def:SparseDual}.
\begin{definition}\label{Def:SparseDual}
Let $V$ be a finite-dimensional vector space and let $s_0: V \rightarrow \mathbb{N}$ be an integer-valued function that acts as a sparsity measure. Assuming a pair $(\|\cdot\|_X,\|\cdot\|_{X'})$ of dual norms over $V$, we call the pair $({\bf u},{\bf v})\in V\times V$ a sparse $(X,X')$-conjugate if
\begin{itemize}
\item $({\bf u},{\bf v})$ forms an $(X,X')$-conjugate pair; in other words, ${\bf v} \in \mathcal{J}_{X}({\bf u})$.
\item The quantity $s_0 ( {\bf v})$ attains its minimal value over the set $\mathcal{J}_{X}({\bf u})$.
\end{itemize}
We denote the set of sparse conjugates of ${\bf u}$ by $\mathcal{J}_{X,s_0}({\bf u})$. Whenever $\mathcal{J}_{X,s_0}({\bf u})$ is a singleton for any ${\bf u}\in V$, we refer to the single-valued function ${\rm J}_{X,s_0}: V \rightarrow V$ with $\mathcal{J}_{X,s_0}({\bf u})=\{{\rm J}_{X,s_0}({\bf u})\}$ as the sparse duality mapping.
\end{definition}
Following Definition \ref{Def:SparseDual}, if we use the $\ell_0$-norm as the sparsity measure, that is $s_0 ({\bf u}) = \|{\bf u}\|_0={\rm Card}\left(\{i: u_i\neq 0\}\right)$\footnote{Although this functional does not satisfy the homogeneity property of a norm, it has been widely referred to as the $\ell_0$-norm.}, then we have the sparse duality mapping
\begin{align}\label{Eq:DualMapSparseL1}
&{\rm J}_{1,0}:\mathbb{R}^n\rightarrow\mathbb{R}^n:{\bf u}=(u_i) \mapsto {\bf v}=(v_i) ={\rm J}_{1,0}({\bf u}), \nonumber \\
&v_i = \begin{cases} {\rm sign}(u_i) \|{\bf u}\|_1, & u_i\neq 0 \\
0, & u_i=0.\end{cases}
\end{align}
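As a quick numerical illustration (ours, not part of the original text), the mapping \eqref{Eq:DualMapSparseL1} is a one-liner in NumPy; the assertions check the saturation of H\"older's inequality together with the normalization $\|{\bf v}\|_{\infty}=\|{\bf u}\|_1$:

```python
import numpy as np

def sparse_dual_l1(u):
    """Sparse duality mapping J_{1,0} for the l1 norm:
    v_i = sign(u_i) * ||u||_1 when u_i != 0, and v_i = 0 otherwise.
    (np.sign maps 0 to 0, which handles the zero entries for free.)"""
    u = np.asarray(u, dtype=float)
    return np.sign(u) * np.linalg.norm(u, 1)

u = np.array([2.0, -1.0, 0.0])
v = sparse_dual_l1(u)                      # -> [3., -3., 0.]
# Hoelder saturation: <u, v> = ||u||_1 * ||v||_inf, with ||v||_inf = ||u||_1.
assert np.isclose(u @ v, np.linalg.norm(u, 1) * np.linalg.norm(v, np.inf))
assert np.isclose(np.linalg.norm(v, np.inf), np.linalg.norm(u, 1))
```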
Finally, we mention that, for $p= +\infty$, even the reduced set $\mathcal{J}_{\infty,0}$ fails to be single-valued. Indeed, let us define $I_{\max}({\bf u})=\{i: |u_i|=\|{\bf u}\|_{\infty}\}\subseteq \{1,\ldots,n\}$. From the equality condition of H\"older's inequality (the counterpart of \eqref{Eq:HolderEq1} with the roles of ${\bf u}$ and ${\bf v}$ exchanged), we readily deduce that ${\bf v}=(v_1,\ldots,v_n) \in \mathcal{J}_{\infty}({\bf u})$ if and only if $v_i = 0$ whenever $i\not \in I_{\max}({\bf u})$, ${\rm sign}(v_i) = {\rm sign}(u_i)$ for $i \in I_{\max}({\bf u})$, and $\sum_{i\in I_{\max}({\bf u})} |v_i|= \|{\bf u}\|_{\infty}$. This shows that $\mathcal{J}_{\infty}({\bf u})$ is a convex set whose extreme points form the set $\mathcal{J}_{\infty,0}({\bf u})=\{u_i {\bf e}_i: i\in I_{\max}({\bf u})\}$.
\subsection{Schatten $p$-Norm}
It is widely known that any matrix ${\bf A} \in \mathbb{R}^{m\times n}$ can be decomposed as
\begin{equation}\label{Eq:SVD}
{\bf A}= {\bf U} {\bf S} {\bf V}^T,
\end{equation}
where ${\bf U}\in\mathbb{R}^{m\times m}$ and ${\bf V}\in\mathbb{R}^{n\times n}$ are orthogonal matrices and ${\bf S}$ is an $m$ by $n$ rectangular diagonal matrix with nonnegative real entries $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_{\min(m,n)} \geq 0$ sorted in descending order. The factorization \eqref{Eq:SVD} is known as the singular-value decomposition (SVD), and the entries $\sigma_i$ are the singular values of ${\bf A}$. In general, the SVD of a matrix $\bf A$ is not unique; however, the diagonal matrix ${\bf S}$ and, consequently, its entries are fully determined by ${\bf A}$. In other words, the values of $\sigma_i$ are invariant to the specific choice of decomposition, which is why one can refer to the diagonal entries of ${\bf S}$ as {\it the} singular values of ${\bf A}$.
One can also obtain a reduced version of \eqref{Eq:SVD} that retains only the positive singular values. Indeed, if we denote the rank of ${\bf A}$ by $r$, then we have that
\begin{equation}\label{Eq:ReducedSVD}
{\bf A} = {\bf U}_r {\bf S}_r {\bf V}_r^T,
\end{equation}
where $ {\bf U}_r \in \mathbb{R}^{m\times r}$ and $ {\bf V}_r \in \mathbb{R}^{n\times r}$ are (sub)-orthogonal matrices such that $ {\bf U}_r^T {\bf U}_r = {\bf V}_r^T {\bf V}_r = {\bf I}_r$ and ${\bf S}_r ={\rm diag}(\boldsymbol{\sigma})$ is a diagonal matrix that contains the positive singular values $\boldsymbol{\sigma}=(\sigma_1,\ldots,\sigma_r)\in \mathbb{R}^{r}$ of ${\bf A}$.
Finally, for any $p\in [1,+\infty]$, the Schatten-$p$ norm of ${\bf A}$ is defined as
\begin{equation} \label{Eq:SchattenNorm}
\|{\bf A}\|_{S_p} = \begin{cases} \left(\sum_{i=1}^r \sigma_i^p\right)^{\frac{1}{p}}, & p< +\infty \\ \sigma_1, & p= +\infty. \end{cases}
\end{equation}
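In computational terms, \eqref{Eq:SchattenNorm} says that the Schatten-$p$ norm of ${\bf A}$ is the $\ell_p$ norm of its vector of singular values. A minimal NumPy sketch (the sample matrix is our own illustrative choice):

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten-p norm: the l_p norm of the singular values of A.
    Zero singular values contribute nothing, so using all of them
    (rather than only the r positive ones) gives the same value."""
    s = np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)
    return np.linalg.norm(s, p)  # p = np.inf returns max(s) = sigma_1

A = np.diag([3.0, 4.0])                           # singular values {4, 3}
assert np.isclose(schatten_norm(A, 1), 7.0)       # nuclear norm
assert np.isclose(schatten_norm(A, 2), 5.0)       # Frobenius norm
assert np.isclose(schatten_norm(A, np.inf), 4.0)  # spectral norm
```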
\section{Duality Mapping in Schatten Spaces}
\label{Sec:DualMap}
The dual of the Schatten-$p$ norm is the Schatten-$q$ norm, where $q\in [1,\infty]$ is such that $\frac{1}{p}+\frac{1}{q}=1$ \cite{bhatia2013matrix}. This is due to the generalized version of H\"older's inequality for Schatten norms, as stated in Proposition \ref{Prop:Holder}. While this is a known result (see, for example, \cite{Lefki2015StructureTensor}), it is also the basis for the present work, which is the reason why we provide a proof in \ref{App:Holder}.
\begin{proposition}\label{Prop:Holder}
For any pair $(p,q)\in [1,+\infty]^2$ of H\"older conjugates with $\frac{1}{p}+\frac{1}{q}=1$ and any pair of matrices ${\bf A},{\bf B}\in \mathbb{R}^{m\times n}$, we have that
\begin{equation}\label{Eq:Holder}
\langle {\bf A}, {\bf B}\rangle = \mathrm{Tr}\left( {\bf A}^T {\bf B} \right) \leq \|{\bf A}\|_{S_p} \|{\bf B}\|_{S_q}.
\end{equation}
\end{proposition}
In Proposition \ref{Prop:HolderSat}, we investigate the case where the H\"older inequality is saturated, in the sense that
\begin{equation}\label{Eq:HolderSat}
{\rm Tr}\left( {\bf A}^T {\bf B} \right) = \|{\bf A}\|_{S_p} \|{\bf B}\|_{S_q}.
\end{equation}
This saturation is central to our work, as it is tightly linked to the notion of duality mapping.
\begin{proposition}\label{Prop:HolderSat}
Let $(p,q)$ be a pair of H\"older conjugates and let ${\bf A},{\bf B}\in \mathbb{R}^{m\times n}$ be a pair of nonzero matrices with reduced SVDs of the form
\begin{equation}\label{Eq:RedSVD}
{\bf A}= {\bf U}_r {\rm diag}(\boldsymbol{\sigma}) {\bf V}_r^T, \quad {\bf B}= \tilde{\bf U}_{\tilde{r}} {\rm diag}(\boldsymbol{\tilde{\sigma}})\tilde{\bf V}_{\tilde{r}}^T.
\end{equation}
\begin{itemize}
\item If $p\in (1,\infty)$, then the H\"older inequality is saturated if and only if we have that
\begin{equation}\label{Eq:FormBgenP}
{\bf B} = c {\bf U}_r{\rm diag}({\rm J}_p(\boldsymbol{\sigma})){\bf V}_r^T
\end{equation}
or, equivalently,
\begin{equation}\label{Eq:FormAgenP}
{\bf A} = c^{-1} {\bf \tilde{U}}_{\tilde{r}}{\rm diag}({\rm J}_q(\tilde{\boldsymbol{\sigma}})){\bf \tilde{V}}_{\tilde{r}}^T,
\end{equation}
where $c= \frac{\|{\bf B}\|_{S_q}}{\|{\bf A}\|_{S_p}}$ and ${\rm J}_p(\cdot)$ and ${\rm J}_q(\cdot)$ are the duality mappings for the $\ell_p$ and $\ell_q$ norms, respectively (see \eqref{Eq:DualMapLp}).
\item If $p=1$, then a necessary condition for the saturation of the H\"older inequality is that
\begin{equation}
{\rm rank}({\bf A}) \leq r_1 \leq {\rm rank}({\bf B}),
\end{equation}
where $r_1= {\rm Card}\left(\{i: \tilde{\sigma}_i = \tilde{\sigma}_1\}\right)$ is the multiplicity of the first singular value of ${\bf B}$. Moreover, if we denote the first $r_1$ singular vectors of ${\bf B}$ in \eqref{Eq:RedSVD} by ${\bf \tilde{U}}_1\in \mathbb{R}^{m\times r_1}$ and ${\bf \tilde{V}}_1\in\mathbb{R}^{n\times r_1}$, then the saturation of the H\"older inequality is equivalent to the existence of a symmetric matrix ${\bf X}\in \mathbb{R}^{r_1\times r_1}$ such that
\begin{align}\label{Eq:FormA1}
{\bf A} = {\bf \tilde{U}}_1 {\bf X} {\bf \tilde{V}}_1^T.
\end{align}
Finally, in the rank-equality case ${\rm rank}({\bf A}) ={\rm rank}({\bf B})$, we have saturation if and only if
\begin{equation}\label{Eq:FormBrankEq}
{\bf B} = c {\bf U}_r{\bf V}_r^T,
\end{equation}
where $c= \|{\bf B}\|_{S_{\infty}}$ and the matrices ${\bf U}_r$ and ${\bf V}_r$ are defined in \eqref{Eq:RedSVD}.
\end{itemize}
\end{proposition}
\begin{remark}\label{Remark}
The reduced SVD is not unique; there are multiple choices for the sub-orthogonal matrices in \eqref{Eq:RedSVD}. However, the parametric forms given in Proposition \ref{Prop:HolderSat} do not depend on a specific decomposition.
\end{remark}
The proof of Proposition \ref{Prop:HolderSat} can be found in \ref{App:HolderSat}. We observe that, in the case $p\in(1,\infty)$, the saturation of the H\"older inequality provides a very tight link between the two matrices: if we know one of them, then the other lies on a one-dimensional ray that is parameterized by the constant $c>0$. In the special case $p=1$, however, the identification is not as simple. There again, for a given matrix ${\bf B}$, one can fully characterize the set of admissible matrices ${\bf A}$; for the reverse direction, an additional rank-equality constraint is needed to reduce the set of admissible matrices ${\bf B}$ to a single ray.
Inspired by Proposition \ref{Prop:HolderSat}, we now state our main result, Theorem \ref{Thm:p}, which explicitly characterizes the duality mapping for the Schatten $p$-norms. The proof of Theorem \ref{Thm:p} can be found in \ref{App:p}.
\begin{theorem}\label{Thm:p}
Let $p,q\in [1,+\infty]$ be a pair of H\"older conjugates with $\frac{1}{p}+\frac{1}{q}=1$ and ${\bf A}\in\mathbb{R}^{m\times n}$ a matrix whose reduced SVD is specified in \eqref{Eq:ReducedSVD}.
\begin{itemize}
\item If $1<p<+\infty$, then the single-valued duality mapping ${\rm J}_{S_p}: \mathbb{R}^{m\times n} \rightarrow \mathbb{R}^{m\times n}$ is well-defined and can be expressed as
\begin{equation}
{\rm J}_{S_p}:{\bf A} = {\bf U }_r {\rm diag}(\boldsymbol{\sigma} ) {\bf V}_r^T \mapsto {\bf A}^*= {\bf U}_r{\rm diag}({\rm J}_p(\boldsymbol{\sigma} )){\bf V}_r^T.
\end{equation}
\item If $p=1$ and if we consider the rank function as the sparsity measure in Definition \ref{Def:SparseDual}, then the sparse duality mapping ${\rm J}_{S_1,{\rm rank}}: \mathbb{R}^{m\times n} \rightarrow \mathbb{R}^{m\times n}$ is well-defined (the set of sparse conjugates is a singleton) and is given by
\begin{equation}
{\rm J}_{S_1,{\rm rank}}:{\bf A} = {\bf U }_r {\rm diag}(\boldsymbol{\sigma}) {\bf V}_r^T \mapsto {\bf A}^*= \|\boldsymbol{\sigma}\|_{1} {\bf U} _r{\bf V}_r^T.
\end{equation}
\item If $p=+\infty$, then the set-valued mapping $\mathcal{J}_{S_\infty}(\cdot)$ can be described as
\begin{equation}\label{Eq:Jinf}
\mathcal{J}_{S_\infty}({\bf A})= \left\{ \sigma_1 {\bf U}_1 {\bf X} {\bf V}_1^T: {\bf X}\in\mathbb{R}^{r_1\times r_1} \text{ is symmetric and } \|{\bf X}\|_{S_1} =1 \right\},
\end{equation}
where $r_1$ denotes the multiplicity of the first singular value $\sigma_1$ of ${\bf A}$ and ${\bf U}_1,{\bf V}_1$ collect the singular vectors that correspond to $\sigma_1$ in \eqref{Eq:ReducedSVD}. It is a convex set whose extreme points are ${\bf E}_{i,j} = \frac{\sigma_1}{2}({\bf u}_i {\bf v}_j^T +{\bf u}_j {\bf v}_i^T )$ for $1\leq i\leq j\leq r_1$. Finally, the set of sparse dual conjugates is the collection of rank-1 elements of $\mathcal{J}_{S_\infty}({\bf A})$, which can be characterized as
\begin{equation}
\mathcal{J}_{S_\infty, {\rm rank}}({\bf A}) = \{ \sigma_1 {\bf U}_1 {\bf p}{\bf p}^T {\bf V}_1^T: {\bf p}\in \mathbb{R}^{r_1} , \|{\bf p}\|_2=1\}.
\end{equation}
\end{itemize}
\end{theorem}
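The closed forms of Theorem \ref{Thm:p} for $p<+\infty$ can be checked numerically. The sketch below is our own illustration; it assumes the usual normalization of the vector duality mapping, $\langle \boldsymbol{\sigma}, {\rm J}_p(\boldsymbol{\sigma})\rangle = \|\boldsymbol{\sigma}\|_p^2$ and $\|{\rm J}_p(\boldsymbol{\sigma})\|_q = \|\boldsymbol{\sigma}\|_p$, consistent with the $\ell_1$ case above:

```python
import numpy as np

def schatten_dual(A, p, tol=1e-12):
    """Dual conjugate of A in the Schatten-p space for 1 <= p < inf
    (sketch of Theorem Thm:p; for p = 1 this is the sparse conjugate
    J_{S_1,rank}).  Built from the reduced SVD A = U_r diag(sigma) V_r^T."""
    U, s, Vt = np.linalg.svd(np.asarray(A, dtype=float), full_matrices=False)
    r = int(np.sum(s > tol))                      # numerical rank
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    if p == 1:
        d = np.full(r, np.sum(s))                 # ||sigma||_1 * U_r V_r^T
    else:
        d = s ** (p - 1) / np.linalg.norm(s, p) ** (p - 2)   # J_p(sigma)
    return U @ np.diag(d) @ Vt

A = np.array([[3.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
for p in (1, 1.5, 2, 3):
    B = schatten_dual(A, p)
    q = np.inf if p == 1 else p / (p - 1)
    nA = np.linalg.norm(np.linalg.svd(A, compute_uv=False), p)
    nB = np.linalg.norm(np.linalg.svd(B, compute_uv=False), q)
    assert np.isclose(np.trace(A.T @ B), nA * nB)  # Hoelder saturated
    assert np.isclose(nB, nA)                      # ||B||_{S_q} = ||A||_{S_p}
```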
\section{Discussion}\label{Sec:discuss}
Theorem \ref{Thm:p} provides an interesting characterization of the duality mapping in three scenarios. The first case, $1<p<+\infty$, is the most straightforward one: Theorem \ref{Thm:p} tells us that the mapping is single-valued and also gives a formula to compute the dual conjugate ${\bf A}^*$ of any matrix ${\bf A}\in\mathbb{R}^{m\times n}$. We use this result to deduce the continuity of the duality mapping, as well as the strict convexity of the Schatten space, in this case (see Corollary \ref{Cor:strict}). In the second case, with $p=1$, the mapping is not single-valued. However, the set of dual conjugates contains a unique element of minimal rank (equal to the rank of ${\bf A}$) and, hence, we can construct a single-valued sparse duality mapping. Finally, we showed in the third case, $p=+\infty$, that neither the set of dual conjugates nor its subset of minimal-rank elements is a singleton. However, we observe in \eqref{Eq:Jinf} that the entries of ${\bf X}$ can be chosen independently, up to the symmetry and normalization constraints. This suggests that the dimension of $\mathcal{J}_{S_\infty}({\bf A})$ is $d=\left( \frac{r_1(r_1+1)}{2}-1\right)$. Moreover, we show that this convex set has exactly $(d+1)$ extreme points, which is the minimal number for a convex set of dimension $d$. We also observe that the extreme points of $\mathcal{J}_{S_\infty}({\bf A})$ are low-rank: they are indeed a collection of rank-1 and rank-2 matrices.
In Corollary \ref{Cor:strict}, we highlight some consequences of Theorem \ref{Thm:p} concerning the strict convexity of Schatten spaces and the continuity of the duality mapping.
\begin{corollary}\label{Cor:strict}
The Banach space of $m$ by $n$ matrices equipped with the Schatten-$p$ norm is strictly convex if and only if $p\in (1,+\infty)$. In this case, the duality mapping ${\rm J}_{S_p} : \mathbb{R}^{m\times n} \rightarrow \mathbb{R}^{m\times n}$ is continuous.
\end{corollary}
\begin{proof}
For $p\in (1,+\infty)$, we know from Theorem \ref{Thm:p} that the duality mapping ${\rm J}_{S_p}$ is bijective. Moreover, it is known that all finite-dimensional Banach spaces are reflexive. Now, following \cite{petryshyn1970characterization}, we deduce the strict convexity of the space of $m$ by $n$ matrices with Schatten-$p$ norm.
For $p=1$ and $p=+\infty$, we can readily verify that
\begin{align*}
&\left \|\alpha \begin{pmatrix}1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{pmatrix} +(1-\alpha)\begin{pmatrix}0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix} \right\|_{S_1} = \left \|\begin{pmatrix} \alpha & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & (1-\alpha) \end{pmatrix} \right\|_{S_1}=1, \\
&\left\|\alpha \begin{pmatrix}1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{pmatrix} +(1-\alpha)\begin{pmatrix}1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix} \right\|_{S_\infty} = \left\| \begin{pmatrix}1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & (1-\alpha) \end{pmatrix} \right\|_{S_\infty}=1,
\end{align*}
for all $\alpha \in (0,1)$, which shows that the Schatten space is not strictly convex for $p=1,+\infty$.
Finally, the Schatten-$p$ norm is known to be Fr\'echet differentiable for $p\in(1,+\infty)$ \cite{potapov2014frechet}. Moreover, the duality mapping of any Banach space with Fr\'echet-differentiable norms is guaranteed to be continuous \cite{giles1978geometrical,contreras1994upper}. Combining the two statements, we deduce the continuity of the duality mapping in this case.
\end{proof}
By contrast, the sparse duality mapping ${\rm J}_{S_1,{\rm rank}}(\cdot)$ is not continuous. This is best explained by providing a counterexample. Specifically, let us consider the sequence of 2 by 2 matrices
$${\bf S}_k=\begin{pmatrix} 1 & 0\\ 0 & \frac{1}{k}\end{pmatrix}, \quad k\in\mathbb{N}.$$
It is clear that ${\bf S}_k \rightarrow {\bf S}_{\infty} =\begin{pmatrix} 1 & 0\\ 0 & 0\end{pmatrix}$. However, we have that
$$\forall k\in\mathbb{N}: {\rm J}_{S_1,{\rm rank}}({\bf S}_k) = \left(1+\tfrac{1}{k}\right)\begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}, \quad \text{while} \quad {\rm J}_{S_1,{\rm rank}}({\bf S}_{\infty}) = \begin{pmatrix} 1 & 0\\ 0 & 0\end{pmatrix},$$
which shows the discontinuity of ${\rm J}_{S_1,{\rm rank}}$ on the space of 2 by 2 matrices. The counterexample generalizes to spaces of matrices of arbitrary dimensions $m,n\in\mathbb{N}$.
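This jump can also be observed numerically; the following sketch (our own illustration) implements ${\rm J}_{S_1,{\rm rank}}$ through the SVD:

```python
import numpy as np

def j_s1_rank(A, tol=1e-12):
    """Sparse duality mapping J_{S_1,rank}(A) = ||sigma||_1 * U_r V_r^T."""
    U, s, Vt = np.linalg.svd(np.asarray(A, dtype=float), full_matrices=False)
    r = int(np.sum(s > tol))
    return np.sum(s[:r]) * U[:, :r] @ Vt[:r, :]

k = 10**6
Sk = np.diag([1.0, 1.0 / k])        # S_k -> S_inf = diag(1, 0)
Sinf = np.diag([1.0, 0.0])
# J(S_k) = (1 + 1/k) I stays near the identity, while J(S_inf) = diag(1, 0):
assert np.allclose(j_s1_rank(Sk), (1 + 1 / k) * np.eye(2))
assert np.allclose(j_s1_rank(Sinf), np.diag([1.0, 0.0]))
```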
Although ${\rm J}_{S_1,{\rm rank}}$ is not continuous, we now show that it is Borel-measurable and, hence, by Lusin's theorem \cite{rudin1991functional}, that it can be approximated with arbitrary precision by a continuous mapping.
\begin{proposition}\label{Prop:measure}
For any $m,n\in\mathbb{N}$, the sparse duality mapping ${\rm J}_{S_1,{\rm rank}}$ is a Borel-measurable matrix-valued function over the space of $m$ by $n$ matrices.
\end{proposition}
Before going into the proof of Proposition \ref{Prop:measure}, we present a preliminary result.
\begin{lemma}\label{Lem}
The set $\mathcal{R}_{ r}\subseteq \mathbb{R}^{m\times n}$ of $m$ by $n$ matrices of rank $r$ is Borel-measurable.
\end{lemma}
\begin{proof}
First note that the set of matrices of rank at most one,
$$\mathcal{R}_{\leq 1}=\{ {\bf u}{\bf v}^T: {\bf u}\in\mathbb{R}^m, {\bf v}\in \mathbb{R}^n\}, $$
is closed: it consists exactly of the matrices whose $2\times 2$ minors all vanish. Hence, it is Borel-measurable.
More generally, the set $\mathcal{R}_{\leq r}\subseteq \mathbb{R}^{m\times n}$ of matrices with rank no more than $r$ is the common zero set of all $(r+1)\times (r+1)$ minors, which are continuous functions of the entries. Consequently, $\mathcal{R}_{\leq r}$ is closed and, therefore, both $\mathcal{R}_{\leq r}$ and $\mathcal{R}_{r} = \mathcal{R}_{\leq r} \backslash \mathcal{R}_{\leq (r-1)}$ are Borel-measurable sets.
\end{proof}
\begin{proof}[Proof of Proposition \ref{Prop:measure}]
Consider a Borel-measurable set $\mathcal{B}\subseteq \mathbb{R}^{m\times n}$. We show that $\mathcal{B}_{\rm inv}= {\rm J}_{S_1,{\rm rank}}^{-1}(\mathcal{B})$ is also Borel-measurable. By defining $\mathcal{B}_{{\rm inv},r} = \mathcal{B}_{\rm inv}\cap \mathcal{R}_{r}$, we can partition $ \mathcal{B}_{\rm inv}$ as
$$\mathcal{B}_{\rm inv}= \bigcup_{r=1}^{\min (m,n)} \mathcal{B}_{{\rm inv},r}.$$
Hence, it is sufficient to show that each part $\mathcal{B}_{{\rm inv},r}$ is Borel-measurable.
Define the set $\mathcal{P}_{r} \subseteq \mathcal{R}_{r}\times \mathcal{B}$ as
$$\mathcal{P}_{r} = \{ ({\bf A},{\bf B}) \in \mathcal{R}_{r} \times \mathcal{B}: {\rm Tr}({\bf A}^T{\bf B}) = \|{\bf A}\|_{S_1} \|{\bf B}\|_{S_\infty}, \quad \|{\bf A}\|_{S_1}= \|{\bf B}\|_{S_\infty}\}.$$
The set $\mathcal{P}_{r}$ introduces a relation over $ \mathcal{R}_{r} $ whose domain is $\mathcal{B}_{{\rm inv},r}$. Since the trace and the norms are continuous (and, consequently, Borel-measurable) functions and $\mathcal{R}_{r}\times \mathcal{B}$ is a Borel-measurable set (by Lemma \ref{Lem}), the relation induced by $\mathcal{P}_{r}$ is Borel-measurable as well. Finally, we use \cite[Proposition 2.1]{himmelberg1975measurable} to conclude that its domain is Borel-measurable.
\end{proof}
\section{Conclusion}
In this paper, we studied the duality mapping in finite-dimensional Schatten spaces. Based on a careful investigation of the cases where the H\"older inequality saturates, we provided an explicit form for this mapping when $p\in (1,+\infty)$. Furthermore, by adding a rank constraint, we proved that the mapping becomes single-valued for the special case $p=1$. As for $p=+\infty$, we showed that the mapping yields a convex set whose extreme points are low-rank matrices. Finally, we discussed our theorem and studied the continuity of the introduced mappings as well as the strict convexity of the Schatten spaces. A possible future direction of research is to extend the results of this paper to infinite-dimensional Schatten spaces and even, in full generality, to linear operators over Hilbert spaces.
% arXiv:2009.07768 --- Duality Mapping for Schatten Matrix Norms (math.FA), submitted 2020-09-17.
% https://arxiv.org/abs/1311.5240

\title{PENLAB: A MATLAB solver for nonlinear semidefinite optimization}

\begin{abstract}
PENLAB is an open source software package for nonlinear optimization, linear and nonlinear semidefinite optimization, and any combination of these. It is written entirely in MATLAB. PENLAB is a young brother of our code PENNON \cite{pennon} and of a new implementation from NAG \cite{naglib}: it can solve the same classes of problems and uses the same algorithm. Unlike PENNON, PENLAB is open source and allows the user not only to solve problems but to modify various parts of the algorithm. As such, PENLAB is particularly suitable for teaching and research purposes and for testing new algorithmic ideas. In this article, after a brief presentation of the underlying algorithm, we focus on practical use of the solver, both for general problem classes and for specific practical problems.
\end{abstract}

\section{Introduction}
Many problems in various scientific disciplines, as well as many
industrial problems, lead to (or can be advantageously formulated as)
nonlinear optimization problems with semidefinite constraints. Until
recently, these problems were considered numerically unsolvable, and
researchers were looking for alternative formulations of their problem
that often yield only an approximation (good or bad) of the true solution.
This was our main motivation for the development of PENNON
\cite{pennon}, a code for nonlinear optimization problems with matrix
variables and matrix inequality constraints.
Apart from PENNON, other concepts for the solution of nonlinear
semidefinite programs have been suggested in the literature; see
\cite{sun-sun-zhang} for a discussion of the classic augmented
Lagrangian method applied to nonlinear semidefinite programs,
\cite{correa2004global,fares,freund-jarre-vogelbusch} for sequential
semidefinite programming algorithms, and \cite{kanzow-nagel-newt} for a
smoothing-type algorithm. However, to the best of our knowledge, none of
these algorithmic concepts has led to publicly available code yet.
In this article, we present PENLAB, a younger brother of PENNON
\cite{pennon} and of a new implementation from NAG \cite{naglib}.
PENLAB can solve the same classes of problems, uses the same algorithm,
and its behaviour is very similar. However, its performance is limited
in comparison to \cite{pennon} and \cite{naglib}, due to its MATLAB
implementation. On the other hand,
PENLAB is open source and allows the user not only to solve problems
but to modify various parts of the algorithm. As such, PENLAB is
particularly suitable for teaching and research purposes and for
testing new algorithmic ideas.
After a brief presentation of the underlying algorithm, we focus on
practical use of the solver, both for general problem classes and for
specific practical problems, namely, the nearest correlation matrix
problem with constraints on condition number, the truss topology
problem with global stability constraint and the static output
feedback problem. More applications of nonlinear semidefinite
programming problems can be found, for instance, in
\cite{annad,kanno,leibfritz-volkwein}.
PENLAB is distributed under GNU GPL license and can be downloaded from
{\tt http://web.mat.bham.ac.uk/kocvara/penlab}.
We use standard notation: matrices are denoted by capital letters
($A,B,X,\ldots$) and their elements by the corresponding lower-case
letters ($a_{ij}, b_{ij}, x_{ij},\ldots$). For vectors $x,y\in\RR^n$,
$\langle x,y\rangle:=\sum_{i=1}^n x_iy_i$ denotes the inner product.
$\SS^{m}$ is the space of real symmetric matrices of dimension $m\times
m$. The inner product on $\SS^{m}$ is defined by $\langle A,
B\rangle_{\SS^{m}} := \Tr (AB)$. When the dimensions of $A$ and $B$ are
known, we will often use notation $\langle A, B\rangle$, same as for
the vector inner product. Notation $A\preccurlyeq B$ for
$A,B\in\SS^{m}$ means that the matrix $B-A$ is positive semidefinite.
If $A$ is an $m\times n$ matrix and $a_j$ its $j$-th
column, then $\Vec A$ is the $mn\times 1$ vector
$$
\Vec A = \begin{pmatrix} a_1^T\ \ a_2^T\ \ \cdots\ \ a_n^T\end{pmatrix}^T\,.
$$
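As an aside (ours, not part of the original notation section), this column-stacking operator corresponds to a column-major (Fortran-order) reshape in NumPy:

```python
import numpy as np

# vec(A) stacks the columns a_1, ..., a_n of an m-by-n matrix A
# into an mn-vector; in NumPy this is a Fortran-order reshape.
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
vecA = A.reshape(-1, order="F")
assert vecA.tolist() == [1, 3, 5, 2, 4, 6]   # columns first
```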
Finally, for $\Phi:\SS^m\to\SS^m$ and $X,Y\in \SS^m$, $D\Phi(X;Y)$
denotes the directional derivative of $\Phi$ with respect to $X$ in
direction $Y$.
\vfill
\section{The problem}
We intend to solve optimization problems with a nonlinear objective
subject to nonlinear inequality and equality constraints and nonlinear
matrix inequalities (NLP-SDP):
\begin{align}
& \min_{x\in \RR^n, Y_1\in\SS^{p_1},\ldots,Y_k\in\SS^{p_k}} f(x,Y)\label{eq:nlpsdp}\\
& \begin{aligned}
\mbox{subject to}\quad
&g_i(x,Y) \leq 0, \qquad &&i=1,\ldots,m_g\\
&h_i(x,Y) = 0, \qquad &&i=1,\ldots,m_h \\
&{\cal A}_i(x,Y)\preceq 0, \qquad &&i=1,\ldots,m_A\\
&\underline{\lambda}_i I \preceq Y_i\preceq \overline{\lambda}_i I, \qquad&&i=1,\ldots,k\,.
\end{aligned}\nonumber
\end{align}
Here
\begin{itemize}
\item $x\in\RR^n$ is the vector variable;
\item $Y_1\in\SS^{p_1},\ldots,Y_k\in\SS^{p_k}$ are the matrix
variables, $k$ symmetric matrices of dimensions $p_1\times
p_1,\ldots,p_k\times p_k$;
\item we denote $Y=(Y_1,\ldots,Y_k)$;
\item $f$, $g_i$ and $h_i$ are $C^2$ functions from $\RR^n\times
\SS^{p_1}\times\ldots\times\SS^{p_k}$ to $\RR$;
\item $\underline{\lambda}_i$ and $\overline{\lambda}_i$ are the
lower and upper bounds, respectively, on the eigenvalues of~
$Y_i$, $i=1,\ldots,k$;
\item ${\cal A}_i(x,Y)$ are twice continuously differentiable
nonlinear matrix operators from $\RR^n\times
\SS^{p_1}\times\ldots\times\SS^{p_k}$ to $\SS^{p_{A_i}}$ where
${p_{A_i}}$, $i=1,\ldots,m_A$, are positive integers.
\end{itemize}
\section{The algorithm}
The basic algorithm used in this article is based on the nonlinear
rescaling method of Roman~Polyak \cite{polyak} and was described in
detail in \cite{pennon} and \cite{stingl}. Here we briefly recall it
and stress points that will be needed in the rest of the paper.
The algorithm is based on a choice of penalty/barrier functions
$\varphi:\RR\to\RR$ that penalize the inequality constraints and
$\Phi:\SS^p\to\SS^p$ penalizing the matrix inequalities. These
functions satisfy a number of properties (see \cite{pennon,stingl})
that guarantee that for any $\pi>0$ and $\Pi > 0$, we have
$$
z(x) \le 0 \ \Longleftrightarrow \ \pi\varphi(z(x)/\pi) \le
0,
\quad z\in C^2(\RR^n\to\RR)
$$
and
$$
Z \preceq 0 \ \Longleftrightarrow \ \Pi \Phi(Z/\Pi) \preceq
0, \quad Z\in\SS^p \,.
$$
This means that, for any $\pi>0$, $\Pi>0$, problem (\ref{eq:nlpsdp})
has the same solution as the following ``augmented" problem
\begin{align}
& \min_{x\in\RR^n, Y_1\in\SS^{p_1},\ldots,Y_k\in\SS^{p_k}} f(x,Y) \label{eq:nlpsdp_phi}\\
& \begin{aligned}
\mbox{subject to}\quad
&\varphi_{\pi}(g_i(x,Y)) \leq 0, \qquad &&i=1,\ldots,m_g\nonumber\\
&\Phi_{\Pi}({\cal A}_i(x,Y))\preceq 0,\qquad&&i=1,\ldots,m_A\nonumber\\
&\Phi_{\Pi}(\underline{\lambda}_i I - Y_i)\preceq 0, \qquad&&i=1,\ldots,k\nonumber\\
& \Phi_{\Pi}(Y_i-\overline{\lambda}_i I)\preceq 0, \qquad&&i=1,\ldots,k\nonumber\\
&h_i(x,Y) = 0, \qquad &&i=1,\ldots,m_h\,, \nonumber\\
\end{aligned}
\end{align}
where we have used the abbreviations $\varphi_{\pi} = \pi \varphi
(\cdot / \pi)$ and $\Phi_{\Pi} = \Pi \Phi (\cdot / \Pi)$.
\medskip
The Lagrangian of (\ref{eq:nlpsdp_phi}) can be viewed as
a (generalized) augmented Lagrangian of (\ref{eq:nlpsdp}):
\begin{multline}\label{eq:lagr}
F(x,Y,u,\Xi,\underline{U},\overline{U},v,\pi,\Pi)\\
=f(x,Y) + \sum_{i=1}^{m_g} u_i
\varphi_{\pi}(g_i(x,Y))
+ \sum_{i=1}^{m_A}\langle \Xi_i,\Phi_\Pi({\cal A}_i(x,Y))\rangle\\
+
\sum_{i=1}^{k}\langle\underline{U}_i, \Phi_\Pi(\underline{\lambda}_i I
- Y_i)\rangle +
\sum_{i=1}^{k}\langle\overline{U}_i,\Phi_\Pi(Y_i-\overline{\lambda}_i
I)\rangle +v^\top h(x, Y)
\,;
\end{multline}
here $u\in\RR^{m_g}$, $\Xi = (\Xi_1,\ldots,\Xi_{m_A}),
\Xi_i\in\SS^{p_{A_i}}$, and
$\underline{U}=(\underline{U}_1,\ldots,\underline{U}_k),
\overline{U}=(\overline{U}_1, \ldots, \overline{U}_k)$,
$\underline{U}_i,\overline{U}_i\in\SS^{p_i}$, are Lagrange multipliers
associated with the standard and the matrix inequality constraints,
respectively, and $v\in \RR^{m_h}$ is the vector of Lagrangian
multipliers associated with the equality constraints.
The algorithm combines ideas of the (exterior) penalty and (interior)
barrier methods with the augmented Lagrangian method.
\begin{Algorithm}\label{algo:1}
Let $x^1, Y^1$ and $u^1, \Xi^1, \underline{U}^1, \overline{U}^1, v^1$
be given. Let $\pi^1>0$, $\Pi^1>0$ and $\alpha^1>0$. For
$\ell=1,2,\ldots$, repeat until a stopping criterion is met:
\begin{align*}
(i)\qquad &\mbox{Find $x^{\ell+1}$, $Y^{\ell+1}$ and $v^{\ell+1}$ such that}\\
\qquad &\qquad\|\nabla_{x,Y} F(x^{\ell+1},Y^{\ell+1},u^{\ell},\Xi^{\ell},\underline{U}^{\ell},
\overline{U}^{\ell},v^{\ell+1},\pi^{\ell},\Pi^{\ell})\| \leq \alpha^{\ell}\\
\qquad &\qquad\| h(x^{\ell+1},Y^{\ell+1}) \| \leq \alpha^{\ell}\\
(ii)\qquad &u_i^{\ell+1} = u_i^{\ell}\varphi_{\pi^{\ell}}'(g_i(x^{\ell+1},Y^{\ell+1})),\quad
i=1,\,\ldots,m_g\\
&\Xi_i^{\ell+1} = D_{\cal A} \Phi_{\Pi^{\ell}}({\cal A}_i(x^{\ell+1},Y^{\ell+1});
\Xi_i^{\ell}),\quad
i=1,\,\ldots,m_A\\
&\underline{U}_i^{\ell+1} = D_{\cal A} \Phi_{\Pi^{\ell}}((\underline{\lambda}_i I - Y_i^{\ell+1});
\underline{U}_i^{\ell}),\quad
i=1,\,\ldots,k\\
&\overline{U}_i^{\ell+1} = D_{\cal A} \Phi_{\Pi^{\ell}}(( Y_i^{\ell+1}- \overline{\lambda}_i I );
\overline{U}_i^{\ell}),\quad
i=1,\,\ldots,k\\
(iii)\qquad &\pi^{\ell+1} < \pi^{\ell},\quad
\Pi^{\ell+1} < \Pi^{\ell},\quad \alpha^{\ell+1} < \alpha^{\ell}\,.
\end{align*}
\end{Algorithm}
In Step~(i) we attempt to find an approximate solution of the following
system (in $x, Y$ and $v$):
\begin{equation}\label{eq:KKT-eq}
\begin{aligned}
\qquad \nabla_{x,Y} {F} (x,Y,u,\Xi,\underline{U},\overline{U},v,\pi,\Pi) &= 0 \\
h(x,Y) &= 0\,,
\end{aligned}
\end{equation}
where the penalty parameters $\pi, \Pi$, as well as the multipliers
$u,\Xi,\underline{U},\overline{U}$ are fixed. In order to solve it, we
apply the damped Newton method. Descent directions are calculated
utilizing the MATLAB command {\tt ldl} that is based on the
factorization routine MA57, in combination with an inertia correction
strategy described in \cite{stingl}. In the forthcoming release of
PENLAB, we will also apply iterative methods, as described in
\cite{pen-iter}. The step length is derived using an augmented
Lagrangian merit function defined as
$$
{F} (x,Y,u,\Xi,\underline{U},\overline{U},v,\pi,\Pi) + \frac{1}{2\mu}\|h(x,Y)\|_2^2
$$
along with an Armijo rule.
If there are no equality constraints in the problems, the unconstrained
minimization in Step~(i) is performed by the modified Newton method
with line-search (for details, see \cite{pennon}).
The multipliers calculated in Step~(ii) are restricted in order to
satisfy:
$$
\mu < \frac{u_i^{\ell+1}}{u_i^{\ell}} < \frac1{\mu}
$$
with some ${\mu}\in(0,1)$; by default, ${\mu} = 0.3$.
A similar restriction procedure can be applied to the matrix
multipliers $\underline{U}^{\ell+1}, \overline{U}^{\ell+1}$ and $\Xi$;
see again \cite{pennon} for details.
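A minimal sketch of this safeguard (our own illustration; the actual PENLAB implementation may differ in details) clamps the proposed update to the closed interval $[\mu u_i^{\ell},\, u_i^{\ell}/\mu]$:

```python
import numpy as np

def restrict_multiplier(u_new, u_old, mu=0.3):
    """Keep the multiplier update ratio u_new/u_old inside [mu, 1/mu]
    by clamping; mu = 0.3 is the stated default."""
    return np.clip(u_new, mu * u_old, u_old / mu)

assert restrict_multiplier(10.0, 1.0) == 1.0 / 0.3   # too large: clamped down
assert restrict_multiplier(0.01, 1.0) == 0.3         # too small: clamped up
assert restrict_multiplier(0.5, 1.0) == 0.5          # admissible: unchanged
```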
The penalty parameters $\pi, \Pi$ in Step~(iii) are updated by some
constant factor dependent on the initial penalty parameters $\pi^1,
\Pi^1$. The update process is stopped when $\pi_{eps}$ (by default
$10^{-6}$) is reached.
Algorithm~\ref{algo:1} is stopped when a criterion based on the
KKT error is satisfied and both of the following inequalities hold:
\begin{eqnarray*}
\frac{|f(x^{\ell},Y^{\ell}) - F(x^{\ell},Y^{\ell},u^{\ell},\Xi^{\ell},\underline{U}^{\ell},\overline{U}^{\ell},v^{\ell},\pi^{\ell},\Pi^{\ell})|}
{1+|f(x^{\ell},Y^{\ell})|} &<& \epsilon\\
\frac{|f(x^{\ell},Y^{\ell}) - f(x^{\ell-1},Y^{\ell-1})|}{1+|f(x^{\ell},Y^{\ell})|} &<& \epsilon\,,
\end{eqnarray*}
where $\epsilon$ is by default $10^{-6}$.
\subsection{{Choice of $\varphi$ and $\Phi$}}\label{sec:hess}
To treat the standard NLP constraints, we use the penalty/barrier
function proposed by Ben-Tal and Zibulevsky \cite{ben-tal-zibulevsky}:
\begin{equation}
\varphi_{\bar{\tau}} (\tau) = \left \{
\begin{aligned}
&\tau + \frac{1}{2} \, \tau^2 &\mbox{if~}& \tau \geq \bar{\tau} \\
&- (1+ \bar{\tau})^2 \log \left ( \frac{1+ 2 \bar{\tau} -\tau}
{1 + \bar{\tau}} \right)
+ \bar{\tau} + \frac{1}{2} \bar{\tau}^2 \ &\mbox{if~}& \tau < \bar{\tau} \,;
\end{aligned} \right .
\label{eq:phi}
\end{equation}
by default, $\bar{\tau} = - \frac{1}{2}$.
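The function \eqref{eq:phi} is straightforward to implement; the sketch below (our own sanity check, not part of the paper) verifies that the quadratic and logarithmic branches glue together with matching values and slopes at $\bar\tau$, and that $\varphi(0)=0$:

```python
import numpy as np

def phi_btz(tau, tau_bar=-0.5):
    """Ben-Tal--Zibulevsky penalty/barrier function (Eq. phi);
    tau_bar = -1/2 is the stated default."""
    if tau >= tau_bar:
        return tau + 0.5 * tau**2
    return (-(1 + tau_bar) ** 2
            * np.log((1 + 2 * tau_bar - tau) / (1 + tau_bar))
            + tau_bar + 0.5 * tau_bar**2)

tb, h = -0.5, 1e-6
assert np.isclose(phi_btz(tb - 1e-12), phi_btz(tb + 1e-12))   # continuity
left = (phi_btz(tb) - phi_btz(tb - h)) / h
right = (phi_btz(tb + h) - phi_btz(tb)) / h
assert np.isclose(left, right, atol=1e-4)                     # C^1 gluing
assert phi_btz(0.0) == 0.0                                    # phi(0) = 0
```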
The penalty function $\Phi_\Pi$ of our choice is defined as follows
(here, for simplicity, we omit the variable $Y$):
\begin{equation}\label{eq:pen}
\Phi_\Pi({\cal A}(x)) = -\Pi^2({\cal A}(x) - \Pi I)^{-1} - \Pi I \,.
\end{equation}
The advantage of this choice is that it gives closed formulas for the
first and second derivatives of $\Phi_\Pi$. Defining
\begin{equation}\label{eq:Z}
{\cal Z}(x) = -({\cal A}(x) - \Pi I)^{-1}
\end{equation}
we have (see \cite{pennon}):
\begin{align*}
\frac{\partial}{\partial x_i} \Phi_\Pi({\cal A}(x))& =
\Pi^2{\cal Z}(x) \frac{\partial{\cal A}(x)}{\partial x_i} {\cal Z}(x)
\label{eq:der1} \\
\frac{\partial^2}{\partial x_i\partial x_j} \Phi_\Pi({\cal
A}(x)) & = \Pi^2{\cal Z}(x) \left(\frac{\partial{\cal A}(x)}{\partial
x_i}
{\cal Z}(x) \frac{\partial{\cal A}(x)}{\partial x_j} +
\frac{\partial^2{\cal A}(x)}{\partial x_i\partial x_j}
\right.\nonumber\\
&\left.\phantom{\Pi^2{\cal Z}(x)}\qquad\ \
+ \frac{\partial{\cal A}(x)}{\partial x_j}
{\cal Z}(x) \frac{\partial{\cal A}(x)}{\partial x_i}\right){\cal
Z}(x)\,.&\nonumber
\end{align*}
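The first of these formulas is easy to sanity-check numerically. The sketch
below (Python with {\tt numpy}; hypothetical $2\times 2$ data, not part of
PENLAB) compares $\Pi^2{\cal Z}\,A_1\,{\cal Z}$ with a central finite
difference of $x\mapsto\Phi_\Pi(A_0+xA_1)$:

```python
import numpy as np

def Phi(A, p):
    """Matrix penalty function Phi_Pi(A) = -Pi^2 (A - Pi I)^{-1} - Pi I."""
    n = A.shape[0]
    return -p**2 * np.linalg.inv(A - p * np.eye(n)) - p * np.eye(n)

def dPhi(A, dA, p):
    """First derivative Pi^2 Z dA Z with Z = -(A - Pi I)^{-1},
    for a linear matrix function with constant derivative dA."""
    Z = -np.linalg.inv(A - p * np.eye(A.shape[0]))
    return p**2 * Z @ dA @ Z
```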
\subsection{Strictly feasible constraints} In certain applications,
some of the bound constraints must remain strictly feasible for all
iterations because, for instance, the objective function may be
undefined at infeasible points (see examples in
Section~\ref{ex:truss}). To be able to solve such problems, we treat
these inequalities by a classic barrier function. In the case of matrix
variable inequalities, we
split $Y$ into non-strictly feasible matrix variables $Y_1$ and strictly
feasible matrix variables $Y_2$ and define the augmented
Lagrangian
\begin{equation}\label{eq:lagr2}
\widetilde{F}(x,Y_1,Y_2,u,\Xi,\underline{U},\overline{U},v,\pi,\Pi,\kappa) =
F(x,Y_1,u,\Xi,\underline{U},\overline{U},v,\pi,\Pi) + \kappa \Phi_{\rm bar}(Y_2),
\end{equation}
where $\Phi_{\rm bar}$ can be defined, for example for the constraint
$Y_2\succeq 0$, by
$$\Phi_{\rm bar}(Y_2) = -\log\det(Y_2).$$
Strictly feasible variables $x$ are treated in a similar manner.
Note that, while the penalty parameter $\pi$ may be constant from a
certain index $\bar{\ell}$ (see again \cite{stingl} for details), the
barrier parameter $\kappa$ is required to tend to zero with increasing
$\ell$.
\section{The code}
PENLAB is a free open-source MATLAB implementation of the algorithm
described above. The main attention was given to clarity of the code
rather than tweaks to improve its performance. This should allow users
to better understand the code and encourage them to edit and develop
the algorithm further. The code is written entirely in MATLAB, with the
exception of two mex-functions that handle the computationally most
intensive tasks: evaluating the second derivative of the augmented
Lagrangian and summing multiple sparse matrices (slower non-mex
alternatives are provided as well). The
solver is implemented as a MATLAB handle class and thus should be
supported by all MATLAB versions starting from R2008a.
PENLAB is distributed under the GNU GPL license and can be downloaded from
{\tt http://web.mat.bham.ac.uk/kocvara/penlab}. The distribution
package includes the full source code, precompiled mex-functions,
the PENLAB User's Guide, and internal (programmer's) documentation
that can be generated from the source code. Many examples provided in
the package show various ways of calling PENLAB and handling NLP-SDP
problems.
\subsection{Usage}
The source code is divided between the class \verb|penlab|, which
implements Algorithm~\ref{algo:1} and handles generic NLP-SDP problems
similar to formulation (\ref{eq:nlpsdp}), and interface routines
providing various specialized inputs to the solver.
Some of these are described in Section~\ref{sec:modules}.
The user needs to prepare a MATLAB structure (here called \verb|penm|)
which describes the problem parameters, such as number of variables,
number of constraints, lower and upper bounds, etc. Some of the fields
are shown in Table~\ref{tab:7}, for a complete list see the PENLAB
User's Guide. The structure is passed to \verb|penlab| which returns
the initialized problem instance:
\begin{verbatim}
>> problem = penlab(penm);
\end{verbatim}
The solver can then be invoked and the results retrieved, for example, by
calling
\begin{verbatim}
>> problem.solve()
>> problem.x
\end{verbatim}
The point \verb|x| or the option settings can be changed and the
solver invoked again. The whole object can be cleared from
memory using
\begin{verbatim}
>> clear problem;
\end{verbatim}
\begin{table}[htbp]
\caption{Selection of fields of the MATLAB structure {\tt penm} used to
initialize the PENLAB object. The full list is available in the PENLAB
User's Guide.}
\begin{tabular*}{\hsize}{@{\extracolsep{\fill}}ll}
\hline
field name & meaning \\
\hline
Nx & dimension of vector $x$ \\
NY & number of matrix variables $Y$\\
Y & cell array of length NY with a nonzero pattern of each of the matrix variables\\
lbY & NY lower bounds on matrix variables (in spectral sense) \\
ubY & NY upper bounds on matrix variables (in spectral sense) \\
NANLN & number of nonlinear matrix constraints \\
NALIN & number of linear matrix constraints \\
lbA & lower bounds on all matrix constraints\\
ubA & upper bounds on all matrix constraints\\
\hline
\end{tabular*} \label{tab:7}
\end{table}
\subsection{Callback functions} The principal philosophy of the code is
similar to many other optimization codes---we use callback functions
(provided by the user) to compute function values and derivatives of
all involved functions.
For a generic problem, the user must define nine MATLAB callback
functions: {\tt objfun}, {\tt objgrad}, {\tt objhess}, {\tt confun},
{\tt congrad}, {\tt conhess}, {\tt mconfun}, {\tt mcongrad}, {\tt
mconhess} for the function values, gradients, and Hessians of the
objective function, the (standard) constraints, and the matrix
constraints, respectively. If one
constraint type is not present, the corresponding callbacks need not be
defined. Let us just show the parameters of the most complex callbacks
for the matrix constraints:
\begin{verbatim}
function [Ak, userdata] = mconfun(x,Y,k,userdata)
function [dAki,userdata] = mcongrad(x,Y,k,i,userdata)
function [ddAkij, userdata] = mconhess(x,Y,k,i,j,userdata)
\end{verbatim}
Here $x,Y$ are the current values of the (vector and matrix) variables.
Parameter $k$ stands for the constraint number. Because every element
of the gradient and the Hessian of a matrix function is a matrix, we
compute them (the gradient and the Hessian) element-wise (parameters
$i,j$). The outputs {\tt
Ak,dAki,ddAkij} are symmetric matrices saved in sparse MATLAB format.
Finally, {\tt userdata} is a MATLAB structure passed through
all callbacks for the user's convenience and may contain any
additional data needed for the evaluations.
It is unchanged by the algorithm itself but can be modified in the
callbacks by the user.
For instance, some time-consuming computation that depends
on $x,Y,k$ but is independent of $i$ can be performed only for $i=1$,
the result stored in {\tt userdata} and recalled for any $i>1$ (see,
e.g., Section~\ref{ex:truss}, example Truss Design with Buckling Constraint).
\subsection{Mex files}
Despite our intention to use only pure MATLAB code, two routines were
identified as causing a significant slow-down, and their m-files were
therefore substituted with equivalent mex-files. The first one computes a
linear combination of a set of sparse matrices, e.g., when evaluating
${\cal A}_i(x)$ for polynomial matrix inequalities, and is based on
ideas from \cite{davis}. The second one evaluates matrix inequality
contributions to the Hessian of the augmented Lagrangian
(\ref{eq:lagr}) when using penalty function (\ref{eq:pen}).
The latter case reduces to computing $z_{\ell} = \langle TA_kU,\,
A_{\ell}\rangle$ for $\ell=k,\ldots,n$ where $T, U \in \SS^m$ are dense
and $A_{\ell} \in \SS^m$ are sparse with potentially highly varying
densities. Such expressions soon become challenging for nontrivial $m$
and can easily dominate the whole of Algorithm~\ref{algo:1}. Note that the
same problem arises even in primal-dual interior point methods for SDPs
and has been studied in \cite{fujisawa-kojima-nakata}. We developed a
relatively simple strategy which can be viewed as an evolution of the
three computational formulae presented in \cite{fujisawa-kojima-nakata}
and offers a minimal number of multiplications while keeping very
modest memory requirements. We refer to it as a \emph{look-ahead
strategy with caching}. It can be described as follows:
\begin{Algorithm}\label{algo:trace}
Precompute a set ${\cal J}$ of all nonempty columns across all
$A_{\ell}, {\ell}=k,\ldots,n$ and a set ${\cal I}$ of nonempty rows of
$A_k$ \emph{(look-ahead)}. Reset flag vector $c\leftarrow 0$, set $z=0$
and $v=w=0$. For each $j \in {\cal J}$ perform:
\begin{enumerate}
\item compute selected elements of the $j$-th column of $A_kU$, i.e.,\\
$v_i = \sum_{\alpha=1}^m(A_k)_{i \alpha} U_{\alpha j}$ for $i \in {\cal I}$,
\item for each $A_{\ell}$ with nonempty $j$-th column go through
its nonzero elements $(A_{\ell})_{ij}$ and
\begin{enumerate}
\item if $c_i<j$ compute $w_i = \sum_{\alpha \in {\cal I}}
T_{i\alpha}v_{\alpha}$ and set $c_i \leftarrow j$
\emph{(caching)},
\item update trace, i.e., $z_{\ell} = z_{\ell} + w_i(A_{\ell})_{ij}$.
\end{enumerate}
\end{enumerate}
\end{Algorithm}
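As a purely illustrative dense reference for the quantity that
Algorithm~\ref{algo:trace} computes, ignoring all the sparsity that the
mex-file exploits, one can write (Python with {\tt numpy}; hypothetical
data, not part of PENLAB):

```python
import numpy as np

def trace_terms(T, U, As, k):
    """z_l = <T A_k U, A_l> for l = k,...,n, computed densely;
    <X, Y> denotes the Frobenius inner product trace(X^T Y)."""
    M = T @ As[k] @ U
    return [float(np.sum(M * A)) for A in As[k:]]
```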
\section{Gradients and Hessians of matrix valued functions}
There are several concepts of derivatives of matrix functions; they,
however, differ only in the ordering of the elements of the resulting
``differential''. In PENLAB, we use the following definitions of the
gradient and Hessian of matrix valued functions.
\begin{definition}\label{def:1}
Let $F$ be a differentiable $m\times n$ real matrix function of a
$p\times q$ matrix of real variables $X$. The $(i,j)$-th element of the
\emph{gradient} of $F$ at $X$ is the $m\times n$ matrix
\begin{equation}\label{eq:a041}
\left[\nabla F(X)\right]_{ij} :=
\frac{\partial F(X)}{\partial x_{ij}},
\qquad i=1,\ldots,p,\ j=1,\ldots,q
\,.
\end{equation}
\end{definition}
\begin{definition}\label{def:2}
Let $F$ be a twice differentiable $m\times n$ real matrix function of
a $p\times q$ matrix of real variables $X$. The $(ij,k\ell)$-th
element of the \emph{Hessian} of $F$ at $X$ is the $m\times n$ matrix
\begin{equation}\label{eq:a040}
\left[\nabla^2 F(X)\right]_{ij,k\ell} :=
\frac{\partial^2 F(X)}{\partial x_{ij}\partial x_{k\ell}} ,
\qquad i,k=1,\ldots,p,\ j,\ell=1,\ldots,q
\,.
\end{equation}
\end{definition}
In other words, for every pair of variables $x_{ij},\ x_{k\ell}$,
elements of $X$, the second partial derivative of $F(X)$ with respect
to these variables is the $m\times n$ matrix $\frac{\partial^2
F(X)}{\partial x_{ij}\partial x_{k\ell}}$.
How to compute these derivatives, i.e., how to define the callback
functions? In Appendix A, we summarize basic formulas for the
computation of derivatives of scalar and matrix valued functions of
matrices.
For low-dimensional problems, the user can utilize MATLAB's Symbolic
Toolbox. For instance, for $F(X)=XX$, the commands
\begin{verbatim}
>> X=sym('X',[2,2]);
>> J=jacobian(X*X,X(:));
>> H=jacobian(J,X(:));
\end{verbatim}
generate arrays $J$ and $H$ such that the $i$-th column of $J$ is the
vectorized $i$-th element of the gradient of $F(X)$; similarly, the
$k$-th column of $H$, $k=(i-1)n^2+j$ for $i,j=1,\ldots,n^2$ is the
vectorized $(i,j)$-th element of the Hessian of $F(X)$. Clearly, the
dimension of the matrix variable is fixed and for a different dimension
we have to generate new formulas. Unfortunately, this approach is
useless for higher dimensional matrices (the user is invited to use the
above commands for $F(X)=X^{-1}$ with $X\in\SS^5$ to see the
difficulties). However, one can always use symbolic computation to
check validity of general dimension independent formulas on small
dimensional problems.
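For example, for $F(X)=XX$ the dimension-independent formula is
$\partial F/\partial x_{ij} = E_{ij}X + XE_{ij}$, where $E_{ij}$ is the
single-entry matrix; the following small finite-difference sketch (Python with
{\tt numpy}; our illustration, not part of PENLAB) confirms it on random data:

```python
import numpy as np

def grad_XX(X, i, j):
    """(i,j)-th element of the gradient of F(X) = X X in the sense of
    Definition 1: the matrix dF/dx_ij = E_ij X + X E_ij."""
    E = np.zeros_like(X)
    E[i, j] = 1.0
    return E @ X + X @ E
```

Since $F$ is quadratic in the entries of $X$, central differences reproduce
this matrix up to roundoff.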
\section{Pre-programmed interfaces}\label{sec:modules}
PENLAB distribution contains several pre-programmed interfaces for
standard optimization problems with standard inputs. For these
problems, the user does not have to create the \verb|penm| object, nor
the callback functions.
\subsection{Nonlinear optimization with AMPL input}
PENLAB can read optimization problems that are defined in and processed
by AMPL \cite{ampl}. AMPL contains routines for automatic
differentiation, hence the gradients and Hessians in the callbacks
reduce to calls to appropriate AMPL routines.
Assume that a nonlinear optimization problem has been processed by AMPL, so
that we have the corresponding \verb|.nl| file, for instance
\verb|chain100.nl|, stored in the directory \verb|datafiles|. All the user has
to do to solve the problem is to call the following three commands:
\begin{verbatim}
>> penm = nlp_define('datafiles/chain100.nl');
>> problem = penlab(penm);
>> problem.solve();
\end{verbatim}
\subsection{Linear semidefinite programming}
Assume that the data of a linear SDP problem is stored in a MATLAB
structure \verb|sdpdata|. Alternatively, such a structure can be
created by the user from an SDPA input file \cite{sdpa}. For instance, to
read the problem \verb|control1.dat-s| stored in the directory
\verb|datafiles|, call
\begin{verbatim}
>> sdpdata = readsdpa('datafiles/control1.dat-s');
\end{verbatim}
To solve the problem by PENLAB, the user just has to call the following
sequence of commands:
\begin{verbatim}
>> penm = sdp_define(sdpdata);
>> problem = penlab(penm);
>> problem.solve();
\end{verbatim}
\subsection{Bilinear matrix inequalities}
We want to solve an optimization problem with quadratic objective and
constraints in the form of bilinear matrix inequalities:
\begin{align}\label{eq:bmiproblem}
&\min_{x\in\RR^n} \frac{1}{2} x^T H x + c^T x \\
& \begin{aligned}
\mbox{subject to}\quad
&b_{\rm low}\leq Bx\leq {b_{\rm up}}\nonumber\\
&Q^i_0 + \sum_{k=1}^{n} x_k Q^i_k + \sum_{k=1}^{n}\sum_{\ell=1}^{n} x_k x_\ell Q^i_{k\ell}
\succcurlyeq 0, \quad i=1,\ldots,m\,. \nonumber
\end{aligned}
\end{align}
The problem data should be stored in a simple format explained in
PENLAB User's Guide.
All the user has to do to solve the problem
is to call the following sequence of commands:
\begin{verbatim}
>> load datafiles/bmi_example;
>> penm = bmi_define(bmidata);
>> problem = penlab(penm);
>> problem.solve();
\end{verbatim}
\subsection{Polynomial matrix inequalities}
We want to solve an optimization problem with constraints in the form
of polynomial matrix inequalities:
\begin{align}\label{eq:pmiproblem}
&\min_{x\in\RR^n} \frac{1}{2} x^T H x + c^T x \\
& \begin{aligned}
\mbox{subject to}\quad
&b_{\rm low}\leq Bx\leq {b_{\rm up}}\nonumber\\
&{\cal A}_i(x)\succcurlyeq 0,\quad i=1,\ldots,m \nonumber
\end{aligned}
\end{align}
with
$$
{\cal A}_i(x) = \sum_j x^{(\kappa^i(j))} Q^i_j
$$
where $\kappa^i(j)$ is a multi-index of the $i$-th constraint with
possibly repeated entries and $x^{(\kappa^i(j))}$ is the product of the
elements of $x$ with indices in $\kappa^i(j)$.
For example, for $${\cal A}(x) = Q_1 + x_1 x_3 Q_2 + x_2 x_4^3 Q_3$$
the multi-indices are $\kappa(1) = \{0\}$ ($Q_1$ is an absolute term),
$\kappa(2) = \{1,3\}$ and $\kappa(3) = \{2,4,4,4\}$.
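A minimal sketch (Python; hypothetical data layout with a hypothetical helper
{\tt eval\_pmi}, not the actual PENLAB format) of evaluating ${\cal A}(x)$ from
such $(\kappa, Q)$ pairs:

```python
import numpy as np

def eval_pmi(x, kappas, Qs):
    """Evaluate A(x) = sum_j x^(kappa(j)) Q_j, where each kappa(j) is a
    1-based multi-index with possible repeats and the index 0 marks the
    absolute term (coefficient 1)."""
    A = np.zeros_like(Qs[0], dtype=float)
    for kappa, Q in zip(kappas, Qs):
        coef = 1.0
        for idx in kappa:
            if idx > 0:          # idx == 0: absolute term, factor 1
                coef *= x[idx - 1]
        A = A + coef * Q
    return A
```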
Assuming now that the problem is stored in a structure \verb|pmidata|
(as explained in PENLAB User's Guide), the user just has to call the
following sequence of commands:
\begin{verbatim}
>> load datafiles/pmi_example;
>> penm = pmi_define(pmidata);
>> problem = penlab(penm);
>> problem.solve();
\end{verbatim}
\section{Examples}
All MATLAB programs and data related to the examples in this section
can be found in directories \verb|examples| and \verb|applications|
of the PENLAB distribution.
\subsection{Correlation matrix with the constrained condition number}\label{ex:cond}
We consider the problem of finding the nearest correlation matrix
(\cite{higham}):
\begin{align}
&\min_X \sum_{i,j=1}^n (X_{ij}-H_{ij})^2\label{corr1}\\
&\mbox{subject to}\nonumber\\
&\qquad X_{ii} = 1,\quad i=1,\ldots,n\nonumber\\
&\qquad X\succeq 0\,.\nonumber
\end{align}
In addition to this standard setting of the problem, let us bound the
condition number of the nearest correlation matrix by adding the
constraint
$$
\mbox{cond}(X) \leq \kappa \,.
$$
We can formulate this constraint as
\begin{align}
I\preceq\widetilde{X}\preceq \kappa I
\end{align}
using the variable transformation
$$
\widetilde{X} = \zeta X\,.
$$
After the change of variables, and with the new constraint, the problem
of finding the nearest correlation matrix with a given condition number
reads as follows:
\begin{align}
&\min_{\zeta,\widetilde{X}} \sum_{i,j=1}^n
(\frac{1}{\zeta}\widetilde{X}_{ij}-H_{ij})^2\label{corr_cond}\\
&\mbox{subject to}\nonumber\\
&\qquad \widetilde{X}_{ii} -\zeta = 0,\quad i=1,\ldots,n\nonumber\\
&\qquad I\preceq\widetilde{X}\preceq \kappa I\nonumber
\end{align}
The new problem now has the NLP-SDP structure of (\ref{eq:nlpsdp}).
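The reformulation rests on two simple facts: the condition number is invariant
under positive scaling, and $I\preceq\widetilde{X}\preceq\kappa I$ confines
the spectrum of $\widetilde{X}$ to $[1,\kappa]$, hence
$\mbox{cond}(\widetilde{X})\leq\kappa$. A quick numeric sketch (Python with
{\tt numpy}; random hypothetical data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
kappa = 10.0
# Random symmetric matrix with spectrum drawn from [1, kappa],
# i.e. satisfying I <= Xt <= kappa * I in the semidefinite sense.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
eigs = rng.uniform(1.0, kappa, size=5)
Xt = Q @ np.diag(eigs) @ Q.T
```

For any such $\widetilde{X}$, both $\mbox{cond}(\widetilde{X})\leq\kappa$ and
$\mbox{cond}(\widetilde{X}/\zeta)=\mbox{cond}(\widetilde{X})$ hold.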
We will consider an example based on a practical application from
finances; see \cite{werner-schoettle}. Assume that we are given a
$5\times 5$ correlation matrix. We now add a new asset class, that
means, we add one row and column to this matrix. The new data is based
on a different frequency than the original part of the matrix, which
means that the new matrix is no longer positive definite:
$$
H_{\rm ext} = \begin{pmatrix}1 &-0.44& -0.20 &0.81& -0.46& -0.05\\
-0.44& 1 &0.87& -0.38& 0.81 & -0.58\\
-0.20 &0.87 &1& -0.17& 0.65& -0.56\\
0.81 &-0.38& -0.17& 1& -0.37& -0.15\\
-0.46& 0.81& 0.65& -0.37& 1& -0.08\\
-0.05&-0.58&-0.56&-0.15&-0.08&1
\end{pmatrix}\,.
$$
When solving problem (\ref{corr_cond}) by PENLAB with $\kappa=10$, we
get the solution after 11 outer and 37 inner iterations. The optimal
value of $\zeta$ is $3.4886$ and, after the back substitution $X =
\frac{1}{\zeta}\widetilde{X}$, we get the nearest correlation matrix
\begin{verbatim}
X =
1.0000 -0.3775 -0.2230 0.7098 -0.4272 -0.0704
-0.3775 1.0000 0.6930 -0.3155 0.5998 -0.4218
-0.2230 0.6930 1.0000 -0.1546 0.5523 -0.4914
0.7098 -0.3155 -0.1546 1.0000 -0.3857 -0.1294
-0.4272 0.5998 0.5523 -0.3857 1.0000 -0.0576
-0.0704 -0.4218 -0.4914 -0.1294 -0.0576 1.0000
\end{verbatim}
with eigenvalues
\begin{verbatim}
eigenvals =
0.2866 0.2866 0.2867 0.6717 1.6019 2.8664
\end{verbatim}
and the condition number equal to 10, indeed.
\paragraph{Gradients and Hessians}
What are the first and second partial derivatives of the functions involved
in problem (\ref{corr_cond})? The constraints are linear, so the answer
is trivial for them, and we can concentrate on the objective function
\begin{equation}\label{eq:f}
f(z,\widetilde{X}):=\sum_{i,j=1}^n (z\widetilde{X}_{ij}-H_{ij})^2
= \langle z\widetilde{X}-H, z\widetilde{X}-H \rangle\,,
\end{equation}
where, for convenience, we introduced a variable $z=\frac{1}{\zeta}$.
\begin{theorem} Let
$x_{ij}$ and $h_{ij}$, $i,j=1,\ldots,n$ be elements of $\widetilde{X}$
and $H$, respectively. For the function $f$ defined in (\ref{eq:f}) we
have the following partial derivatives:
\begin{itemize}
\item[(i)] $\nabla_{\!z}\, f(z,\widetilde{X}) = 2\langle
\widetilde{X}, z\widetilde{X}-H\rangle$
\item[(ii)]
$\left[\nabla_{\!\widetilde{X}}f(z,\widetilde{X})\right]_{ij} =
2z(zx_{ij}-h_{ij})$,\quad $i,j=1,\ldots ,n$
\item[(iii)] $\nabla^2_{\!z,z}\, f(z,\widetilde{X}) = 2\langle
\widetilde{X},\widetilde{X}\rangle$
\item[(iv)] $\left[\nabla^2_{\!z,\widetilde{X}}\,
f(z,\widetilde{X})\right]_{ij} =
\left[\nabla^2_{\!\widetilde{X},z}\,
f(z,\widetilde{X})\right]_{ij} = 4zx_{ij} - 2h_{ij}$,\quad
$i,j=1,\ldots ,n$
\item[(v)] $\left[\nabla^2_{\!\widetilde{X},\widetilde{X}}\,
f(z,\widetilde{X})\right]_{ij,k\ell} = 2z^2$\quad for $i=k,\
j=\ell$ and zero otherwise ($i,j,k,\ell=1,\ldots ,n$)\,.
\end{itemize}
\end{theorem}
The proof follows directly from formulas in Appendix~A.
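Items (i) and (ii) are also easy to verify by finite differences; since $f$ is
quadratic in each variable, the central differences in the check below
(Python with {\tt numpy}; random hypothetical data, purely illustrative) are
exact up to roundoff:

```python
import numpy as np

def f(z, Xt, H):
    """f(z, Xt) = <z*Xt - H, z*Xt - H>, the Frobenius inner product."""
    R = z * Xt - H
    return float(np.sum(R * R))

def grad_z(z, Xt, H):
    """Item (i): 2 <Xt, z*Xt - H>."""
    return 2.0 * float(np.sum(Xt * (z * Xt - H)))

def grad_Xt(z, Xt, H):
    """Item (ii): elementwise 2 z (z x_ij - h_ij)."""
    return 2.0 * z * (z * Xt - H)
```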
\paragraph{PENLAB distribution}
This problem is stored in directory \verb|applications/CorrMat| of the PENLAB
distribution. To solve the above example and see the resulting
eigenvalues of $X$, run the following in that directory:
\begin{verbatim}
>> penm = corr_define;
>> problem = penlab(penm);
>> problem.solve();
>> eig(problem.Y{1}*problem.x)
\end{verbatim}
\subsection{Truss topology optimization with stability constraints}\label{ex:truss}
In truss optimization we want to design a pin-jointed framework
consisting of $m$ slender bars of constant mechanical properties
characterized by their Young's modulus $E$. We will consider trusses in
a $d$-dimensional space, where $d=2$ or $d=3$. The bars are jointed at
$\tilde{n}$ nodes. The system is under load, i.e., forces
$f_j\in\RR^{d}$ are acting at some nodes $j$. They are aggregated in a
vector $f$, where we put $f_j=0$ for nodes that are not under load.
This external load is transmitted along the bars causing displacements
of the nodes that make up the displacement vector $u$. Let $p$ be the
number of fixed nodal coordinates, i.e., the number of components with
prescribed discrete homogeneous Dirichlet boundary condition. We omit
these fixed components from the problem formulation reducing thus the
dimension of $u$ to
$$
n=d\,\cdot\,\tilde{n} - p .
$$
Analogously, the external load $f$ is considered as a vector in
$\RR^n$.
The design variables in the system are the bar volumes
$x_1,\ldots,x_m$. Typically, we want to minimize the weight of the
truss. We assume to have a unique material (and thus density) for all
bars, so this is equivalent to minimizing the volume of the truss,
i.e., $\sum_{i=1}^m x_i$. The optimal truss should satisfy mechanical
equilibrium conditions:
\begin{equation}
K(x)u=f \,;
\label{eq:3b1}
\end{equation}
here
\begin{equation}
K(x):= \sum\limits_{i=1}^m x_iK_i,\quad K_i=\frac{E_i}{\ell_i^2}\gamma_i\gamma_i^{\top}
\label{KO5eq:1}
\end{equation}
is the so-called stiffness matrix, $E_i$ is the Young's modulus of the $i$-th
bar, $\ell_i$ its length, and $\gamma_i$ the $n$-vector of direction
cosines.
We further introduce the compliance $f^{\top}u$ of the truss, which
indirectly measures the stiffness of the structure under the force $f$,
and impose the constraint
$$
f^{\top}u \leq \gamma\,.
$$
This constraint, together with the equilibrium conditions, can be
formulated as a single linear matrix inequality (\cite{buck})
$$
\begin{pmatrix} K(x) & f\\ f^T &\gamma\end{pmatrix}
\succeq 0\,.
$$
The minimum volume single-load truss topology optimization problem can
then be formulated as a linear semidefinite program:
\begin{align}
&\min_{x\in\RR^m} \sum_{i=1}^m x_i\label{minvolc}\\
&\mbox{subject to}\nonumber\\
&\qquad \begin{pmatrix} K(x) & f\\ f^T &\gamma\end{pmatrix}
\succeq 0 \nonumber\\
&\qquad x_i\geq 0,\quad i=1,\ldots,m\,.\nonumber
\end{align}
We further consider the constraint on the global stability of the
truss. The meaning of the constraint is to avoid global buckling of the
optimal structure. We consider the simplest formulation of the buckling
constraint based on the so-called linear buckling assumption
\cite{buck}. As in the case of free vibrations, we need to constrain
eigenvalues of the generalized eigenvalue problem
\begin{equation}\label{eq:buckEVP}
K(x) w = \lambda {G}(x) w \,,
\end{equation}
in particular, we require that all eigenvalues of (\ref{eq:buckEVP})
lie outside the interval $[0,1]$. The so-called geometry stiffness matrix
${G}(x)$ depends, this time, nonlinearly on the design variable $x$:
\begin{equation}\label{eq:G}
{G}(x) = \sum_{i=1}^m {G}_i(x), \qquad
{G}_i(x) = \frac{E x_i}{\ell_i^d} (\gamma_i^{\top} K(x)^{-1}f)
(\delta_i\delta_i^{\top}+\eta_i\eta_i^{\top}).
\end{equation}
Vectors $\delta,\eta$ are chosen so that $\gamma,\delta,\eta$ are
mutually orthogonal. (The presented formula is for $d=3$. In the
two-dimensional setting the vector $\eta$ is not present.) To simplify
the notation, we denote $$\Delta_i = \delta_i\delta^T_i +
\eta_i\eta^T_i\,.$$
It was shown in \cite{buck} that the eigenvalue constraint can be
equivalently written as a nonlinear matrix inequality
\begin{equation} \label{eq:truss_buck}
K(x)+{G}(x) \succcurlyeq 0
\end{equation}
that is now to be added to (\ref{minvolc}) to get the following
nonlinear semidefinite programming problem. Note that the variables $x_i$
are required to remain strictly feasible.
\begin{align}
&\min_{x\in\RR^m} \sum_{i=1}^m x_i\label{eq:truss}\\
&\mbox{subject to}\nonumber\\
&\qquad \begin{pmatrix} K(x) & f\\ f^T &\gamma\end{pmatrix}
\succeq 0
\nonumber\\
&\qquad K(x)+{G}(x) \succcurlyeq 0 \nonumber\\
&\qquad x_i > 0,\quad i=1,\ldots,m\,\nonumber
\end{align}
\paragraph{Gradients and Hessians}
Let $M: \RR^m \to \RR^{n\times n}$ be a matrix valued function
assigning each vector $\xi$ a matrix $M(\xi)$. We denote by $\nabla\!_k
M$ the partial derivative of $M(\xi)$ with respect to the $k$-th
component of vector $\xi$.
\begin{lemma}[based on \cite{magnus1988matrix}]
Let $M: \RR^m \to \RR^{n\times n}$ be a symmetric matrix valued
function assigning each $\xi\in\RR^m$ a nonsingular $(n\times n)$
matrix $M(\xi)$. Then (for convenience we omit the variable $\xi$)
$$
\nabla\!_k M^{-1} = -M^{-1} (\nabla\!_k M)
M^{-1}\,.
$$
If $M$ is a linear function of $\xi$, i.e., $M(\xi) = \sum_{i=1}^m
\xi_i M_i$ with symmetric positive semidefinite $M_i, i=1,\ldots,m,$
then the above formula simplifies to
$$
\nabla\!_k M^{-1} = -M^{-1} M_k M^{-1}\,.
$$
\end{lemma}
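A numeric sanity check of the linear case (Python with {\tt numpy}; random
symmetric positive definite data, purely illustrative):

```python
import numpy as np

def dinv_linear(M, Mk):
    """Derivative of M(xi)^{-1} w.r.t. xi_k for linear
    M(xi) = sum_i xi_i M_i, namely -M^{-1} M_k M^{-1}."""
    Minv = np.linalg.inv(M)
    return -Minv @ Mk @ Minv
```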
\begin{theorem}[\cite{buck}]
Let $G(x)$ be given as in (\ref{eq:G}). Then
$$
[\nabla {G}\,]_k = {E\over\ell_k^3}\gamma_k^T K^{-1}f\Delta_k
- \sum_{j=1}^m {Ex_j\over\ell_j^3}\gamma_j^T K^{-1}K_k K^{-1} f
\Delta_j
$$
and
\begin{multline*}
\displaystyle
[\nabla^2 {G}\,]_{\!k\ell} =
\displaystyle
-{E\over\ell_k^3}\gamma_k^T K^{-1}K_\ell K^{-1}f\Delta_k
-{E\over\ell_\ell^3}\gamma_\ell^T K^{-1}K_k K^{-1}f\Delta_\ell\\
\displaystyle
-\sum_{j=1}^m {Ex_j\over\ell_j^3}\gamma_j^T K^{-1}K_\ell K^{-1}K_k K^{-1} f
\Delta_j\\
-\sum_{j=1}^m {Ex_j\over\ell_j^3}\gamma_j^T K^{-1}K_k K^{-1}K_\ell K^{-1} f \Delta_j.
\end{multline*}
\end{theorem}
\paragraph{Example} Consider the standard example of a laced column under
axial loading (example \verb|tim| in the PENLAB collection). Due to
symmetry, we only consider one half of the column, as shown in
Figure~\ref{fig:tr1} (top-left); it has 19 nodes and 42 potential bars,
so $n=34$ and $m=42$. The column dimensions are $8.5\times 1$, the two
nodes on the left-hand side are fixed and the ``axial'' load applied at
the column tip is $(0,-10)$. The upper bound on the compliance is
chosen as $\gamma=1$.
Assume first that $x_i=0.425, i=1,\ldots,m$, i.e., the volumes of all
bars are equal and the total volume is 17.85. The values of $x_i$ were
chosen such that the truss satisfies the compliance constraint:
$f^{\top}u =0.9923\leq \gamma$. For this truss, the smallest
nonnegative eigenvalue of (\ref{eq:buckEVP}) is equal to 0.7079 and the
buckling constraint (\ref{eq:truss_buck}) is not satisfied.
Figure~\ref{fig:tr1} (top-right) shows the corresponding buckling mode
(the eigenvector associated with this eigenvalue).
\begin{figure}[h]
\begin{center}
\resizebox{0.39\hsize}{!}
{\includegraphics{pic/tim_fig1.eps}}\qquad
\resizebox{0.39\hsize}{!}
{\includegraphics{pic/tim_fig2.eps}}\\[1.5em]
\resizebox{0.39\hsize}{!}
{\includegraphics{pic/tim_fig3.eps}}\qquad
\resizebox{0.39\hsize}{!}
{\includegraphics{pic/tim_fig4.eps}}
\end{center}
\caption{Truss optimization with stability problem:
initial truss (top-left); its buckling mode (top-right);
optimal truss without stability constraint (bottom-left);
and optimal stable truss (bottom-right)}\label{fig:tr1}
\end{figure}
Let us now solve the truss optimization problem \emph{without} the
stability constraint, i.e., problem (\ref{minvolc}). We obtain the design shown in
Figure~\ref{fig:tr1}(bottom-left). This truss is much lighter than the
original one ($\sum\limits_{i=1}^m x_i = 9.388$), it is, however,
extremely unstable under the given load, as (\ref{eq:buckEVP}) has a
zero eigenvalue.
When solving the truss optimization problem \emph{with} the stability
constraint (\ref{eq:truss}) by PENLAB, we obtain the design shown in
Figure~\ref{fig:tr1}(bottom-right). This truss is still significantly
lighter than the original one ($\sum\limits_{i=1}^m x_i = 12.087$), but
it is now stable under the given load. To solve the nonlinear SDP
problem, PENLAB needed 18 global and 245 Newton iterations and 212
seconds of CPU time, 185 of which were spent in the Hessian evaluation
routines.
\paragraph{PENLAB distribution}
Directories \verb|applications/TTO| and \verb|applications/TTObuckling|
of the PENLAB distribution
contain the problem formulation and many examples of trusses. To solve
the above example with the buckling constraint, run
\begin{verbatim}
>> solve_ttob('GEO/tim.geo')
\end{verbatim}
in directory \verb|TTObuckling|.
\subsection{Static output feedback}
Given a linear system with $A\in\RR^{n\times n}, B\in\RR^{n\times m},
C\in\RR^{p\times n}$
\begin{align*}
\dot{x} &= Ax + Bu\\
y& = Cx
\end{align*}
we want to stabilize it by static output feedback
$
u = Ky \,.
$ That is, we want to find a matrix $K\in\RR^{m\times p}$ such that the
eigenvalues of the closed-loop system $A+BKC$ belong to the left
half-plane.
The standard way to treat this problem is based on Lyapunov
stability theory, which says that $A+BKC$ has all its eigenvalues in the
open left half-plane if and only if there exists a symmetric positive
definite matrix $P$ such that
\begin{equation}\label{eq:sofbmi}
(A+BKC)^T P+P(A+BKC) \prec 0\,.
\end{equation}
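As a small illustration (Python with {\tt numpy}; hypothetical data, not
part of PENLAB): for a Hurwitz matrix $A$, a suitable $P\succ 0$ making
$A^TP+PA$ negative definite can be obtained by solving the Lyapunov equation
$A^TP+PA=-Q$ with $Q\succ 0$ via Kronecker vectorization:

```python
import numpy as np

def lyap(A, Q):
    """Solve A^T P + P A = -Q by column-stacking vectorization:
    (I kron A^T + A^T kron I) vec(P) = -vec(Q)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.flatten(order='F')).reshape((n, n), order='F')
    return (P + P.T) / 2  # symmetrize against roundoff
```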
Hence, by introducing the new variable, the Lyapunov matrix $P$, we can
formulate the SOF problem as a feasibility problem for the bilinear
matrix inequality (\ref{eq:sofbmi}) in variables $K$ and $P$. As
typically $n>p,m$ (often $n\gg p,m$), the Lyapunov variable dominates
here, although it is just an auxiliary variable and we do not need to
know its value at the feasible point. Hence a natural question arises
whether we can avoid the Lyapunov variable in the formulation of the
problem. The answer was given in \cite{SOF2005} and lies in the
formulation of the problem using polynomial matrix inequalities.
Let $k=\Vec K$. Define the characteristic polynomial of $A+BKC$:
$$
q(s,k) = \det(sI-A-BKC) = \sum_{i=0}^n q_i(k)s^i\,,
$$
where $q_i(k) = \sum_\alpha q_{i\alpha}k^\alpha$ and
$\alpha\in\NN^{mp}$ are all monomial powers. The \emph{Hermite
stability criterion} says that the roots of $q(s,k)$ belong to the
stability region ${\cal D}$ (in our case the left half-plane) if and only
if
$$
H(q) = \sum_{i=0}^n\sum_{j=0}^n q_i(k)q_j(k) H_{ij} \succ 0 \,.
$$
Here the coefficients $H_{ij}$ depend on the stability region only
(see, e.g., \cite{ecc}). For instance, for $n=3$, we have
$$
H(q) = \begin{pmatrix} 2q_0q_1 & 0 & 2q_0q_3\\
0 & 2q_1q_2-2q_0q_3 & 0\\
2q_0q_3 & 0 & 2q_2q_3
\end{pmatrix}\,.
$$
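For instance, for the stable polynomial $q(s)=(s+1)^3=s^3+3s^2+3s+1$ (i.e.,
$q_0=1$, $q_1=q_2=3$, $q_3=1$) the matrix above is positive definite, while
for the unstable $(s-1)^3$ it is not. A quick numeric check (Python with
{\tt numpy}; our illustration only):

```python
import numpy as np

def hermite3(q0, q1, q2, q3):
    """Hermite matrix for n = 3 and the left half-plane stability
    region, as displayed above."""
    return np.array([[2*q0*q1, 0.0,               2*q0*q3],
                     [0.0,     2*q1*q2 - 2*q0*q3, 0.0],
                     [2*q0*q3, 0.0,               2*q2*q3]])
```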
The Hermite matrix $H(q)=H(k)$ depends polynomially on $k$:
\begin{equation}\label{eq:sofpmi}
H(k) = \sum_\alpha H_\alpha k^\alpha \succ 0
\end{equation}
where $H_\alpha = H_\alpha^T\in\RR^{n\times n}$ and $\alpha\in\NN^{mp}$
describes all monomial powers.
\begin{theorem}[\cite{SOF2005}] Matrix $K$ solves the static output feedback problem if
and only if $k=\Vec K$ satisfies the polynomial matrix inequality
(\ref{eq:sofpmi}).
\end{theorem}
In order to solve the strict feasibility problem (\ref{eq:sofpmi}), we
can solve the following optimization problem with a polynomial matrix
inequality
\begin{align}\label{eq:sofpmi1}
&\max_{k\in\RR^{mp},\,\lambda\in\RR} \lambda-\mu\|k\|^2 \\
&
\mbox{subject to}\quad
H(k)\succcurlyeq \lambda I\,. \nonumber
\end{align}
Here $\mu>0$ is a parameter that allows us to trade off between
feasibility of the PMI and a moderate norm of the matrix $K$, which is
generally desired in practice.
\paragraph{COMPlib examples}
In order to use PENLAB for the solution of SOF problems
(\ref{eq:sofpmi1}), we have developed an interface to the problem
library COMPlib \cite{complib}\footnote{The authors would like to thank
Didier Henrion, LAAS-CNRS Toulouse, for developing a substantial part
of this interface.}. Table~\ref{tab:11} presents the results of our
numerical tests. We have only solved COMPlib problems of small size,
with $n<10$ and $mp<20$. The reason for this is that our MATLAB
implementation of the interface (building the matrix $H(k)$ from
COMPlib data) is very time-consuming. For each COMPlib problem, the
table shows the degree of the matrix polynomial, problem dimensions $n$
and $mp$, the optimal $\lambda$ (the negative largest eigenvalue of the
matrix $K$), the CPU time and number of Newton iterations/linesearch
steps of PENLAB. The final column contains information about the
solution quality: ``F'' means failure of PENLAB to converge to an
optimal solution; ``+'' means that PENLAB converged to a
solution which does not stabilize the system; and ``0'' is used when
PENLAB converged to a solution that is on the boundary of the feasible
domain and thus not useful for stabilization.
\begin{table}[htbp]
\caption{Results of PENLAB on static output feedback problems from the COMPlib collection}
\begin{tabular*}{\hsize}{@{\extracolsep{\fill}}lcccrrrc}
\hline
Problem & degree & $n$ & $mp$ & $\lambda_{\rm opt}$& CPU (sec)&iter &remark \\
\hline
AC1 & 5 & 5 & 9 & $-0.871\cdot 10^{0}$ & 2.2 & 27/30& \\
AC2 & 5 & 5 & 9 & $-0.871\cdot 10^{0}$ & 2.3 & 27/30& \\
AC3 & 4 & 5 & 8 & $-0.586\cdot 10^{0}$ & 1.8 & 37/48& \\
AC4 & 2 & 4 & 2 & $0.245\cdot 10^{-2}$ & 1.9 & 160/209& + \\
AC6 & 4 & 7 & 8 & $-0.114\cdot 10^{4}$ & 1.2 & 22/68& \\
AC7 & 2 & 9 & 2 & $-0.102\cdot 10^{3}$& 0.9 & 26/91 &\\
AC8 & 2 & 9 & 5 & $0.116\cdot 10^{0}$ & 3.9 & 346/1276 & F \\
AC11 & 4 & 5 & 8 & $-0.171\cdot 10^{5}$ & 2.3 & 65/66& \\
AC12 & 6 & 4 & 12 & $0.479\cdot 10^{0}$ & 12.3 & 62/73& + \\
AC15 & 4 & 4 & 6 & $-0.248\cdot 10^{-1}$ & 1.2 & 25/28 & \\
AC16 & 4 & 4 & 8 & $-0.248\cdot 10^{-1}$ & 1.2 & 23/26 & \\
AC17 & 2 & 4 & 2 & $-0.115\cdot 10^{2}$ & 1.0 & 19/38 & \\
HE1 & 2 & 4 & 2 & $-0.686\cdot 10^{2}$ & 1.0 & 22/22 & \\
HE2 & 4 & 4 & 4 & $-0.268\cdot 10^{0}$ & 1.6 & 84/109 & \\
HE5 & 4 & 8 & 8 & $0.131\cdot 10^{2}$ & 1.9 & 32/37 & + \\
REA1 & 4 & 4 & 6 & $-0.726\cdot 10^{2}$ & 1.4 & 33/35 & \\
REA2 & 4 & 4 & 4 &$-0.603\cdot 10^{2}$ & 1.3 & 34/58 & \\
DIS1 & 8 & 8 & 16 & $-0.117\cdot 10^{2}$ & 137.6 & 30/55 & \\
DIS2 & 4 & 3 & 4 & $-0.640\cdot 10^{1}$ & 1.6 & 59/84 & \\
DIS3 & 8 & 6 & 16 & $-0.168\cdot 10^{2}$ & 642.3 & 66/102 & \\
MFP & 3 & 4 & 6 & $-0.370\cdot 10^{-1}$ & 1.0 & 20/21 & \\
TF1 & 4 & 7 & 8 & $-0.847\cdot 10^{-8}$ & 1.7 & 27/31 & 0 \\
TF2 & 4 & 7 & 6 & $-0.949\cdot 10^{-7}$ & 1.3 & 19/23 & 0 \\
TF3 & 4 & 7 & 6 & $-0.847\cdot 10^{-8}$ & 1.6 & 28/38 & 0 \\
PSM & 4 & 7 & 6 & $-0.731\cdot 10^{2}$ & 1.1 & 17/39 & \\
NN1 & 2 & 3 & 2 & $-0.131\cdot 10^{0}$ & 1.2 & 32/34 & 0\\
NN3 & 2 & 4 & 1 & $0.263\cdot 10^{2}$ & 1.0 & 31/36 & +\\
NN4 & 4 & 4 & 6 & $-0.187\cdot 10^{2}$ & 1.2 & 33/47 & \\
NN5 & 2 & 7 & 2 & $0.137\cdot 10^{2}$ & 1.5 & 108/118 & +\\
NN8 & 3 & 3 & 4 & $-0.103\cdot 10^{1}$ & 1.0 & 19/29 & \\
NN9 & 4 & 5 & 6 & $0.312\cdot 10^{1}$ & 1.6 & 64/97 & +\\
NN10 & 6 & 8 & 9 & $0.409\cdot 10^{4}$ & 18.3 &300/543 & F\\
NN12 & 4 & 6 & 4 & $0.473\cdot 10^{1}$ & 1.4 & 47/58 & + \\
NN13 & 4 & 6 & 4 & $0.279\cdot 10^{12}$ & 2.2 & 200/382 &F\\
NN14 & 4 & 6 & 4 & $0.277\cdot 10^{12}$ & 2.3 & 200/382 &F\\
NN15 & 3 & 3 & 4 & $-0.226\cdot 10^{0}$ & 1.0 & 15/14 & \\
NN16 & 7 & 8 & 16 & $-0.623\cdot 10^{3}$ &613.3 &111/191 & \\
NN17 & 2 & 3 & 2 & $0.931\cdot 10^{-1}$ & 1.0 & 25/26 & +\\
\hline
\end{tabular*} \label{tab:11}
\end{table}
The reader can see that PENLAB can solve all problems apart from AC8,
NN10, NN13 and NN14; these problems are, however, known to be very
ill-conditioned and could not be solved via the Lyapunov matrix
approach either (see \cite{SOF2004}). Notice that the largest problems,
with polynomials of degree up to 8, did not cause the algorithm any
major difficulties.
\paragraph{PENLAB distribution}
The related MATLAB programs are stored in directory \verb|applications/SOF| of the
PENLAB distribution. To solve, for instance, example AC1, run
\begin{verbatim}
>> sof('AC1');
\end{verbatim}
The COMPlib program and library must be installed on the user's computer.
\section{PENLAB versus PENNON (MATLAB versus C)}\label{sec:comparison}
The obvious concern of any user will be how fast (or rather, how slow)
the MATLAB implementation is and whether it can solve problems of
non-trivial size. The purpose of this section is to give a very rough
comparison of PENLAB and PENNON, i.e., of the MATLAB and C
implementations of the same algorithm. The reader should, however, not
draw any serious conclusions from the tables below, for the following reasons:
\begin{itemize}
\item The two implementations differ slightly. This can be seen from
      the different numbers of iterations needed to solve individual
      examples.
\item The difference in CPU timing very much depends on the type of
the problem. For instance, some problems require
multiplications of sparse matrices with dense ones---in this
case, the C implementation will be much faster. On the other
hand, for some problems most of the CPU time is spent in the
dense Cholesky factorization which, in both implementations,
relies on LAPACK routines and thus the running time may be
comparable.
\item The problems were solved using an Intel i7 processor with two
      cores. The MATLAB implementation used both cores to perform
      \emph{some} commands, while the C implementation only used one
      core. This is clearly seen in, e.g., example lane\_emd10 in
      Table~\ref{tab:12}.
\item For certain problems (such as mater2 in Table~\ref{tab:14}),
most of the CPU time of PENLAB is spent in the user-defined
routine for gradient evaluation. For linear SDP, this only
amounts to reading the data matrices, in our implementation
elements of a two-dimensional cell array, from memory. Clearly,
a more sophisticated implementation would improve the timing.
\end{itemize}
For all calculations, we have used a notebook running Windows 7 (32
bit) on Intel Core i7 CPU M620@2.67GHz with 4GB memory and MATLAB
7.7.0.
\subsection{Nonlinear programming problems}
We first solved selected examples from the COPS collection \cite{cops}
using the AMPL interface. These are medium-size examples, mostly coming
from finite element discretization of optimization problems with PDE
constraints. Table~\ref{tab:12} presents the results.
\begin{table}[htbp]
\caption{Selected COPS examples. CPU time is given in seconds.
Iteration count gives the number of the global iterations in
Algorithm~\ref{algo:1} and the total number of steps of the Newton
method.}
\begin{tabular*}{\hsize}{@{\extracolsep{\fill}}crrrrrrr} \hline
problem & vars& constr. & constraint & \multicolumn{2}{c}{PENNON} & \multicolumn{2}{c}{PENLAB} \\
& & & type & CPU & iter. & CPU & iter. \\\hline
elec200 & 600 & 200 & $=$ & 40 & 81/224 & 31 & 43/135 \\
chain800 & 3199 & 2400 & $=$ & 1 & 14/23 & 6 & 24/56\\
pinene400 & 8000 & 7995 & $=$ & 1 & 7/7 & 11 & 17/17\\
channel800 & 6398 & 6398 & $=$ & 3 & 3/3 & 1 & 3/3\\
torsion100 & 5000 & 10000 & $\leq$ & 1 & 17/17 & 17 & 26/26 \\
bearing100 & 5000 & 5000 & $\leq$ & 1 & 17/17 & 13 & 36/36 \\
lane\_emd10& 4811 & 21 & $\leq$ & 217 & 30/86 & 64 & 25/49\\
dirichlet10& 4491 & 21 & $\leq$ & 151 & 33/71 & 73 & 32/68 \\
henon10 & 2701 & 21 & $\leq$ & 57 & 49/128 & 63 & 76/158 \\
minsurf100 & 5000 & 5000 & box & 1 & 20/20 & 97 & 203/203 \\
gasoil400 & 4001 & 3998 & $=$ \& box & 3 & 34/34 & 13 & 59/71\\
duct15 & 2895 & 8601 & $=$ \& $\leq$& 6 & 19/19 & 9 & 11/11\\
tri\_turtle& 3578 & 3968 &$\leq$ \& box& 3 & 49/49 & 4 & 17/17\\
marine400 & 6415 & 6392 &$\leq$ \& box& 2 & 39/39 & 22 & 35/35 \\
steering800& 3999 & 3200 &$\leq$ \& box& 1 & 9/9 & 7 & 19/40 \\
methanol400& 4802 & 4797 &$\leq$ \& box& 2 & 24/24 & 16 & 47/67 \\
catmix400 & 4398 & 3198 &$\leq$ \& box& 2 & 59/61 & 15 & 44/44 \\
\hline
\end{tabular*} \label{tab:12}
\end{table}
\subsection{Linear semidefinite programming problems}
We solved selected problems from the SDPLIB collection
(Table~\ref{tab:13}) and Topology Optimization collection
(Table~\ref{tab:14}); see \cite{sdplib,topo}. The data of all problems
were stored in SDPA input files \cite{sdpa}. Instead of PENNON, we have
used its clone PENSDP, which directly reads the SDPA files and thus avoids
repeated calls of the callback functions. The difference between
PENNON and PENSDP (in favour of PENSDP) would only be significant in
the mater2 example with many small matrix constraints.
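For readers unfamiliar with the SDPA sparse input format mentioned above: its data lines have the form \verb|<matno> <blkno> <i> <j> <value>|, with \verb|matno| $=0$ holding the objective matrix. The following is a simplified sketch of such a reader (our illustration, not PENSDP's actual parser; it ignores the header lines of a full SDPA file):

```python
def parse_sdpa_entries(lines):
    """Collect the '<matno> <blkno> <i> <j> <value>' data lines of an SDPA
    sparse file into {(matno, blkno): [(i, j, value), ...]}; matno = 0 holds
    the objective matrix, entries describe the upper triangle of each block."""
    entries = {}
    for line in lines:
        line = line.strip()
        if not line or line[0] in '*"':      # comment lines in SDPA files
            continue
        matno, blkno, i, j, val = line.split()
        entries.setdefault((int(matno), int(blkno)), []).append(
            (int(i), int(j), float(val)))
    return entries

example = [
    '* toy data block (not a complete SDPA file: header lines omitted)',
    '0 1 1 1 -1.0',
    '1 1 1 2  2.5',
    '1 1 2 2  1.0',
]
print(parse_sdpa_entries(example))
```

Grouping entries by constraint and block number in this way is exactly what avoids the repeated callback evaluations mentioned above.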
\begin{table}[htbp]
\caption{Selected SDPLIB examples. CPU time is given in seconds.
Iteration count gives the number of the global iterations in
Algorithm~\ref{algo:1} and the total number of steps of the Newton
method.}
\begin{tabular*}{\hsize}{@{\extracolsep{\fill}}crrrrrrr} \hline
problem & vars& constr. & constr. & \multicolumn{2}{c}{PENSDP} & \multicolumn{2}{c}{PENLAB} \\
& & & size & CPU & iter. & CPU & iter. \\\hline
control3 & 136 & 2 & 30 & 1 & 19/103 & 20 & 22/315 \\
maxG11 & 800 & 1& 1600 & 18 & 22/41 & 186 & 18/61 \\
qpG11 & 800 & 1& 1600 & 43 & 22/43 & 602 & 18/64 \\
ss30 & 132 & 1& 294 & 20 & 23/112 & 17 & 12/63 \\
theta3 & 1106 & 1& 150 & 11 & 15/52 & 61 & 14/48 \\
\hline
\end{tabular*} \label{tab:13}
\end{table}
\begin{table}[htbp]
\caption{Selected TOPO examples. CPU time is given in seconds.
Iteration count gives the number of the global iterations in
Algorithm~\ref{algo:1} and the total number of steps of the Newton
method.}
\begin{tabular*}{\hsize}{@{\extracolsep{\fill}}crrrrrrr} \hline
problem & vars& constr. & constr. & \multicolumn{2}{c}{PENSDP} & \multicolumn{2}{c}{PENLAB} \\
& & & size & CPU & iter. & CPU & iter. \\\hline
buck2 & 144 & 2 & 97 & 2 & 23/74 & 22 & 18/184 \\
vibra2 & 144 & 2& 97 & 2 & 34/132 & 35 & 20/304 \\
shmup2 & 200 & 2& 441 & 65 & 24/99 & 172 & 26/179 \\
mater2 & 423 & 94 & 11 & 2 & 20/89 & 70 & 12/179 \\
\hline
\end{tabular*} \label{tab:14}
\end{table}
[arXiv:1311.5240, math.OC (Optimization and Control), 2013-11-22] PENLAB: A MATLAB solver for nonlinear semidefinite optimization. https://arxiv.org/abs/1311.5240

Abstract: PENLAB is an open source software package for nonlinear optimization, linear and nonlinear semidefinite optimization and any combination of these. It is written entirely in MATLAB. PENLAB is a young brother of our code PENNON \cite{pennon} and of a new implementation from NAG \cite{naglib}: it can solve the same classes of problems and uses the same algorithm. Unlike PENNON, PENLAB is open source and allows the user not only to solve problems but to modify various parts of the algorithm. As such, PENLAB is particularly suitable for teaching and research purposes and for testing new algorithmic ideas. In this article, after a brief presentation of the underlying algorithm, we focus on practical use of the solver, both for general problem classes and for specific practical problems.
https://arxiv.org/abs/2003.11673

Explicit expanders of every degree and size

Abstract: An $(n,d,\lambda)$-graph is a $d$-regular graph on $n$ vertices in which the absolute value of any nontrivial eigenvalue is at most $\lambda$. For any constant $d \geq 3$, $\epsilon>0$ and all sufficiently large $n$ we show that there is a deterministic poly$(n)$ time algorithm that outputs an $(n,d,\lambda)$-graph (on exactly $n$ vertices) with $\lambda \leq 2\sqrt{d-1}+\epsilon$. For any $d=p+2$ with $p \equiv 1 \bmod 4$ prime and all sufficiently large $n$, we describe a strongly explicit construction of an $(n,d,\lambda)$-graph (on exactly $n$ vertices) with $\lambda \leq \sqrt{2(d-1)} + \sqrt{d-2} +o(1)$ $(< (1+\sqrt 2)\sqrt{d-1}+o(1))$, with the $o(1)$ term tending to $0$ as $n$ tends to infinity. For every $\epsilon>0$, $d>d_0(\epsilon)$ and $n>n_0(d,\epsilon)$ we present a strongly explicit construction of an $(m,d,\lambda)$-graph with $\lambda < (2+\epsilon)\sqrt{d}$ and $m=n+o(n)$. All constructions are obtained by starting with known ones of Ramanujan or nearly Ramanujan graphs, modifying or packing them in an appropriate way. The spectral analysis relies on the delocalization of eigenvectors of regular graphs in cycle-free neighborhoods.

\section{Introduction}
An $(n,d,\lambda)$-graph is a $d$-regular graph on $n$ vertices
in which the absolute value of every nontrivial eigenvalue is at most
$\lambda$. This notation was introduced by the author in the early
90s, motivated by the fact that such graphs in which $\lambda$ is much
smaller than $d$ exhibit strong expansion and quasi-random properties,
see \cite{Al1}, \cite{AC}, \cite{KS}.
It is well known (see \cite{Al1}, \cite{Ni}, \cite{Fr1})
that if an $(n,d,\lambda)$-graph exists then $\lambda \geq 2\sqrt {d-1}
-O(1/\log^2 n)$. An $(n,d,\lambda)$-graph is called (two-sided)
{\em Ramanujan} if
$\lambda \leq 2\sqrt {d-1}$.
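As a small added sanity check of the definition (our example, not from the original text): the Petersen graph is a $(10,3,2)$-graph, and since $2 \leq 2\sqrt{d-1}=2\sqrt 2$, it is Ramanujan. A few lines of numpy verify this:

```python
import numpy as np

# Petersen graph: a (10, 3, 2)-graph; since 2 <= 2*sqrt(2), it is Ramanujan.
n, d = 10, 3
edges = [(i, (i + 1) % 5) for i in range(5)]            # outer 5-cycle
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]   # inner pentagram
edges += [(i, i + 5) for i in range(5)]                 # spokes
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1

eig = np.sort(np.linalg.eigvalsh(A))
lam = np.abs(eig[:-1]).max()     # nontrivial eigenvalues are 1 and -2, so lam = 2
print(lam, 2 * np.sqrt(d - 1))   # 2.0 vs ~2.828: Ramanujan
```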
Lubotzky, Phillips and Sarnak \cite{LPS}, and
independently Margulis \cite{Ma} proved that
for every prime $p$ which is $1$ modulo $4$ there are infinite families
of $d$-regular Ramanujan graphs. Friedman \cite{Fr2} (see also
\cite{Bo} for a simpler proof) proved the existence of near
Ramanujan graphs of every degree and every (large) admissible size. Indeed,
establishing a conjecture of the present author, he proved that a random
$d$-regular graph on $n$ vertices is, with high probability, an
$(n,d,\lambda)$-graph for $\lambda=2\sqrt{d-1}+o(1)$, where the $o(1)$-term
tends to zero as $n$ tends to infinity.
For applications, however, (see, e.g., \cite{HLW} and its references for
many of those) it is desirable to have explicit constructions
of such graphs. It is also sometimes desirable to have explicit
constructions with specified degrees and numbers of vertices
(see \cite{MRSV} for a recent example).
A construction is called {\em explicit} if there is a
deterministic polynomial time algorithm that, given $n$ and $d$,
produces an $(n,d,\lambda)$-graph
(or an $(n(1+o(1)), d, \lambda)$-graph). It is {\em strongly explicit }
if the adjacency list of any given vertex can be produced in time
$polylog(n)$. The construction of \cite{LPS}, and that of \cite{Ma}
are strongly explicit
\footnote {Though they require finding a large prime in a
prescribed range. This can be done efficiently using randomization,
but can also be avoided. More details appear in Section 2.},
providing Cayley graphs of $SL(2,F_q)$, but work
only for degrees that are $p+1$ for primes $p \equiv 1 \bmod 4$ and for
numbers of vertices that are of the form $q(q^2-1)/2$ for primes
$q$ which are $1$ modulo $4$ so that $p$ is a quadratic residue modulo
$q$. Morgenstern \cite{Mo} gave a strongly explicit construction
for every degree which is a prime power plus 1, but the possible numbers
of vertices obtained are sparser. An observation in \cite{CM} provides
strongly explicit families of $(n,d,\lambda)$-graphs
with $\lambda \leq O(d^{0.525})$ for infinitely many values of $n$
(but not for every $n$). Similarly, the method in
\cite{RVW} and its improvement in
\cite{BaTs} provide strongly explicit families with $\lambda \leq
O(d^{1/2+o(1)})$ (for infinitely many, but not for all $n$). The results
of \cite{MSS} together with those of \cite{Co} and an observation of
Srivastava (cf. \cite{MOP}) give explicit, but not strongly explicit
$(n,d,\lambda)$-graphs for all admissible $d$ and $n$ with
$\lambda \leq 4 \sqrt {d-1}$. In a recent work of Mohanty, O'Donnell
and Paredes \cite{MOP}
the authors describe an explicit (but not strongly explicit)
construction of $(n,d,\lambda)$-graphs for every $d$, where
$\lambda = 2\sqrt {d-1}+o(1)$ and the $o(1)$-term tends to $0$ as
$n$ tends to infinity. This, again, works for infinitely many values
of $n$, but not for all $n$.
In the present short paper
we describe improved explicit and strongly explicit
constructions of near Ramanujan graphs of all degrees and (large) number of
vertices. The first result is a
(slightly improved version of an)
observation I mentioned in several lectures in the 90s that, as far
as I know, has never appeared in print. Although it is very simple,
the parameters it provides are far better than the ones obtained from
the constructions in \cite{CM}, \cite{RVW}, \cite{BaTs}, and I therefore
decided to include it here.
\begin{prop}
\label{p11}
For every degree $d$ there is a strongly explicit construction of
$(n,d,\lambda)$-graphs where $\lambda \leq (2+o_d(1))\sqrt {d}$,
the $o_d(1)$-term tends to zero as $d$ tends to infinity,
and the possible values of $n$ form a sequence in which the ratio
between consecutive terms tends to $1$.
\end{prop}
Note that this means that for every desired number of vertices $n$ and any
desired degree $d$, there is a strongly explicit construction of
an $(n(1+o_n(1)),d, \lambda)$-graph with $\lambda \leq (2+o_d(1))
\sqrt d$. Here the term $o_n(1)$ tends to zero as $n$ tends to infinity
and the $o_d(1)$-term tends to zero as $d$ tends to infinity.
The next result provides strongly explicit constructions of
$(n,d,\lambda)$ graphs for degrees $d=p+2$ with $p$ being a prime
congruent to $1$ modulo $4$, for any desired (large) number of vertices.
\begin{theo}
\label{t12}
For any prime $p \equiv 1 \bmod 4$ and every sufficiently large $n$
there is a strongly explicit construction of an $(n,d,\lambda)$-graph
(on exactly $n$ vertices), where $d=p+2$ and
$\lambda \leq \sqrt{2(d-1)}+\sqrt{d-2} +o(1)
< (1+\sqrt 2) \sqrt {d-1} +o(1)$, and the $o(1)$-term tends to
zero as $n$ tends to infinity.
\end{theo}
It is worth noting that here we allow at most one loop at
every vertex, with the convention that a loop adds one to the degree
(otherwise $n$ must be even, as the degree of
regularity is odd). For even $n$ we can replace the loops by a
matching with no loss in the spectral estimate.
If an explicit, rather than strongly explicit construction suffices,
we can combine a variant of
our method with the new result of \cite{MOP} to get
the following.
\begin{theo}
\label{t13}
For every degree $d$, every $\epsilon>0$ and all sufficiently
large $n \geq n_0(d,\epsilon)$, where $nd$ is even,
there is an explicit
construction of an $(n,d,\lambda)$-graph with $\lambda \leq 2\sqrt{d-1}
+\epsilon$.
\end{theo}
The construction in the proof of Proposition \ref{p11} is a
simple packing of known Ramanujan graphs on the same set of vertices.
A crucial point is that these constructions are all Cayley graphs of the
same group, so one can simply take a union of the
corresponding generating sets.
The proofs of Theorems \ref{t12} and \ref{t13} require more work.
Here too the idea is to start from a known Ramanujan or nearly Ramanujan
graph and modify it in an appropriate way. In the proof of
Theorem \ref{t12} we add
vertices connected to arbitrary disjoint sets of neighbors, adding loops
(or a matching)
to keep the graph regular. The eigenvalues are then estimated by their
variational definition.
In the construction for Theorem \ref{t13}
we omit carefully chosen vertices from a given near-Ramanujan graph
and add a matching between their neighbors to maintain regularity.
A crucial point in the spectral analysis here is the delocalization of
the eigenvectors of the graphs obtained, which is based on the absence
of short cycles in the neighborhoods of the omitted vertices.
The rest of this paper is organized as follows. In Section 2 we describe the
strongly explicit constructions, including the proofs of Proposition
\ref{p11} and Theorem \ref{t12}. In Section 3 we present the proof of
Theorem \ref{t13}. The final Section 4 contains some concluding remarks
and open problems.
\section{Strongly explicit constructions}
The basic construction we describe here requires the ability to
find efficiently a large prime in a prescribed range. It is well
known that this can be done efficiently by a randomized algorithm,
and can also be done deterministically assuming some standard
(open) number-theoretic conjectures
about the gap between consecutive primes. Since
this is the only non-deterministic part of the construction, we
call it a $p$-strongly explicit construction (where $p$ stands for
prime). This construction is described in the first subsection.
We then show how it can be replaced by a totally strongly
explicit construction. To do so, we first include a subsection
presenting the (known) description of the construction of
\cite{LPS}, \cite{Ma} as Cayley graphs of Quaternions over $Z_m$.
We proceed with a proof of Theorem \ref{t12} with a $p$-strongly
explicit construction, followed by its modification to a strongly
explicit one.
\subsection{The basic construction}
\label{basic}
We start with the simple proof of
Proposition \ref{p11}, with a $p$-strongly explicit construction.
It is based on the
fact that if $G_i=(V,E_i)$, $i \in I$, are graphs on the same set of
vertices $V$, where $G_i$ is an $(n,d_i,\lambda_i)$-graph, then their
union $G=(V, \cup_i E_i)$ (considered as a multigraph in case the sets
$E_i$ are not pairwise disjoint), is an $(n, \sum_i d_i, \sum_i \lambda_i)$
graph. This is a simple consequence of the variational definition of the
eigenvalues. The Ramanujan graphs in \cite{LPS} or \cite{Ma} are Cayley
graphs of the group $SL(2,F_q)$ of the two by two matrices with determinant
$1$ over the finite field $F_q$, modulo
its normal subgroup consisting of the
identity $I$ and $-I$. The degree can be $1$ plus
any prime $p$ congruent to $1$
modulo $4$, where $q$ is also a prime congruent to $1$ modulo $4$, and
$p$ is a quadratic residue modulo $q$. Note that by quadratic reciprocity
this is equivalent to $q$ being a quadratic residue modulo $p$.
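The union bound underlying this packing is easy to test numerically on toy Cayley graphs (our added sanity check; circulant Cayley graphs over $Z_n$ stand in for the $SL(2,F_q)$ graphs, and since circulants share the Fourier eigenvectors, here the nontrivial eigenvalues even add character-by-character):

```python
import numpy as np

def circulant_adj(n, gens):
    """Adjacency matrix of the Cayley graph of Z_n with a symmetric
    generating set gens (each generator appears together with its inverse)."""
    A = np.zeros((n, n))
    for s in gens:
        for v in range(n):
            A[v, (v + s) % n] += 1
    return A

def nontrivial_lambda(A, d):
    eig = np.sort(np.linalg.eigvalsh(A))
    assert abs(eig[-1] - d) < 1e-8   # top eigenvalue of a connected graph = degree
    return max(abs(eig[0]), abs(eig[-2]))

n = 101
X1 = {1, -1 % n, 10, -10 % n}        # two disjoint symmetric generating sets
X2 = {3, -3 % n, 7, -7 % n}
A1, A2 = circulant_adj(n, X1), circulant_adj(n, X2)
l1 = nontrivial_lambda(A1, len(X1))
l2 = nontrivial_lambda(A2, len(X2))
lu = nontrivial_lambda(A1 + A2, len(X1) + len(X2))
print(lu <= l1 + l2 + 1e-8)          # union is an (n, d1+d2, <= l1+l2)-graph
```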
Given a desired degree $d=d_1$,
let $p_1 $ be the largest prime congruent to
$1$ modulo $4$ and satisfying
$p_1+1 \leq d_1$. Put $d_2=d_1-p_1-1$. If $d_2>4$
let $p_2$ be the largest prime
congruent to $1$ modulo $4$ which satisfies $p_2+1 \leq d_2$ and put
$d_3=d_2-p_2-1$. Continuing in this manner we get primes $p_1, \ldots ,
p_s$ as above so that $(p_1+1)+(p_2+1)+ \cdots +(p_s+1) \leq d$,
where $y=d-((p_1+1)+(p_2+1)+ \cdots +(p_s+1))$ satisfies $0 \leq y \leq 4$. Let $q$
be a prime congruent to $1$ modulo $4$ which is a quadratic residue
modulo each $p_i$ (for example, any $q$ which is $1$ modulo each $p_i$
will do). Let $V$ be the
set of elements of
$SL(2,F_q)$. For each $i$ let $G_i$ be the $(p_i+1)$-regular
Ramanujan
Cayley graph of $SL(2,F_q)$ described in \cite{LPS}, and let
$X_i$ be its (symmetric) set of generators. Let $G'$ be the
Cayley graph of $SL(2,F_q)$ whose set of generators consists of the union
of all sets $X_i$. Then $G'$ is $(d-y)$-regular, where $0 \leq y \leq 4$.
If $y=0$ let $G$ be $G'$. If $y=1$ add to the set of generators the
matrix $M$ with rows $(0,1)$ and $(-1,0)$ (which is of order $2$). If
$y=2$ add an arbitrary generator and its inverse, if $y=3$ add such
a generator, its inverse and $M$, and if $y=4$ add an arbitrary set of
two generators and their inverses. In each of these cases the resulting
graph $G$ is a $d$-regular Cayley graph of $SL(2,F_q)$. By the known
results about the distribution of primes in arithmetic progressions
each prime $p_i$ is much smaller than $p_{i-1}$ as long as
$p_{i-1}$ is large. In fact, by \cite{BHP} it follows
that $p_{i}=O(p_{i-1}^{0.525})$.
Therefore, the resulting
graph $G$ is an $(n,d,\lambda)$-graph for $n=q(q^2-1)/2$ with
$\lambda\leq (2+o_d(1))\sqrt d$, where the $o_d(1)$-term tends to
zero as $d$ tends to infinity. Note that it is not difficult to ensure,
if so desired,
that the graph $G$ is simple: we just have to ensure the chosen
primes are distinct. This is automatically the case whenever $d_i$ is
still large, and if needed we can stop when $d_i$ becomes small and add
arbitrary additional generators and their inverses, together with
$M$ if $d$ is odd. Alternatively, if we have
to repeat the same prime several times, we can take the corresponding
generating set for this prime
and conjugate it to get an isomorphic graph with
different generators.
The known results about the
distribution of primes in arithmetic progressions imply also that
for each choice of the primes $p_i$ the possible choices for the prime
$q$ suffice to ensure that the sequence of possible values for the number
of vertices $n$ of the graph is one in which the ratio between consecutive
terms tends to $1$ as $n$ tends to infinity. This completes the proof
of the proposition (with a $p$-strongly explicit construction
resulting from the need to find the required large prime $q$).
\hfill $\Box$
\vspace{0.2cm}
\subsection{Ramanujan graphs as Cayley graphs of quaternions}
\label{quaternions}
In this subsection we present the known description of the LPS Ramanujan
graphs as Cayley graphs of quaternions. The proof that these are
Ramanujan graphs appears
(somewhat implicitly)
in \cite{Lu}.
Let $p$ be a prime congruent to $1$ modulo $4$, and
let $A=A(p)$ be the set of all integral solutions $(a_0,a_1,a_2,a_3)$
of the equation $a_0^2+a_1^2+a_2^2+a_3^2=p$
where $a_0$ is positive odd,
and all other $a_i$ are even. By a well known result of Jacobi
there are exactly $p+1$ such vectors.
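Jacobi's count is easy to confirm by brute force for small primes (our added check, not part of the construction):

```python
from itertools import product

def jacobi_solutions(p):
    """Integral solutions of a0^2+a1^2+a2^2+a3^2 = p with a0 positive odd and
    a1, a2, a3 even; by Jacobi's theorem there are exactly p+1 of them."""
    r = int(p ** 0.5) + 1
    sols = []
    for a0 in range(1, r + 1, 2):                       # positive odd a0
        for a1, a2, a3 in product(range(-r, r + 1), repeat=3):
            if a1 % 2 == a2 % 2 == a3 % 2 == 0 and \
               a0*a0 + a1*a1 + a2*a2 + a3*a3 == p:
                sols.append((a0, a1, a2, a3))
    return sols

for p in (5, 13, 17, 29):
    print(p, len(jacobi_solutions(p)))   # each prints p, p+1
```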
Let $m$ be odd, relatively prime to $p$, and assume further that $p$ is
a square in $Z_m^*$.
Let $Q(m)$ be the factor group of the
multiplicative group of the quaternions
over $Z_m$ whose norm
is a square in $Z_m^*$, modulo its normal subgroup consisting of
the scalars $Z_m^*$. Thus the elements of $Q(m)$ are all
quaternions $x_0+x_1 i+x_2 j+x_3 k$ where
$x_0^2+x_1^2+x_2^2+x_3^2 \in (Z_m^*)^2$ and two such elements are
identified if one is a multiple of the other by a scalar.
Finally, let $H=H(p,m)$ be the Cayley graph of $Q(m)$ with
the generating set
$$\{a_0+a_1i+a_2j+a_3k: (a_0,a_1,a_2,a_3) \in A(p)\}.$$
The following result is proved (somewhat implicitly) in \cite{Lu},
see pages 95-97.
\begin{theo}[\cite{Lu}]
\label{t91}
For every $p$ and $m$ as above $H=H(p,m)$ is a non-bipartite
$(p+1)$-regular Ramanujan graph, that is, the absolute value of each
of its eigenvalues besides the top one is at most $2 \sqrt p$.
\end{theo}
\subsection{The proof of Proposition \ref{p11}}
In the construction here we
start with the graphs $H(p,m)$ with $p \equiv
1 \bmod 4$ a prime and $m=q_1^s q_2^t$, where $s,t \geq 1$ and
$q_1,q_2 $ are distinct primes, each being $1 \bmod 4p$. For each
fixed $p$ as above, the known results about the Linnik problem
(see \cite{HB})
imply that there are $q_1,q_2$ as above, each being at most a polynomial
in $p$. It is not difficult to check, using Hensel's Lemma and the
Chinese Remainder Theorem, that the number of vertices
of $H(p,q_1^s q_2^t)$ is
$$
Q(q_1,q_2,s,t)=q_1^{3(s-1)} q_2^{3(t-1)} \frac{q_1(q_1-1)(q_1+1)}{2}
\frac{q_2(q_2-1)(q_2+1)}{2}.
$$
Indeed, by Hensel's Lemma, for elements $x_0,x_1,x_2,x_3$ of $Z_m$
the norm $x_0^2+x_1^2+x_2^2+x_3^2$ is a square in $Z^*_m$ if and
only if it is a square in $Z^*_{q_1}$ and in $Z^*_{q_2}$. Since
each $q_i$ is $1 \bmod 4$, $-1$ is a quadratic residue implying
that the number of solutions of $y_1^2+y_2^2=0$ in $Z_{q_i}$ is
$2q_i-1$. For each nonzero $b$ in $Z_{q_i}$
the number of solutions of $y_1^2+y_2^2=b$ (in $Z_{q_i}$) is the same
as the number of solutions of $y^2-z^2\;(=(y-z)(y+z))=b$, which is
$q_i-1$. This shows that the number of solutions of
$x_0^2+x_1^2+x_2^2+x_3^2=b$ for any nonzero $b \in Z_{q_i}$ is
$$
2 (2q_i-1)(q_i-1)+(q_i-2)(q_i-1)^2=(q_i-1)q_i(q_i+1).
$$
(These include $(2q_i-1)(q_i-1)$ solutions with
$x_0^2+x_1^2=0$ and $x_2^2+x_3^2=b$, $(2q_i-1)(q_i-1)$ ones
with $x_0^2+x_1^2=b$ and $x_2^2+x_3^2=0$, and $(q_i-1)^2$ solutions
for each of the $q_i-2$ possibilities $x_0^2+x_1^2=b_1$ and
$x_2^2+x_3^2=b_2$ with $b_1+b_2=b$ and $b_1,b_2 \not \in \{0,b\}$.)
Therefore, the number of elements over $Z_{q_i}$ whose norm is
a nonzero square in $Z_{q_i}$ is
$$
\frac{q_i-1}{2} (q_i-1)q_i(q_i+1)=\frac{q_i(q_i-1)^2(q_i+1)}{2}.
$$
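The counting above can be verified directly for small moduli (our added brute-force check, not in the original text): for $q=5$ the formula gives $q(q-1)^2(q+1)/2 = 240$ quadruples whose norm is a nonzero square.

```python
from itertools import product

def count_square_norms(q):
    """Count (x0,x1,x2,x3) in Z_q^4 whose norm x0^2+x1^2+x2^2+x3^2 is a
    nonzero square mod the prime q (here q = 1 mod 4, as in the text)."""
    squares = {(x * x) % q for x in range(1, q)}    # nonzero quadratic residues
    return sum(1 for x in product(range(q), repeat=4)
               if (x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2) % q in squares)

for q in (5, 13):
    print(q, count_square_norms(q), q * (q - 1) ** 2 * (q + 1) // 2)
```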
By the Chinese Remainder Theorem there are
$$
\frac{q_1(q_1-1)^2(q_1+1)}{2}\,
\frac{q_2(q_2-1)^2(q_2+1)}{2}
$$
elements $(x_0,x_1,x_2,x_3)$ in $Z_{q_1q_2}$ so that
$x_0^2+x_1^2+x_2^2+x_3^2$ is a square in $Z^*_{q_1q_2}$, and
by Hensel's Lemma each of them provides $q_1^{4(s-1)}q_2^{4(t-1)}$
quaternions over $Z_{q_1^sq_2^t}$ whose norm is a square in
$Z^*_{q_1^sq_2^t}$. To get the number of vertices of the graph we
just have to divide by the cardinality of $Z^*_{q_1^sq_2^t}$ which
is $q_1^{s-1}q_2^{t-1}(q_1-1)(q_2-1)$, obtaining the required
number of vertices. Note also that by this description it is
easy to number the vertices of the graph. (For our application here it
is in fact enough to number a constant fraction of them. For fixed
$q_1,q_2$ this can be done, for example, by numbering all vectors
$(1,x_1,x_2,x_3) \in Z_{q_1^sq_2^t}$ with $x_1,x_2,x_3$ divisible by
$q_1q_2$, lexicographically).
We next show that for every fixed distinct
primes $q_1,q_2$, the ratio between
consecutive elements in the set of integers
$\{Q(q_1,q_2,s,t): s,t \geq 1 \}$ tends to $1$ as the elements grow.
\begin{lemma}
\label{l32}
Let $q_1,q_2$ be distinct primes. Then for every large integer
$n$ there are positive integers $s,t$ so that
$n \leq Q(q_1,q_2,s,t) \leq n+o(n)$.
\end{lemma}
\noindent
{\bf Proof:\,} The constant
$\alpha=\frac{\log {q_1}}{\log {q_2}}$ is irrational. Therefore, by
the equidistribution theorem (in fact, by a special case that follows
easily from the pigeonhole principle), for every $\delta>0$ there
is an integer $k_1=k_1(\alpha)$ so that
$0<k_1 \alpha \bmod 1 < \delta$.
It follows that for every $\mu>0$ there are
integers $k_1,k_2$ such that
$$
1 \leq \frac{q_1^{k_1}}{q_2^{k_2}} \leq q_2^{\delta} \leq (1+\mu).
$$
This implies that for every $s,t \geq \max\{k_1,k_2\}$ the ratio
between $Q(q_1,q_2,s,t)$ and $Q(q_1,q_2,s-k_1,t-k_2)$ is between
$1$ and $(1+\mu)^3$, implying the desired result. \hfill $\Box$
\vspace{0.2cm}
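The pigeonhole step in the proof above can be seen numerically (our added illustration, with $q_1=5$, $q_2=13$): a small $k_1$ already makes the fractional part of $k_1\alpha$ tiny, so $q_1^{k_1}/q_2^{k_2}$ is close to $1$.

```python
import math

q1, q2 = 5, 13
alpha = math.log(q1) / math.log(q2)   # irrational for distinct primes q1, q2

# Find k1 <= 200 whose multiple of alpha has the smallest fractional part.
k1 = min(range(1, 201), key=lambda k: (k * alpha) % 1.0)
frac = (k1 * alpha) % 1.0
k2 = round(k1 * alpha - frac)         # then q1^k1 / q2^k2 = q2^frac
ratio = q2 ** frac
print(k1, k2, ratio)                  # ratio tends to 1 as frac -> 0
```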
\noindent
The proof of Proposition \ref{p11} now proceeds exactly as in
subsection \ref{basic}, using the description of the LPS graphs
serving as the building blocks as given in subsection
\ref{quaternions}. Since here $q_1,q_2$ are constants, there is no
need to find any large primes for the construction, providing a
strongly explicit construction for every fixed degree.
\subsection{The proof of Theorem \ref{t12}}
We first describe a $p$-strongly-explicit construction, starting,
again, with
the graphs of \cite{LPS}. Recall that the vertex sets of these graphs
is the set of matrices in $SL(2,F_q)$ where each matrix $A$ is identified
with $-A$. It is easy to number the vertices starting with the matrices
$(a_{ij})$ with $a_{11} \neq 0$ and ordering
them according to the lexicographic
order of the elements $(a_{11},a_{12},a_{13})$ where $a_{14}$ is chosen to
ensure that the determinant is $1$
(which is always possible as $a_{11} \neq 0$). Here $1 \leq a_{11} \leq
(q-1)/2$, as we identify each matrix $A$ with $-A$. The first matrices
are the $q^2$ matrices with $a_{11}=1$, then those with
$a_{11}=2$, and so on.
(The remaining $q(q-1)/2$ matrices with $a_{11}=0$ can appear
last in our order, according to the lexicographic order of
$(a_{12},a_{22})$, but this will play no real role in the
construction.)
Given the desired number $n$ of vertices, and given the degree $d=p+2$
with $p$ as in the theorem, let $q$ be the largest prime which
is $1$ modulo $4$, is a quadratic residue modulo $p$ and satisfies
$m_q=q(q^2-1)/2 \leq n$ (the number of vertices of the corresponding
LPS graph). Put $m=m_q$, let $H$ be the Ramanujan
$(p+1)=(d-1)$-regular graph of \cite{LPS} whose vertex set $V$ is the set
of elements of $SL(2,F_q)$ numbered as described above.
By the known results about the distribution
of primes in progressions $n-m=o(m)$. Put $r=n-m$ and let
$R$ be a set of $r$ additional vertices $u_1,u_2, \ldots, u_r$.
Connect each vertex $u_i$ to the vertices numbered
$(i-1)d+1,(i-1)d+2, \ldots ,id$ of $H$. Finally add
a loop to each remaining vertex of $H$ to make the graph regular.
This is the desired graph $G$. It is clearly $d$-regular with $d=p+2$.
(If $n$ is even and we do not want loops we can replace them
by a matching between consecutive pairs of vertices,
saturating all non-neighbors of the $r$ new vertices).
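The degree bookkeeping of this augmentation can be checked mechanically (our added sketch, not from the paper; only degrees are tracked, so any $(d-1)$-regular base graph on $m$ vertices, such as the LPS graph $H$, fits):

```python
def augment(m, d, r):
    """Degree table of the augmented graph: a (d-1)-regular base graph on
    vertices 0..m-1, new vertices m..m+r-1 joined to disjoint d-blocks of old
    vertices, and a loop (counted as adding 1) on every untouched old vertex."""
    assert ((d - 1) * m) % 2 == 0, "a (d-1)-regular graph on m vertices exists"
    assert r * d <= m, "the neighbor blocks must be disjoint"
    deg = {v: d - 1 for v in range(m)}        # base graph is (d-1)-regular
    deg.update({m + i: 0 for i in range(r)})
    for i in range(r):                        # new vertex i gets block [i*d, (i+1)*d)
        for v in range(i * d, (i + 1) * d):
            deg[v] += 1
            deg[m + i] += 1
    for v in range(r * d, m):                 # loop on each untouched old vertex
        deg[v] += 1
    return deg

deg = augment(m=60, d=7, r=4)
print(set(deg.values()))    # every vertex ends up with degree d = 7
```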
It is clear that the construction above is strongly explicit.
To complete the proof
it remains to show that the absolute value of any nontrivial eigenvalue
of $G$ is at most $\sqrt{2(p+1)}+ \sqrt{p}+o(1)$.
We proceed with a proof of this
fact. By the variational definition of the nontrivial eigenvalues of
$G$ this is equivalent to
showing that for every real function $f(u)$ on the set of vertices
$U=V \cup R$ of $G$ satisfying $\|f \|_2^2=1$
and $\sum_{u \in U} f(u)=0$
\begin{equation}
\label{e31}
|f^t A_G f| \leq
\sqrt{2(p+1)}+ \sqrt{p}+o(1)
\end{equation}
where $A_G$ is the adjacency matrix of $G$. Let $W \subset V$
denote the set of all $(p+2)r$ neighbors of $R$, put $L=V-W$, and let
$E_R$ denote the set of all edges between $R$ and $W$. Thus $E_R$ is a
collection of pairwise vertex disjoint stars, each having $(p+2)$ leaves.
The adjacency matrix of $G$ can be written as a sum
$A_G=A_H+A_R+A_L$, where $A_H$ is the adjacency matrix of the
Ramanujan graph $H$ (with the additional isolated vertices
of $R$), $A_R$ is the adjacency matrix of the graph
$(U,E_R)$, and $A_L$ is the adjacency matrix of the graph on $U$ whose
edges are
the loops on the vertices of $L$ (or the added matching on them, if
we have chosen not to add loops).
Therefore
\begin{equation}
\label{e32}
f^t A_G f = f^t A_H f+f^t A_R f+f^t A_L f.
\end{equation}
We proceed to bound each of these terms.
By Cauchy-Schwarz
$$
|\sum_{u \in R} f(u)|^2 \leq |R| \sum_{u \in R} f^2(u) \leq |R|=o(m).
$$
Since $\sum_{u \in U} f(u)=0$, this implies that
$|\sum_{u \in V} f(u)|^2= |\sum_{u \in R} f(u)|^2 =o(m)$.
Let $g$ be the trivial normalized eigenvector of $H$, that is, the vector
given by $g(v)=1/\sqrt m$ for all $v \in V$. Expressing the restriction
$f'$ of $f$ to $V$ as a linear combination of $g$ and a unit vector $h$
orthogonal to it, we get $f'=bg+ch$, where $\sum_{v \in V} h(v)=0$,
$b^2+c^2=1$ and $b^2=|\sum_{u \in V} f(u)|^2/m=o(1)$.
Since $H$ is a Ramanujan graph, $|h^t A_H h| \leq 2 \sqrt p.$
Therefore
\begin{equation}
\label{e33}
|f^t A_H f| = |(f')^t A_H f'| \leq b^2 (p+1) +c^2 2 \sqrt p
\leq 2 \sqrt p +o(1).
\end{equation}
Clearly
\begin{equation}
\label{e34}
|f^t A_L f| \leq \sum_{v \in L} f^2(v).
\end{equation}
Indeed this is an equality if there are loops and an inequality in
case a matching has been added.
For bounding the absolute value of $f^t A_R f$ observe that for every
positive $x$
$$
|f^t A_R f| =2 |\sum_{uv \in E_R} f(u) f(v) |
$$
\begin{equation}
\label{e35}
\leq \sum_{u \in R, v \in W, uv \in E_R} (\frac{f^2(u)}{x} + x f^2(v))
=\frac{p+2}{x} \sum_{u \in R} f^2 (u) + x \sum_{v \in W} f^2 (v).
\end{equation}
Combining (\ref{e32}),(\ref{e33}),(\ref{e34}) and (\ref{e35}) we conclude
that for every positive real $x$
\begin{equation}
\label{e36}
|f^t A_G f| \leq (2\sqrt p+1) \sum_{v \in L} f^2 (v) +
(2 \sqrt p+x)\sum_{v \in W} f^2 (v) + \frac{p+2}{x} \sum_{v \in R}
f^2(v) + o(1).
\end{equation}
Choosing $x=\sqrt{2p+2}-\sqrt{p}$ (which is at least $1$) and substituting
in (\ref{e36}) we finally get
$$
|f^t A_G f| \leq (\sqrt{2p+2}+\sqrt p) \sum_{u \in U} f^2 (u)+o(1)
=(\sqrt{2p+2}+\sqrt p) +o(1).
$$
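The arithmetic behind this choice of $x$ can be sanity-checked numerically: since $x \, (\sqrt{2p+2}+\sqrt p) = (2p+2)-p = p+2$, the coefficient $(p+2)/x$ in (\ref{e36}) equals $\sqrt{2p+2}+\sqrt p$ exactly, and the other two coefficients are no larger. The check below is an illustration only; the proof is the computation above.

```python
import math

def coefficients(p):
    """The three coefficients appearing in (e36) for x = sqrt(2p+2) - sqrt(p)."""
    x = math.sqrt(2 * p + 2) - math.sqrt(p)
    c_L = 2 * math.sqrt(p) + 1       # coefficient of the sum over L
    c_W = 2 * math.sqrt(p) + x       # coefficient of the sum over W
    c_R = (p + 2) / x                # coefficient of the sum over R
    return x, c_L, c_W, c_R

for p in (5, 13, 29, 101, 1009):
    x, c_L, c_W, c_R = coefficients(p)
    target = math.sqrt(2 * p + 2) + math.sqrt(p)
    assert x >= 1                              # so the bound (e35) is usable
    assert max(c_L, c_W, c_R) <= target + 1e-9  # all three at most the target
```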
This establishes (\ref{e31}) and completes the proof (with a
$p$-strongly explicit construction). The conversion to a strongly
explicit construction proceeds just as in the proof of Proposition
\ref{p11}, based on the results in subsection \ref{quaternions}.
Note that as mentioned in that subsection the description there
provides a simple efficient way to number enough vertices of each
graph $H(p,q_1^sq_2^t)$ and by Lemma \ref{l32} we can start by
finding efficiently appropriate $s,t$ using binary search.
\hfill $\Box$
\section{Explicit constructions}
In this section we present the proof of Theorem \ref{t13}. We start
with some preliminary lemmas.
\begin{lemma}
\label{l31}
Let $G=(V,E)$ be a $d$-regular graph on $n$ vertices, where
$d \geq 3$, and suppose that
the $(2r+4)$-neighborhood of any vertex in it contains at most one cycle,
where $r \leq \log_{d-1} n$.
Then there is a subset $U \subset V$ of vertices of $G$ satisfying the
following.
\begin{enumerate}
\item
$|U| \geq \frac{n}{2d^{2r+3}}$
\item
The $(r+1)$-neighborhood of any vertex in $U$ contains no cycle.
\item
The distance between any two vertices in $U$ is at least $2r+3$.
\end{enumerate}
Such a set $U$ can be found in polynomial time.
\end{lemma}
\vspace{0.1cm}
\noindent
{\bf Proof:}\, Let ${\cal C}$ denote the collection of all cycles of length
at most $2r+4$ in $G$. Note that the distance between any two members
$C_1,C_2$ of
${\cal C}$ is larger than $2r+4$, since otherwise there is a vertex $v$
within distance at most $r+2$ of both cycles $C_i$, and then its
$(2r+4)$-neighborhood contains both cycles, a contradiction.
The $(r+2)$-neighborhood of each cycle $C \in {\cal C}$ contains no
other cycle besides $C$, as it is contained in the
$(2r+4)$-neighborhood of any vertex on the cycle. Thus the number of edges
spanned by each such neighborhood is at most the number of vertices
in it. As the neighborhoods are vertex disjoint, the total number
of edges in all these neighborhoods together is at most the number
of vertices of $G$, which is $n$. It follows that by omitting all
vertices in the $(r+1)$-neighborhoods of all members of ${\cal C}$, at most
$n$ edges are omitted; since $G$ has $nd/2$ edges and $d \geq 3$,
at least $n/2$ edges, and hence at least $n/(2d)$ vertices,
have not been omitted. Let $Z$ be the set
of non-omitted vertices. Note that the $(r+1)$-neighborhood of any vertex
in $Z$ contains no cycle (as if it contains a cycle, it contains
a cycle of length at most $2r+3<2r+4$ but the vertex is not within distance
$r+1$ of any such cycle.) Starting with $U=\emptyset$
let $v_1$ be an arbitrary vertex of $Z$, add it to $U$ and remove
all vertices of $Z$ within distance $2r+2$ of $v_1$.
Clearly at most $d^{2r+2}$ vertices have been deleted. Let
$v_2$ be an arbitrary vertex left in $Z$, add it to $U$ and remove all vertices
of $Z$ within distance $2r+2$ of $v_2$. Continuing in this manner
we get a set $U$ of at least $\frac{n}{2d^{2r+3}}$ vertices. It is clear
that this set satisfies all the conclusions of the lemma. It is also clear
that $U$ can be computed in polynomial time. \hfill $\Box$
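The greedy selection at the end of the proof is easy to implement. The sketch below (illustrative only, using plain BFS balls) exercises exactly the spacing step: repeatedly pick a surviving candidate and discard every candidate within distance $2r+2$ of it.

```python
def ball(adj, v, radius):
    """All vertices within BFS distance `radius` of v."""
    seen, frontier = {v}, [v]
    for _ in range(radius):
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return seen

def greedy_spread(adj, candidates, min_dist):
    """Greedily pick candidates pairwise at distance >= min_dist by removing
    everything within distance min_dist - 1 of each picked vertex."""
    alive, picked = set(candidates), []
    while alive:
        v = min(alive)              # any surviving candidate will do
        picked.append(v)
        alive -= ball(adj, v, min_dist - 1)
    return picked
```

On a 30-cycle with $r=1$, so min\_dist $= 2r+3 = 5$, the greedy pass picks the vertices $0, 5, 10, 15, 20, 25$, pairwise at distance at least $5$.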

The next lemma, about the delocalization of eigenvectors of regular graphs
in cycle-free neighborhoods, can be proved using the method of Kahale in \cite{Ka}
(see also \cite{AGS} for a recent application of this technique).
Here we present a simple self-contained proof.
\begin{lemma}
\label{l42}
Let $G=(V,E)$ be a $d$-regular graph where $d \geq 3$,
let $uv$ be an edge of $G$
and suppose that the $r$-neighborhood of $uv$ contains no
cycle. For each $i$ satisfying $0 \leq i \leq r$ let
$N_i$ denote the set of all vertices of distance exactly $i$
from $\{u,v\}$. (In particular, $N_0=\{u,v\}$).
Let $f$ be a nonzero eigenvector of $G$ with eigenvalue
$\mu \geq 2 \sqrt{d-1}$. Then for every $1 \leq i \leq r$
\begin{equation}
\label{e41}
\sum_{w \in N_i} f^2(w) \geq \sum_{w \in N_{i-1}} f^2 (w).
\end{equation}
\end{lemma}
\vspace{0.1cm}
\noindent
{\bf Proof:}\, We apply induction on $i$. Note that by assumption the induced
subgraph of $G$ on the $r$-neighborhood of $uv$ is a $d$-regular tree. Therefore
$|N_i|=2(d-1)^i$ for all $i \leq r$. Let $u_1,u_2, \ldots ,u_{d-1}$
denote the neighbors of $u$ besides $v$, and let $v_1,v_2, \ldots ,v_{d-1}$
denote the neighbors of $v$ besides $u$. Then
$$
f(v)+\sum_{i=1}^{d-1} f(u_i)=\mu f(u)
$$
and
$$
f(u)+\sum_{i=1}^{d-1} f(v_i)=\mu f(v).
$$
By Cauchy-Schwarz,
$$
f^2(v)+\sum_{i=1}^{d-1} f^2(u_i) \geq \frac{\mu^2 f^2(u)}{d} \geq
\frac{4d-4}{d} f^2 (u)
$$
and similarly
$$
f^2(u)+\sum_{i=1}^{d-1} f^2(v_i) \geq \frac{4d-4}{d} f^2 (v).
$$
Summing, we conclude that
$$
f^2(u)+f^2(v)+\sum_{w \in N_1}f^2(w) \geq \frac{4d-4}{d} (f^2(u)+f^2 (v)),
$$
implying that
$$
\sum_{w \in N_1}f^2(w) \geq \frac{3d-4}{d} (f^2(u)+f^2 (v))
\geq f^2(u)+f^2(v) =\sum_{w \in N_0} f^2(w).
$$
This proves (\ref{e41}) for $i=1$.
Assuming it holds for $i-1$ we prove it for $i$. For each vertex
$w \in N_{i-1}$ let $w'$ be its unique parent in $N_{i-2}$ and let
$x_1,x_2, \ldots ,x_{d-1}$ be its neighbors in $N_i$.
Then
$$
f(w')+\sum_{j=1}^{d-1}f(x_j) = \mu f(w).
$$
Since $f(w')=(d-1) \cdot \frac{f(w')}{d-1}$, we get, by Cauchy-Schwarz,
$$
\frac{f^2(w')}{d-1}+\sum_{j=1}^{d-1}f^2(x_j) \geq \frac{\mu^2 f^2(w)}{2d-2}
\geq 2 f^2(w).
$$
Summing the above inequality for all $w$ in $N_{i-1}$, each
vertex $w'$ appears in the LHS exactly $d-1$ times, yielding
$$
\sum_{w' \in N_{i-2}} f^2(w') + \sum_{x \in N_{i}} f^2(x) \geq
2 \sum_{w \in N_{i-1}} f^2 (w).
$$
This gives
$$
\sum_{x \in N_{i}} f^2(x) \geq 2 \sum_{w \in N_{i-1}} f^2 (w)
- \sum_{w' \in N_{i-2}} f^2(w') \geq \sum_{w \in N_{i-1}} f^2 (w),
$$
where the last inequality follows from the induction hypothesis.
This completes the proof of the induction step, establishing the
assertion of the lemma. \hfill $\Box$
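The inequality (\ref{e41}) can be checked numerically on the radially symmetric solution of the eigenvalue equation around an edge of the $d$-regular tree: put $f \equiv f_i$ on $N_i$, with $f_0 = 1$ and $f_i$ determined by the equations in the proof, taking $\mu = 2\sqrt{d-1}$. This is an illustration of the lemma, not part of its proof.

```python
import math

def level_sums(d, r):
    """S_i = sum over N_i of f^2 for the radially symmetric local
    eigenfunction around an edge, with f = 1 on N_0 = {u, v} and
    mu = 2*sqrt(d-1); recall |N_i| = 2(d-1)^i."""
    mu = 2.0 * math.sqrt(d - 1)
    f = [1.0, (mu - 1.0) / (d - 1)]   # f_1 from f_0 + (d-1) f_1 = mu f_0
    for i in range(2, r + 1):         # f_{i-2} + (d-1) f_i = mu f_{i-1}
        f.append((mu * f[i - 1] - f[i - 2]) / (d - 1))
    return [2 * (d - 1) ** i * f[i] ** 2 for i in range(r + 1)]

for d in (3, 4, 7):
    S = level_sums(d, 12)
    assert all(S[i] + 1e-9 >= S[i - 1] for i in range(1, 13))
```

In closed form $f_i = (d-1)^{-i/2}(1 + Bi)$ with $B = 1 - 1/\sqrt{d-1} \geq 0$, so $S_i = 2(1+Bi)^2$ is indeed nondecreasing in $i$.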

Finally, we need the main result of Mohanty, O'Donnell and Paredes in
\cite{MOP}, which is the following.
\begin{theo}[\cite{MOP}]
\label{t33}
For every $d$, $\epsilon>0$ and (large) $n$ there is an explicit construction
of an $(n+o(n),d,\lambda)$-graph with $\lambda \leq 2 \sqrt{d-1}+\epsilon$
so that the $s$-neighborhood of any vertex contains at most one cycle,
where $s \geq (\log \log n)^2 $.
\end{theo}
We note that the result is stated in \cite{MOP} without the conclusion
about the
cycles, but the version above follows from the proof as presented there.
We are now ready to prove Theorem \ref{t13}.
\vspace{0.2cm}
\noindent
{\bf Proof of Theorem \ref{t13}:}\,
Put $r=\lceil 2/\epsilon \rceil$ and $s=2r+4$.
Let $H=(V,E)$ be an explicit
$(n+u,d,\lambda)$-graph with $u=o(n)$ and $\lambda \leq
2\sqrt{d-1}+\epsilon/2$ in which the $s$-neighborhood of any vertex
contains at most one cycle. Such an $H$ exists by Theorem
\ref{t33}.
By Lemma \ref{l31} we can find efficiently
a set $U$ of $u$ vertices in $H$ satisfying the
assertion of the lemma. Omit these vertices from the graph
to get a graph $H'$
and add
a matching $M$ between their neighbors retaining the degree of
regularity $d$. Let $G$ denote the resulting graph. Clearly it is
$d$-regular and has $n$ vertices. Note that the $r$-neighborhood
of any edge $uv$ of the added matching $M$ contains no cycle.
In order to complete the proof it remains to show that every
nontrivial eigenvalue of $G$ has absolute value at most
$2 \sqrt {d-1}+\epsilon$. Let $A_G$ be the adjacency matrix of
$G$, $A_{H'}$ the adjacency matrix of $H'$ (on the set of vertices
$V$) and $A_M$ the adjacency
matrix of the graph on the set of vertices $V $
whose edges are those
of the matching $M$. Note that $A_G=A_{H'}+A_M$. Let $\lambda$ be
a nontrivial eigenvalue of $G$ and let $f$ be a corresponding
eigenvector satisfying $\sum_{v \in V} f^2(v)=1$. Then
\begin{equation}
\label{e91}
\lambda=f^t A_G f=f^t A_{H'} f+ f^t A_M f.
\end{equation}
Since $H'$ is an induced subgraph of $H$ and all nontrivial
eigenvalues of $H$ have absolute value at most
$2 \sqrt{d-1}+\epsilon/2$ it follows, by
eigenvalue interlacing, that
\begin{equation}
\label{e92}
|f^t A_{H'} f| \leq 2 \sqrt{d-1}+\epsilon/2.
\end{equation}
It is also clear that
\begin{equation}
\label{e93}
|f^t A_{M} f| = |2\sum_{uv \in M} f(u)f(v)| \leq \sum_{uv \in M}
\left( f^2(u)+f^2(v) \right).
\end{equation}
If $|\lambda| \leq 2 \sqrt{d-1}$ there is nothing to prove; we thus
assume that $|\lambda| \geq 2 \sqrt{d-1}$. (The proof of Lemma \ref{l42}
uses only $\mu^2 \geq 4(d-1)$, so it applies to negative eigenvalues as well.)
Since the $r$-neighborhood of any edge of $M$ contains no cycle,
Lemma \ref{l42} implies that for every such edge $uv$,
$$
f^2(u)+f^2(v) \leq \frac{1}{r} \sum_{w \in N(u,v,r)} f^2(w),
$$
where $N(u,v,r)$ denotes the $r$-neighborhood of $uv$. Since all
these neighborhoods are pairwise disjoint it follows that
\begin{equation}
\label{e94}
\sum_{uv \in M} \left( f^2(u)+f^2(v) \right) \leq \frac{1}{r} \sum_{w \in V} f^2
(w) \leq \frac{1}{r} \leq \frac{\epsilon}{2}.
\end{equation}
The desired result follows by
plugging (\ref{e92}) and (\ref{e94}) in (\ref{e91}) (using
(\ref{e93})).
\hfill $\Box$
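The vertex-deletion-plus-matching surgery in this proof can be sketched on a toy example. This is hedged: a small circulant graph replaces the near-Ramanujan graph of Theorem \ref{t33}, a single vertex is deleted instead of the whole set $U$, and the neighbors are paired so that no matched pair is already adjacent (in the proof, the pairwise-distant choice of $U$ takes care of this).

```python
def delete_and_match(adj, v):
    """Delete vertex v from a d-regular graph (d even in this sketch) and
    pair its d neighbors by a matching, restoring d-regularity."""
    nbrs = sorted(adj[v])
    assert len(nbrs) % 2 == 0, "sketch assumes even degree"
    for u in nbrs:
        adj[u].discard(v)
    adj[v] = set()
    half = len(nbrs) // 2
    for a, b in zip(nbrs[:half], nbrs[half:]):
        assert b not in adj[a], "sketch pairs only non-adjacent neighbors"
        adj[a].add(b)
        adj[b].add(a)

# 4-regular circulant on 20 vertices; delete vertex 0 and re-match its
# four neighbors {1, 2, 18, 19} as (1,18) and (2,19).
adj = [{(v + o) % 20 for o in (1, 2, 18, 19)} for v in range(20)]
delete_and_match(adj, 0)
assert all(len(adj[v]) == 4 for v in range(1, 20))
```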
\section{Concluding remarks}
\begin{itemize}
\item
Morgenstern \cite{Mo} gave a strongly explicit construction of
Ramanujan graphs for every degree which is a prime power plus $1$,
but we cannot apply his construction in the proof of Proposition
\ref{p11} since his construction provides Cayley graphs of
different groups (and different sizes) for different degrees and hence
one cannot pack the graphs corresponding to several degrees.
Similarly, we cannot use his construction in the proof of
Theorem \ref{t12} since for every fixed degree the
sequence of possible numbers of vertices in his construction
for this degree is too sparse.
\item
The proof of Theorem \ref{t13} can be applied directly to high girth
Ramanujan graphs like those in \cite{LPS}, \cite{Ma} in case the
required degree is $p+1$ for a prime $p$ congruent to
$1 \bmod 4$ to obtain near Ramanujan graphs of this degree
with any required
(large) number of vertices.
\item
The problem of obtaining
strongly explicit (two-sided) Ramanujan (and not nearly Ramanujan)
graphs for
any degree and
number of vertices remains open.
\end{itemize}
\noindent
{\bf Acknowledgment}
I thank L\'aszl\'o Babai, Oded Goldreich, Ryan O'Donnell, Ori
Parzanchevski and Peter Sarnak for helpful discussions.
% [End of arXiv:2003.11673, ``Explicit expanders of every degree and size'', https://arxiv.org/abs/2003.11673]
% arXiv:2009.09410, ``Tiling by translates of a function: results and open problems''
% https://arxiv.org/abs/2009.09410

\begin{abstract}
We say that a function $f \in L^1(\mathbb{R})$ tiles at level $w$ by a discrete translation set $\Lambda \subset \mathbb{R}$, if we have $\sum_{\lambda \in \Lambda} f(x-\lambda)=w$ a.e. In this paper we survey the main results, and prove several new ones, on the structure of tilings of $\mathbb{R}$ by translates of a function. The phenomena discussed include tilings of bounded and of unbounded density, uniform distribution of the translates, periodic and non-periodic tilings, and tilings at level zero. Fourier analysis plays an important role in the proofs. Some open problems are also given.
\end{abstract}

\section{Introduction} \label{secI1}
Let $f$ be a function in $L^1(\R)$ and let
$\Lam \sbt \R$ be a discrete set.
We say that \emph{$f$ tiles $\R$
at level $w$} with the translation set $\Lam$,
or that
\emph{$f+\Lam$ is a tiling of $\R$
at level $w$} (where $w$ is a constant), if we have
\begin{equation}
\label{eqI1.1}
\sum_{\lambda\in\Lambda}f(x-\lambda)=w\quad\text{a.e.}
\end{equation}
and the series in \eqref{eqI1.1} converges absolutely a.e.
In the same way one can define tiling
by translates of an $L^1$ function on
$\R^d$, or more generally, on any locally compact
abelian group. The finite abelian groups,
and in particular the cyclic ones,
are an important class being often considered.
If $f = \1_\Omega$
is the indicator function of a set $\Omega$,
and $f + \Lam$ is a tiling at level one, then
this means
that the translated copies $\Omega+\lam$, $\lam\in\Lam$,
fill the whole space without overlaps up to measure zero.
To the contrary, for tilings by a
general real or complex-valued function
$f$, the translated copies may have
overlapping supports
and a wider variety of phenomena may (and do) occur.
In dimension one, translational
tilings exhibit a stronger structure than in
higher dimensions, and there are
interesting questions as to how rigid this
structure must be, e.g.\ how close the translation set
$\Lam$ is to being periodic, or to being constructed out
of periodic sets, or, at an even more basic level,
to being uniformly distributed in $\R$.
The subject has been studied by several authors,
see, in particular, \cite{LM91}, \cite{KL96}, \cite{Kol04},
\cite{KL16}, \cite{Liu18}, \cite{Lev20}.
The aim of this paper is to survey the results
obtained in earlier works, as well as to prove
several new results. At the end of the paper,
some open problems are also given.
In this section we survey the main
results on the structure of tilings
by translates of a function on $\R$,
and we state the new results that
will be proved in this paper.
\subsection{Tiling and density}
We say that a set
$\Lambda \sbt \R$ has \emph{bounded density} if
\begin{equation}
\label{eqI10.1}
\sup_{x\in\mathbb R} \#(\Lambda\cap[x,x+1))<+\infty.
\end{equation}
The set $\Lambda$ is said to be
\emph{uniformly distributed} if there is
a number $D(\Lam)$ satisfying
\begin{equation}
\label{eqI10.2}
\#(\Lambda\cap[x,x+r)) = D(\Lam) \cdot r + o(r), \quad r \to +\infty
\end{equation}
uniformly with respect to $x \in \R$.
In this case, $D(\Lam)$ is called the
\emph{uniform density} of $\Lam$.
The following result establishes
a connection between tiling and density:
\begin{thm}[{\cite{KL96}}]
\label{thmKL1}
Let $f+\Lam$ be a tiling at some nonzero level $w$,
where $f \in L^1(\R)$ and where
$\Lam \sbt \R$ is a set of bounded density.
Then $f$ has nonzero integral, and $\Lam$ has a uniform
density given by $D(\Lam) = w \cdot (\int f)^{-1}$.
\end{thm}
This was proved in
\cite[Lemma 2.3]{KL96} for
a weaker notion of
density\footnote{In \cite{KL96} the density
$d(\Lam)$ of a set $\Lam \sbt \R$
is defined by
$d(\Lam) := \lim_{r \to +\infty}
\#(\Lambda\cap(-r,r)) / (2r)$.}, but
a minor adjustment to
the proof in fact yields the stronger
statement above
for the uniform density.
A similar result is true also in $\R^d$.
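As a quick empirical illustration of the density formula (with toy parameters not taken from the paper): the set $\Lam = \Z \uplus (2\Z + \frac12)$, which is of the form \eqref{eqI2.1}, has uniform density $D(\Lam) = \frac{1}{1} + \frac{1}{2} = \frac32$, and window counts agree with $D(\Lam)\, r + o(r)$.

```python
# Points of Lambda = Z u (2Z + 1/2) in [0, 1000); window counts should be
# close to D(Lambda) * r with D(Lambda) = 3/2.
lam = [float(k) for k in range(0, 1000)] + [2 * k + 0.5 for k in range(0, 500)]

def count_in(points, x, r):
    """Number of points in the window [x, x + r)."""
    return sum(1 for t in points if x <= t < x + r)

for x, r in ((0.0, 1000.0), (100.0, 500.0), (250.25, 400.0)):
    assert abs(count_in(lam, x, r) - 1.5 * r) <= 2
```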
\subsection{Tiling at level zero}
\label{secTLZ}
It is not known whether
\thmref{thmKL1} has an analog
for tilings of bounded density \emph{at level zero}.
It was conjectured in \cite[p.\ 660]{KL96}
that if $f+\Lam$ is such a tiling,
then $f$ must have zero integral.
In \cite[Lemma 2.4]{KL96} this was proved
under the extra assumption that $f$ has compact
support.
Here we will prove that the conclusion
is true in the general case:
\begin{thm}
\label{thmA5}
Let $f+\Lam$ be a tiling at level zero,
where $f \in L^1(\R)$ and where
$\Lam \sbt \R$ is a nonempty set of bounded density.
Then $f$ must have zero integral.
\end{thm}
Do there exist tilings $f+\Lam$ at level zero
such that the set $\Lam$ has density zero?
\thmref{thmKL1} does not exclude such
a possibility. In fact, in
dimensions two and higher it is easy
to exhibit tilings of this kind.
For instance, in $\R^2$ one may take
$f(x,y) = \varphi(x)\psi(y)$, $\Lam = \Gam \times \{0\}$,
where $\varphi, \psi \in L^1(\R)$
and $\varphi + \Gam$ is a tiling of $\R$ at level zero.
We will show, however, that this is \emph{not} the case
in dimension one. To state the result, we recall that a set
$\Lam \sbt \R$ is said to be
\emph{relatively dense} if there
is $r>0$ such that any interval $[x,x+r)$
contains at least one point from $\Lam$.
We will prove the following:
\begin{thm}
\label{thmA6}
Let $f+\Lam$ be a tiling at level zero,
where $f \in L^1(\R)$ is nonzero and
$\Lam \sbt \R$ is a nonempty set of bounded density.
Then $\Lam$ must be a relatively dense set.
\end{thm}
In particular this implies that $\Lam$ cannot have density zero.
Does it follow from the assumptions in \thmref{thmA6}
that $\Lam$ has a uniform (positive) density $D(\Lam)$\,?
The answer to this question is not known.
The problem is nontrivial due to the existence
of translation sets $\Lam$
that admit only tilings at level zero,
so that \thmref{thmKL1} does not apply to these sets:
\begin{thm}[{\cite{Lev20}}]
\label{thmLev20.5}
There exists a nonempty set
$\Lam \sbt \R$ of bounded density
which admits tilings $f+\Lam$ with nonzero
$f \in L^1(\R)$, but any such tiling
is necessarily a tiling at level zero.
\end{thm}
\subsection{Periodic tilings}
We say that a set $\Lam \sbt \R$ has a \emph{periodic
structure} if it can be represented as
a disjoint union of finitely
many arithmetic progressions, namely
\begin{equation}
\label{eqI2.1}
\Lam = \biguplus_{j=1}^{N}
(a_j \Z + b_j)
\end{equation}
where $a_j, b_j$ are real numbers and
$a_j>0$.
The sets $\Lam$ with this structure constitute
the basic examples of translation sets for
tilings of $\R$. Indeed, one can check that if
\begin{equation}
f = \1_{[0,a_1]} \ast
\1_{[0,a_2]} \ast \dots \ast
\1_{[0,a_N]}
\end{equation}
and $\Lam$ is given by \eqref{eqI2.1},
then $f + \Lam$ is a tiling at some positive level $w$.
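For a concrete instance (illustrative parameters: $N=2$, $a_1=1$, $a_2=2$, $b_1=0$, $b_2=\frac12$): $f = \1_{[0,a_1]} \ast \1_{[0,a_2]}$ is a trapezoid supported on $[0,3]$; the translates along $\Z$ contribute $\int \1_{[0,2]} = 2$ and those along $2\Z+\frac12$ contribute $\int \1_{[0,1]} = 1$, so the level is $w = 3$. A numerical check over a truncated window:

```python
def trapezoid(x):
    """f = 1_[0,1] * 1_[0,2]: rises on [0,1], flat on [1,2], falls on [2,3]."""
    if x <= 0 or x >= 3:
        return 0.0
    return min(x, 1.0, 3.0 - x)

# Truncated translation set Z u (2Z + 1/2); since f has compact support,
# this window suffices for x in [-3, 3].
lam = [float(k) for k in range(-10, 11)] + [2 * k + 0.5 for k in range(-10, 11)]
for i in range(-20, 21):
    x = i / 7.0
    assert abs(sum(trapezoid(x - t) for t in lam) - 3.0) < 1e-9
```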
The last example shows that any set
$\Lam$ of the form \eqref{eqI2.1}
admits a tiling by a \emph{compactly supported}
function $f \in L^1(\R)$. A result first proved in
\cite{LM91} and rediscovered in \cite{KL96}
asserts that there are no other translation
sets $\Lam$ with this property:
\begin{thm}[{\cite{LM91}, \cite{KL96}}]
\label{thmLM91}
Let $f \in L^1(\R)$ be nonzero and have
compact support.
If $f$ tiles at some level
$w$ with a translation set $\Lambda$
of bounded density, then
$\Lambda$ has a periodic structure,
namely, it must be of the form \eqref{eqI2.1}.
\end{thm}
The proof of this result is based on Cohen's
characterization of idempotent measures in locally
compact abelian groups.
The group on which Cohen's theorem is used in the
proof is the \emph{Bohr compactification} of the real line
(see also \cite[p.\ 25]{Mey70}).
A discrete set $\Lambda\subset\mathbb R$ is said to have
\emph{finite local complexity} if $\Lambda$ can be enumerated as a sequence
$\{\lambda_n\}$, $n\in\Z$, such that $\lambda_n<\lambda_{n+1}$ and the successive differences $\lambda_{n+1}-\lambda_n$ take only finitely many different values. The following result establishes that tilings
of finite local complexity
must be periodic, even if $f$ does not have
compact support:
\begin{thm}[{\cite{IK13}, \cite{KL16}}]
\label{thmKL16FLC}
Let $\Lambda \sbt \R$ have finite local complexity.
If $f \in L^1(\R)$ is nonzero
and $f+\Lambda$ is a tiling at some level $w$, then $\Lambda$
must be a periodic set, namely, it has the form $\Lam = a\Z + \{b_1, \dots, b_N\}$.
\end{thm}
\subsection{Non-periodic tilings}
The papers \cite{LM91}, \cite{KL96}
leave the following question open: Does
there exist any set $\Lambda \sbt \R$
\emph{not} of periodic structure,
which can tile with some
function $f\in L^1(\R)$ of unbounded support?
Such a set $\Lam$ cannot have finite local complexity
by \thmref{thmKL16FLC}.
We settled this question affirmatively in \cite{KL16}:
\begin{thm}[{\cite{KL16}}]
\label{thmKL16}
There exists a tiling $f+\Lambda$ at level one,
where $f\in L^1(\R)$ and
$\Lambda \sbt \R$ has bounded density,
but such that $\Lam$
has no periodic structure,
i.e.\ the set $\Lam$
is not of the form \eqref{eqI2.1}.
\end{thm}
The proof was based on the implicit function method
due to Kargaev \cite{Kar82}, and it yields a set
$\Lam$ which is a small perturbation of the integers.
The proof moreover allows one to choose the function $f$
in the Schwartz class. However it
yields a function $f$ satisfying
$\operatorname{supp}(f) = \R$, where
$\operatorname{supp}(f)$ is the closed support of $f$
(the smallest closed set such that $f$ vanishes
a.e.\ on its complement).
In this paper we will prove a stronger version
of Theorem \ref{thmKL16}, which establishes the existence
of non-periodic tilings $f+\Lam$ of bounded density,
such that $f$ has ``sparse'' support.
Precisely, we will show that the
support (which must be unbounded, due to
\thmref{thmLM91}) can be localized
inside any given set $\Om \sbt \R$
which contains arbitrarily long intervals:
\begin{thm}
\label{thmA1}
There is a discrete set $\Lambda \sbt \R$ of bounded density,
but which is not of the form \eqref{eqI2.1}, with the following
property: given any scalar $w$, and any set
$\Om \sbt \R$ which contains arbitrarily long intervals,
one can find a nonzero
$f \in L^1(\mathbb R)$, $\operatorname{supp}(f) \sbt \Om$,
such that $f+\Lambda$ is a tiling at level $w$.
\end{thm}
We note that the set $\Om$ in this theorem may
be chosen very sparse, for example, one may take
$\Om = \bigcup_{j=1}^{\infty} [\tau_j, \tau_j + j]$,
where the numbers $\tau_j$ grow arbitrarily fast.
\thmref{thmA1} implies in particular
the existence of non-periodic tilings
by a function $f$ such that
$\operatorname{supp}(f) \sbt [0, +\infty)$, i.e.\ the
support is \emph{bounded from below}
(while it cannot be
bounded from both above and below,
due to \thmref{thmLM91}).
The proof of \thmref{thmA1} relies on a recent result
due to Kurasov and Sarnak \cite{KS20},
who constructed a new type
of \emph{crystalline measures} on $\R$.
These are pure point
measures with discrete closed support,
whose Fourier transform
is a measure of the same type.
\subsection{Tilings of unbounded density}
\label{secTUB}
It seems that very little attention
has been paid to
tilings which are \emph{not} of bounded density.
In fact, we are not aware of any
example of such a tiling in the literature.
In \cite[Example 7.1]{KL96}
some examples are given
in a more general context,
where the points of the
translation set $\Lam$
are endowed with
nonnegative integer multiplicities
(so that $\Lam$ is actually not a set,
but a multi-set).
We will prove that there exist tilings
of unbounded density in the proper sense.
Let us say that a set $\Lam \sbt \R$ has
\emph{tempered growth} if there is $N$ such that
\begin{equation}
\label{eqA3.16}
\# (\Lam \cap (-r,r)) = O(r^N), \quad r \to +\infty.
\end{equation}
\begin{thm}
\label{thmA3}
There is a set $\Lambda \sbt \R$ which is
not of bounded density but has tempered growth,
with the following
property: given any scalar $w$ one can find
a nonzero function $f$ in the Schwartz class,
such that $f+\Lambda$ is a tiling at level $w$.
\end{thm}
Moreover, in our example the set $\Lam$ is
contained in a small
neighborhood of the integers. The
construction is done using a variant of Kargaev's
implicit function method.
It follows from \cite[Lemma 2.1]{KL96}
that the function $f$ in \thmref{thmA3} must
change sign, i.e.\ $f$ cannot be
chosen nonnegative. This indicates that
cancellations are playing a decisive role in the
tiling. We can even prove a stronger claim:
\begin{thm}
\label{thmA9}
Let $f+\Lam$ be a tiling at some level $w$,
where the set $\Lambda \sbt \R$ is not of bounded density
but has tempered growth, and where
$f$ is a function in the Schwartz class.
Then $f$ must have zero integral.
\end{thm}
It may seem counter-intuitive at first glance that
one can tile $\R$ at a nonzero level $w$
by translates of a Schwartz function $f$ whose integral is zero.
\subsection{Organization of the paper}
The rest of the paper is organized as follows.
In \secref{secF1} some preliminary background
is given, and Fourier analytic conditions for tiling
are discussed.
In \secref{secL1} we prove that
if $f+\Lam$ is a tiling at level zero,
where $f \in L^1(\R)$ is a nonzero function and
$\Lam \sbt \R$ is a nonempty set of bounded density,
then $f$ has zero integral
(\thmref{thmA5}) and
$\Lam$ is relatively dense (\thmref{thmA6}).
In \secref{secSM1} we establish the existence
of non-periodic tilings $f+\Lam$ of bounded density,
such that the function $f$ has ``sparse'' support
(\thmref{thmA1}).
In \secref{secUB1} we construct
tilings $f+\Lam$ of unbounded density,
such that $\Lam$ has tempered growth
and $f$ is in the Schwartz class
(\thmref{thmA3}). We also show
that in any such tiling, the function $f$ must have
zero integral (\thmref{thmA9}).
In the last section, \secref{secOP1},
we pose some open problems.
\section{Preliminaries. Fourier analytic conditions for tiling.}
\label{secF1}
It is well-known that in
the study of translational tilings,
Fourier analysis plays an important role,
see e.g.\ \cite{Kol04}.
If $f \in L^1(\R)$ then its Fourier transform
is defined by
\begin{equation}
\hat{f}(t) = \int_{\R} f(x) \, e^{-2\pi i t x} \, dx.
\end{equation}
If $\alpha$ is a tempered distribution on $\R$,
then $\alpha(\varphi)$ denotes the action
of $\alpha$ on a Schwartz function $\varphi$.
The Fourier transform $\ft\alpha$ is defined by
$\ft{\alpha}(\varphi) = \alpha(\ft{\varphi})$.
If $\alpha$ is a tempered distribution on $\R$, and if
$\varphi$ is a Schwartz function, then
the convolution $\alpha \ast \varphi$ is
a tempered distribution whose Fourier transform
is $\ft{\alpha} \cdot \ft{\varphi}$.
We use $\operatorname{supp}(\alpha)$ to denote the closed support
of a tempered distribution $\alpha$.
If $f \in L^1(\R)$ then $\operatorname{supp}(f)$ is the
smallest closed set such that $f$ vanishes
a.e.\ on its complement. This set coincides
with the support of $f$ in the distributional sense.
For more details on distribution theory we refer
the reader to \cite{Rud91}.
\subsection{Necessary conditions for tiling}
For a discrete set $\Lambda \sbt \mathbb{R}$
we define the measure
\begin{equation}
\label{eqI5.4}
\delta_\Lambda:=\sum_{\lambda\in\Lambda}\delta_\lambda.
\end{equation}
The tiling condition
\eqref{eqI1.1} can then be restated as
\begin{equation}
\label{P2.2.11}
f \ast \delta_\Lam = w \quad \text{a.e.}
\end{equation}
If $\Lambda$ has bounded density
then the measure $\delta_\Lambda$ is a tempered
distribution on $\R$. So, at least formally, taking
the Fourier transform of both
sides of \eqref{P2.2.11} yields
\begin{equation}
\label{P2.2.12}
\ft{f} \cdot \ft{\delta}_\Lam = w \cdot \delta_0.
\end{equation}
If $f$ is a Schwartz function,
then condition \eqref{P2.2.12} makes sense
and it is equivalent to \eqref{P2.2.11}.
To the contrary, for an arbitrary function $f \in L^1(\R)$
(not assumed to be in the Schwartz class)
the product
$\ft{f} \cdot \ft{\delta}_\Lam$
is not well-defined,
and the condition \eqref{P2.2.12}
can only be interpreted as a heuristic principle.
The following result, inspired by
the heuristic condition \eqref{P2.2.12},
provides a necessary condition for tiling
which holds for any $f \in L^1(\R)$.
\begin{thm}[{\cite{KL16}}]
\label{thm4.5.1}
Let $f\in L^1(\mathbb R)$, and $\Lambda\sbt\R$
be a discrete set of bounded density. If $f+\Lambda$ is a tiling
at some level $w$, then
\begin{equation}
\label{eqI3.1}
\operatorname{supp}(\hat\delta_\Lambda) \setminus \{0\}
\subset \{\hat f=0\}.
\end{equation}
\end{thm}
The proof of this result is based on Wiener's tauberian theorem.
In the earlier works \cite{KL96}, \cite{Kol00a}, \cite{Kol00b}
the result was proved under various extra assumptions.
\subsection{Sufficient conditions for tiling}
One may ask whether
\thmref{thm4.5.1} admits a converse, i.e.\ if the condition
\eqref{eqI3.1} implies that $f+\Lambda$ is a tiling
at some level $w$.
If the distribution $\ft{\delta}_\Lam$ happens to
be a \emph{measure} on $\R$, then
the product $\ft{f} \cdot \ft{\delta}_\Lam$
is a well-defined measure and the
condition \eqref{P2.2.12} makes sense.
In this case, the two conditions \eqref{P2.2.12} and \eqref{eqI3.1}
are equivalent, and can be shown to imply that $f+\Lam$ is a tiling:
\begin{thm}\label{thm4.5.2}
Let $\Lambda \sbt \R$ have bounded density and suppose that
$\ft{\delta}_\Lambda$ is a locally finite measure.
If $f\in L^1(\R)$ satisfies \eqref{eqI3.1} then
$f+\Lambda$ is a tiling at level
$w = \ft{\delta}_\Lam(\{0\}) \cdot \int f$.
\end{thm}
A proof of this result is given below.
In \cite{KL96} the result was proved
under the extra assumption that $\ft{f}$ is
a smooth function.
As an example, \thmref{thm4.5.2}
applies if the set $\Lambda$ has a periodic structure,
i.e.\ if it has the form \eqref{eqI2.1}. In this case
$\ft{\delta}_\Lambda$ is a (pure point)
measure by Poisson's summation formula,
and the condition \eqref{eqI3.1} is
both necessary and sufficient for
a function $f\in L^1(\R)$ to tile
at some level $w$ with the translation set $\Lam$.
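A classical concrete instance, stated here as an illustration: for $\Lam = \Z$, Poisson summation gives $\ft{\delta}_\Z = \delta_\Z$, so condition \eqref{eqI3.1} asks that $\ft f$ vanish on $\Z \setminus \{0\}$. The Fej\'er-type function $f(x) = (\sin \pi x / \pi x)^2$ has $\ft f(t) = \max(1-|t|, 0)$ and $\int f = 1$, so $f + \Z$ tiles at level $1$. A truncated numerical check:

```python
import math

def sinc2(x):
    """f(x) = (sin(pi x) / (pi x))^2, with f(0) = 1."""
    if abs(x) < 1e-12:
        return 1.0
    s = math.sin(math.pi * x) / (math.pi * x)
    return s * s

# Partial sums of sum_n f(x - n); the tail beyond |n| = N is O(1/N).
N = 4000
for x in (0.123, 0.5, 0.789):
    s = sum(sinc2(x - n) for n in range(-N, N + 1))
    assert abs(s - 1.0) < 1e-3
```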
\thmref{thm4.5.2} leaves open, though, the question as to
whether the condition \eqref{eqI3.1} is sufficient
for tiling in the general case, i.e.\ for an arbitrary
set $\Lam \sbt \R$
of bounded density. This question was addressed
recently in \cite{Lev20} where it
was answered in the negative:
\begin{thm}[{\cite{Lev20}}]
\label{thmI8.10}
There is a set $\Lambda \subset \R$ of bounded density
and a function $f \in L^1(\mathbb{R})$,
such that \eqref{eqI3.1} is satisfied, but $f + \Lam$ is not a tiling
at any level.
\end{thm}
The proof of this result is based on the relation of the problem to
Malliavin's \emph{non-spectral synthesis} example.
\thmref{thmI8.10} implies that a converse
to \thmref{thm4.5.1} can only be valid under certain
extra assumptions. One example of such a converse
is given by \thmref{thm4.5.2}. Another example
is the following result:
\begin{thm}\label{thm4.5.8}
Let $f\in L^1(\mathbb R)$, and let
$\Lambda \sbt \R$ be a discrete set of bounded density.
If the set
$\operatorname{supp}(\hat\delta_\Lambda) \setminus \{0\}$ is closed
and is disjoint from $\operatorname{supp}(\ft{f})$,
then $f+\Lambda$ is a tiling at some level $w$.
\end{thm}
We will not use this result in the paper
and we do not include its proof. For other results of similar type
see \cite[Theorem 3]{Kol00a}, \cite[Theorem 5]{Kol00b}.
\subsection{Translation-bounded measures}
A measure $\mu$ on $\R$ satisfying the condition
\begin{equation}
\sup_{x \in \R} |\mu|([x,x+1)) < \infty
\end{equation}
is said to be \emph{translation-bounded}.
If a measure $\mu$ is translation-bounded,
then it is a tempered distribution.
If $\mu$ is a translation-bounded measure on $\R$, and if $\nu$
is a finite measure on $\R$, then the convolution $\mu \ast \nu$
is a translation-bounded measure.
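As an illustrative toy check of the last closure property (our own example, not part of the text): take $\mu = \delta_\Z$, for which $\sup_x |\mu|([x,x+1)) = 1$, and the finite measure $\nu = \tfrac12(\delta_0 + \delta_{0.3})$; then every half-open unit window should carry $\mu \ast \nu$-mass at most $\|\nu\| \cdot 1 = 1$.

```python
# Toy check: a translation-bounded mu convolved with a finite nu is
# translation-bounded, with sup_x |mu * nu|([x, x+1)) <= ||nu|| * M,
# where M = sup_x |mu|([x, x+1)). Here mu = delta_Z (M = 1) and
# nu = 0.5*delta_0 + 0.5*delta_{0.3} (||nu|| = 1); both are made up.

atoms = []  # atoms of mu * nu, truncated to a large finite range
for n in range(-100, 101):
    atoms.append((n + 0.0, 0.5))
    atoms.append((n + 0.3, 0.5))

def window_mass(x):
    # total mass of the truncated mu * nu in the window [x, x + 1)
    return sum(w for (p, w) in atoms if x <= p < x + 1)

# Scan windows well inside the truncated support.
for i in range(-500, 500):
    assert window_mass(i * 0.1) <= 1.0 + 1e-12
```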
\begin{thm}
\label{thmC2}
Let $\mu$ be a translation-bounded measure on $\R$,
and suppose that $\ft{\mu}$ is a locally finite measure.
If $\nu$ is a finite measure on $\R$, then the Fourier
transform of the convolution $\mu \ast \nu$
is the measure $\ft{\mu} \cdot \ft{\nu}$.
\end{thm}
In particular, let $\Lambda \sbt \R$ be a discrete set
of bounded density, and let $f \in L^1(\R)$.
Then the measure $\delta_\Lam$ is
translation-bounded, and the convolution
$f \ast \del_\Lam$ is the sum of the
series $\sum_{\lambda\in\Lambda}f(x-\lambda)$
which converges absolutely a.e., see
\cite[Lemma 2.2]{KL96}.
If the distribution
$\ft{\delta}_\Lambda$ is assumed to be a locally finite measure,
then using \thmref{thmC2} we obtain that
the three conditions
\eqref{P2.2.11}, \eqref{P2.2.12} and \eqref{eqI3.1} are
equivalent. We thus
see that \thmref{thm4.5.2} is a consequence
of \thmref{thmC2}.
\thmref{thmC2} should be known to experts but we could not
find a proof in the literature,
so we include one below for completeness.
\begin{proof}[Proof of \thmref{thmC2}]
We will use $\sig$ to denote the measure $\ft{\mu}$.
The assertion of the theorem means that
if $\psi$ is a Schwartz function with compact support then
\begin{equation}
\label{eqP1.10}
\int_{\R} \ft{\psi(x)} \, d(\mu \ast \nu)(x) = \int_{\R} \psi(t) \, \ft{\nu}(t) \, d\sig(t).
\end{equation}
Let $\chi$ be a Schwartz function whose Fourier transform
$\ft{\chi}$ is non-negative, has compact support,
$\int \ft{\chi}(t) dt =1$, and for each $\eps > 0$ let
$\chi_\eps(x) := \chi( \eps x)$. Let
$h_\eps := (\psi \cdot \ft{\nu}) \ast \ft{\chi}_\eps$,
then $h_\eps$ is an infinitely smooth function with compact support.
As $\eps \to 0$, the function $h_\eps$ remains supported on
a certain interval $I=[a,b]$ that does not depend on $\eps$, and
$h_\eps$ converges to
$\psi \cdot \ft{\nu}$ uniformly on $I$. Hence
\begin{equation}
\label{eqP1.3}
\lim_{\eps \to 0} \ft{\mu}(h_\eps) =
\lim_{\eps \to 0}
\int_{\R} h_\eps(t) d\sig(t) =
\int_{\R} \psi(t) \, \ft{\nu}(t) \, d\sig(t).
\end{equation}
The function $\psi$ is the Fourier transform of some function
$\varphi$ in the Schwartz class.
Let $g_\eps := (\varphi \ast \nu) \cdot \chi_\eps$,
then $g_\eps$ is a smooth function in $L^1(\R)$ and
we have $\ft{g}_\eps = h_\eps$.
Since $h_\eps$ belongs to the Schwartz
space, the same is true for $g_\eps$, and it follows that
\begin{equation}
\label{eqP1.7}
\ft{\mu}(h_\eps) = \mu(\ft{h}_\eps)
= \int_{\R} {g_\eps}(-x) \, d\mu(x)
= \int_{\R} (\varphi \ast \nu)(-x) \, \chi_\eps(-x)\, d\mu(x).
\end{equation}
We observe that $|\chi_\eps(-x)| \leq 1$ and
$\chi_\eps(-x) \to 1$ pointwise as $\eps \to 0$.
We now wish to apply the dominated convergence
theorem to \eqref{eqP1.7}. Using Fubini's theorem
we obtain
\begin{equation}
\label{eqP1.23}
\int_{\R} (|\varphi| \ast |\nu|)(-x) \, |d\mu|(x)
= \int_{\R} |\varphi(-x)| \, d(|\mu| \ast |\nu|)(x).
\end{equation}
The function $\varphi$ has fast decay, being in the
Schwartz class, while
$|\mu| \ast |\nu|$ is a translation-bounded measure,
hence the integral in \eqref{eqP1.23} converges.
We may therefore apply to \eqref{eqP1.7}
the dominated convergence theorem, which yields
\begin{equation}
\label{eqP1.8}
\lim_{\eps \to 0} \ft{\mu}(h_\eps)
= \int_{\R} (\varphi \ast \nu)(-x) \, d\mu(x)
= \int_{\R} \varphi (-x) \, d(\mu \ast \nu)(x).
\end{equation}
But $\varphi(-x) = \ft{\psi}(x)$, so we see
that \eqref{eqP1.10} follows from
\eqref{eqP1.3} and \eqref{eqP1.8} as needed.
\end{proof}
\section{Tiling at level zero}
\label{secL1}
In this section we prove that
if $f+\Lam$ is a tiling at level zero,
where $f \in L^1(\R)$ is nonzero and
$\Lam \sbt \R$ is nonempty and has bounded density,
then $f$ has zero integral
(\thmref{thmA5}) and
$\Lam$ is a relatively dense set (\thmref{thmA6}).
\subsection{The function \texorpdfstring{$f$}{f} has zero integral}
\label{subsecZI}
We begin with \thmref{thmA5}, for which we
give two proofs. The first proof
is Fourier analytic, and is based on the following result:
\begin{thm}[{\cite{KL16}}]
\label{thm5.1}
Let $f\in L^1(\mathbb R)$ have nonzero integral. If $\Lambda
\sbt \R$
is a set of bounded density and $f+\Lambda$ is a tiling
at some level $w$, then there is $a>0$ such that
$\hat\delta_\Lambda=c\cdot\delta_0$
in $(-a,a)$, where $c=w\cdot(\int f)^{-1}$.
\end{thm}
This result follows from the proof of \cite[Theorem 4.1]{KL16}
and it is stated in that paper as a remark on p.\ 4598.
The proof of \thmref{thm5.1} is based on Wiener's tauberian theorem.
\begin{proof}[First proof of \thmref{thmA5}]
Let $f+\Lambda$ be a tiling
at level $w=0$, and suppose to the contrary that $f$ has nonzero integral.
Then by \thmref{thm5.1} there is $a>0$ such that
$\hat\delta_\Lambda=c\cdot\delta_0$ in $(-a,a)$,
where $c=w\cdot(\int f)^{-1}$. Since $w=0$
this means that $\ft{\delta}_\Lam$
vanishes in a neighborhood $(-a,a)$ of the origin.
We will show that this cannot happen.
Indeed, choose a Schwartz function
$\varphi$ such that $\operatorname{supp}(\varphi)\subset (-a,a)$
and $\hat\varphi>0$. Then we have
$\ft{\del}_\Lam(\varphi) = 0$ since
$\ft{\delta}_\Lam$ vanishes in a neighborhood
of $\operatorname{supp}(\varphi)$. On the other hand,
\[
\ft{\del}_\Lam(\varphi) =
\del_\Lam(\ft{\varphi}) =
\sum_{\lam\in\Lam} \ft{\varphi}(\lam) > 0,
\]
since $\Lam$ is nonempty and
$\hat\varphi$ is everywhere positive.
We thus arrive at a contradiction.
\end{proof}
As a remark, we record here the following observation:
\begin{lem}
\label{lem5.2}
Let $\Lambda \sbt \R$ be a nonempty set of
bounded density, and suppose
that there is $a>0$ such that
$\hat\delta_\Lambda=c\cdot\delta_0$
in $(-a,a)$. Then $c>0$, and $\Lam$ has a uniform
density given by $D(\Lam) = c$.
\end{lem}
\begin{proof}
Let $\varphi$ be a Schwartz function
such that $\varphi > 0$, $\int \varphi = 1$ and
$\operatorname{supp}(\ft{\varphi})\subset (-a,a)$.
Then we have $\ft{\varphi} \cdot \ft{\del}_\Lam = c \cdot \del_0$
and hence $\varphi + \Lam$ is a tiling at level $c$.
Since $\varphi$ is a positive function and
$\Lambda$ is nonempty,
the tiling level $c$ must also be positive.
Finally, we obtain from \thmref{thmKL1} that
the set $\Lam$ has a uniform density given by $D(\Lam) = c$.
\end{proof}
Next we give our
second proof of \thmref{thmA5}. This proof, which
does not involve Fourier analysis, is based on the
following result due to Ruzsa and Sz\'{e}kely \cite{RS83}.
\begin{thm}[{\cite{RS83}}]
\label{thmRS83}
Let $f \in L^1(\R)$ be real-valued with $\int f > 0$.
Then there is a nonnegative (nonzero) $g \in L^1(\R)$
such that the convolution $f \ast g$ is nonnegative a.e.
\end{thm}
In fact this is proved in \cite{RS83} not only
for functions on $\R$, but for functions on a wider class of locally
compact abelian groups,
which in particular includes $\R^d$ for every $d \geq 1$.
\begin{proof}[Second proof of \thmref{thmA5}]
Let $f+\Lambda$ be a tiling
at level zero, and suppose to the contrary that
$\int f$ is nonzero. By replacing
$f(x)$ with the function
$\Re\big\{(\int f)^{-1} f(x)\big\}$,
we may assume that $f$
is real-valued and that $\int f >0$.
So we can use \thmref{thmRS83} to find
a nonnegative, nonzero $g \in L^1(\R)$
such that $f \ast g$ is nonnegative a.e.
Notice that the function $f \ast g$ is also nonzero, since
we have $\int (f \ast g) = (\int f)(\int g) > 0$.
On the other hand we have
$f \ast \del_\Lam = 0$ a.e., and the measure $\delta_\Lam$ is
translation-bounded since $\Lam$ has
bounded density. Using
Fubini's theorem this implies that
\[
(f \ast g) \ast \del_\Lam =
g \ast (f \ast \del_\Lam) = g \ast 0 = 0 \quad \text{a.e.,}
\]
hence $(f \ast g) + \Lam$
is a tiling at level zero. But this is a contradiction,
since $f \ast g$ is a nonnegative, nonzero
function in $L^1(\R)$, and $\Lam$ is a
nonempty set.
\end{proof}
\subsection{The set \texorpdfstring{$\Lambda$}{Lambda} is relatively dense}
We now move on to the proof of
\thmref{thmA6}. First we note that this
theorem \emph{cannot} be deduced based on
\lemref{lem5.2} above, since
there exist tilings $f+\Lam$
at level zero such that
$\ft{\del}_\Lam$ is \emph{not}
a scalar multiple of $\del_0$ in any
neighborhood of the origin; see
\cite[Section 5]{Lev20} for a
construction of such tilings.
We fix the following terminology: given
a sequence of measures $\{\mu_n\}$ on $\R$,
we say that the sequence is
\emph{uniformly translation-bounded} if there is
a constant $M>0$ not depending on $n$, such that
$\sup_{x \in \R} |\mu_n|([x,x+1)) \leq M$
for every $n$.
If $\{\mu_n\}$ is a uniformly translation-bounded sequence of measures
on $\R$, then we say that $\mu_n$ \emph{converges vaguely} to a
measure $\mu$ if for
every continuous, compactly supported function $\varphi$ we have
$\int \varphi \, d\mu_n \to \int \varphi \, d\mu$
(see, for instance, \cite[Section 7.3]{Fol99}).
In this case, the vague limit $\mu$
must also be a translation-bounded measure.
For a uniformly translation-bounded
sequence of measures $\{\mu_n\}$ to converge vaguely,
it is necessary and sufficient that $\{\mu_n\}$ converge
in the space of tempered distributions.
From any uniformly translation-bounded sequence of
measures $\{\mu_n\}$ one can extract
a vaguely convergent subsequence $\{\mu_{n_j}\}$.
\begin{proof}[Proof of \thmref{thmA6}]
Let $f+\Lam$ be a tiling at level zero, and
suppose to the contrary that $\Lam$ is not relatively dense.
Then for each $r>0$ one can find an open interval $I(r)$
of length $r$ which is disjoint from $\Lam$.
By translating the interval $I(r)$ we may assume
that one of its endpoints lies in $\Lam$
(since the set $\Lam$ is nonempty).
Then either the right endpoint of $I(r)$ lies in $\Lam$
for arbitrarily large values of $r$, or the left
endpoint does.
We will assume that the former is the case
(the latter case can be treated similarly).
It follows that there exist two sequences $r_j \to +\infty$ and
$\lam_j \in \Lam$, such that for each $j$ the interval
$I_j := (\lam_j - r_j, \lam_j)$ does
not intersect $\Lam$.
Define $\Lam_j := \Lam - \lam_j$. Since $\Lam$ has bounded density,
the measures $\delta_{\Lam_j}$ are uniformly translation-bounded.
By passing to a subsequence if necessary, we may assume
that $\delta_{\Lam_j}$ converges vaguely to some
(also translation-bounded) measure $\mu$ on $\R$.
The vague limit $\mu$ is supported on $[0, +\infty)$, since
$\delta_{\Lam_j}$ vanishes on
$(-r_j, 0)$. Moreover, each measure $\delta_{\Lam_j}$
is positive and has a unit mass at the origin, which
implies that $\mu$ is also positive
and has an atom at the origin, of mass at least one.
In particular the measure $\mu$ is nonzero.
On the other hand, since $f+\Lam$ is a tiling (at level zero),
then by \thmref{thm4.5.1} the support of
$\ft{\delta}_\Lambda$ is contained in $\{\hat f=0\} \cup \{0\}$.
Since $\ft{f}$ is a nonzero continuous function, it follows that
there is an open interval $(a,b)$ on which $\ft{\delta}_\Lam$ vanishes.
In turn this implies that
$\ft{\delta}_{\Lam_j}$ also vanishes on $(a,b)$
for each $j$. But as the sequence $\ft{\delta}_{\Lam_j}$ converges to
$\ft{\mu}$ in the distributional sense, we obtain
that $\ft{\mu}$ vanishes on $(a,b)$ as well.
We conclude that $\mu$ is a nonzero, translation-bounded, positive
measure on $\R$, supported on $[0, +\infty)$ and
whose Fourier transform
$\ft{\mu}$ vanishes on some
open interval $(a,b)$. But
this contradicts classical results
on boundary values of functions analytic
in the upper half-plane. To be more concrete:
choose two Schwartz functions $\varphi, \psi$
such that $\varphi > 0$ and
$\operatorname{supp}(\ft{\varphi}) \sbt (-\del,\del)$, while $\psi$ is nonnegative,
supported on $[0,+\infty)$, and satisfies $\int \psi = 1$.
Then
$h := (\mu \cdot \varphi) \ast \psi$
is a nonzero function belonging to $L^1(\R)$, $\operatorname{supp}(h) \sbt
[0,+\infty)$, and the Fourier transform
$\ft{h} = (\ft{\mu} \ast \ft{\varphi}) \cdot \ft{\psi}$
is also in $L^1(\R)$ and $\ft{h}$ vanishes on some nonempty
open interval provided that $\delta$ is small enough.
This contradicts the uniqueness theorem for
functions in the Hardy space $H^1$, see
e.g.\ \cite[Chapter II]{Gar07}.
\end{proof}
\section{Non-periodic tilings by functions with sparse support}
\label{secSM1}
In this section we establish the existence
of non-periodic tilings $f+\Lam$ of bounded density,
such that the function $f$ has ``sparse'' support
(\thmref{thmA1}).
Recall that
the first example of a tiling $f+\Lam$ such that
$f\in L^1(\R)$, $\Lambda \sbt \R$ has bounded density,
but the set $\Lam$ does \emph{not} have a periodic structure,
was given in \cite{KL16}.
The proof was based on the implicit function method
due to Kargaev \cite{Kar82}, and it yields a set
$\Lam$ which is a small perturbation of the integers.
However, in this example the function $f$ is
\emph{analytic}, which implies that $\operatorname{supp}(f) = \R$.
In order to prove \thmref{thmA1} we will use a different approach,
which is based on two main ingredients. The first one is a
recent construction from \cite{KS20}
of a new type of \emph{crystalline measures} on $\R$.
The second main ingredient is a result concerning
interpolation of discrete functions by continuous ones with
a sparse spectrum.
\subsection{Crystalline measures}
A tempered distribution $\mu$ on $\R$ satisfying
\begin{equation}
\label{eqI1.10}
\mu = \sum_{\lam\in \Lam} a_\lam \del_\lam,
\quad
\ft{\mu} = \sum_{s \in S} b_s \del_s
\end{equation}
(i.e.\ both $\mu$ and $\ft{\mu}$ are pure point measures),
where $\Lam$ and $S$ are discrete, closed sets in $\R$,
is called a \emph{crystalline measure} \cite{Mey16}.
This notion plays a role in the
mathematical theory of quasicrystals,
i.e.\ atomic arrangements having
a discrete diffraction pattern.
The classical example of a crystalline measure
is $\mu = \delta_\Z$, which satisfies
$\ft{\mu} = \mu$ by Poisson's summation formula.
More generally, the measure $\delta_\Lam$ is crystalline
for any set $\Lam$ of the form \eqref{eqI2.1}, i.e.\ any set
$\Lam \sbt \R$ having a periodic structure.
There exist also examples of crystalline measures
on $\R$, whose support is \emph{not} contained in any set
$\Lam$ with a periodic structure. Constructions
of such examples, using different approaches, were
given in the papers \cite{LO16}, \cite{Kol16},
\cite{Mey16}, \cite{Mey17}, \cite{RV19}.
Recently, new progress was achieved
by Kurasov and Sarnak \cite{KS20}
who constructed examples of crystalline measures
on $\R$ enjoying some remarkable
properties, answering several questions
left open by the above mentioned papers.
To state the result, we recall the
terminology introduced in \secref{secTUB} above:
\begin{definition}
\label{defTG}
We say that a set $S \sbt \R$ has
\emph{tempered growth} if there is $N$ such that
\begin{equation}
\label{eqI2.5}
\# (S \cap (-r,r)) = O(r^N), \quad r \to +\infty.
\end{equation}
\end{definition}
We note that if a set $S \sbt \R$ has tempered growth then
the measure $\del_S$ is a tempered distribution,
and the converse is also true.
\begin{thm}[{\cite{KS20}}]
\label{thmI1}
There exist
crystalline measures $\mu$
of the form \eqref{eqI1.10}
with the following properties:
\begin{enumerate-num}
\item \label{thmI1:i}
$\Lambda$ is a set of bounded density;
\item \label{thmI1:ii}
$a_\lam=1$ for all $\lam\in\Lam$,
i.e.\ we have $\mu = \del_\Lam$;
\item \label{thmI1:iii}
$\Lam$ does not have a periodic structure,
i.e.\ it is not of the form \eqref{eqI2.1};
\item \label{thmI1:iv}
$S$ is a set of tempered growth.
\end{enumerate-num}
\end{thm}
Actually, the crystalline measures
constructed in \cite{KS20} have even stronger
properties than stated above -- we only mentioned
the properties that will be used in this paper.
In the more recent works
\cite{Mey20}, \cite{OU20}, alternative approaches to
the construction of crystalline measures
with these properties can be found.
\subsection{Interpolation by functions with a sparse spectrum}
We now turn to the second
main ingredient in our proof of \thmref{thmA1}.
It can be described in the context of the classical
uncertainty principle, which states that a nonzero function
$f$ and its Fourier transform $\ft{f}$ cannot be ``too small''
at the same time. This general principle has
many concrete versions, see \cite{HJ94}.
In particular, using complex analysis
one can show that if $f \in L^1(\R)$
has compact support, and if $\ft{f}$ vanishes
on a set $S \sbt \R$ satisfying
\begin{equation}
\label{eqI1.6}
\limsup_{r \to +\infty} \frac{\# (S \cap (-r,r))}{r} = + \infty,
\end{equation}
then $f = 0$ a.e. This fact was used in
\cite{LM91}, \cite{KL96} in order
to prove that if a nonzero function
$f \in L^1(\R)$ has compact support,
and if $f$ tiles at some level $w$
by a translation set $\Lambda$
of bounded density, then
$\Lambda$ must have a periodic structure
(\thmref{thmLM91}).
On the other hand, suppose that $\Om \sbt \R$ is a set
which contains arbitrarily long intervals
(and so $\Om$ must be unbounded,
but can be very sparse). Then given any discrete
set $S \sbt \R$ of tempered growth,
there exists a nonzero function $f \in L^1(\R)$,
$\operatorname{supp}(f) \sbt \Om$, such that
$\ft{f}$ vanishes on $S$. This is a consequence
of the following result:
\begin{thm}
\label{thmB2}
Let $\Om \sbt \R$ be
a set which contains arbitrarily long intervals,
and let $S \sbt \R$ be a set
of tempered growth. Then given any values
$\{c(s)\} \in \ell^1(S)$, one
can find a smooth function
$f \in L^1(\mathbb R)$,
$\operatorname{supp}(f) \sbt \Om$,
such that $\ft{f}(s)=c(s)$ for all $s \in S$.
\end{thm}
If we call $\operatorname{supp}(f)$ the ``spectrum'' of the
function $\ft{f}$, then the result means that
every discrete function in $\ell^1(S)$
can be interpolated
by a continuous function (the Fourier transform
of an $L^1$ function) with spectrum
in $\Om$. We refer
the reader to \cite{OU16} where the problem of
interpolation by functions with a given spectrum
is discussed in detail.
The approach that we use to prove \thmref{thmB2} is
inspired by \cite[Section 3]{OU09}.
\subsection{Proof of \thmref{thmB2}}
We begin with a simple lemma.
\begin{lem}
\label{lemC1}
Let $S \sbt \R$ be a discrete set of tempered growth, and
let $N$ be a sufficiently large number such that
\eqref{eqI2.5} holds. Then for any $t \in \R \setminus S$ and any
$p>N$ we have
\begin{equation}
\label{eqC1.1}
\sum_{s \in S} |s - t|^{-p} < +\infty.
\end{equation}
\end{lem}
\begin{proof}
The condition \eqref{eqI2.5} remains valid
if we replace $S$ with $S - t$, so we may assume
that $t=0$. The function $n(r) := \# (S \cap (-r,r))$
is then a step function vanishing near the origin. For each $R>0$
which is not a jump discontinuity point of $n(r)$, we have
\begin{equation}
\label{eqC1.2}
\sum_{|s| \leq R} |s|^{-p} = \int_{0}^{R} \frac{dn(r)}{r^{p}}
= \frac{n(R)}{R^p} + p \int_{0}^{R} \frac{n(r)}{r^{p+1}} dr
\end{equation}
which follows from integration by parts. But as
$n(r)=O(r^N)$ and $p>N$, the right-hand side of
\eqref{eqC1.2} remains bounded
as $R \to \infty$. The condition
\eqref{eqC1.1} is thus established.
\end{proof}
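For a numerical illustration of \lemref{lemC1} (ours, outside the formal development), take $S = \Z$, which has tempered growth with $N = 1$, and $t = \tfrac12$, $p = 2$. The sum in \eqref{eqC1.1} then equals $\sum_{n \in \Z} |n - \tfrac12|^{-2} = \pi^2$, by the classical expansion $\sum_{n \in \Z} (x-n)^{-2} = \pi^2/\sin^2 \pi x$ evaluated at $x = \tfrac12$.

```python
import math

# Partial sums of sum_{n in Z} |n - 1/2|^(-2); the tail beyond |n| = N
# is O(1/N), and the full sum equals pi^2 since
# sum_{n in Z} (x - n)^(-2) = pi^2 / sin^2(pi x) at x = 1/2.

def partial_sum(N, t=0.5, p=2):
    return sum(abs(n - t) ** (-p) for n in range(-N, N + 1))

assert abs(partial_sum(10000) - math.pi ** 2) < 1e-3
```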
\begin{proof}[Proof of \thmref{thmB2}]
We suppose that $\Om \sbt \R$ is a set which contains arbitrarily long intervals,
and that $S \sbt \R$ is a discrete set of tempered growth.
We also choose and fix some enumeration $\{s_j\}$ of the set $S$.
Let $\Phi$ be an infinitely smooth function on $\R$
supported on the interval $[-1,1]$, and such that
$\int \Phi = 1$. For each $r>0$ we denote
$\Phi_r(x) := r^{-1} \Phi(r^{-1}x)$. Define
\[
f_j(x) := \Phi_{r_j} (x - \tau_j) \, e^{2 \pi i s_j x}
\]
where the numbers $r_j > 0$ and $\tau_j \in \R$ will
be determined later on. Then we have
\[
\ft{f}_j(t) = \ft{\Phi} (r_j(t-s_j)) \, e^{-2 \pi i \tau_j (t-s_j)}.
\]
In particular, $\ft{f}_j(s_j) = \ft{\Phi} (0) = 1$.
Let $N$ be a sufficiently large number such that
\eqref{eqI2.5} holds, and let $p > N$. Since
$\ft{\Phi}$ is a Schwartz function, there is a constant
$C=C(\Phi,p)>0$ such that
$|\ft{\Phi}(t)| \leq C |t|^{-p}$ for every nonzero $t$.
For $j$ fixed, we have
\begin{equation}
\label{eqC1.4}
\sum_{k \neq j} |\ft{f}_j(s_k)| =
\sum_{k \neq j} |\ft{\Phi}(r_j(s_k-s_j))| \leq
C r_j^{-p} \sum_{k \neq j} |s_k-s_j|^{-p}.
\end{equation}
If we use \lemref{lemC1} with the set $\{s_k : k \neq j\}$
and $t = s_j$, the lemma yields that the sum on the right-hand
side of \eqref{eqC1.4} converges.
We thus see that given any $\eps > 0$, we can choose
$r_j = r_j(S,\Phi,p,\eps) >0$ large enough such that
the right-hand side of \eqref{eqC1.4} does not exceed $\eps$.
It follows that we can choose the numbers $r_j$ in
such a way that, no matter how the numbers $\tau_j$
are chosen, we have
\begin{equation}
\label{eqC1.6}
M := \sup_{j}
\sum_{k \neq j} |\ft{f}_j(s_k)| \leq \eps.
\end{equation}
To any sequence $\mathbf{b} = \{b_j\}$ belonging to
$\ell^1$, we associate a corresponding sequence
$\mathbf{c} = \{c_k\}$ defined by
\[
c_k = \sum_{j} \ft{f}_j(s_k) b_j =
b_k + \sum_{j \neq k} \ft{f}_j(s_k) b_j.
\]
It follows from
\eqref{eqC1.6} that the mapping $\mathbf{b} \mapsto \mathbf{c}$
defines a bounded linear operator $A: \ell^1 \to \ell^1$ such that
$\|A - I\| = M \leq \eps$, where $I$ is the identity operator.
If we choose $\eps < 1$ then this implies
that $A$ is invertible in $\ell^1$.
Now suppose that we are given a
sequence $\mathbf{c} = \{c_k\} \in \ell^1$.
Let $\mathbf{b} = \{b_j\} \in \ell^1$ be the solution
of the equation $A \mathbf{b} = \mathbf{c}$, and define
\begin{equation}
\label{eqC1.8}
f(x) := \sum_{j} b_j f_j(x).
\end{equation}
We observe that
\[
\|f_j\|_{L^1(\R)} = \|\Phi\|_{L^1(\R)}, \quad \sum |b_j| < \infty,
\]
which implies that the series \eqref{eqC1.8} converges
in $L^1(\R)$. It follows that
\begin{equation}
\label{eqC1.9}
\ft{f}(t) = \sum_{j} b_j \ft{f}_j(t),
\end{equation}
where the series \eqref{eqC1.9} converges
uniformly on $\R$. In particular we have
\begin{equation}
\label{eqC1.10}
\ft{f}(s_k) = \sum_{j} b_j \ft{f}_j(s_k) = c_k
\end{equation}
for each $k$.
Finally we observe that $f_j$ is supported on
the interval $I_j := [\tau_j - r_j, \tau_j + r_j]$.
Since the $r_j$'s were chosen in a way that
does not depend on the values of the $\tau_j$'s,
we can use
the assumption that
$\Om$ contains arbitrarily long intervals
in order to choose each $\tau_j$ such that
the interval $I_j$ lies in $\Om$.
Moreover, by choosing these intervals so
that $\operatorname{dist}(I_j, I_k) \geq 1$ for $j \neq k$,
we ensure that
$\operatorname{supp}(f) \sbt \Om$ and that $f$ is
infinitely smooth.
Thus $f$ has all the required properties,
and \thmref{thmB2} is proved.
\end{proof}
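The key functional-analytic step in the proof, that $\|A - I\| \leq \eps < 1$ forces $A$ to be invertible on $\ell^1$ via the Neumann series, can be illustrated on a finite truncation. The matrix $E$ below is an arbitrary made-up perturbation, not the actual matrix $(\ft{f}_j(s_k))$; recall that the $\ell^1 \to \ell^1$ operator norm of a matrix is its maximal absolute column sum.

```python
# Neumann-series inversion of A = I + E on a finite-dimensional l^1
# space. The l^1 operator norm of a matrix is its max absolute column
# sum; E is a made-up perturbation with norm 0.3 < 1.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

n = 4
E = [[0.1 if i != j else 0.0 for j in range(n)] for i in range(n)]
A = [[(1.0 if i == j else 0.0) + E[i][j] for j in range(n)] for i in range(n)]

# Since ||A - I|| = 0.3 < 1, the series b = sum_{m>=0} (I - A)^m c
# converges geometrically and solves A b = c.
c = [1.0, -2.0, 0.5, 0.0]
b = c[:]
term = c[:]
for _ in range(200):
    term = [-x for x in mat_vec(E, term)]   # (I - A) term = -E term
    b = [bi + ti for bi, ti in zip(b, term)]

residual = max(abs(x - y) for x, y in zip(mat_vec(A, b), c))
assert residual < 1e-10
```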
\subsection{Proof of \thmref{thmA1}}
Now we can finish the proof of
\thmref{thmA1} based on the two results
above, namely, \thmref{thmI1} and \thmref{thmB2}.
Let $\mu = \del_\Lam$
be the crystalline measure given
by \thmref{thmI1}, that is, $\Lambda \sbt \R$
is a set of bounded density but not of
a periodic structure, such that
$\ft{\del}_\Lam$ is a pure point measure,
$\ft{\del}_\Lam = \sum_{s \in S} b_s \del_s$,
where $S \sbt \R$
is a set of tempered growth.
Since the set $S$ is discrete and closed,
it follows that there is $a>0$ such that we have
$\hat\delta_\Lambda=c\cdot\delta_0$
in $(-a,a)$. Moreover, we must have $c>0$
due to \lemref{lem5.2}.
By rescaling the set $\Lam$ if needed, we may
assume that $\hat\delta_\Lambda=\delta_0$ in $(-a,a)$.
In particular,
this means that the set $S$ contains the origin.
Now suppose that we are given a scalar $w$, and a set
$\Om \sbt \R$ which contains arbitrarily long intervals.
Let $\xi$ be any real number, $\xi \notin S$,
and define
$S' := S \cup \{\xi\}$. Then also the set $S'$
has tempered growth. We
prescribe the values $c(\xi) = 1$, $c(0) = w$,
and $c(s) = 0$ for all $s \in S \setminus \{0\}$;
these values then belong to $\ell^1(S')$.
Hence using \thmref{thmB2} we can find a
smooth function $f \in L^1(\mathbb R)$,
$\operatorname{supp}(f) \sbt \Om$,
such that $\ft{f}(s)=c(s)$ for all $s \in S'$.
This means that we have $\ft{f}(0)=w$ and
$\ft{f}(s)=0$ for all $s \in S \setminus \{0\}$.
It also implies that $\ft{f}(\xi) \neq 0$,
which ensures that the function $f$ is nonzero.
We conclude that $\ft{f}$ vanishes
on the set $\operatorname{supp}(\hat\delta_\Lambda) \setminus \{0\}$,
i.e.\ the condition \eqref{eqI3.1} is satisfied,
and moreover we have $\ft{\delta}_\Lam(\{0\}) \cdot \int f = w$.
Due to \thmref{thm4.5.2} this implies
that $f+\Lambda$ is a tiling at level $w$, and the
proof is thus complete.
\qed
\section{Tilings of unbounded density}
\label{secUB1}
In this section we construct
examples of tilings $f + \Lam$
such that the set $\Lam$ does
\emph{not} have bounded density
(\thmref{thmA3}).
We are not aware of any previous
example of such a tiling in the literature.
Moreover, in our example the set
$\Lam$ has tempered growth
(see \defref{defTG}) and the
function $f$ is in the Schwartz class.
We will also show
that in any such tiling, the function $f$ must have
zero integral, no matter what the
level $w$ of the tiling is (\thmref{thmA9}).
\subsection{Translation sets with unbounded density}
We will construct tilings $f + \Lam$ such that
the translation set $\Lam \sbt \R$ has the form
\begin{equation}
\label{eqR7.30.12}
\Lam = \bigcup_{n \in \Z} \Lam_n,
\quad
\Lam_n \sbt (n-\eps, n+\eps), \quad
\# \Lam_n = n^2+1,
\end{equation}
where $\eps >0$ is an arbitrarily small number.
Then the condition \eqref{eqI10.1} is not satisfied
and $\Lam$ does not have bounded density.
On the other hand it follows from
\eqref{eqR7.30.12} that $\#(\Lambda\cap(-r,r)) = O(r^3)$
as $r\to +\infty$, so
the set $\Lam$ has tempered growth and
$\delta_\Lambda$ is a tempered distribution on $\R$.
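The growth estimate just stated amounts to the elementary bound $\#(\Lambda \cap (-r,r)) \leq \sum_{|n| \leq r} (n^2+1) = O(r^3)$, which can be checked by direct summation (an illustrative script of ours):

```python
# Growth of the counting function for Lambda of the form (eqR7.30.12):
# each Lambda_n has n^2 + 1 points near n, so
# #(Lambda cap (-r, r)) <= sum_{|n| <= r} (n^2 + 1), which is O(r^3).

def count_bound(r):
    return sum(n * n + 1 for n in range(-r, r + 1))

for r in [10, 100, 1000]:
    assert count_bound(r) <= 2 * r ** 3   # crude O(r^3) bound
```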
We will prove the following theorem:
\begin{thm}
\label{thmR7.43}
Given any $\eps >0$ there is a set
$\Lam \sbt \R$ of the form \eqref{eqR7.30.12},
such that
\begin{equation}
\label{eqR7.30.9}
\ft{\delta}_\Lam = \delta_0 - \frac{\delta''_0}{4\pi^2}
\quad \text{in $(-\tfrac{1}{2}, \tfrac{1}{2})$.}
\end{equation}
\end{thm}
In order to better understand what is behind
condition \eqref{eqR7.30.9}, consider
the measure $\mu$ on $\R$ which assigns the mass $n^2+1$ to
each point $n \in \Z$. Using Poisson's summation formula
one can check that the Fourier transform
of $\mu$ is given by
$\ft{\mu} = \sum_{k\in \Z} (\delta_k - (4\pi^2)^{-1} \delta''_k)$.
\thmref{thmR7.43} now says that one can
``redistribute'' the mass $n^2+1$ assigned at each
point $n \in \Z$ into equal unit masses at $n^2+1$ distinct
points of a set $\Lam_n$ contained
in a small neighborhood of $n$,
in such a way that the Fourier transform
of the new measure $\del_\Lam$ thus obtained
remains unchanged on the interval $(-\tfrac{1}{2}, \tfrac{1}{2})$.
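The claim about $\ft{\mu}$ can be tested numerically (an illustrative check of ours, not part of the argument) by pairing both sides with the self-dual Gaussian $\psi(x) = e^{-\pi x^2}$, for which $\ft{\psi} = \psi$: we should have $\langle \ft{\mu}, \psi \rangle = \langle \mu, \ft{\psi} \rangle = \sum_n (n^2+1)\psi(n)$, while the stated formula gives $\sum_k \big( \psi(k) - (4\pi^2)^{-1} \psi''(k) \big)$, using $\delta''_k(\psi) = \psi''(k)$.

```python
import math

# Test <mu-hat, psi> against <mu, psi-hat> for the self-dual Gaussian
# psi(x) = exp(-pi x^2), psi-hat = psi. The two sums agree by Poisson
# summation; terms beyond |n| = 10 are negligible (below 1e-100).

def psi(x):
    return math.exp(-math.pi * x * x)

def psi_dd(x):  # second derivative of exp(-pi x^2)
    return (4 * math.pi ** 2 * x * x - 2 * math.pi) * psi(x)

lhs = sum((n * n + 1) * psi(n) for n in range(-10, 11))
rhs = sum(psi(k) - psi_dd(k) / (4 * math.pi ** 2) for k in range(-10, 11))
assert abs(lhs - rhs) < 1e-12
```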
The role of condition \eqref{eqR7.30.9} in the tiling problem
is clarified by the following lemma:
\begin{lem}
\label{lemR7.30}
Let $\Lam \sbt \R$ be a set of
tempered growth, and suppose that
there is $a>0$ such that
\begin{equation}
\label{eqR7.33}
\operatorname{supp}(\hat\delta_\Lambda) \cap (-a,a) = \{0\}.
\end{equation}
Then given any scalar $w$ one can find
a nonzero Schwartz function $f$,
such that $f+\Lambda$ is a tiling at level $w$.
\end{lem}
\begin{proof}
It is well-known that a distribution supported at the origin is a finite linear combination of derivatives
of $\delta_0$ (see \cite[Theorem 6.25]{Rud91}). Hence
\eqref{eqR7.33} implies that
\begin{equation}
\label{eqR7.41}
\ft{\delta}_\Lam = \sum_{j=0}^{n} c_j \delta_0^{(j)}
\quad \text{in $(-a, a),$}
\end{equation}
and we may assume
that the highest order coefficient $c_n$ is nonzero.
It follows that
\begin{equation}
\label{eqR7.44}
t^n \, \ft{\del}_\Lam(t) = c_n n! (-1)^n \cdot \del_0(t)
\quad \text{in $(-a, a)$}
\end{equation}
(see e.g.\ \cite[Section 9.1, Exercise 3]{Fol99}).
Let $\varphi$ be a nonzero Schwartz function,
$\operatorname{supp}(\ft{\varphi})
\sbt (-a, a)$, $\ft{\varphi}(0)=w$. Then
\begin{equation}
\label{eqR7.42}
f(x) := \frac{\varphi^{(n)}(x)}{c_n n!(-2\pi i)^n}
\end{equation}
is also a nonzero Schwartz function.
We claim that $f+ \Lam$ is a tiling at level $w$.
Indeed,
\begin{equation}
\label{eqR7.46}
\ft{f}(t) \, \ft{\delta}_\Lam(t) =
\frac{\ft{\varphi}(t) t^n}{c_n n! (-1)^n}
\cdot \ft{\delta}_\Lam(t) =
\ft{\varphi}(t) \, \delta_0(t) = w \cdot \delta_0(t),
\end{equation}
where in the second equality we used
\eqref{eqR7.44}. This implies that
$f \ast \delta_\Lam=w$ as needed.
\end{proof}
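As a concrete illustration, consider the set $\Lam$ given by \thmref{thmR7.43}. The condition \eqref{eqR7.30.9} yields \eqref{eqR7.41} with $n=2$, $c_0=1$, $c_1=0$ and $c_2 = -(4\pi^2)^{-1}$, so that
\[
c_n \, n! \, (-2\pi i)^n = -\tfrac{1}{4\pi^2} \cdot 2 \cdot (-4\pi^2) = 2,
\]
and \eqref{eqR7.42} reduces to $f = \tfrac{1}{2} \varphi''$. In this case $\ft{f}(t) = -2\pi^2 t^2 \, \ft{\varphi}(t)$ vanishes to second order at the origin, which compensates for the term $\delta''_0$ in \eqref{eqR7.30.9}, and \eqref{eqR7.46} gives $\ft{f} \cdot \ft{\delta}_\Lam = w \cdot \delta_0$.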
Since condition \eqref{eqR7.30.9} implies \eqref{eqR7.33},
we see that \thmref{thmA3} is a consequence
of \thmref{thmR7.43} and \lemref{lemR7.30}.
It therefore remains to prove \thmref{thmR7.43}.
\subsection{Kargaev's implicit function method}
In order to prove
\thmref{thmR7.43} we will use a variant of Kargaev's
implicit function method \cite{Kar82}.
See also \cite{KL16}, \cite{Lev20}
for applications of the method in the
construction of translational tiling examples.
The proof will be done in several steps.
\subsubsection{}
For each $k=1,2,3,\dots$ let
$\chi_k$ be the function on $\R$ defined by
\begin{equation}
\label{eqR20.1}
\chi_k(x) = k-j+1, \quad
x \in \big[\tfrac{2(j-1)}{k(k+1)}, \,
\tfrac{2j}{k(k+1)}\big), \quad 1 \leq j \leq k,
\end{equation}
and $\chi_k(x) = 0$ outside the interval
$[0, \frac{2}{k+1})$.
Let $\{\alpha_n\}$, $n\in\Z$, be a bounded sequence of real numbers. To such a sequence
we associate a function $F$ on the real line, defined by
\begin{equation}
\label{eqR21.4}
F(x):=\sum_{n \in \Z} F_n(x), \quad x\in\R,
\end{equation}
where we let
\begin{equation}
\label{eqR21.5}
F_n(x) := \operatorname{sign}(\alpha_n) \cdot \chi_{n^2+1}(\tfrac{x-n}{\alpha_n})
\end{equation}
for each $n \in \Z$ such that $\alpha_n \neq 0$, while if $\alpha_n=0$
then we let $F_n(x):=0$.
(The notation
$\operatorname{sign}(\alpha_n)$ means $+1$ or $-1$ depending on whether
$\alpha_n>0$ or $\alpha_n<0$.)
One can verify, using the assumption
that the sequence $\{\alpha_n\}$ is bounded,
that the series
\eqref{eqR21.4} converges in the space
of tempered distributions. For instance,
this follows from the fact
that $(1+x^2)^{-1} \sum |F_n(x)|$
is a bounded function.
\begin{thm}
\label{thmR7.2}
Let $0<\eps<1$. Then for any
sufficiently small $r>0$ one can find
a real sequence $\alpha =
\{\alpha_n\}$, $n\in\Z$, satisfying
\begin{equation}
\label{eqR7.2.5}
(1-\eps) r \le |\alpha_n| \le (1+\eps) r,
\quad n \in \Z,
\end{equation}
and such that $\ft{F}=0$ in $(-\tfrac{1}{2},\tfrac{1}{2})$,
where $F$ is the function defined by \eqref{eqR21.4}.
\end{thm}
We will first prove \thmref{thmR7.2} and then
use it to deduce \thmref{thmR7.43}.
\subsubsection{}
We will need the following lemma.
\begin{lem}
\label{lemR20.90}
The function $\chi_k$ has the following properties:
\begin{enumerate-roman}
\item \label{lemR20.90.1} $\int \chi_k(x) dx = 1$;
\item \label{lemR20.90.2} $\int x \chi_k(x) dx \leq C k^{-1}$;
\item \label{lemR20.90.3} $| \ft{\chi}_k(-s) - 1 | \leq C |s| k^{-1}$;
\item \label{lemR20.90.4}
$\big| v \big( \ft{\chi}_{k}(- v) - 1\big) -
u \big( \ft{\chi}_{k}(- u) - 1\big) \big|
\leq C k^{-1} |v-u| \cdot \max\{|u|, |v|\}$
\end{enumerate-roman}
for every $s,u,v \in \R$, where $C>0$ is an absolute constant.
\end{lem}
\begin{proof}
The property \ref{lemR20.90.1} can be checked
directly. In order to establish \ref{lemR20.90.2},
note that $0 \leq \chi_k \leq k$, so the left hand side is at most
$k \int_0^{2/(k+1)} x dx = 2k/(k+1)^2 \leq 2 k^{-1}$.
Together with the estimate
\begin{equation}
\label{eqR20.4.2}
\textstyle | \ft{\chi}_k(-s) - 1 | =
\Big| \int \chi_k(x) (e^{2 \pi i sx}-1) dx \Big|
\leq 2 \pi |s| \int x \chi_k(x) dx
\end{equation}
this yields \ref{lemR20.90.3}.
Finally, to establish \ref{lemR20.90.4}
consider the function
$\varphi(x) := x ( e^{2 \pi i x} - 1)$.
We may suppose that $u<v$; then
the left hand side of \ref{lemR20.90.4} is equal to
\begin{align}
\label{eqR20.5.7}
& \Big| \int \chi_k(x) \frac{\varphi(vx) - \varphi(ux)}{x} dx \Big| =
\Big| \int \chi_k(x) \int_u^v \varphi'(sx) ds dx \Big| \\
\label{eqR20.5.8}
&\qquad\qquad\leq |v-u| \cdot \max_{s \in [u,v]}
\int \chi_k(x) |\varphi'(sx)| dx.
\end{align}
If we use the estimate $|\varphi'(sx)| \leq C|sx|$, then
\eqref{eqR20.5.8} and \ref{lemR20.90.2} imply
\ref{lemR20.90.4}.
\end{proof}
\subsubsection{}
Let $X$ be the space of all bounded sequences of real numbers
$\alpha = \{\alpha_n\}$, $n \in \Z$,
endowed with the norm
\[
\|\alpha\|_X := \sup_{n \in \Z} |\alpha_n|
\]
that makes $X$ into a real Banach space.
Denote $I := [-\tfrac{1}{2}, \tfrac{1}{2}]$. Let
$Y$ be the space of all continuous functions $\psi:I \to \CC$
satisfying $\psi(-t)=\overline{\psi(t)}$ for all $t\in I$.
If we endow $Y$ with the norm $\|\psi\|_Y=\sup_{t\in I}|\psi(t)|$,
then $Y$ is also a real Banach space.
Let $\alpha = \{\alpha_n\}$, $n \in \Z$, be a sequence in $X$. Define
\begin{equation}
\label{eqR2.3}
(R \alpha)(t):= \sum_{n \in \Z} e^{2\pi int}\cdot \alpha_n
\big( \ft{\chi}_{n^2+1}(-\alpha_n t) - 1\big).
\end{equation}
We notice that the terms of the series \eqref{eqR2.3}
are elements of $Y$.
By \lemref{lemR20.90}\ref{lemR20.90.3}, the $n$'th term of the series
is bounded by
$C |\alpha_n|^2 (n^2+1)^{-1} $
uniformly on $I$,
where $C>0$ does not
depend on $\alpha$ or $n$.
Hence the series \eqref{eqR2.3} converges
uniformly on $I$ to an element of $Y$, and we have
\begin{equation}
\label{eqR2.4.1}
\|R \alpha\|_Y \leq C \|\alpha\|^2_X,
\end{equation}
where the constant $C$ does not depend on $\alpha$.
We note that the mapping $R : X \to Y$
defined by \eqref{eqR2.3} is \emph{nonlinear}.
\subsubsection{}
For each $r>0$ let
$U_r$ denote the closed ball of radius $r$ around the origin
in $X$:
\begin{equation}
\label{eqR3.1.1}
U_r := \{\alpha \in X : \|\alpha\|_X \leq r\}.
\end{equation}
\begin{lem}
\label{lemR3.1}
Given any $\rho > 0$ there is $r>0$ such that
\begin{equation}
\label{eqR3.1.2}
\|R\beta - R\alpha\|_Y \leq \rho \|\beta-\alpha\|_X,
\quad \alpha,\beta \in U_r.
\end{equation}
In particular, if $r$ is small enough then $R$ is
a contractive (nonlinear) mapping on $U_r$.
\end{lem}
\begin{proof}
Let $\alpha,\beta \in U_r$. Then using
\eqref{eqR2.3} we have
\begin{equation}
\label{eqR3.2}
(R \beta - R \alpha)(t) =
\sum_{n \in \Z} e^{2\pi int}\cdot
\Big[
\beta_n \big( \ft{\chi}_{n^2+1}(-\beta_n t) - 1\big) -
\alpha_n \big( \ft{\chi}_{n^2+1}(-\alpha_n t) - 1\big) \Big].
\end{equation}
By \lemref{lemR20.90}\ref{lemR20.90.4},
the $n$'th term of the series is bounded by
$C r (n^2+1)^{-1} |\beta_n - \alpha_n|$
uniformly on $I$,
where $C>0$ does not
depend on $r$, $\alpha$, $\beta$ or $n$.
Hence the series converges
uniformly on $I$ and
$\|R\beta - R\alpha\|_Y \leq Cr \|\beta-\alpha\|_X$,
where $C$ is a constant not depending on $r$, $\alpha$ or $\beta$.
It thus suffices to choose $r$ small enough so that $C r \leq \rho$.
\end{proof}
\subsubsection{}
For each element $\psi \in Y$ we denote by
$\F(\psi)$ the sequence
\begin{equation}
\label{eqR1.1.10}
\ft{\psi}(n) = \int_{I} \psi(t) e^{-2\pi i n t} dt, \quad n \in \Z,
\end{equation}
of Fourier coefficients of $\psi$.
Since the symmetry condition $\psi(-t)=\overline{\psi(t)}$
implies that the Fourier coefficients
$\ft{\psi}(n)$ are
real, and they are bounded by $\|\psi\|_Y$, we obtain
a linear mapping $\F: Y \to X$ satisfying
$\|\F(\psi)\|_X \leq \|\psi\|_Y$.
\begin{lem}
\label{lemR4.2}
Given any $\eps>0$ there is
$\delta>0$ with the following property:
Let $\beta \in X$,
$\|\beta\|_X \le \delta$. Then one can find an element
$\alpha \in X$,
$\|\alpha - \beta\|_X \le \eps \|\beta\|_X$, which solves the
equation $\alpha + \F(R \alpha) = \beta$.
\end{lem}
\begin{proof}
Fix $\beta \in X$ such that $\|\beta\|_X \le \delta$,
and let
\[
B = B(\beta,\eps) :=\{ \alpha \in X : \|\alpha-\beta\|_X \le \eps \|\beta\|_X\}.
\]
We observe that if $\alpha \in B$ then $\|\alpha\|_X \le (1+\eps) \|\beta\|_X$.
Define a map $H: B \to X$ by
\[
H(\alpha) := \beta - \F(R\alpha), \quad \alpha \in B,
\]
and notice that an element $\alpha \in B$ is a solution to the equation
$\alpha + \F(R\alpha) = \beta$
if and only if $\alpha$ is a fixed point of the map $H$.
Let us show that if $\delta$ is small enough then $H(B)\subset B$.
Indeed, if $\alpha \in B$ then using \eqref{eqR2.4.1} we have
\[
\|H(\alpha)-\beta\|_X = \|\F(R\alpha)\|_X
\leq \|R \alpha\|_Y \leq
C \|\alpha\|^2_X \leq C (1+\eps)^2 \|\beta\|^2_X.
\]
Hence if we choose $\delta$ such that
$C(1+\eps)^2 \delta \leq \eps$ then
we obtain
\[
\|H(\alpha)-\beta\|_X \leq \eps \|\beta\|_X,
\]
and it follows that $H(B)\subset B$.
It also follows from \lemref{lemR3.1}
that if $\delta$ is small enough, then $H$ is a contractive
mapping from the closed set $B$ into itself. Indeed,
let $\alpha', \alpha'' \in B$, then we have
\[
\|H(\alpha'') - H(\alpha') \|_X =
\|\F(R\alpha'' - R\alpha') \|_X \le
\|R\alpha'' - R\alpha' \|_Y \le
\rho \|\alpha'' - \alpha'\|_X,
\]
where $0<\rho<1$.
Then the Banach fixed point theorem implies that
$H$ has a (unique) fixed point $\alpha \in B$,
which yields the desired solution.
\end{proof}
\subsubsection{Proof of \thmref{thmR7.2}}
Let $r>0$, and let $\beta = \{\beta_n\}$, $n \in \Z$, be the real sequence
defined by $\beta_n := (-1)^n r$. Then $\|\beta\|_X = r$.
If $r=r(\eps)>0$ is small enough,
then by \lemref{lemR4.2} there is an element
$\alpha \in X$,
$\|\alpha - \beta\|_X \le \eps \|\beta\|_X$, which solves the
equation $\alpha + \F(R \alpha) = \beta$.
We observe that the estimate
$\|\alpha - \beta\|_X \le \eps \|\beta\|_X$
implies \eqref{eqR7.2.5}.
The relation $\alpha + \F(R \alpha) = \beta$ means
that $\beta - \alpha$ is the
sequence of Fourier coefficients of
the function $R \alpha$. This implies
that the series
\[
\sum_{n \in \Z} (\beta_n - \alpha_n) e^{2\pi int}
\]
converges in $ L^2(I)$ to $R \alpha$.
Since we have $\beta_n = (-1)^n r$, $n \in \Z$,
the series
\[
\sum_{n \in \Z} \beta_n e^{2\pi int}
\]
converges in the distributional sense to the measure
$r \cdot \del_{\Z + \frac1{2}}$ on $\R$.
In particular, this series converges to zero
in the open interval $(-\frac1{2},\frac1{2})$.
Let $F$ be the function given by \eqref{eqR21.4}
associated to the sequence $\alpha=\{\alpha_n\}$. Then
\[
\ft{F}(-t) = \lim_{N\to\infty}\sum_{|n|\le N}\ft{F}_n(-t)
\]
in the sense of distributions, and by \eqref{eqR21.5} we have
\begin{equation}
\label{eqR21.6}
\ft{F}_n(-t) = e^{2 \pi i n t} \cdot \alpha_n \, \ft{\chi}_{n^2+1}(-\alpha_n t).
\end{equation}
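The formula \eqref{eqR21.6} is obtained from \eqref{eqR21.5} by the change of variable $y = (x-n)/\alpha_n$, where the factor $\operatorname{sign}(\alpha_n) \, |\alpha_n| = \alpha_n$ absorbs the orientation of the substitution:
\[
\ft{F}_n(-t) = \operatorname{sign}(\alpha_n) \int \chi_{n^2+1}\big(\tfrac{x-n}{\alpha_n}\big) e^{2 \pi i x t} \, dx
= \operatorname{sign}(\alpha_n) \, |\alpha_n| \, e^{2 \pi i n t} \int \chi_{n^2+1}(y) \, e^{2 \pi i \alpha_n t y} \, dy
= e^{2 \pi i n t} \cdot \alpha_n \, \ft{\chi}_{n^2+1}(-\alpha_n t).
\]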
Hence
\[
\ft{F}(-t) =
\lim_{N\to\infty} \Big[ \sum_{|n|\le N} \beta_n e^{2\pi int}
- \sum_{|n|\le N} (\beta_n - \alpha_n )e^{2\pi int}
+ \sum_{|n|\le N} e^{2\pi int}\cdot \alpha_n
\big( \ft{\chi}_{n^2+1}(-\alpha_n t) - 1\big) \, \Big].
\]
The first sum converges
in the distributional sense to zero in $(-\tfrac{1}{2}, \tfrac{1}{2})$.
The second sum converges in
$L^2(I)$ to $R \alpha$. The third
sum converges to $R\alpha$
uniformly on $I$.
It follows that the distribution $\ft{F}$ vanishes
in the open interval $(-\tfrac{1}{2}, \tfrac{1}{2})$.
\qed
\subsection{Proof of \thmref{thmR7.43}}
Let $0<\eps<1$ be given, and for $r=r(\eps)>0$ sufficiently small
let $\{\alpha_n\}$, $n \in \Z$,
be the sequence given by \thmref{thmR7.2}. Define
\begin{equation}
\label{eqR7.40.1}
\Lam_n = \Big\{ n + \frac{2 j\alpha_n }{(n^2+1)(n^2+2)} , \quad 1 \leq j \leq n^2+1 \Big\}
\end{equation}
and
\begin{equation}
\label{eqR7.40.2}
\Lam = \bigcup_{n \in \Z} \Lam_n.
\end{equation}
We have $\alpha_n \neq 0$
for every $n \in \Z$,
due to \eqref{eqR7.2.5}. Hence $\Lam_n$ is a
set with exactly $n^2+1$ elements.
The set $\Lam_n$ is contained in the
interval $[n - |\alpha_n|, n + |\alpha_n|]$.
This yields \eqref{eqR7.30.12} provided
that $r>0$ is small enough,
again due to \eqref{eqR7.2.5}.
In particular we may assume that
the sets $\Lam_n$ are pairwise disjoint.
Observe that the distributional derivative of
the function $F$ in \eqref{eqR21.4} is
\[
F'=\sum_{n\in \Z} F'_n =
\sum_{n\in \Z}((n^2+1) \delta_n - \delta_{\Lam_n}),
\]
and hence
\[
\delta_{\Lam} = \sum_{n\in \Z} (n^2+1) \delta_n - F'.
\]
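To verify the expression for $F_n'$, recall from \eqref{eqR20.1} that $\chi_k$ jumps up by $k$ at the origin and drops by $1$ at each of the $k$ points $\frac{2j}{k(k+1)}$, $1 \leq j \leq k$. Applying this with $k = n^2+1$ to the function $F_n(x) = \operatorname{sign}(\alpha_n) \, \chi_{n^2+1}(\tfrac{x-n}{\alpha_n})$, where the factor $\operatorname{sign}(\alpha_n)$ compensates for the reversal of orientation when $\alpha_n < 0$, we obtain
\[
F_n' = (n^2+1) \, \delta_n - \sum_{j=1}^{n^2+1} \delta_{n + \frac{2 j \alpha_n}{(n^2+1)(n^2+2)}}
= (n^2+1) \, \delta_n - \delta_{\Lam_n}.
\]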
The Fourier transform
of the measure $\sum_{n \in \Z} (n^2+1) \delta_n$ is
$\sum_{k \in \Z} \big(\delta_k - (4\pi^2)^{-1} \delta''_k \big)$,
which follows from Poisson's summation formula.
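Indeed, Poisson's summation formula gives $\ft{\big(\sum_{n} \delta_n\big)} = \sum_{k} \delta_k$ in the sense of tempered distributions, and multiplication of a measure by $x^2$ corresponds on the Fourier side to the operator $-(4\pi^2)^{-1} \tfrac{d^2}{dt^2}$; hence
\[
\ft{\Big( \sum_{n \in \Z} (n^2+1) \, \delta_n \Big)}
= \sum_{k \in \Z} \delta_k - \frac{1}{4\pi^2} \Big( \sum_{k \in \Z} \delta_k \Big)''
= \sum_{k \in \Z} \Big( \delta_k - \frac{\delta''_k}{4 \pi^2} \Big).
\]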
This implies that
\[
\ft{\del}_\Lam = \delta_0 - \frac{\delta''_0}{4\pi^2} -\ft{{F'}}
\quad \text{in $(-\tfrac{1}{2}, \tfrac{1}{2})$.}
\]
But $\ft{F}$ vanishes in $(-\tfrac{1}{2},\tfrac{1}{2})$, and since $\ft{{F'}}(t) = 2 \pi i t \, \ft{F}(t)$, the same is true
for $\ft{{F'}}$, so \eqref{eqR7.30.9} is established.
\qed
\subsection{Proof of \thmref{thmA9}}
Finally we show that if
$f+\Lam$ is a tiling at some level $w$,
where $\Lambda \sbt \R$ is any set of
tempered growth but not of bounded density,
and $f$ is any function in the Schwartz class,
then $f$ must have zero integral.
Suppose to the contrary that
$\int f = \ft{f}(0)$ is nonzero. Then,
since $\ft{f}$ is smooth and nonvanishing
in a neighborhood of the origin,
there is a Schwartz function
$g$ such that $\ft{f} \cdot \ft{g}=1$
in some neighborhood $(-a,a)$ of the origin.
Let $h>0$ be a Schwartz function
with $\operatorname{supp}(\ft{h})\subset (-a,a)$, then
\[
\ft{h} \cdot \ft{g} \cdot \ft{f} =\ft{h}
\]
and hence
\[
h \ast g \ast f = h.
\]
It follows that
\[
h \ast \del_\Lam =
(h \ast g \ast f) \ast \del_\Lam =
(h \ast g) \ast (f \ast \del_\Lam) =
w \cdot \textstyle\int (h \ast g),
\]
where in the last equality we used the
tiling assumption $f \ast \del_\Lam = w$
(the associativity of the convolution is justified
since $\del_\Lam$ is a tempered distribution
and $f,g,h$ are Schwartz functions,
see \cite[Theorem 7.19]{Rud91}).
We conclude that $h + \Lam$ is a tiling
(at a certain level). But it is known, see
\cite[Lemma 2.1]{KL96},
that if $h$ is a \emph{nonnegative}, nonzero
function and if $h + \Lam$ is a tiling,
then the set $\Lam$ must have bounded density.
We thus arrive at a contradiction.
\qed
\section{Open problems}
\label{secOP1}
We conclude the paper by posing some open problems.
\subsection{Tiling at level zero}
The following problem was already mentioned
in \secref{secTLZ} above:
Let $f+\Lam$ be a tiling at level zero,
where the function $f \in L^1(\R)$ is nonzero and
the set $\Lam \sbt \R$ is nonempty and has bounded density.
Does it follow that $\Lam$ has a \emph{uniform density}
$D(\Lam)$\,?
In Fourier analytic terms,
the problem can be equivalently stated
as follows: Let $\Lam \sbt \R$ be
nonempty and have bounded density, and suppose
that $\ft{\del}_\Lam$ vanishes on some
open interval $(a,b)$. Does
$\Lam$ necessarily have a uniform density?
What makes the problem nontrivial is
the existence of tilings $f+\Lam$
at level zero such that
$\ft{\del}_\Lam$ is not
a scalar multiple of $\del_0$ in any
neighborhood of the origin,
see \cite[Section 5]{Lev20}.
In particular, \lemref{lem5.2} does not apply.
We note that by \thmref{thmA6} the set
$\Lam$ must be relatively dense,
so if the density $D(\Lam)$ exists then it
is a strictly positive number.
\subsection{Non-periodic tilings}
Let $f$ be a nonzero function in $L^1(\R)$, and suppose that
the set $\{x : f(x) \neq 0\}$ has \emph{finite measure}.
If $f$ tiles at some level $w$ by a translation
set $\Lam \sbt \R$ of bounded density, does it follow
that $\Lam$ has a periodic structure?
\thmref{thmLM91} does not apply here,
since $f$ is \emph{not} assumed to have compact support.
Does there exist a measurable set
$\Om \sbt \R$, $0<\operatorname{mes}(\Om)<+\infty$,
whose indicator function $\1_\Om$ can tile
at level one, or, a weaker requirement, at some other integer
level $w$, with a translation set $\Lam \sbt \R$
that \emph{does not} have a periodic structure?
Notice that
such a set $\Om$ (if it exists) must be unbounded,
again due to \thmref{thmLM91}.
\subsection{Tilings of unbounded density}
Let $f \in L^1(\R)$ be nonzero and have \emph{compact support}, and
suppose that $f+\Lam$ is a tiling at some level $w$, where $\Lam \sbt \R$
is a discrete set (not a multi-set) of tempered growth.
Does it follow that $\Lam$ is of the form \eqref{eqI2.1},
i.e. $\Lambda$ is a set
of bounded density having a periodic structure?
In other words, the question is whether
\thmref{thmLM91} remains valid if the set
$\Lambda$ is not assumed to have
bounded density, but only tempered growth.
We note that \thmref{thmA3} does \emph{not} provide a negative
answer to this question, since the function $f$ constructed in
the proof of this theorem has \emph{unbounded} support.
\subsection{Lattice tilings}
Let $f \in L^1(\R^d)$, $d \geq 1$, and suppose that
$f$ tiles at some level $w$
with a translation set $\Lambda \sbt \R^d$ of
bounded density.\footnote{A set
$\Lambda \sbt \R^d$ is said to have
\emph{bounded density} if there exists
$M>0$ such that $\#(\Lam \cap (x+B)) \leq M$
for all $x \in \R^d$, where $B$ is the open
unit ball in $\R^d$.}
Does there necessarily exist a \emph{lattice}
$L \sbt \R^d$ such that
$f+L$ is also a tiling, possibly at a
different level $w'$\,?
The answer is known to be affirmative
in the special case where $\Lambda$ is assumed to be
a disjoint union of finitely
many translated lattices, namely,
$\Lam = \biguplus_{j=1}^{N} (L_j + \tau_j)$
where each $L_j$ is a lattice in
$\R^d$ and the $\tau_j$ are translation vectors.
This was proved in dimension one in
\cite[p.\ 673]{KL96}, while in several dimensions
the result was proved more recently in \cite[Theorem 1.6]{Liu18}.
In both proofs, number theory plays an essential
role: the proof in $\R$ uses
the classical Skolem--Mahler--Lech theorem,
while in $\R^d$ the proof relies on
a result due to Evertse, Schlickewei and Schmidt
\cite{ESS02}.
\bibliographystyle{amsplain}
% arXiv:2009.09410 -- "Tiling by translates of a function: results and open problems"
% https://arxiv.org/abs/2009.09410
% ======================================================================
% arXiv:2208.03959 -- "Partial reconstruction of measures from halfspace depth"
% https://arxiv.org/abs/2208.03959

\begin{abstract}
The halfspace depth of a $d$-dimensional point $x$ with respect to a finite (or probability) Borel measure $\mu$ in $\mathbb{R}^d$ is defined as the infimum of the $\mu$-masses of all closed halfspaces containing $x$. A natural question is whether the halfspace depth, as a function of $x \in \mathbb{R}^d$, determines the measure $\mu$ completely. In general, it turns out that this is not the case, and it is possible for two different measures to have the same halfspace depth function everywhere in $\mathbb{R}^d$. In this paper we show that despite this negative result, one can still obtain a substantial amount of information on the support and the location of the mass of $\mu$ from its halfspace depth. We illustrate our partial reconstruction procedure in an example of a non-trivial bivariate probability distribution whose atomic part is determined successfully from its halfspace depth.
\end{abstract}

\section{The Depth Characterization/Reconstruction Problem}
Let $x$ be a point in the $d$-dimensional Euclidean space $\R^d$ and let $\mu$ be a finite Borel measure in $\R^d$. We write $\half$ for the collection of all closed halfspaces\footnote{A halfspace is one of the two regions determined by a hyperplane in $\R^d$; any halfspace can be written as a set $\left\{ y \in \R^d \colon \left\langle y, u \right\rangle \leq c \right\}$ for some $c \in \R$ and $u \in \R^d \setminus \left\{ 0 \right\}$.} in $\R^d$ and $\half(x)$ for the subset of those halfspaces from $\half$ that contain $x$ in their boundary hyperplane. The \emph{halfspace depth} (or \emph{Tukey depth}) of the point $x$ with respect to $\mu$ is defined as
\begin{equation} \label{halfspace depth}
\D\left(x;\mu\right) = \inf_{H\in\half(x)} \mu(H).
\end{equation}
The history of the halfspace depth in statistics goes back to the 1970s \cite{Tukey1975}. The halfspace depth plays an important role in the theory and practice of nonparametric inference of multivariate data; for many references see \cite{Liu_etal1999,Nagy_etal2019,Zuo_Serfling2000}.
The depth~\eqref{halfspace depth} was originally designed to serve as a multivariate generalization of the quantile function. As such, it is desirable that, just as the quantile function does in $\R$, the depth function $x \mapsto \D(x;\mu)$ in $\R^d$ characterizes the underlying measure $\mu$ uniquely, and that $\mu$ can be retrieved from its depth in a straightforward way. The questions of whether these two properties are valid for $\D$ are known as the \emph{halfspace depth characterization and reconstruction problems}. Neither turned out to have an easy answer; in fact, only recent progress in the theory of the halfspace depth gave the first definite solutions to some of these problems.
In \cite{Nagy2019b}, the general characterization question for the halfspace depth was answered in the negative, by giving examples of different probability distributions in $\R^d$, $d \geq 2$, with identical halfspace depth functions. On the other hand, several authors have also obtained partial positive answers to the characterization problem; for a recent overview of that work see \cite{Nagy2020c}. Only three types of distributions are known to be completely characterized by their halfspace depth functions: \begin{enumerate*}[label=(\roman*)] \item univariate measures, in which case the depth~\eqref{halfspace depth} is just a simple transform of the distribution function of $\mu$; \item atomic measures with finitely many atoms (which we subsequently call \emph{finitely atomic measures} for brevity) in $\R^d$ \cite{Struyf_Rousseeuw1999,Laketa_Nagy2021}; and \item measures that possess all Dupin floating bodies\footnote{A Borel measure $\mu$ on $\R^d$ is said to possess all Dupin floating bodies if each tangent halfspace to the halfspace depth upper level set $\left\{ x \in \R^d \colon \D(x;\mu) \geq \alpha \right\}$ has $\mu$-mass exactly $\alpha$, for all $\alpha \geq 0$.} \cite{Nagy_etal2019}.\end{enumerate*}
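To make case (i) above concrete: in $\R$, the boundary of a halfspace is a single point, so $\half(x)$ consists only of the two halflines $(-\infty,x]$ and $[x,\infty)$. Writing $F_\mu(x) = \mu\left( (-\infty, x] \right)$ for the distribution function of $\mu$ and $F_\mu(x^-)$ for its left limit at $x$, the definition~\eqref{halfspace depth} therefore reads
\[
\D(x;\mu) = \min\left\{ F_\mu(x), \; \mu(\R) - F_\mu(x^-) \right\},
\]
a simple transform of $F_\mu$; this is the sense in which the univariate depth characterizes $\mu$.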
In this contribution we revisit the halfspace depth reconstruction problem. We pursue a general approach, and do not restrict ourselves to atomic measures or to measures with densities. Our results are valid for any finite (or probability) Borel measure $\mu$ in $\R^d$. As the first step in addressing the reconstruction problem, our intention is to identify the support and the location of the atoms of $\mu$, based on its depth. We will see at the end of this note that, without additional assumptions, neither of these problems can be fully resolved. We do, however, prove several positive results.
We begin by introducing the necessary mathematical background in Section~\ref{section:preliminaries}. In Section~\ref{section:main} we state our main theorem; a detailed proof of that theorem is given in the Appendix. We show that \begin{enumerate*}[label=(\roman*)] \item the support of the measure $\mu$ must be concentrated only in the boundaries of the level sets of its halfspace depth; \item each atom of $\mu$ is an extreme point of the corresponding (closed and convex) upper level sets of the halfspace depth; and \item each atom of $\mu$ induces a jump in the halfspace depth function on the line passing through that atom. \end{enumerate*} These advances enable us to identify the location of the atoms of $\mu$, at least in simpler scenarios. We illustrate this in Section~\ref{section:examples}, where we give an example of a non-trivial bivariate probability measure $\mu$ whose atomic part we are able to determine from its depth. We conclude by giving an example of two measures whose depth functions are the same, yet both their supports and the location of their atoms differ.
\section{Preliminaries: Flag Halfspaces and Central Regions} \label{section:preliminaries}
\subsubsection*{Notations.}
By a subspace of $\R^d$ we always mean an affine subspace, that is, a set of the form $a + L = \left\{ a + x \in \R^d \colon x \in L \right\}$ for $a \in \R^d$ and $L$ a linear subspace of $\R^d$. The intersection of all subspaces in $\R^d$ that contain a set $A \subseteq \R^d$ is called the affine hull of $A$, and is denoted by $\aff{A}$. It is the smallest subspace that contains $A$. The affine hull $\aff{\{x,y\}}$ of two different points $x, y \in \R^d$ is the infinite line passing through both $x$ and $y$; another example of a subspace is any hyperplane in $\R^d$.
For a set $A\subseteq \R^d$ we write $\intr(A)$, $\cl(A)$ and $\bd(A)$ to denote the interior, closure, and boundary of $A$, respectively. The interior, closure, and boundary of a set $B\subseteq A$ when considered only as a subset of a subspace $A \subseteq \R^d$ are denoted by $\intr_A(B)$, $\cl_A(B)$ and $\bd_A(B)$, respectively. For two different points $x,y\in\R^d$, $x \ne y$, we denote by $L(x,y)$ the interior of the line segment between $x$ and $y$ when considered inside the infinite line $\aff{\{x,y\}}$. In other words, $L(x,y)$ is the open line segment between $x$ and $y$. In the special case of $A=\aff{B}$ we write $\relint(B)=\intr_A(B)$, $\relbd(B)=\bd_A(B)$ and $\relcl(B)=\cl_A(B)$ to denote the relative interior, relative boundary, and relative closure of $B$, respectively. For instance, $\relbd(L(x,y)) = \{x, y \}$ and $L(x,y) = \relint(L(x,y))$, but $\intr(L(x,y)) = \emptyset$ if $d>1$.
We write $\Meas$ for the collection of all finite Borel measures in $\R^d$. For a subspace $A\subseteq \R^d$ and $\mu\in\Meas$ we write $\mu|_A$ to denote the measure obtained by restricting $\mu$ to the subspace $A$, that is the finite Borel measure given by $\mu|_A(B) = \mu(B \cap A)$ for any Borel set $B \subseteq \R^d$. By $\Support{\mu}$ we mean the support of $\mu\in\Meas$, which is the smallest closed subset of $\R^d$ of full $\mu$-mass.
\subsection{Minimizing halfspaces and flag halfspaces}
For $\mu\in\Meas$ and $x \in \R^d$ we call $H \in \half(x)$ a \emph{minimizing halfspace} of $\mu$ at $x$ if $\mu(H) = \D\left(x;\mu\right)$. For $d = 1$ a minimizing halfspace always trivially exists. It also exists if $\mu$ is smooth in the sense that $\mu(\bd(H)) = 0$ for all $H \in \half(x)$, or if $\mu \in \Meas$ is finitely atomic. In general, however, the infimum in~\eqref{halfspace depth} does not have to be attained. We give a simple example.
\begin{example} \label{example:flag}
Let $\mu \in \Meas[\R^2]$ be the sum of a uniform distribution on the disk $B = \left\{ x \in \R^2 \colon \left\Vert x \right\Vert \leq 2 \right\}$ and a Dirac measure at $a = (1,1) \in \R^2$. For $x = (1,0) \in \R^2$ no minimizing halfspace of $\mu$ at $x$ exists. As can be seen in Fig.~\ref{figure:flag halfspace}, the depth $\D(x;\mu)$ is approached by $\mu(H_{n})$ for a sequence of halfspaces $H_n\in\half(x)$ with inner normals $v_n = \left( \cos(-1/n), \sin(-1/n) \right)$ that converge to the halfspace $H_{v} \in \half(x)$ with inner normal $v = (1,0)$, yet $\D(x;\mu) = \lim_{n \to \infty} \mu(H_n) < \mu(H_{v})$.
\end{example}
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=\twofigb\textwidth]{Flag1.eps} \qquad \includegraphics[width=\twofigb\textwidth]{Flag2.eps}
\end{center}
\caption{The support of $\mu \in \Meas[\R^2]$ from Example~\ref{example:flag} (colored disk) and its unique atom $a$ (diamond). No minimizing halfspace of $\mu$ at $x = (1,0)\in\R^2$ exists. On the left hand panel we see a halfspace $H_n \in \half(x)$ whose $\mu$-mass is almost $\D(x;\mu)$. The halfspace $H_n$ does not contain $a$. On the right hand panel the unique minimizing flag halfspace $F \in \flag(x)$ of $\mu$ at $x$ is displayed.}
\label{figure:flag halfspace}
\end{figure}
For certain theoretical properties of the halfspace depth of $\mu$ to be valid, the existence of minimizing halfspaces appears to be crucial. As a way to alleviate the issue of their possible non-existence, in \cite{Pokorny_etal2022} a novel concept of the so-called flag halfspaces was introduced. A \emph{flag halfspace} $F$ centered at a point $x\in\R^d$ is defined as any set of the form
\begin{equation}\label{flag halfspace}
F=\{x\} \cup \left( \bigcup_{i=1}^d \relint(H_i) \right).
\end{equation}
In this formula, $H_d\in\half(x)$ and for each $i\in \{1,\dots,d-1\}$, $H_i$ stands for an $i$-dimensional halfspace inside the subspace $\relbd(H_{i+1})$ such that $x\in\relbd(H_i)$. The collection of all flag halfspaces in $\R^d$ centered at $x\in\R^d$ is denoted by $\flag(x)$. An example of a flag halfspace in $\R^2$ is displayed in the right hand panel of Fig.~\ref{figure:flag halfspace}. That flag halfspace is a union of an open halfplane $H_2$ (light-colored halfplane) whose boundary passes through $x$, a halfline (thick halfline) in the boundary line $\bd(H_2)$ starting at $x$, and the point $x$ itself.
The results derived in the present paper lean on the following crucial observation, whose complete proof can be found in \cite[Theorem~2]{Pokorny_etal2022}.
\begin{lemma}\label{theorem:Pokorny}
For any $x\in\R^d$ and $\mu\in\Meas$ it holds true that
\[ \D\left(x;\mu\right)=\min_{F\in\flag(x)}\mu(F). \]
In particular, there always exists $F\in\flag(x)$ such that $\mu(F)=\D\left(x;\mu\right)$.
\end{lemma}
Any flag halfspace $F \in \flag(x)$ from Lemma~\ref{theorem:Pokorny} that satisfies $\mu(F)=\D\left(x;\mu\right)$ is called a \emph{minimizing flag halfspace} of $\mu$ at $x$, because it minimizes the $\mu$-mass among all the flag halfspaces from $\flag(x)$. Lemma~\ref{theorem:Pokorny} conveys two important messages. First, the halfspace depth $\D(x;\mu)$ can be introduced also in terms of flag halfspaces instead of the usual closed halfspaces in~\eqref{halfspace depth}, and the two formulations are equivalent. Second, in contrast to the usual minimizing halfspaces, which may fail to exist at certain points $x \in \R^d$, by Lemma~\ref{theorem:Pokorny} a minimizing flag halfspace of any $\mu$ at any $x$ always exists.
\subsection{Halfspace depth central regions}
The upper level sets of the halfspace depth function $\D(\cdot;\mu)$, given by
\begin{equation} \label{central region}
\Damu = \left\{ x \in \R^d \colon \depth{x}{\mu} \geq \alpha \right\} \mbox{ for }\alpha\geq 0,
\end{equation}
play the important role of multivariate quantiles in depth statistics. The set $\Damu$ is called the \emph{central region} of $\mu$ at level $\alpha$. All central regions are known to be convex and closed. The sets~\eqref{central region} are clearly also nested, in the sense that $\Damu \subseteq \D_\beta(\mu)$ for $\beta\leq \alpha$. Besides~\eqref{central region}, another collection of depth-generated sets of interest considered in \cite{Laketa_Nagy2021b,Pokorny_etal2022} is
\begin{equation*}
\Uamu = \left\{ x \in \R^d \colon \depth{x}{\mu} > \alpha \right\} \mbox{ for }\alpha\geq 0.
\end{equation*}
We conclude this collection of preliminaries with another result from \cite{Pokorny_etal2022}, which tells us that no set difference of the level sets $\Damu\setminus\Uamu$ can contain a relatively open subset of positive $\mu$-mass. That result lends an insight into the properties of the support of $\mu$, based on its depth function $\D\left(\cdot;\mu\right)$. It will be of great importance in the proof of our main result in Section~\ref{section:main}. The complete proof of the next technical lemma can be found in~\cite[Lemma~6]{Pokorny_etal2022}.
\begin{lemma}\label{main lemma for support}
Let $\mu \in \Meas$ and let $K \subset \R^d$ be a relatively open set of points of equal depth of $\mu$ that contains at least two points. Then $\mu(K) = 0$.
\end{lemma}
\section{Main Result} \label{section:main}
The preliminary Lemma~\ref{main lemma for support} hints that the mass of $\mu$ cannot be located in the interior of regions of constant depth. We refine and formalize that claim in the following Theorem~\ref{theorem:support}, which is the main result of the present work.
In part~\ref{support2} of Theorem~\ref{theorem:support} we bound the support of $\mu\in\Meas$, based on the information available in its depth function $\D\left(\cdot;\mu\right)$. We do so by showing that $\mu$ may be supported only in the closure of the boundaries of the central regions $\Damu$. That is a generalization of a similar result, known to be valid in the special case of finitely atomic measures $\mu\in\Meas$ \cite{Laketa_Nagy2021,Liu_etal2020,Struyf_Rousseeuw1999}. In the latter situation, all central regions $\Damu$ are convex polytopes, there is only a finite number of different polytopes in the collection $\left\{ \Damu \colon \alpha \geq 0 \right\}$, and the atoms of $\mu$ must be located in the vertices of the polytopes from that collection. Nevertheless, not all vertices of $\Damu$ are atoms of $\mu$; an algorithmic procedure for the reconstruction of the atoms, and the determination of their $\mu$-masses, is given in \cite{Laketa_Nagy2021}.
Extending the last observation about the possible location of atoms from finitely atomic measures to the general scenario, in part~\ref{support1} of Theorem~\ref{theorem:support} we show that all atoms of $\mu$ are contained in the extreme points\footnote{For a convex set $C \subset \R^d$, a face of $C$ is a convex subset $F \subseteq C$ such that $x, y \in C$ and $(x+y)/2 \in F$ implies $x, y \in F$. If $\{z\}$ is a face of $C$, then $z$ is called an extreme point of $C$.} of the central regions $\Damu$. Note that this indeed corresponds to the known theory for finitely atomic measures --- the extreme points of polytopes are exactly their vertices.
Our last observation in part~\ref{jump} of Theorem~\ref{theorem:support} is that each atom $x\in\R^d$ of $\mu$ induces a jump discontinuity in the halfspace depth, when considered on the straight line connecting any point of higher depth with $x$. This will be useful in detecting possible locations of atoms for general measures.
\begin{theorem} \label{theorem:support}
Let $\mu\in\Meas$.
\begin{enumerate}[label=(\roman*)]
\item \label{support2} Let $A$ be a subspace of $\R^d$ that contains at least two points. Then
\[ \Support{\mu|_A} \subseteq \cl_A\left(\bigcup_{\alpha \geq 0}\bd_A\left(\Damu\cap A\right)\right). \]
In particular, for $A = \R^d$ we have
\[ \Support{\mu} \subseteq \cl\left(\bigcup_{\alpha \geq 0}\bd\left(\Damu\right)\right). \]
\item \label{support1} Each atom $x$ of $\mu$ with $\depth{x}{\mu} = \alpha$ is an extreme point of $\D_\beta(\mu)$ for all $\beta \in (\alpha - \mu(\{x\}), \alpha]$.
\item \label{jump} For any $x \in \R^d$ with $\depth{x}{\mu} = \alpha$, any $z\in\Uamu$, and any $y \in \R^d$ such that $x$ belongs to the open line segment $L(y,z)$ between $y$ and $z$, it holds true that
\[ \D\left(y;\mu\right)\leq\D\left(x;\mu\right)-\mu(\{x\}).\]
\end{enumerate}
\end{theorem}
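To illustrate parts~\ref{support1} and~\ref{jump} in the simplest setting, the halfspace depth with respect to a finitely atomic measure in $\R^2$ can be evaluated by scanning closed halfspaces whose boundary passes through the query point. The following Python sketch (the function name and the finite direction grid are our illustrative choices, not part of the cited results) uses four atoms of mass $1/4$ at the vertices of a square; each atom has depth equal to its own mass and is a vertex, hence an extreme point, of the corresponding central region.

```python
import numpy as np

def halfspace_depth(x, atoms, weights, n_dir=720):
    """Approximate halfspace depth of x in R^2 with respect to a finitely
    atomic measure: the minimum mass of a closed halfspace whose boundary
    passes through x, scanned over n_dir normal directions."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_dir, endpoint=False)
    u = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # (n_dir, 2)
    # (atoms - x) @ u.T >= 0 marks atoms lying in the closed halfspace
    # with inner normal u and x on its boundary
    inside = (atoms - x) @ u.T >= -1e-12                   # (n_atoms, n_dir)
    return (weights[:, None] * inside).sum(axis=0).min()

# four atoms of mass 1/4 at the vertices of a square
atoms = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
w = np.full(4, 0.25)

d_center = halfspace_depth(np.zeros(2), atoms, w)   # deepest point: depth 1/2
d_vertex = halfspace_depth(atoms[0], atoms, w)      # an atom: depth = its mass 1/4
```

Scanning only halfspaces through $x$ is no restriction here, since the minimizing halfspace can always be taken with $x$ on its boundary; for a general configuration a finite grid yields an upper bound on the depth, which is exact in this example because the minimizing directions lie on the grid.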
The proof of Theorem~\ref{theorem:support} is given in the Appendix. Theorem~\ref{theorem:support} sheds light on the support and the location of the atoms of a measure. Its part~\ref{support2} tells us that if a depth function $\D\left(\cdot;\mu\right)$ attains at most countably many different values, and each level set $\Damu$ is a polytope, the mass of $\mu$ must be concentrated in the closure of the set of vertices of the level sets $\Damu$. A special case is, of course, the setup of finitely atomic measures treated in \cite{Struyf_Rousseeuw1999,Laketa_Nagy2021}.
\section{Examples} \label{section:examples}
We conclude this note by giving two examples. Parts~\ref{support1} and~\ref{jump} of Theorem~\ref{theorem:support} show a way, at least in special situations, to locate the atomic parts of measures. We start by reconsidering our motivating Example~\ref{example:flag}. The distribution $\mu\in\Meas[\R^2]$ is not purely atomic, and can be shown not to possess Dupin floating bodies. Thus, it is currently unknown whether its depth function $\D\left(\cdot;\mu\right)$ determines $\mu$ uniquely. In our first example of this section we show how Theorem~\ref{theorem:support} recovers the position of the atomic part of $\mu$. Then, in Example~\ref{example:Nagy} we argue that the general problem of determining the support, or the location of the atoms, of $\mu \in \Meas$ from its halfspace depth cannot be solved without further restrictions.
\addtocounter{example}{-1}
\begin{example}[continued]
Suppose that in Example~\ref{example:flag} we have $\mu(\{a\}) = \delta$ for $\delta \in (0,1/2)$ small enough, and that the non-atomic part of $\mu$ is $\nu\in\Meas[\R^2]$ uniform on the disk $B$, with $\nu(B) = 1$. Hence, $\mu(\R^2) = \nu(B) + \mu(\{a\}) = 1 + \delta$. We first compute the halfspace depth function $\D\left(\cdot;\mu\right)$ of $\mu$, and then show how to use Theorem~\ref{theorem:support} to find the atom $a$ of $\mu$ from its depth. The computation of the depth function is done by means of determining all the central regions~\eqref{central region} at levels $\beta \geq 0$ of $\mu$. We denote $\alpha = \D(a;\nu)$, and split our argument into three situations according to the behavior of the regions $\D_{\beta}(\mu)$: \begin{enumerate*}[label=(\roman*)] \item $\beta \leq \alpha$, where $a$ is contained in $\D_{\beta}(\nu)$; \item $\beta \in (\alpha, \alpha + \delta]$, where $a$ lies on the boundary of $\D_{\beta}(\mu)$; and \item $\beta > \alpha + \delta$, where $\D_{\beta}(\mu)$ does not contain $a$.\end{enumerate*} First note that because $\nu$ is uniform on the disk $B$, all non-empty depth regions $D_{\beta}(\nu)$ of $\nu$ are circular disks centered at the origin, and all the touching halfspaces\footnote{We say that $H\in\half$ is \emph{touching} $A\subset\R^d$ if $H\cap A\neq \emptyset$ and $\intr(H)\cap A=\emptyset$.} of $\D_{\beta}(\nu)$ carry $\nu$-mass exactly $\beta$.
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=\twofigb\linewidth]{Example1continued.eps} \qquad
\includegraphics[width=\twofigb\linewidth]{Cauchy.eps}
\end{center}
\caption{Left panel: The measure $\mu$ from Example~\ref{example:flag}, the sum of a uniform distribution on a disk and a single atom at $a \in \R^2$ (black point) with $\delta = 1/10$, together with several contours of its depth $\D\left(\cdot;\mu\right)$ (thick colored lines). The halfspace median is the red line segment in the middle of the plot. From the depth only, Theorem~\ref{theorem:support} allows us to determine the mass and the location of the atom. Two depth contours that share $a \in \R^2$ as an extreme point are plotted with boundaries in green. Right panel: Example~\ref{example:Nagy} with $d=2$. Several density contours of the measure $\mu \in \Meas[\R^2]$ (solid lines) and its atom (point at the origin), together with multiple contours of the corresponding depth $\D(\cdot;\mu) \equiv \D(\cdot; \nu)$ (dashed lines).}
\label{figure:atom}
\end{figure}
\smallskip\noindent
\textbf{Case I:} $\beta\leq \alpha$. For $\alpha = \depth{a}{\nu}$ we have that $\D_{\alpha}(\nu)$ is a disk centered at the origin containing $a$ on its boundary. Note that the halfspace depths of $\mu$ and $\nu$ remain the same outside $\D_{\alpha}(\nu)$, since the added atom $a$ does not lie in any minimizing halfspace of $x \notin \D_\alpha(\nu)$, so we have $\D_{\beta}(\mu)=\D_{\beta}(\nu)$ for all $\beta \leq \alpha$.
\smallskip\noindent
\textbf{Case II:} $\beta\in (\alpha,\alpha+\delta]$. We have $\depth{a}{\mu}=\alpha+\delta\geq \beta$, meaning that $a\in D_{\beta}(\mu)$. Because $\mu$ is obtained by adding mass to $\nu$, we must have $D_{\beta}(\nu)\subseteq D_{\beta}(\mu)$, and due to the convexity of the central regions \eqref{central region}, the convex hull $C$ of $D_{\beta}(\nu) \cup \{a\}$ must be contained in $D_{\beta}(\mu)$. Denote by $H \in \half(a)$ a touching halfspace of $D_{\beta}(\nu)$ that contains $a$ on its boundary. Then $\nu(H)=\beta$, and hence $\intr(H)\cap D_{\beta}(\mu)=\emptyset$. We obtain that $D_{\beta}(\mu)$ is equal to the convex hull $C$ of $D_{\beta}(\nu)\cup\{a\}$.
\smallskip\noindent
\textbf{Case III:} $\beta>\alpha+\delta$. In a manner similar to Case II one concludes that $D_{\beta}(\mu)$ is the convex hull of a circular disk $D_{\beta}(\nu)$ and $a$, intersected with the disk $D_{\beta-\delta}(\nu)$.
\smallskip
In order to complete the reconstruction of the atomic part of measure $\mu$ from Example~\ref{example:flag} based on its depth function, we present Lemma~\ref{lemma:Laketa}, which is a special case of a more general result (called the \emph{generalized inverse ray basis theorem}) whose complete proof can be found in \cite[Lemma~4]{Laketa_Nagy2021b}.
\begin{lemma} \label{lemma:Laketa}
Suppose that $\mu\in\Meas$, $\alpha > 0$, a point $x\notin \Damu$ and a face $F$ of $\Damu$ are given so that the relatively open line segment $L(x,y)$ does not intersect $\Damu$ for any $y \in F$. Then there exists a touching halfspace $H \in \half$ of $\Damu$ such that $\mu(\intr(H))\leq\alpha$, $x\in H$, and $F\subset\bd(H)$.
\end{lemma}
\smallskip\noindent
\emph{Reconstruction.} We now know the complete depth function $\D\left(\cdot;\mu\right)$ of $\mu$, see also Fig.~\ref{figure:atom}. From this depth only, we will locate the atoms of $\mu$ and their masses. The only point in $\R^2$ that is an extreme point of more than one depth region is certainly $a$, so that $a$ is the only possible candidate for an atom of $\mu$ by part~\ref{support1} of Theorem~\ref{theorem:support}. Take any $\beta\in(\alpha,\alpha+\delta)$. Then $D_{\beta}(\mu)$ is the convex hull of a circular disk and the point $a$ outside that disk, so its boundary contains a line segment $L(a,y_\beta)$ for $y_\beta\in \bd(D_{\beta}(\nu))$. Due to Lemma~\ref{lemma:Laketa}, there is a halfspace $H_{\beta} \in \half$ such that $L(a,y_\beta)\subset\bd(H_{\beta})$ and $\mu(\intr(H_{\beta}))\leq \beta < \alpha+\delta=\depth{a}{\mu} \leq \mu(H_\beta)$, the last inequality because $a \in H_\beta$. We obtain $\mu(\bd(H_{\beta}))\geq \alpha + \delta - \beta$. This is true for any $\beta \in (\alpha, \alpha + \delta)$, and for different $\beta_1, \beta_2 \in (\alpha,\alpha+\delta)$ we have $H_{\beta_1}\neq H_{\beta_2}$ with $a \in \bd(H_{\beta_i})$ and $\mu\left(\bd(H_{\beta_i})\right) \geq \alpha + \delta - \beta_i$, $i=1,2$. In conclusion, we obtain uncountably many different lines $\bd(H_\beta)$ of positive $\mu$-mass, all passing through $a$. That is possible only if $a$ is an atom of $\mu$, and $\mu(\{a\}) \geq \delta$. Theorem~\ref{theorem:support} again guarantees that $\mu(\{a\}) = \delta$ and that there is no other atom of $\mu$.
\end{example}
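Under the additional assumption that $B$ is the unit disk centered at the origin (so that the $\nu$-mass of a halfspace is a normalized circular-segment area), the depth function of $\mu$ can also be evaluated numerically and the computations above checked. In the Python sketch below, the atom position $a=(0.8,0)$, its mass $\delta=0.1$, the direction grid, and the helper names are our illustrative assumptions:

```python
import numpy as np

def seg_mass(c):
    """nu-mass of the closed halfspace {w : <u, w> >= c}, for nu uniform
    (total mass 1) on the unit disk: a normalized circular-segment area."""
    c = np.clip(c, -1.0, 1.0)
    return (np.arccos(c) - c * np.sqrt(1.0 - c * c)) / np.pi

def depth_mu(x, a, delta, n_dir=720):
    """Depth of x under mu = nu + delta * (atom at a), scanning closed
    halfspaces whose boundary passes through x."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_dir, endpoint=False)
    u = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    nu_mass = seg_mass(u @ x)                        # nu-mass of each halfspace
    atom_in = (u @ (a - x) >= -1e-12).astype(float)  # halfspaces carrying the atom
    return np.min(nu_mass + delta * atom_in)

a, delta = np.array([0.8, 0.0]), 0.1   # illustrative atom location and mass
alpha = seg_mass(0.8)                  # depth of a under nu alone

depth_at_atom = depth_mu(a, a, delta)  # equals alpha + delta, as in Case II
# crossing the atom from the deep side (z = origin) toward y = (0.9, 0)
# drops the depth by at least delta, as in part (iii) of the theorem
depth_beyond = depth_mu(np.array([0.9, 0.0]), a, delta)
```

The first quantity confirms $\depth{a}{\mu}=\alpha+\delta$, and the second satisfies $\D(y;\mu)\leq\D(a;\mu)-\delta$ for $y=(0.9,0)$ on the ray beyond the atom, which is exactly the jump of part~\ref{jump} of Theorem~\ref{theorem:support}.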
The complete Example~\ref{example:flag} gives a partial positive result toward the halfspace depth characterization problem, and promises methods allowing one to determine features of $\mu$ from its depth $\D\left(\cdot;\mu\right)$, at least for special sets of measures. The complete determination of the support or the atoms of $\mu$ from its depth is, however, a considerably more difficult problem, impossible to solve in full generality. What follows is an example, taken from \cite[Section~2.2]{Nagy2020c}, of mutually singular measures\footnote{Recall that $\mu, \nu \in \Meas$ are called \emph{singular} if there is a Borel set $A \subset \R^d$ such that $\mu(A) = \nu(\R^d\setminus A) = 0$.} $\mu, \nu \in \Meas$ sharing the same depth function.
\begin{example} \label{example:Nagy}
For $\mu_1 \in \Meas[\R^d]$ with independent Cauchy marginals and $\mu_2 \in \Meas[\R^d]$ the Dirac measure at the origin, define $\mu \in \Meas[\R^d]$ as the sum of $\mu_1$ and $\mu_2$ with weights $1/d$ and $1/2 - 1/(2d)$, respectively. The total mass of $\mu$ is hence $\mu\left(\R^d\right) = 1/2 + 1/(2d)$, and its support is $\R^d$. For the other distribution, take $\nu \in \Meas[\R^d]$ the probability measure supported on the union of the coordinate axes $A_i = \left\{ x = \left(x_1, \dots, x_d \right) \in \R^d \colon x_j = 0 \mbox{ for all }j \ne i \right\}$, $i=1,\dots,d$. The density $g$ of $\nu$ with respect to the one-dimensional Hausdorff measure on its support $\Support{\nu} = \bigcup_{i=1}^d A_i$ is given as a weighted sum of densities of univariate Cauchy distributions on $A_i$:
\[ g(x) = \frac{1}{d} \sum_{i=1}^d \frac{\I{x \in A_i}}{\pi(1+x_i^2)} \quad\mbox{for }x = \left(x_1, \dots, x_d\right)\in\R^d. \]
It can be shown \cite[Section~2.2]{Nagy2020c} that the depths of $\mu$ and $\nu$ coincide at all points $x = \left(x_1, \dots, x_d\right)$ in $\R^d$
\[
\depth{x}{\mu} = \depth{x}{\nu} =
\begin{cases}
\frac{1}{d} \left( \frac{1}{2} - \frac{{\arctan(\max_{i=1,\dots,d} \left\vert x_i \right\vert)}}{\pi} \right) & \mbox{if }x \in \R^d \setminus \{ 0 \}, \\
1/2 & \mbox{for }x = 0 \in \R^d.
\end{cases}
\]
The two measures $\mu$ and $\nu$ are, however, singular, since for $A = \Support{\nu}\setminus\{0\}$ we have $\mu(A) = \nu(\R^d \setminus A) = 0$. For an arbitrary finite Borel measure, it is therefore impossible to retrieve the full information about its support only from its depth function. For a visualization of the measure $\mu$ and its halfspace depth see Fig.~\ref{figure:atom}.
The same example demonstrates that, in general, neither the locations of the atoms of $\mu\in\Meas$ nor even their number can be recovered from the depth function $\D\left(\cdot;\mu\right)$ alone: the measure $\nu$ in Example~\ref{example:Nagy} has no atoms, but $\mu$ has a single atom at its unique halfspace median (the smallest non-empty central region~\eqref{central region}). Because of the very special position of the atom of $\mu$, it is impossible to use our results from parts~\ref{support1} and~\ref{jump} of Theorem~\ref{theorem:support} to decide whether the origin is an atom of $\mu$ or not.
\end{example}
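For $d=2$, the coincidence of the displayed formula with the depth of $\mu$ can be checked by Monte Carlo, again scanning closed halfspaces whose boundary passes through the query point. In the Python sketch below, the sample size, the seed, and the direction grid are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 200_000
X = rng.standard_cauchy((n, d))                  # sample from mu_1
w_cauchy, w_atom = 1.0 / d, 0.5 - 1.0 / (2 * d)  # weights of mu_1 and the Dirac at 0

def depth_mu_hat(x, n_dir=360):
    """Monte Carlo halfspace depth of x under mu = w_cauchy*mu_1 + w_atom*delta_0,
    scanning closed halfspaces whose boundary passes through x."""
    Xc = X - x
    best = 1.0
    for th in np.linspace(0.0, 2.0 * np.pi, n_dir, endpoint=False):
        u = np.array([np.cos(th), np.sin(th)])
        # estimated mu_1-mass of the halfspace plus the atom's contribution
        val = w_cauchy * (Xc @ u >= 0).mean() + w_atom * float(-x @ u >= 0)
        best = min(best, val)
    return best

x = np.array([1.0, 0.0])
closed_form = (1.0 / d) * (0.5 - np.arctan(1.0) / np.pi)  # the displayed formula: 1/8
estimate = depth_mu_hat(x)
```

The estimate agrees with the closed-form value $1/8$ at $x=(1,0)$ up to Monte Carlo error; the minimizing direction is the first coordinate axis, for which the halfspace $\{y \colon y_1 \geq 1\}$ excludes the atom at the origin.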
\section{Conclusion}
The halfspace depth has many applications, for example in classification or in nonparametric goodness-of-fit testing. However, in order to apply it properly, one needs to make sure that the measure $\mu$ is characterized by its halfspace depth function, so that the halfspace depth can be used to distinguish $\mu$ from other measures. For that reason, it is important to know which collections of measures satisfy this property. The partial reconstruction procedure provided in this paper may be used to narrow down the set of all possible measures that correspond to a given halfspace depth function. That can be used to guide the selection of an appropriate tool for depth-based analysis. The problem of determining those distributions that are uniquely characterized by their halfspace depth, however, remains open.
Detection of Change Points in Piecewise Polynomial Signals Using Trend Filtering
https://arxiv.org/abs/2009.08573

Abstract: While many approaches have been proposed for discovering abrupt changes in piecewise constant signals, few methods are available to capture these changes in piecewise polynomial signals. In this paper, we propose a change point detection method, PRUTF, based on trend filtering. By providing a comprehensive dual solution path for trend filtering, PRUTF allows us to discover change points of the underlying signal for either a given value of the regularization parameter or a specific number of steps of the algorithm. We demonstrate that the dual solution path constitutes a Gaussian bridge process that enables us to derive an exact and efficient stopping rule for terminating the search algorithm. We also prove that the estimates produced by this algorithm are asymptotically consistent in pattern recovery. This result holds even in the case of staircases (consecutive change points of the same sign) in the signal. Finally, we investigate the performance of our proposed method for various signals and then compare its performance against some state-of-the-art methods in the context of change point detection. We apply our method to three real-world datasets including the UK House Price Index (HPI), the GISS surface Temperature Analysis (GISTEMP) and the Coronavirus disease (COVID-19) pandemic.
\section{Discussion}
\label{sec:discussion.proj1}
This paper proposed an algorithm, PRUTF, to detect change points in piecewise polynomial signals using trend filtering. We demonstrated that the dual solution path produced by the PRUTF algorithm forms a Gaussian bridge process for any given value of the regularization parameter $\lambda$. This conclusion allowed us to derive an efficient stopping rule for terminating the search algorithm, which is vital in change point analysis. We then proved that when there is no staircase block in the signal, the method guarantees consistent pattern recovery. However, it fails to do so when there is a staircase in the underlying signal. To address this shortcoming, we suggested a modification in the procedure of constructing the solution path, which effectively prevents false discovery of change points. Evidence from both simulation studies and real data analysis reveals the accuracy and the high detection power of the proposed method.
\section{Introduction}
\label{sec:introduction.proj1}
The problem of change point detection is more than sixty years old. It was first studied by \cite{page1954continuous,page1955test}, and since then, has been of interest to many scientists including statisticians. Many of the earlier developments concerned the existence of at most one change point; however, considerable attention in recent years has been given to multiple change point analysis, which has found applications in many fields such as finance and econometrics, \cite{bai2003computation, hansen2001new}, bioinformatics and genomics, \cite{futschik2014multiscale, lavielle2005using}, climatology, \cite{liu2010impacts, pezzatti2013fire}, and technology, \cite{siris2004application, oudre2011segmentation, lung2012distributed, ranganathan2012pliss, galceran2017multipolicy}. Consequently, there is a vast and rich literature on the subject. In the following, we only review a body of literature on a retrospective change point framework closely related to our work and refer the interested readers to \cite{eckley2011analysis, lee2010change, horvath2014extensions, truong2018review} for more comprehensive reviews.
We consider the univariate signal plus noise model
\begin{align}\label{fmodel.proj1}
y_{_i}=f_{_i}+\varepsilon_{_i}, \qquad\qquad i=1,\,\ldots,\,n,
\end{align}
where $f_{_i}=f(i/n)$ is a deterministic and unknown signal with equally spaced input points over the interval $[0,\,1]$. The error terms $\varepsilon_{_1},\,\ldots,\,\varepsilon_{_n}$ are assumed to be independently and identically distributed Gaussian random variables with mean zero and finite variance $\sigma^2$.
We assume that $f(\cdot)$ undergoes $J_{_0}$ unknown and distinct changes at point fractions $0=\omega_{_0}<\omega_{_1}< \ldots< \omega_{_{J_0}}< \omega_{_{J_0+1}}=1$, where the number of change point fractions, $J_{_0}$, can grow with the sample size $n$. Additionally, we assume that $f(\cdot)$ is a piecewise polynomial function of order $r \in \mbb N$. These assumptions imply that, associated with $\omega_{_0}, \ldots, \omega_{_{J_0+1}}$, there are change point locations $0=\tau_{_0}<\tau_{_1}< \ldots< \tau_{_{J_0}}< \tau_{_{J_0+1}}=n$, which partition the entire signal $\mbf f=(f_{_1}, \ldots, f_{_n})$ into $J_0+1$ segments. More specifically, any subsignal of $\mbf f$ within the segments created by the change points follows an $r$-degree polynomial structure, with or without a continuity constraint at the change points. For more detail, see Figure \ref{fig:coor-removal}. Change in the level of a piecewise constant signal, known as the canonical multiple change point problem, and change in the slope of a piecewise linear signal are examples of the problem under consideration in this paper. In change point analysis, the objective is to estimate the number of change points, $J_{_0}$, as well as their locations $\bsy \tau=\{\tau_{_1},\,\ldots,\,\tau_{_{_{J_0}}}\}$, based on the observations $\mbf y=(y_{_1},\,\ldots,\,y_{_n})$.
The canonical multiple change point problem, where the signal $\mbf f$ is modelled as a piecewise constant function, has been extensively studied in the literature. In this framework, there are many approaches and we only attempt to list a selection of them here. The majority of these techniques seek to identify all change points at once by solving an optimization problem consisting of a loss function, often the negative log-likelihood, and a penalty criterion. \cite{yao1988estimating, yao1989least} used the square error loss along with the Schwarz Information Criterion (SIC) as a penalty function to consistently estimate the bounded number of change points and their locations for the data drawn from a Gaussian distribution. Within the same setting,
the incorporation of various penalty functions, including the Modified Information Criterion (MIC) \cite{pan2006application}, the modified Bayesian Information Criterion (mBIC) \cite{zhang2007modified}, the Simultaneous Information Theoretic Criterion (SITC) \cite{wu2008simultaneous} and the modified SIC \cite{ciuperca2011general, ciuperca2014model}, has been studied. Specific algorithms such as Optimal Partitioning \cite{auger1989algorithms}, Segment Neighbourhood \cite{jackson2005algorithm}, and pruning approaches such as PELT \cite{killick2012optimal} and PDPa \cite{rigaill2015pruned} have been developed to solve such optimization problems.
Apart from penalty-based techniques, another frequently used class of change point detection approaches encompasses greedy search procedures, which search sequentially for a single change point at a time. The most popular methods in this class are Binary Segmentation \cite{vostrikova1981detecting} and its variants, such as Circular Binary Segmentation (CBS) \cite{olshen2004circular} and Wild Binary Segmentation (WBS) \cite{fryzlewicz2014wild}. In recent years, researchers have attempted to improve Binary Segmentation's performance from statistical and computational viewpoints. \cite{fryzlewicz2018tail} suggested a backward (bottom-up) mechanism, called Tail Greedy Unbalanced Haar (TGUH), which is computationally fast and statistically consistent in estimating both the number and the locations of change points.
Also, \cite{fryzlewicz2018detecting} introduced Wild Binary Segmentation 2 (WBS2) to deal with the shortcomings of WBS in datasets with frequent changes. It has been shown that the method is fast in run time and accurate in detection.
Beyond the canonical change point problem, signals in which $f$ is modelled as a piecewise polynomial of order $r\geq 1$ have attracted less attention in the literature despite many applications. For instance, piecewise linear signals are applied in monitoring patient health (\cite{aminikhanghahi2017survey}, \cite{stasinopoulos1992detecting}), climate change (\cite{robbins2011changepoints}), and finance (\cite{bianchi1999comparison}). In such a framework, \cite{bai1997estimating} introduced a method based on Wald-type sequential tests, and \cite{maidstone2017optimal} devised a dynamic programming applied to an $\ell_{_0}$-penalized least square procedure. In continuous piecewise linear models, \cite{kim2009ell_1} developed a methodology called $\ell_{_1}$-trend filtering. Furthermore, \cite{baranowski2019narrowest} put forward the method of Narrowest Over Threshold (NOT), and \cite{anastasiou2019detecting} developed an approach called Isolate-Detect (ID) which both provide asymptotically consistent estimators of the number and locations of change points.
Our goal in this paper is to introduce a unifying method covering the canonical change point problem and beyond. More precisely, the method is capable of detecting change points in piecewise polynomial signals of order $r$ ($r=0,\,1,\,2,\,\ldots$), with and without a continuity constraint at the locations of change points.
The detection of change points in a sequence of data can be formulated as a penalized regression fitting problem. According to our notation, the quantity $f_{\tau}-f_{\tau+1}$ is nonzero if the signal $f$ undergoes a change at point $\tau$, and is zero otherwise. Moreover, if we assume that change points are sparse, that is, the number of locations where $f$ changes, $J_{_0}$, is much smaller than the number of observations $n$, change points can be estimated using the one-dimensional fused lasso problem
\begin{align*}
\min_{\mbf f\in \mathbb{R}^n}
\frac{1}{2}\, \big\|\,\mbf y- \mbf f \, \big\|_2^2 \,+\, \lambda\, \ssum{1}{n-1} \big| f_{_{i+1}}-f_{_i} \big|\,,
\end{align*}
where $\mbf f=(f_{_1},\,\ldots,\,f_{_n})$.
This formulation of the canonical change point problem was first considered in \cite{huang2005detection} and was applied to analyze a DNA copy number dataset. \cite{harchaoui2010multiple} considered the same formulation and proved the consistency of the respective change point estimates when the number of change points is bounded. Employing the sparse fused lasso, which combines the $\ell_{_1}$-norm and the total variation seminorm penalties, \cite{rinaldo2009properties} proposed a sparse piecewise constant fit and established the consistency of the corresponding estimates when the variance of the noise terms vanishes and the minimum magnitude of jumps is bounded from below. However, \cite{rojas2014change} argued that the consistency results achieved by \cite{rinaldo2009properties} are incorrect when a frequently viewed pattern, called {\it staircase}, exists in the signal. The staircase phenomenon occurs in a piecewise constant model when there are either two consecutive downward jumps or two consecutive upward jumps in its mean structure. The staircase pattern will be discussed in more detail in Section \ref{sec:pattern.recovery.proj1}. Additionally, \cite{qian2016stepwise} showed that the lasso problem of \cite{tibshirani1996regression}, when derived by transforming the fused lasso, does not satisfy the Irrepresentable Condition \cite{zhao2006model} that is necessary and sufficient for exact pattern recovery. In particular, \cite{qian2016stepwise} proposed an approach called preconditioned fused lasso, based on the puffer transformation of \cite{jia2015preconditioning}, and established that it can recover the exact pattern with probability approaching one.
A similar approach to that of the piecewise constant signals can be considered for estimating change points in piecewise polynomial signals.
In particular, a positive integer $\tau$ is a change location in an $r$-th degree piecewise polynomial signal $f$ if the $\tau$-th element of the vector $\mbf D^{(r+1)}\, \mbf f$ is non-zero, denoted by $[\,\mbf D^{(r+1)}\, \mbf f\,]_{\tau}\neq 0$. Here, $\mbf D^{(r+1)}$ is a penalty matrix that will be defined in Section \ref{sec:dual.tf.proj1}. Hence, change points can be estimated from the nonzero elements of $\mbf D^{(r+1)}\, \widehat{\mbf f}$, where $\widehat{\mbf f}$ is the solution of
\begin{align}\label{tf.obj.proj1}
\min_{\mbf f\in \mathbb{R}^{^n}}
\frac{1}{2}\|\,\mathbf{y}-\mbf f \,\|_{_2}^2+\lambda \,\|\mathbf{D}^{(r+1)}\mbf f \|_{_1}\,.
\end{align}
The aforementioned problem was first studied by \cite{steidl2006splines} in the context of image processing and was called {\it higher order total variation regularization}. It was later rediscovered by \cite{kim2009ell_1} and termed {\it trend filtering} in the nonparametric regression setting. \cite{kim2009ell_1} specifically explored linear trend filtering ($r=1$) which fits piecewise linear models. \cite{tibshirani2014adaptive} extensively studied trend filtering and compared its performance with smoothing splines \cite{green1993nonparametric} and locally adaptive regression splines \cite{mammen1997locally} in the context of nonparametric regression. \cite{tibshirani2014adaptive} also established that trend filtering enjoys desirable and strong theoretical properties of locally adaptive regression splines while being computationally less intensive due to its banded penalty matrix. Moreover, trend filtering has an adaptive knot selection property, which makes it well suited for change point analysis.
From a computational and algorithmic standpoint, \cite{kim2009ell_1} described the Primal-Dual Interior Point (PDIP) method for deriving the estimates of the linear trend filtering problem at a fixed value of $\lambda$. This can easily be carried over to the trend filtering problem of any order.
\cite{wang2014falling} suggested an algorithm based on a falling factorial basis while \cite{ramdas2016fast} derived an algorithm based on the Alternating Direction Method of Multipliers (ADMM) discussed in \cite{boyd2011distributed}.
The computational complexity of all these algorithms is of order $O(n)$.
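As a minimal illustration of the fixed-$\lambda$ computation (a sketch under our own choices, not the PRUTF path algorithm developed below, and far less efficient than PDIP or ADMM): for $r=0$, the fused lasso can be attacked through its box-constrained dual, $\min_{\|u\|_\infty\leq\lambda}\tfrac{1}{2}\|\mbf y-(\mathbf{D}^{(1)})^{T}u\|_{_2}^2$, by projected gradient descent; coordinates of the dual solution lying on the boundary $\pm\lambda$ mark the change points.

```python
import numpy as np

def fused_lasso_dual(y, lam, n_iter=30000):
    """Projected gradient on the dual of the fused lasso (trend filtering
    with r = 0): minimize 0.5*||y - D^T u||^2 subject to |u_i| <= lam,
    where D is the first-difference matrix.  Returns the primal fit
    f_hat = y - D^T u and the dual variables u."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)   # (n-1) x n first-difference matrix
    Dy = D @ y
    u = np.zeros(n - 1)
    step = 0.25                      # 1/L with L = lambda_max(D D^T) < 4
    for _ in range(n_iter):
        grad = D @ (D.T @ u) - Dy
        u = np.clip(u - step * grad, -lam, lam)
    return y - D.T @ u, u

# noiseless piecewise constant signal with one jump after position 20
y = np.concatenate([np.zeros(20), 5.0 * np.ones(20)])
f_hat, u = fused_lasso_dual(y, lam=10.0)
change_points = np.where(np.abs(u) > 10.0 - 1e-6)[0]  # dual boundary coordinates
```

On this example, the only dual coordinate on the boundary is the one at the jump (0-based index 19), and the primal fit consists of the two segment levels shrunken toward each other by $\lambda/20$, namely $0.5$ and $4.5$.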
In this paper, we develop a new methodology called {\it Pattern Recovery Using Trend Filtering} (PRUTF) for identifying unknown change points in piecewise polynomial signals with no continuity restriction at change point locations. Therefore, a change point is defined as a sudden jump in the signal and all its derivatives up to order $r$. Figure \ref{fig:coor-removal} displays such change points for various $r$. In this paper, we make the following contributions.
\begin{itemize}
\item We propose a generic algorithm for the dual solution path of trend filtering along the regularization parameter. This solution path, whose basic idea is borrowed from \cite{tibshirani2011solution}, enables us to determine change points at each level of the regularization parameter. Our algorithm, PRUTF, is different from that of \cite{tibshirani2011solution} in that we remove $(r+1)$ coordinates of the dual variables after identifying each change point. This adjustment to the algorithm allows us to have independent dual variables between each pair of neighbouring change points. Besides, the elimination of $(r+1)$ coordinates at each step leads to a faster implementation of the algorithm.
\item We establish a stopping criterion that plays an essential role in the PRUTF algorithm for finding change points. Notably, we show that the dual variables of trend filtering between consecutive change points constitute a Gaussian bridge process. This finding allows us to introduce a threshold for terminating our proposed algorithm.
\item If the signal contains a staircase pattern, we prove that the method is statistically inconsistent, which makes it unfavourable. After explaining the reason for this failure, we modify the PRUTF algorithm to produce estimates that are consistent in terms of both the number and the locations of change points.
\end{itemize}
This paper is organized as follows: in Section \ref{sec:notations.concepts.proj1}, we first describe how to characterize the dual optimization problem of trend filtering. In Section \ref{sec:solution-path-algorithm}, we develop our main algorithm, PRUTF, to use in constructing the dual solution path of trend filtering and, in turn, identifying the locations of change points. Section \ref{sec:property.solution.path.proj1} discusses the properties of this dual solution path. We establish that the dual variables derived from the solution path form a Gaussian bridge process, which makes them favourable for statistical inference. Applying these properties, we develop a stopping rule for the change point search algorithm in Section \ref{sec:stop.rule.proj1}. The quality of the PRUTF algorithm is validated in terms of pattern recovery of the true signal in Section \ref{sec:pattern.recovery.proj1}. It is established that the proposed technique in its naive form fails to consistently identify the true signal when a special pattern, called staircase, is present in the signal. Section \ref{sec:modified.trend.filtering.proj1} elaborates on how to modify the algorithm in order to estimate the true pattern consistently. Simulation results and real-world applications are presented in Section \ref{sec:simulation.proj1}. We explore the performance of our proposed method for signals with frequent change points as well as models with dependent error terms in Section \ref{sec:model_misspecification.proj1}. We conclude the paper with a discussion in Section \ref{sec:discussion.proj1}.
\section{Notations and Fundamental Concepts}\label{sec:notations.concepts.proj1}
\subsection{Notations}\label{sec:notations.proj1}
We begin this section with setting up notations that will be used throughout this article. For an $m\times n$ matrix $\mbf A$, we denote its rows by $\mbf A_{_1},\ldots,\mbf A_{_m}$ and express the matrix as $\mbf A=(\mbf A_{_1}^{^T},\ldots,\mbf A_{_m}^{^T})^T$. Now for the set of indices $\mca I=\{i_{_1},\,\ldots,\,i_{_k}\}\subseteq\{1,\,\ldots,\,m\}$, the notation $\mbf A_{_{\mca I}}=(\mbf A_{i_{_1}}^{^T},\,\ldots,\,\mbf A_{i_{_k}}^{^T})^T$ represents the submatrix of $\mbf A$ whose row labels are in the set $\mca I$. In a similar manner, for a vector $\bsy a$ of length $m$, we let $\bsy a_{_{\mca I}}=( a_{_{i_{_1}}},\ldots, a_{_{i_{_k}}})^{^T}$ denote a subvector of $\bsy a$ whose coordinate labels are in $\mca I$. We write $\mbf A_{_{-\mca I}}$ and $\bsy a_{_{-\mca I}}$ to denote $\mbf A_{_{\{1,\,\ldots,\,m\} \backslash \mca I}}$ and $\bsy a_{_{\{1,\,\ldots,\,m\} \backslash \mca I}}$\,, respectively, where
$\mca J\backslash\mca I$ is the set of indices in $\mca J$ but not in $\mca I$.
Furthermore, the notation $[\mbf A]_i$ is used to select the $i$-th row of $\mbf A$, and $[\mbf A]_{ij}$ its $(i,j)$-th element. Also, $[\bsy a]_i$ extracts the $i$-th element of the vector $\bsy a$. We write $\diag{\mbf A}$ to denote the vector of the main diagonal entries of the matrix $\mbf A$. Moreover, for a real number $x$, $\lfloor x\rfloor$ denotes the greatest integer less than or equal to $x$, and $\lceil x \rceil$ denotes the least integer greater than or equal to $x$. For a set $A$, the indicator function is denoted by $\mathbbm{1}(A).$
\subsection{The Dual Problem of Trend Filtering}
\label{sec:dual.tf.proj1}
Recall the trend filtering problem
\begin{align}\label{tf2.obj.proj1}
\min_{\mbf f\in \mathbb{R}^{^n}}
\frac{1}{2}\|\mathbf{y}-\mbf f \|_{_2}^2+\lambda \|\mathbf{D}^{(r+1)}\mbf f \|_{_1},
\end{align}
where $\lambda\geq 0$ is the regularization parameter for controlling the effect of smoothing, and the $(n-r-1)\times n$ penalty matrix $\mathbf{D}^{(r+1)}$ is the difference operator of order $(r+1)$. For $r=0$, the first order difference matrix $\mathbf{D}^{(1)}$ is defined as
\begin{align*}
\mathbf{D}^{(1)}=\begin{pmatrix}
-1 & 1 & 0 & \ldots & 0 & 0 \\
0 & -1 & 1 & \ldots & 0 & 0 \\
\vdots & & & & & \vdots \\
0 & 0 & 0 & \ldots & -1 & 1 \\
\end{pmatrix},
\end{align*}
and for $r\geq 1$, the difference operator of order $r+1$ can be computed recursively as $\mathbf{D}^{(r+1)}=\mathbf{D}^{(1)}\times \mathbf{D}^{(r)}$. Notice that, in this matrix multiplication, we consider only the submatrix consisting of the first $n-r-1$ rows and $n-r$ columns of the matrix $\mathbf{D}^{(1)}$, so that the dimensions conform. Figure \ref{fig:tf-splines} displays the trend filtering fits for $r=1,\,2,\,3$ for simulated data.
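As an illustrative aside (not part of the paper's development), the recursion above can be sketched in a few lines of NumPy; the helper name `diff_matrix` is ours, not from the paper:

```python
import numpy as np

def diff_matrix(n, order):
    """Difference operator D^(order) of shape (n - order) x n.

    Built by the recursion in the text, D^(r+1) = D^(1) x D^(r), where at
    each step the left factor D^(1) is truncated to the conforming size.
    """
    D = np.eye(n)
    for k in range(order):
        # (n-k-1) x (n-k) first-difference matrix, rows of the form (-1, 1)
        D1 = np.diff(np.eye(n - k), axis=0)
        D = D1 @ D
    return D

D2 = diff_matrix(6, 2)   # order r + 1 = 2, shape (4, 6)
```

Each row of `D2` is the second-difference stencil $(1,\,-2,\,1)$ shifted along the diagonal; applying it to a linear sequence returns zeros, which is why the penalty in \eqref{tf2.obj.proj1} leaves polynomial trends of degree $r$ unpenalized.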
\begin{figure}[!ht]
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/sigspln1.jpg}
\caption{Linear}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/sigspln2.jpg}
\caption{Quadratic}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/sigspln3.jpg}
\caption{Cubic}
\end{subfigure}
\caption[]{Trend filtering solutions for $r=1,\,2,\,3$ producing (a) piecewise linear, (b) piecewise quadratic and (c) piecewise cubic fits, respectively.}
\label{fig:tf-splines}
\end{figure}
Although the objective function in the $r$-th order trend filtering \eqref{tf2.obj.proj1} is strictly convex, and the minimization therefore has a unique solution, the penalty term is not differentiable in $\mbf f $, so solving the optimization in its current form is difficult.
To overcome this difficulty, we follow the argument in \cite{tibshirani2011solution} and convert this optimization problem into its dual form. Since the objective function in the primal problem is strictly convex with no constraint, strong duality holds, meaning that the optimal values of the primal and dual problems coincide.
The trend filtering problem \eqref{tf2.obj.proj1} can be rewritten as
\begin{equation*}
\min_{\mbf f \in \mathbb{R}^{^n}}
\frac{1}{2}\|\mathbf{y}-\mbf f \|_{_2}^2+\lambda \|\mathbf{z}\|_{_1}, \quad \text{subject to } \mathbf{z}=\mathbf{D}\mbf f \,,
\end{equation*}
where, for ease in the notation, we use $\mathbf{D}=\mathbf{D}^{(r+1)}$. For any given $\lambda>0$, the Lagrangian is
\begin{align*}
\mathcal{L}(\mbf f ,\,\mathbf{z},\,\mathbf{u})&=\frac{1}{2}\|\mathbf{y}-\mbf f \,\|_{_2}^2+\lambda \|\mathbf{z}\|_{_1}+\mathbf{u}^T(\mathbf{D}\mbf f -\mathbf{z})
\end{align*}
and, thus the dual function is given by
\begin{equation*}
g(\mathbf{u})= \inf_{\mbf f \in \mathbb{R}^{^n},\,\mathbf{z}\in\mathbb{R}^{^m}}\mathcal{L}(\mbf f ,\,\mathbf{z},\,\mathbf{u}),
\end{equation*}
which is a concave function defined on $\mathbb{R}^{^m}$, where $m=n-r-1$ and takes values in the extended real line $\mathbb{R}\cup\lbrace -\infty,\,\infty\rbrace$. The vectors $\mbf f $ and $\mathbf{u}$ are called the primal and dual variables, respectively.
Taking the derivative of the Lagrangian $\mathcal{L}(\mbf f ,\,\mathbf{z},\,\mathbf{u})$ with respect to $\mbf f $ and setting it to be equal to zero, we obtain
\begin{align}\label{primal.dual.exact.proj1}
\mbf f =\mathbf{y-D}^T\mathbf{u}.
\end{align}
Now substituting this back into the Lagrangian $\mathcal{L}(\mbf f ,\,\mathbf{z},\,\mathbf{u})$, and performing certain algebraic manipulations, we obtain
\begin{align*}
\mathcal{L}^\ast(\mathbf{z},\,\mathbf{u})&=\inf_{\mbf f \in \mathbb{R}^{^n}}\mathcal{L}(\mbf f ,\,\mathbf{z},\,\mathbf{u})\\
&=-\frac{1}{2}\|\mathbf{y}-\mathbf{D}^T\mathbf{u}\|_{_2}^2+\frac{1}{2}\|\mathbf{y}\|^2+\lambda \|\mathbf{z}\|_{_1}-\mathbf{u}^T\mathbf{z}\,.
\end{align*}
Minimizing $\mathcal{L}^\ast(\mathbf{z},\,\mathbf{u})$, or equivalently maximizing $\mathbf{u}^T\mathbf{z}-\lambda \|\mathbf{z}\|_{_1}$, with respect to $\mathbf{z}\in\mathbb{R}^{^m}$ leads us to the dual function $g(\mathbf{u})$. Notice that $\sup\limits_{\mathbf{z}}\lbrace\mathbf{u}^T\mathbf{z}-\lambda \|\mathbf{z}\|_{_1}\rbrace$ is the conjugate of the function $\lambda \|\mathbf{z}\|_{_1}$ in the context of conjugate convex functions. See \cite{brezis2010functional} and \cite{boyd2004convex}. This conjugate function is given by
\begin{align*}
\sup\limits_{\mathbf{z}}\lbrace\mathbf{u}^T\mathbf{z}-\lambda \|\mathbf{z}\|_{_1}\rbrace=\begin{cases}
0 & \text{ if }\|\mathbf{u}\|_{_\infty}\le \lambda\\
\infty & \text{ otherwise\,.}
\end{cases}
\end{align*}
From all these, the dual function is given as
\begin{align*}
g(\mathbf{u})= -\frac{1}{2}\|\mathbf{y}-\mathbf{D}^T\mathbf{u}\|_{_2}^2+\frac{1}{2}\|\mathbf{y}\|^2 \quad \textrm{ for }\quad \|\mathbf{u}\|_{_\infty}\leq\lambda\,,
\end{align*}
and, thus the dual problem is to find the maximum of the dual function $g(\mathbf{u})$. This is equivalent to
\begin{equation}\label{tf.dual.obj.proj1}
\min\limits_{\mathbf{u}\in \mathbb{R}^{^m}} \frac{1}{2}\|\mathbf{y}-\mathbf{D}^T\mathbf{u}\|_{_2}^2 \quad \textrm{subject to }\quad \|\mathbf{u}\|_{_\infty}\leq\lambda\,.
\end{equation}
The constraint in \eqref{tf.dual.obj.proj1} is an $\ell_{_\infty}$-ball, a hypercube centered at the origin with vertices given by the set $\lbrace -\lambda,\,+\lambda\rbrace^{m}$. Since the matrix $\mbf D$ has full row rank, the problem \eqref{tf.dual.obj.proj1} is strictly convex and has a unique solution, see \cite{ali2019generalized}. In addition, notice that the dimension of the dual vector $\mbf u$ is $m$, which is smaller than that of the primal vector $\mbf f $ and may lead to relatively faster computations. The connection between the primal and the dual solutions is given by the equations
\begin{align}\label{primal.to.dual}
\hspace{-1.45cm}\widehat{\mbf u}_{_\lambda}=\lambda \,\widehat{\bsy\gamma},
\end{align}
\vspace{-1.2cm}
\begin{align}\label{dual.to.primal}
\widehat{\mbf f }_{_\lambda}=\mathbf{y}-\mathbf{D}^T\widehat{\mathbf{u}}_{_\lambda}\,,
\end{align}
where $\widehat{\bsy\gamma} \in \mathbb{R}^{^m}$ is a subgradient of $\| \mbf x\|_{_1}$ computed at $\mbf x=\mbf D\widehat{\mbf f }_{_\lambda}$. This subgradient is given by
\begin{align}\label{gamma.subgrad}
\widehat{\gamma}_{_i}\in \left\{
\begin{array}{lcl}
\{+1\} & \textrm{if} & [\mathbf{D\widehat{\mbf f }_{_\lambda}}]_i>0 \\
\{-1\} & \textrm{if} & [\mathbf{D\widehat{\mbf f }_{_\lambda}}]_i<0 \\
\,[-1,+1] & \textrm{if} & [\mathbf{D\widehat{\mbf f }_{_\lambda}}]_i=0\,.
\end{array}
\right.
\end{align}
The statements in Equations \eqref{primal.to.dual}-\eqref{gamma.subgrad} are equivalent to the KKT optimality conditions of the primal problem \eqref{tf2.obj.proj1}.
The dual problem \eqref{tf.dual.obj.proj1} demonstrates that $\mbf D^T\widehat{\mbf u}_{_\lambda}$ is the projection, $P_{_{\mbb C}}(\mbf y)$, of $\mathbf{y}$ onto the convex polyhedron (or hypercube here) $\mathbb{C}=\{\mbf x\in \mathbb{R}^{^m}:\, \|\mbf x\|_{_\infty}\leq \lambda\}\,$.
From this, the primal solution \eqref{dual.to.primal} can be rewritten in the form of $(\mbf I- P_{_{\mathbb{C}}})\,(\mbf y)$, representing the residual projection map of $\mbf y$ onto the polyhedron $\mathbb{C}$.
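As a concrete sanity check (not the paper's path algorithm), the dual problem \eqref{tf.dual.obj.proj1} is a box-constrained least squares problem, so for a single small instance it can be handed to a generic solver such as SciPy's `lsq_linear`; the signal, noise level and value of $\lambda$ below are illustrative choices:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n, lam = 30, 5.0

# Noisy piecewise constant signal (the fused-lasso case, r = 0)
f_true = np.concatenate([np.zeros(15), 4.0 * np.ones(15)])
y = f_true + rng.normal(scale=0.5, size=n)

D = np.diff(np.eye(n), axis=0)            # D^(1), shape (n-1) x n

# Dual problem: min 0.5 ||y - D^T u||_2^2  subject to  ||u||_inf <= lam
res = lsq_linear(D.T, y, bounds=(-lam, lam))
u_hat = res.x

# Primal solution f_hat = y - D^T u_hat, i.e. the residual (I - P_C)(y)
f_hat = y - D.T @ u_hat
```

The coordinates of `u_hat` sitting at $\pm\lambda$ are the boundary coordinates; the path algorithm of the next section recovers the solutions for all values of $\lambda$ at once rather than one $\lambda$ at a time.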
Our idea of applying trend filtering to discover change points in piecewise polynomial signals is inspired by \cite{rinaldo2009properties} and its correction \cite{rinaldo2014corrections}, in which change point detection is studied using fused lasso. Besides extending to piecewise polynomial signals, the novelty of our work is in providing an exact stopping criterion, which is based on the Gaussian bridge property of the trend filtering dual variables. In addition, we propose an algorithm which, unlike that proposed in \cite{rinaldo2009properties}, always produces consistent change points even in the presence of staircase patterns.
\section{Solution Path of Trend Filtering and PRUTF Algorithm}\label{sec:solution-path-algorithm}
In this section, we construct and study the solution path of dual variables $\widehat{\mbf u}_{\lambda}$ as the regularization parameter decreases from $\lambda=\infty$ to $\lambda=0$. In the following, the PRUTF algorithm is given to compute the entire dual solution path. This dual solution path identifies the corresponding primal solution using \eqref{dual.to.primal}. For any given $\lambda$, we call a coordinate of $\widehat{\mbf u}_{\lambda}$ a boundary coordinate if it lies on the boundary of the hypercube $\mathbb{C}= \big\{ \mbf x\in \mathbb{R}^{^m}:\, \|\mbf x\|_{_\infty}\leq \lambda \big\}$, meaning that its absolute value equals $\lambda$. In the process of constructing the solution path, for any $\lambda$, we trace several sets, introduced below.
\begin{itemize}
\item The set $\mca B=\mca B(\lambda)$, called the boundary set, contains the boundary coordinates identified by $\widehat{\mbf u}_{\lambda}$.
\item The vector $\mbf s_{\mca B}=\mbf s_{\mca B}(\lambda)$, called the sign vector, represents collectively the signs of the boundary points in $\mca B(\lambda)$.
\item The set $\mca A=\mca A(\lambda)$, called the augmented boundary set, contains the boundary coordinates in $\mca B(\lambda)$ as well as the first $r_{_a}=\lfloor (r+1)/2\rfloor$ coordinates immediately after.
\item The vector $\mbf s_{\mca A}=\mbf s_{\mca A}(\lambda)$ represents collectively the signs of the augmented boundary points in $\mca A(\lambda)$.
\end{itemize}
In the following, we discuss the need for the augmented boundary set $\mca A$. We begin by studying the structure of the dual vector $\mbf u=\mbf D\mbf f$ in a piecewise polynomial signal of order $r$, where the signal is partitioned into a number of blocks defined by the position of the change points. Because the signal $f$ is a piecewise polynomial of order $r$, to compute the $i$-th coordinate of the vector $\mbf{u}$, we need $r_{_b}=\lceil (r+1)/2\rceil-1$ points directly before the $i$-th element of $\mbf{f}$ as well as $r_{_a}=\lfloor (r+1)/2\rfloor$ points immediately after that. Consequently, the first $r_{_a}$ elements of $\mbf D\mbf f$ within each block cannot be computed. Moreover, within each block, the last $r_{_b}+1$ elements of $\mbf D\mbf f$ are all nonzero due to the existence of a change point. This observation is depicted in Figure \ref{fig:coor-removal} for $r=0,\, 1,\, 2,\, 3$. To explain this point clearly, consider the case of $r=2$ in Figure \ref{fig:coor-removal} in which the structure of $\mbf {Df}$ is shown, where the true change points are at $6$ and $13$. As can be seen, the points on the boundary -- the nonzero coordinates of $\mbf{Df}$ -- are $\mca B(\lambda)= \{ 5,\,6,\,12,\,13\}$ with their respective signs $\mbf s_{\mca B}(\lambda)=\{ 1,\,1,\,-1,\,-1 \}$. Notice that $\mbf{Df}$ does not exist at points 7 and 14. The augmented boundary set contains these points as well as the boundary points; that is $\mca A(\lambda)= \{5,\,6,\,7,\,12,\,13,\,14\}$. The respective signs of the coordinates in the augmented boundary set $\mca A(\lambda)$ are given by $\mbf s_{\mca A}(\lambda)=\{1,\,1,\,1,\,-1,\,-1,\,-1\}$. At each value of $\lambda$, we call the coordinates that belong to the augmented boundary set $\mca A(\lambda)$ the augmented boundary coordinates, and the rest, the interior coordinates.
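For the simplest case $r=0$, where $r_{_b}=r_{_a}=0$ and the boundary set consists exactly of the change points, the block structure of $\mbf D\mbf f$ described above can be checked directly; the piecewise constant signal below is an illustrative choice with change points at coordinates $6$ and $13$:

```python
import numpy as np

n = 20
# Piecewise constant signal (r = 0) with change points at coordinates 6 and 13
f = np.concatenate([np.zeros(6), 2.0 * np.ones(7), -1.0 * np.ones(7)])

D = np.diff(np.eye(n), axis=0)           # D^(1), shape (n-1) x n
Df = D @ f

boundary = np.flatnonzero(Df) + 1        # 1-based boundary coordinates, B
signs = np.sign(Df[boundary - 1])        # their sign vector s_B
```

Here `boundary` recovers exactly the change point locations $\{6,\,13\}$ with signs $\{+1,\,-1\}$; for $r\geq 1$ the analogous computation produces the runs of $r_{_b}+1$ nonzero coordinates and $r_{_a}$ undefined coordinates shown in Figure \ref{fig:coor-removal}.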
\begin{figure}[!t]
\begin{center}
\begin{subfigure}{.43\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/removal0.jpg}
\caption{Piecewise constant, $r=0$.}
\end{subfigure}
\begin{subfigure}{.43\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/removal1.jpg}
\caption{Piecewise linear, $r=1$.}
\end{subfigure}
\\
\begin{subfigure}{.43\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/removal2.jpg}
\caption{Piecewise quadratic, $r=2$.}
\end{subfigure}
\begin{subfigure}{.43\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/removal3.jpg}
\caption{Piecewise cubic, $r=3$.}
\end{subfigure}
\caption[Structure of $\mbf D\mbf f$ For Piecewise Polynomial Signals]{Structure of $\mbf D\mbf f$ for piecewise polynomial signals with various orders $r=0,\,1,\,2,\,3$. The olive lines display the true signals with two change points at the locations $6$ and $13$. Empty circles represent the indices at which $\mbf D\mbf f$ does not exist.}
\label{fig:coor-removal}
\end{center}
\end{figure}
At the $j$-th iteration with $\lambda=\lambda_{_j}$, we assume that the boundary set and its corresponding sign vector are $\mca B=\mca B(\lambda)$ and $\mbf s_{\mca B}=\mbf s_{\mca B}(\lambda)$, respectively. Furthermore, we assume the augmented boundary set and its sign vector are $\mca A=\mca A(\lambda)$ and $\mbf s_{\mca A}=\mbf s_{\mca A}(\lambda)$, respectively. Dual coordinates can be split into augmented boundary coordinates $\widehat{\mathbf{u}}_{_{\lambda_j,\,\mathcal{A}}}$ and interior coordinates $\widehat{\mathbf{u}}_{_{\lambda_j,\,-\mathcal{A}}}$. Recall from Section \ref{sec:notations.proj1} that $\widehat{\mathbf{u}}_{_{\lambda_j,\,\mathcal{A}}}$ represents the subvector of $\widehat{\mbf u}_{_\lambda}$ with the coordinate labels in the set $\mca A$ and $\widehat{\mathbf{u}}_{_{\lambda_j,\,-\mathcal{A}}}$ represents the subvector of $\widehat{\mbf u}_{_\lambda}$ with the coordinate labels in the set $\lbrace 1,\,2,\, \cdots, \,m\rbrace \backslash \mca A$. It is apparent from the definition of the boundary coordinates that
\begin{align}\label{u.boundary}
\widehat{\mathbf{u}}_{_{\lambda_j,\,\mathcal{A}}}=\lambda_{_j}\, \mathbf{s}_{\mathcal{A}}\,.
\end{align}
Replacing the boundary coordinates with $\lambda_{_j}\,\mbf s_{\mca A}$ in \eqref{tf.dual.obj.proj1} and solving the resulting quadratic problem with respect to the interior coordinates leads to their least squares estimates, given by
\begin{align}\label{u.interior}
\widehat{\mathbf{u}}_{_{\lambda_j,\mathcal{-A}}}&=\left(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\right)^{-1}\mathbf{D}_{_{-\mathcal{A}}}\left(\mathbf{y}-\lambda_{_j} \mathbf{D}_{_{\mathcal{A}}}^T~\mathbf{s}_{\mathcal{A}}\right).
\end{align}
It should be noted that, for simplicity, we denote $(\mbf D_{\mca A})^T$ and $(\mbf D_{_{-\mca A}})^T$ by $\mbf D_{\mca A}^T$ and $\mbf D_{_{-\mca A}}^T$, respectively. Notice that in \eqref{u.interior}, the first term $\big( \mathbf{D}_{_{-\mathcal{A}}} \mathbf{D}_{_{-\mathcal{A}}}^T \big)^{-1} \mathbf{D}_{_{-\mathcal{A}}}\, \mathbf{y}$ simply yields the least squares estimate of regressing the response vector $\mbf y$ on the design matrix $\mathbf{D}_{_{-\mathcal{A}}}$. The second term $-\lambda_{_j}\, \big( \mathbf{D}_{_{-\mathcal{A}}} \mathbf{D}_{_{-\mathcal{A}}}^T \big)^{-1} \mathbf{D}_{_{-\mathcal{A}}}\, \mathbf{D}_{_{\mathcal{A}}}^T~ \mathbf{s}_{\mathcal{A}}$ can be interpreted as a shrinkage term due to the constraint $\|\mbf u\|_{_\infty}\leq \lambda$.
The expression \eqref{u.interior} holds for $\lambda\leq \lambda_{_j}$ until either an interior coordinate joins the boundary or a boundary coordinate leaves it. The following argument explains how to identify the values of $\lambda$ at which the interior coordinates change.
We define the joining time associated with the interior coordinate $i\in \lbrace 1,\,2,\, \cdots, \,m\rbrace \backslash \mca A$ as the time at which this interior coordinate joins the boundary. To determine the next joining time, we reduce the value of $\lambda$ in a linear direction starting from $\lambda_{_j}$ and solve $\widehat{\mathbf{u}}_{_{\lambda,\mathcal{-A}}}=(\pm\lambda,\,\cdots,\,\pm\lambda)^T$. Note that the right-hand side of \eqref{u.interior} can be expressed as $\bsy a-\lambda_{_j}\,\mbf b$, where
\begin{align}\label{ab}
\bsy a&=\big(\mathbf{D}_{_{-\mca A}}\mathbf{D}_{_{-\mca A}}^T\big)^{-1}\mathbf{D}_{_{-\mca A}}\,\mathbf{y}\,,\\[8pt]
\mathbf{b}&=\big(\mathbf{D}_{_{-\mca A}}\mathbf{D}_{_{-\mca A}}^T\big)^{-1}\mathbf{D}_{_{-\mca A}}\mathbf{D}_{_{\mca A}}^T\,\mathbf{s}_{_{\mca A}}\,.
\end{align}
The joining time for every $i\in \lbrace 1,\,2,\, \cdots, \,m\rbrace \backslash \mca A\,$ is hence the solution of the equation $a_{_i}-\lambda \,b_{_i}=\pm\lambda$ with respect to $\lambda$, which is given by
\begin{align*}
\lambda_{_i}^{^\textrm{join}}=\frac{a_{_i}}{b_{_i}\pm 1}\,, \qquad\qquad i\in\lbrace 1,\,2,\, \cdots, \,m\rbrace \backslash \mca A\,.
\end{align*}
Note that $\lambda_{_i}^{^\textrm{join}}$ is uniquely defined because only one of the signs $-1$ or $+1$ yields $\lambda_{_i}^{^\textrm{join}}\in [0,\, \lambda_{_j}]$.
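A minimal sketch of the joining-time computation, assuming the quantities $\bsy a$ and $\mbf b$ of \eqref{ab} are formed by dense solves (the helper `joining_times` is hypothetical; a practical implementation would exploit the banded structure discussed in Section \ref{sec:property.solution.path.proj1}):

```python
import numpy as np

def joining_times(y, D, A, s_A, lam_prev):
    """Candidate joining times a_i / (b_i +/- 1) for the interior coordinates.

    A is the (0-based) augmented boundary set and s_A its sign vector; only
    the sign that lands the candidate in [0, lam_prev] is kept.
    """
    m = D.shape[0]
    interior = np.setdiff1d(np.arange(m), A)
    D_int, D_A = D[interior], D[A]
    G = D_int @ D_int.T
    a = np.linalg.solve(G, D_int @ y)
    b = np.linalg.solve(G, D_int @ (D_A.T @ s_A))
    lam_join = np.zeros(len(interior))
    for k in range(len(interior)):
        for s in (-1.0, 1.0):
            # (a practical version would guard against b[k] + s == 0)
            lam = a[k] / (b[k] + s)
            if 0.0 <= lam <= lam_prev:
                lam_join[k] = lam
    return interior, lam_join
```

For instance, with $r=0$, $\mbf y=(0,2,5,5)^T$ and the augmented boundary set $\{2\}$ with sign $+1$ at $\lambda_{_1}=4$, the first interior coordinate joins at $\lambda=2$.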
Now we turn our attention to the characterization of a coordinate which leaves the boundary set $\mca B$. For $i\in \mca B$, the leaving time is defined as the time at which the coordinate $i$ leaves the boundary set $\mca B$. Since $\mbf s_{\mca B}$ is the sign vector of changes captured by $\big[ \mbf D\, \widehat{\mbf f} \big]_{\mca B}$, then $\diag{\mbf s_{\mca B}}\, \big[ \mbf D\, \widehat{\mbf f} \big]_{\mca B}> \mbf 0$, which in turn, along with Equation \eqref{dual.to.primal}, implies $\diag{\mbf s_{\mca B}} \big[ \mbf D\, \big(\mbf y-\mbf D^T \widehat{\mbf u}_{_\lambda}\big)\, \big]_{\mca B}> \mbf 0$. Here, for any vector $\bsy{\eta}$, $\diag{\bsy{\eta}}$ denotes the diagonal matrix with the diagonal elements given by $\bsy{\eta}$, and $\bsy{\eta} > \mbf{0}$ holds element-wise. Therefore, a coordinate $i\in\mca B$ leaves the boundary set $\mca B$ if $\diag{\mbf s_{\mca B}} \big[ \mbf D\, \big(\mbf y-\mbf D^T \widehat{\mbf u}_{_\lambda}\big)\, \big]_{\mca B}> \mbf 0$ is violated. Using the relation
\begin{align*}
\big[ \mbf D\, \big(\mbf y-\mbf D^T \widehat{\mbf u}_{_\lambda}\big)\, \big]_{\mca B}=\mbf D_{\mca B}\, \big( \mbf y-\mbf D^T \widehat{\mbf u}_{_\lambda} \big)\,,
\end{align*}
and the decomposition $\mbf D^T \widehat{\mbf u}_{_\lambda}=\mbf D_{\mca A}^T\, \widehat{\mbf u}_{_{\lambda,\,\mca A}}+\mbf D_{_{-\mca A}}^T\, \widehat{\mbf u}_{_{\lambda,-\mca A}}$, we obtain
\begin{align}
\diag{\mbf s_{\mca B}} \Big[ \mbf D\, \big(\mbf y-\mbf D^T \widehat{\mbf u}_{_\lambda}\big) \Big]_{\mca B}=\mbf c-\lambda\,\mbf d\,,
\end{align}
where
\begin{align}\label{cd}
\mathbf{c}&=\mathrm{diag}(\mathbf{s}_{\mathcal{B}})\,\mathbf{D}_{\mathcal{B}}\big(\mathbf{y}-\mathbf{D}_{_{-\mathcal{A}}}^T\,\bsy{a}\big)\,,\\[8pt]
\mathbf{d}&=\mathrm{diag}(\mathbf{s}_{\mathcal{B}})\,\mathbf{D}_{\mathcal{B}}\big(\mathbf{D}_{\mathcal{A}}^T\,\mathbf{s}_{\mathcal{A}}-\mathbf{D}_{_{-\mathcal{A}}}^T\,\mathbf{b}\big).
\end{align}
Hence, a leaving time is obtained from the inequality $c_{_i}-\lambda\, d_{_i} > 0$ as
\begin{align*}
\lambda_{_i}^{^\textrm{leave}}=\left\{\begin{array}{lll}
\dfrac{c_{_i}}{d_{_i}}, && \textrm{if}~ c_{_i}<0~ \textrm{~and~} ~ d_{_i}<0\,, \\[8pt]
0, && \textrm{otherwise}\,.
\end{array}\right.
\end{align*}
The conditions in the above expression are due to the fact that, at the $j$-th iteration with $\lambda\leq \lambda_{_j}$, the inequality $c_{_i}-\lambda\, d_{_i} > 0$ fails for $i\in \mca B$ only if both $c_{_i}$ and $d_{_i}$ are negative.
An alternative way to determine the next leaving time is to use the KKT optimality conditions of \eqref{tf.dual.obj.proj1}. We refer the reader to the supplementary materials of \cite{tibshirani2011solution}.
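Mirroring \eqref{cd}, the leaving times $c_{_i}/d_{_i}$ can be sketched as follows (again a hypothetical dense-solve helper, intended only to make the formulas concrete):

```python
import numpy as np

def leaving_times(y, D, B, s_B, A, s_A):
    """Candidate leaving times c_i / d_i for boundary coordinates in B."""
    m = D.shape[0]
    interior = np.setdiff1d(np.arange(m), A)
    D_int, D_A, D_B = D[interior], D[A], D[B]
    G = D_int @ D_int.T
    a = np.linalg.solve(G, D_int @ y)
    b = np.linalg.solve(G, D_int @ (D_A.T @ s_A))
    # Elementwise multiplication by s_B plays the role of diag(s_B)
    c = s_B * (D_B @ (y - D_int.T @ a))
    d = s_B * (D_B @ (D_A.T @ s_A - D_int.T @ b))
    lam_leave = np.zeros(len(B))
    mask = (c < 0) & (d < 0)             # the only case giving a positive time
    lam_leave[mask] = c[mask] / d[mask]
    return lam_leave
```

In the $r=0$ case this always returns zeros, consistent with the boundary lemma discussed after Algorithm \ref{tf.path.alg}.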
The following algorithm, PRUTF, describes the process of constructing the entire dual solution path of trend filtering.
\begin{algorithm}[PRUTF Algorithm] \label{tf.path.alg}
\begin{enumerate}
\item[]
\item Initialize the set of change point locations as $\bsy\tau_{_0}=\emptyset$, the empty set.
\item At step $j=1$, initialize the boundary set $\mathcal{B}_{_1}=\big\{\tau_{_1}-r_{_b},\,\tau_{_1}-r_{_b}+1,\,
\ldots,\tau_{_1}\big\}$ and its associated sign vector $s_{_{\mathcal{B}_1}}=\{s_{_1},\ldots,s_{_1}\}$, both with cardinality of $r_{_b}+1$, where $\tau_{_1}$ is obtained by
\begin{align}\label{firstjoin}
\tau_{_1}=\underset{i=1,\,\ldots,\,m}{\rm argmax}\, \big|\, \widehat{u}_{_i} \big| \,,
\end{align}
and $s_{_1}=\sign{ \widehat{u}_{_{ \tau_{_1}}}}$, where $\widehat{u}_{_i}$ is the $i$-th element of the vector $\widehat{\mathbf{u}}=\left(\mathbf{DD}^T\right)^{-1}\mathbf{D}\,\mathbf{y}$. The updated set of change point locations is now $\bsy\tau_{_1}=\{\tau_{_1}\}$. We also record the first joining time $\lambda_{_1}= \big|\,\widehat{u}_{_{\tau_{_1}}}\big|$ and keep track of the augmented boundary set $\mathcal{A}_{_1}=\{\tau_{_1}-r_{_b},\ldots,\tau_{_1}+r_{_a}\}$ and its corresponding sign vector $\mathbf{s}_{_{\mathcal{A}_{_1}}}=\{s_{_1},\,\ldots,\,s_{_1}\}$ of length $r+1$. The dual solution is given by $\widehat{\mbf u}(\lambda)=\left(\mathbf{DD}^T\right)^{-1}\mathbf{D}\,\mathbf{y}$, for $\lambda\geq \lambda_{_1}$.
\item For step $j=2,\,3,\,\ldots\,,$
\begin{enumerate}
\item Obtain the pair $\big( \tau_{_j}^{^\mathrm{join}},s_{_j}^{^\mathrm{join}} \big)$ from
\begin{align}\label{joinpair}
\big(\tau_{_j}^{^\mathrm{join}},s_{_j}^{^\mathrm{join}}\big)=~ \underset{i\notin \mathcal{A}_{_{j-1}},\,s\in\{-1,\,1\}}{\rm argmax}~~ \frac{a_{_i}}{s+b_{_i}}\cdot \mathbbm{1} \left\{0\leq \dfrac{a_{_i}}{s+b_{_i}} \leq \lambda_{_{j-1}}\right\},
\end{align}
and set the next joining time $\lambda_{_j}^{^\mathrm{join}}$ as the value of $\frac{a_{_i}}{s+b_{_i}}$, for $i=\tau_{_j}^{^\mathrm{join}}$ and $s= s_{_j}^{^\mathrm{join}}$.
\item Obtain the pair $\big( \tau_{_j}^{^\mathrm{leave}},s_{_j}^{^\mathrm{leave}}\big)$ from
\begin{align}\label{leavepair}
\big(\tau_{_j}^{^\mathrm{leave}},\,s_{_j}^{^\mathrm{leave}}\big)=~ \underset{i\in \mathcal{B}_{_{j-1}},\,s\in\{-1,\,1\}}{\rm argmax}~~ \dfrac{c_{_i}}{d_{_i}}\cdot\, \mathbbm{1} \Big\{c_{_i} < 0~,~ d_{_i}< 0\Big\},
\end{align}
and assign the next leaving time $\lambda_{_j}^{^\mathrm{leave}}$ as the value of $\dfrac{c_{_i}}{d_{_i}}$, for $i=\tau_{_j}^{^\mathrm{leave}}$ and $s=s_{_j}^{^\mathrm{leave}}$.
\item Let $\lambda_{_j}=\max \big\{\lambda_{_j}^{^\mathrm{join}}\, ,\, \lambda_{_j}^{^\mathrm{leave}}\big\}$,
then the boundary set $\mathcal{B}_{_j}$ and its sign vector $\mathbf{s}_{_{\mathcal{B}_{j}}}$ are updated in the following fashion:
\begin{itemize}
\item[-- ] Either append $\big\{\tau_{_j}^{^\mathrm{join}}-r_{_b},\,\tau_{_j}^{^\mathrm{join}}-r_{_b}+1,\,\ldots,\tau_{_j}^{^\mathrm{join}}\big\}$ and the corresponding signs $\big\{s_{_j}^{^\mathrm{join}},\,\ldots,\,s_{_j}^{^\mathrm{join}}\big\}$ to $\mathcal{B}_{_{j-1}}$ and $\mathbf{s}_{_{\mathcal{B}_{j-1}}}$, respectively, provided that $\lambda_{_j}=\lambda_{_j}^{^\mathrm{join}}$. Also, add $\tau_{_j}^{^\mathrm{join}}$ to $\bsy\tau_{_{j-1}}$.
\item[-- ] Or remove $\big\{\tau_{_j}^{^\mathrm{leave}},\, \tau_{_j}^{^\mathrm{leave}}+1,\,
\ldots,\,\tau_{_j}^{^\mathrm{leave}}+r_{_b}\big\}$ and the corresponding signs $\big\{s_{_j}^{^\mathrm{leave}}, \,\ldots$, $ \,s_{_j}^{^\mathrm{leave}}\big\}$ from $\mathcal{B}_{_{j-1}}$ and $\mathbf{s}_{_{\mathcal{B}_{j-1}}}$, respectively, provided that $\lambda_{_j}=\lambda_{_j}^{^\mathrm{leave}}$. Also, remove $\tau_{_j}^{^\mathrm{leave}}$ from $\bsy\tau_{_{j-1}}$.
\end{itemize}
In the same manner, the augmented boundary set $\mathcal{A}_{_j}$ and its sign vector $\mathbf{s}_{_{\mathcal{A}_{_j}}}$ are formed by adding $\big\{\tau_{_j}^{^\mathrm{join}}-r_{_b},\,\ldots,\, \tau_{_j}^{^\mathrm{join}}+r_{_a}\big\}$ and $\big\{s_{_j}^{^\mathrm{join}},\,\ldots,\,s_{_j}^{^\mathrm{join}}\big\}$ to $\mathcal{A}_{_{j-1}}$ and $\mathbf{s}_{_{\mathcal{A}_{j-1}}}$, respectively, if $\lambda_{_j}=\lambda_{_j}^{^\mathrm{join}}$ or, otherwise, by removing the associated set $\big\{\tau_{_j}^{^\mathrm{leave}},\,\ldots,\, \tau_{_j}^{^\mathrm{leave}}+r\big\}$ and $\big\{s_{_j}^{^\mathrm{leave}},\,\ldots,\,s_{_j}^{^\mathrm{leave}}\big\}$ from $\mathcal{A}_{_{j-1}}$ and $\mathbf{s}_{_{\mathcal{A}_{_{j-1}}}}$. Thus, the dual solution is computed as $\widehat{\mbf u}_{_{-\mca A_{_j}}}(\lambda)=\bsy a-\lambda\, \mbf b$ for the interior coordinates and $\widehat{\mbf u}_{_{\mca A_{_j}}}(\lambda)=\lambda\,\mbf s_{_{\mca A_{_j}}}$ for the boundary coordinates over $\lambda_{_j}\leq\lambda\leq\lambda_{_{j-1}}$.
\end{enumerate}
\item Repeat step 3 until $\lambda_{_j}=0$.
\end{enumerate}
\end{algorithm}
The critical points $\lambda_{_1}\, \geq\, \lambda_{_2}\, \geq\, \ldots\, \geq\, 0$ indicate the values of the regularization parameter at which the boundary set changes.
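Step 2 of Algorithm \ref{tf.path.alg} amounts to one unconstrained least squares solve followed by an argmax; a minimal sketch (with `prutf_first_step` a hypothetical helper name):

```python
import numpy as np

def prutf_first_step(y, D):
    """First critical point: tau_1 = argmax_i |u_i| and lambda_1 = |u_{tau_1}|,
    where u = (D D^T)^{-1} D y is the unconstrained dual solution."""
    u_hat = np.linalg.solve(D @ D.T, D @ y)
    i = int(np.argmax(np.abs(u_hat)))
    # Return tau_1 as a 1-based coordinate, its sign s_1, and lambda_1
    return i + 1, int(np.sign(u_hat[i])), float(np.abs(u_hat[i]))

tau1, s1, lam1 = prutf_first_step(np.array([0.0, 2.0, 5.0, 5.0]),
                                  np.diff(np.eye(4), axis=0))
```

For this tiny $r=0$ example the unconstrained dual solution is $(3,\,4,\,2)^T$, so the first coordinate to hit the boundary is $\tau_{_1}=2$ with sign $+1$ at $\lambda_{_1}=4$.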
\begin{remark}
Notice that the vector $\bsy\tau$ derived by the PRUTF algorithm represents the locations of change points for the dual variables. In order to obtain the locations of change points in primal variables, we must add $r_{_a}$ to any element of $\bsy\tau$, that is, $\big\{ \tau_{_1}+r_{_a}, \, \tau_{_2}+r_{_a},\, \ldots\, \big\}$. This relationship between the primal and dual change point sets is visible from Figure \ref{fig:coor-removal}.
\end{remark}
\begin{remark}
For fused lasso, $r=0$, Lemma 1 of \cite{tibshirani2011solution}, known as the boundary lemma, is satisfied since the matrix $\mbf D\mbf D^T$ is diagonally dominant, meaning that $\big[\,\mbf D \mbf D^T\,\big]_{i,i} \geq \sum_{j\neq i}\big| \big[\,\mbf D \mbf D^T\, \big]_{i,j} \big|$, for $i=1,\ldots,m$. This lemma states that when a coordinate joins the boundary, it will stay on the boundary for the rest of the path. Consequently, part (b) of step 3 in Algorithm \ref{tf.path.alg} is unnecessary, and the next leaving time can simply be set to zero, i.e., $\lambda_{_j}^{^\mathrm{leave}}=0$, for every step $j$. However, the boundary lemma is not satisfied for $r=1,\,2,\,3,\,\ldots$.
\end{remark}
\begin{remark}\label{rem:tibshirani_difference:proj1}
There is a subtle and important distinction between our proposed algorithm, PRUTF, and the one presented in \cite{tibshirani2011solution}.
The latter work studies the generalized lasso problem for any arbitrary penalty matrix $\mbf D$ (unlike $\mbf D$ used in trend filtering, which must have a certain structure). The proposed algorithm in \cite{tibshirani2011solution} relies on adding or removing only one coordinate to or from the boundary set at every step. The key attribute of our algorithm is to add or remove $r+1$ coordinates to or from the augmented boundary set, an approach inspired by the argument presented at the beginning of this section. Essentially, this attribute makes PRUTF, presented in Algorithm \ref{tf.path.alg}, well-suited for change point analysis. It is important to mention that PRUTF requires at least $r+1$ data points between neighbouring change points.
\end{remark}
\begin{remark}
For a given $\lambda$, equations \eqref{u.boundary} and \eqref{u.interior} give the values of the dual variables in $\widehat{\mbf u}_{_\lambda}$. These equations demonstrate that the dual solution path is a piecewise linear function of $\lambda$, with slope changes at the joining or leaving times $\lambda_1\geq\lambda_2\geq\ldots\geq 0$.
\end{remark}
\begin{remark}
The number of iterations required for PRUTF, presented in Algorithm \ref{tf.path.alg}, is at most $(3^{\,p}+1)/2$, where $p=\lceil \frac{m}{r+1}\rceil$, see \cite{tibshirani2013lasso}, Lemma 6. However, this upper bound for the number of iterations is usually very loose. The upper bound comes from the following realization discovered by \cite{osborne2000lasso} and later improved by \cite{mairal2012complexity}. Any pair $\big(\mca A\,,\,\mbf s_{\mca A}\big)$ appears at most once throughout the solution path. In other words, if $\big(\mca A\,,\,\mbf s_{\mca A}\big)$ is visited in one iteration of the algorithm, neither the pair $\big(\mca A\,,\,-\mbf s_{\mca A}\big)$ nor $\big(\mca A\,,\,\mbf s_{\mca A}\big)$ can reappear for the rest of the algorithm. Interestingly, this fact says that once a coordinate enters the boundary set, it cannot leave the boundary set at the very next step.
\noindent Moreover, note that at one iteration of the PRUTF algorithm with the augmented boundary set $\mca A$, the dominant computation is in solving the least square problem
\begin{align}\label{lsproblem}
\min\limits_{\mathbf{u}\, \in\, \mbb R^{m}}~ \frac{1}{2}\, \big\| \,\mathbf{y}-\mathbf{D}_{_{\mca A}}^T\, \mathbf{u}\, \big\|_{2}^2 \,.
\end{align}
One can apply QR decomposition
of $\mathbf{D}_{_{\mca A}}^T$ to solve the least squares problem, and then update the decomposition as the set $\mca A$ changes. However, since $\mathbf{D}_{_{-\mathcal{A}}} \mathbf{D}_{_{-\mathcal{A}}}^T$ is a banded Toeplitz matrix (see Section \ref{sec:property.solution.path.proj1}), a solution of \eqref{lsproblem} always exists and can be computed using a banded Cholesky decomposition. Thus, the computational complexity per iteration is of order $O \big((m-|\mca A|)\, r^2 \big)$, which is linear in the number of interior coordinates since $r$ is fixed and usually small. Overall, if $K$ is the total number of steps run by the PRUTF algorithm, then the total computational complexity is $O \big(K(m-|\mca A|)\, r^2 \big)$. See \cite{tibshirani2011solution} and \cite{arnold2016efficient}.
\end{remark}
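Because $\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T$ inherits its bands from $\mathbf{D}\mathbf{D}^T$, the dominant solve can indeed use banded Cholesky routines. The sketch below (using SciPy's `solveh_banded`, with illustrative sizes) checks the alternating-sign binomial band structure of $\mathbf{D}\mathbf{D}^T$ stated in the next section and solves a banded system with it:

```python
import numpy as np
from scipy.linalg import solveh_banded
from scipy.special import comb

n, r = 12, 1
D = np.eye(n)
for _ in range(r + 1):
    D = np.diff(D, axis=0)               # D^(r+1), shape (n - r - 1) x n
G = D @ D.T                              # banded Toeplitz, bandwidth r + 1
m = G.shape[0]

# Band entries are alternating-sign binomial coefficients of order 2r + 2:
# G[i, j] = (-1)^(i-j) * C(2r + 2, r + 1 + i - j) for i >= j
assert np.isclose(G[3, 1], (-1) ** 2 * comb(2 * r + 2, r + 1 + 2))

# Pack the r + 2 upper bands in the storage scheme solveh_banded expects
ab = np.zeros((r + 2, m))
for k in range(r + 2):                   # k-th superdiagonal
    ab[r + 1 - k, k:] = np.diagonal(G, k)

y = np.sin(np.arange(n))
x = solveh_banded(ab, D @ y)             # banded Cholesky solve of G x = D y
```

Since $\mathbf{D}\mathbf{D}^T$ is positive definite, the banded Cholesky factorization always succeeds, and the solve costs $O(m\,r^2)$ rather than the $O(m^3)$ of a dense solver.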
\section{Statistical Properties of the Solution Path}
\label{sec:property.solution.path.proj1}
An important component of the methodology that we develop in this work involves computing algebraic expressions based on the matrix $\mbf D=\mbf D^{(r+1)}$. In this section, we describe the properties of such expressions. To begin with, let $\mathcal{A}=\{A_{_1},\,\ldots,\,A_{_J}\}$ and $\mathbf{s}_{\mathcal{A}}=\{\mbf s_{_1},\,\ldots,\,\mbf s_{_J}\}$ be the augmented boundary set and its corresponding sign vector, respectively, after a number of iterations of Algorithm \ref{tf.path.alg}, where $A_{_j}=\big\{\tau_{_j}-r_{_b},\,\tau_{_j}-r_{_b}+1,\,\ldots,\,\tau_{_j}+r_{_a}\big\}$ and $\mathbf{s}_{_j}=\{s_{_j},\,\ldots,\,s_{_j}\}$ for $j=1,\,\ldots,\,J$. This augmented boundary set corresponds to $J$ change points $\{\tau_{_1},\,\ldots,\,\tau_{_J}\}$ that partition all the dual variables into $J+1$ blocks $B_{_j}=\big\{\tau_{_j}+1,\,\ldots,\tau_{_{j+1}}\big\}$ for $j=0,\,1,\,\ldots,\,J$, with the conventions that $\tau_{_0}=0$ and $\tau_{_{J+1}}=m$. In the following, we list some properties of matrix multiplications involving $\mbf D$.
\begin{itemize}
\item It follows from the definition of the matrix $\mbf D$ that it is a banded Toeplitz matrix with bandwidth $r+1$. It turns out that the matrix $\mbf D\mbf D^T$ has the same property, meaning that it is a square banded Toeplitz matrix. Moreover, its $2\,r+3$ nonzero row elements are consecutive binomial coefficients of order $2\,r+2$ with alternating signs. In other words, the $(i\, ,\, j)$-th element of $\mbf D\mbf D^T$ for $i\geq j$ is $(-1)^{\, i-j}{2\,r+2 \choose r+1+i-j}$. An example, for $r=1$, is given in panel (a) of Figure \ref{fig.Dstruct}. Note that $\mbf D\mbf D^T$ is a symmetric, nonsingular and positive definite matrix \citep{demetriou2001certain}.
\item The matrix $\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T$ is a block diagonal matrix whose diagonal submatrices correspond to $J+1$ blocks. More precisely, the $j$-th submatrix on the diagonal of $\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T$ is a matrix with the first $(\tau_{_{j+1}}-\tau_{_j}-r)$ rows and columns of $\mbf D\mbf D^T$, see panel (b) of Figure \ref{fig.Dstruct}. Notice that $\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T$ is nonsingular and hence invertible. In fact, both $\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}$ and $\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}$ are block diagonal matrices. Another interesting result is that every row of the matrix $\big(\mathbf{D}_{_{-\mathcal{A}}}\, \mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1} \mathbf{D}_{_{-\mathcal{A}}}$ is a contrast vector, meaning that for every row $t$,
\begin{align*}
\ssum{1}{n}\left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}\right]_{t,\,i}=0\,.
\end{align*}
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
{\small
\begin{align*}
\left(\begin{array}{rrrrrrrrrrr}
6 & -4 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
-4 & 6 & -4 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & -4 & 6 & -4 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & -4 & 6 & -4 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & -4 & 6 & -4 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & -4 & 6 & -4 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & -4 & 6 & -4 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 & -4 & 6 & -4 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -4 & 6 \\
\end{array}\right)
\end{align*}}
\caption{Structure of $\mbf D \mbf D^T$.}
\label{fig.Dstruct.a}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.3\textwidth}
\centering
{\small
\begin{align*}
\left(\begin{array}{ccc|cccc}
6 & -4 & 1 & 0 & 0 & 0 & 0 \\
-4 & 6 & -4 & 0 & 0 & 0 & 0 \\
1 & -4 & 6 & 0 & 0 & 0 & 0 \\\hline
0 & 0 & 0 & 6 & -4 & 1 & 0 \\
0 & 0 & 0 & -4 & 6 & -4 & 1 \\
0 & 0 & 0 & 1 & -4 & 6 & -4 \\
0 & 0 & 0 & 0 & 1 & -4 & 6 \\
\end{array}\right)
\end{align*}}
\caption{The structure of $\mbf D_{_{-\mca A}}\, \mbf D^T_{_{-\mca A}}$.}
\label{fig.Dstruct.b}
\end{subfigure}
\caption[Structure of Quadratic Forms of Matrix $\mbf D$]{Structure of quadratic forms of matrix $\mbf D$.}
\label{fig.Dstruct}
\end{figure}
\item Another term of interest in analyzing the behaviour of the dual variables is $\mbf D_{\mca A}^T\,\mbf s_{\mca A}$. It can be shown that the vector $\mbf D_{\mca A}^T\,\mbf s_{\mca A}$ can be partitioned into $J+1$ subvectors associated with the change points $\tau_{_j},~j=1,\,\ldots,\,J$. The subvector associated with $\tau_{_j}$, for $j=2,\, \ldots, \,J-1$, is $\mbf D_{_{A_j}}^T\,\mbf s_{_{A_j}}$, whose elements are all zero except for the first $r+1$ and the last $r+1$ entries. The first $r+1$ nonzero elements of $\mbf D_{_{A_j}}^T\,\mbf s_{_{A_j}}$ are the binomial coefficients in the expansion of $s_{_j}\,(x-1)^r$, and its last $r+1$ elements are the binomial coefficients in the expansion of $-s_{_{j+1}}\,(x-1)^r$. Furthermore, the first $r+1$ elements of the first subvector and the last $r+1$ elements of the last subvector are also equal to zero. For example, for a piecewise cubic signal, $r=3$, with two change points $\big(\tau_{_1}\, ,\, \tau_{_2}\big)$ and signs $\big(-1\, ,\, 1\big)$, the vector $\mathbf{D}_{_{\mathcal{A}}}^T\, \mathbf{s}_{_{\mathcal{A}}}$ becomes
{\small
\begin{align*}
\left(\underbrace{0,\ldots,\,0,\,1,\, -3,\, 3,\,-1}_{1\, :\, (\tau_{_1}+r_{_a})}\, ,\, \underbrace{-1,\, 3,\, -3,\, 1,\, 0,\, \ldots, \,0,\, -1,\, 3,\, -3,\, 1}_{(\tau_{_1}+r_{_a}+1)\, :\, (\tau_{_2}+r_{_a})}\, ,\, \underbrace{1,\, -3,\, 3,\, -1,\, 0,\,\ldots,\,0 }_{(\tau_{_2}+r_{_a}+1)\, :\, m}\right).
\end{align*}}
Consequently, the structure of $\mbf D_{_{A_j}}^T\,\mbf s_{_{A_j}}$ allows us to write $\mathbf{D}_{_{\mathcal{A}}}^T\,\mathbf{s}_{_{\mathcal{A}}}=\sum_{j=0}^J \mathbf{D}_{_{{A}_j}}^T\,\mathbf{s}_{_j}$. Additionally, if the signs of two consecutive change points $\tau_{_{j}}$ and $\tau_{_{j+1}}$ are the same, then
\begin{align}\label{drift.term.staircase.proj1}
\left[ \big(\mathbf{D}_{_{-\mca A}} \mathbf{D}_{_{-\mca A}}^T \big)^{-1}\mathbf{D}_{_{-\mca A}}\right]_t\,\left(\mathbf{D}_{_{A_{j+1}}}^T \mathbf{s}_{_{j+1}}+\mathbf{D}_{_{A_j}}^T \mathbf{s}_{_j}\right)= -s_j,
\end{align}
for $t=\tau_{_j}+r_{_a}\, ,\, \ldots\, ,\, \tau_{_{j+1}}-r_{_b}$.
\item
Let $\mbf P_D=\mbf D_{_{-\mca A}}^T \big( \mbf D_{_{-\mca A}} \mbf D_{_{-\mca A}}^T \big)^{-1}\mbf D_{_{-\mca A}}$ be the projection matrix onto the row space of the matrix $\mbf D_{-\mca A}$. Moreover, let $\mbf X_j$ be the design matrix of the degree-$r$ polynomial regression on the indices of the
$j$-th segment $\big\{ \tau_{j}+1\, ,\, \ldots\, ,\, \tau_{j+1} \big\}$, that is,
\begin{align*}
\mbf X_{j}=\begin{pmatrix}
1 & \frac{\tau_{_j}+1}{n} & \left(\frac{\tau_{_j}+1}{n}\right)^2 & \cdots & \left(\frac{\tau_{_j}+1}{n}\right)^r
\\[10pt]
1 & \frac{\tau_{_j}+2}{n} & \left(\frac{\tau_{_j}+2}{n}\right)^2 & \cdots & \left(\frac{\tau_{_j}+2}{n}\right)^r \\
\vdots & \vdots & \vdots & & \vdots \\
1 & \frac{\tau_{_{j+1}}}{n} & \left(\frac{\tau_{_{j+1}}}{n}\right)^2 & \cdots & \left(\frac{\tau_{_{j+1}}}{n}\right)^r \\
\end{pmatrix}.
\end{align*}
Therefore, the orthogonal projection matrix $\mbf I - \mbf P_{D}$ is a block diagonal matrix whose $j$-th diagonal block, associated with the segment $\big\{ \tau_{j}+1\, ,\, \ldots\, ,\, \tau_{j+1} \big\}$, is equal to the projection onto the column space of $\mbf X_j$, i.e.,
\begin{align}\label{Dprojection.map.proj1}
\big[\, \mbf I - \mbf P_{D} \,\big]_{_{B_j}} = \mbf X_j \big(\mbf X_j^T\, \mbf X_j \big)^{-1}\, \mbf X_j^T.
\end{align}
\end{itemize}
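The structural facts listed above are easy to check numerically. The following NumPy sketch (our own illustration; the 0-based indexing and the particular choice of removed rows are our conventions, not the thesis's) builds $\mbf D=\mbf D^{(r+1)}$ for $r=1$, verifies the binomial band of $\mbf D\mbf D^T$, the block-diagonal structure after deleting a run of $r+1$ consecutive rows, and the contrast-vector property of $\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}$:

```python
import numpy as np
from math import comb

n, r = 12, 1
# (r+1)-th order difference matrix, shape (n - r - 1, n)
D = np.diff(np.eye(n), r + 1, axis=0)
m = n - r - 1

# D D^T is banded Toeplitz: entry (i, j) is (-1)^{|i-j|} C(2r+2, r+1+|i-j|)
G = D @ D.T
for i in range(m):
    for j in range(m):
        k = abs(i - j)
        expected = (-1) ** k * comb(2 * r + 2, r + 1 + k) if k <= r + 1 else 0
        assert G[i, j] == expected

# Deleting a run of r+1 consecutive rows (a boundary set) makes
# D_{-A} D_{-A}^T block diagonal: the two surviving row groups no
# longer share any columns.
A = [4, 5]                                # r+1 = 2 consecutive row indices
D_mA = np.delete(D, A, axis=0)
G_mA = D_mA @ D_mA.T
assert np.allclose(G_mA[:4, 4:], 0) and np.allclose(G_mA[4:, :4], 0)

# Every row of (D_{-A} D_{-A}^T)^{-1} D_{-A} is a contrast vector
M = np.linalg.solve(G_mA, D_mA)
assert np.allclose(M.sum(axis=1), 0)
```

The contrast property holds because each row of $\mbf D_{_{-\mca A}}$ annihilates constants, and rows of $M$ are linear combinations of them.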
Equation \eqref{u.boundary} states that the boundary coordinates have absolute value $\lambda$, that is,
\begin{align}
\widehat{u}(t;\lambda)=\lambda\,s_{_j}\qquad\qquad\textrm{for }t\in A_j.
\end{align}
On the other hand, the values of the interior coordinates are given by
{\small
\begin{align}\label{usegment}
\widehat{u}(t;\lambda)=\left\{\begin{array}{ll}
\left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}\right]_t\,\left(\mathbf{y}-\lambda \,\mathbf{D}_{_{{A}_1}}^T \mathbf{s}_{_1}\right), & 1\leq t<\tau_{_1}-r_{_b}\\\\
\left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}\right]_t\,\left(\mathbf{y}-\lambda\, \left(\mathbf{D}_{_{A_{j+1}}}^T \mathbf{s}_{_{j+1}}+\mathbf{D}_{_{A_j}}^T \mathbf{s}_{_j}\right)\right), & \tau_{_j}+r_{_a}< t<\tau_{_{j+1}}-r_{_b}\\\\
\left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}\right]_t\,\left(\mathbf{y}-\lambda \,\mathbf{D}_{_{A_{_J}}}^T \mathbf{s}_{_J}\right), & \tau_{_J}+r_{_a} < t \leq m\,.
\end{array}\right.
\end{align}}
For a given $\lambda$, the dual variables $\widehat{u}(t;\,\lambda)$ for $t=0,\,\ldots, \,m$ can be collectively viewed as a random bridge, that is, a conditioned random walk with drift whose end points are set to zero. Moreover, $\widehat{u}(t;\,\lambda)$ is bounded between $-\lambda$ and $\lambda$.
The quantity $\widehat{u}(t;\,\lambda)$ can also be decomposed into a sum of several smaller random bridges formed by the blocks created from the change points. Recall that the last $r_{_b}+1$ consecutive elements of the block $B_{_j}$ are $\lambda\, s_{_j}$, for any $j=0,\,1,\,\ldots, \,J$.
Hence, for $t=\tau_{_j}+r_{_a},\,\ldots,\,\tau_{_{j+1}}-r_{_b}$, the random bridge associated with the $j$-th block is given by
\begin{align}\label{z.dual.seg}
\widehat{u}_{_j}(t;\, \lambda)=\left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}\right]_t\, \left(\mathbf{y}-\lambda \,\big(\mathbf{D}_{_{A_{j+1}}}^T \mathbf{s}_{_{j+1}}+\mathbf{D}_{_{A_j}}^T \mathbf{s}_{_j}\big)\right),\quad j=0,\,\ldots,\,J\,,
\end{align}
with the conventions $\mbf s_{_0}=\mbf s_{_{J+1}}=\mbf 0\in \mathbb{R}^{^{r+1}}$. It is important to note that similar to $\widehat{u}(t;\,\lambda)$, the process $\widehat{u}_{_j}(t;\,\lambda)$ satisfies the conditions $\widehat{u}_{_j}(\tau_{_j}+r_{_a};\,\lambda)=\lambda\,s_{_j}$ and $\widehat{u}_{_j}(\tau_{_{j+1}}-r_{_b};\lambda)=\lambda\,s_{_{j+1}}$.
From \eqref{z.dual.seg}, the process $\widehat{u}_{_j}(t;\,\lambda)$ is composed of the stochastic term
\begin{align}\label{u.stoch}
\widehat{u}_{_j}^{\,\textrm{st}}(t)=\left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}\right]_t\, \mathbf{y},
\end{align}
and the drift term
\begin{align}\label{u.drift}
\widehat{u}_{_j}^{\,\textrm{dr}}(t;\,\lambda)=-\lambda\left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}\right]_t\,\left(\mathbf{D}_{_{A_{j+1}}}^T \mathbf{s}_{_{j+1}}+\mathbf{D}_{_{A_j}}^T \mathbf{s}_{_j}\right).
\end{align}
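The stochastic/drift split above is a few lines of linear algebra in practice. A minimal sketch (the function name, argument names, and 0-based row indexing are ours):

```python
import numpy as np

def dual_decomposition(y, D, boundary_rows, s_A, lam):
    """Split the interior dual variables into stochastic and drift parts:
       u_st = (D_-A D_-A^T)^{-1} D_-A y
       u_dr = -lam * (D_-A D_-A^T)^{-1} D_-A D_A^T s_A."""
    keep = np.setdiff1d(np.arange(D.shape[0]), boundary_rows)
    D_mA, D_A = D[keep], D[boundary_rows]
    M = np.linalg.solve(D_mA @ D_mA.T, D_mA)   # (D_-A D_-A^T)^{-1} D_-A
    u_st = M @ y                               # stochastic term
    u_dr = -lam * (M @ (D_A.T @ s_A))          # drift term
    return u_st, u_dr
```

By linearity, `u_st + u_dr` reproduces the direct evaluation of the interior dual variables at the given $\lambda$.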
Under model \eqref{fmodel.proj1} with Gaussian noise, the discrete-time stochastic term $\widehat{u}_{_j}^{\,\textrm{st}}(t)$ can be embedded in a continuous-time Gaussian bridge process. The following theorem describes the characteristics of this process.
\begin{theorem}\label{thm:gaussian.bridge.proj1}
Suppose the observation vector $\mbf y$ is drawn from the model \eqref{fmodel.proj1}, where the error vector $\bsy\varepsilon$ has a Gaussian distribution with mean zero and covariance matrix $\sigma^2\, \mbf I$. For given $\mbf D$ and $\mca A$,
\begin{enumerate}[label=(\alph*)]
\item Define
\begin{align}\label{wj.process.proj1}
W_j(t)= \big( \tau_{j+1}-\tau_j-r \big)^{-(2r+1)/2} \left[ \big(\mathbf{D}_{_{-\mathcal{A}}} \mathbf{D}_{_{-\mathcal{A}}}^T \big)^{-1} \mathbf{D}_{_{-\mathcal{A}}} \right]_{\lfloor mt\rfloor}\mathbf{y},
\end{align}
for $(\tau_{_j}+r_{_a})/m ~\leq~ t ~\leq~ (\tau_{_{j+1}}-r_{_b})/m$, where
\begin{align}\label{wbridge.tails.proj1}
W_{_{j}} \Big(\, \frac{\tau_{_j}+r_{_a}}{m}\, \Big)= W_{_{j}} \Big(\, \frac{\tau_{_{j+1}}-r_{_b}}{m}\, \Big)=0,
\end{align}
for $j=0\, ,\, \ldots\, ,\, J$. Then the stochastic process $\mbf W_j=\big\{ W_j(t):~(\tau_{_j}+r_{_a})/m\leq t\leq (\tau_{_{j+1}}-r_{_b})/m\big\}$ is a Gaussian bridge process with mean vector zero and covariance function
\begin{align}\label{w.cov}
{\rm Cov} \Big( W_j(t)\, ,\, W_j(t')\Big)= \sigma^2 \left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\right]_{\lfloor mt\rfloor,\lfloor mt'\rfloor},
\end{align}
for any $(\tau_{_j}+r_{_a})/m ~\leq~ t\, ,\, t' ~\leq~ (\tau_{_{j+1}}-r_{_b})/m$.
\item The processes $\mbf W_j$ and $\mbf W_{j'}$ are independent, for $j'\neq j$.
\end{enumerate}
\end{theorem}
A proof is given in Appendix \ref{prf:gaussian.bridge.proj1}.
This theorem can be extended to the case of non-Gaussian random variables, thereby establishing a Donsker-type central limit theorem for $\mbf W_j$. Theorem \ref{thm:gaussian.bridge.proj1} guarantees that the dual variable process associated with the $j$-th block, i.e.
\begin{align*}
\mbf u_j= \Big\{\, \widehat{u} \big( \lfloor mt\rfloor;\,\lambda \big):~ (\tau_{_j}+r_{_a})/m\leq t\leq (\tau_{_{j+1}}-r_{_b})/m \Big\}
\end{align*}
is a Gaussian bridge process with the drift term
\begin{align}\label{W.drift.simple}
-\lambda\left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}\right]_{\lfloor mt\rfloor}\,\left(\mathbf{D}_{_{A_{j+1}}}^T \mathbf{s}_{_{j+1}}+\mathbf{D}_{_{A_j}}^T \mathbf{s}_{_j}\right),
\end{align}
and the covariance matrix stated in \eqref{w.cov}.
Recall that a standard Brownian bridge process defined on the interval $[a,\,b]$ is a standard Brownian motion $B(t)$ conditioned on the event $B(a)=B(b)=0$. It is often constructed from a Brownian motion $B(t)$ with $B(a)=0$ by setting $$B_{_0}(t)=B(t)-\frac{t-a}{b-a}\,B(b)\,.$$
The mean and covariance functions of the Brownian bridge $B_{_0}(t)$ are given by ${\rm E} \big(B_{_0}(t)\big)=0$ and ${\rm Cov} \big( B_{_0}(s),\,B_{_0}(t) \big)= \min\lbrace s-a,\,t-a\rbrace -(b-a)^{-1}(s-a)(t-a)$ for any $s,\, t\in[a,\,b]$, respectively. A Gaussian bridge process is an extension of the Brownian bridge process when the Brownian motion $B(t)$, in the definition of the Brownian bridge $B_{_0}(t)$, is replaced by a more general Gaussian process $G(t)$. See, for example, \cite{gasbarra2007gaussian}.
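As an illustration (ours, not part of the thesis), the tie-down construction above can be simulated directly: generate a discrete Brownian motion by scaled cumulative sums of Gaussian increments, then subtract the linear interpolation of its right endpoint.

```python
import numpy as np

def brownian_bridge(rng, a=0.0, b=1.0, steps=200):
    """Simulate B_0 on [a, b] via B_0(t) = B(t) - (t - a)/(b - a) * B(b)."""
    t = np.linspace(a, b, steps + 1)
    increments = rng.normal(scale=np.sqrt(np.diff(t)))
    B = np.concatenate([[0.0], np.cumsum(increments)])   # B(a) = 0
    return t, B - (t - a) / (b - a) * B[-1]
```

Both endpoints of the returned path are (numerically) zero, and the empirical covariance at fixed $s,t$ approaches $\min\{s-a,\,t-a\}-(b-a)^{-1}(s-a)(t-a)$ as the number of simulated paths grows.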
\begin{remark}
The celebrated Donsker theorem \cite{donsker1951invariance} states that the suitably scaled partial sum process of a sequence of i.i.d. random variables, with mean zero and variance one, converges weakly to a Brownian motion; its tied-down version, with both endpoints fixed at zero, converges to a Brownian bridge. See \cite{van1996weak} or \cite{billingsley2013convergence}. A version of Theorem \ref{thm:gaussian.bridge.proj1} involving non-Gaussian random variables would extend this result to weighted partial sum processes and show that the limiting process is a Gaussian bridge with a certain covariance structure. So the Gaussian assumption in Theorem \ref{thm:gaussian.bridge.proj1} is not restrictive. It is also worth noting that for $r=0, \,1$, the process $\widehat{u}_{_j}^{\textrm{\,st}} \big( \lfloor mt\rfloor \big)$ reduces to the corresponding CUSUM process. To show this, consider the interval $\big[(\tau_{_j}+r_{_a})/m\, , (\tau_{_{j+1}}-r_{_b})/m\big]$,
\begin{itemize}
\item For the piecewise constant signals, $r=0$, the quantity $\left[\big(\mathbf{D}_{_{-\mathcal{A}}} \mbf D_{_{-\mathcal{A}}}^T\big)^{-1}\mbf D_{_{-\mathcal{A}}}\right]_{\lfloor mt\rfloor}\mbf y$ can be written as
{\small
\begin{align*}
\bigg(0,\ldots,\underbrace{0}_{ \tau_{_j}},1-\frac{\lfloor mt\rfloor}{\tau_{_{j+1}}-\tau_{_j}}, \ldots, \underbrace{1-\frac{\lfloor mt\rfloor}{\tau_{_{j+1}}-\tau_{_j}}}_{\lfloor mt\rfloor}, -\frac{\lfloor mt\rfloor}{\tau_{_{j+1}}-\tau_{_j}}, \ldots,-\frac{\lfloor mt\rfloor}{\tau_{_{j+1}}-\tau_{_j}},\, \underbrace{0}_{\tau_{_{j+1}}},\ldots,0 \bigg)\, \mbf y.
\end{align*}}
Notice that the above expression is the CUSUM statistic for the $j$-th segment, that is,
\begin{align}\label{CUSUMr0}
\sum\limits_{k=\tau_{_j}+1}^{\lfloor mt \rfloor}\, \Big( y_{_k}-\overline{y}_{_{(\tau_{_j}+1):\tau_{_{j+1}}}}\Big)\,,
\end{align}
where $\overline{y}_{_{(\tau_{_j}+1):\tau_{_{j+1}}}}$ is the sample average of $\big( y_{_{\tau_{_j}+1}}, \,\ldots,\, y_{_{\tau_{_{j+1}}}} \big)$. It is well known that the CUSUM statistic \eqref{CUSUMr0} converges weakly to the Brownian bridge. In addition, for any $(\tau_{_j}+r_{_a})/m\leq t' \leq t\leq (\tau_{_{j+1}}-r_{_b})/m$, the covariance function becomes
\begin{align*}
\left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\right]_{(\lfloor mt'\rfloor,\lfloor mt\rfloor)}= (\lfloor mt'\rfloor-\tau_{_j})- \frac{(\lfloor mt'\rfloor-\tau_{_j}) (\lfloor mt\rfloor-\tau_{_j})}{\tau_{_{j+1}}-\tau_{_j}},
\end{align*}
which is identical to the covariance function of the Brownian bridge.
\item For the piecewise linear signals $r=1$, the quantity $\left[\big(\mathbf{D}_{_{-\mathcal{A}}} \mbf D_{_{-\mathcal{A}}}^T\big)^{-1}\mbf D_{-\mathcal{A}}\right]_{\lfloor mt\rfloor}\mbf y$ reduces to
\begin{align}\label{CUSUMr1}
\sum\limits_{k=\tau_{_j}+1}^{\lfloor mt \rfloor} \, k\, \big( y_{_k}-\widehat{ f}_{_k} \big)\,,
\end{align}
where $\widehat{f}$ is the least squares fit of the simple linear regression of $\big(y_{_{\tau_{_j}+1}}, \,\ldots,\, y_{_{\tau_{_{j+1}}}}\big)$ onto $\big(\tau_{_j}+1, \,\ldots,\, \tau_{_{j+1}}\big)$. As proved in Theorem \ref{thm:gaussian.bridge.proj1}, the statistic \eqref{CUSUMr1} is also a Gaussian bridge process. Furthermore, using the results in \cite{hoskins1972some}, for any $(\tau_{_j}+r_{_a})/m\leq t' \leq t\leq (\tau_{_{j+1}}-r_{_b})/m$, the covariance function of this Gaussian bridge process is given by
{\small
\begin{align*}
\left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T \big)^{-1} \right]_{(\lfloor mt'\rfloor,\lfloor mt\rfloor)}&= \frac{\big(\Delta_{_j}- \lfloor mt\rfloor+\tau_{_j}\big)\big(\Delta_{_j}- \lfloor mt\rfloor+\tau_{_j}+1\big)}{3\, \Delta_{_j} \big(\Delta_{_j}+1\big) \big(\Delta_{_j}+2\big)}\\
&\hspace{-3cm}\times \big(\lfloor mt'\rfloor-\tau_{_j}\big) \big(\lfloor mt'\rfloor-\tau_{_j}+1\big)\\
&\hspace{-3cm}\times \Big [ \big(\lfloor mt\rfloor-\tau_{_j}+1\big) \big(\lfloor mt'\rfloor-\tau_{_j}-1\big) \big(\Delta_{_j}+2\big)-\big(\lfloor mt\rfloor-\tau_{_j}\big)\big(\lfloor mt'\rfloor-\tau_{_j}+2\big)\Delta_{_j} \Big ],
\end{align*}}
where $\Delta_{_j}=\tau_{_{j+1}}-\tau_{_j}$.
\end{itemize}
\end{remark}
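The $r=0$ reduction in the remark can be verified numerically. The sketch below (our illustration) takes a single segment, i.e. $\mca A=\emptyset$, and fixes the sign convention of $\mbf D$ to rows $(1,\,-1,\,0,\ldots)$ so that the identity holds with a plus sign; the opposite convention flips the global sign.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 15
y = rng.normal(size=n)

# First-difference matrix with rows (1, -1, 0, ...)
D = -np.diff(np.eye(n), axis=0)             # shape (n-1, n)

u_st = np.linalg.solve(D @ D.T, D @ y)      # [(D D^T)^{-1} D] y

# CUSUM of the single segment: partial sums of the centered observations
cusum = np.cumsum(y - y.mean())[:-1]
```

Here `u_st` and `cusum` agree entrywise, which is the $r=0$ CUSUM identity of the remark.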
\section{Stopping Criterion}
\label{sec:stop.rule.proj1}
This section develops a stopping criterion for the PRUTF algorithm. We provide tools for deriving a threshold value at which the PRUTF algorithm terminates the search if no dual variable exceeds this threshold. Consider the dual variables at the first step of the algorithm, i.e. $\widehat{u}^{\,\trm{st}}(t)= \big[\left(\mathbf{D}\mathbf{D}^T\right)^{-1}\mathbf{D}\big]_t\,\mathbf{y}$, for $t=0,\ldots,m$, which corresponds to $\widehat{u}^{\,\trm{st}}(t)$ in \eqref{u.stoch} with $\mca A=\emptyset$. It turns out that $\widehat{u}^{\,\trm{st}}(t)$ is a stochastic process whose local minima and maxima are attained at the change points. This structure is displayed with cyan-colored lines (\tikz\draw [color=cyan, thick, solid] (0,0) -- (.5,0);) in Figure \ref{fig:stopping-rule} for both piecewise constant, $r=0$, and piecewise linear, $r=1$, signals. As the PRUTF algorithm detects more change points and forms the augmented boundary set $\mca A$, the local minima or maxima corresponding to these change points are removed from the stochastic process
\begin{align}\label{Uhatst}
\widehat{u}_{_{-\mca A}}^{\,\trm{st}}(t)= \left[\big(\mathbf{D}_{_{-\mathcal{A}}}\mathbf{D}_{_{-\mathcal{A}}}^T\big)^{-1}\mathbf{D}_{_{-\mathcal{A}}}\right]_t\mathbf{y}= \ssum[j]{0}{J} ~\widehat{u}_{_j}^{\,\trm{st}}(t)\, \mathbbm 1 \big\{ t \in B_j \big\},
\end{align}
\begin{figure}[!t]
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/stopRule0.jpg}
\caption{Piecewise constant with $r=0$}
\end{subfigure}
\qquad
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/stopRule1.jpg}
\caption{Piecewise linear with $r=1$}
\end{subfigure}
\caption[Structure of $\widehat{u}^{\,\trm{st}}(t)$ With Removed Change Points]{The cyan-colored lines
show the dual variables for the full matrix $\mbf D$. Dual variables computed after removing the rows of the matrix $\mbf D$ associated with $\tau_{_1}$, that is $\mbf D_{_{-\mca A_{_1}}}$, are displayed by the olive-colored lines.
The augmented boundary set $\mca A_{_2}$, corresponding to $\tau_{_1}$ and $\tau_{_2}$, results in the dual variables shown by the orange-colored lines.
}
\label{fig:stopping-rule}
\end{figure}
\noindent for $t=1,\,\ldots,\, m-|\mca A|$. This fact is shown by the olive-colored lines (\tikz\draw [color=olive,thick,solid] (0,0) -- (.5,0);) in Figure \ref{fig:stopping-rule}. The last equality in \eqref{Uhatst} expresses that $\widehat{u}_{_{-\mca A}}^{\,\trm{st}}(t)$ is the stochastic term of the dual variables for all the interior coordinates, and is derived by stacking the stochastic terms of the dual variables associated with the $j$-th block, $\widehat{u}_{_j}^{\,\trm{st}}(t)$, as defined in \eqref{u.stoch}, for $j=0,\,\ldots,\,J$. This behaviour suggests a way to introduce a stopping rule for the PRUTF algorithm. As can be seen from the orange-colored lines (\tikz\draw [color=orange,thick,solid] (0,0) -- (.5,0);) of Figure \ref{fig:stopping-rule}, if all true change points are captured by the algorithm and stored in the augmented set $\mca A_0$, the resulting process
\begin{align*}
\widehat{u}_{_{-\mca A_0}}^{\,\trm{st}}(t)=\left[\big(\mathbf{D}_{_{-\mca A_0}}\mathbf{D}_{_{-\mca A_0}}^T\big)^{-1}\mathbf{D}_{_{-\mca A_0}}\right]_t\,\mathbf{y}
\qquad \text{ for }\quad t=0,\,\ldots,\,m-|\mca A_0|\,,
\end{align*}
contains no noticeable local optima and tends to fluctuate close to the zero line (the $x$-axis).
We terminate the search in Algorithm \ref{tf.path.alg} at step $j$ by checking whether the maximum of $\big|\,\widehat{ u}_{_{-\mca A_j}}^{\,\trm{st}} (t)\,\big|$, for $t=0,\,\ldots,\,m-|\mathcal{A}_{_j}|$, is smaller than a certain threshold. To specify this threshold exactly, as suggested by Theorem \ref{thm:gaussian.bridge.proj1}, we need to calculate the {\it excursion probabilities} of a Gaussian bridge process. As stated in \cite{adler2009random}, analytic formulas for the excursion probabilities are available only for a small number of Gaussian processes. One such Gaussian process is the Brownian bridge. It is well known that for the Brownian bridge process $B_{_0}(t)$ defined on the interval $[a,\,b]$,
\begin{align}\label{BBmax}
\Pr\Big(\sup_{a\leq t\leq b} \big| \,B_{_0}(t)\, \big| \geq x \Big)=2\,\sum\limits_{i=1}^{\infty}\, (-1)^{i+1}\exp\left(\frac{-2\,i^2\,x^2}{b-a}\right)\,.
\end{align}
See, for example, \cite{adler2009random}, and \cite{shorack2009empirical}.
Hence for the piecewise constant signals, the required threshold for stopping Algorithm \ref{tf.path.alg} can be obtained from \eqref{BBmax}, for a suitably chosen interval $[a,\,b]$. That is, for a given value $\alpha$, we choose $x_{_\alpha}$ such that $\Pr\big( \sup_{a\, \leq\, t\, \leq\, b} |\,B_{_0}(t)\,| \geq x_{_\alpha} \big)=1-\alpha$. Therefore, for $r=0$ and $a=0$, $b=1$, we stop Algorithm \ref{tf.path.alg} at the iteration $j_{_0}$ if
\begin{align*}
\max_{0\, \leq\, t\, \leq\, 1}~ \Big|\widehat{\mbf u}_{_{-\mca A_{j_{_0}}}}^{\,\trm{st}}\big( \lfloor\, kt\,\rfloor \big) \Big| ~\leq~ \sigma\, x_{_\alpha}\,\sqrt{k}\,,
\end{align*}
where $k=m-|\mca A_{j_0}|$.
For $r\geq 1$, the threshold is obtained in a similar fashion. Although closed-form excursion probabilities for Gaussian bridge processes are not available in general, by adapting the steps of the proof of \eqref{BBmax} in \cite{beghin1999maximum} we can establish a similar formula for the Gaussian bridge process $G_{_0}(t)$ in Theorem \ref{thm:gaussian.bridge.proj1}, namely
\begin{align}\label{GBmax}
\Pr\Big(\sup_{a\, \leq\, t\, \leq\, b}\, \big|\,G_{_0}(t)\, \big| \geq x \Big)=2\, \sum\limits_{i=1}^{\infty}~(-1)^{i+1}\, \exp\left(\frac{-2\,i^2\,x^2}{S_r^2(k)}\right)\,,
\end{align}
where $k=m-|\mca A_{_{j_{_0}}}|$, and the quantity $S_r^2(k)$ is the $k$-th diagonal element of the matrix $$\Big(\mathbf{D}_{_{-\mathcal{A}_{j_{_0}}}}\mathbf{D}_{_{-\mathcal{A}_{j_{_0}}}}^T\Big)^{-1}\,.$$
Hence, we stop Algorithm \ref{tf.path.alg} at the iteration $j_{_0}$ if
\begin{align}\label{stop.rule.proj1}
\max_{0\, \leq\, t\, \leq\, 1}~ \Big|\widehat{\mbf u}_{_{-\mca A_{j_{_0}}}}^{\,\trm{st}}\big( \lfloor\, kt\,\rfloor\big) \Big| \leq \sigma\, x_{_\alpha}\left(k-r\right)^{(2\,r+1)/2}\,,
\end{align}
where $x_{_\alpha}$ is derived from the equation
\begin{align}\label{thresh.stop.rule}
\sum\limits_{i=1}^{\infty}(-1)^{i+1}\exp\left(\frac{-2\,i^2\,x_{_\alpha}^2}{S_r^2(k)}\right)=\frac{\alpha}{2}\,.
\end{align}
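The threshold $x_{_\alpha}$ has no closed form, but since the truncated series is decreasing in $x$ over the relevant range, it can be solved by bisection. A sketch (ours): when $S_r^2(k)=1$ the equation is the tail of the asymptotic Kolmogorov distribution, so the solution at $\alpha=0.05$ should be close to the familiar critical value $\approx 1.358$.

```python
import math

def excursion_prob(x, S2=1.0, terms=100):
    """Truncated series 2 * sum_{i>=1} (-1)^{i+1} exp(-2 i^2 x^2 / S2),
    approximating P(sup |G_0(t)| >= x)."""
    return 2.0 * sum((-1) ** (i + 1) * math.exp(-2.0 * i * i * x * x / S2)
                     for i in range(1, terms + 1))

def threshold(alpha, S2=1.0):
    """Bisection for x_alpha solving excursion_prob(x, S2) = alpha.
    The truncated series is unreliable for very small x, hence the
    lower endpoint 0.2 * sqrt(S2)."""
    lo, hi = 0.2 * math.sqrt(S2), 10.0 * math.sqrt(S2)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if excursion_prob(mid, S2) > alpha:
            lo = mid          # probability too large: move threshold up
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the series scales as $x/\sqrt{S_r^2(k)}$, the threshold for a general $S_r^2(k)$ is the unit-variance threshold multiplied by $\sqrt{S_r^2(k)}$.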
\section{Pattern Recovery and Theoretical Results}
\label{sec:pattern.recovery.proj1}
The main purpose of this section is to investigate whether the PRUTF algorithm can recover features of the true signal $\mbf f$. We also demonstrate conditions under which the structure of the estimated signal $\widehat{\mbf f}$ matches that of the true signal $\mbf f$. To assess the performance of PRUTF in recovering the true signal, we first define what we mean by pattern recovery.
\begin{definition}{(Pattern Recovery):}
A trend filtering estimate $\widehat{\mbf f}$ recovers the pattern of the true signal $\mbf f$ if
\begin{align}\label{pat.rec}
\trm{sign} \big( \big[\,\mbf D\widehat{\mbf f}\, \big]_i \big)=\trm{sign}\big(\big[\,\mbf D\mbf f\,\big]_i\big), \qquad\qquad \textrm{for} \quad i=1,\ldots,m,
\end{align}
where $m=n-r-1$ is the number of rows of matrix $\mbf D$. We use the notation $\widehat{\mbf f} \stackrel{pr}{=}\mbf f$ to briefly denote the pattern recovery feature of $\widehat{\mbf f}$.
\end{definition}
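In code, the definition is a direct entrywise sign comparison. A minimal sketch (the function name and the 0-based difference construction are our conventions):

```python
import numpy as np

def pattern_recovered(f_hat, f, r):
    """Pattern recovery: sign(D f_hat) == sign(D f) entrywise, where
    D = D^{(r+1)} is the (r+1)-th order difference matrix (m = n - r - 1 rows)."""
    n = len(f)
    D = np.diff(np.eye(n), r + 1, axis=0)
    return np.array_equal(np.sign(D @ np.asarray(f_hat, float)),
                          np.sign(D @ np.asarray(f, float)))
```

For instance, for the piecewise constant signal $(0, 0, 2, 2)$ with $r=0$, the estimate $(0, 0, 1.5, 1.5)$ recovers the pattern (same change location, same sign), while $(0, 1, 1, 1)$ does not.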
In the asymptotic framework, a trend filtering estimate is called pattern consistent if
\begin{align}
\Pr \big(\, \widehat{\mbf f}\, \stackrel{pr}{=} \, \mbf f \, \big) ~\longrightarrow 1\qquad\qquad \textrm{as} \quad n\longrightarrow \infty,
\end{align}
where we write $\widehat{\mbf f}=\widehat{\mbf f}_n$ to emphasize its dependence on the sample size $n$. Pattern recovery is very similar to the concept of sign recovery in the lasso \citep{zhao2006model,wainwright2009sharp}, as it deals with the specification of both the locations of non-zero coefficients and their signs.
The problem of pattern recovery has been studied for the special case of the fused lasso in several papers. \cite{rinaldo2009properties} derived conditions under which the fused lasso consistently identifies the true pattern. This was contradicted by \cite{rojas2014change}, who argued that the fused lasso does not always succeed in discovering the exact change points. \cite{rojas2014change} showed that the fused lasso can be reformulated as the usual lasso, for which the necessary conditions for exact sign recovery have been established in the literature. They then proved that one such necessary condition, known as the irrepresentable condition, is not satisfied for the transformed lasso when there is a specific pattern called a staircase (Definition \ref{staircase.def}). Corrections to \cite{rinaldo2009properties} appeared in \cite{rinaldo2014corrections}. Later on, \cite{qian2016stepwise} proposed a method called the puffer transformation, which is shown to be consistent in specifying the exact change points, including in the presence of staircases.
In the remaining part of this section, we use the dual variables to demonstrate the situations in which PRUTF can correctly recover the pattern of the true signal. Exact pattern recovery implies that the dual variables consist of $J_{_0}+1$ consecutive bounded processes whose endpoints correspond to the true change points. The following theorem describes the situations in which exact pattern recovery can be attained. A particular case of this result in the context of piecewise constant signals was established in \cite{rinaldo2014corrections}.
\begin{theorem}\label{thm:consistency.constraints.proj1}
Exact pattern recovery in PRUTF occurs when the discrete time processes $\big\{ \widehat{u}_{_j}^{\,\mathrm{st}}(t)\, ,$ $t=\tau_{_j}+r_{_a}\, ,\, \ldots\, ,\, \tau_{_{j+1}}-r_{_b} \big\}$, for $j=0,\,\ldots,\,J_{_0}$, satisfy the following conditions simultaneously with probability one:
\begin{enumerate}[label=(\alph*)]
\item {\bf First block constraint:} for $t=1\, ,\,\ldots,\,\tau_{_1}-r_{_b}$,
{\footnotesize
\begin{align}\label{block.const.first}
-\lambda\left(1-\left[\big( \mathbf{D}_{_{-\mca A}}\mathbf{D}_{_{-\mca A}}^T \big)^{-1}\mathbf{D}_{_{-\mca A}}\right]_t\,\mbf D_{_{A_1}}^T\mbf 1\right)& ~\leq~ \widehat{u}_{_0}^{\,\mathrm{st}}(t) ~\leq~ \lambda\left(1+\left[\big(\mathbf{D}_{_{-\mca A}}\mathbf{D}_{_{-\mca A}}^T \big)^{-1}\mathbf{D}_{_{-\mca A}}\right]_t\,\mbf D_{_{A_1}}^T\mbf 1\right)\,.
\end{align}}
\item {\bf Last Block constraint:} for $t=\tau_{_{J_0}}+r_{_a}\, ,\, \ldots\, ,\, m$,
{\footnotesize
\begin{align}\label{block.const.last}
\hspace{-.4cm}-\lambda\left(1+\left[ \big(\mathbf{D}_{_{-\mca A}}\mathbf{D}_{_{-\mca A}}^T\big)^{-1}\mathbf{D}_{_{-\mca A}}\right]_t\,\mbf D_{_{{A}_{_{J_0}}}}^T\mbf 1\right)& ~\leq~ \widehat{u}_{_{J_0}}^{\,\mathrm{st}}(t) ~\leq~ \lambda \left(1-\left[\big(\mathbf{D}_{_{-\mca A}}\mathbf{D}_{_{-\mca A}}^T \big)^{-1}\mathbf{D}_{_{-\mca A}}\right]_t\,\mbf D_{_{{A}_{_{J_0}}}}^T\mbf 1\right)\,.
\end{align}}
\item {\bf Interior Block constraints:} for $t=\tau_{_j}+r_{_a}\, ,\, \ldots\, ,\, \tau_{_{j+1}}-r_{_b}$, if $s_{_{j}}\neq s_{_{j+1}}$
{\footnotesize
\begin{align}\label{block.const.inter}
&-\lambda\left(1-\left[ \big(\mathbf{D}_{_{-\mca A}}\mathbf{D}_{_{-\mca A}}^T \big)^{-1}\mathbf{D}_{_{-\mca A}}\right]_t\, \big(\mbf D_{_{A_{j+1}}}^T\mbf 1-\mbf D_{_{A_j}}^T\mbf 1 \big)\right)\, \leq \, \widehat{u}_{_j}^{\,\mathrm{st}}(t) \\[10pt]
&\hspace{5cm}\leq\lambda\left(1+\left[\big(\mathbf{D}_{_{-\mca A}}\mathbf{D}_{_{-\mca A}}^T \big)^{-1}\mathbf{D}_{_{-\mca A}}\right]_t\, \big(\mbf D_{_{A_{j+1}}}^T\mbf 1-\mbf D_{_{A_j}}^T\mbf 1 \big)\right)\,,
\end{align}}
and if $s_{_{j}}= s_{_{j+1}}$, which corresponds to a staircase block, $\widehat{u}_{_j}^{\,\mathrm{st}}(t) \,\leq\, 0$ or $\widehat{u}_{_j}^{\,\mathrm{st}}(t) \,\geq\, 0$.
\end{enumerate}
\end{theorem}
\noindent
In the foregoing equations, $\mbf 1 \in \mbb R^{r+1}$ is a vector of size $r+1$ whose elements are all 1. A proof of the theorem is given in Appendix \ref{prf:consistency.constraints.proj1}.
We analyze the performance of the PRUTF algorithm in pattern recovery in two different scenarios: signals with staircase patterns and signals without staircase patterns. To our knowledge, \cite{rojas2014change} was the first paper to carefully investigate the staircase pattern for piecewise constant signals in the change point analysis setting. In \cite{rojas2014change}, a staircase pattern in a piecewise constant signal refers to the phenomenon of equal signs at two consecutive changes. We extend this concept to the general case, covering any piecewise polynomial signal of order $r$, by applying the penalty matrix $\mbf D=\mathbf{D}^{(r+1)}$.
\begin{definition}\label{staircase.def}
Suppose that the true signal $\mathbf{f}$ is a piecewise polynomial of order $r$ with change points at the locations $\boldsymbol\tau= \big\{\tau_{_1},\,\ldots,\,\tau_{_{J_{_0}}} \big\}$. Moreover, let $\mathbf{B}= \big\{B_{_0},\,\ldots,\,B_{_{J_{_0}}}\big\}$ be blocks created by the change points in $\bsy \tau$. A staircase occurs in block $B_{_j},~ j=1,\,\ldots,\,J_{_0}-1$ if
\begin{align}\label{stair}
\mathrm{sign}\big( \big[\,\mathbf{D}\mbf f\,\big]_{\tau_{_j}}\big)=\mathrm{sign}\big(\big[\,\mathbf{D}\mbf f\,\big]_{\tau_{_{j+1}}}\big).
\end{align}
\end{definition}
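A staircase can be located programmatically by scanning $\mbf D^{(r+1)}\mbf f$ for consecutive nonzero entries with equal signs. A sketch under our own conventions (0-based indices; the tolerance for "nonzero" is an assumption):

```python
import numpy as np

def staircase_blocks(f, r, tol=1e-10):
    """Return (0-based) indices j of consecutive change-point pairs
    (tau_j, tau_{j+1}) whose entries of D^{(r+1)} f share the same sign,
    i.e. the staircase blocks of Definition above."""
    n = len(f)
    D = np.diff(np.eye(n), r + 1, axis=0)
    d = D @ np.asarray(f, dtype=float)
    cps = np.flatnonzero(np.abs(d) > tol)      # change-point rows of D
    signs = np.sign(d[cps])
    return [j for j in range(len(cps) - 1) if signs[j] == signs[j + 1]]
```

For $r=0$, the "up-up" signal $(0,0,1,1,2,2)$ contains one staircase block, while the "up-down" signal $(0,0,1,1,0,0)$ contains none.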
The following theorem investigates the consistency of PRUTF in pattern recovery, for signals with and without staircases. Specifically, it shows that for a signal without a staircase, the exact pattern recovery conditions are satisfied with probability approaching one. On the other hand, in the presence of staircases in the signal, the probability of these conditions holding, which is equivalent to the probability of a Gaussian bridge process never crossing the zero line, converges to zero.
In the literature, the consistency of a change point method is usually characterized in terms of the signal length $n$, the number of change points $J_0$, the noise variance $\sigma_n^2$, the minimal spacing between change points,
\begin{align*}
\underline{L}_n= \min\limits_{j=0,\,\ldots,\,J_{_0}}~ \big|L_{n,\, j} \big|= \min\limits_{j=0,\,\ldots,\,J_{_0}}~ \big|\tau_{_{j+1}}-\tau_{_j}\big|,
\end{align*}
and the minimum magnitude of jumps between change points,
\begin{align*}
\delta_n= \min\limits_{j=1,\,\ldots,\,J_{_0}}~ \big| \mbf D_{\tau_j} \mbf f \big|.
\end{align*}
All the above quantities are allowed to change as $n$ grows.
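Both quantities are straightforward to compute from $\mbf D^{(r+1)}\mbf f$. The sketch below (our 0-based conventions for change-point locations; the nonzero tolerance is an assumption) pads the change points with the boundaries $\tau_{_0}=0$ and $\tau_{_{J_0+1}}=n$ before taking the minimal gap:

```python
import numpy as np

def spacing_and_jump(f, r, tol=1e-10):
    """Minimal spacing L_n between change points (boundaries included)
    and minimal jump magnitude delta_n at the change points."""
    n = len(f)
    D = np.diff(np.eye(n), r + 1, axis=0)
    d = D @ np.asarray(f, dtype=float)
    cps = np.flatnonzero(np.abs(d) > tol)        # change-point rows of D
    taus = np.concatenate([[0], cps + 1, [n]])   # pad with tau_0 = 0, tau_{J+1} = n
    L = int(np.min(np.diff(taus)))
    delta = float(np.min(np.abs(d[cps]))) if len(cps) else float("inf")
    return L, delta
```

For example, the signal $(0,0,0,1,1)$ with $r=0$ has a single change point with segments of lengths $3$ and $2$ and jump size $1$.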
In the following, we present our main theorem providing conditions under which the output of the PRUTF algorithm consistently recovers the pattern of the true signal $\mbf f$.
\begin{theorem}\label{thm:consistency.proj1}
Suppose that $\mbf y$ follows the model in \eqref{fmodel.proj1}. Let $\bsy\tau$ be the set of $J_0$ change points of the true signal $\mbf f$. Additionally, assume that $\widehat{\bsy\tau}_n$ and $\widehat{\mbf f}_n$ are the set of estimated change points and the corresponding signal estimate obtained by the PRUTF algorithm, respectively. The following statements hold for the PRUTF algorithm.
\begin{enumerate}[label=(\alph*)]
\item {\bf Non-staircase Blocks:} Suppose there is no staircase block in the true signal $\mbf f$. For some $\xi>0$ and with
\begin{align*}
\lambda_n < \dfrac{\delta_n\, \underline{L}_n^{2r+1}}{n^{2r}\, 2^{\,r+2}},
\end{align*}
if the conditions
\begin{itemize}
\item \begin{minipage}{0.875\textwidth}
\begin{align}\label{consistent.conditions1.proj1}
\frac{\delta_{n}\, \underline{L}_{n}^{r+1/2}}{n^r\, \sigma_n} \longrightarrow \infty
\qquad\quad \trm{and} \qquad\quad
\frac{ \delta_{n}\, \underline{L}_{n}^{r+1/2}}{ 2^{\,r/2+2}\,n^r\, \sigma_n\, \sqrt{\log (J_0)}} ~>~ (1+\xi),
\end{align}
\end{minipage}
\item \begin{minipage}{0.875\textwidth}
\centering
\begin{align}\label{consistent.conditions2.proj1}
\frac{\lambda_n\,\, \underline{L}_{n}^{r+1/2}}{n^r\, \sigma_n} \longrightarrow \infty
\qquad\quad \trm{and} \qquad\quad
\frac{ 2^{\,r/2+1}\, \lambda_n\,\, \underline{L}_{n}^{r+1/2}}{n^r\, \sigma_n\, \sqrt{\log (n-J_0)}} ~>~ (1+ \xi),
\end{align}
\end{minipage}
\end{itemize}
hold, then the PRUTF algorithm guarantees exact pattern recovery with probability approaching one. That is,
\begin{align*}
\Pr \big(\, \widehat{\mbf f}_n\, \stackrel{pr}{=}\, \mbf f \big)\longrightarrow 1
\qquad\qquad\qquad
\trm{as}
\qquad
n \longrightarrow \infty.
\end{align*}
\item {\bf Staircase Blocks:} On the other hand, if the true signal $\mbf f$ contains at least one staircase block, then the probability of exact pattern recovery by the PRUTF algorithm converges to zero. That is,
\begin{align*}
\Pr \big(\, \widehat{\mbf f}_n\, \stackrel{pr}{=}\, \mbf f \big)\longrightarrow 0
\qquad\qquad\qquad
\trm{as}
\qquad
n \longrightarrow \infty.
\end{align*}
\end{enumerate}
\end{theorem}
A proof is given in Appendix \ref{prf:consistency.proj1}.
\begin{remark}
The performance of PRUTF in terms of consistent pattern recovery relies on the quantity $\delta_n\, \underline{L}_{n}^{r+1/2}/\sigma_n$ and the choice of $\lambda_n$. In the piecewise constant case, the former quantity reduces to the well-known signal-to-noise ratio, which is crucial for consistent change point estimation \citep{fryzlewicz2014wild,wang2020univariate}. The statements in \eqref{consistent.conditions1.proj1} illustrate that the consistency of PRUTF in non-staircase blocks is achievable if the quantity $\delta_n\,\underline{L}_{n}^{\,r+1/2}/\sigma_n$ is of order $O(n^{r+c})$, for some $c>0$. In addition, the number of change points $J_0$ is allowed to diverge, provided
\begin{align*}
\log \big( J_0 \big) ~\lesssim~ \frac{ \delta_{n}^2\,\, \underline{L}_{n}^{2r+1}}{ n^{2r}\, \sigma_n^2}\,.
\end{align*}
\end{remark}
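As a numeric aid, the upper limit on the tuning parameter $\lambda_n$ displayed above can be computed directly from $\delta_n$, $\underline{L}_n$, $n$ and $r$. The following is a minimal sketch; the function name is ours and not part of the formal development.

```python
def lambda_upper_bound(delta, L_min, n, r):
    """Upper limit on lambda_n from the theorem:
    delta * L_min**(2r+1) / (n**(2r) * 2**(r+2))."""
    return delta * L_min ** (2 * r + 1) / (n ** (2 * r) * 2 ** (r + 2))

# e.g. piecewise constant case (r = 0): bound = delta * L_min / 4
bound = lambda_upper_bound(2.0, 10, 100, 0)
```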
The drift term \eqref{u.drift} plays a key role in assessing the performance of PRUTF in pattern recovery. From \eqref{drift.term.staircase.proj1}, this drift for a staircase block $B_{_j}$ becomes $\lambda \,s_{_j}$, which is constant in $t$ over the entire block. Consequently, apart from this constant drift, the interior dual variables $\widehat{u}_{_j}(t;\lambda)$ for the staircase block $B_{_j}$ contain only the stochastic term $\widehat{u}_{_j}^{\,\textrm{st}}(t)=\left[\big(\mathbf{D}_{_{-\mca A}}\mathbf{D}_{_{-\mca A}}^T\big)^{-1}\mathbf{D}_{_{-\mca A}}\right]_t\, \mathbf{y}$, so that $\widehat{u}_{_j}(t;\lambda)$ fluctuates around the line $\lambda\, s_{_j}$. Recall that the KKT conditions for the dual problem of trend filtering require $\widehat{u}_{_j}(t;\,\lambda)$ to stay within the lines $-\lambda$ and $\lambda$. Thus, for a signal with staircase patterns, the PRUTF algorithm is sensitive to the variability of the random noise and identifies spurious change points once $\widehat{u}_{_{j}}(t;\lambda)$ touches the $\pm\lambda$ boundaries.
Examples of piecewise constant and piecewise linear signals, along with their corresponding dual variables, are depicted in Figure \ref{fig:stair-dual}, in which the above argument can be clearly seen.
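For readers who wish to reproduce such dual-variable plots, the stochastic term $\big(\mathbf{D}_{-\mathcal{A}}\mathbf{D}_{-\mathcal{A}}^T\big)^{-1}\mathbf{D}_{-\mathcal{A}}\,\mathbf{y}$ can be computed as sketched below. The sign convention of the difference operator and the row indexing of the active set $\mathcal{A}$ are assumptions on our part.

```python
import numpy as np

def diff_operator(n, order):
    """Order-th difference matrix D^(order), of shape (n - order, n).
    Row i applies the stencil, e.g. [1, -2, 1] for order 2."""
    D = np.eye(n)
    for _ in range(order):
        D = D[1:] - D[:-1]
    return D

def stochastic_dual(y, order, active):
    """Stochastic part of the interior dual variables,
    (D_{-A} D_{-A}^T)^{-1} D_{-A} y, after removing the rows of D
    indexed by the (assumed 0-based) active set `active`."""
    D = diff_operator(len(y), order)
    keep = [i for i in range(D.shape[0]) if i not in set(active)]
    D_mA = D[keep]
    return np.linalg.solve(D_mA @ D_mA.T, D_mA @ np.asarray(y, float))
```

For instance, a constant input yields identically zero dual fluctuations, as expected.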
\begin{figure}[!ht]
\begin{center}
\begin{subfigure}{.38\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/PrimalDual-Stair0.jpg}
\caption{Piecewise constant signal with staircase block (50, 80].}
\end{subfigure}
\qquad\qquad
\begin{subfigure}{.38\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/PrimalDual-Stair1.jpg}
\caption{Piecewise linear signal with staircase block (20, 55].}
\end{subfigure}
\caption[Piecewise Constant and Piecewise Linear Signals With Staircase Patterns]{Piecewise constant and piecewise linear signals with staircase pattern at blocks (50, 80] and (20, 55] and their corresponding dual variables. }
\label{fig:stair-dual}
\end{center}
\end{figure}
According to Theorem \ref{thm:consistency.proj1}, if there is no staircase pattern in the underlying signal, the PRUTF algorithm consistently estimates the true signal, and fails to do so otherwise. Given the results in Theorem \ref{thm:consistency.proj1}, a natural question is whether Algorithm \ref{tf.path.alg} could be modified to enjoy consistent pattern recovery in all cases. In the next section, we present an effective remedy based on altering the sign of a change associated with a staircase block.
\section{Modified PRUTF Algorithm}
\label{sec:modified.trend.filtering.proj1}
In this section, we attempt to modify the PRUTF algorithm in such a way that it produces consistent estimates of the number and locations of change points even in the presence of staircase patterns. As previously mentioned, for a staircase block, the drift term \eqref{u.drift} is constant and leads to false discoveries in change points. This is shown in Figure \ref{fig:stair.false.discovery} with a piecewise constant signal of size $n=100$ and the true change points at $\bsy\tau= \big\{15,\, 40,\, 50,\, 80 \big\}$. The figure reveals that the staircase block $(50,\, 80]$ leads to three false discoveries at the locations 52, 54 and 76.
\begin{figure}[!t]
\begin{center}
\begin{subfigure}[b]{.36\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/dual_orig_alg1.jpg}
\caption{First change point at $\tau=50$.}
\end{subfigure}
\qquad\quad
\begin{subfigure}[b]{.36\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/dual_orig_alg2.jpg}
\caption{Second change point at $\tau=15$.}
\end{subfigure}
\\
\begin{subfigure}[b]{.36\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/dual_orig_alg3.jpg}
\caption{Third change point at $\tau=80$.}
\end{subfigure}
\qquad\quad
\begin{subfigure}[b]{.36\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/dual_orig_alg4.jpg}
\caption{Fourth change point at $\tau=40$.}
\end{subfigure}
\caption[Impact of Staircase Patterns in Change Point False Discovery] {The process of detecting change points using PRUTF for a signal with a staircase pattern. In panel (b), there are three falsely detected change points, $\{52, \,54,\, 76\}$, which are due to the staircase block $(50, \,80]$.}
\label{fig:stair.false.discovery}
\end{center}
\end{figure}
\begin{figure}[!hb]
\begin{center}
\begin{subfigure}{.36\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/dual_mod_alg1.jpg}
\caption{First change point at $\tau=50$.}
\end{subfigure}
\qquad\quad
\begin{subfigure}{.36\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/dual_mod_alg2.jpg}
\caption{Second change point at $\tau=15$.}
\end{subfigure}
\\
\begin{subfigure}{.36\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/dual_mod_alg3.jpg}
\caption{Third change point at $\tau=80$.}
\end{subfigure}
\qquad\quad
\begin{subfigure}{.36\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/dual_mod_alg4.jpg}
\caption{Fourth change point at $\tau=40$.}
\end{subfigure}
\caption[Steps of the mPRUTF Algorithm]{Steps of the mPRUTF algorithm until all four true change points are identified.}
\label{fig:modified-tf}
\end{center}
\end{figure}
The inconsistency of PRUTF in the presence of a staircase, as established in Theorem \ref{thm:consistency.proj1}, stems from the fact that the change signs of the two consecutive change points at both ends of the staircase block are identical. That is, for the staircase block $B_{_j}$, $\mathrm{sign} \big( [\,\mathbf{D} \mbf f\,]_{\tau_{_j}} \big)= \mathrm{sign} \big( [\,\mathbf{D} \mbf f\,]_{\tau_{_{j+1}}} \big)$. Therefore, a question arises: can we modify Algorithm \ref{tf.path.alg} in such a way that the change signs of two neighbouring change points never become equal, while still yielding the solution path of trend filtering? We suggest a simple but efficient solution to this question.
Once a new change point is identified, we check whether its $r$-th order difference sign is the same as that of the change points immediately before and after it. If these change signs are not identical, the procedure continues to search for the next change point. Otherwise, we replace the sign of the neighbouring change point with zero. This sign replacement prevents the drift term \eqref{u.drift} from becoming constant. This idea is implemented for the above signal, and the result is displayed in Figure \ref{fig:modified-tf}. As shown in panel (b), the sign of the first change point at location 50 is set to zero since it is identical to the sign of the second change point at 15. This sign replacement eliminates the false discoveries that appeared in panel (b) of Figure \ref{fig:stair.false.discovery}.
Based on the above argument, the PRUTF procedure presented in Algorithm \ref{tf.path.alg} can be modified as follows to avoid false discoveries and to achieve consistent pattern recovery.
\begin{algorithm}[mPRUTF] \label{tf.modified.path.alg}
\begin{enumerate}
\item[]
\item Execute steps 1 and 2 of Algorithm \ref{tf.path.alg}.
\item
\begin{enumerate}
\item Execute part (a) of step 3 in Algorithm \ref{tf.path.alg} to obtain $\tau_{_j}^{^\mathrm{join}}$ and its sign $s_{_j}^{^\mathrm{join}}$. At this point, the algorithm checks whether $s_{_j}^{^\mathrm{join}}$ is identical to the sign of the change point just before or just after $\tau_{_j}^{^\mathrm{join}}$. If so, set the sign of the neighbouring change point that is identical to $s_{_j}^{^\mathrm{join}}$ to zero.
Then, repeat part (a) of step 3 again to obtain new $\tau_{_j}^{^\mathrm{join}}$ and $s_{_j}^{^\mathrm{join}}$ and update the sets $\mca A_{_j}$ and $\mca B_{_j}$.
\item Execute parts $(b)$ and $(c)$ of step 3 in Algorithm \ref{tf.path.alg}.
\end{enumerate}
\item Repeat step 2 until either $\lambda_{_j}=0$ or a stopping rule is met.
\end{enumerate}
\end{algorithm}
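The neighbour-sign check in step 2, part (a), can be sketched as follows. This is a hypothetical helper, not part of the formal algorithm: `signs` maps each active change point to its current change sign, and the zeroing mutates it in place.

```python
def zero_equal_neighbour_signs(active, signs, tau_new, s_new):
    """mPRUTF modification: if the newly joined change point tau_new has
    the same change sign as an immediate neighbour in the sorted active
    set, set that neighbour's sign to zero."""
    order = sorted(active)
    i = order.index(tau_new)
    for j in (i - 1, i + 1):          # immediate left and right neighbours
        if 0 <= j < len(order):
            neighbour = order[j]
            if signs[neighbour] == s_new:
                signs[neighbour] = 0
    return signs
```

For the signal of Figure \ref{fig:stair.false.discovery}, joining the change point at 15 after the one at 50 (both with sign $+1$) zeroes the sign at 50, exactly as in panel (b) of Figure \ref{fig:modified-tf}.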
The modified PRUTF (mPRUTF) algorithm produces consistent change point estimates, even in the presence of staircase patterns. This consistency is achieved by converting staircase patterns into non-staircase patterns, which avoids false change point detections. In other words, running mPRUTF on an arbitrary signal (with or without staircases) is equivalent to running PRUTF on a signal without any staircase; see Figures \ref{fig:stair.false.discovery} and \ref{fig:modified-tf}. Thus, from part (a) of Theorem \ref{thm:consistency.proj1}, the mPRUTF algorithm is consistent in pattern recovery.
\begin{remark}
In step 2, part (a) of the mPRUTF algorithm, presented in Algorithm \ref{tf.modified.path.alg}, the sign $s_{_j}^{^\mathrm{join}}$ of the new change point cannot be identical to the signs of both of its immediate neighbouring change points, because the algorithm has already checked the equality of signs at previous steps: whenever two neighbouring signs are equal, one of them is set to zero.
\end{remark}
Recall that the KKT optimality conditions for solutions of the trend filtering problem in \eqref{tf.dual.obj.proj1} require the dual variables $\widehat{\mbf u}_{_\lambda}$ to be at most $\lambda$ in absolute value, i.e., $|\widehat{\mbf u}_{_\lambda}|\leq \lambda$. This condition still holds when we replace the sign values ($+1$ or $-1$) with 0. Consequently, we have the following theorem.
\begin{theorem}\label{lem:modif.alg}
The mPRUTF algorithm presented in Algorithm \ref{tf.modified.path.alg} is a solution path of trend filtering.
\end{theorem}
For brevity, we do not provide the proof of Theorem \ref{lem:modif.alg} here; we refer the reader to similar arguments for the LARS algorithm for the lasso in \cite{tibshirani2013lasso}.
It is worth pointing out that the mPRUTF algorithm requires slightly more computation than the original PRUTF algorithm. The increase in computation time depends directly on the number of staircase blocks in the underlying signal. To show how mPRUTF resolves the problem of false discovery in signals with staircases, we ran both algorithms on 1000 datasets generated from a piecewise constant and a piecewise linear signal. The frequency plots of the estimated change points for both algorithms are presented in Figure \ref{fig:simul-mod-vs-orig}. The figure reveals that the original algorithm produces false discoveries within staircase blocks for both signals, whereas mPRUTF resolves this issue.
\begin{figure}[!h]
\begin{center}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\linewidth]{img/cpfreq_pc.jpg}
\caption{A piecewise constant signal with blocks 4 and 8 as staircase blocks.}
\end{subfigure}
\quad
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\linewidth]{img/cpfreq_pl.jpg}
\caption{A piecewise linear signal with blocks 3 and 5 as staircase blocks.}
\end{subfigure}
\caption[Change Point Frequency Plots For the PRUTF and mPRUTF Algorithms]{The frequency plots of estimated change points using the PRUTF and mPRUTF algorithms.}
\label{fig:simul-mod-vs-orig}
\end{center}
\end{figure}
\section{Numerical Studies}
\label{sec:simulation.proj1}
In this section, we provide numerical studies to demonstrate the effectiveness and performance of our proposed algorithm, mPRUTF. We begin with a simulation study and then provide real data analyses.
\subsection{Simulation Study}
We investigate the performance of mPRUTF through a simulation study, considering two scenarios: piecewise constant and piecewise linear signals with staircase patterns. We compare our method to several state-of-the-art approaches in change point analysis. These methods, their available packages on CRAN, and their applicability to the two scenarios are listed in Table \ref{tab:CPmethods}.
\begin{table}[!t]
\centering
{\small
\begin{tabular}{ll ll ll llll}
\hline
Method && Reference && R Package && \multicolumn{3}{c}{Signal} \\
\cline{7-9}
&& && && PWC && PWL \\ \cline{1-1}\cline{3-3}\cline{5-5}\cline{7-9}
PELT && \cite{killick2012optimal} && {\bf changepoint} && \checkmark && \ding{53} \\
WBS && \cite{fryzlewicz2014wild} && {\bf wbs} && \checkmark && \ding{53} \\
SMUCE && \cite{frick2014multiscale} && {\bf stepR} && \checkmark && \ding{53} \\
NOT && \cite{baranowski2019narrowest} && {\bf not} && \checkmark && \checkmark \\
ID && \cite{anastasiou2019detecting} && {\bf IDetect} && \checkmark && \checkmark \\
\hline
\end{tabular}}
\caption[A List of Change Point Detection Methods With Their Packages in CRAN]{A list of change point detection and estimation methods with their packages in CRAN. The last two columns indicate which methods can be applied to piecewise constant and/or piecewise linear signals.}
\label{tab:CPmethods}
\end{table}
We adopt the simulation setting of \cite{baranowski2019narrowest} and consider piecewise constant and piecewise linear signals, as follows.
\begin{enumerate}[label=(\roman*)]
\item A piecewise constant signal (PWC) of size $n=2024$ with the number of change points $J_{_0}=8$. The locations of the true change points are $\bsy\tau= \big\{$205, 308, 512, 820, 902, 1332, 1557, 1659$\big\}$ with jump sizes 1.464, -0.656, 0.098, 1.830, 0.537, 0.768, -0.574, -3.335. We set the starting intercept to 0.
\item A piecewise linear signal (PWL) of size $n=1408$ with the number of change points $J_{_0}=7$. The true change points are located at $\bsy\tau=\big\{$256, 512, 768, 1024, 1152, 1280, 1344$\big\}$. The corresponding intercepts and slopes for the 8 blocks created by $\bsy\tau$ are 0.111, 0.553, -0.481, 3.002, -7.169, -0.030, 7.217, -0.958 and -8, 6, -3, -11, 12, 4, -7, 8, respectively.
\end{enumerate}
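The PWC signal above can be reconstructed from its change points and jump sizes. The following is a minimal sketch; we adopt the convention that the level changes immediately after position $\tau_j$, which is an assumption on our part.

```python
import numpy as np

def pwc_signal(n, taus, jumps, start=0.0):
    """Piecewise constant signal: starts at `start` and increases by
    jumps[j] from position taus[j] onwards (0-based slicing)."""
    f = np.full(n, float(start))
    for tau, jump in zip(taus, jumps):
        f[tau:] += jump
    return f

# change point locations and jump sizes of the PWC scenario
taus  = [205, 308, 512, 820, 902, 1332, 1557, 1659]
jumps = [1.464, -0.656, 0.098, 1.830, 0.537, 0.768, -0.574, -3.335]
f_pwc = pwc_signal(2024, taus, jumps)  # starting intercept 0
```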
\begin{figure}[!b]
\begin{center}
\begin{subfigure}{.475\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/signal_pwc.jpg}
\caption{PWC signal with staircases at blocks $(512\,,\, 820]$ and $(1557\,,\, 1659]$.}
\end{subfigure}
\quad
\begin{subfigure}{.475\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/signal_pwl.jpg}
\caption{PWL signal with staircases at blocks $(512\,,\,768]$ and $(1024\,,\,1152]$.}
\end{subfigure}
\caption[Simulated Piecewise Constant and Piecewise Linear Signals]{The piecewise constant (PWC) and piecewise linear (PWL) signals with the generated samples used in the simulation study.}
\label{fig:signal-scenarios}
\end{center}
\end{figure}
\noindent Figure \ref{fig:signal-scenarios} displays the true PWC and PWL signals, with representative datasets generated using model \eqref{fmodel.proj1}. We note that both the PWC and PWL signals contain two staircase blocks: $(512\,,\, 820]$ and $(1557\,,\, 1659]$ for the PWC signal, and $(512\,,\,768]$ and $(1024\,,\,1152]$ for the PWL signal.
We apply mPRUTF, presented in Algorithm \ref{tf.modified.path.alg}, to estimate the number and locations of the change points in the PWC and PWL signals. In each iteration of the simulation study, we simulate a dataset according to model \eqref{fmodel.proj1} under the assumption that the error terms are independently and identically distributed as $N(0\,,\,\sigma^2)$. Moreover, we set the significance level to $\alpha=0.05$ for the stopping rule in \eqref{stop.rule.proj1}.
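Each simulated dataset is drawn from model \eqref{fmodel.proj1} as $y_t = f_t + \varepsilon_t$ with i.i.d.\ Gaussian errors; a minimal sketch (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2021)  # arbitrary seed for reproducibility

def simulate_dataset(f, sigma):
    """One dataset y = f + eps, with eps_t iid N(0, sigma^2)."""
    f = np.asarray(f, float)
    return f + rng.normal(0.0, sigma, size=f.size)
```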
\begin{figure}[!b]
\begin{center}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{img/ncpts_pc.jpg}
\caption{Average number of change points.}
\label{fig:simul-pc-a}
\end{subfigure}
\quad
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{img/mse_pc.jpg}
\caption{MSE estimates.}
\label{fig:simul-pc-b}
\end{subfigure}
\\
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{img/dh_pc.jpg}
\caption{Hausdorff distance.}
\label{fig:simul-pc-c}
\end{subfigure}
\quad
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{img/time_pc.jpg}
\caption{Computation time.}
\label{fig:simul-pc-d}
\end{subfigure}
\caption[Comparison of mPRUTF With the State-of-the-Art Change Point Methods: Piecewise Constant Signal]{The estimated average number of change points, MSE and scaled Hausdorff distance, as well as the computation time, of various methods for the PWC signal. The results are provided for different values of the noise level $\sigma$.}
\label{fig:simul-pc}
\end{center}
\end{figure}
\begin{figure}[!b]
\begin{center}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{img/ncpts_pl.jpg}
\caption{Average number of change points.}
\label{fig:simul-pl-a}
\end{subfigure}
\quad
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{img/mse_pl.jpg}
\caption{MSE estimates.}
\label{fig:simul-pl-b}
\end{subfigure}
\newline
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{img/dh_pl.jpg}
\caption{Hausdorff distance.}
\label{fig:simul-pl-c}
\end{subfigure}
\quad
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{img/time_pl.jpg}
\caption{Computation time.}
\label{fig:simul-pl-d}
\end{subfigure}
\caption [Comparison of mPRUTF With the State-of-the-Art Change Point Methods: Piecewise Linear Signal]{The estimated average number of change points, MSE and scaled Hausdorff distance, as well as the computation time, of various methods for the PWL signal. The results are provided for different values of the noise level $\sigma$.}
\label{fig:simul-pl}
\end{center}
\end{figure}
In order to explore the impact of different noise levels on the change point methods, we run each simulation for various values of $\sigma$ in $\big\{ 0.5 , \,1 , \,1.5 ,\, \ldots\, ,\, 4.5 , \,5 \big\}$. We run the simulation $N=5000$ times and report the results for each change point technique in terms of estimates of the number of change points, estimates of the mean squared error given by $\textrm{MSE}=N^{-1} \sum_{\, i=1}^{N} \big(\widehat{f}_{_i}-f_{_i} \big)^2$, estimates of the scaled Hausdorff distance given by
\begin{align*}
d_H=\frac{1}{n}\,\max\left\{\max_{j=0,\,\ldots,\,J_{_0}}\,\,\min_{i=0,\,\ldots,\,\widehat{J}_{_0}}\,\big|\widehat{\tau}_{_i}-\tau_{_j} \big|~~,~~\max_{i=0,\,\ldots,\,\widehat{J}_{_0}}\,\min_{j=0,\,\ldots,\,J_{_0}}\, \big|\widehat{\tau}_{_i}-\tau_{_j} \big|\right\},
\end{align*}
and the computation time in seconds. These quantities are frequently used to assess the performance of change point detection techniques in the literature; see, for example, \cite{baranowski2019narrowest} and \cite{anastasiou2019detecting}. The signal estimate $\widehat{f}$ is computed by the least squares fit of a polynomial of order $r$ to the observations within the segments created by each change point method. We also remark that the tuning parameters and stopping criteria for the methods listed in Table \ref{tab:CPmethods} are set to the default values provided by the packages.
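The two accuracy metrics can be implemented as below. This is a sketch: we compute the per-run MSE of a fitted signal against the truth (to be averaged over the $N$ replications), and we expose the scaling constant in the definition of $d_H$ as an explicit argument.

```python
import numpy as np

def mse(f_hat, f):
    """Per-run mean squared error between a fitted signal and the truth."""
    f_hat, f = np.asarray(f_hat, float), np.asarray(f, float)
    return float(np.mean((f_hat - f) ** 2))

def scaled_hausdorff(est, true, scale):
    """Two-sided Hausdorff distance between the estimated and true
    change point sets, divided by `scale`."""
    est, true = np.asarray(est, float), np.asarray(true, float)
    d1 = max(np.min(np.abs(est - t)) for t in true)   # true -> nearest estimate
    d2 = max(np.min(np.abs(true - e)) for e in est)   # estimate -> nearest true
    return float(max(d1, d2)) / scale
```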
The results for the PWC and PWL signals are presented in Figures \ref{fig:simul-pc} and \ref{fig:simul-pl}, respectively. For the piecewise constant signal, as shown in Figure \ref{fig:simul-pc}, mPRUTF performs comparably to PELT and SMUCE in terms of the average number of change points, MSE and the scaled Hausdorff distance up to $\sigma=3$, and outperforms them as $\sigma$ increases. For $\sigma \geq 4$, its performance on these measures is similar to that of WBS, NOT and ID. As indicated by the average number of change points, MSE and the scaled Hausdorff distance, WBS, NOT and ID outperform the other methods at almost all noise levels. From a computational point of view, mPRUTF takes slightly longer, mainly due to the matrix $\mbf D$ multiplications; however, its computation time decreases as the noise level $\sigma$ increases. As shown in panel (d) of Figure \ref{fig:simul-pc}, PELT, SMUCE and ID are the fastest methods.
For the piecewise linear signal, mPRUTF is compared only to the NOT and ID methods, which are applicable to piecewise polynomials of order $r \geq 1$. As shown in Figure \ref{fig:simul-pl}, mPRUTF outperforms both NOT and ID in terms of the average number of change points and the scaled Hausdorff distance at all noise levels. In terms of MSE, mPRUTF outperforms the other two for $\sigma \geq 2$. As shown in panel (d) of Figure \ref{fig:simul-pl}, the computation time of mPRUTF ranks second after ID.
The mPRUTF method performs well in estimating the number of change points, their locations, and the true signal. In fact, the simulation results for most scenarios indicate that mPRUTF is among the most competitive change point detection approaches in the literature.
\subsection{Real Data Analysis}
In this section, we analyze the UK HPI, GISTEMP and COVID-19 datasets using our proposed algorithm. Because $\sigma^2$ is unknown for these real datasets, we applied the median absolute deviation (MAD) estimator proposed by \cite{hampel1974influence} to robustly estimate it. More specifically, a MAD estimate of $\sigma$ for piecewise constant signals is given by $\widehat{\sigma}=\textrm{Median}\big(\big|\mbf D^{(1)}\,\mbf y\big| \big) \big/ \big[\sqrt{2}\, \Phi^{-1}(0.75) \big]$ and for piecewise linear signals by $\widehat{\sigma}=\textrm{Median} \big( \big|\mbf D^{(2)}\,\mbf y\big| \big) \big/ \big[\sqrt{6}\,\Phi^{-1}(0.75) \big]$, where $\Phi^{-1}(\cdot)$ denotes the inverse cumulative distribution function of the standard normal distribution.
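The MAD estimates above can be computed directly from the differenced data. The following is a minimal sketch, taking the median of the absolute differences; $\sqrt{2}$ and $\sqrt{6}$ are the $\ell_2$ norms of the difference stencils $(1,-1)$ and $(1,-2,1)$.

```python
import numpy as np
from statistics import NormalDist

def mad_sigma(y, order=1):
    """MAD estimate of the noise level sigma from order-th differences:
    order=1 for piecewise constant signals, order=2 for piecewise linear."""
    d = np.diff(np.asarray(y, float), n=order)
    scale = np.sqrt(2.0) if order == 1 else np.sqrt(6.0)
    # Phi^{-1}(0.75) ~ 0.6745 via the standard normal quantile function
    return float(np.median(np.abs(d)) / (scale * NormalDist().inv_cdf(0.75)))
```

Differencing removes the (locally polynomial) signal, so the estimate is driven by the noise alone away from the change points.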
\begin{example}[UK HPI Data] \label{HPIdata.example}
The UK House Price Index (HPI) is a National Statistic that shows changes in the value of residential properties in England, Scotland, Wales and Northern Ireland. The HPI measures price changes of residential housing by expressing the prices of completed house sale transactions as a percentage change from a specific start date. The UK HPI uses a hedonic regression model as the statistical approach to produce estimates of the change in house prices for each period. For more details, see \url{https://landregistry.data.gov.uk/app/ukhpi}. Many researchers, including \cite{fryzlewicz2018detecting}, \cite{fryzlewicz2018tail} and \cite{anastasiou2019detecting}, have studied the UK HPI dataset in carrying out change point analysis.
In the current study, we consider monthly percentage changes in the UK HPI at Tower Hamlets (an east borough of London) from January 1996 to November 2018.
We have applied the mPRUTF algorithm to the dataset. The algorithm has found five change points, located at December 2002, April 2008 and August 2009 (which may be attributed to the credit crunch and financial crisis), May 2012 (which may be attributed to the London 2012 Summer Olympics) and August 2015 (which may be attributed to regulatory and tax changes, as well as lower net migration from the EU). The dataset, the change points derived by mPRUTF and its piecewise constant fit are presented in panel (a) of Figure \ref{fig:HPI-GISTEMP}.
\end{example}
\begin{figure}[!t]
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/hpi_cpts.jpg}
\caption{UK HPI dataset and its piecewise constant fit.}
\end{subfigure}
\quad
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/gistemp_cpts.jpg}
\caption{GISTEMP dataset and its piecewise linear fit.}
\end{subfigure}
\caption[Detected Change Points Using mPRUTF For UK HPI and GISTEMP Datasets]{The time series and fitted signals for the Tower Hamlets HPI and GISTEMP datasets presented in Examples \ref{HPIdata.example} and \ref{GISTEMPdata.example}.}
\label{fig:HPI-GISTEMP}
\end{figure}
\begin{example} [GISTEMP Data] \label{GISTEMPdata.example}
The Goddard Institute for Space Studies (GISS) monitors broad global changes around the world. The GISS Surface Temperature Analysis (GISTEMP) is an estimate of the global surface temperature changes (see \url{https://www.giss.nasa.gov}). In the analysis of GISTEMP data, the temperature anomalies are used rather than the actual temperatures. A temperature anomaly is a difference from an average or baseline temperature. The baseline temperature is typically computed by averaging thirty or more years of temperature data (1951 to 1980 in the current dataset). A positive anomaly indicates the observed temperature was warmer than the baseline, while a negative anomaly indicates the observed temperature was cooler than the baseline. For more details see \cite{hansen2010global} and \cite{lenssen2019improvements}.
The GISTEMP dataset has been frequently explored in change point literature, for example see \cite{ruggieri2013bayesian}, \cite{james2015change} and \cite{baranowski2019narrowest}.
Panel (b) of Figure \ref{fig:HPI-GISTEMP} displays the monthly land-ocean temperature anomalies recorded from January 1880 to August 2019 (see \url{https://data.giss.nasa.gov/gistemp}). The plot reveals the presence of a linear trend with several change points in the dataset. For this dataset, mPRUTF has identified six change points, located at September 1899, February 1911, May 1929, April 1941, March 1960 and October 1984. The locations of the change points and an estimate of the piecewise linear signal are presented in panel (b) of Figure \ref{fig:HPI-GISTEMP}.
\end{example}
\begin{example}[COVID-19 Data] \label{example:covid19.proj1}
Since the initial outbreak of Novel Coronavirus Disease 2019 (COVID-19) in Wuhan, China, in mid-November 2019, the virus has rapidly spread throughout the world. The pandemic has infected 21.26 million people and killed more than 761,000 (\url{https://covid19.who.int/}), greatly stressing public health systems and adversely influencing global society and economies. Therefore, every country has attempted to slow down the transmission rate through various regional and national policies, such as the declaration of national emergencies, quarantines and mass testing. Of vital interest to governments is understanding the pattern of epidemic growth and assessing the effectiveness of the policies undertaken. These matters can be investigated by analyzing the sequence of infection data for COVID-19. Change point detection is one possible framework for studying the behaviour of COVID-19 infection curves. By detecting the locations of alterations in the curves, change point analysis gives us insights into changes in the transmission rate or the efficiency of interventions. It also enables us to raise warning signals if the disease pattern changes.
For this example, we consider the log-scale cumulative number of confirmed cases for Australia, Canada, the United Kingdom and the United States during the period March 10, 2020 through April 30, 2021. We have applied mPRUTF to detect the change points that have occurred in the data for each country. We then fitted a piecewise linear model to the data using the selected change points, which provides a more direct view of how the growth rate changes over time.
\begin{figure}[!t]
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/aus.jpg}
\end{subfigure}
\quad
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/can.jpg}
\end{subfigure}
\\
\vspace{.6cm}
\\
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/uk.jpg}
\end{subfigure}
\quad
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=1\linewidth]{img/us.jpg}
\end{subfigure}
\caption[Detected Change Points Using mPRUTF For COVID-19 Datasets]{The change point locations and estimated linear trend for the transformed COVID-19 datasets in Example \ref{example:covid19.proj1}. The dates indicated on each plot show the detected change points.}
\label{fig:covid19}
\end{figure}
Figure \ref{fig:covid19} displays the locations of change points detected by the mPRUTF algorithm as well as the estimated linear trends for the four countries. For example, our algorithm has identified eight change points for Canada, on March 26, 2020; April 9, 2020; May 11, 2020; July 14, 2020; August 31, 2020; October 10, 2020; January 12, 2021 and March 18, 2021. The figure shows the segments created by the estimated change points as well as their growth rates. The growth rate for the first segment (from March 10, 2020 to March 26, 2020) is remarkably high, but starts to decline slightly after the first change point on March 26, 2020. This mild decline may be linked to the state of emergency, quarantine measures and international travel ban declared by the Government of Canada. The third segment (from April 9, 2020 to May 11, 2020), the fourth segment (from May 11, 2020 to July 14, 2020) and the fifth segment (from July 14, 2020 to August 31, 2020) witnessed noticeable decreases in the growth rate. The decrease can perhaps be explained by the mandatory use of face coverings and the border closure with the United States for the third segment, and the use of COVID-19 serological tests and national contact tracing for the fourth and fifth segments. An upward trend in the growth rate observed from August 31, 2020 to October 10, 2020 could have resulted from the reopening of businesses and public spaces. It seems that the second wave started on October 10, 2020, with a remarkable increase in the rate that continued until January 12, 2021. After this date, the rate again declined until March 18, 2021, which could be the result of provincial states of emergency and lockdowns. The last segment witnessed another surge in the rate, perhaps due to new variants of the coronavirus.
The mPRUTF algorithm has also detected seven change points for the United Kingdom on the following dates: April 4, 2020; April 28, 2020; May 25, 2020; June 22, 2020; September 9, 2020; November 26, 2020 and February 5, 2021. As can be viewed from the figure, there are remarkable declines in the growth rates for the second segment (perhaps due to the nationwide lockdown), the third segment (perhaps due to the international travel ban) and the segments from May 25, 2020 to September 9, 2020 (perhaps due to mandatory use of face masks and comprehensive contact tracing). The country witnessed a significant increase in the growth rate starting from September 9, 2020, which aligns with the reopening of businesses, schools and universities. The second national lockdown could be linked to the very small decrease in the slope of the segment from November 26, 2020 to February 5, 2021. Finally, the growth rate in the last segment seemed to be under control, which could be the result of COVID vaccinations.
\end{example}
\section{More on Models With Frequent Change Points or With Dependent Errors}
\label{sec:model_misspecification.proj1}
This section empirically investigates the performance of mPRUTF in models with frequent change points as well as models with dependent random errors.
\subsection{mPRUTF in Signals With Frequent Change Points}
\label{discussion:frequent.changepoint.proj1}
In order to evaluate the detection power of mPRUTF in signals with frequent change points, we employ a teeth signal for the piecewise constant case and a wave signal for the piecewise linear case. The teeth signal has 29 change points and varying segment lengths, and is defined as follows:
\begin{itemize}
\item for $1 \leq t \leq 50$, $f_t=0$ if $(t~ \trm{mod}~ 10) \in \{1, \ldots, 5\}$; $f_t=1$, otherwise,
\item for $51 \leq t \leq 150$, $f_t=0$ if $(t~ \trm{mod}~ 20) \in \{1, \ldots, 10\}$; $f_t=1$, otherwise,
\item for $151 \leq t \leq 250$, $f_t=0$ if $(t~ \trm{mod}~ 40) \in \{1, \ldots, 20\}$; $f_t=1$, otherwise,
\item for $251 \leq t \leq 500$, $f_t=0$ if $(t~ \trm{mod}~ 100) \in \{1, \ldots, 50\}$; $f_t=1$, otherwise.
\end{itemize}
The signal is displayed in the top-left panel of Figure \ref{fig:cpts_frequent_changepoint.proj1}.
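For reproducibility, the teeth signal above can be generated with a short script (a sketch in Python; the function and variable names are ours). The values $0$ and $1$ alternate on half-cycles of each modulus, which yields exactly 29 level changes:

```python
import numpy as np

def teeth_signal(n=500):
    """Piecewise constant 'teeth' signal with 29 change points.

    The four regimes use moduli 10, 20, 40 and 100, mirroring the
    itemized definition above: f_t = 0 on the first half of each
    cycle and f_t = 1 on the second half.
    """
    f = np.empty(n)
    regimes = [(1, 50, 10), (51, 150, 20), (151, 250, 40), (251, 500, 100)]
    for start, end, m in regimes:
        for t in range(start, end + 1):
            f[t - 1] = 0.0 if (t % m) in range(1, m // 2 + 1) else 1.0
    return f

f = teeth_signal()
n_changes = int(np.sum(f[1:] != f[:-1]))  # number of change points
```

Counting the positions where consecutive values differ recovers the 29 change points stated in the text.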
\begin{figure}[!b]
\begin{center}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=\linewidth]{img/pwc_frq_cpts.jpg}
\caption{Teeth signal}
\end{subfigure}
\quad
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=\linewidth]{img/pwl_frq_cpts.jpg}
\caption{Wave signal}
\end{subfigure}
\caption[Histograms of the Locations of Change Points For the Teeth and Wave Signals]{Histograms of the locations of change points for the teeth and wave signals. The histograms show the frequencies of the change points detected using mPRUTF in both signals. The results are displayed for two different $\sigma$ values.}
\label{fig:cpts_frequent_changepoint.proj1}
\end{center}
\end{figure}
\noindent The wave signal also has 29 change points with varying slopes and is defined as follows:
\begin{itemize}
\item for $1 \leq t \leq 50$, $f_t=-1+0.4\, t$ if $(t~ \trm{mod}~ 10) \in \{1, \ldots, 5\}$; $f_t=1-0.4\,t$, otherwise,
\item for $51 \leq t \leq 150$, $f_t=-1+0.2\,t$ if $(t~ \trm{mod}~ 20) \in \{1, \ldots, 10\}$; $f_t=1-0.2\,t$, otherwise,
\item for $151 \leq t \leq 250$, $f_t=-1+0.1\,t$ if $(t~ \trm{mod}~ 40) \in \{1, \ldots, 20\}$; $f_t=1-0.1\,t$, otherwise,
\item for $251 \leq t \leq 500$, $f_t=-1+0.04\,t$ if $(t~ \trm{mod}~ 100) \in \{1, \ldots, 50\}$; $f_t=1-0.04\,t$, otherwise.
\end{itemize}
The top-right panel of Figure \ref{fig:cpts_frequent_changepoint.proj1} shows this signal.
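The wave signal can be generated analogously (again a Python sketch with our own helper names); the defining branch, and hence the sign of the slope, switches at exactly 29 positions:

```python
import numpy as np

# (start, end, modulus, slope magnitude) for each regime
REGIMES = [(1, 50, 10, 0.4), (51, 150, 20, 0.2),
           (151, 250, 40, 0.1), (251, 500, 100, 0.04)]

def in_first_half(t):
    """True when t falls in the first half-cycle of its regime's modulus."""
    for start, end, m, _ in REGIMES:
        if start <= t <= end:
            return (t % m) in range(1, m // 2 + 1)

def wave_signal(n=500):
    f = np.empty(n)
    for start, end, m, a in REGIMES:
        for t in range(start, end + 1):
            f[t - 1] = -1.0 + a * t if in_first_half(t) else 1.0 - a * t
    return f

f = wave_signal()
# change points = positions where the defining branch switches
n_changes = sum(in_first_half(t) != in_first_half(t + 1) for t in range(1, 500))
```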
We generated $1000$ independent samples of $y_t$ in model \eqref{fmodel.proj1} with $\varepsilon_t \stackrel{\trm{i.i.d}} {\sim} N(0\, ,\, \sigma^2)$ for both signals. The mPRUTF algorithm was then applied to these samples to estimate the change point locations. Figure \ref{fig:cpts_frequent_changepoint.proj1} shows the histograms of the locations of these change points for both signals. The figure provides evidence that mPRUTF is unable to effectively detect change points in signals with frequent change points and short segments. It also shows that the results deteriorate as the noise variance $\sigma^2$ or the polynomial order $r$ increases.
It turns out that the success of the mPRUTF algorithm critically relies on its stopping rule. Equation \eqref{stop.rule.proj1} shows that estimating the noise variance $\sigma^2$ and specifying the threshold $x_{\alpha}$ from a Gaussian bridge process play crucial roles in the stopping rule. As discussed in \cite{fryzlewicz2018detecting}, the two widely used robust estimators of $\sigma$, the Median Absolute Deviation (MAD, used here) and the Inter-Quartile Range (IQR), overestimate $\sigma$ in frequent change point scenarios. In addition, the accurate determination of the threshold $x_{\alpha}$ using \eqref{thresh.stop.rule} is compromised in such scenarios. These two factors prevent the stopping rule from being effective in the mPRUTF algorithm and lead to an underestimation of the number of change points. We must note that such poor performance in frequent change point scenarios is not specific to mPRUTF. As investigated in \cite{fryzlewicz2018detecting}, state-of-the-art methods such as PELT, WBS, MOSUM, SMUCE and FDRSeg also fail in such scenarios.
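The overestimation of $\sigma$ in frequent change point scenarios is easy to reproduce. The sketch below uses the standard difference-based MAD estimator $\hat\sigma = \mathrm{median}(|y_{t+1}-y_t|)/(\sqrt{2}\,\Phi^{-1}(0.75))$ (our choice of estimator for illustration; the paper does not spell out its exact variant) on a teeth-like mean with segments of length five:

```python
import numpy as np

rng = np.random.default_rng(0)
Z75 = 0.6744897501960817                     # Phi^{-1}(0.75)

def mad_sigma(y):
    """Difference-based MAD estimate of the noise level sigma."""
    return np.median(np.abs(np.diff(y))) / (np.sqrt(2.0) * Z75)

n, sigma = 500, 0.1
# Teeth-like mean: jumps of size 1 every 5 observations, so roughly
# one fifth of the first differences are contaminated by a jump.
mean = np.tile(np.repeat([0.0, 1.0], 5), n // 10)

est_teeth = np.mean([mad_sigma(mean + rng.normal(0.0, sigma, n))
                     for _ in range(200)])
est_clean = np.mean([mad_sigma(rng.normal(0.0, sigma, n))
                     for _ in range(200)])
```

On the constant mean the estimate is nearly unbiased, while on the teeth-like mean it is noticeably inflated, consistent with the discussion above.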
\subsection{mPRUTF in Models With Dependent Error Terms}
\label{discussion:numeric.dependent.proj1}
How is the performance of mPRUTF affected by other types of random errors, such as non-Gaussian or dependent errors? This is an important question and will be the topic of future work. Notice that the dual solution path of trend filtering is not impacted by the type of random errors. However, the type of random errors plays a key role in the stopping rule of mPRUTF, because the stopping rule is built on Gaussian bridge processes established through Donsker's Theorem.
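For intuition about the threshold, recall that the supremum of the absolute value of a standard Brownian bridge follows the Kolmogorov distribution, $P(\sup_t |B(t)| \le x) = 1 - 2\sum_{k\ge 1} (-1)^{k-1} e^{-2k^2x^2}$. A level-$\alpha$ threshold can therefore be obtained by inverting this CDF numerically; the sketch below is illustrative and is not claimed to be the exact $x_\alpha$ of \eqref{thresh.stop.rule}:

```python
import math

def kolmogorov_cdf(x, terms=100):
    """P(sup_t |B(t)| <= x) for a standard Brownian bridge B."""
    if x <= 0:
        return 0.0
    s = sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * x * x)
            for k in range(1, terms + 1))
    return 1.0 - 2.0 * s

def bridge_threshold(alpha, lo=1e-6, hi=10.0):
    """Solve kolmogorov_cdf(x) = 1 - alpha by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if kolmogorov_cdf(mid) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x_alpha = bridge_threshold(0.05)
```

For $\alpha = 0.05$ this recovers the classical Kolmogorov--Smirnov critical value $\approx 1.358$.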
\begin{figure}[!ht]
\begin{center}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{img/ncpts_pc_dep.jpg}
\label{fig:pwc.ncpts.supp.proj1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{img/mse_pc_dep.jpg}
\caption{PWC signal }
\label{fig:pwc.mse.supp.proj1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{img/dh_pc_dep.jpg}
\label{fig:pwc.dh.supp.proj1}
\end{subfigure}
\newline
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{img/ncpts_pc_dep.jpg}
\label{fig:pwl.ncpts.supp.proj1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{img/mse_pc_dep.jpg}
\caption{PWL signal }
\label{fig:pwl.mse.supp.proj1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{img/dh_pc_dep.jpg}
\label{fig:pwl.dh.supp.proj1}
\end{subfigure}
\caption[mPRUTF Results for PWC and PWL in Models With Dependent Random Errors]{The estimated average number of change points, MSEs and Hausdorff distances of various methods for both PWC and PWL signals. The results are based on weakly dependent observations and provided for various values of the error variability $\sigma$.}
\label{fig:simul-pc-pl_dep.proj1}
\end{center}
\end{figure}
To empirically investigate the performance of mPRUTF for weakly dependent random errors, a simulation study is carried out here. To this end, we generate $N=5000$ samples from model \eqref{fmodel.proj1} with the PWC and PWL signals. We consider errors $\varepsilon_i$ from an $AR(1)$ model with $\varepsilon_i = \rho \, \varepsilon_{i-1} + e_i$, for $i=1,\ldots,n$. Here, the $e_i$'s are independent and identically distributed random errors drawn from $N\big(0\, ,\, (1-\rho^2)\, \sigma^2\big)$, with $\rho \in \{-0.75,\, -0.5,\, -0.25,\, 0,\, 0.25,\, 0.5,\, 0.75 \}$ and $\sigma \in \big\{ 0.5,\, 1,\, 1.5,\, \ldots,\, 4.5,\, 5 \big\}$. The results of mPRUTF for both PWC and PWL signals are provided in Figure \ref{fig:simul-pc-pl_dep.proj1}. As can be seen, the results are very similar, in terms of the average number of change points, the MSEs and the scaled Hausdorff distances, across the various values of $\rho$. Therefore, it appears that the mPRUTF algorithm is quite robust against dependent error terms. Extensive studies of mPRUTF for non-Gaussian and dependent random errors will be carried out in future research.
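The AR(1) error process above can be simulated directly; scaling the innovations $e_i$ by $\sqrt{1-\rho^2}\,\sigma$ makes the stationary variance of $\varepsilon_i$ equal to $\sigma^2$, so results are comparable across values of $\rho$. A minimal Python sketch:

```python
import numpy as np

def ar1_errors(n, rho, sigma, rng):
    """AR(1) errors eps_i = rho*eps_{i-1} + e_i with stationary
    variance sigma^2, since Var(e_i) = (1 - rho^2) * sigma^2."""
    e = rng.normal(0.0, np.sqrt(1.0 - rho ** 2) * sigma, n)
    eps = np.empty(n)
    eps[0] = rng.normal(0.0, sigma)          # start in stationarity
    for i in range(1, n):
        eps[i] = rho * eps[i - 1] + e[i]
    return eps

rng = np.random.default_rng(1)
eps = ar1_errors(200_000, rho=0.5, sigma=2.0, rng=rng)
```

With $\rho = 0.5$ and $\sigma = 2$, the sample variance of a long realization is close to $\sigma^2 = 4$ and the lag-one autocorrelation is close to $\rho$.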
| {
"timestamp": "2021-08-02T02:19:10",
"yymm": "2009",
"arxiv_id": "2009.08573",
"language": "en",
"url": "https://arxiv.org/abs/2009.08573",
"abstract": "While many approaches have been proposed for discovering abrupt changes in piecewise constant signals, few methods are available to capture these changes in piecewise polynomial signals. In this paper, we propose a change point detection method, PRUTF, based on trend filtering. By providing a comprehensive dual solution path for trend filtering, PRUTF allows us to discover change points of the underlying signal for either a given value of the regularization parameter or a specific number of steps of the algorithm. We demonstrate that the dual solution path constitutes a Gaussian bridge process that enables us to derive an exact and efficient stopping rule for terminating the search algorithm. We also prove that the estimates produced by this algorithm are asymptotically consistent in pattern recovery. This result holds even in the case of staircases (consecutive change points of the same sign) in the signal. Finally, we investigate the performance of our proposed method for various signals and then compare its performance against some state-of-the-art methods in the context of change point detection. We apply our method to three real-world datasets including the UK House Price Index (HPI), the GISS surface Temperature Analysis (GISTEMP) and the Coronavirus disease (COVID-19) pandemic.",
"subjects": "Methodology (stat.ME)",
"title": "Detection of Change Points in Piecewise Polynomial Signals Using Trend Filtering",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795121840872,
"lm_q2_score": 0.8128673178375734,
"lm_q1q2_score": 0.8022833888297155
} |
https://arxiv.org/abs/2007.11828 | Exponential node clustering at singularities for rational approximation, quadrature, and PDEs | Rational approximations of functions with singularities can converge at a root-exponential rate if the poles are exponentially clustered. We begin by reviewing this effect in minimax, least-squares, and AAA approximations on intervals and complex domains, conformal mapping, and the numerical solution of Laplace, Helmholtz, and biharmonic equations by the "lightning" method. Extensive and wide-ranging numerical experiments are involved. We then present further experiments showing that in all of these applications, it is advantageous to use exponential clustering whose density on a logarithmic scale is not uniform but tapers off linearly to zero near the singularity. We give a theoretical explanation of the tapering effect based on the Hermite contour integral and potential theory, showing that tapering doubles the rate of convergence. Finally we show that related mathematics applies to the relationship between exponential (not tapered) and doubly exponential (tapered) quadrature formulas. Here it is the Gauss--Takahasi--Mori contour integral that comes into play. | \section{\label{sec-intro}Introduction}
Analytic functions can be approximated by polynomials with
exponential convergence, i.e., $\|f-p_n\| = O( \exp(-Cn))$ for
some $C>0$ as $n\to\infty$. Here $n$ is the polynomial degree
and $\|\cdot\|$ is the $\infty$-norm on an approximation domain
$E$, which may be a closed interval of the real axis or more
generally a simply connected compact set in the complex plane.
This result is due to Runge~\cite{runge,walsh} and explains the
exponential convergence of many numerical methods when applied
to analytic functions, including Gauss and Clenshaw--Curtis
quadrature~\cite{gaussCC,atap} and spectral methods for ordinary
and partial differential equations~\cite{smim,series}. It is
also the mathematical basis of Chebfun~\cite{chebfun}.
If $f$ is not analytic in a neighborhood of $E$, then
Bernstein showed in 1912 that exponential convergence of
polynomial approximations is impossible~\cite{bernstein,atap}.
Bernstein also showed that in approximation of functions with derivative
discontinuities such as $f(x) = |x|$ on $[-1,1]$, polynomials can converge
no faster than $O(n^{-1})$~\cite{b14}. Now from the beginning,
going back to Chebyshev in the mid-19th century, approximation
theorists had investigated approximation by rational functions as
well as polynomials. Yet it was not until fifty years after these
works by Bernstein that it was realized that for this problem
of approximating $|x|$ on $[-1,1]$, rational functions
can achieve the much faster rate of {\em root-exponential
convergence,} that is, $\|f-r_n\| = O( \exp(-C\sqrt n\kern
1pt))$ for some $C>0$. This result was published by
Newman in 1964~\cite{newman}, who also showed that faster
convergence is not possible. With hindsight,
it can be seen that the root-exponential effect was implicit in
the results of Chebyshev's student Zolotarev nearly a century
earlier~\cite{gonZ,nf,stahl93,zol}, but this was not noticed.
Newman's theorem has been a great stimulus
to further research in rational approximation
theory~\cite{gonZ,gon67,gonchar,levsaff,safftotik,stahlgeneral,stahl93,vyach}.
It has not, however, had much impact on scientific computing
until very recently with the discovery that it can be the basis of
root-exponentially converging numerical methods for the solution
of partial differential equations (PDE\kern .3pt s) in domains
with corner singularities~\cite{stokes,conf,lightning,PNAS,laplace}.
The aim of this paper is to contribute to building the bridge
between approximation theory and numerical computation.
In particular, we shall focus on the key feature that gives
rational approximations their power: the exponential clustering of
poles near singularities. (The zeros are also
exponentially clustered, typically interlacing the poles, with the
alternating pole-zero configuration serving as proxy for a branch cut.)
This has been a feature of the theory since Newman's explicit
construction. Our aim is, first, to show how widespread this
effect is, not only with minimax approximations (i.e., optimal in
the $\infty$-norm), the focus of most theoretical studies, but
also for other kinds of approximations that may be more useful
in computation. Section~\ref{sec2} explores this effect in a
wide range of applications.
In section~\ref{sec3} we turn to a new contribution of this
paper, the observation that good approximations tend to
make use of poles which, although exponentially clustered,
have a density on a logarithmic scale that tapers to zero at
the endpoint. Specifically, the distances of the clustered
poles to the singularity appear equally spaced when the log of
the distance is plotted against the square root of the index.
We show experimentally that this scaling appears not just with
minimax approximations but more generally.
To explain this effect, we begin with a review in section~\ref{sec4}
of the Hermite contour integral, which is the basis of the application of
potential theory in approximation. We show how
this leads to the idea of condenser capacity for
the analysis of rational approximation of analytic functions.
Section~\ref{sec5} then turns to functions with singularities
and explains the tapering effect.
In this case the condenser is short-circuited, and it is
not possible to estimate the Hermite integral by considering
the $\infty$-norm of the factors of its integrand, but the
$1$-norm gives the required results. Analysis of a model problem
shows how the tapered exponential clustering of
poles enables better overall resolution, potentially doubling
the rate of convergence. These arguments are
related to those developed in the
theoretical approximation theory literature by Stahl and
others~\cite{stahlgeneral,stahl93,stahl}, but we believe that
section~\ref{sec5} of this paper is the first to connect this
theory with numerical analysis.
Finally in section~\ref{sec6} we turn to a different
problem, the quadrature of functions with endpoint
singularities on $[-1,1]$. Here the famous methods are
the exponential (tanh) and double exponential (tanh-sinh)
formulas~\cite{haber,IMT,mori,ms01,oms,sugihara,tm73,tm74,tanaka}.
Making use of the link to another contour integral formula, the
Gauss--Takahasi--Mori integral~\cite{gauss,gaussCC,tm71}, we show
that the distinction between straight and tapered exponential
clustering arises here too.
Throughout the paper, $R_n$ denotes the set of rational functions
of {\em degree~$n$}, that is, functions that can be written as
$r(x) = p(x)/q(x)$ where $p$ and $q$ are polynomials of degree~$n$.
The norm $\|\cdot\|$ is the $\infty$-norm on $E\kern 1pt$,
but, as mentioned above, other measures will come into play in sections~\ref{sec5}
and~\ref{sec6}, and indeed, a theme of our discussion is that
certain aspects of rational approximation are often concealed by
too much focus on the $\infty$-norm.
The numerical experiments in this paper are a major part
of the contribution; we are not aware of comparably detailed studies
elsewhere in the literature.
Our emphasis is on the results, not the algorithms, but our
numerical methods are briefly summarized in the discussion section at the end.
\section{\label{sec2}Root-exponential convergence and exponential clustering of poles}
In this section we explore the convergence
of a variety of rational approximations to analytic functions
with boundary branch point singularities. Our starting point
is Fig.~\ref{fig1}, which presents results for six kinds
of approximations of $f(x) =\sqrt x$ on $[\kern .5pt 0,1]$ by rational
functions of degrees $1\le n \le 20$. (By the substitution $x =
t^2$, this is equivalent to Newman's problem of approximation
of $|t|$ on $[-1,1]$.) The choice of $f$ is not special; as
we shall illustrate in Figs.~\ref{fig2} and~\ref{fig4},
other functions with endpoint singularities give similar results.
First, the big picture. The upper-left image of the figure shows
$\infty$-norm errors $\|f-r_n\|$ plotted on a log scale as
functions not of $n$ but of $\sqrt n$. With the exception of the
erratic case labeled AAA, all the curves plainly approach straight lines
as $n\to\infty$: root-exponential convergence. (The shapes would
be parabolas if we plotted against $n$.) The upper-right image shows
the absolute values of the 20 poles for the approximations with
$n=20$, that is, their distances from the singularity at $x=0$.
On this logarithmic scale the poles are smoothly distributed:
exponential clustering. This clustering is further shown
in the lower images, for the approximation labeled minimax,
by a phase portrait~\cite{wegert} of the square root function
(the standard branch) and its degree 20 rational approximation
after an exponential change of variables.
The top four approximations have preassigned poles, making the
approximation problems linear; indeed the Stenger, trapezoidal,
and Newman approximations are given by explicit formulas.
The AAA and minimax approximations are nonlinear,
with poles determined during the computation.
Although it is tempting to rank these candidates from
worst at the top to best at the bottom (the minimax approximation
is best by definition), this is not the point.
All these approximations converge root-exponentially, and the
differences in efficiency among them amount to constant factors of
order 10, which can in fact be improved in most cases by
introducing a scaling parameter or two. In particular,
minimax and other nonlinear approximations can approximately
double the rate of convergence of
the linear approximations~\cite{rakh}.
All these approximations can achieve accuracy $10^{-6}$
with degrees $n\approx 100$, whereas with polynomials one
needs $n=\hbox{140,085}$.
\begin{figure}
\vskip 1.4em
\begin{center}
\includegraphics[scale=.99]{fig1.eps}
\end{center}
\caption{\label{fig1}Root-exponential convergence of six kinds
of degree $n$ rational approximations
of $f(x) = \sqrt x$ on $[\kern .5pt 0,1]$ as $n\to\infty$.
On the upper-left, the asymptotically straight
lines on this log scale with $\sqrt n$ on the horizontal
axis (except for AAA) show the root-exponential effect.
On the upper-right,
the distances of the poles in $(-\infty,0)$ from the singularity
at $x=0$ show the exponential clustering. Below, phase
portraits in the complex plane
of the square root function (the standard branch)
and its degree 20 minimax approximation on $[\kern .5pt 0,1]$, after an exponential
change of variables, show how a branch cut is approximated
by interlacing exponentially clustered poles and zeros.
Red before yellow going counterclockwise indicates a zero,
and yellow before red indicates a pole.
We use $10^z$ instead of $e^z$ to enable comparison with the axis labels
in the images above.}
\end{figure}
We comment now on the individual approximations of
Fig.~\ref{fig1}. The Newman approximation comes
from the explicit formula presented in his four-page
paper~\cite{newman}. The approximation is\/ $r(x) = {\sqrt x\kern 1pt} (\kern
.7pt p({\sqrt x\kern 1pt})-p(-{\sqrt x\kern 1pt}))/(\kern .7pt p({\sqrt x\kern 1pt})+p(-{\sqrt x\kern 1pt}))$, where $p(t)
= \prod_{k=0}^{2n-1} (t+\xi^k)$ and $\xi = \exp(-1/\sqrt{\kern
.5pt 2n}\kern 1pt )\kern .7pt$; this can be shown to be a rational
function in $x$ of degree $n$. The asymptotic convergence rate is
$\exp(-\sqrt{\kern .5pt 2\kern .3pt n}\kern 1pt )$~\cite{xz}. This
can be improved to approximately $\exp(-(\pi/2)\sqrt{\kern .5pt 2\kern
.3pt n}\kern 1pt )$ by defining $\xi = \exp(-(\pi/2)/\sqrt{\kern
.5pt 2\kern .3pt n}\kern 1pt )$, an example of the scaling
parameters mentioned in the last paragraph (these values are
conjectured to be optimal based on numerical experiments).
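Newman's formula can be evaluated directly. The sketch below is our own Python transcription of the formula just quoted, with the original $\xi = \exp(-1/\sqrt{2n}\,)$; the $\infty$-norm error on $[\kern .5pt 0,1]$ decays root-exponentially:

```python
import numpy as np

def newman(x, n):
    """Newman's degree-n rational approximation of sqrt(x) on [0,1]:
    r(x) = sqrt(x) * (p(s) - p(-s)) / (p(s) + p(-s)), s = sqrt(x),
    with p(t) = prod_{k=0}^{2n-1} (t + xi^k), xi = exp(-1/sqrt(2n))."""
    xi = np.exp(-1.0 / np.sqrt(2.0 * n))
    s = np.sqrt(x)
    powers = xi ** np.arange(2 * n)                   # xi^0, ..., xi^(2n-1)
    p_plus = np.prod(s[:, None] + powers, axis=1)     # p(sqrt(x))
    p_minus = np.prod(-s[:, None] + powers, axis=1)   # p(-sqrt(x))
    return s * (p_plus - p_minus) / (p_plus + p_minus)

x = np.linspace(0.0, 1.0, 2001)
errs = {n: np.max(np.abs(newman(x, n) - np.sqrt(x))) for n in (4, 16)}
```

The error shrinks markedly from $n=4$ to $n=16$, consistent with the asymptotic rate $\exp(-\sqrt{2n}\,)$ quoted above.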
The trapezoidal approximation
originates with Stenger's investigations
of sinc functions and associated
approximations~\cite{stengersurvey,stenger,stengerbook}.
Following p.~211 of~\cite{atap}, we approximate
$\sqrt x$ by starting from the identity
$\sqrt x = {2\kern .3pt x/\pi} \int_0^\infty (t^2 + x)^{-1}dt$,
which with the change of variables $t = e^s$ becomes
\begin{equation}
\sqrt x = {2\kern .3pt x\over \pi} \int_{-\infty}^\infty {e^s ds\over e^{2s} + x}.
\end{equation}
For $n\ge 1$, we approximate this integral by an equispaced
$n$-point trapezoidal rule with step size $h>0$,
\begin{equation}
r(x) = {2h x\over \pi} \sum_{k =-(n-1)/2}^{(n-1)/2}
{e^{kh}\over e^{2kh}+x}.
\label{traprule}
\end{equation}
(If $n$ is even, the values of $k$ are half-integers.) There are
$n$ terms in the sum, so $r$ is a rational function of degree
$n$ with simple poles at the points $p_k = -\exp(2kh)$.
Two sources of error make $r(x)$ differ from $\sqrt x$.
The termination of the sum at $n<\infty$ introduces an
error of the order of $\exp(-nh/2)$, and the finite step
size introduces an error on the order of $\exp(-\pi^2/h)$,
since the integrand is analytic in the strip around the
real $s$-axis of half-width $\pi/2$~\cite[Thm.~5.1]{trap}.
Balancing these errors gives the optimal step size $h \approx
\pi\sqrt{\kern .5pt 2/n}$ and approximation error $\|r - {\sqrt x\kern 1pt} \|
\approx \exp(-\pi\sqrt{n/2}\kern .5pt)$. Note that the poles
for this approximation cluster at $\infty$ as well as at $0$,
and indeed, it converges root-exponentially not just on $[\kern .5pt 0,1]$
but on any interval $[\kern .5pt 0,L]$ with $L>0$.
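Formula \eqref{traprule} with the balanced step size $h = \pi\sqrt{2/n}$ takes only a few lines to test (a sketch; the predicted error for $n=20$ is roughly $\exp(-\pi\sqrt{10}\,) \approx 5\times 10^{-5}$, and constants may shift this somewhat):

```python
import numpy as np

def trap_sqrt(x, n):
    """Degree-n trapezoidal-rule rational approximation of sqrt(x),
    following the sum above with poles at -exp(2kh)."""
    h = np.pi * np.sqrt(2.0 / n)                 # balanced step size
    k = -(n - 1) / 2.0 + np.arange(n)            # half-integers when n is even
    terms = np.exp(k * h) / (np.exp(2.0 * k * h) + x[:, None])
    return (2.0 * h * x / np.pi) * terms.sum(axis=1)

x = np.logspace(-12, 0, 2000)                    # exponentially graded mesh
err5 = np.max(np.abs(trap_sqrt(x, 5) - np.sqrt(x)))
err20 = np.max(np.abs(trap_sqrt(x, 20) - np.sqrt(x)))
```

A quick sanity check of the underlying identity: at $x=1$ the integral equals $\arctan(e^s)\big|_{-\infty}^{\infty} = \pi/2$, so the prefactor $2/\pi$ gives $r(1)\approx 1$ exactly as required.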
The derivation by the trapezoidal rule just given explains in
a general way why root-exponential convergence is achievable
for a wide range of problems with endpoint singularities.
With any exponentially graded discretization, there will be
errors associated with finite grid sizes and errors associated
with truncation of an infinite series. If both sources of error
follow an exponential dependence, then an optimal balance with
step sizes scaling with $1/\sqrt n$ can be expected to lead to a
root-exponential result. Such effects are familiar in the analysis
of $hp$ discretizations of partial differential equations when
the step sizes $h$ and orders $p$ of multiscale discretizations
are balanced to achieve optimal rates of convergence near
corners~\cite{schwab}.
A drawback of the trapezoidal approximation is that its derivation
depends on the precise spacing of the poles, since it relies on
the property that the trapezoidal rule is exponentially accurate
in this special case~\cite{trap}. The curves labeled Stenger in
Fig.~\ref{fig1} come from a more flexible alternative approach,
also proposed by Stenger~\cite{stenger}, where we fix $n$
distinct poles $p_k\in (-\infty,0)$, $1\le k \le n$ and $n+1$
interpolation points $x_k\in [\kern .5pt 0,1]$, $0\le k \le n$, and then
take~$r$ to be the unique rational function of degree $n$ with
these poles that interpolates $f(x)$ in these points. The theory
of rational interpolation with preassigned poles was developed by
Walsh~\cite{walsh} and will be discussed in section~\ref{sec4}.
For our problem of approximation on $[\kern .5pt 0,1]$ with a singularity at
$x=0$, a good choice is to take $x_0 = 0$ and $x_k = -p_k$ for
$k\ge 1$. In particular, our {\em Stenger approximant\/}\footnote{Stenger
considered rational approximations of this kind, though not in
this precise setting of a finite interval with just one endpoint
singularity.} is the rational function $r$ resulting from the
choices \begin{equation} -p_k = x_k = \exp(-(k-1) h) , \quad 1\le
k \le n, \label{stenginterp} \end{equation} with $h = O(1/\sqrt
n\kern 1pt)$. Figure~\ref{fig1} takes $h = \pi /\sqrt n$.
Interpolation is important for theoretical analysis, but for
practical computation, least-squares fitting is more robust
and more accurate, since it does not require knowledge of good
interpolation points. The least-squares data of Fig.~\ref{fig1}
come from fixing the same exponentially clustered poles as in
(\ref{stenginterp}), but now choosing approximation coefficients
by minimizing the least-squares error $f-r$ on a discretization
of $[\kern .5pt 0,1]$ by standard methods (\hbox{MATLAB}\ backslash). As always
when discretizing near singularities, we use an exponentially
graded mesh ({\tt logspace(-12,0,2000})), and a weight function
$w(x) = \sqrt x$ is introduced in the discrete least-squares
problem so that it approximates a uniformly weighted problem on
the continuum. The error curve $r(x) - \sqrt x$ for $x\in [\kern .5pt 0,1]$
for this approximation (not shown) approximately equioscillates
between $n+2$ extrema, indicating that it is a reasonable
approximation to the best $L^\infty$ approximation with these
fixed poles.
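This least-squares construction can be sketched in a few lines of Python (our transcription: NumPy's {\tt lstsq} stands in for MATLAB backslash, and $r(x) = a_0 + \sum_k a_k/(x-p_k)$ is one standard parametrization of a degree-$n$ rational function with the preassigned poles $p_k$):

```python
import numpy as np

n = 20
h = np.pi / np.sqrt(n)
poles = -np.exp(-np.arange(n) * h)          # -p_k = exp(-(k-1)h), k = 1..n
x = np.logspace(-12, 0, 2000)               # exponentially graded mesh
w = np.sqrt(x)                              # least-squares weight

# Design matrix for r(x) = a_0 + sum_k a_k / (x - p_k); rows are
# scaled by the weight so the discrete problem mimics a uniformly
# weighted continuous one on the graded mesh.
A = np.hstack([np.ones((x.size, 1)), 1.0 / (x[:, None] - poles)])
b = w * np.sqrt(x)
coeffs, *_ = np.linalg.lstsq(w[:, None] * A, b, rcond=None)

r = A @ coeffs
err = np.max(w * np.abs(r - np.sqrt(x)))    # weighted sup error on the mesh
```

Despite the severe ill-conditioning of the basis, the SVD-based solve reaches far better accuracy than the fixed-pole conditioning would suggest, the effect exploited throughout the lightning literature.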
The minimax data in Fig.~\ref{fig1} correspond to the true
optimal (real) approximations, rational approximations with free poles.
Here the error curve equioscillates between $2\kern .3pt n+2$
extrema~\cite{atap}, and the error is approximately squared;
the asymptotic convergence rate is $\exp(-\pi\sqrt{\kern .5pt
2\kern .3pt n}\kern 1pt )$~\cite{stahl,vyach}.
Computing minimax approximations, however, can be challenging~\cite{minimax},
and on a complex domain they need not even be unique~\cite{gt}.
This brings us to the data in the figure for AAA (adaptive Antoulas--Anderson)
approximation, a fast method of near-best rational approximation
introduced in~\cite{aaa}. AAA approximation is at
its least robust on real intervals, as reflected in the erratic
data of the figure, but for more complicated problems and in the
complex plane, it is often the most practical method for
rational approximation.
\begin{figure}
\begin{center}
\vskip 1em
\includegraphics[scale=.99]{fig2.eps}
\end{center}
\caption{\label{fig2}Four more minimax approximations, showing
the same root-exponential convergence and exponential
clustering of poles as in Fig.~{\rm\ref{fig1}}. Two involve
the functions $x^{1/\pi}$ and $x\log x$ on $[\kern .5pt 0,1]$, one involves
$x$ on $[\kern .5pt 0,1]$ but with the $\infty$-norm weighted by $x$, and one
involves $\sqrt z$ on the disk about ${\textstyle{1\over 2}}$ of radius ${\textstyle{1\over 2}}$.
In the right image, $n$ takes its final value from the left
image for each problem, $14$ for the weighted
approximation and $20$ for the other cases.}
\end{figure}
This concludes our discussion of Fig.~\ref{fig1}. The next
figure, Fig.~\ref{fig2}, illustrates that these effects are not
confined to approximation on a real interval or to the function
$\sqrt x$. The figure presents data for four further examples of
minimax approximations. One set of curves shows approximation of
$x^{1/\pi}$ on $[\kern .5pt 0,1]$, with the value $1/\pi$ chosen to dispel any
thought that rational exponents might be special. This problem
requires poles particularly close to the singularity since the
exponent is so small. Another shows approximation of $x\log x$
on $[\kern .5pt 0,1]$. With a much weaker singularity, this problem shows
higher approximation accuracy. A third shows approximation of
$\sqrt x$ again, but now it is weighted minimax approximation,
with a weight function $x$ (and the error measured is now the
weighted error, notably smaller than before). Finally the
fourth set of data shows minimax approximation of $\sqrt z$
on the complex disk $\{z {\kern 1pt:\kern 3pt} |z-{\textstyle{1\over 2}}|< {\textstyle{1\over 2}}\}$.
\begin{figure}
\begin{center}
\vskip 1em
\includegraphics[scale=.95]{fig3.eps}
\end{center}
\caption{\label{fig3}The conformal map of a circular pentagon onto
the unit disk has been computed and then approximated numerically
by a rational function of degree $70$~{\rm\cite{conf,conformal}} by the
AAA algorithm. The poles cluster exponentially at the corners, where the
map is singular.}
\end{figure}
Figure~\ref{fig3} turns to our first problem of scientific
computing. Following methods presented in~\cite{conf}
and~\cite{conformal}, a region $E$ in the complex plane bounded
by three line segments and two circular arcs has been conformally
mapped onto the unit disk, and the map has then been approximated
to about eight digits of accuracy by AAA approximation, which
finds a rational function with $n=70$. This process is entirely
adaptive, based on no a priori information about corners or
singularities, yet it clusters the poles near the corners just
as in Figs.~\ref{fig1} and~\ref{fig2}.\ \ Many poles cluster
at the strong singularity {\sf A} and only a few at the weak
singularity {\sf B}.\ \ Note that the poles lie asymptotically on
the bisectors of the external angles. This effect is well known
especially from the theory of Pad\'e approximation as worked out
initially by Stahl~\cite{stahl12,suetin}. Optimal approximations line up their
poles along curves which balance the normal derivatives of a
potential gradient on either side, and evidently the AAA method
comes close enough to optimal for the same effect to appear.
We finish this section with a look at lightning solvers for
PDE\kern .3pt s in two-dimensional domains, introduced in 2019
and applied to date to Laplace~\cite{lightning,PNAS,laplace},
Helmholtz~\cite{PNAS}, and biharmonic equations
(Stokes flow)~\cite{stokes}. In the basic case of a Laplace problem
$\Delta u = 0$, the idea is to represent the solution on a domain
$E$ as $u(z) \approx \hbox{\rm Re\kern .7pt} r(z)$, the real part of a rational
function with no poles in $E$ that approximates the boundary data
to an accuracy typically of 6--10 digits. The rational
functions have preassigned poles that cluster exponentially at
the corners, where the solution will normally have singularities~\cite{lehman,wasow},
and the name ``lightning'' alludes to this exploitation of the same
mathematics that makes lightning strike objects at sharp corners.
Coefficients for the solution are found by least-squares fitting,
making this an approximation process of the same structure as
in the least-squares example of Fig.~\ref{fig1}.\ \ The difference
is that the approximations are now applied to give values of
$u(z)$ in the interior of the domain $E$, where it is not known
a priori. See Fig.~\ref{fig4} for an example on a ``snowflake''
with boundary data $\log|z|$.
\begin{figure}
\begin{center}
\vskip 2em
\includegraphics[scale=.85]{fig4.eps}
\end{center}
\caption{\label{fig4}Example of the lightning Laplace
solver~{\rm\cite{lightning,PNAS}} as implemented
in the code {\tt laplace.m}~{\rm\cite{laplace}}.
For each number of degrees of freedom (DoF),
poles are clustered exponentially near the
$12$ corners of the domain $E$, and the numbers are
increased until a solution to $10$-digit accuracy
is obtained in the form of a rational function
with $480$ poles. This takes $2.3$ s on a laptop, and subsequent
evaluations take $22$ $\mu$s per point, with the accuracy of each
evaluation guaranteed by the maximum principle.}
\end{figure}
Lightning solvers have been generalized to the Helmholtz
equation $\Delta u + k^2u = 0$~\cite{PNAS} and the biharmonic
equation $\Delta^2 u = 0$~\cite{stokes}, as illustrated in
Fig.~\ref{fig5}. In the Helmholtz case, poles $(z-z_k)^{-1}$
of rational functions become singularities of complex Hankel functions
$H_1(k|z-z_k|)\exp(\pm i \kern .7pt \hbox{\rm arg\kern .2pt}(z-z_k))$, and the biharmonic case is handled by the
Goursat reduction $u(z) = \hbox{\rm Im\kern .7pt} (\kern .5pt \overline z f(z) +
g(z))$ to a coupled pair of analytic functions $f$ and $g$,
each of which is approximated by its own rational function.
The mathematics of lightning methods for Helmholtz and biharmonic
problems has not yet been worked out fully, and the analysis
given in section~\ref{sec5} applies just to the Laplace case.
\begin{figure}
\begin{center}
\vskip 10pt
\raisebox{12pt}{\includegraphics[scale=.66]{helmfig.eps}~~~~~~~~~~~~~~~~~}%
\includegraphics[scale=.80]{stokesfig.eps}
\vspace{-5pt}
\end{center}
\caption{\label{fig5}Lightning solvers have been generalized to the
two-dimensional Helmholtz (left)~{\rm\cite{PNAS}} and biharmonic
equations (right)~{\rm\cite{stokes}}.
The Helmholtz image shows a plane wave incident from
the left scattered from
a sound-soft equilateral triangle. The biharmonic image shows
contours of the stream function for Stokes flow in a cavity driven
by a quarter-circular boundary segment rotating at
speed 1 and with zero velocity on the remainder of the boundary.
The black contours in the corners, representing the stream
function value $\psi = 0$, delimit counter-rotating Moffatt vortices.
Tapered exponentially clustered singularities are
used in both computations.}
\end{figure}
Although it is not the purpose of this article to give details
about lightning PDE solvers, they are at the heart of our motivation.
Usually in approximation theory, minimax approximations are
investigated as an end in themselves, and the locations of
their poles may be examined as an outgrowth of this process;
a magnificent example is~\cite{stahl94}. Here, the order is
reversed. Our aim is to exploit an understanding of how poles
cluster to construct approximations on the fly to solve
problems of scientific computing.
\section{\label{sec3}Tapered exponential clustering}
In the last section, 13 plots of the distances of poles
to singularities were presented on a log scale: the right-hand images
of Figs.~\ref{fig1}, \ref{fig2}, and~\ref{fig3}. All showed
exponential clustering, and all but three showed a further effect
which we call {\em tapered exponential clustering\/\kern .4pt},
the main subject of the rest of this paper: on the log scale, the
spacing of the poles grows sparser near the singularity. This was
also colorfully evident in the phase portrait at the bottom
of Fig.~\ref{fig1}. The three exceptions were the Stenger,
least-squares, and trapezoidal approximations of Fig.~\ref{fig1},
all of which are based on poles preassigned with strictly uniform
exponential clustering. These examples illustrate that tapering
of the pole distribution is not necessary for root-exponential
convergence.
A fourth set of data in Fig.~\ref{fig1} also involves preassigned
poles, the Newman data, and some tapering is apparent in this case.
\begin{figure}
\begin{center}
\vskip 15pt
\includegraphics[scale=1]{fig6}
\vspace{-8pt}
\end{center}
\caption{\label{fig6}Tapered exponential clustering of poles
near singularities for the nine examples with free poles
from Figs.~{\rm\ref{fig1}--\ref{fig3}} of the last section.
The crucial feature is that the curves appear straight with
this horizontal
axis marking $\sqrt k$ rather than $k$, where $\{d_k\}$ are
the sorted distances of the poles from the singularities.
The data for the poles at vertex {\sf A} of
Fig.~{\rm \ref{fig3}}
have been deemphasized to diminish clutter (black dots),
since they lie at such a different slope from the others.}
\end{figure}
Figure~\ref{fig6} shows the nine remaining examples of exponential
clustering of poles from Figs.~\ref{fig1}--\ref{fig3},
the ones with free poles,
presenting the distances $\{d_k\}$ of the
poles from their nearest singularities on a log scale. What is
immediately apparent is that all the curves look straight
for smaller values of $k$. Note that five of them stop at $n=20$,
one at $n=14$, and the remaining three, from the approximation
of a conformal map of Fig.~\ref{fig3}, at different values
determined adaptively by the AAA algorithm.
Yet the horizontal axis in Fig.~\ref{fig6} is not $k$ but $\sqrt k$.
Plotted against $k$ (not shown), the data would look completely
different. Evidently in a wide range of rational
approximations, both
best and near-best, the distances $\{d_k\}$ of poles to singularities
are well approximated by the formula
\begin{equation}
\log d_k \approx \alpha + \sigma \sqrt k
\label{model1}
\end{equation}
for some constants $\alpha$ and $\sigma$, that is,
\begin{equation}
d_k \approx \beta\exp(\sigma \sqrt k \kern 1pt)
\label{model2}
\end{equation}
for some $\beta$ and $\sigma$.
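The model (\ref{model1}) is easy to test against computed data: a
least-squares line fit of $\log d_k$ against $\sqrt k$ recovers
$\alpha$ and $\sigma$. The following Python sketch is our own
illustration, applied here to synthetic distances generated from the
model itself; with real data one would pass in the sorted pole
distances.

```python
import math

def fit_taper_model(d):
    """Least-squares fit of log d_k ~ alpha + sigma*sqrt(k), k = 1,...,len(d)."""
    xs = [math.sqrt(k) for k in range(1, len(d) + 1)]
    ys = [math.log(dk) for dk in d]
    m = len(d)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sigma = sxy / sxx                 # slope on the sqrt(k) axis
    alpha = my - sigma * mx           # intercept
    return alpha, sigma

# synthetic distances generated from the model itself
d = [math.exp(-10.0 + 2.5 * math.sqrt(k)) for k in range(1, 21)]
alpha, sigma = fit_taper_model(d)     # recovers alpha = -10, sigma = 2.5
```

A straight-line fit of this kind is how the plots of
Fig.~\ref{fig6} can be summarized by a single slope.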
To make sense of the $\sqrt k$ scaling, let us remove the exponential
from the problem by defining a distance variable $s= \log d$,
thereby transplanting an interval such as $d\in [\kern .5pt 0,1]$ to $s\in
(-\infty, 0\kern .5pt]$. We ask, what can be said of the density
$\rho(s)$ of poles with respect to $s\kern .7pt$? If $\rho(s)$
were constant, this would correspond to a uniform exponential
distribution of poles, requiring an infinite number of poles
since $s$ goes to $-\infty$. So some kind of cutoff of $\rho(s)$
to $0$ must occur as $s\to-\infty$. An abrupt cutoff, as with
the Stenger, trapezoidal, and least-squares distributions of
Fig.~\ref{fig1}, leads to a linear cumulative distribution,
as shown in the left column of Fig.~\ref{relu}. By contrast, a linear
cutoff gives a quadratic cumulative distribution, as shown in the
right column, and when this
is inverted, the result is the $\sqrt k$ distribution we have
observed.
Thus the straight lines of Fig.~\ref{fig6} can be explained
if pole density functions $\rho(s)$ for good rational
approximations tend to take the form
sketched in the upper-right of
Fig.~\ref{relu}. (Aficionados of deep learning may call this
the ``ReLU'' shape.) In section~\ref{sec5}
we will explain why this is the case and continue the story
of Fig.~\ref{relu} in Fig.~\ref{relupot}.
We have not presented data in this section for lightning PDE solutions,
but it was in this context
that we first became aware of the importance of tapered
exponential clustering.
In the course of the work leading to~\cite{lightning},
the first author noticed that
although straight exponential spacing of preassigned poles gave
root-exponential convergence, better efficiency could be achieved
if the resulting approximations
were re-approximated a second time by the AAA algorithm.
On examination it was found that the AAA
approximations had poles in a
tapered distribution, just like cases {\sf A}--{\sf C}
of Fig.~\ref{fig6}. The model
(\ref{model1})--(\ref{model2}) was developed empirically in this
context, with $\sigma\approx 4$ found to be an effective choice.
This became
the formula for preassignment of poles in the lightning Laplace
software~\cite{laplace}, where it improved the overall speed
by a good factor, and it appears as equation (3.6) in~\cite{lightning}.
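For preassigned poles, tapered clustering of this kind takes only a
few lines of code. The sketch below follows the spirit of equation
(3.6) of~\cite{lightning}, placing $n$ poles at distances
$\exp(-\sigma(\sqrt n - \sqrt j\kern 1pt))$ from a corner along a fixed
direction into the exterior; the function name and parameters are
ours, not those of the software~\cite{laplace}.

```python
import math

def tapered_poles(corner, direction, n, sigma=4.0, scale=1.0):
    """Place n poles clustering exponentially toward `corner` along `direction`,
    with the tapered distances d_j = scale*exp(-sigma*(sqrt(n) - sqrt(j)))."""
    direction = direction / abs(direction)   # unit vector into the exterior
    return [corner + scale * math.exp(-sigma * (math.sqrt(n) - math.sqrt(j))) * direction
            for j in range(1, n + 1)]

poles = tapered_poles(corner=0.0, direction=-1.0 + 0.0j, n=30)
dists = [abs(p) for p in poles]   # increasing, and sparser on a log scale near 0
```

The successive ratios $d_{j+1}/d_j = \exp(\sigma(\sqrt{j+1}-\sqrt
j\kern 1pt))$ decrease toward $1$, which is precisely the tapering
effect seen in Fig.~\ref{fig6}.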
\begin{figure}
\vskip 15pt
\begin{center}
\includegraphics[scale=1.2]{relu}
\vspace{-3pt}
\end{center}
\caption{\label{relu}The algebra of exponential clustering.
With respect to the variable $s = \log d$, where $d$ is the distance to the
singularity, the simplest exponential clustering of poles
would have uniform density $\rho(s)$
down to a certain value and then cut off abruptly
(left column). A tapered distribution cuts off linearly instead (right column),
resulting in poles exponentially clustered in
the $\sqrt k$ fashion seen in Fig.~{\rm \ref{fig6}}.}
\end{figure}
\section{\label{sec4}Hermite integral formula and potential theory}
The basic tool for estimating accuracy of rational
approximations is the Hermite integral formula~\cite{levsaff,walsh}.
In this section we review how this
formula leads to the use of potential theory~\cite{ransford}, and
in particular the quantity known as the
condenser capacity, for approximations of analytic functions.
Building on the work of Walsh~\cite{walsh}, these ideas began to
be developed by Gonchar and Rakhmanov in the Soviet Union not long
after the appearance of Newman's paper~\cite{gon67,gonchar}.
The following statement is adapted from Thm.~8.2 of~\cite{walsh}.
\begin{theorem}
Let\/ $\Omega$ be a simply connected domain in ${\bf C}$ bounded
by a closed curve $\Gamma$, and let $f$ be analytic in
$\Omega$ and extend continuously to the boundary.
Let distinct interpolation points $x_0,\dots, x_n\in \Omega$
and poles $p_1,\dots,p_n$ anywhere in
the complex plane be given.
Let $r$ be the unique degree $n$ rational function
with simple poles at $\{p_k\}$ that
interpolates $f$ at $\{x_k\}$. Then for any $x\in\Omega$,
\begin{equation}
f(x) - r(x) = {1\over 2\pi i} \int_\Gamma {\phi(x)\over \phi(t)}
{f(t)\over t-x} \kern 1pt dt,
\label{herm}
\end{equation}
where
\begin{equation}
\phi(z) = \prod_{k=0}^n(z-x_k)\biggl/\kern 1.5pt
\prod_{k=1}^n(z-p_k).
\label{phidef}
\end{equation}
\label{hermthm}
\end{theorem}
\medskip
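Theorem~\ref{hermthm} can be checked numerically. In the sketch below
(our own construction, with an arbitrary choice of $f$, interpolation
points, and poles), $r$ is built as $p/q$ with $q(z) =
\prod_k(z-p_k)$ and $p$ the polynomial interpolating $f\kern .5pt q$
at $\{x_k\}$, and the contour integral (\ref{herm}) is evaluated by
the periodic trapezoidal rule on a circle.

```python
import cmath, math

def prod(vals):
    out = 1.0 + 0.0j
    for v in vals:
        out *= v
    return out

f = lambda z: 1.0 / (3.0 - z)     # analytic for |z| < 3

# interpolation points x_0..x_4 and poles p_1..p_4, all inside Omega
xs = [0.5 * cmath.exp(2j * math.pi * k / 5) for k in range(5)]
ps = [2.0 * cmath.exp(2j * math.pi * (k + 0.5) / 4) for k in range(4)]

def phi(z):                        # the function of equation (phidef)
    return prod(z - xk for xk in xs) / prod(z - pk for pk in ps)

def r(z):
    """Type (4,4) rational interpolant to f at xs with poles at ps:
    r = p/q, q = prod(z - p_k), p = Lagrange interpolant of f*q at xs."""
    qz = prod(z - pk for pk in ps)
    total = 0j
    for k, xk in enumerate(xs):
        term = f(xk) * prod(xk - pk for pk in ps)
        for j, xj in enumerate(xs):
            if j != k:
                term *= (z - xj) / (xk - xj)
        total += term
    return total / qz

def hermite_rhs(x, R=2.5, M=2000):
    """Trapezoidal approximation to the Hermite contour integral with
    Gamma = circle of radius R, inside the region of analyticity of f."""
    total = 0j
    for j in range(M):
        t = R * cmath.exp(2j * math.pi * j / M)
        # dt = i*t*dtheta, and the factor 1/(2*pi*i) folds into the average
        total += phi(x) / phi(t) * f(t) / (t - x) * t
    return total / M
```

With these choices the two sides of (\ref{herm}) agree to many digits,
even though the poles lie well inside $\Gamma$, illustrating the
remark at the end of the next section that the formula is valid for
any placement of the poles.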
To see how this theorem is applied,
let\/ $\Omega$ be a simply connected domain bounded
by a closed curve~$\Gamma$,
as indicated in Fig.~\ref{figpotent} (see also Fig.~9 in
the next section),
and let $f$ be analytic in
$\Omega$ and extend continuously to $\Gamma$.
Suppose $f$ is to be approximated on a
compact set $E\subset \Omega$, which in this section we
take to be disjoint from $\Gamma$.
Theorem~\ref{hermthm} implies that for any $x\in E$,
\begin{equation}
|f(x) - r(x)| \le C \tau,
\label{tauest}
\end{equation}
where $C$ is a constant independent of $n$ and $\tau$ is the ratio
\begin{equation}
\tau = {\max_{z\in E} |\phi(z)|\over
\min_{z\in \Gamma} |\phi(z)|}.
\label{rhodef}
\end{equation}
If $\phi$ is much smaller on $E$ than
on $\Gamma$, then $\tau$ and hence $f-r$ must be small.
\begin{figure}
\begin{center}
\vskip 10pt
\includegraphics[scale=.8]{figpotent.eps}~~~~
\end{center}
\caption{\label{figpotent}Potential theory and
rational approximation. In each image,
the shaded region is an approximation domain $E$ for a function
$f$ analytic in the region $\Omega$ bounded by
$\Gamma$.\ \ If we think of the poles of
an approximation $r\approx f$ as positive point charges
and the interpolation points as negative point charges,
then a minimal-energy equilibrium distribution of the charges
gives a favorable configuration for approximation.
This is a discrete problem of
potential theory that becomes continuous in
the limit $n\to \infty$, enabling one to take advantage of
invariance under conformal maps.
In these images $E$ and\/ $\Gamma$ are
disjoint and the convergence is exponential, but the third
domain and its close-up illustrate the clustering effect,
which will become more pronounced as the gap shrinks to zero. The
pairs of interpolation points and poles marked by hollow dots delimit
one half of the total, highlighting how both sets of points
accumulate close to the singularity.}
\end{figure}
Figure~\ref{figpotent} gives an idea of how this can happen.
In each image,
the red dots on~$\Gamma$ represent a good choice of poles $\{p_k\}$
and the blue dots on the boundary of
$E$ a corresponding good choice of interpolation points $\{x_k\}$.
Consider first the
upper-left image, where $E$ and $\Gamma$ define a circular annulus.
The equispaced configurations of $\{p_k\}$ and $\{x_k\}$
ensure that $\tau$ will decrease exponentially as $n\to\infty$.
To see this, in view of (\ref{phidef}), we define
\begin{equation}
u(z) = n^{-1}\sum_{k=0}^n \log|z-x_k| -
n^{-1}\sum_{k=1}^n\log |z-p_k|.
\end{equation}
This is the potential function generated by $n+1$
negative point charges of strength
$n^{-1}$ at the interpolation points and $n$ positive point
charges of strength $n^{-1}$
at the poles.
Then $\exp(n u(z)) = |\phi(z)|$,
and therefore
\begin{equation}
\tau = \exp(-n \kern 1pt [\kern 1pt \min_{z\in \Gamma} u(z) -
\max_{z\in E} u(z)\kern 1pt]\kern 1pt ).
\end{equation}
For $\tau$ to be small, we want $u$ to be uniformly bigger on
$\Gamma$ than on $E$. Finding the best such configuration is an
extremal problem that will be approximately solved if the points
are placed in an energy-minimizing equilibrium position. In each
of the images of Fig.~\ref{figpotent}, the points are close to
such an equilibrium. Each charge is attracted to the charges of
the other color, but repelled by charges of its own color.
Finding an optimal configuration (for the given
choice of $\Gamma$) is complicated for
finite~$n$, but the problem becomes cleaner
in the limit $n\to \infty$, and this is where
the power of potential theory is fully revealed.
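For the annulus, the exponential decrease of $\tau$ with $n$ can be
observed directly. Here is a sketch; the radii and sample counts are
arbitrary choices of ours.

```python
import cmath, math

def tau(n, a=1.0, b=2.0):
    """The ratio (rhodef): max of |phi| over the circle |z| = a (boundary
    of E) divided by min of |phi| over |z| = b (the contour Gamma), for
    n+1 equispaced interpolation points on |z| = a and n equispaced
    poles on |z| = b."""
    xs = [a * cmath.exp(2j * math.pi * k / (n + 1)) for k in range(n + 1)]
    ps = [b * cmath.exp(2j * math.pi * (k + 0.5) / n) for k in range(n)]
    def logphi(z):                       # log|phi(z)| = n*u(z)
        return (sum(math.log(abs(z - xk)) for xk in xs)
                - sum(math.log(abs(z - pk)) for pk in ps))
    # sample the two circles finely, offset so no sample hits a node
    zE = [a * cmath.exp(2j * math.pi * (k + 0.3) / 720) for k in range(720)]
    zG = [b * cmath.exp(2j * math.pi * (k + 0.3) / 720) for k in range(720)]
    return math.exp(max(logphi(z) for z in zE) - min(logphi(z) for z in zG))
```

With $a=1$ and $b=2$, the computed values of $\tau$ fall roughly like
$(a/b)^n$ as $n$ is doubled, the exponential convergence promised by
the equispaced configuration.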
We now imagine
continua of interpolation points and poles defined by
a signed measure $\mu$ supported on $E$, where it is nonpositive
with total mass $-1$,
and on $\Gamma$, where it is nonnegative with total
mass $1$. It can be shown that
there is a unique measure of this kind that minimizes the energy
\begin{equation}
I(\kern .7pt\mu) = - \int\kern -4pt \int \log|z-t| \kern .7pt d\mu(z) \kern .7pt d\mu(t),
\label{energy}
\end{equation}
with associated potential function
\begin{equation}
u(z) = -\int \log|z-t|\kern .7pt d\mu(t),
\label{potential}
\end{equation}
and $u$ takes constant values $u_E^{} < 0$ on $E$ and $u_\Gamma^{} = 0$ on $\Gamma$.
The minimum $I_{\min{}} = \inf_\mu I(\kern .5pt \mu)$ is known
to be positive, and
for minimax degree $n$ rational approximations $r_n^*$ one has
exponential convergence as $n\to \infty$ at a corresponding rate:
\begin{equation}
\limsup \| f- r_n^*\|^{1/n} \le \exp(-I_{\min{}}).
\label{optrate}
\end{equation}
(The actual rate is in fact twice as fast as this, $\exp(-2\kern .3pt I_{\min{}})$,
for functions whose singularities in the complex plane are
just isolated algebraic branch points~\cite[p.~93]{levsaff},
\cite{stahlgeneral}.)
The reciprocal of $I_{\min{}}$ is known as the {\em condenser capacity}
for the $(E,\Gamma)$ pair, a term that reflects an electrostatic
interpretation of the approximation problem. In electronics,
capacitance is the ratio of charge to voltage difference. A
capacitor has high capacitance if its positive and negative plates
are close to one another, so that the attraction of charges of
opposite sign enables a great deal of charge to be accumulated
on them without the need for much of a voltage difference.
For fast-converging rational approximation, on the other hand,
we want $\Gamma$ and $E$ to be far apart, corresponding to a {\em
small\/} amount of charge relative to the voltage difference,
hence small numbers of poles and interpolation points needed to
achieve a given ratio $\tau$.
We can now see how the second and third images
of Fig.~\ref{figpotent} were drawn. They were obtained by
conformal transplantation, exploiting the invariance of problems
of potential theory under conformal maps. The eccentric domain
of the second image comes from a M\"obius transformation, and the
pinched domain of the third image comes from a further squaring.
The blue and red points obtained as conformal images of equispaced
points in the symmetric annulus are known as Fej\'er--Walsh
points~\cite{starke93}.
One might wonder, for arguments of this kind, is it necessary
to place the poles of\/ $r$ on the boundary of the region of
analyticity of $f\kern .7pt $? In fact, $\Gamma$ does not
have to lie as far out as that boundary, nor do the poles have
to be on $\Gamma$, for as stated in Theorem~\ref{hermthm}, the
integral representation (\ref{herm}) is valid for any placement
of the poles. Asymptotically as $n\to\infty$, however, it is
known that the convergence rate cannot be improved by placing
poles beyond the region of analyticity of $f$~\cite{levsaff}. A
special choice is to put all the poles at $x=\infty$, in which
case rational approximation reduces to polynomial approximation,
still with exponential convergence though at a lower rate than
in~(\ref{optrate}).
\section{\label{sec5}Explanation of tapered exponential clustering}
Now we examine how the analysis of the last section
must change for approximations with singularities.
There is a considerable specialist literature here by
authors including Aptekarev, Saff, Stahl, Suetin, and
Totik~\cite{bt,levsaff,rtw,safftotik,stahl,stahl12,suetin}, which
investigates certain best approximations in detail. Our emphasis
is on the broad ideas applicable to near-best approximations too.
\begin{figure}
\begin{center}
\vskip .2in
\includegraphics{sketchfig.eps}
\end{center}
\caption{\label{sketchfig}Two kinds of
problems of rational approximation of a function $f$ on a
domain $E$. On the left (section~\ref{sec4}), $f$ is analytic on $E$ and poles
can be placed on a contour\/ $\Gamma$ enclosing $E$ in the region of
analyticity:\ convergence is exponential with accuracy on the order
of $\exp(-n\kern .5pt \delta\kern .5pt)$ for a constant
$\delta>0$.
On the right (section~\ref{sec5}), $f$ has a singularity at a point $z_c^{}$
on the boundary of $E$, and
$\Gamma$ must touch $E$ at $z_c^{}$:\ convergence is root-exponential with
accuracy of order $\exp(-n\kern .5pt \delta\kern .5pt)$ again,
but now with~$\delta$ diminishing at the rate $1/\sqrt n$ as $n\to\infty$.
In the circled region, the potential makes the transition from
$u_\Gamma^{}= 0$ to $u_E^{} = -\delta$.}
\end{figure}
From Fig.~\ref{figpotent} it is clear that potential theory
should give some insight when $f$ has a singularity on
the boundary of $E$. The lower pair of images shows clustering
of poles where $\Gamma$ has a cusp close to the boundary of $E$,
and as the cusp is brought closer to $E$, the clustering will
grow more pronounced. However, the argument we have
presented breaks down when $\Gamma$ actually touches $E$.
The situation is sketched
in Fig.~\ref{sketchfig}. Physically, this would be a capacitor
of infinite capacitance, implying that an equipotential distribution
$u$ with a nonzero voltage difference would require an infinite
quantity of charge. Mathematically, the estimate (\ref{tauest})
fails because $\tau$ cannot be smaller than $1$.
To see what happens in such cases, we can examine
the function $\phi$ computed numerically for an example problem.
The left column of Fig.~\ref{figmini} shows error curves in type $(9,10)$
minimax approximation of $\sqrt x$ on $[\kern .5pt 0.01,1]$ (above) and $[\kern .5pt 0,1]$
(below). (Type $(m,n)$ means numerator degree at most $m$
and denominator degree at most $n\kern .3pt$; we choose these
parameters rather than $(n,n)$ to make the plots slightly cleaner.)
The curves each equioscillate between $m+n+2 = 21$ extrema,
and in the lower curve, on the semilogx scale, we see the
wavelength increasing as $x\to 0$. As a minimax approximation
with free poles, this rational function has $m+n+1 = 20$ points
of interpolation rather than the standard number $n = 10$ for
an approximation with preassigned poles, so for the cleanest
display of the potential function $\phi$ in the right column we
have picked out just half of these, marked by the red dots.
\begin{figure}
\begin{center}
\vskip 10pt
\includegraphics[scale=.9]{figmini.eps}~~~~
\end{center}
\caption{\label{figmini}On the left, error curves in type $(9,10)$
minimax approximation of $\sqrt x$ on $[\kern .5pt 0.01,1]$ and $[\kern .5pt 0,1]$.
On the right, plots of $\phi(z)$ as defined
by $(\ref{phidef})$ on these approximation intervals
and on $[-1,0\kern .5pt]$. The curves in the upper-right image show
a reasonable approximation to constant
values on $[-1,0\kern .5pt]$ (upper curve) and on $[\kern .5pt 0.01,1]$ (lower
curve), but in the lower-right image,
nothing like constant behavior of
$|\phi(z)|$ on $[-1,0\kern .5pt]$ is evident. We explain this
by noting that what matters to the accuracy of an approximation
is the integral (\ref{hermagain}) of $|\phi(x)/\phi(t)|$ with
respect to $t\in \Gamma$, not its maximum. Taking
advantage of this property, poles and interpolation points
distribute themselves more sparsely near the singularity,
freeing more of them to contribute to the approximation further away---the
phenomenon of tapered exponential clustering.}
\end{figure}
The right column of Fig.~\ref{figmini} shows the function
$|\phi|$ plotted on the approximation interval
(the lower blue curve) and on the important portion $[-1,0\kern .5pt]$
of the integration contour $\Gamma$ (the upper red curve).
(To be precise, for these plots the numerator of (\ref{phidef})
ranges over just the interpolation points $x_1,\dots, x_n$
marked by red dots.) In the upper image, for $[\kern .5pt 0.01,1]$, the curves
reveal a reasonable approximation to what the last section
has led us to expect from
potential theory. The blue curve has approximately even magnitude,
and this is about five orders of magnitude below the red curve,
also of approximately even magnitude. Thus the ratio $\tau$ of
(\ref{rhodef}) is far below $1$, and the estimate (\ref{tauest})
serves to bound the approximation error. (The actual
error is about the square of this bound since we have omitted
half the interpolation points.)
The lower image, which is a centerpiece of
this paper, tells a strikingly different story. Here again the
blue curve is flat, showing the even dependence on $x$ we expect
in a minimax approximation. The red curve for $|\phi(z)|$ on
$[-1,0\kern .5pt]$, however, is now tilted at an angle on these log-log axes,
showing a steady closing of the gap between the curves as\/ $t$
moves from $-1$ to $0$.
Clearly in this case $[-1,0\kern .5pt]$ is not at all a curve of
constant $|\phi|$.
To understand the linearly closing gap in Fig.~\ref{figmini}, we note
that what fails in the analysis of the
last section for an approximation problem
with a singularity is not the Hermite integral,
\begin{equation}
f(x) - r(x) = {1\over 2\pi i} \int_\Gamma {\phi(x)\over \phi(t)}
{f(t)\over t-x} \kern 1pt dt,
\label{hermagain}
\end{equation}
but the
estimate (\ref{tauest}) we derived from it. Implicitly (\ref{tauest})
came from bounding (\ref{hermagain}) by H\"older's inequality,
\begin{equation}
|f(x) - r(x) | \le {1\over 2\pi}
\left\|{\phi(x)\over \phi(t)}\right\|_\infty
\left\|{f(t)\over t-x}\right\|_\infty \|1\|_1^{},
\label{option1}
\end{equation}
where the $\infty$- and $1$-norms are defined over $t\in \Gamma$.
(The norm $\|1\|_1^{}$ is equal to the length of $\Gamma$.)
When $\Gamma$ and $E$ are disjoint, the first $\infty$-norm in (\ref{option1}) is
exponentially small as $n\to\infty$ and the second is bounded.
However, these properties fail as $\Gamma$ and $E$ touch.
We can rescue the argument
by noting that $|\phi(x)/\phi(t)|$ does not have to be
small for all $t$ so long as its integral is small.
More precisely, the quantity
$f(t)/(t-x)$ of (\ref{option1}) may not be bounded as $t,x\to z_c^{}$ but
$f(t)|\kern .5pt t-z_c^{}|^{1-\alpha} /(t-x)$ will be
bounded if we assume
$f(t) = O(\kern .5pt |\kern .5pt t-z_c^{}|^\alpha)$ as $t\to z_c^{}$ for some constant $\alpha$.
So what actually matters is that the integral of
$|\kern .5pt t-z_c^{}|^{\alpha - 1} |\phi(x)/\phi(t)|$
should be small, and we
accordingly replace (\ref{option1}) by the alternative H\"older estimate
\begin{equation}
|f(x) - r(x) | \le {1\over 2\pi}
\left\|{\phi(x)\over \phi(t) }|\kern .5pt t-z_c^{}|^{\alpha-1}\right\|_1
\left\|{f(t)\over t-x}|\kern .5pt t-z_c^{}|^{1-\alpha}\right\|_\infty.
\label{option3}
\end{equation}
\begin{figure}
\begin{center}
\vskip 18pt
\includegraphics[scale=1.2]{relupot.eps}
\end{center}
\caption{\label{relupot}The potential theory of exponential clustering
(continuation of Fig.~{\rm \ref{relu}}).
The first two rows (right column) show the function $|\phi(t)|$ of $(\ref{phidef})$
and the associated potential $u(t) = n^{-1} \log |\phi(t)|$
for the model problem $(\ref{modelprob0})$--$(\ref{modelprob})$.
The third row shows the behavior along the real $s$-axis after
the change of variables to $s = \log t$; the domain is now
the infinite strip $0 < \hbox{\rm Im\kern .7pt} s < \pi$, with $u=0$ for $\hbox{\rm Im\kern .7pt} s = \pi$.
The final row shows the charge density $\rho(s) = n u_n(s)/\pi$, where
$u_n$ is the normal derivative of $u$ on the boundary of the strip.
The intervals that matter (emphasized by
solid rather than dashed lines)
are $\varepsilon < |t| < 1$ in the $t$ variable
and $\log \varepsilon < \hbox{\rm Re\kern .7pt} s < 0$ in the $s$ variable.
Smaller values of $|t|$ and $s$ contribute
negligibly to the integral {\rm (\ref{hermagain})},
and larger values are far from the singularity.}
\end{figure}
For simplicity let us assume that the singularity
lies at $z_c^{} = 0$ and the
part of the contour $\Gamma$ that matters is $[\kern .5pt 0,1]$, and
let the domain be scaled so that $|\phi(x)|\approx 1$ for $x\in E$.
We want $|\phi(t)|$ to be at most
$1$ for $t<0$ and at least $(t/\varepsilon)^\alpha$ for $t> \varepsilon$,
where $\varepsilon$ is the distance of the closest pole to the singularity.
See the upper-right image of Fig.~\ref{relupot}.
Defining $u(t) = n^{-1} \log |\phi(t)|$ leads us
to the model problem sketched in the image below
this in the figure: find a harmonic function $u(t)$ in
the upper half $t$-plane such that
\begin{equation}
u(t) = \cases{
0, & $t \le \varepsilon$, \cr\noalign{\vskip 3pt}
\alpha \kern .5pt n^{-1}\log(t/\varepsilon) , & $t> \varepsilon$.\cr
}
\label{modelprob0}
\end{equation}
We now make the change of variables $s = \log t$,
which transplants the Laplace problem
to the infinite strip $S = \{s\in{\bf C}:\, 0 <\hbox{\rm Im\kern .7pt} s < \pi\}$, as
sketched in the $(3,2)$ position of the figure:
find a harmonic function $u(s)$ in $S$ satisfying
\begin{equation}
u(s) = \cases{
0, & $\hbox{\rm Im\kern .7pt} s = \pi$, \cr\noalign{\vskip 3pt}
0, & $\hbox{\rm Im\kern .7pt} s = 0$ and $\hbox{\rm Re\kern .7pt} s \le \log \varepsilon$, \cr\noalign{\vskip 3pt}
\alpha \kern .5pt n^{-1}(s-\log\varepsilon), & $\hbox{\rm Im\kern .7pt} s = 0$ and $\hbox{\rm Re\kern .7pt} s > \log \varepsilon$.\cr
}
\label{modelprob}
\end{equation}
This change of variables is convenient mathematically, and it is
also important conceptually, since it is well known
that influences on harmonic functions decay
exponentially with distance along a strip.
Consequently, if $\varepsilon$ is small, the solution to a Laplace problem
for $\log \varepsilon \ll \hbox{\rm Re\kern .7pt} s \ll 0$ will be essentially
(though not exactly) determined by the boundary conditions
in that region.
This just matches what we need for the model problem as posed in
the original~$t$ variable,
where behavior for $|t|$ of order
$\varepsilon$ or less is unimportant because it contributes negligibly to the
integral (\ref{hermagain}) and behavior for $|t|$ of order
$1$ or more is unimportant because it is
far from the singularity under investigation.
So we address our attention to (\ref{modelprob}).
An exact solution can be obtained via the Poisson
integral formula for an infinite strip~\cite{widder},
\begin{equation}
u(x+iy) = {\alpha \kern .5pt n^{-1}\over 2\pi} \int_0^\infty {\xi \sin(y)\over
\,\cosh(\xi - (x-\log \varepsilon)) - \cos(y)\,} \kern 1pt d\xi,
\label{solnexact}
\end{equation}
where we have set $s = x+iy$. However, we do not need exactly this
since the region where our model applies is
$\log \varepsilon \ll \hbox{\rm Re\kern .7pt} s \ll 0$. In this region, the bilinear harmonic function
\begin{equation}
u(x+iy) = \alpha\kern .5pt n^{-1} (1-{y\over \pi}) (x - \log\varepsilon)
\label{solnapprox}
\end{equation}
satisfies the boundary conditions and is accordingly
a good approximation to the solution to~(\ref{modelprob}). The corresponding
pole density distribution on the real $x$ axis is $(n/\pi)$ times
the normal derivative,
\begin{equation}
\rho(s) = - {n\over \pi} {\partial \over \partial y} u(x+iy)
= {\alpha\over \pi^2} (x - \log\varepsilon).
\end{equation}
This linear growth, sketched in the bottom-right image
of Fig.~\ref{relupot}, is just what we set out to explain in
Fig.~\ref{relu}.
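It is easy to confirm that (\ref{solnapprox}) is harmonic and matches
the boundary data of (\ref{modelprob}), and that its normal derivative
reproduces the linear density. A finite-difference check, with
arbitrary parameter values of our own:

```python
import math

alpha, n, logeps = 0.5, 100, -20.0    # arbitrary parameter values

def u(x, y):
    """The bilinear approximate solution (solnapprox) on the strip 0 < y < pi."""
    return alpha / n * (1.0 - y / math.pi) * (x - logeps)

def rho(x):
    """Model pole density (alpha/pi^2)*(x - log eps), i.e. -(n/pi)*du/dy at y = 0."""
    return alpha / math.pi**2 * (x - logeps)
```

Since $u$ is bilinear, the five-point Laplacian vanishes to rounding
error, and the boundary values on the two edges of the strip are
reproduced exactly.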
Let us now look at the quantitative implications of this argument,
comparing uniform exponential clustering (left column of
Fig.~\ref{relupot}) with tapered exponential clustering
(right column).
According to our model, the integral of the solid
portions of the $\rho(s)$ curves in the bottom should be
equal to $n$, the total number of poles.
For uniform clustering the integral is $\alpha\kern .5pt(\log \varepsilon)^2/\pi^2$,
leading to the estimates
\begin{equation}
\hbox{Closest pole:~~} \varepsilon \approx \exp(-\pi \sqrt{n/\alpha}\kern 1pt), \quad
\hbox{Accuracy:~~} \varepsilon^\alpha \approx \exp(-\pi
\sqrt{\alpha\kern .5pt n}\kern 1pt).
\label{est1}
\end{equation}
For tapered clustering the integral is ${1\over 2}\kern 1pt\alpha\kern .5pt(\log \varepsilon)^2/\pi^2$, leading
to the estimates
\begin{equation}
\hbox{Closest pole:~~} \varepsilon \approx \exp(-\pi \sqrt{2\kern .3pt n/\alpha}\kern 1pt), \quad
\hbox{Accuracy:~~} \varepsilon^\alpha \approx \exp(-\pi
\sqrt{2\kern .3pt \alpha\kern .5pt n}\kern 1pt).
\label{est2}
\end{equation}
Thus, as mentioned in the abstract, our model
leads to the prediction of a
factor of 2 speedup with tapered clustering.
It would be interesting to investigate whether, for certain problems,
exactly this ratio could be established theoretically
in the limit $n\to\infty$.
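The bookkeeping behind (\ref{est1}) and (\ref{est2}) can be verified
in a few lines: with $\varepsilon$ as predicted, the model density
integrates to exactly $n$ poles. A sketch, with arbitrary values of
$\alpha$ and $n$ chosen by us:

```python
import math

def density_integral(alpha, logeps, tapered):
    """Integral over [log eps, 0] of the model pole density rho(s):
    rho(s) = (alpha/pi^2)*(s - log eps) for the taper, or that
    expression's value at s = 0 held constant for uniform clustering."""
    if tapered:
        return 0.5 * alpha * logeps**2 / math.pi**2   # area of a triangle
    return alpha * logeps**2 / math.pi**2             # area of a rectangle

alpha, n = 0.5, 200
logeps_uniform = -math.pi * math.sqrt(n / alpha)        # closest pole, (est1)
logeps_tapered = -math.pi * math.sqrt(2.0 * n / alpha)  # closest pole, (est2)
```

Solving ``density integral $= n$'' for $\log\varepsilon$ is exactly
how the two closest-pole estimates arise.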
As an example of a
problem in which we may make such a comparison
numerically, consider Fig.~\ref{fig12}.
These data show $\infty$-norm errors for rational linear-minimax
approximations of even degrees $n$ from 2 to 50 with
preassigned exponentially clustered poles. That is, the
approximations are optimal in the $\infty$-norm among rational
functions in $R_n$ with simple poles at the prescribed points; they
are characterized by error curves equioscillating
between $n+2$ extrema.
The upper curves
correspond to uniformly clustered poles
$p_k = -\exp(-\pi k/\sqrt n)$, $0\le k \le n-1$, and the lower curves
to tapered poles
$p_k = -\exp(\sqrt 2 \pi(\sqrt k-\sqrt n))$, $1\le k \le n$.
The asymptotic errors appear to be about
$\exp(-\sqrt{2.3 n})$ for uniform clustering and
$\exp(-\sqrt{4.7 n})$ for tapered clustering.
With $\alpha = 1/2$ for $f(x) = \sqrt x$, the corresponding
estimates (\ref{est1}) and (\ref{est2}) are
$\exp(-\sqrt{2.2 n})$ and $\exp(-\sqrt{4.4 n})$.
\begin{figure}
\vskip 1.4em
\begin{center}
\includegraphics[scale=.99]{fig12.eps}
\end{center}
\caption{\label{fig12}Linear-minimax approximation of $f(x) =\sqrt x$
on $[\kern .5pt 0,1]$ with preassigned exponentially
clustered poles in $[-1,0\kern .5pt]$, $n = 2,4,\dots, 50$.
Tapering the distribution makes
the convergence rate approximately double, as predicted by the model of
Section~$\ref{sec5}$.}
\end{figure}
Analyses related to the argument we have presented were published
by Stahl for rational minimax approximation of $|x|$
on $[-1,1]$ and $x^\alpha$ on
$[\kern .5pt 0,1]$~\cite{stahl92,stahl93,stahl94,stahl}.
For $x^\alpha$ Stahl gives the result
\begin{equation}
\hbox{Accuracy:~~} \varepsilon^\alpha \approx \exp(-\pi
\sqrt{4\kern .3pt \alpha\kern .5pt n}\kern 1pt),
\label{stahlest2}
\end{equation}
which is not just an estimate but a theorem concerning
the limit $n\to\infty$
(assuming $\alpha$ is not an integer), with precise constants.
This is exactly what one would expect based on (\ref{est2}),
since, as mentioned earlier,
the effective value of $n$ is doubled
in the case of true minimax approximants~\cite{rakh}.
Stahl worked
essentially in the variable $t$ rather
than $s$, so his boundary conditions involved logarithms, as
in the second image of the right column of Fig.~\ref{relupot}.
Whenever one has a Laplace problem with Dirichlet boundary
data, one can interpret it as the problem of finding an
equipotential distribution in the presence of an external
field defined by that boundary data, and this interpretation
has been carried far in approximation theory~\cite{safftotik}.
From this point of view one can say that tapered exponential
clustering results from poles and zeros being slightly pushed
away from a singular point by a logarithmic potential field.
\section{\label{sec6}Exponential and double exponential quadrature}
In this final section we turn to another problem where exponential
clustering appears.
Let $f$ be a continuous function on $[-1,1]$.
We wish to approximate the integral of $f$ by a linear combination
\begin{equation}
I_n = \sum_{k=1}^n w_k f(x_k),
\label{quadsum}
\end{equation}
where $\{x_k\}$ are distinct nodes in $[-1,1]$ and
$\{w_k\}$ are corresponding weights, in such a way that
the accuracy is good even if $f$ has
branch point singularities at the endpoints.
To this end, we introduce a change of variables $g(s)$ from the real
line to $[-1,1]$, so that the integral becomes
\begin{equation}
I = \int_{-1}^1 f(x) \kern .7pt dx = \int_{-\infty}^\infty f(\kern .5pt g(s))
\kern .7pt g'(s) \kern .7pt ds,
\label{cov}
\end{equation}
and we apply the equispaced trapezoidal rule. This involves
an infinity of sample points in principle, but if $g'(s)$ decays
rapidly, we may truncate
these to an $n$-point rule like (\ref{traprule}):
\begin{equation}
I_n = \kern 2pt h \kern -6pt \sum_{k =-(n-1)/2}^{(n-1)/2}
f(\kern .3pt g(kh)) \kern .7pt g'(kh).
\label{DESE}
\end{equation}
Quadrature formulas of this kind were introduced
around 1970 by Mori, Takahasi,
and other Japanese researchers and also in
the analysis of sinc methods by Stenger.
See~\cite{haber,IMT,stengersurvey,stengerbook,tm73,trap},
as well as~\cite{mori} for the history as told by Mori himself.
The standard ``exponential'' choice of $g$ is
\begin{equation}
g(s) = \tanh(s), \quad g'(s) = \hbox{sech}^2(s),
\label{tanh}
\end{equation}
with which (\ref{DESE}) becomes the {\em tanh formula}.
As in section 2, we estimate the truncation error as of order
$\exp(-nh)$ and the discretization error of order $\exp(-\pi^2/h)$.
(The latter could be worse if $f$ has additional singularities
near $(-1,1)$.) This gives a balance $h \approx \pi/\sqrt n$,
with convergence rate of order $\exp(-\pi\sqrt n\kern .7pt )$.
An estimate of this form is valid for any H\"older continuous
branch point singularity; see~\cite[Thm.~3.4]{stengersurvey},
\cite[Thm.~2.1]{tanaka}, and~\cite[Thm.~14.1]{trap}.
Root-exponential convergence! This is much better than any
algebraic order, but for practical applications on one-dimensional
domains, methods of this kind often seem very wasteful, with almost
all the points
being used up in resolving the singularity
(100$\%$ of them, in the limit $n\to \infty$)~\cite{sincfun}.
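The tanh rule takes only a few lines to implement. The following sketch (ours, not the code behind the figures of this article) applies (\ref{DESE}) with the map (\ref{tanh}) to $f(x) = \sqrt{1+x}$, using the balance $h = \pi/\sqrt n$ discussed above:

```python
import numpy as np

def tanh_rule(f, n):
    """Truncated trapezoidal rule after the change of variables x = tanh(s)."""
    h = np.pi / np.sqrt(n)
    k = np.arange(-(n - 1) / 2, (n - 1) / 2 + 1)   # n equispaced indices
    s = k * h
    x = np.tanh(s)                 # nodes, exponentially clustered at +-1
    w = h / np.cosh(s) ** 2        # weights h * g'(kh) = h * sech^2(kh)
    return np.sum(w * f(x))

I_exact = 2 ** 1.5 / 1.5           # integral of sqrt(1+x) over [-1,1]
errs = [abs(tanh_rule(lambda x: np.sqrt(1 + x), n) - I_exact)
        for n in (10, 40, 160)]
```

Consistent with the estimate $O(\exp(-\pi\sqrt n\kern .7pt))$, quadrupling $n$ roughly doubles the number of correct digits.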
A year or two after the first exponential formulas
appeared, it was realized that one can do better with
``double exponential'' formulas.
We focus on the {\em tanh-sinh formula\/} proposed by Takahasi
and Mori in~\cite{tm74} and subsequently used and analyzed
by many others including Okayama, Sugihara, and Tanaka as
well as Bailey and Borwein~\cite{bb,ms01,oms,sugihara,tanaka}.
Here (\ref{tanh}) is replaced by
\begin{equation}
g(s) = \tanh({\textstyle{\pi\over 2}} \sinh(s)), \quad g'(s) =
{\textstyle{\pi\over 2}}\cosh(s)\kern 1.3pt \hbox{sech}^2({\textstyle{\pi\over 2}}\sinh(s)).
\label{tanhsinh}
\end{equation}
Under suitable assumptions we can now estimate the truncation
and discretization errors
as of orders $\exp(-(\pi/2) \exp(nh/2))$ and
$\exp(-\pi^2/h)$. The first of these estimates is
the big improvement, for this quantity can be almost-exponentially
small with a much smaller value of $h$ than before, of order
$\log(n)/ n$ rather than $1/\sqrt n$.
By {\em almost-exponential,} we mean of order\break
$\exp(-C n/\log n)$ for some \hbox{$C>0$}.
With this reduced value of $h$, the second estimate
becomes almost-exponentially small too.
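As a sketch (ours, using the empirical step size $h = 1.2\log(2\pi n)/n$ that is also used for Fig.~\ref{fig13}), the tanh-sinh rule reaches nearly machine precision on $\int_{-1}^1 \sqrt{1+x}\, dx$ already for $n = 40$:

```python
import numpy as np

def tanh_sinh_rule(f, n):
    h = 1.2 * np.log(2 * np.pi * n) / n            # empirical step size
    k = np.arange(-(n - 1) / 2, (n - 1) / 2 + 1)
    s = k * h
    g = np.tanh(0.5 * np.pi * np.sinh(s))          # doubly exponential map
    gp = 0.5 * np.pi * np.cosh(s) / np.cosh(0.5 * np.pi * np.sinh(s)) ** 2
    return np.sum(h * gp * f(g))

I_exact = 2 ** 1.5 / 1.5
err40 = abs(tanh_sinh_rule(lambda x: np.sqrt(1 + x), 40) - I_exact)
```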
\begin{figure}
\begin{center}
\vspace{12pt}
\includegraphics[scale=.96]{fig13.eps}
\vspace{-7pt}
\end{center}
\caption{\label{fig13}
On the left, root-exponential convergence of the tanh quadrature
formula applied to integration of $\sqrt{1+x}$ (note the $\sqrt n$
axis as usual); the tanh-sinh formula converges much faster down
to machine precision.
On the right, the distances of nodes from poles (with a $\sqrt k$
axis) show uniform exponential clustering for the
tanh formula with $n=40$ and tapered exponential clustering for tanh-sinh.}
\end{figure}
Figure~\ref{fig13} shows data for the tanh and tanh-sinh formulas.
(We used the empirical choices $h = \pi/\sqrt n$ and $h = 1.2\log(2\pi n)/n$,
respectively.)
The left image plots $|I_n- I|$ against $\sqrt n$
for $n$ from $1$ to $40$ for the integration
of $f(x) = \sqrt{1+x}$. The tanh curve appears
straight, confirming the root-exponential convergence, and
the tanh-sinh curve bends downward, confirming that
its rate is faster. The unexpected image is on the right, a plot of
distances of the nodes from the endpoint $x=-1$.
For tanh quadrature, these distances are uniformly exponentially spaced,
appearing as a parabola on these axes.
The curve for tanh-sinh quadrature, however, is
almost perfectly straight. It would seem that tanh-sinh quadrature
exploits tapered exponential clustering!
It surprised us when we first saw curves like this.
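The two clustering patterns can be seen directly from the node formulas, without computing any integrals: the gaps between consecutive log-distances are essentially constant for the tanh rule, but grow toward the endpoint for tanh-sinh. A sketch (ours, with the same empirical $h$ values as in Fig.~\ref{fig13}):

```python
import numpy as np

n = 40
k = np.arange(-(n - 1) / 2, (n - 1) / 2 + 1)
h1 = np.pi / np.sqrt(n)                       # tanh rule step size
h2 = 1.2 * np.log(2 * np.pi * n) / n          # tanh-sinh step size

def log_dist(u):
    # log(1 + tanh(u)), evaluated stably even where 1 + tanh(u) underflows
    return np.log(2.0) + u - np.logaddexp(u, -u)

l_tanh = log_dist(k[:10] * h1)                          # ten nodes nearest -1
l_ts = log_dist(0.5 * np.pi * np.sinh(k[:10] * h2))

gaps_tanh = np.diff(l_tanh)   # nearly constant (~2*h1): uniform clustering
gaps_ts = np.diff(l_ts)       # large near the endpoint, shrinking inward: taper
```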
Why is there a resemblance between the tanh and tanh-sinh
quadrature formulas and the phenomena of rational approximation
discussed in the earlier sections of this article?
Some steps toward an answer come from a beautiful connection
introduced by Gauss and exploited by Takahasi and
Mori~\cite{gauss,tm71,gaussCC}: every quadrature formula
can be associated with a rational approximation.
Suppose first that $f$ can be analytically
continued to a neighborhood $\Omega$ of $[-1,1]$ bounded by a
contour $\Gamma$.
Then the integral can be written
\begin{equation}
I = \int_{-1}^1 f(x) \kern .7pt dx = {1\over 2\pi i} \int_\Gamma
f(t) \kern .7pt \varphi(t)\kern .7pt dt,
\label{Iint}
\end{equation}
where the {\em characteristic function} $\varphi$ is defined by
\begin{equation}
\varphi(t) = \int_{-1}^1 {dx\over t-x} = \log{t+1\over t-1}.
\end{equation}
On the other hand the quadrature sum (\ref{quadsum}) can be written
\begin{equation}
I_n = {1\over 2\pi i} \int_\Gamma f(t) \kern .7pt r(t)\kern .7pt dt,
\label{Inint}
\end{equation}
where $r$ is the degree $n$ rational function defined by
\begin{equation}
r(t) = \sum_{k=1}^n {w_k\over t-x_k}.
\label{ratdef}
\end{equation}
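Identity (\ref{Inint}) is the residue theorem applied term by term to (\ref{ratdef}): each node $x_k$ is a simple pole of $r$ enclosed by $\Gamma$, and $f$ is analytic inside $\Gamma$, so by Cauchy's integral formula

```latex
\frac{1}{2\pi i} \int_\Gamma f(t)\, r(t)\, dt
  \,=\, \sum_{k=1}^n \frac{w_k}{2\pi i} \int_\Gamma \frac{f(t)}{t-x_k}\, dt
  \,=\, \sum_{k=1}^n w_k\, f(x_k) \,=\, I_n.
```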
Subtracting (\ref{Inint}) from (\ref{Iint}) gives what we
call the {\em Gauss--Takahasi--Mori (GTM) contour integral,}
\begin{equation}
I - I_n = {1\over 2\pi i} \int_\Gamma^{} f(t) \kern .7pt
(\varphi(t)-r(t)) \kern .7pt dt
\label{rel1}
\end{equation}
and the corresponding error bound
\begin{equation}
| I - I_n | \le {1\over 2\pi}
\|f\|_\infty^{}\kern .7pt \|\varphi - r\|_\infty^{}\kern .7pt \|1\|_1^{},
\label{rel2}
\end{equation}
which we have written in the style of (\ref{option1}),
with the norms defined over $\Gamma$.
Equations (\ref{rel1}) and (\ref{rel2}) relate accuracy of a
quadrature formula to an approximation problem: if the nodes
and weights are such that $\varphi - r$ is small on the boundary
$\Gamma$ of a region where $f$ is analytic, then $|I-I_n|$ must
be small. This reasoning was applied by Takahasi and Mori to
a range of quadrature formulas~\cite{tm71}. Now $\varphi$ is an
analytic function in the extended complex plane minus the segment
$[-1,1]$. It follows that so long as $\Gamma$ is disjoint from
$[-1,1]$, there exist rational approximations to $\varphi$ that
converge exponentially on $\Gamma$ as $n\to\infty$. In particular,
this holds for the rational functions associated with Gauss and
Clenshaw--Curtis quadrature~\cite{gaussCC}, where it is convenient
to take $\Gamma$ in the form of an ellipse about $[-1,1]$ with foci
$\pm 1$. It follows that both these quadrature formulas
converge exponentially as $n\to\infty$ for analytic integrands
(cf.\ \cite[Thm.~19.3]{atap}).
But what if $f$ has endpoint singularities? Now $\Gamma$ must
touch $[-1,1]$ at the endpoints, and (\ref{rel2}) fails just
as (\ref{option1}) did in such a case. In fact, this failure
is more severe, since $\|\varphi - r\|_\infty = \infty$
for any $r$ because of the logarithmic singularities of
$\varphi$.
The last section, however, suggests a solution.
Instead of (\ref{rel2}), we can derive from (\ref{rel1}) the bound
\begin{equation}
| I - I_n | \le {1\over 2\pi} \kern .7pt\|f\|_\infty{}
\|\varphi - r\|_1^{} \kern 1pt .
\label{rel3}
\end{equation}
The switch from the $\infty$- to the $1$-norm changes the
problem of rational approximation of $\varphi$ profoundly.
Since the dominant effects just concern approximation of a logarithmic
singularity near the singular point, the
essential question becomes, how fast can $\log t$ be approximated
by rational functions over both sides of the interval $[-1,0\kern .5pt]$ in
the $1$-norm?
\begin{figure}
\begin{center}
\vspace{12pt}
\includegraphics[scale=1]{fig14.eps}
\vspace{-7pt}
\end{center}
\caption{\label{fig14} Error $|\varphi(t)-r(t)|$ as a function of
distance to the left from $t=-1$ for the tanh and tanh-sinh
approximations with $n=40$ with
$h = \pi/\sqrt n$ and $h = 1.2\log(2\pi n)/n$, respectively.
By symmetry, the same behavior would appear to the right from $t=1$.
Compare Fig.~\ref{figmini}, where the ratio of the blue and
red values in the lower-right image is closely analogous
to the blue curve here.
The $1$-norms of the approximation errors over $[-2,-1]$ are indicated.
The slight irregularities at the left are the result of rounding error.}
\end{figure}
As we did with Fig.~\ref{figmini}, let us get some insight
by looking at the details of the approximation problem.
The rational function (\ref{ratdef}) for the tanh rule is
\begin{equation}
r(t) = \kern 2pt h \kern -6pt
\sum_{k =-(n-1)/2}^{(n-1)/2} {\hbox{sech}^2(kh) \over t - \tanh(kh)},
\label{tanhrule}
\end{equation}
and for the tanh-sinh rule, it is
\begin{equation}
r(t) = \kern 2pt h \kern -6pt
\sum_{k =-(n-1)/2}^{(n-1)/2} {{\textstyle{\pi\over 2}}\kern -.5pt \cosh(kh)
\kern 1.3pt\hbox{sech}^2({\textstyle{\pi\over 2}}\sinh(kh))
\over t - \tanh({\textstyle{\pi\over 2}}\sinh(kh))}.
\label{tanhsinhrule}
\end{equation}
Figure~\ref{fig14} plots $|\kern .5pt \varphi(t)-r(t)|$ for these
two approximations.
For tanh quadrature, we know that $|\kern .5pt
\varphi(t)-r(t)|$ must diverge to $\infty$ as $t\to -1$ because of
the log singularity of $\varphi$ at $t=-1$. Yet the singularity is
so weak that the divergence only shows up as a gentle upward drift
in the blue curve at the left. Over the main part of the plot, $|\kern
.5pt\varphi(t)-r(t)|$ decreases steadily down to around
$10^{-7}$. The $1$-norm, measured here
over $t\in [-2,-1]$, is consequently very small, confirming via
(\ref{rel3}) the high accuracy of this quadrature rule.
As $n\to\infty$, this $1$-norm decays root-exponentially.
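This computation is easy to reproduce. The following sketch (an illustration under the same parameter choices, not the code used for the figure) evaluates (\ref{tanhrule}) with $n = 40$ and estimates $\|\varphi - r\|_1$ over $[-2,-1]$ on a log-spaced grid:

```python
import numpy as np

n = 40
h = np.pi / np.sqrt(n)
k = np.arange(-(n - 1) / 2, (n - 1) / 2 + 1)
xk = np.tanh(k * h)                    # poles of r = quadrature nodes
wk = h / np.cosh(k * h) ** 2           # residues of r = quadrature weights

phi = lambda t: np.log((t + 1) / (t - 1))           # characteristic function
r = lambda t: np.sum(wk / (t[:, None] - xk), axis=1)

# log-spaced grid on [-2, -1): resolves the weak log divergence at t = -1
d = np.logspace(-12, 0, 4000)
t = -1.0 - d
err = np.abs(phi(t) - r(t))
# trapezoidal estimate of the 1-norm of phi - r over [-2, -1]
one_norm = np.sum(0.5 * (err[1:] + err[:-1]) * np.abs(np.diff(t)))
```

The logarithmic blowup near $t = -1$ contributes negligibly to the $1$-norm, which comes out very small despite $\|\varphi - r\|_\infty = \infty$.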
For tanh-sinh quadrature, again no approximation of $\varphi$
is possible in the $\infty$-norm.
In the $1$-norm, however, one might expect that the convergence
will now be almost-exponential.
Indeed, $|\kern .5pt \varphi - r|$ decays almost-exponentially as $n\to\infty$
over any domain bounded away from the singularity.
But the $1$-norm decay over the whole interval is in fact
just root-exponential, as is suggested by the $1$-norm listed
in Fig.~\ref{fig14} being barely smaller than before.
The following reasoning suggests why this must be.
Consider approximation of $f(x) = \log x$ on $[\kern .5pt 0,
1]$. Suppose rational approximations existed with faster than
root-exponential convergence in the $1$-norm. Then by integrating,
we would get rational approximations to $g(x) = x\log x - x$ with
faster than root-exponential convergence in the $\infty$-norm,
which would contradict the evidence of Fig.~\ref{fig2}.
If $\|\kern .7pt \varphi - r\kern .7pt \|_1$ decreases only
root-exponentially as $n\to\infty$, how does the quadrature
formula converge almost-exponentially? It appears that this
depends on additional properties
that go beyond rational approximation, involving
analytic continuation of the integrand onto an infinitely-sheeted
Riemann surface in exponentially small neighborhoods
of the endpoints~\cite{sugihara,tanaka}.
There remains the phenomenon of tapered exponential clustering, so
vividly evident in Fig.~\ref{fig13}. We do not yet have
an explanation for this, nor a view of whether an
approximate $\sqrt k$ dependence
is genuine or just an artifact.
This is a topic for ongoing research, where it would
also be good to investigate the distributions of exponentially
clustered nodes, apparently tapered as well, that arise with the
``universal quadrature'' formulas of Bremer et al.~\cite{brs,serkh}.
\section{Discussion}
\label{sec-discussion}
Exponential clustering of poles at singularities has been part
of the landscape of rational approximation for half a century,
but we believe this is the first study to focus on this effect.
Our motivation is that this clustering is what makes rational
approximations so powerful, and understanding it enables one
to improve existing numerical algorithms and develop new ones.
We find these phenomena fascinating,
especially the tapered clustering effect, and discovering that
tapering also appears in double exponential quadrature
was a bonus. The elucidation of these matters with
the help of a sometimes seemingly endless program of numerical
experiments will forever be associated in our minds with the
Covid-19 shutdowns of 2020.
Here are some details of our computations.
Figures~\ref{fig1}, \ref{fig2}, \ref{fig6} and~\ref{figmini}
made use of the Chebfun {\tt minimax} command~\cite{minimax},
principally due to Silviu Filip, and Filip also kindly provided
us with a modified code for the weighted
minimax approximations of Figs.~\ref{fig2} and~\ref{fig6}.\ \ For
successful results in some of these
problems, we applied a M\"obius
transformation of $[\kern .5pt 0,1]$ to itself to weaken the singularity
while preserving the space $R_n$.
For the approximations of Figs.~\ref{fig2}
and~\ref{fig6} on a complex disk, the AAA-Lawson algorithm was used as
implemented in Chebfun~\cite{chebfun,lawson}, again with a
M\"obius transformation. Figure~\ref{fig3} was produced with
the {\tt confmap} code available at~\cite{laplace}, which in
turn calls {\tt aaa} from Chebfun~\cite{aaa} and {\tt laplace}
from~\cite{laplace}.
The {\tt aaa} code was also used directly
in Figs.~\ref{fig1} and~\ref{fig6},
and {\tt laplace} in Fig.~\ref{fig5}.
The Stokes and Helmholtz results of Fig.~\ref{fig5} were
produced by experimental codes, developed with Abi Gopal and
Pablo Brubeck, respectively, that are not yet
publicly available.
In Fig.~\ref{fig12}, a least-squares problem was extended by
a Lawson iteration (iteratively reweighted least-squares) to
compute minimax approximations with preassigned poles.
All the remaining results are based on straightforward
computations in \hbox{MATLAB}\ and Chebfun.
\begin{acknowledgements}
We have benefited from helpful
advice from Bernd Beckermann, Pablo Brubeck,
Silviu Filip, Abi Gopal, Stefan G\"uttel, Arno Kuijlaars,
Andrei Mart\'inez-Finkelshtein,
Ed Saff, Kirill Serkh, Alex Townsend, and Heather Wilber.
\end{acknowledgements}
| {
"timestamp": "2020-07-24T02:10:49",
"yymm": "2007",
"arxiv_id": "2007.11828",
"language": "en",
"url": "https://arxiv.org/abs/2007.11828",
"abstract": "Rational approximations of functions with singularities can converge at a root-exponential rate if the poles are exponentially clustered. We begin by reviewing this effect in minimax, least-squares, and AAA approximations on intervals and complex domains, conformal mapping, and the numerical solution of Laplace, Helmholtz, and biharmonic equations by the \"lightning\" method. Extensive and wide-ranging numerical experiments are involved. We then present further experiments showing that in all of these applications, it is advantageous to use exponential clustering whose density on a logarithmic scale is not uniform but tapers off linearly to zero near the singularity. We give a theoretical explanation of the tapering effect based on the Hermite contour integral and potential theory, showing that tapering doubles the rate of convergence. Finally we show that related mathematics applies to the relationship between exponential (not tapered) and doubly exponential (tapered) quadrature formulas. Here it is the Gauss--Takahasi--Mori contour integral that comes into play.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Exponential node clustering at singularities for rational approximation, quadrature, and PDEs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795106521339,
"lm_q2_score": 0.8128673155708976,
"lm_q1q2_score": 0.8022833853472782
} |
https://arxiv.org/abs/2002.04664 | Universal Average-Case Optimality of Polyak Momentum | Polyak momentum (PM), also known as the heavy-ball method, is a widely used optimization method that enjoys an asymptotic optimal worst-case complexity on quadratic objectives. However, its remarkable empirical success is not fully explained by this optimality, as the worst-case analysis -- contrary to the average-case -- is not representative of the expected complexity of an algorithm. In this work we establish a novel link between PM and the average-case analysis. Our main contribution is to prove that any optimal average-case method converges in the number of iterations to PM, under mild assumptions. This brings a new perspective on this classical method, showing that PM is asymptotically both worst-case and average-case optimal. | \section{Introduction}
Polyak momentum (PM), also known as the heavy-ball method, is a widely used optimization method. Originally developed to solve linear equations~\citep{frankel1950convergence, rutishauser1959theory}, it was generalized to smooth functions and popularized in the optimization community by Boris Polyak~\citep{polyak1964some,polyak1987introduction}.
This method has seen a renewed interest in recent years, as its stochastic variant which replaces the gradient with a stochastic estimate is effective on deep learning problems~\citep{sutskever2013importance, zhang20202dive}.
PM also enjoys a locally optimal rate of convergence for strongly convex and twice differentiable objectives. As is common in the optimization literature, this optimality is relative to the \emph{worst-case} analysis, which provides complexity bounds for \emph{any} input from a function class, no matter how unlikely.
Despite its widespread use, the worst-case analysis is not representative of the typical behavior of optimization methods. The simplex method, for example, has exponential worst-case complexity, which becomes polynomial when considering the average-case \citep{spielman2004smoothed}.
A more representative analysis of the typical behavior is given by the \emph{average-case} complexity, which averages the algorithm's complexity over all possible inputs.
The average-case analysis is standard for analyzing sorting~\citep{knuth1997art} and cryptography~\citep{katz2014introduction} algorithms, to name a few.
However, little is known about the average-case complexity of optimization algorithms, whose analysis depends on the often unknown probability distribution over the inputs.
The recent work of \citet{pedregosa2020acceleration, lacotte2020optimal}
overcame this dependency on the input probability distribution through the use of random matrix theory techniques.
In the same papers, the authors noticed the convergence of some optimal average-case methods to PM, as the number of iterations grows (see Figure~\ref{fig:convergence_pm}). This is rather surprising given their crucial differences. For instance, average-case optimal methods use knowledge of the full spectral distribution, while PM only requires knowledge of its edges (i.e., smallest and largest eigenvalue).
Since this convergence was only shown for specific methods, it raises the question of whether this is a spurious phenomenon or whether it holds more generally:
\begin{myenv}{Conjecture}
\centering As the number of iterations grows, all average-case optimal methods converge to Polyak momentum.
\vspace{-0.8em}\vphantom{1}
\end{myenv}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{figures/polyak_convergence.pdf}
\caption{{\bfseries Convergence of optimal average-case methods to Polyak Momentum}. For the Marchenko-Pastur and uniform distribution of eigenvalues (left), we construct the method that has optimal average-case complexity and plot the momentum (middle) and step-size (right) parameters. For the two methods considered, the momentum and step-size parameters converge, as the number of iterations grows, to those of Polyak momentum, displayed here as a straight line.
}
\label{fig:convergence_pm}
\end{figure*}
The {\bfseries main contribution} of this paper is to give a positive answer to this conjecture.
The main, but not very restrictive, assumption is that the probability density function of the eigenvalues is non-zero on the interval containing its support.
With this we can show the previously unknown property that PM is \emph{asymptotically} optimal under the average-case analysis, bringing a new perspective on the remarkable empirical performance of this classical method.
Furthermore, this statement is \emph{universal}, i.e., independent of the probability distribution over the inputs.
\subsection{Related work}
This work draws from the fields of optimization, complexity analysis and orthogonal polynomials, of which we comment on the most closely related ideas.
\paragraph{Average-case analysis.}
The average-case analysis has a long history in computer science and numerical analysis. Often it is used to justify the superior performances of algorithms such as Quicksort \citep{Hoare1962Quicksort} and the simplex method in linear programming~\citep{spielman2004smoothed}.
Despite this rich history, it is challenging to transfer these ideas to
continuous optimization, as the notion of a typical continuous optimization problem is ill-defined.
In this context, \citet{pedregosa2020acceleration} derived a framework for the average-case analysis of gradient-based methods and developed algorithms that are non-asymptotically optimal with respect to the average-case.
Such average-case analysis finds applications in various domains. For instance, \citet{lacotte2020optimal} use this framework to derive optimal average-case algorithms to minimize least-squares with random matrix sketching. Prior to this stream of papers, \citet{berthier2018accelerated} use methods based on Jacobi polynomials to design average-case optimal gossip methods, but without generalizing the framework.
In the numerical analysis literature, \citet{deift2019conjugate} have recently developed an average-case complexity analysis of the conjugate gradient method.
\paragraph{Asymptotics of orthonormal polynomials.} A key ingredient of the proof is the asymptotics of orthonormal polynomials. This is a vast subject with applications in stochastic processes~\citep{grenander1958toeplitz}, random matrix theory~\citep{deift1999orthogonal} and numerical integration~\citep{mhaskar1997introduction}, to name a few. The monograph of \citet{lubinsky2000asymptotics} discusses all results used in this paper.
\paragraph{Notation.} Throughout the paper we denote vectors in lowercase boldface (${\boldsymbol x}$), matrices in uppercase boldface letters ($\boldsymbol H$), and polynomials in uppercase Latin letters ($P, Q$). We will sometimes omit the integration variable, with the understanding that $\int \varphi \mathop{}\!\mathrm{d} \mu$ is a shorthand for $\int \varphi(\lambda) \mathop{}\!\mathrm{d}\mu(\lambda)$.
\section{Average-Case Analysis of Gradient-Based Methods}\label{scs:average_analysis}
The goal of the average-case analysis is to quantify the expected error ${\mathbb{E}\,} \|{\boldsymbol x}_t - {\boldsymbol x}^\star\|^2$, where ${\boldsymbol x}_t$ is the $t$-th update of some optimization method and the expectation is taken over all possible problem instances. To make this analysis tractable, and following \citep{pedregosa2020acceleration}, we consider quadratic optimization problems of the form
\begin{empheq}[box=\mybluebox]{equation*}\tag{OPT}\label{eq:quad_optim}
\vphantom{\sum_0^i}\min_{{\boldsymbol x} \in {\mathbb R}^d} \Big\{ f({\boldsymbol x}) \stackrel{\text{def}}{=}\!\mfrac{1}{2}({\boldsymbol x}\!-\!{\boldsymbol x}^\star)^\top\!{\boldsymbol H}({\boldsymbol x}\!-\!{\boldsymbol x}^\star) \Big\},
\end{empheq}
where ${\boldsymbol H} \in {\mathbb R}^{d \times d}$ is a \textit{random} symmetric positive-definite matrix and ${\boldsymbol x}^\star$ is a \textit{random} $d$-dimensional vector which is a solution of \eqref{eq:quad_optim}.
\begin{remark} Problem \eqref{eq:quad_optim} subsumes the quadratic minimization problem $\min_{{\boldsymbol x}} {\boldsymbol x}^\top{\boldsymbol H}{\boldsymbol x} + {\boldsymbol b}^\top {\boldsymbol x} + c$ but the notation above will be more convenient for our purposes.
\end{remark}
\begin{remark}
The expectation in ${\mathbb{E}\,} \|{\boldsymbol x}_t - {\boldsymbol x}^\star\|^2$ is over the inputs and not over any randomness of the algorithm, as is common in the stochastic literature. In this paper we only consider deterministic algorithms.
\end{remark}
We consider in this paper the class of \textit{first order methods}, which build ${\boldsymbol x}_t$ using a pre-defined linear combination of an initial guess and previous gradients:
\begin{equation}
{\boldsymbol x}_{t} \in {\boldsymbol x}_0 + \textbf{span}\{\nabla f({\boldsymbol x}_0),\;\ldots,\;\nabla f({\boldsymbol x}_t)\}. \label{eq:first_order_definition}
\end{equation}
This wide class includes most gradient-based optimization methods, such as gradient descent and momentum. However, it excludes quasi-Newton methods, preconditioned gradient descent, and Adam (to cite a few), as the preconditioning allows the iterates to leave this span.
\subsection{Tools of the trade: orthogonal polynomials and spectral densities}
Average-case optimal methods rely on two key concepts that we now introduce: \textit{residual orthogonal polynomials} and the \textit{expected spectral distribution}.
\subsubsection{Orthogonal (residual) polynomials}
This section defines orthogonal polynomials and residual polynomials.
\begin{definition} \label{def:orthogonal_polynomial}
Let $\alpha$ be a non-decreasing function such that $\int Q\mathop{}\!\mathrm{d}\alpha$ is finite for all polynomials $Q$.
We will say that the sequence of polynomials $P_0, P_1, \ldots$ is orthogonal with respect to $\mathop{}\!\mathrm{d}\alpha$ if $P_i$ has degree $i$ and
\begin{equation}\label{eq:optimal_orthogonal_polynomials}
\int_{\mathbb{R}} P_i\,P_j\mathop{}\!\mathrm{d}\alpha \begin{cases}
= 0 & \text{if } i\neq j \\
> 0 & \text{if } i = j
\end{cases}.
\end{equation}
Furthermore, if they verify $P_i(0) = 1$ for all $i$, we call these {\bfseries residual orthogonal} polynomials.
\end{definition}
Residual orthogonal polynomials verify a three-term recurrence \citep[\S 2.4]{fischer1996polynomial}, that is, there exists a sequence of real values $a_t, b_t$ such that
\begin{equation}\label{eq:recurence_orthogonal_polynomials}
P_{t}(\lambda)
= (a_t + b_t \lambda) P_{t-1}(\lambda) + (1-a_{t})P_{t-2}(\lambda)\,,
\end{equation}
where $P_0(\lambda) = 1$ and $P_1(\lambda) = 1+b_1\lambda$\,.
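As an illustration (ours, with an arbitrary discrete measure), residual orthogonal polynomials can be built by Gram--Schmidt orthogonalization of the monomials followed by the normalization $P_i(0) = 1$, and the defining property \eqref{eq:optimal_orthogonal_polynomials} checked directly:

```python
import numpy as np

lam = np.linspace(0.1, 2.0, 200)        # support of a discrete measure d(alpha)
w = np.full(lam.size, 1.0 / lam.size)   # uniform positive weights

def inner(p, q):                        # <p, q> = int p*q d(alpha)
    return np.sum(w * np.polyval(p, lam) * np.polyval(q, lam))

polys = []
for i in range(5):
    p = np.zeros(i + 1)
    p[0] = 1.0                          # monomial lambda^i
    for q in polys:                     # Gram-Schmidt against lower degrees
        p = np.polysub(p, (inner(p, q) / inner(q, q)) * q)
    p = p / np.polyval(p, 0.0)          # residual normalization: P_i(0) = 1
    polys.append(p)

# largest off-diagonal inner product; should vanish up to round-off
off = max(abs(inner(polys[i], polys[j]))
          for i in range(5) for j in range(5) if i != j)
```

Since the measure is supported on $(0,\infty)$, the roots of each $P_i$ are positive, so $P_i(0) \neq 0$ and the normalization is well defined.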
\subsubsection{Expected spectral distribution}
The expected spectral distribution and the extreme eigenvalues of the matrix ${\boldsymbol H}$ play similar roles in the case of, respectively, average-case and worst-case optimal methods. They measure the problem's difficulty and define the optimal method's parameters.
\begin{definition}[Empirical/Expected Spectral Measure]
Let ${\boldsymbol H}$ be a random matrix with eigenvalues $\{\lambda_1, \ldots, \lambda_d\}$. The {\bfseries empirical spectral measure} of ${\boldsymbol H}$, called ${\mu}_{{\boldsymbol H}}$, is the probability measure
\begin{equation}\label{eq:empirical_spectral_density}
\mu_{{\boldsymbol H}}(\lambda) \stackrel{\text{def}}{=} {\textstyle{\frac{1}{d}\sum_{i=1}^d}} \delta_{\lambda_i}(\lambda) ~,
\end{equation}
where $\delta_{\lambda_i}$ is the Dirac delta, a distribution equal to zero everywhere except at $\lambda_i$ and whose integral over the entire real line is equal to one.
Since ${\boldsymbol H}$ is random, the empirical spectral measure $\mu_{{\boldsymbol H}}$ is a random measure. Its expectation over ${\boldsymbol H}$,
\begin{equation}
\mu \stackrel{\text{def}}{=} {\mathbb{E}\,}_{{\boldsymbol H}}[\mu_{{\boldsymbol H}}]\,,
\end{equation}
is called the {\bfseries expected spectral distribution}.
\end{definition}
\begin{example}[Marchenko-Pastur density and large least squares problems]
Consider a matrix ${\boldsymbol A} \in {\mathbb R}^{n \times d}$, where each entry is an iid random variable with mean zero and variance $\sigma^2$. Then it is known that the expected spectral distribution of ${\boldsymbol H} = \frac{1}{n}{\boldsymbol A}^\top\!{\boldsymbol A}$ converges to the Marchenko-Pastur distribution \citep{marchenko1967distribution} as $n, d \to \infty$ with the ratio $d / n$ converging to a finite constant $r$.
The Marchenko-Pastur distribution $\dif\mu_{\mathrm{MP}}$ is defined as
\begin{equation} \label{eq:mp_distribution}
\max\{1 - \tfrac{1}{r}, 0\}\delta_0(\lambda) + \frac{\sqrt{(L - \lambda)(\lambda - \ell)}}{2 \pi \sigma^2 r \lambda}1_{\lambda \in [\ell, L]}\,.
\end{equation}
Here $\ell\stackrel{\text{def}}{=} \sigma^2(1 - \sqrt{r})^2$, $L \stackrel{\text{def}}{=} \sigma^2(1 + \sqrt{r})^2$ are the extreme nonzero eigenvalues, $\delta_0$ is a Dirac delta at zero (which disappears if $r \geq 1$) and $1_{\lambda \in [\ell, L]}$ is a rectangular window function, equal to 1 for $\lambda \in [\ell, L]$ and 0 elsewhere.
\end{example}
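The convergence toward the Marchenko-Pastur law is easy to observe numerically. A sketch (ours, with illustrative sizes $n = 4000$, $d = 1000$, so $r = 1/4$) checks the predicted spectral edges $\ell$ and $L$ and the mean eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 4000, 1000, 1.0                    # ratio r = d/n = 0.25
A = rng.standard_normal((n, d)) * sigma
eigs = np.linalg.eigvalsh(A.T @ A / n)           # spectrum of H = A^T A / n

r = d / n
ell = sigma ** 2 * (1 - np.sqrt(r)) ** 2         # predicted left edge  = 0.25
L = sigma ** 2 * (1 + np.sqrt(r)) ** 2           # predicted right edge = 2.25
edge_err = max(abs(eigs.min() - ell), abs(eigs.max() - L))
```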
\subsection{Average-case optimal methods}
With these two ingredients, we can construct the method with optimal average-case complexity. We first rewrite the expected error as an integral whose weight function is the expected spectral density $\mu$.
\begin{theorem}\label{thm:pedregosa_rate}\citep{pedregosa2020acceleration}
Assume ${\boldsymbol x}_0$, ${\boldsymbol x}^{\star}$ are random variables independent of ${\boldsymbol H}$, satisfying $\mathbb{E}[({\boldsymbol x}_0-{\boldsymbol x}^\star)({\boldsymbol x}_0-{\boldsymbol x}^\star)^\top] = R^2\boldsymbol{I}$. Let ${\boldsymbol x}_t$ be generated by a first-order method, associated to the polynomial $P_t$. Then the expected error at iteration $t$ reads
\begin{empheq}{equation}\label{eq:error_norm_x}
\vphantom{\sum_0^i}\mathbb{E}\|{\boldsymbol x}_t-{\boldsymbol x}^\star\|^2 = {\overbrace{R^2\vphantom{R_t}}^{\text{initialization}}}\int_{\mathbb R} {\underbrace{P_t^2}_{\text{algorithm}}} {\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{problem}}}
\,.
\end{empheq}
\end{theorem}
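The identity can be checked numerically for a fixed matrix (so that $\mu = \mu_{\boldsymbol H}$) and for gradient descent with step-size $\gamma$, whose residual polynomial is $P_t(\lambda) = (1-\gamma\lambda)^t$. Normalizing by ${\mathbb{E}\,}\|{\boldsymbol x}_0 - {\boldsymbol x}^\star\|^2$ removes any convention for $R^2$. A sketch (ours, with an illustrative spectrum):

```python
import numpy as np

rng = np.random.default_rng(3)
d, t_iter, gamma = 300, 10, 0.18
lam = rng.uniform(0.5, 10.0, d)                  # fixed spectrum
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
H = (Q * lam) @ Q.T                              # fixed H = Q diag(lam) Q^T

# gradient descent has residual polynomial P_t(lambda) = (1 - gamma*lambda)^t
predicted = np.mean((1 - gamma * lam) ** (2 * t_iter))   # int P_t^2 dmu_H

trials = 1000
Z = rng.standard_normal((trials, d))             # rows: samples of x_0 - x_star
for _ in range(t_iter):
    Z = Z - gamma * (Z @ H)                      # (I - gamma H)(x_k - x_star)
# ratio E||x_t - x*||^2 / E||x_0 - x*||^2; note E||x_0 - x*||^2 = d here
observed = np.mean(np.sum(Z ** 2, axis=1)) / d
```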
The optimal first order method is obtained by minimizing the above identity over the space of residual polynomials of degree~$t$. This turns out to be equivalent to finding a sequence of residual polynomials $\{P_i\}$ orthogonal w.r.t. the weight function $\lambda \mu(\lambda)$, as shown in the following theorem.
\begin{theorem}\label{thm:optimal_polynomial}
\citep{pedregosa2020acceleration}
Let $a_t$ and $b_t$ be the coefficients of the three-term recurrence \eqref{eq:recurence_orthogonal_polynomials} for the sequence of residual polynomials orthogonal w.r.t. $\lambda \mathop{}\!\mathrm{d}\mu(\lambda)$.
Then the following method has optimal average-case complexity over the class of problems \eqref{eq:quad_optim}:\footnote{Throughout the paper, we will color-code {\color{colormomentum} momentum} and {\color{colorstepsize}step-size} parameters.}
\begin{align} \label{eq:optimal_algorithm}
&{\boldsymbol x}_1 = {\boldsymbol x}_0 + {\color{colorstepsize}b_1} \nabla f({\boldsymbol x}_0),\\
&{\boldsymbol x}_{t} = {\boldsymbol x}_{t-1} + {\color{colormomentum}(a_t-1)}({\boldsymbol x}_{t-1}-{\boldsymbol x}_{t-2}) + {\color{colorstepsize}b_t} \nabla f({\boldsymbol x}_{t-1})\nonumber\,.
\end{align}
\end{theorem}
Due to the dependency of the coefficients $a_t, b_t$ on the expected spectral distribution, equation \eqref{eq:optimal_algorithm} does not represent a single scheme, but rather a family of algorithms: each expected spectral distribution generates a different optimal method. Below is an example of such an optimal algorithm, associated with the Marchenko-Pastur expected spectral distribution.
\begin{example}[Marchenko-Pastur acceleration]\label{ex:mp}
Let $\mathop{}\!\mathrm{d}\mu$ be the density associated with the Marchenko-Pastur distribution. Then, the recurrence of the optimal average-case method associated with this distribution is
\begin{equation*}\label{algo:mp_algo}
\begin{split}
&\rho = \mfrac{1+r}{\sqrt{r}}\,,~\delta_{0} = 0;\\
& {\boldsymbol x}_1 = {\boldsymbol x}_0-{\color{colorstepsize}\mfrac{1}{(1+r)\sigma^2}}\nabla f({\boldsymbol x}_0)\,;~\\
&\delta_{t} = -({\rho+\delta_{t-1}})^{-1}\,; \\
&{\boldsymbol x}_{t} = {\boldsymbol x}_{t-1} + {\color{colormomentum}\left(1 + \rho\delta_t\right)}({\boldsymbol x}_{t-2} - {\boldsymbol x}_{t-1}) + {\color{colorstepsize}\mfrac{ \delta_t}{\sigma^2\sqrt{r}}}\nabla f({\boldsymbol x}_{t-1})\,.
\end{split}
\end{equation*}
The coefficients come from the orthogonal polynomials w.r.t. $\lambda \mathop{}\!\mathrm{d}\mu(\lambda)$, which are shifted Chebyshev polynomials of the second kind.
\end{example}
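The recurrence above can be checked numerically. The sketch below is our own illustration, not code from the paper: the problem sizes, $\sigma = 1$, and the clipping of finite-size spectral outliers to the asymptotic Marchenko-Pastur support are all assumptions made for the sake of a clean test problem.

```python
import numpy as np

# Hypothetical test problem: f(x) = 0.5 (x - x*)^T H (x - x*), where the
# spectrum of H follows the Marchenko-Pastur law with ratio r = d/n.
rng = np.random.default_rng(0)
n, d, sigma = 2000, 1000, 1.0
r = d / n
A = rng.normal(size=(n, d)) / np.sqrt(n)
edges = ((1 - np.sqrt(r))**2, (1 + np.sqrt(r))**2)    # MP support [ell, L]
eigs = np.clip(np.linalg.eigvalsh(A.T @ A), *edges)   # clip finite-n outliers
x_star = rng.normal(size=d)
grad = lambda x: sigma**2 * eigs * (x - x_star)       # H taken diagonal here

# Marchenko-Pastur accelerated recurrence from the example above.
rho = (1 + r) / np.sqrt(r)
x_prev = np.zeros(d)                                  # x_0
x = x_prev - grad(x_prev) / ((1 + r) * sigma**2)      # x_1
delta = -1.0 / rho                # delta_1, consistent with the x_1 step
for t in range(2, 201):
    delta = -1.0 / (rho + delta)                      # delta_t
    x, x_prev = (x + (1 + rho * delta) * (x_prev - x)
                 + delta / (sigma**2 * np.sqrt(r)) * grad(x)), x

err0 = np.linalg.norm(x_star)     # ||x_0 - x*|| with x_0 = 0
errT = np.linalg.norm(x - x_star)
```

After a couple hundred iterations the error is driven far below its initial value, and the coefficients $1 + \rho\delta_t$ and $\delta_t/(\sigma^2\sqrt{r})$ have visibly stabilized, foreshadowing the main result of the next section.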
\subsection{Polyak Momentum and worst-case optimality}
The Polyak momentum algorithm~\citep{polyak1964some}
has an optimal worst-case convergence rate over the class of first order methods with constant coefficients~\citep{polyak1987introduction, scieur2017integration}.
The method requires knowledge of the smallest and largest eigenvalue of the Hessian ${\boldsymbol H}$ (denoted $\ell$ and $L$ respectively) and iterates as follows:
\begin{align}\label{algo:pm_algo}\tag{PM}
&{\boldsymbol x}_1 = {\boldsymbol x}_0 - {\color{colorstepsize}\mfrac{2}{L + \ell}}\nabla f({\boldsymbol x}_0)\\
&{\boldsymbol x}_{t+1} = {\boldsymbol x}_t + {\color{colormomentum}\textstyle{\Big(\frac{\sqrt{L}-\sqrt{\ell}}{\sqrt{L}+\sqrt{\ell}}\Big)^2}}({\boldsymbol x}_{t} - {\boldsymbol x}_{t-1}) - {\color{colorstepsize} \textstyle\Big(\frac{ 2}{\sqrt{L}+\sqrt{\ell}}\Big)^2}\nabla f({\boldsymbol x}_t)\nonumber
\end{align}
\vspace{0.5em}\begin{remark} Unlike the Marchenko-Pastur accelerated method of Example~\ref{ex:mp}, the coefficients of this method are constant across iterations. Furthermore, these coefficients only depend on the edges of the spectral distribution and not on the full density.
\end{remark}
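As a quick numerical illustration of \eqref{algo:pm_algo} (our own toy example, not from the paper), one can run it on a small quadratic with a hand-picked spectrum; only the edges $\ell$ and $L$ enter the coefficients.

```python
import numpy as np

# Polyak momentum on f(x) = 0.5 (x - x*)^T H (x - x*) with H = diag(spectrum).
rng = np.random.default_rng(1)
d = 50
spectrum = np.linspace(0.1, 10.0, d)     # illustrative spectrum in [ell, L]
ell, L = spectrum[0], spectrum[-1]
x_star = rng.normal(size=d)
grad = lambda x: spectrum * (x - x_star)

momentum = ((np.sqrt(L) - np.sqrt(ell)) / (np.sqrt(L) + np.sqrt(ell)))**2
step = (2.0 / (np.sqrt(L) + np.sqrt(ell)))**2

x_prev = np.zeros(d)
x = x_prev - 2.0 / (L + ell) * grad(x_prev)   # first iterate
for _ in range(400):
    x, x_prev = x + momentum * (x - x_prev) - step * grad(x), x
```

With condition number $\kappa = 100$ the error contracts roughly like $(\tfrac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1})^t$ per iteration.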
\break
\section{All Roads Lead to Polyak Momentum}\label{scs:asymptotic_heavyball}
\begin{myenv}{Main result}
\begin{theorem}\label{thm:conv_hb}
Assume the density function $\mathop{}\!\mathrm{d}\mu$ is strictly positive in the interval $[\ell, L]$ with $\ell > 0$ and let $a_t, b_t$ be the parameters of the optimal average-case method (Theorem~\ref{thm:optimal_polynomial}). Then these parameters converge to those of \eqref{algo:pm_algo}. More precisely, we have the limits:
\begin{align} \label{eq:limit_method}
&\lim_{t \to \infty} {\color{colormomentum} a_t-1} = \underbrace{\left(\mfrac{\sqrt{L}-\sqrt{\ell}}{\sqrt{L}+\sqrt{\ell}}\right)^2}_{=\,\text{\eqref{algo:pm_algo} momentum}}\,, \;\;\quad\text{ and }\\
&\lim_{t \to \infty}{\color{colorstepsize} b_t} = \underbrace{- \left(\mfrac{2}{\sqrt{L} + \sqrt{\ell}}\right)^2}_{=\,\text{\eqref{algo:pm_algo} step-size}}\,.
\end{align}
~
\end{theorem}
\end{myenv}
The key insight of the proof is to relate the three-term recurrence of the residual \textit{orthogonal} polynomials to that of orthonormal polynomials\footnote{A sequence $Q_1, Q_2, ...$ of orthogonal polynomials with respect to $\mathop{}\!\mathrm{d}\omega$ is orthonormal if $\int Q_i^2 \mathop{}\!\mathrm{d} \omega = 1$.} on the interval $[-1,\,1]$. Once this is done, we use asymptotic properties of these polynomials. The proof is split into three steps.
\begin{itemize}[leftmargin=*]
\item Step 0 introduces notation and some known results.
\item Step 1 writes the coefficients of optimal average-case methods in terms of properties of a class of orthonormal polynomials in the $[-1, 1]$ interval.
\item Step 2 computes the limits of the expressions derived in the previous step by using known asymptotic properties of orthonormal polynomials.
\end{itemize}
{\bfseries \underline{Step 0}: Definitions.} In the classical theory of orthogonal polynomials, the weight function associated with orthogonal polynomials is defined in the interval $[-1, 1]$. However, in our case the spectral densities are instead defined in $[\ell, L]$. To translate results from one setting to the other we define the following linear mapping from $[\ell, L]$ to $[-1, 1]$:
\begin{equation}
m(\lambda) = \mfrac{L+\ell}{L - \ell} - \mfrac{2}{L-\ell}\lambda\,.
\end{equation}
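A quick check of this map, with arbitrary example values for $\ell$ and $L$ (note that $m$ reverses orientation, sending $\ell \mapsto 1$ and $L \mapsto -1$):

```python
# arbitrary example values for ell and L (not from the paper)
ell, L = 0.5, 4.0
m = lambda lam: (L + ell) / (L - ell) - 2.0 * lam / (L - ell)

# expect m(ell) = 1, m(L) = -1, and m(0) outside the support [-1, 1]
m_ell, m_L, m_0 = m(ell), m(L), m(0.0)
```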
For notational convenience, we will also use the shorthand $m_0 \stackrel{\text{def}}{=} m(0)$.
We now define $Q_i(m(\cdot))$ as the $i$-th degree orthonormal polynomial with respect to the weight function $\lambda\mu(\lambda)$. That is, the sequence $Q_1, Q_2, \ldots$ verifies
\begin{equation}
\int_{\ell}^L Q_i(m(\lambda))Q_j(m(\lambda))\, \lambda \mathop{}\!\mathrm{d}\mu(\lambda) = \delta_{ij}\,,
\end{equation}
where $\delta_{ij}$ denotes Kronecker's delta.
Like residual orthogonal polynomials, orthonormal polynomials also verify a three-term recurrence relation. This time, the relation is of the form
\begin{equation}\label{eq:recurrence_orthonormal_poly}
\alpha_t Q_t(\xi) = (\xi - \beta_t)Q_{t-1}(\xi) - \alpha_{t-1}Q_{t-2}(\xi)\,,
\end{equation}
and depends on coefficients $\alpha_t, \beta_t$.
{\bfseries \underline{Step 1}: Parameters of optimal method and orthonormal polynomials}.
In this step we derive the recurrence relation for an \emph{orthonormal} family with respect to the weight $\lambda \mathop{}\!\mathrm{d}\mu(\lambda)$. This will allow us to use existing results on the asymptotics of orthonormal polynomials.
\begin{lemma}\label{lemma:orthonormal_a_b}
Let $a_t, b_t$ be the parameters associated with the optimal average-case method (Theorem~\ref{thm:optimal_polynomial}). These coefficients verify the following identities:
\begin{align}\label{eq:reformation_a_b}
&{\color{colormomentum}1 - a_t} = -\frac{\alpha_{t-1}}{ \alpha_t}\frac{Q_{t-2}(m_0)}{ Q_t(m_0)} ~, \quad \text{ and }\\
&{\color{colorstepsize} b_t} = - \frac{2}{\alpha_t (L - \ell)} \frac{Q_{t-1}(m_0)}{Q_{t}(m_0)}~.
\end{align}
\end{lemma}
\begin{proof}
Since orthogonality is preserved after multiplication by a scalar, the polynomial $P_t(\lambda) \stackrel{\text{def}}{=} Q_t(m(\lambda)) / Q_t(m_0)$ is also orthogonal with respect to the weight function $\lambda \mathop{}\!\mathrm{d}\mu(\lambda)$. The normalization $1/Q_t(m_0)$ ensures $P_t$ is a residual polynomial. Note that $Q_t(m_0)$ cannot be zero because $m_0$ lies outside of the weight function's support $[-1,\,1]$.
Using Theorem~\ref{thm:optimal_polynomial}, the coefficients of the optimal average-case method can be derived from the three-term recurrence of this polynomial. Indeed, starting from the three-term recurrence of $Q_i$ \eqref{eq:recurrence_orthonormal_poly}, we obtain for $P_t$
\begin{align*}
P_t(\lambda) &= (m(\lambda) - \beta_{t})\mfrac{Q_{t-1}(m(\lambda))}{ \alpha_t \, Q_t(m_0)} - \alpha_{t-1}\mfrac{Q_{t-2}(m(\lambda))}{ \alpha_t \, Q_t(m_0)}\\
& = \underbrace{\mfrac{1}{\alpha_t}\left(\mfrac{L+\ell}{L - \ell} - \beta_{t} - \mfrac{2}{L-\ell}\lambda\right)\mfrac{Q_{t-1}(m_0)}{ \, Q_t(m_0) }}_{= (a_t + b_t\lambda)} P_{t-1}\\
&\qquad - \underbrace{\mfrac{\alpha_{t-1}}{ \alpha_t}\mfrac{Q_{t-2}(m_0)}{ Q_t(m_0)}}_{=-(1 - a_t)} P_{t-2}(\lambda)\,,
\end{align*}
where in the last line we used the definition of $m$ and the identity $P_{i}(\lambda) = Q_i(m(\lambda))/Q_i(m_0)$ for $i=t-1$ and $i=t-2$.
Finally, matching the coefficients of this recurrence with \eqref{eq:recurence_orthogonal_polynomials} yields the identities in the Lemma.
\end{proof}
{\bfseries \underline{Step 2}: Asymptotics of orthonormal polynomials.} This step uses known results on the asymptotics of orthonormal polynomials to compute the limit as $t \to \infty$ of the expressions derived in the previous step.
We use the following theorem on the asymptotic ratio between two successive orthonormal polynomials.
\begin{theorem}[\citep{rakhmanov1983asymptotics};\footnote{The original version of this theorem was stated for monic orthogonal polynomials but is valid for polynomials with other normalizations like orthonormal, see for instance \citep{lubinsky2000asymptotics, denisov2004rakhmanov}. } Ratio Asymptotics] \label{thm:rakhmanov1983asymptotics}
Let $\{Q_i\}$ be a sequence of orthonormal polynomials with respect to a weight function strictly positive on $(-1, 1)$, and zero elsewhere. Then we have the following limit for the ratio of polynomials evaluated outside of the support,
\begin{equation}\label{eq:rakhmanov}
\lim_{t\rightarrow\infty} \frac{Q_t(\xi)}{Q_{t-1}(\xi)} = \xi + \sqrt{\xi^2-1} \quad \text{for} \;\; \xi > 1\,.
\end{equation}
\end{theorem}
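For intuition, the theorem can be checked numerically with Chebyshev polynomials of the first kind, which are orthogonal w.r.t.\ the strictly positive weight $(1-\xi^2)^{-1/2}$ on $(-1,1)$; the orthonormalizing constants cancel in the ratio for $t \geq 2$.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Ratio of successive Chebyshev polynomials T_t evaluated outside [-1, 1]
# approaches xi + sqrt(xi^2 - 1), as predicted by Rakhmanov's theorem.
xi = 1.5
t = 40
T_t   = C.chebval(xi, [0.0] * t + [1.0])        # T_t(xi)
T_tm1 = C.chebval(xi, [0.0] * (t - 1) + [1.0])  # T_{t-1}(xi)
ratio = T_t / T_tm1
limit = xi + np.sqrt(xi**2 - 1.0)
```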
We can use this result to compute the limit of the ratio $Q_{t-1}(m_0)/Q_t(m_0)$, that appears in \eqref{eq:reformation_a_b}, as $m(0)> 1$ (and thus is not in the interval $[-1,\,1]$):
\begin{align}
\lim_{t \to \infty} \mfrac{Q_{t-1}(m_0)}{Q_t(m_0)} &\stackrel{\eqref{eq:rakhmanov}}{=} \Big(\mfrac{L + \ell}{L - \ell} + \sqrt{\big(\mfrac{L + \ell}{L - \ell}\big)^2-1}\Big)^{-1} \nonumber\\
&= \mfrac{\sqrt{L}-\sqrt{\ell}}{\sqrt{L}+\sqrt{\ell}}\,. \label{eq:limit_ratio_q}
\end{align}
The other dependency of Eq. \eqref{eq:reformation_a_b} on the iteration $t$ is through the coefficients $\alpha_t, \beta_t$. To compute the limits of these we use the following known asymptotics:\footnote{It can be shown that the last two theorems are equivalent \citep[Theorem 13]{nevai1979orthogonal}. However, it will be more convenient for our purposes to present them as independent results.}
\begin{theorem}[\citet{mate1985asymptotics}; Limits of recurrence coefficients] Under the same assumptions as Theorem \ref{thm:rakhmanov1983asymptotics}, the limits of the coefficients $\alpha_t, \beta_t$ in the orthonormal three-term recurrence (Eq.~\ref{eq:recurrence_orthonormal_poly}) are
\begin{equation}\label{eq:mate}
\lim_{t\rightarrow\infty}\alpha_t = \mfrac{1}{2}~,\qquad \lim_{t\rightarrow\infty}\beta_t = 0 ~.
\end{equation}
\end{theorem}
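Both limits can be observed numerically by running the Stieltjes orthogonalization procedure on a discretized weight. The weight below is a made-up strictly positive example on $[-1,1]$, not one taken from the paper.

```python
import numpy as np

# Stieltjes procedure: compute the coefficients alpha_t, beta_t of the
# orthonormal three-term recurrence for a weight discretized on a fine grid.
xs = np.linspace(-1.0, 1.0, 20001)
dx = xs[1] - xs[0]
w = np.sqrt(np.maximum(1.0 - xs**2, 0.0)) + 0.3 + 0.1 * xs   # > 0 on [-1, 1]
inner = lambda f, g: np.sum(f * g * w) * dx                  # Riemann sum

ones = np.ones_like(xs)
q_prev, q = np.zeros_like(xs), ones / np.sqrt(inner(ones, ones))  # Q_{-1}, Q_0
alpha_prev = 0.0
alphas, betas = [], []
for _ in range(30):
    beta = inner(xs * q, q)             # beta_t = <xi Q_{t-1}, Q_{t-1}>
    res = (xs - beta) * q - alpha_prev * q_prev
    alpha = np.sqrt(inner(res, res))    # alpha_t normalizes Q_t
    q_prev, q = q, res / alpha
    alpha_prev = alpha
    alphas.append(alpha)
    betas.append(beta)
```

Already after a few dozen steps the coefficients sit close to the limits $1/2$ and $0$.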
Using this last theorem together with \eqref{eq:limit_ratio_q}, we have
\begin{align*}
\lim_{t \to \infty} {\color{colormomentum}(1 - a_t)} &~~\stackrel{\eqref{eq:reformation_a_b}}{=}
-\left(\lim_{t \to \infty} \mfrac{\alpha_{t-1}}{ \alpha_t}\right) \left( \lim_{t \to \infty}\mfrac{Q_{t-2}(m_0)}{ Q_t(m_0)}\right) \nonumber\\
&\!\stackrel{(\ref{eq:limit_ratio_q}, \ref{eq:mate})}{=} -\Big(\mfrac{\sqrt{L}-\sqrt{\ell}}{\sqrt{L}+\sqrt{\ell}}\Big)^2,
\end{align*}
which is the claimed limit.
To conclude the proof, we compute the same limit for the step-size ${\color{colorstepsize}b_t}$:
\begin{align}
\lim_{t \to \infty} {\color{colorstepsize}b_t} &\stackrel{\eqref{eq:reformation_a_b}}{=} - \mfrac{2}{ L - \ell} \left(\lim_{t \to \infty} \alpha_t^{-1}\right) \left(\lim_{t \to \infty} \mfrac{Q_{t-1}(m_0)}{Q_{t}(m_0)}\right) \\
&\!\!\!\!\stackrel{(\ref{eq:limit_ratio_q}, \ref{eq:mate})}{=} - \left(\mfrac{2}{\sqrt{L} + \sqrt{\ell}}\right)^2\,.
\end{align}
\section{Asymptotic Expected Convergence Rates} \label{sec:proof}
The previous section showed that the method's parameters converge to those of PM, but said nothing about the rate of convergence. This section fills this gap by characterizing the asymptotic behavior of the expected error $\mathbb{E}\|{\boldsymbol x}_t-{\boldsymbol x}^\star\|^2$.
More precisely, we show that the expected rate of convergence tends to the worst-case rate of Polyak momentum, and that this limit is \textit{independent} of the probability distribution.
\begin{restatable}{thm}{conv_hb_rate} \label{thm:conv_hb_rate}
Under the same assumptions of Theorem \ref{thm:conv_hb}, the asymptotic expected rate of convergence of the optimal method converges to the worst-case rate of convergence,
\begin{equation}\label{eq:asympt_rate}
\limsup_{t\rightarrow \infty}\sqrt[t]{\mathbb{E}\left[\mfrac{\|{\boldsymbol x}_t-{\boldsymbol x}^\star\|^2}{\|{\boldsymbol x}_0-{\boldsymbol x}^\star\|^2}\right]} = \left(\mfrac{\sqrt{L}-\sqrt{\ell}}{\sqrt{L}+\sqrt{\ell}}\right)^2.
\end{equation}
\end{restatable}
\begin{proof}
Let $P_t$ be the residual orthogonal polynomial w.r.t. $\lambda \mathop{}\!\mathrm{d}\mu(\lambda)$. \citet[Theorem 3.1]{pedregosa2020acceleration} showed that the expected rate of convergence for average-case optimal methods admits the following simple form
\begin{equation}
\mathbb{E}\|{\boldsymbol x}_t-{\boldsymbol x}^\star\|^2 = R^{2}\int_{\mathbb{R}} P_t \mathop{}\!\mathrm{d}\mu~.
\end{equation}
This form is particularly convenient for us, as we can then use the three-term recurrence to obtain a recurrence for this expression. (It agrees with the identity $\mathbb{E}\|{\boldsymbol x}_t-{\boldsymbol x}^\star\|^2 = R^2\int_{\mathbb{R}} P_t^2 \mathop{}\!\mathrm{d}\mu$: since $P_t(0)=1$, the polynomial $(P_t-1)/\lambda$ has degree $t-1$, so orthogonality gives $\int P_t (P_t - 1)\mathop{}\!\mathrm{d}\mu = \int P_t \,\frac{P_t-1}{\lambda}\,\lambda\mathop{}\!\mathrm{d}\mu = 0$.) Let $r_t = \int_{\mathbb{R}} P_t \mathop{}\!\mathrm{d}\mu$. Using the recurrence over $P_t$,
\begin{align}
r_t &= \int_{\mathbb{R}} (a_t+\lambda b_t)P_{t-1}(\lambda) + (1-a_t) P_{t-2}(\lambda) \mathop{}\!\mathrm{d}\mu(\lambda)\nonumber\\
&= a_t \underbrace{\int_{\mathbb{R}} P_{t-1}\mathop{}\!\mathrm{d}\mu}_{=r_{t-1}} + (1-a_t) \underbrace{\int_{\mathbb{R}}P_{t-2} \mathop{}\!\mathrm{d}\mu}_{=r_{t-2}}\,,
\end{align}
where in the last identity we have used the orthogonality between $P_t$ and $P_0(\lambda)=1$ w.r.t. $\lambda \mathop{}\!\mathrm{d}\mu(\lambda)$. In all, we have that the convergence rate $r_t$ is described by the recurrence
\begin{equation}
\begin{split}
r_t &= a_t r_{t-1} + (1-a_t) r_{t-2}\,,\\
r_1 &= 1 + b_1 \int_{\mathbb{R}} \lambda \mathop{}\!\mathrm{d}\mu\,,~ r_0 = 1 \,.
\end{split}
\end{equation}
A classical result, often referred to as the Poincar{\'e}-Perron theorem (see for example \citet[Theorem C]{pituk2002more} or \citep[Thm. 8.11]{elaydi2005introduction}), states that if $a_t$ has a finite limit (guaranteed by the previous theorem, and which we denote $a_{\infty}$), then the recurrence has a fundamental set of solutions $\{r_t^1, r_t^2\}$ such that
\begin{equation}
\limsup_{t \to \infty} \sqrt[t]{|r^i_t|} = |\lambda_i|\quad \text{ $i =1, 2$}\,,
\end{equation}
where $\lambda_i$ are the roots of the characteristic equation ${\lambda^2 - a_{\infty} \lambda - (1 - a_{\infty})}$. In our case, these roots are $1$ and $a_{\infty} - 1$. Now, since the method we are considering is average-case optimal, this limit cannot be larger than that of Polyak momentum, known to be $(\tfrac{\sqrt{L}-\sqrt{\ell}}{\sqrt{L}+\sqrt{\ell}})^2 < 1$. Hence, we can eliminate the solution $r^1_t$ associated with the root $1$ and conclude
\begin{equation}
\limsup_{t \to \infty} \sqrt[t]{r_t} = a_{\infty} - 1 = \left(\mfrac{\sqrt{L}-\sqrt{\ell}}{\sqrt{L}+\sqrt{\ell}}\right)^2\,.
\end{equation}
\end{proof}
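The roots of the characteristic equation can be checked numerically for an illustrative value $a_\infty = 1.3$ (any $a_\infty > 1$ behaves the same):

```python
import numpy as np

# characteristic polynomial lambda^2 - a_inf * lambda - (1 - a_inf);
# its roots are 1 and a_inf - 1
a_inf = 1.3
roots = np.sort(np.roots([1.0, -a_inf, -(1.0 - a_inf)]))
```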
\section{Discussion and Simulations: Speed of Convergence to PM}
The main result (Theorem~\ref{thm:conv_hb}) shows that, asymptotically, any average-case optimal method converges towards Polyak momentum. This could be interpreted as evidence against average-case optimal methods, as they would not be ``essentially different'' from PM. However, simulations show other dynamics at play.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/speed_convergence.pdf}
\includegraphics[width=\linewidth]{figures/speed_convergence_uniform.pdf}
\caption{{\bfseries Speed of convergence to Polyak Momentum}. For different parametrizations of the Marchenko-Pastur (\textit{top line}) and uniform (\textit{bottom line}) distributions, we plot the absolute difference between the average-case optimal parameters and those of the Polyak method: momentum (\textit{middle}) and step-size (\textit{right}).
The plots show a strong anti-correlation between the speed of convergence of optimal average-case methods to PM and problem conditioning: for well-conditioned problems (small condition number) the parameters converge faster to PM than for ill-conditioned (large condition number) problems.
Thus, in a regime where we perform only a few iterations, Polyak momentum may not be the best choice.}
\label{fig:speed_convergence}
\end{figure*}
In Figure \ref{fig:speed_convergence} we plot the speed of convergence of the parameters of the optimal average-case method for the Marchenko-Pastur distribution with different ratios $r = \frac{d}{n}$ (and hence condition numbers) and for the uniform distribution with different intervals. We see a clear effect of the condition number on the speed of convergence. The more ill-conditioned the problem, the slower the convergence of the optimal method to PM, implying that PM behaves sub-optimally for a larger number of iterations. This observation is consistent with the results of \citet{pedregosa2020acceleration}, who showed important speedups in the ill-conditioned regime.
\section{Conclusion and Perspectives}
In this work, we have shown that optimal average-case methods for minimizing quadratics converge to PM under mild assumptions on the expected spectral distribution. This universality over the probability measure is somewhat surprising: the Polyak momentum method only depends on the edges of the spectrum, while optimal average-case methods depend on the whole spectral distribution.
A potential area for future work is the analysis of the rate of convergence of optimal methods to the Polyak momentum algorithm. It seems that the convergence of the step-size and momentum parameters is polynomial in the number of iterations. This observation indicates the potential benefit of optimal methods over PM when we perform only a small number of iterations, as is typical in machine-learning problems.
A second research direction is the study of optimal polynomials on the \textit{complex} plane. In this case, we are no longer solving the optimization problem \eqref{eq:quad_optim}. Instead, we aim to solve the linear system ${\boldsymbol A} {\boldsymbol x}={\boldsymbol b}$, where the matrix ${\boldsymbol A}$ is non-symmetric, with potentially complex eigenvalues. This has implications for the study of optimal algorithms in game theory \citep{azizian2020accelerating} and for the acceleration of primal-dual algorithms \citep{bollapragada2018nonlinear}.
Finally, our results are only valid in the strongly convex regime ($\ell > 0$), ruling out the important case $r=1$ in the Marchenko-Pastur distribution, which corresponds to large least squares problems with a square matrix. After the first version of this paper appeared, \citet{paquette2020} derived an average-case analysis for gradient descent and showed a gap between the asymptotic average-case and worst-case convergence rate. The development of average-case optimal methods and the study of their asymptotic limits in this regime remains an open problem.
\clearpage
\section*{Acknowledgements}
We would like to thank our colleague Gauthier Gidel for identifying and reporting some gaps in the proof of Theorem \ref{thm:conv_hb_rate}. A note of gratitude also goes to Reza Babanezad, Simon Lacoste-Julien, Remi Lepriol, Nicolas Loizou, Adam Ibrahim, Nicolas Leroux and Courtney Paquette for their insightful discussions and relevant remarks. We also thank Francis Bach and Raphaël Berthier for their useful remarks and pointers.
% arXiv:2002.04664 -- ``Universal Average-Case Optimality of Polyak Momentum'' (math.OC)
\section*{Understanding Nesterov's Acceleration via Proximal Point Method}
% arXiv:2005.08304
\begin{abstract}
The proximal point method (PPM) is a fundamental method in optimization that is often used as a building block for designing optimization algorithms. In this work, we use PPM to provide conceptually simple derivations along with convergence analyses of different versions of Nesterov's accelerated gradient method (AGM). The key observation is that AGM is a simple approximation of PPM, which results in an elementary derivation of the update equations and stepsizes of AGM. This view also leads to a transparent and conceptually simple analysis of AGM's convergence by using the analysis of PPM. The derivations also naturally extend to the strongly convex case. Ultimately, the results presented in this paper are of both didactic and conceptual value; they unify and explain existing variants of AGM while motivating other accelerated methods for practically relevant settings.
\end{abstract}
\section{Introduction}
The \emph{proximal point method (PPM)}~\cite{moreau1965proximite,martinet1970regularisation,rockafellar1976monotone} is a fundamental method in optimization which solves the minimization of the cost function $f:\mathbb{R}^d\to \mathbb{R}$ by iteratively solving the subproblem
\begin{align}
\label{prox}
x_{t+1} \leftarrow \argmin_{x\in \mathbb{R}^d} \left\{ f(x) + \frac{1}{2\eta_{t+1}} \norm{x-x_t}^2 \right\}
\end{align}
for a step size $\eta_{t+1}>0$, where the norm is chosen as the $\ell_2$ norm.
The motivation of the method is clear: we add a quadratic regularization to make the cost function well conditioned for faster optimization\footnote{For instance, when $f$ is nonconvex, the regularization term can make each subproblem \eqref{prox} convex, and even when $f$ is convex, the regularization term will serve to increase the strong convexity parameter, which results in faster optimization.
}.
Nevertheless, solving \eqref{prox} is in general as difficult as solving the original optimization problem, and PPM is largely regarded as a ``conceptual'' guiding principle for accelerating optimization algorithms~\cite{drusvyatskiy2017proximal}.
On the other hand, there is another prevalent accelerated method called \emph{Nesterov's accelerated gradient method} (AGM)~\cite{nesterov1983method}.
In contrast to PPM, AGM is \emph{implementable} and has been applied to a myriad of applications, including sparse linear regression~\cite{beck2009fast}, compressed sensing~\cite{becker2011nesta}, the maximum flow problem~\cite{lee2013new}, and deep neural networks~\cite{sutskever2013importance}.
Nonetheless, in contrast with the clear motivation of PPM, AGM has an obscure driving principle.
In particular, the original construction of AGM relies on an ingenious yet abstruse technique called the \emph{estimate sequence}~\cite[Section 2.2.1]{nesterov2018lectures}, which has motivated researchers to investigate numerous alternative explanations (see Section~\ref{sec:related} for details).
Recently, Defazio~\cite{defazio2019curved} established an inspiring connection between PPM and AGM. The main observation is that for strongly convex costs, one can derive a version of AGM from the primal-dual form of PPM with a tweak of geometry.
This observation constitutes an important step toward understanding AGM because PPM, unlike AGM, has a clear motivation.
This inspiring result, nevertheless, leaves open many other important questions.
Most importantly, \cite{defazio2019curved} lacks \emph{quantitative} explanations as to why AGM achieves the accelerated convergence rates of $O(\nicefrac{1}{T^2})$ for smooth (Definition~\ref{def:ell2}) costs and $O(\exp(\nicefrac{-T}{\sqrt{\kappa}}))$ for smooth and strongly convex (Definition~\ref{def:str}) costs, where $T$ is the number of iterations and $\kappa$ is the condition number of the problem.
Moreover, it is not clear whether such connection can be made without assuming strong convexity and can be extended to other more general and practical versions of AGM.
In this work, we build a \emph{thorough} understanding of Nesterov's acceleration from the proximal point method by strengthening the connection made in \cite{defazio2019curved}.
The main observation in this paper is that the mysterious updates of AGM can be fully understood by viewing it as a simple approximation of PPM.
In particular, this observation leads to a straightforward derivation of AGM that does not rely on duality unlike \cite{defazio2019curved}.
Moreover, the PPM view of AGM offers a simple analysis of AGM based on the standard analysis of PPM~\cite{guler1991convergence}.
We also demonstrate the generality of our view.
More specifically, our view naturally extends to the strongly convex case and obtains a general\footnote{It is general in the sense that it smoothly interpolates between the strongly convex case and the non-strongly convex case.} version of AGM~\cite[(2.2.19)]{nesterov2018lectures}, and our view also gives rise to the key idea of the \emph{method of similar triangles}, a version of AGM shown to have simple extensions to practically relevant settings~\cite{tseng2008accelerated,gasnikov2018universal,nesterov2018lectures}.
\section{Baseline: analysis of the proximal point method}\label{sec:ppm}
The baseline of our discussion is the following convergence rate of PPM for convex costs proved in a seminal paper by G{\"u}ler~\cite{guler1991convergence} (here $\xs$ denotes a global optimum point, i.e., $\xs \in \argmin_x f(x)$):
\begin{align}
\boxed{\textstyle f(x_{T})- f(\xs) \leq \OO{\big(\sum_{t=1}^{T} \eta_t\big)^{-1}}\quad \text{for any $T\geq 1$.}} \label{conv:prox}
\end{align}
In words, one can achieve an arbitrarily fast convergence rate by choosing the step sizes $\eta_t$ large.
Below, we review a short Lyapunov function proof of \eqref{conv:prox}, which will serve as a backbone for the other analyses.
\begin{proof}[{\bf Proof of \eqref{conv:prox}}]
It turns out that the following Lyapunov function is suitable:
\begin{align}
\boxed{\textstyle \Phi_t:= \big(\sum_{i=1}^{t} \eta_i \big)\cdot \big(f(x_t)-f(\xs)\big) + \frac{1}{2}\norm{\xs-x_t}^2,}\label{def:lya}
\end{align}
where $\Phi_0:= \frac{1}{2}\norm{\xs-x_0}^2$ and here and below, $\norm{\cdot}$ is the $\ell_2$ norm unless stated otherwise.
Now, it suffices to show that $\Phi_t$ is decreasing, i.e., $\Phi_{t+1}\leq \Phi_t$ for all $t\geq 0$.
Indeed, if $\Phi_t$ is decreasing, we have $\Phi_T\leq \Phi_0$ for any $T\geq 1$, which precisely recovers \eqref{conv:prox}. To that end, we use a standard result: \vspace{-5pt}
\begin{propbox}[Proximal inequality~(see e.g. {\cite[Proposition 12.26]{bauschke2011convex}})] \label{prop:per} For a convex function $\phi:\mathbb{R}^d\to \mathbb{R}$, let $x_{t+1}$ be the unique minimizer of the following proximal step:
\vspace{-5pt}
\begin{align}
\textstyle x_{t+1} \leftarrow \argmin_{x\in \mathbb{R}^d} \left\{\phi(x) +\frac{1}{2}\norm{x-x_t}^2\right\}\,. \label{prox:cond}
\end{align}
\vspace{-10pt}
\noindent Then, for any $u\in \mathbb{R}^d$, $\phi(x_{t+1})-\phi(u) +\frac{1}{2}\norm{u-x_{t+1}}^2 +\frac{1}{2}\norm{ x_{t+1}-x_t}^2 -\frac{1}{2}\norm{u-x_t}^2\leq 0$.
\end{propbox}
Now Proposition~\ref{prop:per} completes the proof as follows: First, we apply Proposition~\ref{prop:per} with $\phi=\eta_{t+1} f$ and $u=\xs$ and drop the term $\frac{1}{2}\norm{x_{t+1}-x_t}^2$ to obtain:
\begin{align}
\tag{B1} \eta_{t+1}\left[f( x_{t+1}) -f( \xs)\right] + \frac{1}{2} \norm{\xs- x_{t+1}}^2 -\frac{1}{2} \norm{ \xs-x_t}^2\leq 0\,.\label{ineq:1}
\end{align}
Next, from the optimality of $x_{t+1}$ for \eqref{prox:cond}, it readily follows that
\begin{align}
\tag{B2} f(x_{t+1}) - f(x_t) \leq 0\,. \label{ineq:2}
\end{align}
Now, computing $\eqref{ineq:1}+( \sum_{i=1}^{t} \eta_i )\times $\eqref{ineq:2} yields $\Phi_{t+1}\leq \Phi_t$, which finishes the proof. \end{proof}
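To make \eqref{conv:prox} concrete, PPM can be run exactly on a quadratic, where the proximal subproblem has a closed-form solution. The sketch below is our own toy example (problem size, spectrum, and step-size schedule are arbitrary choices).

```python
import numpy as np

# On f(x) = 0.5 (x - x*)^T H (x - x*), the proximal step has the closed form
# x_{t+1} = (I + eta H)^{-1} (x_t + eta H x*), so PPM can be run exactly.
rng = np.random.default_rng(2)
d = 30
H = np.diag(np.linspace(0.1, 10.0, d))
x_star = rng.normal(size=d)
f = lambda x: 0.5 * (x - x_star) @ H @ (x - x_star)   # f* = 0

x = np.zeros(d)
I = np.eye(d)
etas = [float(t) for t in range(1, 51)]   # growing step sizes eta_t = t
for eta in etas:
    x = np.linalg.solve(I + eta * H, x + eta * H @ x_star)

gap = f(x)                                            # f(x_T) - f(x*)
bound = np.dot(x_star, x_star) / (2.0 * sum(etas))    # Guler's bound, x_0 = 0
```

With $\eta_t = t$, the bound scales as $O(1/T^2)$, and the actual gap sits far below it.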
\vspace{-10pt}
\subsection{Our conceptual question}
Although the convergence rate \eqref{conv:prox} seems powerful, it has no practical value by itself, as PPM is in general not implementable.
Nevertheless, one can ask the following conceptual question:
\begin{center}
\emph{Can we develop an implementable approximation of PPM for large step sizes $\eta_t$'s?}
\end{center}
Perhaps the most straightforward approximation would be to replace the cost function $f$ in \eqref{prox} with its lower-order approximations.
We implement this idea in the next section.
\section{Two simple approximations of the proximal point method} \label{sec:warmup}
To analyze approximation errors, let us assume that the cost function $f$ is $L$-smooth.
\begin{definition}[Smoothness] \label{def:ell2}
For $L>0$, we say a differentiable function $f:\mathbb{R}^d\to \mathbb{R}$ is $L$-smooth if $f(x) \leq f(y) + \inp{\nabla f(y)}{x-y} +\frac{L}{2}\norm{x-y}^2 $ for any $x,y\in\mathbb{R}^d$.
\end{definition}
\noindent From the convexity and the $L$-smoothness of $f$, we have the following lower and upper bounds:
\begin{align*}
\textstyle f(y) +\inp{\nabla f(y)}{x-y} \leq f(x) \leq f(y) +\inp{\nabla f(y)}{x-y}+\frac{L}{2}\norm{x-y}^2\quad \text{for any $x,y\in \mathbb{R}^d$.}
\end{align*}
In this section, we use these bounds to approximate PPM.
\subsection{First approach: using first-order approximation}
\label{sec:app1}
Let us first replace $f$ in the objective \eqref{prox} with its lower approximation:
\begin{align}
x_{t+1} \leftarrow \argmin_{x} \left\{ f(x_t)+\inp{\nabla f(x_t)}{x-x_t} + \frac{1}{2\eta_{t+1}} \norm{x-x_t}^2 \right\}\,. \label{gd}
\end{align}
Writing the optimality condition, one quickly notices that \eqref{gd} actually leads to gradient descent:
\begin{align}
x_{t+1} =x_t -\eta_{t+1} \nabla f(x_{t})\,.\label{gd:1}
\end{align}
Let us see how well \eqref{gd} approximates PPM:
\vspace{-7pt}
\begin{proof}[{\bf Analysis of the first approach}] We first establish counterparts of \eqref{ineq:1} and \eqref{ineq:2}.
We begin with \eqref{ineq:1}. We first apply Proposition~\ref{prop:per} with $\phi(x)=\eta_{t+1}[f(x_t)+\inp{\nabla f(x_t)}{x-x_t}]$ and $u=\xs$:
\begin{align*}
&\phi(x_{t+1})-\phi(\xs)+ \frac{1}{2} \norm{\xs -x_{t+1}}^2 + \frac{1}{2} \norm{ x_{t+1}-x_t}^2 -\frac{1}{2 } \norm{\xs -x_{t}}^2\leq 0\,.
\end{align*}
Now using convexity and $L$-smoothness, we have $\phi(x)\leq \eta_{t+1} f(x)\leq \phi(x) +\frac{L\eta_{t+1}}{2}\norm{x-x_t}^2$, and hence the above inequality implies the following approximate version of \eqref{ineq:1}:
\begin{align*}
\eta_{t+1} \left[ f(x_{t+1})- f(\xs)\right] + \frac{1}{2} \norm{\xs -x_{t+1}}^2- \frac{1}{2 } \norm{\xs -x_{t}}^2 \leq \text{($\mathcal{E}_1$)},
\end{align*}
where $\text{($\mathcal{E}_1$)}:= (\frac{L\eta_{t+1}}{2}-\frac{1 }{2} )\norm{x_{t+1}-x_t}^2$.
Next, we use the $L$-smoothness of $f$ and the fact $\nabla f(x_t)= -\nicefrac{1}{\eta_{t+1}}(x_{t+1}-x_t)$ (due to \eqref{gd:1}), to obtain the following counterpart of \eqref{ineq:2}:
\begin{align*}
f(x_{t+1}) -f(x_t) \leq \inp{\nabla f(x_t)}{x_{t+1}-x_t} + \frac{L}{2} \norm{x_{t+1}-x_t}^2 = \text{($\mathcal{E}_2$)},
\end{align*}
where $\text{($\mathcal{E}_2$)}:=( \frac{L}{2}- \frac{1}{\eta_{t+1}}) \norm{x_{t+1}-x_t}^2$.
Now paralleling the proof of \eqref{conv:prox}, to show that $\Phi_t$~\eqref{def:lya} is a valid Lyapunov function, we need to find the step sizes $\eta_t$'s that satisfy the following relation: $\text{($\mathcal{E}_1$)} + ( \sum_{i=1}^t \eta_i) \times \text{($\mathcal{E}_2$)} \leq 0$.
On the other hand, note that both \text{($\mathcal{E}_1$)}~and \text{($\mathcal{E}_2$)}~become positive numbers when $\eta_{t+1}>2/L$.
Hence, the admissible choices for $\eta_t$ at each iteration are upper bounded by $2/L$, which together with the PPM convergence rate \eqref{conv:prox} implies that $O(\nicefrac{1}{\sum_{t=1}^T\eta_t}) =O(\nicefrac{1}{T})$ is the best convergence rate one can prove.
Indeed, choosing $\eta_t\equiv 1/L$, we have $\text{($\mathcal{E}_1$)}=0$ and $\text{($\mathcal{E}_2$)}<0$, obtaining the well-known bound $f(x_T)-f(\xs) \leq \frac{L\norm{x_0-\xs}^2}{2T} =O(\nicefrac{1}{T})$.
\end{proof}\vspace{-7pt}
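The final step above can be checked numerically. The following is a minimal sketch (our own illustration; the quadratic cost and constants are assumptions, not part of the analysis) that runs gradient descent with $\eta_t\equiv 1/L$ on the quadratic from Figure~\ref{fig:1} and verifies the $O(\nicefrac{1}{T})$ bound:

```python
import numpy as np

L = 2.0                                               # Hessian diag(0.2, 2) => L = 2
grad = lambda v: np.array([0.2 * v[0], 2.0 * v[1]])   # f(x, y) = 0.1 x^2 + y^2
f = lambda v: 0.1 * v[0] ** 2 + v[1] ** 2             # f* = 0 at the origin

x0 = np.array([10.0, 10.0])
x = x0.copy()
T = 100
for _ in range(T):
    x = x - (1.0 / L) * grad(x)       # first approach with eta_t = 1/L

bound = L * np.dot(x0, x0) / (2 * T)  # f(x_T) - f* <= L ||x0 - x*||^2 / (2T)
```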
\noindent To summarize, the first approach leads to a disappointing result: the approximation is valid only for the small step size regime of $\eta_t = \OO{1}$.
We empirically verify this fact for a quadratic cost in Figure~\ref{fig:1}.
As one can see from Figure~\ref{fig:1}, the lower approximation approach \eqref{gd} overshoots for large step sizes like $\eta_t = \Theta(t)$ and quickly steers away from the PPM iterates.\vspace{-7pt}
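The overshooting can also be seen in a few lines of code (our own sketch; same assumed quadratic and step schedule as in Figure~\ref{fig:1}): with $\eta_t = t/3$, the gradient descent iterates of the first approach blow up, whereas PPM itself would converge.

```python
import numpy as np

grad = lambda v: np.array([0.2 * v[0], 2.0 * v[1]])  # f(x, y) = 0.1 x^2 + y^2

x = np.array([10.0, 10.0])
for t in range(20):
    eta = (t + 1) / 3.0       # growing step size eta_t = t/3
    x = x - eta * grad(x)     # first approach (gd)

diverged = np.linalg.norm(x) > 1e6   # iterates blow up instead of tracking PPM
```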
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{p1.eps} \caption{$\eta_t\equiv 1/3$.}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{p2.eps} \caption{$\eta_t=t/3$.}
\end{subfigure}
\caption{ Comparison of the iterates of PPM~\eqref{prox}, the first approach~\eqref{gd}, the second approach~\eqref{gd:2-1}, and the combined approach~\eqref{approx:ppm}. For the setting, we choose $f(x,y) = 0.1x^2+y^2$ and $x_0=(10,10)$. }
\label{fig:1} \vspace{-7pt}
\end{figure}
\subsection{Second approach: using smoothness} \label{sec:app2} \vspace{-3pt}
After seeing the disappointing outcome of the first approach, our second approach is to replace $f$ with its upper approximation due to the $L$-smoothness:
\begin{align}
x_{t+1} \leftarrow \argmin_{x} \left\{ f(x_t)+\inp{\nabla f(x_t)}{x-x_t} +\frac{L}{2} \norm{x-x_t}^2+ \frac{1}{2\eta_{t+1}} \norm{x-x_t}^2 \right\}\,. \label{gd:2-1}
\end{align}
Writing out the optimality condition of \eqref{gd:2-1}, one arrives at a conservative version of the gradient descent update:
\begin{align}
x_{t+1} = x_t -\frac{1}{L+\eta_{t+1}^{-1}} \nabla f(x_{t})\,. \label{gd:2-2}
\end{align}
Note that regardless of how large we choose $\eta_{t+1}$, the actual update step size in \eqref{gd:2-2} is always upper bounded by $\nicefrac{1}{L}$.
Although this conservative update prevents the overshooting phenomenon of the first approach, as we increase $\eta_{t}$, it becomes too sluggish to be a good approximation of PPM; see Figure~\ref{fig:1}.
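The cap on the effective step size is easy to verify numerically (our own sketch; the quadratic and schedule are assumptions): even with the growing schedule $\eta_t = t/3$, every effective step of \eqref{gd:2-2} stays below $\nicefrac{1}{L}$ and the iterates remain convergent, just slowly.

```python
import numpy as np

L = 2.0
grad = lambda v: np.array([0.2 * v[0], 2.0 * v[1]])  # f(x, y) = 0.1 x^2 + y^2

x = np.array([10.0, 10.0])
steps = []
for t in range(20):
    eta = (t + 1) / 3.0
    step = 1.0 / (L + 1.0 / eta)   # effective step size of (gd:2-2)
    steps.append(step)
    x = x - step * grad(x)
```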
\section{Nesterov's acceleration via alternating two approaches}
\label{sec:deriv}
In the previous section, we have seen that the two simple approximations of PPM both have limitations.
Nonetheless, observe that their limitations are opposite to each other: while the first approach is too ``reckless,'' the second approach is too ``conservative.''
This observation motivates us to consider a \emph{combination} of the two approaches which could mitigate each other's limitation.
Let us implement this idea by alternating between the two approximations \eqref{gd} and \eqref{gd:2-1} of PPM.
The key modification is that for both approximations, we introduce an additional sequence of points $\{y_t\}$ for cost function approximation; i.e., we use the following approximations for the $t$-th iteration:
\begin{align*}
f(y_t) +\inp{\nabla f(y_t)}{x-y_t} \leq f(x) \leq f(y_t) +\inp{\nabla f(y_t)}{x-y_t}+\frac{L}{2}\norm{x-y_t}^2\,.
\end{align*}
Indeed, this modification is crucial: if we just use approximations at $x_t$, the resulting alternation merely concatenates \eqref{gd} and \eqref{gd:2-1} during each iteration, and the two limitations we discussed in Section~\ref{sec:warmup} will remain in the combined approach.
Having introduced a separate sequence $\{y_t\}$ for cost approximations, we consider the following alternation where during each iteration, we update $x_{t}$ with \eqref{gd} and $y_{t}$ with \eqref{gd:2-1}:
\begin{mdframed}[backgroundcolor=gray!20]
{\bf Approximate PPM with alternating two approaches.} Given $x_0\in \mathbb{R}^d$, let $y_0=x_0$ and run:
\begin{subequations} \label{approx:ppm}
\begin{align}
\label{ppm:a} &{\textstyle x_{t+1} \leftarrow \argmin_{x } \left\{ f(y_t)+\inp{\nabla f(y_t)}{x-y_t} + \frac{1}{2\eta_{t+1}} \norm{x-x_t}^2 \right\}}, \\
\label{ppm:b}
&{\textstyle y_{t+1}\leftarrow \argmin_{x} \left\{ f(y_t)+\inp{\nabla f(y_t)}{x-y_t} +\frac{L}{2} \norm{x-y_t}^2+ \frac{1}{2\eta_{t+1}} \norm{x-x_{t+1}}^2 \right\} }.
\end{align}
\end{subequations}
\end{mdframed}
In Figure~\ref{fig:1}, we empirically verify that \eqref{approx:ppm} indeed gets the best of both worlds: this combined approach successfully approximates PPM even for the regime $\eta_t=\Theta(t)$.
More remarkably, \eqref{approx:ppm} exactly recovers Nesterov's AGM.
More specifically, turning \eqref{approx:ppm} into the equational form by writing the optimality conditions, and introducing an auxiliary iterate $z_{t+1}:= y_t -\nicefrac{1}{L}\nabla f(y_t)$, we obtain the following ($z_0:=x_0=y_0$):
\begin{mdframed}
\begin{minipage}{0.50\textwidth}
{\bf An equivalent representation of \eqref{approx:ppm}:}
\begin{subequations} \label{agm}
\begin{align}
& \textstyle y_{t} =\frac{\nicefrac{1}{L}}{ \nicefrac{1}{L}+\eta_t} x_t+\frac{ \eta_t }{ \nicefrac{1}{L}+\eta_t} z_t \,, \label{agm:a}\\
& \textstyle x_{t+1} = x_t -\eta_{t+1} \nabla f(y_{t})\,,\label{agm:b}\\
& \textstyle z_{t+1} = y_{t}-\frac{1}{L}\cdot \nabla f(y_{t})\,. \label{agm:c}
\end{align}
\end{subequations}
\end{minipage}
\begin{minipage}{0.50\textwidth}
\vspace*{-12pt}
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=0.35]
\coordinate (a1) at (-2,0);
\coordinate (a2) at (-2,-4.5*0.6);
\coordinate (a3) at (-2,-4.5);
\coordinate [label=left:{ $\boldsymbol{x_t}$}] (xt) at (0,0);
\coordinate [label=right:{ $\boldsymbol{x_{t+1}}$}] (xt1) at (11,0);
\coordinate [label=left:{ $\boldsymbol{z_{t}}$}] (zt) at (2,-4.5);
\coordinate [label=left: {$\boldsymbol{y_{t}}$}] (yt1) at (2*0.6,-4.5*0.6);
\coordinate [label=right: { $\boldsymbol{z_{t+1}}$}] (zt1) at (6.0,-4.5*0.6);
\draw [->,>=stealth,dotted, line width=1pt,red] (xt) -- node[below] {\tiny $\textcolor{red}{\boldsymbol{-\eta_{t+1} \nabla f(y_{t})}}$} (xt1);
\draw [->,>=stealth, dotted, line width=1pt,red] (yt1) -- node[below] {\tiny $\textcolor{red}{\boldsymbol{-\frac{1}{L}\nabla f(y_{t})}}$} (zt1);
\draw [<->,dotted] (a1) -- node[left] {\small $ {\eta_t}$} (a2);
\draw [<->,dotted] (a2) -- node[left] {\small $ {\nicefrac{1}{L}}$} (a3);
\draw [dashed] (zt) -- (zt1);
\draw [dashed] (zt1) -- (xt1);
\draw [dashed] (xt) -- (zt);
\end{tikzpicture}
\vspace*{-4pt}
\caption{The iterates of \eqref{agm}.}
\label{fig:2}
\end{figure}
\end{minipage}
\end{mdframed}
Hence, we arrive at AGM without relying on any non-trivial derivations in the literature such as estimate sequences~\cite{nesterov2018lectures} or linear coupling~\cite{allen2014linear}.
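The claimed equivalence can be checked numerically. The sketch below (our own; the quadratic cost and the schedule $\eta_t = t/(2L)$, a choice justified later, are assumptions) runs \eqref{approx:ppm} directly from its optimality conditions and the three-line form \eqref{agm}, and confirms that they produce the same iterates:

```python
import numpy as np

L = 2.0
grad = lambda v: np.array([0.2 * v[0], 2.0 * v[1]])  # f(x, y) = 0.1 x^2 + y^2
eta = lambda t: t / (2 * L)                          # assumed step sizes

x0 = np.array([10.0, 10.0])

# (approx:ppm) written out via its optimality conditions.
x, y = x0.copy(), x0.copy()
for t in range(50):
    g = grad(y)
    x = x - eta(t + 1) * g                                         # (ppm:a)
    y = (L * (y - g / L) + x / eta(t + 1)) / (L + 1 / eta(t + 1))  # (ppm:b)

# Equivalent three-line representation (agm).
xa, za = x0.copy(), x0.copy()
for t in range(50):
    ya = ((1 / L) * xa + eta(t) * za) / (1 / L + eta(t))  # (agm:a); eta(0)=0 gives y_0=x_0
    g = grad(ya)
    xa = xa - eta(t + 1) * g                              # (agm:b)
    za = ya - g / L                                       # (agm:c)
ya_next = ((1 / L) * xa + eta(50) * za) / (1 / L + eta(50))  # y_50 for comparison
```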
To summarize, we have demonstrated:
\begin{center}
\emph{Nesterov's accelerated method is a simple approximation of the proximal point method!}
\end{center}
\begin{remark}
Our derivation is inspired by the one in the recent work by Defazio~\cite[Sections 5 and 6]{defazio2019curved}.
However, unlike the approach in \cite{defazio2019curved}, our derivation does not rely on duality, which could be advantageous in the settings where duality fails.
\end{remark}
\begin{remark}[Understanding mysterious parameters of AGM] \label{rmk:par}
It is often the case in the literature that the interpolation step \eqref{agm:a} is written in the abstract form $y_t = \tau_t x_t + (1-\tau_t) z_t$ with a weight parameter $\tau_t>0$ to be chosen~\cite{allen2014linear,lessard2016analysis,wilson2016lyapunov,bansal2019potential, ahn2020nesterov}.
However, in these previous works, $\tau_t$ is carefully chosen to make the analysis work, without conveying much intuition.
One important aspect of our PPM view is that it reveals a close relation between the weight parameter $\tau_t$ and the step size $\eta_t$.
More specifically, $\tau_t$ is chosen so that the ratio of the distances $\norm{y_t-x_t}:\norm{y_t-z_t}$ is equal to $\eta_t:\nicefrac{1}{L}$ (see Figure~\ref{fig:2}).
\end{remark}
\subsection{Understanding the accelerated convergence rate}
\label{subsec:conv}
In order to determine $\eta_t$'s in \eqref{agm}, we revisit the analysis of PPM from Section~\ref{sec:warmup}.
It turns out that, following Section~\ref{sec:app1}, one can derive the following inequalities from first principles using Proposition~\ref{prop:per} (we defer the derivations to Appendix~\ref{app:agm}):
\begin{align*}
\text{Counterpart of }\eqref{ineq:1}: &\quad\eta_{t+1}(f(z_{t+1})- f(\xs)) + \frac{1}{2} \norm{\xs -x_{t+1}}^2-
\frac{1}{2} \norm{\xs -x_{t}}^2\leq \text{($\mathcal{F}_1$)}\quad \text{and}\\
\text{Counterpart of }\eqref{ineq:2}: &\quad f(z_{t+1}) -f(z_t) \leq \text{($\mathcal{F}_2$)}\,,
\end{align*}
where $\text{($\mathcal{F}_1$)}:=(\frac{\eta_{t+1}^2}{2}-\frac{\eta_{t+1}}{2L}) \norm{\nabla f(y_{t})}^2 + L\eta_t\eta_{t+1}\inp{\nabla f(y_{t})}{ z_{t}-y_t}$ and $\text{($\mathcal{F}_2$)}:= -\frac{1}{2L} \norm{\nabla f(y_{t})}^2$ $- \inp{\nabla f(y_{t})}{z_{t}-y_t}$.
Hence, we modify the Lyapunov function \eqref{def:lya} by replacing the first $x_t$ with $z_t$:
\begin{align}
\textstyle \Phi_t:= (\sum_{i=1}^{t} \eta_i)\cdot (f(z_t)-f(\xs)) + \frac{1}{2}\norm{\xs-x_t}^2\,. \label{lya2}
\end{align}
Then as before, to prove the validity of the chosen Lyapunov function, it suffices to verify $\text{($\mathcal{F}_1$)} + ( \sum_{i=1}^t \eta_i)\cdot \text{($\mathcal{F}_2$)} \leq 0$, which is equivalent to
\begin{align}
\label{cond:1} \textstyle \frac{1}{2L}\left( L\eta_{t+1}^2 - \sum_{i=1}^{t+1} \eta_i\right) \norm{\nabla f(y_t)}^2 + \left( L\eta_t\eta_{t+1} - \sum_{i=1}^t \eta_i\right)\inp{\nabla f(y_{t})}{z_{t}-y_t} \leq 0\,.
\end{align}
From \eqref{cond:1}, it suffices to choose $\{\eta_t\}$ so that
$L\eta_t\eta_{t+1}=\sum_{i=1}^t \eta_i$.
Indeed, with such a choice, the coefficient of the inner product term in \eqref{cond:1} becomes zero and the coefficient of the squared norm term becomes $\nicefrac{1}{2L}(L\eta_{t+1}^2 - L\eta_{t+1}\eta_{t+2})\leq 0$ (if $\{\eta_t\}$ is increasing).
One can quickly check that choosing $\eta_{t} =\nicefrac{t}{2L}$ satisfies the desired relation.
Therefore, we obtain the well known accelerated convergence rate of $f(z_T)-f(\xs)\leq \frac{2L \norm{x_0-\xs}^2}{T(T+1)}= O(\nicefrac{1}{T^2})$~\cite{nesterov1983method}.
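Both the step-size relation and the accelerated bound can be verified numerically. The sketch below (our own; the quadratic cost is an assumption, not from the analysis) checks $L\eta_t\eta_{t+1}=\sum_{i=1}^t\eta_i$ for $\eta_t=\nicefrac{t}{2L}$ and that the iterates of \eqref{agm} satisfy the stated bound:

```python
import numpy as np

L = 2.0
eta = lambda t: t / (2 * L)

# Check the step-size relation L * eta_t * eta_{t+1} = sum_{i=1}^t eta_i.
relation_ok = all(
    abs(L * eta(t) * eta(t + 1) - sum(eta(i) for i in range(1, t + 1))) < 1e-12
    for t in range(1, 50)
)

# Check f(z_T) - f* <= 2 L ||x0 - x*||^2 / (T (T + 1)) on an assumed quadratic.
grad = lambda v: np.array([0.2 * v[0], 2.0 * v[1]])  # f(x, y) = 0.1 x^2 + y^2
f = lambda v: 0.1 * v[0] ** 2 + v[1] ** 2            # f* = 0 at the origin
x0 = np.array([10.0, 10.0])
x, z = x0.copy(), x0.copy()
T = 200
for t in range(T):
    y = ((1 / L) * x + eta(t) * z) / (1 / L + eta(t))  # (agm:a); eta(0) = 0
    g = grad(y)
    x = x - eta(t + 1) * g                             # (agm:b)
    z = y - g / L                                      # (agm:c)
bound = 2 * L * np.dot(x0, x0) / (T * (T + 1))
```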
\subsection{Separating step sizes for flexibility} \label{subsec:two}
Since \eqref{approx:ppm} is an approximation of PPM, it is helpful to give \eqref{approx:ppm} more flexibility when we try to extend it to other settings.
In particular, we relax \eqref{approx:ppm} by separating the two step sizes:
\begin{mdframed}
{\bf Approximate PPM with two separate step sizes $\{\eta_t\}$ and $\{\widetilde{\eta}_t\}$.} Given $x_0=y_0\in \mathbb{R}^d$,
\begin{subequations}\label{approx:ppm2}
\begin{align}
\label{ppm2:a} &{\textstyle x_{t+1} \leftarrow \argmin_{x } \left\{ f(y_t)+\inp{\nabla f(y_t)}{x-y_t} + \frac{1}{2\eta_{t+1}} \norm{x-x_t}^2 \right\}}, \\
\label{ppm2:b}
&{\textstyle y_{t+1}\leftarrow \argmin_{x} \left\{ f(y_t)+\inp{\nabla f(y_t)}{x-y_t} +\frac{L}{2} \norm{x-y_t}^2+ \frac{1}{2\widetilde{\eta}_{t+1}} \norm{x-x_{t+1}}^2 \right\} }.
\end{align}
\end{subequations}
\end{mdframed}
As we shall see in the next subsection, this simple relaxation allows us to recover a well known general version of AGM~\cite[Section 2.2]{nesterov2018lectures}.
\subsection{Acceleration for strongly convex costs} \label{sec:str}
Let us apply our PPM view to the strongly convex cost case. We begin with the definition:
\begin{definition}[Strong convexity] \label{def:str}
For $\mu>0$, we say a differentiable function $f:\mathbb{R}^d\to \mathbb{R}$ is $\mu$-strongly convex if $f(x) \geq f(y) + \inp{\nabla f(y)}{x-y} +\frac{\mu}{2}\norm{x-y}^2 $ for any $x,y\in\mathbb{R}^d$.
\end{definition}
\noindent Since $f$ is additionally assumed to be strongly convex, one can strengthen the step \eqref{ppm2:a} by
\begin{align}
\label{ppm:str}
x_{t+1} \leftarrow \argmin_{x\in \mathbb{R}^d} \left\{ f(y_t)+\inp{\nabla f(y_t)}{x-y_t} + \frac{\mu}{2}\norm{x-y_t}^2 + \frac{1}{2\eta_{t+1}} \norm{x-x_t}^2 \right\}\,.
\end{align}
Writing the optimality condition of \eqref{ppm:str}, it is straightforward to check that \eqref{approx:ppm2} is equivalent to the following form (again, we introduce another auxiliary iterate $w_t$, and let $z_0:=x_0=y_0$):
\begin{mdframed}
\begin{minipage}{0.5\textwidth}
{\bf Approximate PPM for strongly convex costs:}
\vspace*{-10pt}
\begin{subequations}\label{agm2}
\begin{align}
& \textstyle y_{t} =\frac{\nicefrac{1}{L}}{ \nicefrac{1}{L}+\widetilde{\eta}_t} x_t +\frac{ \widetilde{\eta}_t }{ \nicefrac{1}{L}+\widetilde{\eta}_t} z_t\,, \label{agm2:a}\\
& \textstyle w_t = \frac{\nicefrac{1}{\mu}}{ \nicefrac{1}{\mu}+\eta_{t+1}}x_t +\frac{\eta_{t+1}}{ \nicefrac{1}{\mu} +\eta_{t+1}}y_t\,,\\
& \textstyle x_{t+1} = w_t- \frac{\nicefrac{1}{\mu}\cdot \eta_{t+1}}{ \nicefrac{1}{\mu}+\eta_{t+1}}\nabla f(y_{t})\,,\label{agm2:b}\\
& \textstyle z_{t+1} = y_{t}-\frac{1}{L}\nabla f(y_{t})\,. \label{agm2:c}
\end{align}
\end{subequations}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\vspace{-15pt}
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=0.35]
\coordinate (a1) at (-3,0);
\coordinate (a2) at (-3,-7*0.7);
\coordinate (a3) at (-3,-7);
\coordinate (b1) at (-2.5,0);
\coordinate (b2) at (-2.5,-7*0.3);
\coordinate (b3) at (-2.5,-7*0.7);
\coordinate [label=left:{ $\boldsymbol{x_t}$}] (xt) at (0,0);
\coordinate [label=right:{$\boldsymbol{x_{t+1}}$}] (xt1) at (11,0);
\coordinate [label=left:{ $\boldsymbol{w_t}$}] (wt) at (1*0.3,-7*0.3);
\coordinate [label=left:{ $\boldsymbol{z_{t}}$}] (zt) at (1,-7);
\coordinate [label=left: {$\boldsymbol{y_{t}}$}] (yt1) at (1*0.7,-7*0.7);
\coordinate [label=below: {$\boldsymbol{z_{t+1}}$}] (zt1) at (1+10*0.3/0.7,-7+7*0.3/0.7);
\draw [->,>=stealth,dotted, line width=1pt,red] (wt) -- node[above] {\tiny $\textcolor{red}{\boldsymbol{-\frac{\eta_{t+1}\cdot \nicefrac{1}{\mu}}{ \eta_{t+1}+\nicefrac{1}{\mu}} \nabla f(y_{t})}}$} (xt1);
\draw [->,>=stealth, dotted, line width=1pt,red] (yt1) -- node[above] {\tiny $\textcolor{red}{\boldsymbol{-\frac{1}{L}\nabla f(y_{t})}}$} (zt1);
\draw [<->,dotted] (a1) -- node[left] {\small $ {\widetilde{\eta}_t}$} (a2);
\draw [<->,dotted] (a2) -- node[left] {\small $ {\nicefrac{1}{L}}$} (a3);
\draw [dashed] (zt) -- (zt1);
\draw [dashed] (zt1) -- (xt1);
\draw [dashed] (xt) -- (zt);
\draw [<->,dotted] (b1) -- node[right] {\scriptsize $ {\eta_{t+1}}$} (b2);
\draw [<->,dotted] (b2) -- node[right] {\small $ {\nicefrac{1}{\mu}}$} (b3);
\end{tikzpicture}
\vspace*{-4pt}
\caption{The iterates of \eqref{agm2}.}
\label{fig:2-2}
\end{figure}
\end{minipage}
\end{mdframed}
Paralleling Remark~\ref{rmk:par}, our derivation provides new insights into the choices of the AGM step sizes by expressing them in terms of the PPM step sizes $\eta_t$'s and $\widetilde{\eta}_t$'s.
Furthermore, our derivation actually demystifies the mysterious parameter choices made in Nesterov's book~\cite[Section 2.2]{nesterov2018lectures}.
To see this, let us recall the well known convergence rate of PPM for strongly convex costs due to Rockafellar~\cite[(1.14)]{rockafellar1976monotone}:
\begin{align}
\boxed{\textstyle f(x_{T})- f(\xs) \leq \OO{\prod_{t=1}^{T} (1+\mu\eta_t)^{-1}}\quad \text{for any $T\geq 1$.}} \label{conv:prox:str}
\end{align}
In light of \eqref{conv:prox:str}, it turns out that for the approximate PPM \eqref{agm2}, choosing the following step sizes
\begin{center}
$\eta_t \equiv \eta:= \mu^{-1}\cdot (\sqrt{\kappa}-1)^{-1}$ and $\widetilde{\eta}_t \equiv \widetilde{\eta}:= \mu^{-1}\cdot (\sqrt{\kappa})^{-1}$ (where $\kappa:=\nicefrac{L}{\mu}$)
\end{center}
actually recovers the well-known parameter choices~\cite[(2.2.22)]{nesterov2018lectures} which lead to the convergence rate of $\OO{(1+\eta \mu)^{-T}}=\OO{(1+\nicefrac{1}{(\sqrt{\kappa}-1)})^{-T}}$. See \cite[Section 5.5]{bansal2019potential} for a simple Lyapunov function proof of this convergence rate.
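The linear rate is easy to check numerically. The sketch below (our own illustration; the quadratic cost and constants are assumptions) runs \eqref{agm2} with the constant step sizes above on a strongly convex quadratic and observes geometric convergence:

```python
import numpy as np

mu, L = 1.0, 4.0
kappa = L / mu                              # condition number
eta = 1 / (mu * (np.sqrt(kappa) - 1))       # constant step size eta
eta_tld = 1 / (mu * np.sqrt(kappa))         # constant step size tilde-eta

grad = lambda v: np.array([L * v[0], mu * v[1]])      # f = (L x1^2 + mu x2^2)/2
f = lambda v: 0.5 * (L * v[0] ** 2 + mu * v[1] ** 2)  # f* = 0 at the origin

x = np.array([1.0, 1.0])
z = x.copy()
for t in range(100):
    y = ((1 / L) * x + eta_tld * z) / (1 / L + eta_tld)  # (agm2:a)
    g = grad(y)
    w = ((1 / mu) * x + eta * y) / (1 / mu + eta)        # momentum point w_t
    x = w - ((1 / mu) * eta / (1 / mu + eta)) * g        # (agm2:b)
    z = y - g / L                                        # (agm2:c)
```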
\begin{remark}[Nesterov's general method]
Remarkably, \eqref{agm2} also exactly recovers a general version of AGM~\cite[(2.2.19)]{nesterov2018lectures} which smoothly interpolates between the strongly convex case and the non-strongly convex case.
To see this, we follow a simple equivalent representation of Nesterov's general step sizes given in \cite{ahn2020nesterov}.
More specifically, given a sequence $\{\xi_t\}$ of positive numbers defined as per the nonlinear recursion $\frac{\xi_{t+1}(\xi_{t+1}-\kappa^{-1})}{1-\xi_{t+1}}=\xi_t^2$ with an initial value $\xi_0>0$, choosing $\eta_t=\mu^{-1}\cdot (\kappa \xi_t-1)^{-1}$ and $\widetilde{\eta}_t = \mu^{-1}\cdot (\kappa \xi_{t+1}-1)^{-1}\cdot (1-\xi_{t+1})$ exactly recovers the choices in~\cite[Section 2]{ahn2020nesterov} which are shown to be equivalent to Nesterov's choices~\cite[(2.2.19)]{nesterov2018lectures}.
\end{remark}
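The nonlinear recursion in the remark above can be made concrete (our own sketch; $\kappa$, $\xi_0$, and the explicit quadratic-formula solve are assumptions for illustration). Solving the recursion for $\xi_{t+1}$ shows the sequence converging to the fixed point $\xi_\infty = \nicefrac{1}{\sqrt{\kappa}}$, at which the general step sizes collapse to the constant choices of the previous paragraph:

```python
import numpy as np

mu, L = 1.0, 4.0
kappa = L / mu

def next_xi(xi):
    """Positive root of xi'^2 + (xi^2 - 1/kappa) xi' - xi^2 = 0, i.e. the
    implicit recursion xi' (xi' - 1/kappa) / (1 - xi') = xi^2."""
    b = xi ** 2 - 1 / kappa
    return (-b + np.sqrt(b ** 2 + 4 * xi ** 2)) / 2

xi = 0.9                    # assumed initial value xi_0 > 0
for _ in range(100):
    xi = next_xi(xi)

eta = 1 / (mu * (kappa * xi - 1))             # eta_t at the fixed point
eta_tld = (1 - xi) / (mu * (kappa * xi - 1))  # tilde-eta_t at the fixed point
```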
\section{Simple generalizations with similar triangles}
\label{sec:tri}
In the previous section, we have demonstrated that Nesterov's method is nothing but an approximation of PPM.
This viewpoint has not only provided simple derivations of versions of AGM, but also offered clear explanations of the step sizes. In this section, we demonstrate that these interpretations offered by PPM lead to a substantial simplification of Nesterov's AGM in the form of the \emph{method of similar triangles}~\cite{nesterov2018lectures,gasnikov2018universal}, which admits simple generalizations to practically relevant settings.
Our starting point is the observations made in the previous section: (i) from Remark~\ref{rmk:par}, we have seen $\norm{y_t-x_t}:\norm{y_t-z_t}=\eta_t:\nicefrac{1}{L}$; (ii) from Section~\ref{subsec:conv}, we have seen that we need to choose $\eta_t=\Theta(t)$, and hence, $\eta_{t+1}\approx\eta_t \gg 1$.
From these observations, one can readily see that the triangle $\triangle x_tx_{t+1}z_t$ is approximately similar to $\triangle y_tz_{t+1} z_t$.
Therefore, one can simplify AGM by further exploiting this fact: we modify the updates so that the two triangles are indeed \emph{similar}:
\begin{mdframed}
\begin{minipage}{0.50\textwidth}
{\bf Similar triangle approximation of PPM:}
\begin{subequations}\label{sim}
\begin{align}
& \textstyle y_{t} = \frac{\nicefrac{1}{L}}{ \nicefrac{1}{L}+\eta_t} x_t+\frac{ \eta_t }{ \nicefrac{1}{L}+\eta_t} z_t\,, \label{sim:a}\\
& \textstyle x_{t+1} = x_t -\eta_{t+1} \nabla f(y_{t})\,,\label{sim:b}\\
& \textstyle z_{t+1} = \frac{\nicefrac{1}{L}}{ \nicefrac{1}{L}+\eta_t} x_{t+1}+\frac{ \eta_t }{ \nicefrac{1}{L}+\eta_t } z_t\,. \label{sim:c}
\end{align}
\end{subequations}
\end{minipage}
\begin{minipage}{0.50\textwidth}
\vspace*{-12pt}
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=0.35]
\coordinate (a1) at (-2,0);
\coordinate (a2) at (-2,-4.5*0.6);
\coordinate (a3) at (-2,-4.5);
\coordinate [label=left:{ $\boldsymbol{x_t}$}] (xt) at (0,0);
\coordinate [label=right:{ $\boldsymbol{x_{t+1}}$}] (xt1) at (11,0);
\coordinate [label=left:{ $\boldsymbol{z_{t}}$}] (zt) at (2,-4.5);
\coordinate [label=left: {$\boldsymbol{y_{t}}$}] (yt) at (2*0.6,-4.5*0.6);
\coordinate [label=right: { $\boldsymbol{z_{t+1}}$}] (zt1) at (2+9*0.4,-4.5*0.6);
\draw [->,>=stealth, dotted, line width=0.8pt,red] (xt) -- node[below] {\tiny $\textcolor{red}{\boldsymbol{-\eta_{t+1} \nabla f(y_{t})}}$} (xt1);
\draw [->,>=stealth,dotted, line width=0.8pt,red] (yt) -- (zt1);
\draw [<->,dotted] (a1) -- node[left] {\small $ {\eta_t}$} (a2);
\draw [<->,dotted] (a2) -- node[left] {\small $ {\nicefrac{1}{L}}$} (a3);
\draw [dashed] (zt) -- (xt1);
\draw [dashed] (xt) -- (zt);
\draw pic[draw=black, fill=gray!10, -, angle eccentricity=1.2, angle radius=0.5cm, line width=1pt]
{angle=xt--xt1--zt};
\draw pic[draw=black, fill=gray!10, -, angle eccentricity=1.2, angle radius=0.5cm, line width=1pt]
{angle=yt--zt1--zt};
\draw pic[draw=black, fill=gray!10,-, densely dotted, angle eccentricity=1.2, angle radius=0.3cm, line width=1pt]
{angle=zt--yt--zt1};
\draw pic[draw=black, fill=gray!10, -, densely dotted, angle eccentricity=1.2, angle radius=0.3cm, line width=1pt]
{angle=zt--xt--xt1};
\end{tikzpicture}
\vspace*{-5pt}
\caption{The similar triangle updates \eqref{sim}.}
\label{fig:sim}
\end{figure}
\end{minipage}
\end{mdframed}
We provide a PPM-based analysis of \eqref{sim} for a more general setting in Section~\ref{sec:gen}.
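As a preview, the similar triangles updates \eqref{sim} can be sketched in a few lines (our own illustration; the quadratic cost and the schedule $\eta_t = t/(2L)$ are assumptions), and the iterates satisfy the $O(\nicefrac{1}{T^2})$ bound established later (with $h=\frac{1}{2}\norm{\cdot}^2$ and $\Psi=0$):

```python
import numpy as np

L = 2.0
eta = lambda t: t / (2 * L)
grad = lambda v: np.array([0.2 * v[0], 2.0 * v[1]])  # f(x, y) = 0.1 x^2 + y^2
f = lambda v: 0.1 * v[0] ** 2 + v[1] ** 2            # f* = 0 at the origin

x0 = np.array([10.0, 10.0])
x, z = x0.copy(), x0.copy()
T = 300
for t in range(T):
    w = eta(t) / (1 / L + eta(t))   # interpolation weight on z; w = 0 at t = 0
    y = (1 - w) * x + w * z         # (sim:a)
    x = x - eta(t + 1) * grad(y)    # (sim:b)
    z = (1 - w) * x + w * z         # (sim:c): same weights, applied to the new x
# bound with h = ||.||^2/2: 4 L * (||x0 - x*||^2 / 2) / (T (T + 1))
bound = 2 * L * np.dot(x0, x0) / (T * (T + 1))
```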
\begin{remark}
Updates akin to \eqref{sim} can be found in~\cite[Algorithm 1]{tseng2008accelerated} (note that the step sizes are slightly different).
Our derivation based on the PPM view clarifies why such similar triangles are natural updates to consider.
We also remark that updates based on similar triangles are useful in developing universal methods for stochastic composite optimization~\cite{gasnikov2018universal}.
\end{remark}
\begin{remark}[Momentum methods] \label{rmk:momentum}
An alternative way to approximate PPM with similar triangles is:
\begin{mdframed}
\begin{minipage}{0.50\textwidth}
{\bf Alternative similar triangle approximation:}
\begin{subequations}\label{sim2}
\begin{align}
& \textstyle y_{t} = \frac{\nicefrac{1}{L}}{ \nicefrac{1}{L}+\eta_t} x_t+\frac{ \eta_t }{ \nicefrac{1}{L}+\eta_t} z_t\,, \label{sim2:a}\\
& \textstyle z_{t+1} = y_t -\frac{1}{L} \nabla f(y_{t})\,,\label{sim2:b}\\
& \textstyle x_{t+1} =z_{t+1} + L\eta_t (z_{t+1}-z_t)\,. \label{sim2:c}
\end{align}
\end{subequations}
\end{minipage}
\begin{minipage}{0.50\textwidth}
\vspace*{-12pt}
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=0.35]
\coordinate (a1) at (-2,0);
\coordinate (a2) at (-2,-4.5*0.6);
\coordinate (a3) at (-2,-4.5);
\coordinate [label=left:{ $\boldsymbol{x_t}$}] (xt) at (0,0);
\coordinate [label=right:{ $\boldsymbol{x_{t+1}}$}] (xt1) at (11,0);
\filldraw (2,-4.5) circle (3pt) node[] {};
\coordinate [label=left:{ $\boldsymbol{z_{t}}$}] (zt) at (2,-4.5);
\filldraw (2*0.6,-4.5*0.6) circle (3pt) node[] {};
\coordinate [label=left: {$\boldsymbol{y_{t}}$}] (yt) at (2*0.6,-4.5*0.6);
\filldraw (2+9*0.4,-4.5*0.6) circle (3pt) node[] {};
\coordinate [label=below: { $\boldsymbol{z_{t+1}}$}] (zt1) at (2+9*0.4,-4.5*0.6);
\filldraw (2+9*0.6,-4.5*0.4) circle (3pt) node[] {};
\coordinate [label=above: { $\boldsymbol{y_{t+1}}$}] (yt1) at (2+9*0.6,-4.5*0.4);
\draw [->,>=stealth, dotted, line width=0.8pt,red] (xt) -- (xt1);
\draw [->,>=stealth,dotted, line width=0.8pt,red] (yt) --
node[above] {\tiny $\textcolor{red}{\boldsymbol{-\frac{1}{L} \nabla f(y_{t})}}$} (zt1);
\draw [<->,dotted] (a1) -- node[left] {\small $ {\eta_t}$} (a2);
\draw [<->,dotted] (a2) -- node[left] {\small $ {\nicefrac{1}{L}}$} (a3);
\draw [dashed] (zt) -- (xt1);
\draw [dashed] (xt) -- (zt);
\draw pic[draw=black, fill=gray!10, -, angle eccentricity=1.2, angle radius=0.5cm, line width=1pt]
{angle=xt--xt1--zt};
\draw pic[draw=black, fill=gray!10, -, angle eccentricity=1.2, angle radius=0.5cm, line width=1pt]
{angle=yt--zt1--zt};
\draw pic[draw=black, fill=gray!10,-, densely dotted, angle eccentricity=1.2, angle radius=0.3cm, line width=1pt]
{angle=zt--yt--zt1};
\draw pic[draw=black, fill=gray!10, -, densely dotted, angle eccentricity=1.2, angle radius=0.3cm, line width=1pt]
{angle=zt--xt--xt1};
\end{tikzpicture}
\vspace*{-5pt}
\caption{The updates of \eqref{sim2}.}
\label{fig:sim2}
\end{figure}
\end{minipage}
\end{mdframed}
In fact, \eqref{sim2} can be equivalently expressed without $\{x_t\}$: at the $t$-th iteration, we compute \eqref{sim2:b} and then update $y_{t+1}$ via $y_{t+1}= z_{t+1} + \frac{L\eta_t}{L\eta_{t+1}+1}(z_{t+1}-z_t)$ (these updates are illustrated with dots in Figure~\ref{fig:sim2}).
Hence, \eqref{sim2} is equivalent to the well-known momentum-based AGM~\cite{nesterov1983method,beck2009fast}.
Notably, it turns out that our PPM-based analysis suggests the choice of $\{\eta_t\}$ as per the recursive relation $(L\eta_{t+1}+\frac{1}{2})^2= (L\eta_t+1)^2 +\frac{1}{4}$, which after substitution $L\eta_t+1\leftarrow a_t$ exactly recovers the popular recursive relation $a_{t+1} = \frac{1}{2} (1+\sqrt{1+4a_t^2})$ in \cite{nesterov1983method,beck2009fast}.
See Appendix~\ref{app:rmk} for precise details.
\end{remark}
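The momentum form of \eqref{sim2} and the step-size recursion can be sketched together (our own illustration; the quadratic cost is an assumption). Writing $a_t := L\eta_t + 1$, the recursion $(L\eta_{t+1}+\frac{1}{2})^2=(L\eta_t+1)^2+\frac{1}{4}$ becomes the classic $a_{t+1}=\frac{1}{2}(1+\sqrt{1+4a_t^2})$, and the momentum coefficient $\frac{L\eta_t}{L\eta_{t+1}+1}$ becomes the familiar $\frac{a_t-1}{a_{t+1}}$:

```python
import numpy as np

L = 2.0
grad = lambda v: np.array([0.2 * v[0], 2.0 * v[1]])  # f(x, y) = 0.1 x^2 + y^2
f = lambda v: 0.1 * v[0] ** 2 + v[1] ** 2            # f* = 0 at the origin

a = 1.0                       # a_t = L * eta_t + 1; a_0 = 1 encodes eta_0 = 0
y = np.array([10.0, 10.0])
z = y.copy()
for _ in range(300):
    a_next = (1 + np.sqrt(1 + 4 * a ** 2)) / 2       # classic recursion
    z_next = y - grad(y) / L                         # (sim2:b)
    y = z_next + ((a - 1) / a_next) * (z_next - z)   # coefficient (a_t - 1)/a_{t+1}
    z = z_next
    a = a_next
```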
The main advantage of the similar triangles approximation \eqref{sim} becomes clearer in the constrained optimization case: when there is a constraint set, the steps \eqref{agm:b} and \eqref{agm:c} both become projection steps, which could be costly when the constraint set does not admit simple projections.
On the other hand, since \eqref{sim} only requires a single projection in each iteration, it minimizes such costly computations.
\subsection{Extension to composite costs and general norms}
\label{sec:gen}
It turns out that \eqref{sim} admits a simple extension to the practically relevant setting of composite costs and general norms (see e.g. \cite[Section 6.1.3]{nesterov2018lectures}).
More specifically, for a closed convex set $Q\subseteq \mathbb{R}^d$ and a closed\footnote{This means that the epigraph of the function is closed. See \cite[Definition 3.1.2]{nesterov2018lectures}. } convex function $\Psi:Q\to \mathbb{R}$, consider \begin{align*}
\textstyle \min_{x\in Q} f^{\Psi}(x):= f(x)+\Psi(x) \,,
\end{align*}
where $f:Q\to \mathbb{R}$ is a differentiable convex function which is $L$-smooth with respect to a norm $\norm{\cdot}$ that is not necessarily the $\ell_2$ norm (i.e., we regard the norm in Definition~\ref{def:ell2} to be our chosen norm).
\noindent For the general norm case, we use the Bregman divergence for the regularizer\footnote{We need the Bregman divergence in place of the squared-norm regularization because, for norms other than the $\ell_2$ norm, $\frac{1}{2}\norm{u-v}^2$ is not strongly convex with respect to the chosen norm. }:
\begin{definition}
Given a $1$-strongly convex (w.r.t the chosen norm $\norm{\cdot}$) function $h:Q\to \mathbb{R} \cup \{\infty\}$ that is differentiable on the interior of $Q$, $\breg{h}{u}{v}:=h(u)-h(v)-\inp{\nabla h(v)}{u-v}$ for all $u,v\in Q$.
\end{definition}
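As a concrete instance of this definition (a minimal sketch; the negative-entropy choice is a standard example, not one used in the analysis below), taking $h(x)=\sum_i x_i\log x_i$ on the probability simplex gives $\breg{h}{u}{v}=\mathrm{KL}(u\,\|\,v)$, and Pinsker's inequality witnesses the $1$-strong convexity of $h$ with respect to the $\ell_1$ norm:

```python
import numpy as np

# Negative entropy h(x) = sum_i x_i log x_i on the probability simplex;
# its Bregman divergence B_h(u, v) is the KL divergence.
def kl(u, v):
    return float(np.sum(u * np.log(u / v)))

rng = np.random.default_rng(0)
u = rng.random(5); u /= u.sum()
v = rng.random(5); v /= v.sum()

# Pinsker: KL(u||v) >= ||u - v||_1^2 / 2, witnessing 1-strong convexity
# of h with respect to the l1 norm.
pinsker_ok = kl(u, v) >= 0.5 * np.sum(np.abs(u - v)) ** 2
```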
\noindent Under the above setting and assumption, \eqref{sim} admits a simple generalization:
\begin{mdframed}
{\bf Similar triangle approximations of PPM for composite costs and general norms:}
\begin{subequations}\label{simg}
\begin{align}
&\textstyle y_{t} =\frac{\nicefrac{1}{L}}{\nicefrac{1}{L}+\eta_t } x_t+\frac{ \eta_t }{ \nicefrac{1}{L}+\eta_t } z_t\,, \label{simg:a}\\
&\textstyle x_{t+1} \leftarrow \argmin_{x\in Q} \left\{ f(y_t)+\inp{\nabla f(y_t)}{x-y_t} + \frac{1}{\eta_{t+1}} \breg{h}{x}{x_t} +\Psi(x) \right\}\,,\label{simg:b}\\
& \textstyle z_{t+1} =\frac{\nicefrac{1}{L}}{\nicefrac{1}{L}+\eta_t} x_{t+1}+\frac{ \eta_t }{\nicefrac{1}{L}+ \eta_t } z_t \,. \label{simg:c}
\end{align}
\end{subequations}
\end{mdframed}
Again, the similar triangle approximation \eqref{simg} is computationally advantageous in that it only requires a single projection in each iteration. Now we provide a simple PPM-based analysis of \eqref{simg}:
\begin{proof}[{\bf PPM-based analysis of \eqref{simg}}]
To obtain counterparts of \eqref{ineq:1} and \eqref{ineq:2}, we now use a generalization of Proposition \ref{prop:per} to the Bregman divergence (\cite[Lemma 3.1]{teboulle2018simplified}).
With such a generalization, we obtain the following inequality for $\phi^{\Psi}(x):=\eta_{t+1}[f(y_t)+\inp{\nabla f(y_t)}{x-y_t} +\Psi(x)]$:
\begin{align}
\phi^{\Psi}(x_{t+1}) -\phi^{\Psi}(\xs) + \breg{h}{\xs}{x_{t+1}}+\breg{h}{x_{t+1}}{x_{t}}-
\breg{h}{\xs}{x_{t}} \leq 0\,, \label{gen:1}
\end{align}
where $\xs\in \argmin_{x\in Q}f^{\Psi}(x)$.
Now using \eqref{gen:1}, one can derive from first principles the following inequalities (we defer the derivations to Appendix~\ref{app:sim}):
\begin{align*}
\text{Counterpart of }\eqref{ineq:1}: &\quad \eta_{t+1}(f^{\Psi}(z_{t+1})- f^{\Psi}(\xs) ) + \breg{h}{\xs}{x_{t+1}}-
\breg{h}{\xs}{x_{t}} \leq \text{($\mathcal{G}_1$)} \quad \text{and}\\
\text{Counterpart of }\eqref{ineq:2}: &\quad f^{\Psi}(z_{t+1}) -f^{\Psi}(z_t) \leq \text{($\mathcal{G}_2$)}\,,
\end{align*}
where $\text{($\mathcal{G}_1$)}:= \textstyle-\frac{1}{2}\norm{ x_{t+1}-x_t}^2 +\eta_{t+1}[\frac{L}{2}\norm{z_{t+1}-y_{t}}^2 +\inp{\nabla f(y_{t})}{z_{t+1}-x_{t+1}} +\Psi(z_{t+1}) -\Psi(x_{t+1})]$ and $\text{($\mathcal{G}_2$)}:= \frac{L}{2} \norm{z_{t+1}-y_t}^2 + \inp{\nabla f(y_{t})}{z_{t+1}-z_t} +\Psi(z_{t+1})-\Psi(z_{t})$.
Analogously to Section~\ref{subsec:conv}, but with the squared norm replaced by the Bregman divergence, we choose
\begin{align*}
\textstyle\Phi_t:= (\sum_{i=1}^{t} \eta_i)\cdot (f^{\Psi}(z_t)-f^{\Psi}(\xs)) +\breg{h}{\xs}{x_t}.
\end{align*}
Then, it suffices to show $\text{($\mathcal{G}_1$)} + ( \sum_{i=1}^t \eta_i)\cdot \text{($\mathcal{G}_2$)} \leq 0$.
Using the facts (i) $z_{t+1}-x_{t+1} = L\eta_t (z_t-z_{t+1})$ and (ii) $\norm{x_{t+1}-x_t}= (L\eta_t+1)\norm{z_{t+1}-y_t}$ (both are immediate consequences of the similar triangles) and rearranging, one can easily check that $\text{($\mathcal{G}_1$)} + ( \sum_{i=1}^t \eta_i)\cdot \text{($\mathcal{G}_2$)}$ is equal to
\begin{align}
\textstyle \frac{1}{2}\left(-(L\eta_t+1)^2 +L \eta_{t+1}+ L\sum_{i=1}^{t}\eta_{i} \right)\norm{z_{t+1}-y_{t}}^2 \label{a1}\\
\textstyle +\left(L\eta_t \eta_{t+1} - \sum_{i=1}^t\eta_i\right)\inp{\nabla f(y_{t})}{z_{t}-z_{t+1}} \label{a2}\\
\textstyle +\eta_{t+1}[ \Psi(z_{t+1}) -\Psi(x_{t+1}) ] + \left( \sum_{i=1}^t \eta_i\right)\cdot [\Psi(z_{t+1})-\Psi(z_{t})].
\label{a3}
\end{align}
Now choosing $\eta_t= \nicefrac{t}{2L}$ analogously to Section~\ref{subsec:conv}, one can easily verify $\eqref{a1}+\eqref{a2}+\eqref{a3} \leq 0$.
Indeed, for \eqref{a1}, since $L \eta_t \eta_{t+1}=\sum_{i=1}^t\eta_i$, the coefficient becomes $\nicefrac{1}{2}(L\eta_t+1)(L\eta_{t+1}-L\eta_t-1)$ which is a negative number since $L\eta_{t+1}-L\eta_t-1=-\nicefrac{1}{2}$;
for \eqref{a2}, the coefficient becomes zero due to the relation $L \eta_t \eta_{t+1}=\sum_{i=1}^t\eta_i$;
lastly, for \eqref{a3}, we have
\begin{align}
\eqref{a3}= \eta_{t+1}\left[ (1+L\eta_t) \Psi(z_{t+1}) -\Psi(x_{t+1}) -L\eta_{t}\Psi(z_{t})\right] \leq 0\,,
\end{align}
where the equality is due to the relation $L \eta_t \eta_{t+1} =\sum_{i=1}^t\eta_i$, and the inequality is due to the update \eqref{simg:c} (which can be equivalently written as $(1+L\eta_t) z_{t+1} = x_{t+1}+ L\eta_t z_t$) and the convexity of $\Psi$.
Hence, we obtain the accelerated rate of $f^{\Psi}(z_T)-f^{\Psi}(\xs)\leq \frac{4L\breg{h}{\xs}{x_0}}{T(T+1)} = O(\nicefrac{1}{T^2})$.
\end{proof}
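To make \eqref{simg} concrete, consider the following sketch (our own; the one-dimensional instance $f(x)=\frac{1}{2}(x-3)^2$ with $L=1$, $\Psi(x)=|x|$, and $h=\frac{1}{2}\norm{\cdot}^2$ are assumptions for illustration). With this $h$, step \eqref{simg:b} reduces to the soft-thresholding (prox) step of the $\ell_1$ norm, and the iterates converge to the composite minimizer $\xs=2$:

```python
import numpy as np

# Assumed instance: f(x) = (x - 3)^2 / 2 (so L = 1), Psi(x) = |x|, h = ||.||^2 / 2.
L = 1.0
eta = lambda t: t / (2 * L)
soft = lambda v, thr: float(np.sign(v)) * max(abs(v) - thr, 0.0)  # prox of thr * |.|

x = z = 0.0
T = 1000
for t in range(T):
    w = eta(t) / (1 / L + eta(t))                     # interpolation weight
    y = (1 - w) * x + w * z                           # (simg:a)
    x = soft(x - eta(t + 1) * (y - 3.0), eta(t + 1))  # (simg:b) with Psi = |.|
    z = (1 - w) * x + w * z                           # (simg:c), applied to new x
# the minimizer of (x - 3)^2 / 2 + |x| is x* = 2
```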
\section{Related work} \label{sec:related}
Motivated by the obscure scope of Nesterov's estimate sequence technique, there has been a flurry of work on developing alternative approaches to Nesterov's acceleration.
Most contributions are based on understanding the continuous-limit dynamics of Nesterov's AGM~\cite{su2014differential,krichene2015accelerated,wibisono2016variational}.
These continuous dynamics approaches have brought about new intuitions about Nesterov's acceleration, and follow-up works have developed analytical techniques for such dynamics~\cite{wilson2016lyapunov,diakonikolas2019approximate}.
However, these approaches share the limitation that, when applying discretization techniques to obtain optimization algorithms, auxiliary modifications are required to recover analyzable algorithms.
In contrast, our PPM approach directly yields accelerated methods and does not require additional adjustments.
Another notable contribution is based on the linear coupling framework~\cite{allen2014linear}.
The main observation is that the two most popular first-order methods, namely gradient descent and mirror descent, have complementary performance, and hence one can come up with a faster method by linearly coupling the two.
This view indeed offers a general framework of developing fast optimization algorithms; however, for understanding Nesterov's acceleration, this view has less expressive power compared to our PPM view.
More specifically, it is \emph{a priori} not clear why one needs to couple two methods \emph{linearly}.
Moreover, with the linear coupling view, one cannot interpret the interpolation weight as we did in Remark~\ref{rmk:par}.
It is also important to note that PPM has been given new attention as a building block for designing and analyzing fast optimization methods~\cite{drusvyatskiy2017proximal}.
To list a few instances, PPM has given rise to methods for weakly convex problems~\cite{davis2019proximally}, the prox-linear methods for composite optimization~\cite{burke1995gauss,nesterov2007modified,lewis2016proximal}, accelerated methods for stochastic optimization~\cite{lin2015universal}, and methods for saddle-point problems~\cite{mokhtari2019unified}.
Moreover, using Proposition~\ref{prop:per} (and its generalization to the Bregman divergence) as a unified tool for analyzing first-order methods has been discussed in~\cite{teboulle2018simplified}.
\section{Conclusion}
This work provides a complete understanding of Nesterov's acceleration by making analytical and quantitative connections to the proximal point method.
The key observation is that an alternation of two simple approximations of the PPM exactly recovers Nesterov's AGM.
Through this connection, we are able to explain all the step sizes of AGM in terms of the PPM step sizes, demystifying the seemingly arbitrary choices made in the literature.
This view naturally extends to the strongly convex case and recovers Nesterov's general accelerated method from his book.
Moreover, our PPM view motivates a simplification of AGM using similar triangles, which admits a simple PPM-based analysis as well as a simple extension to the general norm and composite optimization case.
For future directions, it would be interesting to connect our PPM view to accelerated stochastic methods
~\cite{lin2015universal,lan2018optimal} and other accelerated methods, including geometric descent~\cite{bubeck2015geometric}.
\section*{Acknowledgement} The author thanks Suvrit Sra, Alp Yurtsever and Jingzhao Zhang for detailed comments and stimulating discussions, Aaron Defazio for clarifications that helped the author develop Section~\ref{sec:gen}, and Heinz Bauschke for constructive suggestions on the presentation of Section~\ref{sec:deriv} as well as Remark~\ref{rmk:momentum}.
\bibliographystyle{alpha}
% https://arxiv.org/abs/2202.03821
\title{A Lower Bound on the Failed Zero Forcing Number of a Graph}
\begin{abstract}
Given a graph $G=(V,E)$ and a set of vertices marked as filled, we consider a color-change rule known as zero forcing. A set $S$ is a zero forcing set if filling $S$ and applying all possible instances of the color change rule causes all vertices in $V$ to be filled. A failed zero forcing set is a set of vertices that is not a zero forcing set. Given a graph $G$, the failed zero forcing number $F(G)$ is the maximum size of a failed zero forcing set. An open question was whether for every $k$ there is an $\ell$ such that all graphs with at least $\ell$ vertices must satisfy $F(G)\geq k$. We answer this question affirmatively by proving that for a graph $G$ with $n$ vertices, $F(G)\geq \lfloor\frac{n-1}{2}\rfloor$.
\end{abstract}
\section{Introduction}
Given a graph $G=(V,E)$ and a subset $S\subseteq V$ of filled vertices, we consider the following color change rule. If a filled vertex $v$ is adjacent to exactly one unfilled vertex $w$, then $v$ ``forces'' $w$, and we mark $w$ as filled. The set of filled vertices after all possible instances of the rule have been applied is called the derived set of $S$. If the derived set of $S$ is all of $V$, we say that $S$ is a zero forcing set for $G$; otherwise it is a failed zero forcing set. The zero forcing number of a graph $G$, denoted $Z(G)$, is the minimum size of any zero forcing set of $G$. The failed zero forcing number of $G$, denoted $F(G)$, is the maximum size of any failed zero forcing set of $G$. Zero forcing numbers and their relation to minimum rank problems have been studied extensively, for example in \cite{barioli}. Failed zero forcing numbers were introduced and studied in \cite{fetcie}.
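For concreteness, the color change rule and the derived set can be sketched in a few lines of Python (an illustrative helper, not part of the paper; graphs are represented as adjacency dictionaries):

```python
def derived_set(adj, filled):
    """Repeatedly apply the color change rule: a filled vertex with
    exactly one unfilled neighbor forces that neighbor to become filled."""
    filled = set(filled)
    changed = True
    while changed:
        changed = False
        for v in list(filled):
            unfilled = [w for w in adj[v] if w not in filled]
            if len(unfilled) == 1:  # v forces its unique unfilled neighbor
                filled.add(unfilled[0])
                changed = True
    return filled

def is_failed_zero_forcing(adj, s):
    """S is a failed zero forcing set iff its derived set is not all of V."""
    return derived_set(adj, s) != set(adj)
```

On the path $P_5$, for instance, one end vertex forces the entire path, while the set consisting of the second and fourth vertices is its own derived set and is therefore a failed zero forcing set.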
Any path $P_n$ satisfies $Z(P_n)=1$. This can be seen by taking the initial set of filled vertices to consist of exactly one end vertex of the path. The end vertex forces its neighbor, which in turn forces its unfilled neighbor, and so on until the entire path is filled. Hence there are graphs with arbitrarily large vertex set whose zero forcing number is 1.
In \cite{fetcie}, the authors asked whether a similar property holds for the failed zero forcing number. Intuition tells us that for any fixed integer $k$ there should not be arbitrarily large graphs satisfying $F(G)=k$: the larger the graph, the more room there is to distribute a fixed number of filled vertices sparsely, in such a way that no filled vertex can force. We confirm this intuition by proving that for any graph $G$ on $n$ vertices, $F(G)\geq\lfloor \frac{n-1}{2}\rfloor$. This is the best possible bound, as it was shown in \cite{fetcie} that any path $P_n$ satisfies
$F(P_n)=\lfloor\frac{n-1}{2}\rfloor$. Strikingly, for any given number of vertices, the minimum possible zero forcing number and maximum possible failed zero forcing number are both obtained by paths.
The bound makes it possible in principle to classify all graphs with a given failed zero forcing number $k$ by exhaustively checking all graphs such that $n\leq 2k + 2$. This classification was done in \cite{fetcie} for $k=0,1$, and in \cite{gomez} for $k=2$, the latter using a special case of the bound proven here.
In what follows we prove our main result by showing a somewhat stronger result for graphs with minimum vertex degree of at least three. We then show the desired bound inductively for graphs with minimum degree one and two.
\section{Graphs with minimum degree at least three}
The main result of this section is the following lower bound on the failed zero forcing number of a graph of minimum degree three or more:
\begin{theorem}\label{delta3}
Let $G=(V,E)$ be a connected graph on $n$ vertices satisfying $\delta(G)\geq 3$. Then $F(G) \geq \lfloor \frac{n}{2}\rfloor$. If $G$ contains a cycle of even length (and $n$ is odd), we can improve the bound to $F(G) \geq \lceil\frac{n}{2}\rceil$.
\end{theorem}
\tikzset{
unfilled/.style = {circle, draw = black, minimum size = 0.5cm},
filled/.style = {unfilled, fill = blue!40, minimum size = 0.5cm},
collection/.style = {ellipse, draw = black, thick}
}
We develop machinery needed to prove this result, starting with some terminology. If a set of vertices $S\neq V$ is its own derived set, then we say that $S$ is stalled. Note that any maximal failed zero forcing set is stalled. We will distinguish between vertices that cannot force because all of their neighbors are filled and vertices that cannot force because they have multiple unfilled neighbors.
\begin{defn}
Let $G=(V,E)$, $S\subseteq V$ be a set of filled vertices, and $v\in S$. We say $v$ is \emph{spent} if all neighbors of $v$ are in $S$. Otherwise, $v$ is \emph{unspent}.
\end{defn}
\noindent\textbf{Observation:} If we have a maximal failed zero forcing set $S$ in $G$ in which all vertices are unspent, then all filled vertices are adjacent to at least two unfilled vertices. In this case, we can add any number of edges to $G$, and $S$ will still be a failed zero forcing set. This is a special case of Lemma 3 of \cite{gomez}.
We next have a result about bipartite graphs.
\begin{lemma}\label{bipartite}
If $G=(V,E)$ is a bipartite graph with bipartition $V=L\sqcup R$, and $\delta(G)\geq 2$, then $F(G)\geq\max\{|L|,|R|\}$. In particular, $F(G)\geq\lceil\frac{n}{2}\rceil$.
\end{lemma}
\begin{proof}
If we fill all the vertices in, say, $L$, then every filled vertex is adjacent to at least two (unfilled) vertices in $R$, and therefore cannot force.
\end{proof}
Note that all vertices in the failed zero forcing set obtained in Lemma \ref{bipartite} are unspent. Combining the lemma with the observation, we have:
\begin{corollary}
If $G$ has a bipartite spanning subgraph $H$ such that $\delta(H)\geq 2$, then $F(G)\geq\lceil\frac{n}{2}\rceil$.
\end{corollary}
A vertex cut in a connected graph $G$ is a set of vertices whose
removal causes $G$ to be disconnected or trivial. The connectivity of a connected graph $G$, denoted $\kappa(G)$, is the size of the smallest vertex cut.
We need the following:
\begin{lemma}\label{cut_vert}
Let $G=(V,E)$ be a connected graph on $n$ vertices satisfying $\delta(G)\geq 3$ and $\kappa(G)=1$. Then $F(G)\geq \lfloor\frac{n+1}{2}\rfloor$.
\end{lemma}
\begin{figure}[H]
\caption{The graph $G$ with the vertices in $S \cup \{v\}$ colored.}
\label{fig:cut_vert}
\begin{tikzpicture}
\draw[fill = blue!40] (0, 0) ellipse(2.5cm and 1cm);
\draw (0, -3) ellipse(0.75cm and 0.75cm);
\node[filled, label=right:{$v$}] (v) at (0, -1.75){};
\node (d2) at (-1.5, 0){};
\node (d3) at (1.5, 0){};
\node (d1) at (0, -3){};
\node[unfilled, label=right:{$w$}] (w) at (0, -2.75){};
\node (w1) at (-.25, -3.5){};
\node (w2) at (.25, -3.5){};
\draw (v) -- (w);
\draw (v) -- (d2);
\draw (v) -- (d3);
\draw (w) -- (w1);
\draw (w) -- (w2);
\end{tikzpicture}
\centering
\end{figure}
\begin{proof}
Since $\kappa(G) = 1$, there is a vertex cut consisting of a single vertex $v$. Let $G'$ be the subgraph induced by $V-\{v\}$. Since $G'$ is disconnected, we may label its connected components $D_1, D_2,\ldots,D_k$, with corresponding vertex sets $Y_1, Y_2,\ldots,Y_k$, ordered by increasing size.
Let $S = \bigcup_{i = 2}^k Y_i$. Since $Y_1$ is a smallest of the $k\geq 2$ components of $G'$, we have $|Y_1| \le \lfloor\frac{n-1}{2}\rfloor$, and hence $|S| + 1 = n - |Y_1| \ge \lfloor \frac{n+1}{2} \rfloor$.
The only vertex outside $D_1$ adjacent to a vertex in $D_1$ is $v$. No vertex in $S$ can force a vertex in $D_1$, so we need only consider the color change rule applied to $v$. If $v$ is adjacent to at least two vertices in $D_1$, then $S \cup \{v\}$ is stalled. Otherwise, $v$ is adjacent to exactly one vertex in $D_1$; call it $w$ (see Figure \ref{fig:cut_vert}). Since $\delta(G) \ge 3$, $w$ has at least two neighbors in $D_1$. The derived set of $S \cup \{v\}$ then includes $w$, but not $w$'s neighbors in $D_1$, since there are at least two of them. In either case, the derived set of $S \cup \{v\}$ excludes vertices of $D_1$, making it a failed zero forcing set. Thus, $F(G) \ge |S \cup \{v\}| = |S| + 1 \ge \lfloor \frac{n+1}{2} \rfloor$.
\end{proof}
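The construction in the proof of Lemma \ref{cut_vert} translates directly into code (a hypothetical sketch, not from the paper: it fills everything outside a smallest component of $G-v$, together with $v$ itself):

```python
def components(adj, removed):
    """Connected components of the subgraph induced on V - removed (DFS)."""
    seen, comps = set(removed), []
    for s in adj:
        if s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in seen and w not in comp:
                    comp.add(w)
                    stack.append(w)
        seen |= comp
        comps.append(comp)
    return comps

def failed_set_from_cut_vertex(adj, v):
    """Given a cut vertex v, fill V minus a smallest component of G - v."""
    smallest = min(components(adj, {v}), key=len)
    return {u for u in adj if u not in smallest}
```

For two copies of $K_4$ glued at a single vertex ($n=7$, $\delta(G)=3$), the resulting set has $4 = \lfloor\frac{n+1}{2}\rfloor$ vertices, matching the bound of the lemma.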
Now we handle the case where $G$ has no cut vertex. We give an algorithm for finding a failed zero forcing set with at least $\lfloor \frac{n}{2}\rfloor$ vertices in any graph $G$ satisfying $\delta(G)\geq 3$ and $\kappa(G)\geq 2$. The algorithm produces a stalled set of the required size, all of whose vertices are unspent.
\begin{algo}\label{algo} Our strategy will be to partition $V$ into three subsets $L$, $R$ and $O$ such that any vertex in $L$ is adjacent to at least two vertices in $R\cup O$, and any vertex in $R$ is adjacent to at least two vertices in $L\cup O$. We may then obtain a stalled set by filling either all of the vertices in $L$ or all of the vertices in $R$. We will have $|O|=0$ or $|O|=1$, so choosing the larger of $L$ and $R$ fills at least $\lceil\frac{n-1}{2}\rceil = \lfloor\frac{n}{2}\rfloor$ vertices. In what follows, we refer to $L$ and $R$ as the ``sides'' of the partition. At any stage of the algorithm we let $A=L\cup R\cup O$ denote the set of vertices that have already been assigned, and we let $U$ be the set of vertices that remain unassigned.
Since $\delta(G)\geq 3$, $G$ has at least one cycle. If $G$ has a cycle $C$ of even length, then assign alternating vertices of $C$ to $L$ and $R$ and let $O=\emptyset$. This guarantees that all assigned vertices are adjacent to two vertices on the other side of the partition.
If $G$ has no even cycles, then choose any odd cycle $C$ and let $O=\{v_0\}$ for an arbitrarily chosen $v_0\in C$. Starting from either neighbor of $v_0$ in $C$, assign alternating vertices to $L$ and $R$. The neighbors of $v_0$ are each adjacent to a vertex in $O$ and a vertex on the other side of the partition, and all other vertices in $C$ are adjacent to two vertices on the other side of the partition.
Now, while any vertices remain unassigned, we find a new vertex or vertices to assign by checking the following conditions in order of priority:
\begin{enumerate}
\item If there is any vertex $v$ that is adjacent to at least two vertices in $L\cup O$, we assign $v$ to $R$.
\item If there is any $v$ that is adjacent to at least two vertices in $R\cup O$, we assign $v$ to $L$.
\item If there is any $v$ that is adjacent to exactly two vertices $w_1,w_2$ in $A$, where $w_1\in L$ and $w_2\in R$, we proceed as follows. Note that there must be a path $u_0,u_1,\ldots,v$ from some other vertex $u_0\in A$ to $v$, or else $v$ would be a cut vertex. We may assume the interior vertices of the path lie in $U$. We assign $u_1$ to the side opposite $u_0$ (or to either side if $u_0\in O$), and alternate assignments along the path until we reach (and assign) $v$. Each interior vertex of the path is then adjacent to two neighbors on the opposite side of the partition. The vertex $v$ itself has one neighbor that was just assigned to the side opposite its own; together with one of its previously assigned neighbors $w_1,w_2$, it has two neighbors with the opposite assignment.
\begin{figure}[H]
\caption{Case 3 of Algorithm. A red label indicates the label was newly assigned in this stage. }\label{case3}
\centering
\begin{tikzpicture}
\tikzstyle{vertex} = [circle, draw=black]
\tikzstyle{filled_vertex} = [circle, draw=black, fill = blue!40]
\draw(0,0) ellipse(3.5cm and 1.75cm);
\node (v0) at (0,1){$A$};
\node[vertex,label=above:{$w_1$}] (v1) at (-1.5,-1){\tiny L};
\node[vertex,label=above:{$w_2$}] (v2) at (-.5,-1){\tiny R};
\node[vertex,text=red,label=below:{$v$}] (v3) at (-1,-2.25){\tiny R};
\node[vertex,text=red] (v4) at (0,-2.25){\tiny L};
\node[vertex,text=red] (v5) at (1,-2.25){\tiny R};
\node[vertex,text=red,label=below:{$u_1$}] (v6) at (2,-2.25){\tiny L};
\node[vertex,label=above:{$u_0$}] (v7) at (2,-.75){\tiny R};
\draw (v1) -- (v3);
\draw (v2) -- (v3);
\draw (v3) -- (v4);
\draw (v4) -- (v5);
\draw (v6) -- (v5);
\draw (v6) -- (v7);
\end{tikzpicture}
\end{figure}
\item If none of the previous three cases applies, then every vertex in $U$ that has a neighbor in $A$ has exactly one neighbor in $A$. Let $H$ be the subgraph induced by $U$. Note that $\delta(H)\geq 2$, so $H$ contains a cycle. If $H$ contains a cycle of even length, then we may simply assign the vertices of that cycle to alternating sides of the partition, as in the initial step of the algorithm. So we may assume that all cycles in $H$ have odd length. Choose any cycle $C$ in $H$. Let $P$ be a path from a vertex $c_0\in C$ to a vertex $v_U\in U$ such that $v_U$ is adjacent to a vertex $v_A\in A$. We may choose $P$ so that $P\cap C=\{c_0\}$ and $P\cap A =\{v_A\}$. (It is possible that $c_0=v_U$.) There must also be a path $Q$ from some vertex $c_i\in C$, $c_i\neq c_0$, to a vertex $w_U\in U$ that is adjacent to a vertex $w_A\in A$, or else $c_0$ would be a cut vertex of $G$. We may likewise assume $Q\cap C=\{c_i\}$ and $Q\cap A=\{w_A\}$. It is possible that $v_A=w_A$. However, $P\cap Q=\emptyset$: otherwise, using $P$, $Q$ and the two paths from $c_0$ to $c_i$ in $C$, we could produce two cycles of opposite parity through $v_U$, and hence $U$ would contain an even cycle. We therefore have two paths from $v_U$ to $w_U$ in $H$ via $c_0$ and $c_i$, taking opposite ways around $C$. Since $C$ has odd length, these paths' lengths have opposite parity. If $v_A$ and $w_A$ are on opposite sides of the partition, give $v_U$ the label opposite to that of $v_A$ and alternate labels along the odd-length path. If $v_A$ and $w_A$ are on the same side of the partition, give $v_U$ the label opposite to that of $v_A$ and alternate labels along the even-length path instead.
\end{enumerate}
\begin{figure}[H]
\caption{Case 4 of Algorithm. In the event that $v_A$ and $w_A$ have opposite labels, we alternate labels along an odd-length path between $v_A$ and $w_A$.}
\centering
\begin{tikzpicture}
\tikzstyle{vertex} = [circle, draw=black, minimum size = .5cm]
\tikzstyle{filled_vertex} = [circle, draw=black, fill = blue!40]
\node[vertex,label=below:{$v_A$}] (v1) at (0,0){\tiny L};
\node[vertex,text=red,label={[align=left]$c_0=$\\$v_U$}] (v2) at (1,0){\tiny R};
\node[vertex,text=red] (v3) at (2.5,1){\tiny L};
\node[vertex,text=red] (v4) at (2,-1){};
\node[vertex,text=red] (v5) at (3,-1){};
\node[vertex,text=red,label=below:{$c_i$}] (v6) at (4,0){\tiny R};
\node[vertex,text=red,label=below:{$w_U$}] (v7) at (5,0){\tiny L};
\node[vertex,label=below:{$w_A$}] (v8) at (6,0){\tiny R};
\draw (v1)--(v2);
\draw (v2)--(v3);
\draw (v2)--(v4);
\draw (v4)--(v5);
\draw (v6)--(v5);
\draw (v3)--(v6);
\draw (v7)--(v6);
\draw (v7)--(v8);
\end{tikzpicture}
\end{figure}
\begin{figure}[H]
\caption{Case 4 of Algorithm. In the event that $v_A$ and $w_A$ have the same label, we alternate labels along an even-length path between $v_A$ and $w_A$.}
\centering
\begin{tikzpicture}
\tikzstyle{vertex} = [circle, draw=black, minimum size = .5cm]
\tikzstyle{filled_vertex} = [circle, draw=black, fill = blue!40]
\node[vertex,label=below:{$v_A$}] (v1) at (0,0){\tiny L};
\node[vertex,text=red,label={[align=left]$c_0=$\\$v_U$}] (v2) at (1,0){\tiny R};
\node[vertex,text=red] (v3) at (2.5,1){};
\node[vertex,text=red] (v4) at (2,-1){\tiny L};
\node[vertex,text=red] (v5) at (3,-1){\tiny R};
\node[vertex,text=red,label=below:{$c_i$}] (v6) at (4,0){\tiny L};
\node[vertex,text=red,label=below:{$w_U$}] (v7) at (5,0){\tiny R};
\node[vertex,label=below:{$w_A$}] (v8) at (6,0){\tiny L};
\draw (v1)--(v2);
\draw (v2)--(v3);
\draw (v2)--(v4);
\draw (v4)--(v5);
\draw (v6)--(v5);
\draw (v3)--(v6);
\draw (v7)--(v6);
\draw (v7)--(v8);
\end{tikzpicture}
\end{figure}
Any time we make new assignments, we repeat the process, checking the cases in order. The cases are exhaustive, and in each case we can assign labels to more vertices while maintaining the property that any labeled vertex has at least two neighbors, each of which either has the opposite label or is in $O$. Hence the process terminates with all vertices assigned a label, and the final labeling satisfies the required property.
\end{algo}
In the case where $G$ contains an even cycle, note that $O=\emptyset$, and all vertices are labeled either $L$ or $R$ upon termination. Therefore $F(G)\geq \lceil\frac{n}{2}\rceil$ by Lemma \ref{bipartite}.
Lemma \ref{cut_vert} and Algorithm \ref{algo} exhaust all possibilities for connected graphs with minimum degree at least three. Hence, we have proven Theorem \ref{delta3}.
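The invariant maintained by Algorithm \ref{algo} — every assigned vertex has at least two neighbors that are either in $O$ or on the opposite side — is easy to check mechanically (an illustrative helper, not part of the paper):

```python
def valid_partition(adj, L, R, O):
    """Check that filling all of L (or, symmetrically, all of R) yields a
    stalled set: each vertex of L needs two neighbors in R | O, and each
    vertex of R needs two neighbors in L | O."""
    ok_L = all(sum(1 for w in adj[v] if w in R or w in O) >= 2 for v in L)
    ok_R = all(sum(1 for w in adj[v] if w in L or w in O) >= 2 for v in R)
    return ok_L and ok_R
```

For the even cycle $C_6$, the alternating bipartition works with $O=\emptyset$; for $K_4$, the partition $L=\{1\}$, $R=\{2,3\}$, $O=\{0\}$ obtained from the odd-cycle initialization also satisfies the invariant.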
\section{The General Case}
Next we handle the case where $G$ has vertices of degree less than three with an inductive argument.
\begin{theorem}
Let $G=(V,E)$ be a graph on $n$ vertices. Then $F(G)\geq \lfloor\frac{n-1}{2}\rfloor$.
\end{theorem}
\begin{proof}
If $G$ is disconnected, we may find a failed zero forcing set $S$ satisfying $|S|\geq \lceil\frac{n}{2}\rceil$ by filling all vertices in all but a smallest connected component of $G$. If $G$ is connected and $\delta(G)\geq 3$, we know from Theorem \ref{delta3} that the desired inequality holds. So we need only consider the cases $\delta(G)=1$ and $\delta(G)=2$.
We proceed by induction on $n$. If $n=1$, $F(G)=0=\lfloor\frac{n-1}{2}\rfloor$ and the result holds. Now assume the inequality holds for all graphs with fewer than $n$ vertices.
Suppose $\delta(G) = 1$ and let $v \in V$ with $\deg(v) = 1$. Let $w$ be the unique neighbor of $v$, and let $G'$ be the subgraph induced by $V' = V-\{v,w\}$. By the inductive hypothesis, $F(G') \ge \lfloor \frac{(n - 2) - 1}{2} \rfloor$. Let $S'\subset V'$ be a stalled set that achieves this bound.
\begin{figure}[H]
\caption{The graph $G$ with $\delta(G) = 1$. Case where $w$ is adjacent to $x \in V' - S'$.}
\centering
\begin{tikzpicture}
\node[unfilled, label=below:{$v$}] at (0,0) (v) {};
\node[unfilled, label=below:{$w$}] at (1,0) (w) {};
\node[filled] at (2, -1) (w_1) {};
\node[unfilled, label=right:{$x$}] at (2, 0) (w_2) {};
\node[filled] at (2, 1) (w_3) {};
\draw (v) -- (w);
\draw (w) -- (w_1);
\draw (w) -- (w_2);
\draw (w) -- (w_3);
\node[collection, minimum height = 4cm, minimum width = 3.5cm, dotted] at (3.2, 0) (W) {$G'$};
\end{tikzpicture}
\begin{tikzpicture}
\node[unfilled, label=below:{$v$}] at (0,0) (v) {};
\node[filled, label=below:{$w$}] at (1,0) (w) {};
\node[filled] at (2, -1) (w_1) {};
\node[unfilled, label=right:{$x$}] at (2, 0) (w_2) {};
\node[filled] at (2, 1) (w_3) {};
\draw (v) -- (w);
\draw (w) -- (w_1);
\draw (w) -- (w_2);
\draw (w) -- (w_3);
\node[collection, minimum height = 4cm, minimum width = 3.5cm, dotted] at (3.2, 0) (W) {$G'$};
\end{tikzpicture}
\end{figure}
Suppose $w$ is adjacent to an unfilled vertex $x\in V'$. We claim $S = S' \cup \{w\}$ is stalled in $G$. Consider $y \in S'$. In $G$, either all of $y$'s neighbors lie in $S' \cup \{w\}$, or $y$ is adjacent to at least two unfilled vertices in $G'$. Either way, $y$ cannot force any of its neighbors in $G$. The vertex $w$ cannot force either, since it is adjacent to the unfilled vertices $v$ and $x$. Thus, $S$ is stalled. Note $|S' \cup \{w\}| \ge \lfloor \frac{(n - 2) - 1}{2} \rfloor + 1 = \lfloor \frac{n - 1}{2} \rfloor$.
\begin{figure}[H]
\caption{The graph $G$ with $\delta(G) = 1$. Case where $w$ is adjacent only to vertices in $S' \cup \{v\}$.}
\centering
\begin{tikzpicture}
\node[unfilled, label=below:{$v$}] at (0,0) (v) {};
\node[unfilled, label=below:{$w$}] at (1,0) (w) {};
\node[filled] at (2, -1) (w_1) {};
\node[filled] at (2, 0) (w_2) {};
\node[filled] at (2, 1) (w_3) {};
\draw (v) -- (w);
\draw (w) -- (w_1);
\draw (w) -- (w_2);
\draw (w) -- (w_3);
\node[collection, minimum height = 4cm, minimum width = 3.5cm, dotted] at (3.2, 0) (W) {$G'$};
\end{tikzpicture}
\begin{tikzpicture}
\node[filled, label=below:{$v$}] at (0,0) (v) {};
\node[filled, label=below:{$w$}] at (1,0) (w) {};
\node[filled] at (2, -1) (w_1) {};
\node[filled] at (2, 0) (w_2) {};
\node[filled] at (2, 1) (w_3) {};
\draw (v) -- (w);
\draw (w) -- (w_1);
\draw (w) -- (w_2);
\draw (w) -- (w_3);
\node[collection, minimum height = 4cm, minimum width = 3.5cm, dotted] at (3.2, 0) (W) {$G'$};
\end{tikzpicture}
\end{figure}
If $w$ is adjacent only to filled vertices in $G'$, then $S = S' \cup \{v, w\}$ is stalled: both $v$ and $w$ are spent, and no vertex of $S'$ gains the ability to force. Since $S' \ne V'$, we have $S \ne V$. So $S$ is a failed zero forcing set with $|S| = |S'| + 2 \ge \lfloor \frac{n + 1}{2} \rfloor$.
In either case, we find a stalled set of size at least $\lfloor \frac{n - 1}{2} \rfloor$.
Now let $\delta(G) = 2$. Then there exists some vertex $v \in V$ such that $\deg(v) = 2$. Let $x,y$ be the neighbors of $v$. Let $X$ be the set of all the vertices in $V - \{v\}$ adjacent to $x$. Let $Y$ be the set of all the vertices in $V - \{v\}$ adjacent to $y$.
We construct a graph $G'$ by condensing the $x-v-y$ subgraph to a single vertex $w$. More precisely, let $G'=(V',E')$ be obtained from the subgraph of $G$ induced by $V-\{v,x,y\}$, by adding a new vertex $w$, and adding the edge $(w,z)$ for each $z\in X\cup Y$.
\begin{figure}[H]
\caption{Graph $G$ (left) with $\delta(G) = 2$ with corresponding $G'$ (right).}
\centering
\begin{tikzpicture}
\node[unfilled, label=above:{$v$}] at (0,0) (v) {};
\node[unfilled, label=above:{$x$}] at (-1,-1) (x) {};
\node[unfilled, label=above:{$y$}] at (1,-1) (y) {};
\node[] at (-1.5, -2) (x_1) {};
\node[] at (-0.5, -2) (x_2) {};
\node[] at (1.5, -2) (y_1) {};
\node[] at (0.5, -2) (y_2) {};
\draw (y) -- (v) -- (x);
\draw (x) -- (x_1);
\draw (x) -- (x_2);
\draw (y) -- (y_1);
\draw (y) -- (y_2);
\node[collection, minimum width = 2.5cm, minimum height = 0.75cm] at (-1, -2) (X) {$X$};
\node[collection, minimum width = 2.5cm, minimum height = 0.75cm] at (1, -2) (Y) {$Y$};
\end{tikzpicture}
\begin{tikzpicture}
\node[unfilled, label=above:{$w$}] at (0,0) (w) {};
\node[] at (-1.5, -1) (x_1) {};
\node[] at (-0.5, -1) (x_2) {};
\node[] at (1.5, -1) (y_1) {};
\node[] at (0.5, -1) (y_2) {};
\draw (w) -- (x_1);
\draw (w) -- (x_2);
\draw (w) -- (y_1);
\draw (w) -- (y_2);
\node[collection, minimum width = 4cm, minimum height = 0.75cm] at (0, -1) (XY) {$X \cup Y$};
\end{tikzpicture}
\end{figure}
We inductively assume $F(G') \ge \lfloor \frac{(n - 2) - 1}{2} \rfloor$. Let $S'$ be a corresponding stalled set. We consider the following cases:
\begin{enumerate}
\item
\begin{figure}[H]
\caption{Case when $w \not \in S'$ and corresponding stalled set in $G$.}
\centering
\begin{tikzpicture}
\node[unfilled, label=above:{$w$}] at (0,0) (w) {};
\node[] at (-1.5, -1) (x_1) {};
\node[] at (-0.5, -1) (x_2) {};
\node[] at (1.5, -1) (y_1) {};
\node[] at (0.5, -1) (y_2) {};
\draw (w) -- (x_1);
\draw (w) -- (x_2);
\draw (w) -- (y_1);
\draw (w) -- (y_2);
\node[collection, minimum width = 4cm, minimum height = 0.75cm] at (0, -1) (XY) {};
\end{tikzpicture}
\begin{tikzpicture}
\node[filled, label=above:{$v$}] at (0,0) (v) {};
\node[unfilled, label=above:{$x$}] at (-1,-1) (x) {};
\node[unfilled, label=above:{$y$}] at (1,-1) (y) {};
\node[] at (-1.5, -2) (x_1) {};
\node[] at (-0.5, -2) (x_2) {};
\node[] at (1.5, -2) (y_1) {};
\node[] at (0.5, -2) (y_2) {};
\draw (y) -- (v) -- (x);
\draw (x) -- (x_1);
\draw (x) -- (x_2);
\draw (y) -- (y_1);
\draw (y) -- (y_2);
\node[collection, minimum width = 2.5cm, minimum height = 0.75cm] at (-1, -2) (X) {};
\node[collection, minimum width = 2.5cm, minimum height = 0.75cm] at (1, -2) (Y) {};
\end{tikzpicture}
\end{figure}
If $w \not \in S'$, then $S = S' \cup \{v\}$ is stalled in $G$. Any $z \in S'$ that could not force in $G'$ also cannot force in $G$, since the unfilled vertices $x$ and $y$ play the role of the unfilled vertex $w$. The vertex $v$ also cannot force in $G$, since it is adjacent to the two unfilled vertices $x,y \not\in S$. Thus, $S$ is a stalled set.
\item If $w \in S'$, it is either adjacent to only filled vertices or adjacent to at least two unfilled vertices in $G'$.
\begin{enumerate}
\item
\begin{figure}[H]
\caption{Case when $w$ is adjacent to filled vertices in $G'$ and corresponding stalled set in $G$.}
\centering
\begin{tikzpicture}
\node[filled, label=above:{$w$}] at (0,0) (w) {};
\node[filled] at (-1.5, -1) (x_1) {};
\node[filled] at (-0.5, -1) (x_2) {};
\node[filled] at (1.5, -1) (y_1) {};
\node[filled] at (0.5, -1) (y_2) {};
\draw (w) -- (x_1);
\draw (w) -- (x_2);
\draw (w) -- (y_1);
\draw (w) -- (y_2);
\node[collection, minimum width = 4cm, minimum height = 0.75cm] at (0, -1) (XY) {};
\end{tikzpicture}
\begin{tikzpicture}
\node[filled, label=above:{$v$}] at (0,0) (v) {};
\node[filled, label=above:{$x$}] at (-1,-1) (x) {};
\node[filled, label=above:{$y$}] at (1,-1) (y) {};
\node[filled] at (-1.5, -2) (x_1) {};
\node[filled] at (-0.5, -2) (x_2) {};
\node[filled] at (1.5, -2) (y_1) {};
\node[filled] at (0.5, -2) (y_2) {};
\draw (y) -- (v) -- (x);
\draw (x) -- (x_1);
\draw (x) -- (x_2);
\draw (y) -- (y_1);
\draw (y) -- (y_2);
\node[collection, minimum width = 2.5cm, minimum height = 0.75cm] at (-1, -2) (X) {};
\node[collection, minimum width = 2.5cm, minimum height = 0.75cm] at (1, -2) (Y) {};
\end{tikzpicture}
\end{figure}
If $w$ is adjacent only to filled vertices in $G'$, then $S = (S' - \{w\}) \cup \{v,x,y\}$ is a stalled set in $G$. Note that $v$, $x$, and $y$ are spent, since $X \cup Y \subseteq S'$. If a vertex was spent in $S'$, it must also be spent in $S$, since both $x$ and $y$ get filled. If a vertex $z$ was adjacent to at least two unfilled vertices in $G'$, it is also adjacent to at least two unfilled vertices in $G$, since $z$'s unfilled neighbors are distinct from $w$ and remain unfilled. Thus, $S$ is stalled.
\item
If $w$ is adjacent to at least two vertices not in $S'$, we will fill $x$ and $y$ in $G$. We first observe that every vertex in $S' - \{w\}$ is spent or adjacent to at least two unfilled vertices in $G$. If a vertex $z \in S'$ was adjacent to $w$, then $z$ is adjacent to $x$ or $y$ (or both) in $G$; since we fill $x$ and $y$ to match $w$ being filled, $z$ still cannot force in $G$. Since $w$ is adjacent to at least two unfilled vertices in $G'$, at least two vertices in $X \cup Y$ are not in $S'$.
\begin{figure}[H]
\caption{Case when $w$ is adjacent to exactly one unfilled vertex in $X$ and one in $Y$. Corresponding stalled set in $G$.}
\label{fig:oneXoneY}
\centering
\begin{tikzpicture}
\node[filled, label=above:{$w$}] at (0,0) (w) {};
\node[filled] at (-1.5, -1) (x_1) {};
\node[unfilled] at (-0.5, -1) (x_2) {};
\node[filled] at (1.5, -1) (y_1) {};
\node[unfilled] at (0.5, -1) (y_2) {};
\draw (w) -- (x_1);
\draw (w) -- (x_2);
\draw (w) -- (y_1);
\draw (w) -- (y_2);
\node[collection, minimum width = 4cm, minimum height = 0.75cm] at (0, -1) (XY) {};
\end{tikzpicture}
\begin{tikzpicture}
\node[unfilled, label=above:{$v$}] at (0,0) (v) {};
\node[filled, label=above:{$x$}] at (-1,-1) (x) {};
\node[filled, label=above:{$y$}] at (1,-1) (y) {};
\node[filled] at (-1.5, -2) (x_1) {};
\node[unfilled] at (-0.5, -2) (x_2) {};
\node[filled] at (1.5, -2) (y_1) {};
\node[unfilled] at (0.5, -2) (y_2) {};
\draw (y) -- (v) -- (x);
\draw (x) -- (x_1);
\draw (x) -- (x_2);
\draw (y) -- (y_1);
\draw (y) -- (y_2);
\node[collection, minimum width = 2.5cm, minimum height = 0.75cm] at (-1, -2) (X) {};
\node[collection, minimum width = 2.5cm, minimum height = 0.75cm] at (1, -2) (Y) {};
\end{tikzpicture}
\end{figure}
\begin{enumerate}
\item
If there is at least one vertex in $X - S'$ and at least one vertex in $Y - S'$, then both $x$ and $y$ are adjacent to at least two unfilled vertices in $G$: each is adjacent to some unfilled vertex in $X \cup Y$ as well as to the unfilled vertex $v$. This means $S = (S' - \{w\}) \cup \{x, y\}$ is a stalled set in $G$ (see Figure \ref{fig:oneXoneY}).
\item
\begin{figure}[H]
\caption{Case when $w$ is adjacent to at least two unfilled vertices in $X$ or $Y$. Corresponding stalled set in $G$.}
\centering
\begin{tikzpicture}
\node[filled, label=above:{$w$}] at (0,0) (w) {};
\node[filled] at (-1.5, -1) (x_1) {};
\node[filled] at (-0.5, -1) (x_2) {};
\node[unfilled] at (1.5, -1) (y_1) {};
\node[unfilled] at (0.5, -1) (y_2) {};
\draw (w) -- (x_1);
\draw (w) -- (x_2);
\draw (w) -- (y_1);
\draw (w) -- (y_2);
\node[collection, minimum width = 4cm, minimum height = 0.75cm] at (0, -1) (XY) {};
\end{tikzpicture}
\begin{tikzpicture}
\node[filled, label=above:{$v$}] at (0,0) (v) {};
\node[filled, label=above:{$x$}] at (-1,-1) (x) {};
\node[filled, label=above:{$y$}] at (1,-1) (y) {};
\node[filled] at (-1.5, -2) (x_1) {};
\node[filled] at (-0.5, -2) (x_2) {};
\node[unfilled] at (1.5, -2) (y_1) {};
\node[unfilled] at (0.5, -2) (y_2) {};
\draw (y) -- (v) -- (x);
\draw (x) -- (x_1);
\draw (x) -- (x_2);
\draw (y) -- (y_1);
\draw (y) -- (y_2);
\node[collection, minimum width = 2.5cm, minimum height = 0.75cm] at (-1, -2) (X) {};
\node[collection, minimum width = 2.5cm, minimum height = 0.75cm] at (1, -2) (Y) {};
\end{tikzpicture}
\end{figure}
If $X - S' = \emptyset$ or $Y - S' = \emptyset$, we claim $S = (S' - \{w\}) \cup \{v, x, y\}$ is stalled in $G$. Without loss of generality, assume $X - S' = \emptyset$ and $|Y - S'| \ge 2$. Note that $x$'s neighbors in $G$ are $X \cup \{v\}$. Since $v \in S$ and $X \subset S$, all of $x$'s neighbors are in $S$. Also, since $x, y \in S$, all of $v$'s neighbors are in $S$. Finally, $y$ has at least two neighbors in $V - S$, since $|Y - S'| \ge 2$. Thus, none of $x$, $y$, or $v$ can force, making $S$ a stalled set.
\end{enumerate}
\end{enumerate}
\end{enumerate}
In any case, we see that we can fill at least one more vertex in $G$ than we could in $G'$. That is, $|S| \ge |S'| + 1$ and thus $|S| \ge \lfloor \frac{n-1}{2} \rfloor$.
\end{proof}
\section{Conclusions and Future Work}
We now know there are finitely many graphs with $F(G) = k$ for any given $k$, and we can enumerate them by checking all graphs with at most $2k + 2$ vertices. For $k\leq 4$ we can do this fairly quickly by computing $F(G)$ for all graphs $G$ with no more than ten vertices. For $k > 4$ the number of graphs one must check gets very large.
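The enumeration just described is easy to reproduce for small graphs. Below is a minimal brute-force sketch (our own illustrative code, not the authors'; the graph encodings and function names are ours) that computes $F(G)$ by simulating the color-change rule on every subset of vertices.

```python
# Brute-force F(G): the largest vertex subset that fails to force all of V.
# Illustrative sketch only; adjacency lists and helper names are our own.
from itertools import combinations

def closure(adj, filled):
    """Repeatedly apply the color-change rule: a filled vertex with exactly
    one unfilled neighbor forces (fills) that neighbor."""
    filled = set(filled)
    changed = True
    while changed:
        changed = False
        for v in list(filled):
            unfilled = [u for u in adj[v] if u not in filled]
            if len(unfilled) == 1:
                filled.add(unfilled[0])
                changed = True
    return filled

def F(adj):
    """Failed zero forcing number: max |S| over subsets S that do not force V."""
    n = len(adj)
    return max(len(S) for k in range(n)
               for S in combinations(range(n), k)
               if len(closure(adj, S)) < n)

P3 = {0: [1], 1: [0, 2], 2: [1]}               # path on 3 vertices
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}    # path on 4 vertices
K3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}         # triangle
print(F(P3), F(P4), F(K3))   # 1 1 1
```

For $n\le 10$ this exhaustive check is feasible; beyond that the $2^n$ subsets per graph make the search expensive, as noted above.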
We let $F_k$ be the set of all connected graphs $G$ such that $F(G) = k$, and $F_k(n)$ be the number of graphs in $F_k$ on $n$ vertices. Using an exhaustive search over all connected graphs on up to 10 vertices we get:
\begin{figure}[H]
\caption{The number $F_k(n)$ of connected graphs $G$ on $n$ vertices with $F(G) = k$.}
\centering
\begin{tabular}{c|cccccccc}
$n =$ & 3& 4& 5& 6& 7& 8& 9& 10 \\
\hline
$F_1(n)$ & 2& 1& & & & & & \\
$F_2(n)$ & & 5& 5& 2& & & & \\
$F_3(n)$ & & &16&29&16& 1& & \\
$F_4(n)$ & & & &81&277&268&14& 1\\
\end{tabular}
\end{figure}
The rows of the table correspond to graphs in $F_k$ while the columns correspond to graphs with $n$ vertices. For example, $F_1$ contains three graphs: two with three vertices and one with four vertices. We can sum the rows to get $|F_2| = 12$, $|F_3| = 62$, and $|F_4| = 641$. This confirms the result of \cite{gomez}, where it was shown that there are $15$ graphs with failed zero forcing number two, twelve of which are connected.
We can apply our lower bound to another problem posed in \cite{fetcie}: the classification of graphs that satisfy $Z(G) = F(G)$. For small $k$ we can enumerate all graphs where $F(G) = k$ and see which of these satisfy $Z(G) = k$. This gives us an exhaustive list of the graphs $G$ for which $F(G) = Z(G) = k$. We define $E_k$ to be the set of all connected graphs $G$ such that $F(G) = Z(G) = k$, and $E_k(n)$ to be the number of graphs on $n$ vertices in $E_k$. We have computed $E_k(n)$ for $1 \le k \le 4$ and have found the following:
\begin{figure}[H]
\caption{The number $E_k(n)$ of connected graphs $G$ on $n$ vertices with $F(G) = Z(G) = k$.}\label{ekn}
\centering
\begin{tabular}{c|cccccccc}
$n = $ & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline
$E_1(n)$ & 1& 1& & & & & & \\
$E_2(n)$ & & 4& 4& 1& & & & \\
$E_3(n)$ & & & 9&10& 4& & & \\
$E_4(n)$ & & & &19&29& 2& & \\
\end{tabular}
\end{figure}
The columns of the table correspond to graphs with $n$ vertices; the sum across a row is then $|E_k|$. For example, the pair of $1$s in the first row corresponds to two graphs in $E_1$, one with three vertices and the other with four. These graphs are $P_3$ and $P_4$, the only graphs such that $F(G) = Z(G) = 1$. Furthermore, we see that $|E_1| = 2$, $|E_2| = 9$, $|E_3| = 23$, and $|E_4| = 50$. In \cite{fetcie} it was claimed that graphs where $F(G) = Z(G)$ seem to be rare. The above gives us a quantitative method to evaluate this claim, by finding the number of graphs with a fixed failed zero forcing number $k$ that have the same zero forcing number. For example, we see that among the 641 graphs with failed zero forcing number of four, exactly 50 have zero forcing number four.
Interesting questions to consider include finding bounds on $|E_k|$ and characterizing $n$ for which $E_k(n)\neq 0$. Toward the latter question, we note $k = F(G) \le n - 2$ for any connected graph $G$ and so any graph with $n$ vertices in $E_k$ must satisfy $n \ge k + 2$. This inequality gives us the lower diagonal in Figure 12 and Figure 13. However, we see an upper diagonal in Figure 13 that is stricter than the one in Figure 12. That is, given a connected graph $G$ with $n$ vertices in $E_k$, we ask whether $n \le k + 4$. We have checked this holds true for all connected graphs $G$ up to ten vertices.
| {
"timestamp": "2022-02-09T02:19:12",
"yymm": "2202",
"arxiv_id": "2202.03821",
"language": "en",
"url": "https://arxiv.org/abs/2202.03821",
"abstract": "Given a graph $G=(V,E)$ and a set of vertices marked as filled, we consider a color-change rule known as zero forcing. A set $S$ is a zero forcing set if filling $S$ and applying all possible instances of the color change rule causes all vertices in $V$ to be filled. A failed zero forcing set is a set of vertices that is not a zero forcing set. Given a graph $G$, the failed zero forcing number $F(G)$ is the maximum size of a failed zero forcing set. An open question was whether given any $k$ there is an $\\ell$ such that all graphs with at least $\\ell$ vertices must satisfy $F(G)\\geq k$. We answer this question affirmatively by proving that for a graph $G$ with $n$ vertices, $F(G)\\geq \\lfloor\\frac{n-1}{2}\\rfloor$.",
"subjects": "Combinatorics (math.CO)",
"title": "A Lower Bound on the Failed Zero Forcing Number of a Graph",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795110351222,
"lm_q2_score": 0.8128673087708699,
"lm_q1q2_score": 0.8022833789471088
} |
https://arxiv.org/abs/1612.04018 | Trigonometric Interpolation and Quadrature in Perturbed Points | The trigonometric interpolants to a periodic function $f$ in equispaced points converge if $f$ is Dini-continuous, and the associated quadrature formula, the trapezoidal rule, converges if $f$ is continuous. What if the points are perturbed? With equispaced grid spacing $h$, let each point be perturbed by an arbitrary amount $\le \alpha h$, where $\alpha\in [\kern .5pt 0,1/2)$ is a fixed constant. The Kadec 1/4 theorem of sampling theory suggests there may be trouble for $\alpha\ge 1/4$. We show that convergence of both the interpolants and the quadrature estimates is guaranteed for all $\alpha<1/2$ if $f$ is twice continuously differentiable, with the convergence rate depending on the smoothness of $f$. More precisely it is enough for $f$ to have $4\alpha$ derivatives in a certain sense, and we conjecture that $2\alpha$ derivatives is enough. Connections with the Fejér--Kalmár theorem are discussed. | \section{Introduction and summary of results}
The basic question of robustness of mathematical algorithms is,
what happens if the data are perturbed? Yet little
literature exists on the effect on interpolants, or on quadratures,
of perturbing the interpolation points.
The questions addressed in this paper arise in two almost equivalent
settings: interpolation by algebraic polynomials (e.g., in
Gauss or Chebyshev points) and periodic interpolation by
trigonometric polynomials (e.g., in equispaced points).
Although we believe essentially the same results hold in
the two settings, this paper deals with just the trigonometric case.
Let $f$ be a real or complex function on $[-\pi,\pi)$, which we take
to be $2\pi$-periodic in the sense that any assumptions of
continuity or smoothness made for $f$ apply periodically at $x=-\pi$ as
well as at interior points.
For each $N\ge 0$, set $K = 2N+1$, and consider the
centered grid of $K$ equispaced points in $[-\pi,\pi)$,
\begin{equation}
x_k^{} = kh, \quad -N \le k \le N, \quad h = {2\pi\over K}.
\end{equation}
There is a unique degree-$N$ trigonometric
interpolant through the data $\{f(x_k^{})\}$, by which we mean a function
\begin{equation}
t_N^{}(x) = \sum_{k=-N}^N c_k^{} e^{ikx}
\end{equation}
with $t_N^{}(x_k^{}) = f(x_k^{})$ for each $k$.
If $I$ denotes the integral of $f$,
\begin{equation}
I = \int_{-\pi}^\pi f(x) \,dx,
\end{equation}
the associated quadrature approximation
is the integral of $t_N^{}(x)$, which can be shown to be
equal to the result of applying the trapezoidal rule to $f$:
\begin{equation}
I_N^{} = h\!\sum_{k=-N}^N f(x_k^{}) = \int_{-\pi}^\pi t_N^{}(x) \,dx = 2\pi c_0^{} .
\label{traprule}
\end{equation}
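As a quick numerical illustration of (\ref{traprule}) and the convergence claims that follow, the sketch below (our own example; the test function $f(x)=e^{\cos x}$ is an arbitrary smooth periodic choice, not from the paper) applies the trapezoidal rule on the equispaced grid and shows that the result stops changing well before $N=50$:

```python
# Trapezoidal rule I_N = h * sum_k f(x_k) on the (2N+1)-point equispaced grid.
# For the analytic periodic test function f(x) = exp(cos x) (our choice),
# convergence is exponentially fast, so N = 10 already gives full precision.
from math import pi, cos, exp

def trap(f, N):
    K = 2 * N + 1
    h = 2 * pi / K
    return h * sum(f(k * h) for k in range(-N, N + 1))

f = lambda x: exp(cos(x))
I_coarse = trap(f, 10)    # 21 points
I_fine = trap(f, 50)      # 101 points
print(abs(I_coarse - I_fine))   # difference is at rounding level
```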
It is known that if $f$ is continuous, then
\begin{equation}
\lim_{N\to\infty} | I - I_N^{} | = 0,
\label{quadconv}
\end{equation}
and if $f$ is Dini-continuous, for which H\"older or Lipschitz continuity
are sufficient conditions, then
\begin{equation}
\lim_{N\to\infty} \| f - t_N^{} \| = 0 .
\label{approxconv}
\end{equation}
Moreover, the convergence rates are tied to the
smoothness of $f$, with exponential convergence
if $f$ is analytic. Here and throughout,
$\|\cdot\|$ is the maximum norm on $[-\pi,\pi)$.
The problem addressed in this paper is the generalization of
these results to configurations in which
the interpolation points are perturbed. For fixed
$\alpha\in ( 0,1/2)$, consider a set of points
\begin{equation}
\tilde x_k^{} = x_k^{} + s_k^{} h, \quad -N\le k \le N,
\quad |s_k|\le \alpha \, .
\label{ppoints}
\end{equation}
Note that since $\alpha < 1/2$, the $\tilde x_k^{}$ are necessarily
distinct. Let $\tilde t_N^{}(x)$ be the unique degree-$N$
trigonometric interpolant to $\{f(\tilde x_k^{})\}$, and let
$\tilde I_N^{} = \int \tilde t_N^{}(x) \kern .7pt dx$ be the
corresponding quadrature approximation. As in (\ref{traprule}),
this will be a linear combination of the function values, although
no longer with equal weights in general.
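To see these unequal weights concretely, one can solve the interpolation system $\sum_{j=-N}^N c_j e^{ij\tilde x_k} = f(\tilde x_k)$ and read off $\tilde I_N^{} = 2\pi c_0^{}$ as a linear combination $\sum_k w_k f(\tilde x_k)$. The sketch below (our own, with a hand-rolled linear solver and an arbitrary choice of perturbations $s_k$) computes the weights $w_k$ for one perturbed grid and checks that they sum to $2\pi$, i.e.\ that constants are integrated exactly:

```python
# Quadrature weights on a perturbed grid, obtained from the interpolation
# matrix V[k][j] = exp(i j x_k~): the weights satisfy V^T w = 2*pi*e_0.
# Illustrative sketch with our own tiny Gaussian-elimination solver;
# the perturbation pattern s_k is an arbitrary choice with |s_k| <= alpha.
from math import pi, sin
from cmath import exp as cexp

def solve(A, b):
    """Gaussian elimination with partial pivoting (complex entries)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            m = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= m * M[col][c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

N, alpha = 8, 0.4
h = 2 * pi / (2 * N + 1)
xt = [k * h + alpha * h * sin(3.7 * k) for k in range(-N, N + 1)]
VT = [[cexp(1j * j * x) for x in xt] for j in range(-N, N + 1)]
rhs = [2 * pi if j == 0 else 0 for j in range(-N, N + 1)]
w = solve(VT, rhs)
print(abs(sum(w) - 2 * pi))          # exactness for constants: ~0
print(sum(abs(wk) for wk in w))      # sum of |w_k| stays modest
```

The printed value of $\sum_k |w_k|$ stays modest for this grid, consistent with the experiments on quadrature weights discussed later in this section.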
Let $\sigma>0$ be any positive real number, and write $\sigma
= \nu + \gamma$, where $\nu$ is a nonnegative integer and $\gamma\in (\kern .5pt 0,1]$. We say
that $f$ {\em has $\sigma$ derivatives} if $f$ is $\nu$
times continuously differentiable and, moreover, $f^{(\nu)}$
is H\"older continuous with exponent $\gamma$. Note that
if $\sigma$ is an integer, then for $f$ to ``have $\sigma$
derivatives'' means that $f$ is $\sigma-1$ times continuously
differentiable and $f^{(\sigma-1)}$ is Lipschitz continuous.
We will prove the following main theorem, whose central estimate
is the bound on $\|f-\tilde t_N^{}\|$ in (\ref{thmrate2}).
The estimates (\ref{thmconv})--(\ref{thmrate2}) are new, whereas
(\ref{thmrate3}) follows from the work of Kis~\cite{kis}, as
discussed in Section~\ref{confluent}. Numerical illustrations
of these bounds can be found in~\cite{austin}.
\medskip
\begin{theorem}
\label{thm1}
For any $\alpha\in (\kern .5pt 0,1/2)$,
if $f$ is twice continuously differentiable, then
\begin{equation}
\lim_{N\to\infty} | I - \tilde I_N^{} | =
\lim_{N\to\infty} \| f - \tilde t_N^{} \| = 0.
\label{thmconv}
\end{equation}
More precisely, if
$f$ has $\sigma$ derivatives for some $\sigma > 4\alpha$, then
\begin{equation}
| I - \tilde I_N^{} | ,
\| f - \tilde t_N^{} \| = O(N^{4\alpha-\sigma}).
\label{thmrate2}
\end{equation}
If $f$ can be analytically continued
to a $2\pi$-periodic
function for $-a < {\rm Im}\kern 2pt x < a$ for some $a>0$, then
for any\/ $\hat a < a$,
\begin{equation}
| I - \tilde I_N^{} | ,
\| f - \tilde t_N^{} \| = O(e^{-\hat aN}).
\label{thmrate3}
\end{equation}
\end{theorem}
\medskip
Our proofs are based on combining standard estimates of
approximation theory, the Jackson theorems, with a new bound
on the Lebesgue constants associated with perturbed grids,
Theorem~\ref{thmbound}. Our bounds are close to sharp,
but not quite. Based on extensive numerical experiments
presented in Section 3.3.2 of~\cite{austin}, we conjecture that
$4\alpha$ can be improved to $2\alpha$ in (\ref{thmrate2}) and
(\ref{bigestimate}); for (\ref{bigestimate}) the result would
probably then be sharp, but for (\ref{thmrate2}) a slight further
improvement may still be possible. For the quadrature problem
in particular, further experiments presented in Section 3.5.2
of~\cite{austin} lead us to conjecture that $\tilde I_N^{}
\to I$ as $N\to\infty$ for all continuous functions $f$ for
all $\alpha<1/2$. This conjecture is based on a 1933 result of
P\'olya~\cite{polya}, who showed that such convergence
is ensured if and only if the sums of the absolute values of the
quadrature weights are bounded as $N\to\infty$. Experiments
indicate that for all $\alpha<1/2$, these sums are indeed bounded
as required. On the other hand, $\tilde I_N^{}\to I$ cannot be
guaranteed for any $\alpha\ge 1/2$, since in that case the
interpolation points may come together, making the quadrature
weights unbounded.
Theorems~\ref{thm1} and~\ref{thmbound} suggest that from the
point of view of approximation and quadrature, $\alpha = 1/4$
is not a special value. In Section~\ref{sampling} we
comment on the significance of the appearance of this number
in the Kadec 1/4 theorem and more generally on the relationship
between approximation theory and sampling theory, two subjects
that address closely related questions and yet have little
overlap of literatures or experts.
All the estimates reported here were worked out by the first
author and presented in his D.\ Phil.\ thesis~\cite{austin}.
This work was motivated by work of the second author with
Weideman in the review article ``The exponentially convergent
trapezoidal rule''~\cite{tw}. It is well known that on an
equispaced periodic grid, the trapezoidal rule is exponentially
convergent for periodic analytic integrands~\cite{davis,tw}.
With perturbed points, it seemed to us that exponential
convergence should still be expected, and we were surprised
to find that there seemed to be no literature on this subject.
A preliminary discussion was given in~\cite[Sec.~9]{tw}.
Section~\ref{lebesguesec} reduces Theorem~\ref{thm1} to a
bound on the Lebesgue constant, Theorem~\ref{thmbound}.
Sections~\ref{confluent} and \ref{sampling} are devoted
to comments on problems with $\alpha\ge 1/2$ and on
the link with sampling theory and Kadec's theorem,
respectively. Section~\ref{bound} outlines the proof of
Theorem~\ref{thmbound}.
\section{\label{lebesguesec}Reduction to a Lebesgue constant estimate}
A fundamental tool of approximation theory is the Lebesgue
constant: for any linear projection $L:f\mapsto t$, the
Lebesgue constant is the operator norm $\Lambda = \|L\|$.
For our problem the operator is the map $\tilde L_N^{}$ from a
function $f$ to its trigonometric interpolant $\tilde t_N^{}$
through the values $\{f(\tilde{x}_k^{})\}$, and the norm on $\tilde L_N^{}$
is the operator norm induced by $\|\cdot\|$, the $\infty$-norm
on $[-\pi,\pi)$. We denote the Lebesgue constant by $\tilde
\Lambda_N$.
Lebesgue constants are linked to quality of approximations
by the following well-known bound. If
$\tilde \Lambda_N^{}$ is the Lebesgue constant
associated with the projection $\tilde L_N^{}:
f \mapsto \tilde t_N^{}$ and $t_N^*$ is the best approximation to
$f$ of degree $N$, then
\begin{equation}
\|f-\tilde t_N^{}\| \le (1+\tilde \Lambda_N^{}) \| f-t_N^*\| .
\label{basicbound}
\end{equation}
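This standard inequality follows in one line from the fact that $\tilde L_N^{}$ is a projection, so that $\tilde L_N^{} t_N^* = t_N^*$ (a sketch of the familiar argument):

```latex
\|f-\tilde t_N^{}\| = \|(f-t_N^*) - \tilde L_N^{}(f-t_N^*)\|
\le \|f-t_N^*\| + \tilde\Lambda_N^{}\,\|f-t_N^*\|
= (1+\tilde\Lambda_N^{})\,\|f-t_N^*\|.
```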
It follows that if $\tilde \Lambda_N^{}$ is small, then $\tilde
t_N^{}$ is a near-optimal approximation to $f$. If $f$ has
a certain smoothness property for which the optimal approximations
$t_N^*$ are known to converge at a certain rate, this
implies that the interpolants $\tilde t_N^{}$ converge at nearly
the same rate.
Applying (\ref{basicbound}), we prove Theorem~\ref{thm1}
by combining a bound on the Lebesgue constants $\tilde
\Lambda_N^{}$ with bounds on the best approximation errors $\|f-
t_N^*\|$. Our estimates of best approximations are standard
Jackson theorems, going back to Dunham Jackson in 1911 and 1912.
The nonstandard part of the argument, which from a technical
point of view is the main contribution of this paper, is the
following estimate of the Lebesgue constant, the proof of which
is outlined in Section~\ref{bound}.
\medskip
\begin{theorem}
\label{thmbound}
There is a universal constant $C$ such that
\begin{equation}
\tilde \Lambda_N^{} \le {C(N^{4\alpha}-1)\over \alpha(1-2\alpha)}
\label{bigestimate}
\end{equation}
for all\/ $\alpha\in [\kern .5pt 0,1/2)$ and $N> 0$.
For $\alpha=0$ this bound is to be interpreted by its
limiting value given, for example, by L'H\^opital's rule,
$\tilde\Lambda_N^{} \le 4\kern1pt C\log N$.
\end{theorem}
\medskip
The $\log N$ bound for an equispaced grid with $\alpha = 0$
is standard, so the substantive result here concerns $\alpha\in
(0,1/2)$. This is what we can prove, but as mentioned in the
previous section, based on numerical experiments, we conjecture that
(\ref{bigestimate}) actually holds with $N^{4\alpha}$ replaced
by $N^{2\alpha}$.
Given Theorem~\ref{thmbound}, we prove Theorem~\ref{thm1} as follows.
\medskip
{\em Proof of Theorem\/~$\ref{thm1}$, given Theorem\/~$\ref{thmbound}$.}
The Jackson theorems of approximation theory relate the smoothness
of a function $f$ to the accuracy of its best
approximations~\cite{jackson,meinardus}.
According to one of these theorems, given for example as
Theorem~41 of~\cite{meinardus},
if $f$ is a periodic function
on $[-\pi,\pi)$ that has $\sigma$ derivatives for
some $\sigma>0$ in the sense defined in Section 1, then
\begin{equation}
\| f-t_N^*\| = O(N^{-\sigma}).
\end{equation}
Combining this with Theorem~\ref{thmbound}
gives (\ref{thmrate2}). The bound (\ref{thmrate3})
follows similarly from the estimate
\begin{equation}
\| f-t_N^*\| = O(e^{-\hat aN})
\end{equation}
for any $2\pi$-periodic function $f$ analytic and bounded in
the strip of half-width $\hat a>0$ about the real axis; see, for
example, eq.~(7.17) of~\cite{tw}. \endproof
\section{\label{confluent}\boldmath $\alpha \ge 1/2$, confluent
points, and analytic functions} Our framework (\ref{ppoints}) for
perturbed points can be generalized to values $\alpha \ge 1/2$.
For $\alpha\in [1/2, 1)$, two grid points may coalesce, so one
must assume that $f'$ exists in order to ensure that there are
appropriate data to define an interpolation problem (in this case,
trigonometric Hermite interpolation). Similarly for $\alpha\in
[1,3/2)$, three points may coalesce, so one must assume $f''$
exists; and so on analogously for any finite value of $\alpha$.
(We wrap grid points around as necessary if the perturbation moves
them outside of $[-\pi, \pi)$; equivalently, one could extend $f$
periodically.)
Looking at the statement of Theorem~\ref{thm1} but considering
values $\alpha \ge 1/2$, one notes that the assumption of
$\sigma>4\alpha$ derivatives is enough to ensure that the necessary
derivatives exist for the interpolation problem to make sense; the
conjectured sharper condition of $\sigma> 2\alpha$ derivatives is
also (\kern .7pt just) enough. This coincidence seems suggestive,
and we consider it possible that Theorem~\ref{thm1} and its
conjectured improvement with $2\alpha$ may in fact be valid for
arbitrary $\alpha>0$, not just $\alpha \in (\kern .5pt 0,1/2)$.
We have not attempted to prove this, however. As a practical
matter, trouble can be expected in floating-point arithmetic as
sample points coalesce, so we regard the case $\alpha \ge 1/2$
as somewhat theoretical.
Going further, what if we allow arbitrary perturbations of the
interpolation points, so that each $\tilde x_k^{}$ may lie
anywhere in $[-\pi,\pi)$? Doing so makes sense mathematically
if $f$ is infinitely differentiable; so in particular, it
makes sense if $f$ is analytic, which implies that it can be
analytically continued to a $2\pi$-periodic function on the
whole real line. We are now in an area of approximation theory
(and potential theory) going back to the work of Runge~\cite{runge}
and Fej\'er~\cite{fejer}, in which a major contributor was Joseph
Walsh~\cite{gaier,walsh}. For arbitrary $x_k^{}$, convergence
will occur if $f$ is analytic in a sufficiently wide strip around
the real axis in the complex $x$-plane. Repeated points
are permitted, with interpolation at such points interpreted
in the Hermite sense involving values of both the function
and its derivatives. If the points $x_k$
are {\em uniformly distributed\/} in the sense that the fraction
of points falling in any interval $[a,b)\subseteq [-\pi,\pi)$
converges to $(b-a)/2\pi$ as $N\to\infty$, then it is enough for
$f$ to be analytic in {\em any\/} strip around the real axis.
Such results were first developed for polynomial approximation
on the unit circle of functions analytic in the unit disk, the
so-called Fej\'er--Kalm\'ar theorem~\cite{fejer,kalmar,walsh}.
The extension to functions analytic in an annulus was considered by
Hlawka~\cite{hlawka}, and the equivalent problem of trigonometric
interpolation of $2\pi$-periodic functions on $[-\pi,\pi)$ was
considered by Kis~\cite{kis}. All these results may fail in
practice because of rounding errors on the computer, however.
For example, Figure~3.7 of~\cite{austin} shows an example with
uniformly distributed random interpolation points in $[-\pi,\pi)$,
with rounding errors beginning to take over at $N\approx 20$.
For the case of interpolation by algebraic polynomials, this
kind of effect is familiar in the context of the Runge phenomenon,
where polynomial interpolants in equispaced points in $[-1,1]$
will diverge on a computer as $N\to\infty$ even for a function
like $f(x) = \exp(x)$ for which in principle they should converge.
\section{\label{sampling}Sampling theory and the Kadec \boldmath$1/4$
theorem} The field of approximation theory
goes back to Borel, de la Vall\'ee Poussin, Fej\'er, Jackson,
Lebesgue, and others at the beginning of the 20th century, and
its central question might be characterized like this:
\medskip
\begin{quotation}
\noindent
\em Given a function $f$ of a certain regularity, how
fast do its approximations of a given kind converge?
\end{quotation}
\medskip
\noindent
For example, if $f$ is periodic and analytic on $[-\pi, \pi)$, then
its equispaced trigonometric interpolants converge exponentially.
The same holds if $f$ is analytic in a strip surrounding the
whole real line and satisfies a decay condition at $\infty$,
with trigonometric interpolants generalized to interpolatory
series of sinc functions.
The field of sampling theory goes back to Gabor, Kotelnikov,
Nyquist, Paley, Shannon, J. M. and E. T. Whittaker, and Wiener a
few years later. Its central question might be characterized
like this:
\medskip
\begin{quotation}
\noindent
\em Given a function $f$ of a certain regularity, which of
its approximations of a given kind are
exactly equal to $f\kern 1pt ?$
\end{quotation}
\medskip
\noindent
For example, if $f$ is periodic and analytic on $[-\pi,\pi)$,
then its equispaced trigonometric interpolant is exact if $f$
is band-limited (has a Fourier series of compact support) and
the grid includes at least two points per wavelength for each
wave number present in the series. The same holds if $f$ is a
band-limited analytic function on the whole real line, with the
Fourier series generalized to the Fourier transform, and again
with trigonometric interpolation generalized to sinc interpolation.
Obviously we have worded these characterizations to highlight
the similarities between the two fields, which in fact differ in
significant ways. Still, it is remarkable how little interaction
there has been between the two. What makes this relevant to the
present paper is that our theorems and orientation are very much
those of approximation theory, whereas most of the scientific
interest in perturbed grids in the past has been from the side
of sampling theory, and the Kadec 1/4 theorem is the best-known result
in this general area.
Kadec's theorem is an answer to a question of sampling theory that
originates with Paley and Wiener~\cite{pw}. The exponentials
$\{\exp(i\lambda_k^{} x)\}$, $-\infty < k < \infty$, form an
orthogonal basis for $L^2[-\pi,\pi]$ if $\lambda_k^{} = k$
for each $k$. Thus, the sampling theorist would say that one can
recover a function $f\in L^2[-\pi,\pi]$ from its inner products
with the functions $\{\exp(i\lambda_k^{} x)\}$. Now suppose these
wave numbers are perturbed so that $|\tilde \lambda_k^{} - k | \le
\alpha$ for some fixed $\alpha$. Can one still recover the signal?
Specifically, does the family $\{\exp(i\tilde \lambda_k^{} x)\}$
form a {\em Riesz basis} for $L^2[-\pi,\pi]$, that is, a basis
that is related to the original one by a bounded transformation
with a bounded inverse? Paley and Wiener showed that this is
always the case for $\alpha < 1/\pi^2$, and Levinson showed it
is not always the case for $\alpha \ge 1/4$. Kadec's theorem
shows that Levinson's construction was sharp: for any $\alpha <
1/4$, the family $\{\exp(i\tilde \lambda_k^{} x)\}$ forms a Riesz
basis~\cite{aldgro,christensen,kadec,young}.
Note that the standard setting of Kadec's theorem involves
perturbation of wave numbers from equispaced values, in contrast
to the results of this paper, which involve perturbation of
interpolation points from equispaced values. In view of
the Fourier transform, however, these settings are related, so one might
imagine, based on Kadec's theorem, that $\alpha = 1/4$ might be a
critical value for trigonometric interpolation in perturbed points.
Instead, we have found that the critical value is $\alpha = 1/2$.
We explain this apparent discrepancy as follows. The Paley--Wiener
theory and Kadec's theorem are results concerning the $L^2$
norm, which in many applications would represent energy. In our
application of trigonometric interpolation, something related to
the $L^2$ norm does indeed happen at $\alpha = 1/4$. Suppose we
look at a {\em $2$-norm Lebesgue constant} $\tilde\Lambda_N^{(2)}$
for the perturbed grid interpolation problem, defined as the
norm of the operator $\tilde L_N^{}: f\mapsto \tilde t_N^{}$ induced by the
discrete $\ell^2$-norm on the data $\{f(\tilde x_k^{})\}$ and
on the Fourier coefficients of the interpolant $\tilde t_N^{}$.
Numerical experiments reported in Section 3.4.3 of~\cite{austin}
indicate that whereas the usual $\infty$-norm Lebesgue constant
is unbounded for all $\alpha$, $\tilde\Lambda_N^{(2)}$ is bounded
as $N\to\infty$ for any $\alpha < 1/4$ but not always bounded for
$\alpha \ge 1/4$. (Indeed Kadec's theorem may imply this result.)
For $\alpha \in (1/4,1/2)$, we conjecture
$\tilde \Lambda_N^{(2)} = O(N^{4\alpha - 1})$.
Thus a sampling theorist might say that for $\alpha \in
[1/4,1/2)$, trigonometric interpolation is {\em unstable} in
the sense that it may amplify signals unboundedly in $\ell^2$
as $N\to\infty$. On the other hand the approximation theorist
might note that the instability is very weak, involving not
even one power of $N$. Assuming that the conjectured sharpening of
the estimate (\ref{thmrate2}) of Theorem~\ref{thm1} is valid,
one derivative of smoothness of $f$ is enough to suppress the
instability, ensuring $\|f-\tilde t_N^{}\|\to 0$ as $N\to\infty$
for all $\alpha < 1/2$. The numerical analyst might add that on a
computer, amplification of rounding errors by $o(N)$ is unlikely
to cause trouble. For $\alpha \ge 1/2$, in strong contrast,
the amplification is unbounded in any norm even for finite $N$,
and trouble is definitely to be expected.
\section{\label{bound}Proof of the Lebesgue constant estimate,
Theorem~\ref{thmbound}} A full proof of Theorem~\ref{thmbound},
filling 20 pages, is the subject of Chapter~4 of the first author's
D.\ Phil.\ thesis~\cite{austin}. Many detailed trigonometric estimates
are involved, and we do not know how to shorten it significantly.
For readers interested in full details, that chapter has been made
available in the Supplementary Materials attached to this paper.
Here, we outline the argument.
To prove the bound (\ref{bigestimate}) on the Lebesgue constant,
\begin{equation}
\tilde \Lambda_N^{} \le {C(N^{4\alpha}-1)\over \alpha(1-2\alpha)},
\label{bigestimateagain}
\end{equation}
we begin by noting that $\tilde \Lambda_N^{}$ is given by
\begin{equation}
\tilde \Lambda_N^{}= \max_{x\in[-\pi,\pi]} \tilde L(x),
\label{maxdef}
\end{equation}
where $\tilde L$ is the Lebesgue function
\begin{equation}
\tilde L(x) = \sum_{k=-N}^N |\tilde \ell_k^{}(x) |,
\label{basicsum}
\end{equation}
where $\tilde{\ell}_k^{}$ is the $k$th
Lagrange cardinal trigonometric polynomial for the perturbed grid,
\begin{equation}
\tilde \ell_k^{}(x) = \prod_{j\ne k}\left.
\sin\Big({x-\tilde x_j^{}\over 2}\Big)
\right/ \sin\Big({\tilde x_k^{}-\tilde x_j^{}\over 2}\Big).
\end{equation}
The function $\tilde\ell_k^{}(x)$ takes the values $1$ at $\tilde
x_k^{}$ and $0$ at the other grid points $\tilde x_j^{}$, and the
sum (\ref{basicsum}) adds up contributions at a point $x$ from
all the $2N+1$ cardinal functions associated with grid points to
its left and right.
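These quantities can be evaluated directly for any particular perturbed grid. The sketch below (our own illustration; the grid size, perturbation pattern, and sampling density are arbitrary choices) estimates $\tilde\Lambda_N^{}$ by sampling the Lebesgue function (\ref{basicsum}):

```python
# Estimate the Lebesgue constant of a perturbed grid by sampling the
# Lebesgue function L(x) = sum_k |ell_k(x)| built from the cardinal products.
# Our own illustration; the perturbation pattern is an arbitrary choice.
from math import pi, sin

def lebesgue_constant(xt, m=1000):
    K = len(xt)
    def ell(k, x):
        num = den = 1.0
        for j in range(K):
            if j != k:
                num *= sin((x - xt[j]) / 2)
                den *= sin((xt[k] - xt[j]) / 2)
        return num / den
    grid = [-pi + 2 * pi * i / m for i in range(m)]
    return max(sum(abs(ell(k, x)) for k in range(K)) for x in grid)

N, alpha = 8, 0.4
h = 2 * pi / (2 * N + 1)
xt = [k * h + alpha * h * sin(5.3 * k) for k in range(-N, N + 1)]  # |s_k| <= alpha
lam = lebesgue_constant(xt)
print(lam)   # modest for alpha < 1/2
```

For moderate $N$, even with $\alpha$ fairly close to $1/2$, the computed values remain modest, consistent with the bound (\ref{bigestimateagain}).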
The argument begins by showing that on the interval $[x_{-(k + 1)}^*,
x_{-k}^*]$, $\tilde{\ell}_0$ satisfies the bound
\begin{equation}
|\tilde{\ell}_0(x)| \leq M_k, \quad x\in [x_{-(k + 1)}^*, x_{-k}^*], \quad 0 \leq k \leq N,
\label{Mbounds}
\end{equation}
for certain numbers $M_0, \ldots, M_N$, independently of the choice of
perturbed points $\{\tilde{x}_k\}$. The points $x_k^*$ are defined by $x_0^* =
0$, $x_{-(N + 1)}^* = -\pi$, and
\[
x_k^* = 2\arctan\left(\frac{\cos(kh) - \cos(\alpha h) + \tan(\tilde{x}_0/2)\sin(kh)}{\tan(\tilde{x}_0/2)\bigl(\cos(kh) + \cos(\alpha h)\bigr) - \sin(kh)}\right), \quad -N \leq k \leq N, k \neq 0;
\]
the most important fact about them is that they satisfy the inequalities
\[
(k - \alpha)h \leq x_k^* \leq (k + \alpha)h, \quad -N \leq k \leq N.
\]
Thus, (\ref{Mbounds}) bounds $\tilde{\ell}_0$ on certain subintervals of
$[-\pi, 0]$. By exploiting symmetry, these bounds yield similar bounds on
$\tilde{\ell}_0$ on similar subintervals of $[0, \pi]$ as well as bounds on the
other $2N$ contributions to $\tilde{L}$ in (\ref{basicsum}). We are eventually
led to the estimate
\[
\tilde{L}(x) \leq 9 \sum_{k = 0}^N M_k,
\]
which holds uniformly for $x \in [-\pi, \pi]$.
For sufficiently large $N$, the $M_k$ satisfy
\begin{equation}
M_k^{} \le {10 \pi\over 1-2\alpha}, \quad k = 0, 1
\label{Mineq1}
\end{equation}
and
\begin{equation}
M_k^{} \le {3 \pi (k+1)^{2\alpha}\over (1-2\alpha) (k-1)^{1-2\alpha}}, \quad 2 \le k \le N.
\label{Mineq2}
\end{equation}
The bound (\ref{bigestimateagain}) follows by an estimation of the sums of
(\ref{Mineq1}) and (\ref{Mineq2}) over all $k$. The numbers $M_k^{}$ are
defined by
\begin{equation}
M_k^{} =
\max_{x \in [-\pi, 0] \cap R_k^{}} \frac{P_k^{}(x)}{Q_k^{}},
\quad 0 \le k \le N,
\label{Mdef}
\end{equation}
with
\begin{displaymath} P_k^{}(x)
= \prod_{i=1}^N \left|\sin\left(\frac{x - (i -
\alpha)h}{2}\right)\right| \times\prod_{i=1}^k
\left|\sin\left(\frac{x + (i - \alpha)h}{2}\right)\right|
\times\prod_{i = k + 1}^N \left|\sin\left(\frac{x + (i +
\alpha)h}{2}\right)\right|
\end{displaymath}
and
\begin{displaymath}
Q_k^{} = \prod_{i = 1}^N \left|\sin\left(\frac{(2\alpha
- i)h}{2}\right)\right| \times\prod_{i = 1}^k
\left|\sin\left(\frac{ih}{2}\right)\right|
\times\prod_{i = k + 1}^N \left|\sin\left(\frac{(2\alpha +
i)h}{2}\right)\right|.
\end{displaymath}
The set $R_k^{}$ in the definition of the range of
the maximum in (\ref{Mdef}) is the interval
\begin{displaymath}
R_k^{} = [(-k - 1 - \alpha)h, (-k + \alpha)h].
\end{displaymath}
\section*{Acknowledgements}
Many friends and colleagues have helped us along the way. In
particular, we thank Stefan G\"uttel, Andrew Thompson, Alex Townsend,
and Kuan Xu. Thompson showed us how we could improve $N^{4\alpha}$ to
$N^{4\alpha}-1$ in our main estimate (\ref{bigestimate}).
| {
"timestamp": "2016-12-14T02:02:46",
"yymm": "1612",
"arxiv_id": "1612.04018",
"language": "en",
"url": "https://arxiv.org/abs/1612.04018",
"abstract": "The trigonometric interpolants to a periodic function $f$ in equispaced points converge if $f$ is Dini-continuous, and the associated quadrature formula, the trapezoidal rule, converges if $f$ is continuous. What if the points are perturbed? With equispaced grid spacing $h$, let each point be perturbed by an arbitrary amount $\\le \\alpha h$, where $\\alpha\\in [\\kern .5pt 0,1/2)$ is a fixed constant. The Kadec 1/4 theorem of sampling theory suggests there may be trouble for $\\alpha\\ge 1/4$. We show that convergence of both the interpolants and the quadrature estimates is guaranteed for all $\\alpha<1/2$ if $f$ is twice continuously differentiable, with the convergence rate depending on the smoothness of $f$. More precisely it is enough for $f$ to have $4\\alpha$ derivatives in a certain sense, and we conjecture that $2\\alpha$ derivatives is enough. Connections with the Fejér--Kalmár theorem are discussed.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Trigonometric Interpolation and Quadrature in Perturbed Points",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795072052383,
"lm_q2_score": 0.8128673110375458,
"lm_q1q2_score": 0.8022833780710842
} |
https://arxiv.org/abs/1912.00224 | Almost sharp bounds on the number of discrete chains in the plane | The following generalisation of the Erdős unit distance problem was recently suggested by Palsson, Senger and Sheffer. Given $k$ positive real numbers $\delta_1,\dots,\delta_k$, a $(k+1)$-tuple $(p_1,\dots,p_{k+1})$ in $\mathbb{R}^d$ is called a $(\delta,k)$-chain if $\|p_j-p_{j+1}\| = \delta_j$ for every $1\leq j \leq k$. What is the maximum number $C_k^d(n)$ of $(\delta,k)$-chains in a set of $n$ points in $\mathbb{R}^d$, where the maximum is taken over all $\delta$? Improving the results of Palsson, Senger and Sheffer, we essentially determine this maximum for all $k$ in the planar case. It is only for $k\equiv 1$ (mod $3$) that the answer depends on the maximum number of unit distances in a set of $n$ points. We also obtain almost sharp results for even $k$ in $3$ dimensions. | \section{Introduction}\label{sec:introduction}
Determining the maximum possible number of pairs $u_d(n)$ at distance $1$ apart in a set of $n$ points in $\mathbb{R}^d$ for $d=2$ is one of the central questions in combinatorial geometry, known as the Erdős Unit Distance problem. The question dates back to 1946, and despite much effort, the best known upper and lower bounds are still very far apart. For some constants $C,c>0$, we have
\[n^{1+c/\log\log n}\leq u_2(n)\le Cn^{4/3},\]
where the lower bound is due to Erdős \cite{erdos} and the upper bound is due to Spencer, Szemerédi and Trotter \cite{SST}. Recently, there has been great progress in a closely related
problem of determining the minimum number of distinct distances between $n$ points on the plane due to Guth and Katz \cite{GK}, but the powerful algebraic machinery they used has not yet given any improvement for the unit distance question.
As in the planar case, the best known upper and lower bounds in the $3$-dimensional case are also far apart.
For every $\varepsilon>0$ there are constants $c,C>0$ such that we have
\begin{equation}\label{eqzahl}
cn^{4/3}\log\log n\le u_3(n)\le Cn^{295/197+\varepsilon},
\end{equation}
where the lower bound is due to Erdős \cite{erdos3}, and the upper bound is due to Zahl \cite{Za2}. The latter is a recent improvement upon the upper bound $O(n^{3/2})$ by Kaplan, Matou\v sek, Safernová, and Sharir \cite{KMSS}, and Zahl \cite{Za}.
This paper can be seen as an effort to find generalisations of the Unit Distance problem that are within the reach of our current methods. In what follows, we describe the generalisation that we work with.
Palsson, Senger and Sheffer \cite{Shef} suggested the following question.
Let $\bm\delta=(\delta_1,\dots,\delta_k)$ be a fixed sequence of $k$ positive reals. A $(k+1)$-tuple $(p_1,\dots,p_{k+1})$ of distinct points in $\mathbb{R}^d$ is called a \emph{$k$-chain} if $\|p_i-p_{i+1}\|=\delta_i$ for all $i=1,\dots,k$. For every fixed $k$ determine $C^d_k(n)$, the maximum number of $k$-chains that can be spanned by a set of $n$ points in $\mathbb{R}^d$. We do not include $\bm\delta$ in the notation, since our results, with the exception of Proposition \ref{3dbound}, do not depend on $\bm\delta$ up to the order of magnitude.
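For small instances, the quantity in this definition can be explored by brute force. The following sketch (ours, not part of the paper) counts $k$-chains over ordered tuples of distinct points for a given $\bm\delta$; it is of course only feasible for tiny $n$.

```python
from itertools import permutations
import math

def count_chains(points, deltas, tol=1e-9):
    """Count ordered (k+1)-tuples of distinct points whose consecutive
    distances match deltas[0], ..., deltas[k-1]."""
    k = len(deltas)
    return sum(
        all(abs(math.dist(t[i], t[i + 1]) - deltas[i]) < tol for i in range(k))
        for t in permutations(points, k + 1))

# Three collinear points at unit spacing: the only 2-chains with
# delta = (1, 1) are (a, b, c) and its reversal (c, b, a).
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(count_chains(pts, (1.0, 1.0)))  # → 2
```

Note that chains are counted as ordered tuples, so every chain and its reversal are counted separately, matching the convention above.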
The authors of \cite{Shef} give the following lower bound on $C^2_k(n)$:
\[C^2_k(n)=\Omega\left (n^{\lfloor (k+1)/3 \rfloor+1}\right ). \]
They also provided upper bounds in terms of the maximum number of unit distances.
\begin{prop}[Palsson, Senger, and Sheffer \cite{Shef}]
\begin{equation*} C^2_k(n) =
\begin{cases}
O\left (n\cdot u_2(n)^{k/3} \right ) & \text{\rm if $k\equiv 0$ (mod $3$),}\\
O\left (u_2(n)^{(k+2)/3}\right ) & \text{\rm if $k\equiv 1$ (mod $3$),}\\
O\left (n^2\cdot u_2(n)^{(k-2)/3}\right ) & \text{\rm if $k\equiv 2$ (mod $3$).}
\end{cases}
\end{equation*}
\end{prop}
If $u_2(n) = O(n^{1+\varepsilon})$ for every ${\varepsilon}>0$, which is conjectured to hold, then the upper bounds in the proposition above almost match the lower bound given above.
However, as we have already mentioned, determining the order of magnitude of $u_2(n)$ has proved to be a very hard problem that is still far from resolution.
Thus, it is interesting to obtain ``unconditional'' bounds, that depend on the value of $u_2(n)$ as little as possible. In \cite{Shef}, the following ``unconditional'' upper bounds were proved in the planar case.
\begin{thm}[Palsson, Senger, and Sheffer \cite{Shef}]\label{PSS2} $C_2^2(n)=\Theta(n^2)$, and for every $k\geq 3$ we have
\[C^2_k(n)=O\left (n^{2k/5+1+\gamma_k}\right ), \]
where $\gamma_k\leq \frac{1}{12}$, and $\gamma_k \to \frac{4}{75}$ as $k\to \infty$.
\end{thm}
In our main result, in two-thirds of the cases we almost determine the value of $C^2_k(n)$, no matter what the value of $u_2(n)$ is, by matching the lower bounds stated above.
Further, we show that in the remaining cases determining $C^2_k(n)$ essentially reduces to determining the maximum number of unit distances.
\begin{thm}\label{main}For every integer $k\geq 1$ we have\,\footnote{In what follows $f(n)= \tilde O(g(n))$ means that there exist positive constants $c, C$ such that $ f(n)/g(n) \le C\log^{c} n$ for every sufficiently large $n.$ We write $f(n) = \tilde{\Omega}(g(n))$ if $g(n) = \tilde O(f(n))$, and $f(n) = \tilde{\Theta}(g(n))$ if $f(n) = \tilde O(g(n))$ and $g(n) = \tilde O(f(n)).$}
\[C^2_k(n)=\tilde{\Theta}\left (n^{\lfloor (k+1)/3 \rfloor+1}\right ) \textrm{ if } k\equiv 0,2 \text{ \rm (mod } 3 \text{\rm )},\]
and for any $\varepsilon> 0$ we have
\[C^2_k(n)=\Omega\left (n^{(k-1)/3}u_2(n)\right)
\textrm{ and } C^2_k(n)=O\left(n^{(k-1)/3+\varepsilon}u_2(n)\right) \textrm { if } k\equiv 1 \text{ \rm(mod $3$)}. \]
\end{thm}
Let us turn our attention to the $3$-dimensional case. The following was proved in \cite{Shef}.
\begin{thm}[Palsson, Senger, and Sheffer \cite{Shef}]\label{thmshefr3} For any integer $k\geq 2$, we have
\[C^3_k(n)=\Omega \left (n^{\lfloor k/2 \rfloor +1}\right ),\]
and
\begin{equation*}
C^3_k(n) =
\begin{cases}
O\left (n^{2k/3+1}\right ) & \text{\rm if $k\equiv 0$ (mod $3$),}\\
O\left (n^{2k/3+23/33+\varepsilon}\right ) & \text{\rm if $k\equiv 1$ (mod $3$),}\\
O\left (n^{2k/3+2/3}\right ) & \text{\rm if $k\equiv 2$ (mod $3$).}
\end{cases}
\end{equation*}
\end{thm}
We improve their upper bound and essentially settle the problem for even $k$.
\begin{thm}\label{3d}For any integer $k\geq 2$ we have
\[C^3_k(n)=\tilde O\left (n^{k/2+1}\right).\] Furthermore, for even $k$ we have \[C^3_k(n)=\tilde{\Theta}\left (n^{k/2+1}\right).\]
\end{thm}
We also improve the lower bound from Theorem~\ref{thmshefr3} for odd $k$ and $\bm\delta=(1,\dots,1)$. Let $us_3(n)$ be the maximum possible number of pairs at unit distance apart in $X\times Y$, where $X$ is a set of $n$ points in $\mathbb{R}^3$ and $Y$ is a set of $n$ points on a sphere in $\mathbb{R}^3$.
\begin{prop}\label{oddk}Let $k\geq 3$ be odd. Then for $\bm\delta=(1,\dots,1)$ we have
\[C^3_k(n)=\Omega\left (\max\left \{\frac{u_3(n)^k}{n^{k-1}},us_3(n)n^{(k-1)/2}\right \}\right ).\]
\end{prop}
By using stereographic projection we obtain that $us_3(n)$ equals the maximum number of incidences between a set of $n$ points and a set of $n$ circles (not necessarily of the same radii) in the plane. Thus we have
\[cn^{4/3}\leq us_3(n)=\tilde{O}\left ( n^{15/11} \right )\]
(for the lower bound see \cite{us1}, and for the upper bound see \cite{us2,AS,us3}).
Therefore, in general we cannot tell which of the two bounds in Proposition~\ref{oddk} is better. However, for large $k$ the second term is larger than the first due to \eqref{eqzahl}.
Finally, we note that
for $d\geq 4$ we have $C_k^d(n)=\Theta(n^{k+1})$. Indeed, we clearly have $C_k^d(n)=O(n^{k+1})$. To see that $C_k^d(n)=\Omega(n^{k+1})$, take two orthogonal circles of radius $1/\sqrt{2}$ centred at the origin and choose $n/2$ points on each of them. Then any sequence of $k+1$ points that alternate between the two circles forms a path in which all edges have unit length. The exact value of $u_d(n)$ for large $n$ and even $d\geq 4$ was determined by Brass \cite{brass} ($d=4$) and Swanepoel \cite{Sw} ($d\geq 6$), by using stability results from extremal graph theory.
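As a quick sanity check of the two-circles construction (a sketch with our own naming, not from the paper), one can sample points on orthogonal circles of radius $1/\sqrt{2}$ in $\mathbb{R}^4$ and verify that every pair with one point on each circle is at unit distance:

```python
import math

def circle_points(m, radius, plane):
    """m points on a circle of the given radius in a coordinate plane
    ('xy' -> coordinates 0,1; 'zw' -> coordinates 2,3) of R^4, centred at 0."""
    pts = []
    for j in range(m):
        a = 2 * math.pi * j / m
        p = [0.0, 0.0, 0.0, 0.0]
        if plane == 'xy':
            p[0], p[1] = radius * math.cos(a), radius * math.sin(a)
        else:
            p[2], p[3] = radius * math.cos(a), radius * math.sin(a)
        pts.append(p)
    return pts

r = 1 / math.sqrt(2)
C1, C2 = circle_points(10, r, 'xy'), circle_points(10, r, 'zw')
# Each cross pair is at distance sqrt(r^2 + r^2) = 1, so any (k+1)-tuple
# alternating between C1 and C2 is a path in which all edges have unit length.
assert all(abs(math.dist(p, q) - 1.0) < 1e-12 for p in C1 for q in C2)
print("all cross pairs at unit distance")
```

Since the two circles lie in orthogonal coordinate planes, the squared distance between any cross pair is exactly $r^2+r^2=1$, independently of the angles.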
\section{Preliminaries}
We denote by $u_d(m,n)$ the maximum number of incidences between a set of $m$ points and $n$ spheres\footnote{circles, if $d=2$} of fixed radius in $\mathbb{R}^d$. In other words, $u_d(m,n)$ is the maximum number of red-blue pairs spanning a given distance in a set of $m$ red and $n$ blue points in $\mathbb{R}^d$. By the result of Spencer, Szemerédi and Trotter \cite{SST}, we have
\begin{equation}\label{2drich}
u_2(m,n)=O\left (m^{\frac{2}{3}}n^{\frac{2}{3}}+m+n\right).
\end{equation}
For given $r$ and $\delta$ we say that a point $p$ is \emph{$r$-rich} with respect to a set $P\subseteq \mathbb{R}^d$ and to a distance $\delta$, if the sphere of radius $\delta$ around $p$ contains at least $r$ points of $P$. If $P\subseteq \mathbb{R}^2$ and $|P|=n^x$, then \eqref{2drich} implies that the number of points that are $n^{\alpha}$-rich with respect to $P$ and to a given distance $\delta$ is
\begin{equation}\label{2drichness}
O\left (n^{2x-3\alpha}+n^{x-\alpha}\right ).
\end{equation}
The bound
\begin{equation}\label{eq3d}
u_3(m,n)=O\left (m^{\frac{3}{4}}n^{\frac{3}{4}}+m+n\right)
\end{equation}
is due to Zahl \cite{Za2} and Kaplan, Matou\v sek, Safernov\'a, and Sharir \cite{KMSS}.
It implies that for $P\subseteq \mathbb{R}^3$ with $|P|=n^x$ the number of points that are $n^{\alpha}$-rich with respect to $P$ and to a given distance $\delta$ is
\begin{equation}\label{richness}
O\left (n^{3x-4\alpha}+n^{x-\alpha}\right ).
\end{equation}
\section{Bounds in \texorpdfstring{$\bm{\mathbb{R}^2}$}{Lg}}\label{sec3}
For a fixed $\bm\delta=(\delta_1,\dots,\delta_k)$ and $P_1\dots,P_{k+1}\subseteq \mathbb{R}^2$ we denote by $\mathcal{C}_k(P_1,\dots,P_{k+1})$ the family of $(k+1)$-tuples $(p_1,\dots,p_{k+1})$ with $p_i\in P_i$ for all $i\in[k+1]$, $\|p_i-p_{i+1}\|=\delta_i$ for all $i\in[k]$ and with $p_i\neq p_j$ for $i\neq j$.
Let $C_k(P_1,\dots,P_{k+1})=|\mathcal{C}_k(P_1,\dots,P_{k+1})|$ and
\[C_k(n_1,\dots,n_{k+1})=\max C_k(P_1,\dots,P_{k+1}),\]
where the maximum is taken over all sets $P_1,\ldots, P_{k+1}$ subject to $|P_i|\le n_i$ for all $i\in [k+1]$.
We have $C^2_k(n)\leq C_k(n,\dots,n)\leq C^2_k\left((k+1)n\right)$. Indeed, for the lower bound choose $P_i=P$ for every $1\leq i \leq k+1$, and for the upper bound note that $|P_1\cup \dots \cup P_{k+1}|\leq (k+1)n$. Since we are only interested in the order of magnitude of $C^2_k(n)$ for fixed $k$, we are going to bound $C_k(n,\dots,n)$ instead of $C^2_k(n)$.
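If the distinctness condition $p_i\neq p_j$ is dropped, the analogous count over $P_1\times\dots\times P_{k+1}$ can be computed by scanning the groups left to right and keeping, for each point, the number of partial chains ending there (equivalently, multiplying bipartite adjacency matrices of the distance-$\delta_i$ graphs). The sketch below is ours, for illustration only; the paper's quantities additionally require the points of a chain to be distinct.

```python
import math

def adjacency(P, Q, delta, tol=1e-9):
    """0/1 matrix with a 1 where a point of P and a point of Q are
    at distance delta apart."""
    return [[1 if abs(math.dist(p, q) - delta) < tol else 0 for q in Q]
            for p in P]

def count_tuples(groups, deltas):
    """Number of tuples (p_1, ..., p_{k+1}) with p_i in groups[i] and
    consecutive distances deltas (distinctness NOT enforced)."""
    v = [1] * len(groups[0])  # v[q] = number of partial chains ending at q
    for i, d in enumerate(deltas):
        A = adjacency(groups[i], groups[i + 1], d)
        v = [sum(v[p] * A[p][q] for p in range(len(groups[i])))
             for q in range(len(groups[i + 1]))]
    return sum(v)

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
# Unit-distance 1-chains in the square: the 4 sides, each counted in
# both directions.
print(count_tuples([square, square], (1.0,)))  # → 8
```

This dynamic-programming view is only a counting device; none of the upper-bound arguments below rely on it.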
In Section~\ref{sec31}, we prove the lower bounds from Theorem~\ref{main}. In Section~\ref{sec32}, we prove an upper bound on $C_k(n,\dots,n)$, which is almost tight for $k\equiv 0,2$ (mod $3$). The case $k\equiv 1$ (mod $3$) is significantly more complicated. We treat the case $k=4$ separately in Section~\ref{sec33}, and then the general case in Section~\ref{sec34}.
\subsection{Lower bounds}\label{sec31}
For completeness, we present constructions for all congruence classes modulo $3$. For $k\equiv 0,2$ they were described in \cite{Shef}.
\begin{prop}\label{proplow}
For any fixed distance-vector $\bm\delta = (\delta_1,\ldots,\delta_k)$ of positive reals we have $$C_k(n,\ldots,n)= \begin{cases} \Omega(n^{\lfloor (k+1)/3\rfloor +1}),& \mbox{if } k \equiv 0,2({\rm mod\ } 3),\\
\Omega(n^{(k-1)/3}\cdot u_2(n)), & \mbox{if } k \equiv 1({\rm mod\ } 3).
\end{cases}$$
\end{prop}
In the proof of the proposition, we shall need the following auxiliary statement, whose use was suggested to us by D\"om\"ot\"or P\'alv\"olgyi.
\begin{prop}\label{Propprob}Fix $\varepsilon>0$. Then there exists $\gamma = \gamma(\varepsilon)>0$, such that for any $n$ there exist two sets $X_1,X_2$ of points on the plane, $|X_1|,|X_2|\le n$, such that:
\begin{itemize}
\item[(i)] The diameter of $X_2$ is at most $\varepsilon;$
\item[(ii)] The number of unit distances between $X_1$ and $X_2$ is at least $\gamma u_2(n,n).$
\end{itemize}
\end{prop}
\begin{proof}
Take sets $Z_1,Z_2$ of $n$ points on the plane each, such that the number of unit distances between them is $u_2(n,n)$. Consider a bipartite graph $G = (Z_1\cup Z_2,E)$, where edges connect vertices at unit distance apart. Take an infinite grid formed by the lines $y = 10i+\eta_1$ and $x = 10j+\eta_2$, where $i,j\in \mathbb Z$ and $\eta_1,\eta_2$ are chosen from $[0,10)$ uniformly at random. Then the expected number of edges of $G$ that are `cut' by a line in the grid is at most $\frac 12 u_2(n,n)$. Therefore, putting $G' = G'(\eta_1,\eta_2) = (V, E')$ to be the subgraph of $G$ formed by all edges that are not `cut' by the grid (i.e., whose endpoints lie inside the same square of the grid), there exists a choice of $\eta_1$, $\eta_2$ such that $|E'|\ge \frac 12 |E|$ and, moreover, no vertex of $G$ lies on a line of the grid.
For a square $S$ of the grid, let $G'[S]$ be a subgraph of $G'$ induced on the vertices lying in $S$. Note that $G'$ is a disjoint union of $G'[S]$ over all possible $S$. Now translate all vertices in $G'[S]$ by an appropriate vector, so that a) all of them lie within a square $[0,11]\times [0,11]$ and b) no vertices from different translates coincide. Denote the new graph $G'' = (Y_1\cup Y_2,E'')$, where $Y_i$ is the union of translates of vertices from $Z_i$, $i=1,2$. It is clear that $|V(G'')| = |V(G')|$ and $|E''| = |E(G')|\ge \frac 12 u_2(n,n)$. Moreover, $V(G'')$ lies inside the square $[0,11]\times [0,11]$.
Put $X_1:=Y_1$. Next, partition $[0,11]^2$ into $O(\varepsilon^{-2})$ squares of side at most $\varepsilon/2$ and choose a square $C$ out of them such that the number of edges from $G''$ emanating from vertices in $X_2:=Y_2\cap C$ is maximal. We claim that $X_1,X_2$ as above satisfy the conditions of the proposition. First, it is easy to see that the diameter of $X_2$ is at most $\varepsilon.$ Second, by the choice of $C,$ the number of edges from $G''$ between $X_1$ and $X_2$ is at least $\Omega(\varepsilon^2 |E''|) = \Omega(\varepsilon^2 u_2(n,n)),$ as desired.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{proplow}] We prove the following slightly stronger statement by induction on $k$. For any fixed $\varepsilon>0$ the lower bound claimed in the proposition can be achieved on sets $P_1,\ldots, P_{k+1}$, such that the diameter of $P_{k+1}$ is at most $\varepsilon$ (with $\Omega$ depending on $\varepsilon$).
First, we show the base of induction ($k=0,1,2$). Note that $C_0(n) = n$ and the set $P_1$ can be chosen to have diameter at most $\varepsilon.$ For $k=1$, we can use a scaled copy of the construction from Proposition~\ref{Propprob}. For $k=2$, let $P_2 = \{x\}$ for some point $x$, and let $P_1$, $P_3$ be disjoint sets of $n$ points on arcs of length $\varepsilon$ lying on the circles around $x$ of radii $\delta_1$ and $\delta_2$, respectively. Then we have that $C_2(P_1,P_2,P_3)=n^2$.
Next, fix a vector $\bm\delta = (\delta_1,\ldots,\delta_{k+3}),$ put $\varepsilon = \min \{ \delta_{k+1}/3,\delta_{k+2}/3\}$ and let $\bm\delta' = (\delta_1,\ldots,\delta_k)$. We apply the inductive statement with $\varepsilon$ and $\bm \delta '$, and obtain the corresponding sets $P_1,\ldots, P_{k+1}.$ Put $P_{k+3} = \{x\},$ where $x$ is an arbitrary point on the plane that is at distance $\delta_{k+1}+\delta_{k+2}-2\varepsilon$ from some point $y$ in $P_{k+1}$. Put $P_{k+2}$ to be a set of $|P_{k+1}|$ points on the circle $C$ of radius $\delta_{k+2}$ around $x,$ such that for each point in $P_{k+1}$ there is at least one point in $P_{k+2}$ at distance $\delta_{k+1}$. For each point $z$ in $P_{k+1}$ it is possible to choose the corresponding point for $P_{k+2}$ since the minimum of the distance between a point on the circle $C$ and $z$ is at most $\|x-y\|-\delta_{k+2}+{\rm diam}(P_{k+1}) = \delta_{k+1}-\varepsilon<\delta_{k+1}$, while the maximum distance is at least $\|x-y\|+\delta_{k+2}-{\rm diam}(P_{k+1}) =\delta_{k+1}+2\delta_{k+2}-3\varepsilon>\delta_{k+1}$. (Note that for bounding both the maximum and the minimum distances we used triangle inequality and the fact that $P_{k+1}$ has diameter at most $\varepsilon.$) We use flexibility in the choice of $x$ to assure that all additional points are different from the points in $P_1,\ldots, P_{k+1}$.
Finally, put $P_{k+4}$ to be a set of $n$ points on a sufficiently small arc on the circle of radius $\delta_{k+3}$ around $x.$
Since by construction every $k$-chain from $P_1\times \dots \times P_{k+1}$ can be extended to a $(k+3)$-chain in $P_1\times \dots \times P_{k+4}$ in at least $n$ different ways, we obtain that $C_{k+3}(P_1,\ldots, P_{k+4}) \ge n\, C_k(P_1,\ldots, P_{k+1})$. Further, $P_{k+4}$ can be chosen to satisfy any fixed requirement on the diameter.
\end{proof}
\subsection{Upper bound for the \texorpdfstring{$\bm{{k\equiv} 0,2}$}{Lg} (mod \texorpdfstring{$\bm{3}$}{Lg}) cases}\label{sec32}
We fix $\bm\delta=(\delta_1,\dots,\delta_k)$ throughout the remainder of Section~\ref{sec3}. All logs are base $2$.
\begin{thm}\label{firstbound} For any fixed integer $k\geq 0$ and $x,y\in [0,1]$, we have \begin{equation*}\label{eqmain} C_k(n^{x},n,\ldots,n,n^{y})=\tilde O\left (
n^{\frac {f(k)+x+y}{3}} \right ),
\end{equation*} where $f(k) = k+2$ if $k\equiv 2$ \emph{(mod $3$)} and $f(k) = k+1$ otherwise.
\end{thm}
Theorem \ref{firstbound} implies the upper bounds in Theorem \ref{main} for $k\equiv 0,2$ (mod $3$) by taking $x=y=1$. It is easier, however, to prove this more general statement than the upper bounds in Theorem~\ref{main} directly. Having varied sizes of the first and the last groups of points allows for a seamless use of induction.
\begin{proof}[Proof of Theorem \ref{firstbound}] The proof is by induction on $k$. Let us first verify the statement for $k\le 2.$ (Note that, for $k=0$, we should have $x=y$.) We have
\begin{align}
C_0(n^x)&\le n^x = O\left(n^{\frac {1+x+y}3}\right), \nonumber\\
\label{k=1}
C_1(n^x,n^y)&\leq u_2(n^x,n^y)=O\left (n^{\frac{2}{3}(x+y)}+n^x+n^y\right )=O\left (n^{\frac{2+x+y}3}\right),\\
\label{ktwo}
C_2(n^x,n,n^y)&\leq 2 n^xn^y= O\left (n^{\frac{4+x+y}{3}}\right ),
\end{align}
where \eqref{k=1} follows from \eqref{2drich} and \eqref{ktwo} follows from the fact that each pair $(p_1,p_3)$ can be extended to a $2$-chain $(p_1,p_2,p_3)$ in at most $2$ different ways.
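The factor $2$ in \eqref{ktwo} comes from the fact that the circles of radius $\delta_1$ around $p_1$ and of radius $\delta_2$ around $p_3$ meet in at most two points, so each pair $(p_1,p_3)$ admits at most two middle points $p_2$. A sketch of this elementary computation (ours, not from the paper):

```python
import math

def circle_intersections(c1, r1, c2, r2, tol=1e-12):
    """Intersection points of two circles with distinct centres;
    there are at most two of them."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d < tol or d > r1 + r2 + tol or d < abs(r1 - r2) - tol:
        return []  # concentric, too far apart, or nested: no intersection
    a = (r1**2 - r2**2 + d**2) / (2 * d)  # distance from c1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    return sorted({(mx + h * dy / d, my - h * dx / d),
                   (mx - h * dy / d, my + h * dx / d)})

# p1 = (0,0), p3 = (8,0) with delta_1 = delta_2 = 5: the middle point
# p2 of a 2-chain (p1, p2, p3) must be one of these two points.
print(circle_intersections((0.0, 0.0), 5.0, (8.0, 0.0), 5.0))
# → [(4.0, -3.0), (4.0, 3.0)]
```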
Next, let $k\geq 3$. Take $P_1,\dots,P_{k+1}\subseteq \mathbb{R}^2$ with $|P_1|=n^x$, $|P_{k+1}|=n^y$, and $|P_i|=n$ for $2\leq i \leq k$. Denote by $P_2^{\alpha}\subseteq P_2$ the set of those points in $P_2$ that are at least $n^{\alpha}$-rich but at most $2n^{\alpha}$-rich with respect to $P_1$ and $\delta_1$. Similarly, we denote by $P_k^{\beta}\subseteq P_k$ the set of those points in $P_k$ that are at least $n^{\beta}$-rich but at most $2n^{\beta}$-rich with respect to $P_{k+1}$ and $\delta_k$.
Applying a standard dyadic decomposition argument twice implies that
\[\mathcal{C}_k(P_1,P_2\dots,P_k,P_{k+1})= \bigcup_{\alpha,\beta} \mathcal{C}_k(P_1,P_2^{\alpha},P_3,\dots, P_{k-1},P_k^{\beta},P_{k+1}),\]
where the union is taken over all $\alpha,\beta\in \left \{\frac{i}{\log n}: i=0,\ldots, \lceil\log n\rceil\right\}$. Since the cardinality of the latter set is at most $\log n+2$, it is sufficient to prove that for every $\alpha$ and $\beta$ we have
\begin{equation}\label{alphabeta}
C_k(P_1,P_2^{\alpha},P_3,\dots,P_{k-1},P_k^{\beta},P_{k+1})= \tilde O\left ( n^{\frac{f(k)+x+y}{3}}\right ).
\end{equation}
To prove this, we consider three cases.
\bigskip
{\bf Case 1: $\bm{\alpha\geq \frac{x}{2}}$.} By \eqref{2drichness} we have $|P_2^{\alpha}|=O(n^{x-\alpha})$. Therefore the number of pairs \mbox{$(p_1,p_2)\in P_1\times P_2^{\alpha}$} with $\|p_1-p_2\|=\delta_1$ is at most $O(n^x)$. Since every pair $(p_1,p_2)\in P_1\times P_2^{\alpha}$ and every $(k-3)$-chain $(p_4,\dots,p_{k+1})\in P_4\times\dots \times P_{k}^{\beta}\times P_{k+1}$ can be extended to a $k$-chain $(p_1,\dots,p_{k+1})\in P_1\times \dots\times P_{k+1}$ in at most two different ways, we obtain
\[C_k(P_1,P_2^{\alpha},\dots,P_k^{\beta},P_{k+1})= O\left(n^x\right)C_{k-3}(P_4,\dots,P_k^{\beta},P_{k+1}).\]
By induction we have
\[C_{k-3}(P_4,\dots,P_k^{\beta},P_{k+1})= \tilde O\left ( n^{\frac{f(k-3)+1+y}{3}}\right ).\]
These two displayed formulas and the fact that $f(k-3)=f(k)-3$ imply \eqref{alphabeta}.
\bigskip
{\bf Case 2: $\bm{\beta \geq \frac{y}{2}}$.} By symmetry, this case can be treated in the same way as Case 1.
\bigskip
{\bf Case 3: $\bm{\alpha \leq \frac{x}{2}}$ and $\bm{\beta\leq \frac{y}{2}}$.} By \eqref{2drichness} we have $|P_2^{\alpha}|= O\left (n^{2x-3\alpha}\right )$ and $|P_k^{\beta}|= O\left
(n^{2y-3\beta}\right)$. The number of $(k-2)$-chains in $P_2^{\alpha}\times P_3\times\dots\times P_{k-1}\times P_k^{\beta}$ is $C_{k-2}(P_2^{\alpha},P_3,\dots,P_{k-1}, P_k^{\beta})$, and every $(k-2)$-chain
$(p_2,\dots,p_k)\in P_2^{\alpha}\times P_3\times\dots\times P_{k-1}\times P_k^{\beta}$ can be extended at
most $4n^{\alpha+\beta}$ ways to a $k$-chain in $P_1\times P_2^{\alpha}\times \dots\times P_k^{\beta}\times P_{k+1}$. Thus
\[C_{k}(P_1,P_2^{\alpha},\dots,P_k^{\beta},P_{k+1})\leq 4n^{\alpha+\beta}C_{k-2}(P_2^{\alpha},\dots,P_k^{\beta}).\]
By induction we have
\[C_{k-2}(P_2^{\alpha},\dots,P_k^{\beta})= \tilde O\left (n^{\frac{f(k-2)+2x-3\alpha+2y-3\beta}{3}}\right ).\]
For $k \equiv 0,2$ (mod $3$) we have $f(k)\ge f(k-2)+2,$ and thus
\begin{multline*}
C_{k}(P_1,P_2^{\alpha},\dots,P_k^{\beta},P_{k+1})= \tilde O\left(n^{\alpha+\beta}n^{\frac{f(k-2)+2x-3\alpha+2y-3\beta}{3}}\right)\\[4pt] = \tilde O\left(n^{\frac{f(k)-2+2x+2y}{3}}\right) = \tilde O\left(n^{\frac{f(k)+x+y}{3}}\right).
\end{multline*}
If $k\equiv 1$ (mod $3$) then $f(k)<f(k-2)+2$, and thus the argument above does not work. However, we then have $f(k)= f(k-1)+1$, and we can use the bound
\[C_{k}(P_1,P_2^{\alpha},\dots,P_k^{\beta},P_{k+1})\leq 2n^{\alpha}C_{k-1}(P_2^{\alpha},P_3,\dots,P_{k+1}),\]
obtained in an analogous way. This gives
\[C_{k}(P_1,P_2^{\alpha},P_3,\dots,P_{k+1})= \tilde O\left(n^{\alpha}n^{\frac{f(k-1)+2x-3\alpha+y}{3}}\right) = \tilde O\left(n^{\frac{f(k)-1+2x+y}{3}}\right) = \tilde O\left(n^{\frac{f(k)+x+y}{3}}\right).\]
\end{proof}
\begin{remark} The proof above is not sufficient to obtain an almost sharp bound in the $k \equiv 1$ (mod $3$) case for two reasons. First, for these $k$ any analogue of Theorem \ref{firstbound} would involve taking maximums of two expressions, where one contains $u_2(n^x,n)$ and the other contains $u_2(n^y,n)$. However, due to our lack of good understanding of how $u_2(n^x,n)$ changes as $x$ is increasing, this is difficult to work with.
Second, on a more technical side, while Case 1 and Case 2 in the above proof would go through with any reasonable inductive statement, Case 3 would fail. The main reason for this is that $C_k$ as a function of $k$ makes jumps at every third value of $k$, and remains essentially the same, or changes by $u_2(n,n)/n$ for the other values of $k$. Thus one would need to remove three vertices from the path to make the induction work. However, the path has only two ends, and removing vertices other than the endpoints turns out to be intractable.
\end{remark}
\subsection{Upper bound for \texorpdfstring{$\bm{k=4}$}{Lg}}\label{sec33}
In this section we prove the upper bound in Theorem~\ref{main} for $k=4$. Let $P_1,\dots, P_5$ be five sets of $n$ points. We will show that $C_4(P_1,\dots,P_5)=\tilde O(u_2(n)n)$, which is slightly stronger than what is stated in Theorem \ref{main}.
Instead of \eqref{2drichness} we need the following more general bound on the number of rich points.
\begin{obs}[Richness bound]\label{richnessbound} Let $n^y$ be the maximum possible number of points that are $n^{\alpha}$-rich with respect to a set of $n^{x}$ points and some distance $\delta$. Then we have
\begin{equation}\label{rich1}
n^{y+\alpha}\leq u_2(n^x,n^y),
\end{equation}
or, equivalently
\begin{equation*}
n^{\alpha}\leq \frac{u_2(n^x,n^y)}{n^y}.
\end{equation*}
\end{obs}
The proof of \eqref{rich1} follows immediately from the definition of $n^{\alpha}$-richness and $u_2(n^x,n^y)$.
\medskip
Let $\Lambda:=\big\{\frac i{\log n}: i = 0,\ldots, \lceil\log n\rceil \big\}^4$. For any $\bm{\alpha}=(\alpha_2,\alpha_3,\alpha_4,\alpha_5) \in \Lambda$ let $Q_1^{\bm{\alpha}} = P_1$ and for $i=2,\ldots, 5$ define recursively $Q_i^{\bm\alpha}$ to be the set of those points in $P_i$ that are at least $n^{\alpha_i}$-rich but at most $2n^{\alpha_i}$-rich with respect to $Q_{i-1}^{\bm\alpha}$ and $\delta_{i-1}$.
It is not difficult to see that
\[\mathcal C_4(P_1,\ldots, P_5)= \bigcup_{\bm\alpha\in \Lambda} \mathcal{C}_4\left(Q_1^{\bm\alpha},\ldots,Q_5^{\bm\alpha}\right).\]
We have $|\Lambda| = \tilde O(1)$ and thus, in order to prove the theorem, it is sufficient to show that for every $\bm{\alpha}\in \Lambda$ we have
\begin{equation*}
C_4\left(Q_1^{\bm{\alpha}},\dots,Q_5^{\bm{\alpha}}\right)= O\left(n \cdot u_2(n,n)\right).
\end{equation*}
From now on, fix $\bm{\alpha}=(\alpha_2,\dots,\alpha_5)$, and denote $Q_i=Q_i^{\bm\alpha}$. Choose $x_i\in [0,1]$ so that $|Q_i|=n^{x_i}$. Then we have \begin{equation}\label{eq51} C_4(Q_1,\dots,Q_5)= O\left(n^{x_5+\alpha_5+\alpha_4+\alpha_3+\alpha_2}\right).\end{equation}
Indeed, each chain $(p_1,\dots,p_5)$ with $p_i\in Q_i$ can be obtained in the following five steps.
\vspace{0.2cm}
\begin{itemize}
\item \bf{Step 1: } \rm Pick $p_5\in Q_5$.
\vspace{0.1cm}
\item \bf{Step i ($2\le i\le 5$): } \rm Pick a point $p_{6-i}\in Q_{6-i}$ at distance $\delta_{6-i}$ from $p_{7-i}$.
\end{itemize}
\vspace{0.2cm}
In the first step we have $n^{x_5}$ choices, and for $i\geq 2$ in the $i$-th step we have at most $2n^{\alpha_{6-i}}$ choices. Further, by Observation~\ref{richnessbound}, for each $i\geq 2$ we have \begin{equation}\label{eq52}n^{\alpha_i}\leq \frac{u_2(n^{x_{i-1}},n^{x_i})}{n^{x_i}}.\end{equation}
Combining \eqref{eq51} and \eqref{eq52}, we obtain
\begin{equation}\label{eq53}
C_4(Q_1,\dots,Q_5)=O\left( u_2(n^{x_4},n^{x_5})\frac{u_2(n^{x_3},n^{x_4})}{n^{x_4}}\frac{u_2(n^{x_2},n^{x_3})}{n^{x_3}} \frac{u_2(n^{x_1},n^{x_2})}{n^{x_2}}\right).
\end{equation}
By \eqref{2drich} we have \[u_2(n^{x_{i-1}},n^{x_i})= O\left (\max \big\{n^{\frac{2}{3}(x_i+x_{i-1})},n^{x_i},n^{x_{i-1}}\big\}\right ).\]
Note that the maximum is attained on the second (third) term iff $x_{i-1}\le \frac {x_i}2$ ($x_i\le \frac{x_{i-1}}2$).
To bound $C_4(Q_1,\dots,Q_5)$
we consider several cases depending on which of these three terms the maximum above is attained on for different $i$.
\bigskip
\bf{Case 1: }\rm For all $2\leq i \leq 5$ we have
$u_2(n^{x_{i-1}},n^{x_i})= O\left( n^{\frac{2}{3}(x_i+x_{i-1})}\right)$.
Then
\begin{equation*}
\frac{u_2(n^{x_4},n^{x_5})u_2(n^{x_3},n^{x_4})u_2(n^{x_2},n^{x_3})}{n^{x_2+x_3+x_4}}=O\left( n^{\frac{2}{3}x_5+\frac{1}{3}x_4+\frac{1}{3}x_3-\frac{1}{3}x_2}\right)
\end{equation*}
and
\begin{equation*}
\frac{u_2(n^{x_3},n^{x_4})u_2(n^{x_2},n^{x_3})u_2(n^{x_1},n^{x_2})}{n^{x_2+x_3+x_4}}=O\left( n^{-\frac{1}{3}x_4+\frac{1}{3}x_3+\frac{1}{3}x_2+\frac{2}{3}x_1}\right).
\end{equation*}
Substituting each of these two displayed formulas into \eqref{eq53} and taking their product, we obtain
\[C_4(Q_1,\dots,Q_5)^2 = O\left( u_2(n^{x_1},n^{x_2})u_2(n^{x_4},n^{x_5})\cdot n^{\frac{2}{3}x_1+\frac{2}{3}x_3+\frac{2}{3}x_5}\right)= O\left( u_2(n,n)^2\cdot n^{2}\right),\]
which concludes the proof in this case.
\bigskip
\bf{Case 2: } \rm
There is a $2\leq i \leq 5$ such that \begin{equation}\label{eq55} \min \{x_{i-1},x_i\}\le \frac 12\max\{x_{i-1},x_i\}\ \ \ \text{and thus}\ \ \ \ u_2(n^{x_{i-1}},n^{x_i})= O\left (\max \{n^{x_{i-1}},n^{x_{i}}\}\right).\end{equation}
We distinguish three cases based on the value of $i$ for which \eqref{eq55} holds.
\bigskip
{\bf Case 2.1:} \eqref{eq55} holds for $i=2$ or $5$. In particular, this implies that $u_2(n^{x_{1}},n^{x_2})= O(n)$ or $u_2(n^{x_{4}},n^{x_5})= O(n)$. The following lemma finishes the proof in this case.
\begin{lem}\label{smalln}
Let $R_1,\ldots, R_5\subseteq \mathbb{R}^2$ be sets such that $|R_i|\leq n$ for every $i\in [5]$. If $u_2(R_1,R_2)= O(n)$ or $u_2(R_4,R_5)= O(n)$ holds, then
$C_4(R_1,\dots,R_5)= O\left(n\cdot u_2(n,n)\right)$.
\end{lem}
\begin{proof} We have \[C_4(R_1,\dots,R_5)\leq 2u_2(R_1,R_2)u_2(R_4,R_5)=O\left(n\cdot u_2(n,n)\right).\] Indeed, every $4$-tuple $(r_1,r_2,r_4,r_5)$ with $r_i\in R_i$ can be extended in at most two different ways to a $4$-chain $(r_1,\dots,r_5)\in R_1\times\dots\times R_5$. At the same time, the number of $4$-tuples with $\|r_1-r_2\|=\delta_1$, $\|r_4-r_5\|=\delta_4$ is at most $u_2(R_1,R_2)u_2(R_4,R_5)$.
\end{proof}
\bigskip
\bf{Case 2.2: }\rm \eqref{eq55} holds for $i=4.$ Note that if $x_4\leq \frac{x_3}{2}\le \frac 12$, then $u_2(n^{x_5},n^{x_4})=O(n)$, and we can apply Lemma~\ref{smalln} to conclude the proof in this case. Thus we may assume that $x_3\le \frac{x_4}2$, and hence $u_2(n^{x_4},n^{x_3})=O(n^{x_4})$.
This means that $n^{\alpha_4}=O(1)$ by Observation \ref{richnessbound}. Thus to finish the proof of this case, it is sufficient to prove the following claim.
\begin{cla}
Let $R_1,\dots,R_5\subseteq \mathbb{R}^2$ be sets such that $|R_i|\leq n$ for all $i\in [5]$ and every point of $R_4$ is $O(1)$-rich with respect to $R_3$ and $\delta_3$. Then $C_4(R_1,\dots,R_5) = O\left(n\cdot u_2(n,n)\right)$.
\end{cla}
\begin{proof} Every $4$-chain $(r_1,\dots,r_5)$ can be obtained in the following steps.
\vspace{0.2cm}
\begin{itemize}
\item Pick a pair $(r_4,r_5)\in R_4\times R_5$ with $\|r_4-r_5\|=\delta_4$.
\vspace{0.1cm}
\item Choose $r_3\in R_3$ at distance $\delta_3$ from $r_4$.
\vspace{0.1cm}
\item Pick a point $r_1\in R_1$.
\vspace{0.1cm}
\item Extend $(r_1,r_3,r_4,r_5)$ to a $4$-chain.
\end{itemize}
In the first step, we have at most $u_2(n,n)$ choices, in the third at most $n$ choices, and in each of the other two steps at most $O(1)$ choices.
\end{proof}
{\bf Case 2.3:} \eqref{eq55} holds for $i=3$ {\it only}. Arguing as in Case 2.2, we may assume that $u_2(n^{x_3},n^{x_2})=O(n^{x_2})$. Then we have
\begin{multline*}
C_4(Q_1,\dots,Q_5)= O\left( u_2(n^{x_4},n^{x_5})\frac{u_2(n^{x_3},n^{x_4})}{n^{x_4}}\frac{u_2(n^{x_2},n^{x_3})}{n^{x_3}} \frac{u_2(n^{x_1},n^{x_2})}{n^{x_2}}\right)\\[10pt]
= O\left(u_2(n^{x_1},n^{x_2})\cdot n^{\frac{2}{3}(x_4+x_5)+\frac{2}{3}(x_3+x_4)-x_4-x_3}\right)= O\left(u_2(n,n)\cdot n\right),
\end{multline*}
which finishes the proof.
\subsection{Upper bound for the \texorpdfstring{$\bm{k\equiv 1}$}{Lg} (mod \texorpdfstring{$\bm{3}$)}{Lg} case}\label{sec34}
We will prove the upper bound in Theorem \ref{main} for $k\equiv 1$ (mod $3$) by induction. The $k=1$ case follows from the definition of $u_2(n,n)$, thus we may assume that $k\geq 4$. For the rest of the section fix ${\varepsilon}'>0$ and sets $P_1,\ldots, P_{k+1}\subseteq \mathbb{R}^2$ of size $n$, and let $\varepsilon=\frac{\varepsilon'}{4k}$. We are going to show that $C_k(P_1,\dots,P_{k+1})=O(n^{(k-1)/3+\varepsilon'}u_2(n))$.
The first step of the proof is to find a certain covering of $P_1\times \dots \times P_{k+1}$, which resembles the one used in the $k=4$ case, although it is more elaborate.\footnote{This covering brings in the $\varepsilon$-error term in the exponent, which we could avoid in the $k=4$ case.} (The goal of this covering is to make the corresponding graph between each two consecutive parts `regular in both directions' in a certain sense.)
Let \[\Lambda=\Big\{i{\varepsilon}: i=0,\ldots, \Big\lfloor\frac 1{\varepsilon}\Big\rfloor\Big\}^{k+1}.\] We cover the product ${\bf P}=P_1\times\dots\times P_{k+1}$ by fine-grained classes $P_1^{\bm{\gamma}}\times \ldots\times P_{k+1}^{\bm{\gamma}}$ encoded by the sequence $\bm{\gamma} = (\bm{\gamma^1},\bm{\gamma^2},\ldots )$ of length at most $(k+1){\varepsilon}^{-1}+1$ with $\bm{\gamma^j}\in \Lambda$ for each $j=1,2,\dots$. One property that we shall have is
\begin{equation*} P_1\times\dots \times P_{k+1} = \bigcup_{\bm{\gamma}}P_1^{\bm\gamma}\times \ldots\times P_{k+1}^{\bm\gamma}.
\end{equation*}
To find the covering, first we define a function $D$ that receives a parity digit $j\in \{0,1\}$, a product set ${\bf R}:= R_1\times\ldots \times R_{k+1}$ and an $\bm{\alpha}\in \Lambda$, and outputs a product set $D(j,\bm{R},\bm{\alpha})={\bf R(\bm \alpha)}= R_1(\bm\alpha)\times\ldots \times R_{k+1}(\bm\alpha)$.
\medskip
{\bf Definition of $\mathbf{D}$}
\begin{itemize}
\item If $j=1$ then let $R_1(\bm{\alpha}):=R_1$ and for $i=2,\ldots, k+1$ define $R_i(\bm{\alpha})$ iteratively to be the set of points in $R_i$ that are at least $n^{\alpha_i}$-rich, but at most $n^{\alpha_i+{\varepsilon}}$-rich with respect to $R_{i-1}(\bm{\alpha})$ and $\delta_{i-1}.$
\item If $j=0$ then apply the same procedure, but in reverse order. That is, let $R_{k+1}(\bm{\alpha})=R_{k+1}$ and for $i=k,k-1,\dots,1$ define $R_{i}(\bm{\alpha})$ iteratively to be the set of points in $R_i$ that are at least $n^{\alpha_i}$ but at most $n^{\alpha_i+{\varepsilon}}$-rich with respect to $R_{i+1}(\bm{\alpha})$ and $\delta_i$.
\end{itemize}
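To make the definition of $\mathbf{D}$ concrete, here is a minimal executable sketch (our own illustration, not part of the argument); the helper \texttt{richness}, the list-of-parts representation, and the parameters \texttt{n}, \texttt{eps} are illustrative choices.

```python
def richness(p, R, delta, tol=1e-9):
    """Number of points of R at distance exactly delta from the planar point p."""
    return sum(
        1 for q in R
        if abs(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 - delta) < tol
    )

def apply_D(j, parts, deltas, alphas, n, eps):
    """Sketch of the map D: for j = 1 sweep forward, for j = 0 sweep backward,
    keeping in each part only the points whose richness with respect to the
    previously filtered neighbour lies in [n**a_i, n**(a_i + eps)]."""
    parts = [list(P) for P in parts]
    if j == 1:
        order = range(1, len(parts))
        prev, dlt = (lambda i: i - 1), (lambda i: deltas[i - 1])
    else:
        order = range(len(parts) - 2, -1, -1)
        prev, dlt = (lambda i: i + 1), (lambda i: deltas[i])
    for i in order:
        lo, hi = n ** alphas[i], n ** (alphas[i] + eps)
        parts[i] = [p for p in parts[i]
                    if lo <= richness(p, parts[prev(i)], dlt(i)) <= hi]
    return parts
```

For instance, with two parts, $\delta_1=1$ and $\alpha=0$, the forward sweep keeps exactly the points of the second part at unit distance from the first.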
\vspace{0.2cm}
Note that \begin{equation}\label{equnion}{\bf R}= \bigcup_{\bm{\alpha}\in \Lambda} {\bf R}(\bm{\alpha}).\end{equation}
For a sequence $\bm{\gamma} = (\bm{\gamma^1},\bm{\gamma^2}, \ldots)$ with $\bm{\gamma^j}\in \Lambda,$ we define ${\bf P}^{\bm\gamma}$ recursively as follows. Let ${\bf P}^{\emptyset} :={\bf P}$, and for each $j\geq 1$ let
\[{\bf{P}}^{(\bm{\gamma^1},\ldots,\bm{\gamma^{j}})}=D(j \text{ (mod }2),{\bf{P}}^{(\bm{\gamma^1},\ldots,\bm{\gamma^{j-1}})},\bm{\gamma^j}).\]
We say that a sequence $\bm{\gamma}$ is {\it stable at $j$} if \begin{equation*}\label{eqdecomp} \big|{\bf P}^{(\bm{\gamma^1},\ldots,\bm{\gamma^{j}})}\big|\ge \big|{\bf P}^{(\bm{\gamma^1},\ldots,\bm{\gamma^{j-1}})}\big|\cdot n^{-{\varepsilon}}.\end{equation*}
Otherwise $\bm{\gamma}$ is \emph{unstable at $j$}.
\begin{defi}\label{defups} Let $\Upsilon$ be the set of those sequences $\bm{\gamma}$ that are stable at their last coordinate, but are not stable for any previous coordinate, and for which ${\bf P}^{\bm\gamma}$ is non-empty.
\end{defi}
The set $\Upsilon$ has several useful properties, some of which are summarised in the following lemma.
\begin{lem}\label{decompprop}
1. Any $\bm{\gamma}\in \Upsilon$ has length at most $(k+1) {\varepsilon}^{-1}+1$. \\ \vspace{0.1cm}
2. $|\Upsilon| = O_{{\varepsilon}}(1).$ \\ \vspace{0.1cm}
3. ${\bf P} = \bigcup_{\bm{\gamma}\in \Upsilon}{\bf P}^{\bm\gamma}.$\\
\end{lem}
\begin{proof}
\begin{enumerate}
\item If $\bm{\gamma}$ is unstable at $j$ then \[|{\bf P}^{(\bm\gamma^1,\ldots,\bm\gamma^{j})}|\le |{\bf P}^{(\bm\gamma^1,\ldots,\bm\gamma^{j-1})}|\cdot n^{-{\varepsilon}}.\]
Since $|{\bf P}| = n^{k+1}$ and $|{\bf P}^{\bm\gamma}|\ge 1,$ we conclude that $\bm\gamma$ is unstable at at most $(k+1) {\varepsilon}^{-1}$ indices $j$. \vspace{0.1cm}
\item It follows from part 1 by counting all possible sequences of length at most $(k+1) {\varepsilon}^{-1}+1$ of elements from the set $\Lambda$. (Note that $|\Lambda| = O_{{\varepsilon}}(1).$) \vspace{0.1cm}
\item For a nonnegative integer $j$ let $\Lambda^{\le j}$ be the set of all sequences of length at most $j$ of elements from $\Lambda$. Let
\[\Upsilon_j:= \left(\Upsilon\cap \Lambda^{\le j}\right)\cup\Psi_j, \textrm{ where } \ \Psi_j:=\big\{\bm{\gamma}\in \Lambda^j: \bm{\gamma} \text{ is not stable for any }\ell \le j\big\}.\]
By part 1 of the lemma, $\Upsilon_j = \Upsilon$ for $j>(k+1){\varepsilon}^{-1}.$
We prove by induction on $j$ that ${\bf P}= \bigcup_{\bm{\gamma}\in \Upsilon_j}{\bf P}^{\bm\gamma}$.
$\Upsilon_0$ consists of the empty sequence, thus the statement is clear for $j=0$. Next, assume that the statement holds for $j$.
We have
\[{\bf P}=\bigcup_{\bm{\gamma}\in \Upsilon_j}{\bf P}^{\bm\gamma}=\bigcup_{\bm{\gamma}\in \Upsilon\cap\Lambda^{\leq j}}{\bf P}^{\bm\gamma}\cup\bigcup_{\bm{\gamma}\in \Psi_j}{\bf P}^{\bm\gamma}.\]
By $\eqref{equnion}$ we have that ${\bf P}^{\bm\gamma} = \bigcup_{\bm{\gamma'}}{\bf P}^{\bm\gamma'}$ holds for any $\bm\gamma\in \Psi_j$, where the union is taken over the sequences from $\Lambda^{j+1}$ that coincide with $\bm{\gamma}$ on the first $j$ entries. This, together with the fact that $\bm\gamma'\in \left(\Upsilon\cap \Lambda^{j+1}\right)\cup \Psi_{j+1}$ whenever ${\bf P}^{\bm\gamma'}$ is nonempty, finishes the proof.
\end{enumerate}
\end{proof}
Parts 2 and 3 of Lemma~\ref{decompprop} imply that in order to complete the proof of the \mbox{$k\equiv 1$ (mod $3$)} case, it is sufficient to show that for any $\bm{\gamma}\in \Upsilon$ we have
\begin{equation}\label{eq61}
C_k(P_1^{\bm\gamma},\ldots, P_{k+1}^{\bm\gamma})= O\left(u_2(n)\cdot n^{\frac{k-1}3+4k{\varepsilon}}\right).
\end{equation}
From now on fix $\bm{\gamma}\in \Upsilon$. For each $i=1,\ldots, k+1$ let $R_i:=P_i^{\bm\gamma}$ and $Q_i:= P_i^{\bm\gamma'}$, where $\bm{\gamma'}$ is obtained from $\bm{\gamma}$ by removing the last element of the sequence.
Without loss of generality, assume that the length $\ell$ of $\bm{\gamma}$ is even. For each $i=1,\ldots, k+1,$ choose $x_i,y_i$ such that
\[|Q_i| = n^{x_i}, \ \ \ \ |R_i| = n^{y_i}.\]
Let $\alpha_i:= \bm\gamma^{\ell-1}_i$ and $\beta_i:= \bm\gamma^{\ell}_i$. By the definition of $\bf{P}^{\bm{\gamma}}$ we have that each point in $Q_i$ is at least $n^{\alpha_i}$-rich but at most $n^{\alpha_i+\varepsilon}$-rich with respect to $Q_{i-1}$ and $\delta_{i-1}$, and each point in $R_i$ is at least $n^{\beta_i}$-rich but at most $n^{\beta_i+\varepsilon}$-rich with respect to $R_{i+1}$ and $\delta_i$.
By Observation~\ref{richnessbound}, we have \begin{equation}\label{eqkey}
n^{\alpha_i}\leq \frac{u_2(n^{x_{i-1}},n^{x_i})}{n^{x_i}}\ \ \ \ \text{and} \ \ \ \ n^{\beta_i}\leq \frac{u_2(n^{y_{i}},n^{y_{i+1}})}{n^{y_i}}\le \frac{u_2(n^{x_{i}},n^{x_{i+1}})}{n^{x_i-{\varepsilon}}}.
\end{equation}
The last inequality follows from two facts: first $u_2(n^{y_{i}},n^{y_{i+1}})\le u_2(n^{x_{i}},n^{x_{i+1}})$ and, second, since $\bm\gamma$ is stable at its last coordinate\footnote{This is the only place where we use the stability of $\bm\gamma$ directly.}, we have $n^{y_i} = |R_i|\ge |Q_i|\cdot n^{-{\varepsilon}} = n^{x_i-{\varepsilon}}.$
In the same fashion as in the beginning of Section~\ref{sec33}, we can show that
\begin{align*}
C_k(R_1,\dots,R_{k+1})\leq& n^{y_{1}}n^{\beta_{1}+\dots+\beta_k+k\varepsilon}, \textrm{ and }\\[5pt]
C_k(R_1,\dots,R_{k+1})\leq C_k(Q_1,\dots,Q_{k+1})\leq& n^{x_{k+1}}n^{\alpha_{k+1}+\alpha_{k}+\dots+\alpha_2+k\varepsilon}.
\end{align*}
Combining the first of these displayed inequalities with \eqref{eqkey}, we have
\begin{equation*}
C_{k}(R_1,\dots,R_{k+1})
\leq u_2(n^{x_1},n^{x_{2}})\prod_{2\leq i \leq k}\frac{u_2\left (n^{x_{i}},n^{x_{i+1}}\right)}{n^{x_i}}n^{2k\varepsilon} .
\end{equation*}
Recall that
\begin{equation}\label{eq66}
u_2(n^{x_{i}},n^{x_{i+1}})=O\left (\max \{n^{\frac{2}{3}(x_i+x_{i+1})},n^{x_i},n^{x_{i+1}}\}\right ).
\end{equation}
To bound $C_k(R_1,\dots,R_{k+1})$,
we consider several cases based on which of these three terms can be used to bound $u_2(n^{x_{i}},n^{x_{i+1}})$ for different values of $i$.
\bigskip
{\bf Case 1: }
Either $u_2(n^{x_1},n^{x_2})= O(n)$ or $u_2(n^{x_k},n^{x_{k+1}})= O(n)$ holds.
As in the proof of Lemma~\ref{smalln}, we have
\begin{multline*}
C_k(R_1,\dots,R_{k+1})\\[5pt] \leq
\min\big\{ 2u_2(n^{y_1},n^{y_{2}})C_{k-3}(R_4,\dots,R_{k+1}),2u_2(n^{y_k},n^{y_{k+1}})C_{k-3}(R_1,\dots,R_{k-2})\big \}.
\end{multline*}
By induction we obtain $C_{k-3}(R_4,\dots,R_{k+1}),C_{k-3}(R_1,\dots,R_{k-2})= O\left(n^{\frac{k-4}{3}+\varepsilon}\cdot u_2(n)\right )$. Together with the assumption of Case 1, and the fact that $u_2(n^{y_1},n^{y_{2}})\leq u_2(n^{x_1},n^{x_{2}})$ and \mbox{$u_2(n^{y_k},n^{y_{k+1}})\leq u_2(n^{x_k},n^{x_{k+1}})$}, this implies \eqref{eq61} and finishes the proof.\\
{\bf Case 2: }
For some $i=1,\ldots, (k-1)/3,$ one of the following holds:
\vspace{0.3cm}
\begin{itemize}
\item $u_2(n^{x_{3i+1}},n^{x_{3i+2}})= O(\max\{n^{x_{3i+1}},n^{x_{3i+2}}\})$;
\vspace{0.1cm}
\item $u_2(n^{x_{3i-1}},n^{x_{3i}})= O(n^{x_{3i-1}})$;
\vspace{0.1cm}
\item $u_2(n^{x_{3i}},n^{x_{3i+1}})= O(n^{x_{3i+1}})$.
\end{itemize}
We will show how to conclude in the first case. The other cases are very similar and we omit the details of their proofs.
If $u_2(n^{x_{3i+1}},n^{x_{3i+2}})= O(n^{x_{3i+2}})$ then $n^{\alpha_{3i+2}}=O(1)$ by \eqref{eqkey}. Every chain $(r_1,\dots,r_{k+1})\in \mathcal C_k(Q_1,\ldots, Q_{k+1})$
can be obtained as follows.
\vspace{0.3cm}
\begin{enumerate}
\item Pick a $(3i-2)$-chain $(r_1,\dots,r_{3i-1})$ with $r_j\in Q_j$ for every $j$.
\vspace{0.2cm}
\item Pick a $(k-3i-1)$-chain $(r_{3i+2},r_{3i+3},\dots,r_{k+1})$ with $r_j\in Q_j$ for every $j$.
\vspace{0.2cm}
\item Extend $(r_{3i+2},r_{3i+3},\dots,r_{k+1})$ to a $(k-3i)$-chain $(r_{3i+1},r_{3i+2},\dots,r_{k+1})$.
\vspace{0.2cm}
\item Connect $(r_1,\dots,r_{3i-1})$ and $(r_{3i+1},r_{3i+2},\dots,r_{k+1})$ to obtain a $k$-chain.
\end{enumerate}
In the first step, we have $O\left(n^{\frac{3i-3}3+\varepsilon}\cdot u_2(n)\right)$ choices by induction on $k$. In the second step, we have $\tilde O\left(n^{\frac{k-3i+2}{3}}\right)$ choices by the $k\equiv 0$ (mod $3$) case of Theorem~\ref{main}. In the third step, we have at most $n^{\alpha_{3i+2}+\varepsilon}= O(n^{\varepsilon})$ choices. Finally, in the fourth step we have at most $2$ choices. Thus the number of $k$-chains is at most \[O\left(n^{\frac{3i-3}{3}+\varepsilon}\cdot u_2(n)\right)\cdot \tilde O\left (n^{\frac{k-3i+2}{3}}\right )\cdot O\left(n^{\varepsilon}\right)\cdot 2=O\left(n^{\frac{k-1}{3}+3\varepsilon}\cdot u_2(n)\right),\]
finishing the proof of the first case.
\smallskip
If $u_2(n^{x_{3i+1}},n^{x_{3i+2}})= O(n^{x_{3i+1}})$ then $n^{\beta_{3i+1}}= O( n^{\varepsilon})$ by \eqref{eqkey}.\footnote{This is the key application of \eqref{eqkey}, and the reason why we needed a decomposition with regularity in both directions between the consecutive parts.}
We proceed similarly in this case, but we now count the $k$-chains in $R_1\times\ldots\times R_{k+1}$ instead of in $Q_1\times \ldots \times Q_{k+1}$ (and get an extra factor of $n^{{\varepsilon}}$ in the bound). In all cases, we obtain \eqref{eq61}.
\bigskip
{\bf Case 3: } Neither the assumptions of Case 1 nor those of Case 2 hold. We define four sets $S'$, $S'_+$, $S'_{++}$, and $S'_-$ of indices in $\{2,\ldots, k\}$ as follows. Let
\begin{flalign*}
S'& := \setbuilder{i}{u_2(n^{x_i},n^{x_{i-1}})= O(n^{\frac{2}{3}(x_i+x_{i-1})}) \textrm{ and } u_2(n^{x_{i+1}},n^{x_{i}})= O(n^{\frac{2}{3}(x_{i+1}+x_{i})}) }, \\[7pt]
S'_+& := \Big\{i : u_2(n^{x_i},n^{x_{i-1}})= O(n^{\frac{2}{3}(x_i+x_{i-1})}) \textrm{ and } u_2(n^{x_{i+1}},n^{x_{i}})= O(n^{x_{i}}), \textrm{ or } \\
& \phantomrel{=}{} \phantomrel{=}{} \phantomrel{=}{} \phantomrel{=}{} u_2(n^{x_i},n^{x_{i-1}})= O(n^{x_i}) \textrm{ and } u_2(n^{x_{i+1}},n^{x_{i}})= O(n^{\frac{2}{3}(x_{i+1}+x_{i})})
\Big \}, \\[7pt]
S'_{++}& :=\Big\{i : u_2(n^{x_i},n^{x_{i-1}})= O(n^{x_i}) \textrm{ and } u_2(n^{x_{i+1}},n^{x_{i}})= O(n^{x_{i}})
\Big \}, \textrm{ and} \\[7pt]
S'_-& :=\Big\{i : u_2(n^{x_i},n^{x_{i-1}})= O(n^{\frac{2}{3}(x_i+x_{i-1})}) \textrm{ and } u_2(n^{x_{i+1}},n^{x_{i}})= O(n^{x_{i+1}}), \textrm{ or } \\
& \phantomrel{=}{} \phantomrel{=}{} \phantomrel{=}{} \phantomrel{=}{} u_2(n^{x_i},n^{x_{i-1}})= O(n^{x_{i-1}}) \textrm{ and } u_2(n^{x_{i+1}},n^{x_{i}})= O(n^{\frac{2}{3}(x_{i+1}+x_{i})})
\Big \}.
\end{flalign*}
Since the conditions of Case 2 are not satisfied, we have \[\{2,\dots,k\}\subseteq S'\cup S'_+\cup S'_{++}\cup S'_-.\]
Indeed, for each $i\in\{2,\ldots, k\},$ there are $9$ possible pairs of maxima in \eqref{eq66} with $i,i+1.$ The four sets above encompass $6$ possibilities. In total, there are $4$ possible pairs of maxima in which only the last two terms from \eqref{eq66} are used. For $i \equiv 1,2$ (mod $3$), all of those $4$ are excluded due to the first condition in Case 2 (in fact, then $i\in S'\cup S'_-$). If $i \equiv 0$ (mod $3$), then the second and the third conditions in Case 2 rule out all possibilities but the one defining $S'_{++}$.
From these it directly follows that if $i\in S'_{++}$, then $i-1,i+1\in S'_-$, while if $i\in S'_+$ then one of $i-1,i+1$ is in $S'_-$. (Recall that $i\in S'_+\cup S'_{++}$ only if $i\equiv 0$ (mod $3$).) These together imply
\begin{equation}\label{eq67}
|S'_+|+2|S'_{++}|\le |S'_-|.
\end{equation}
We partition $\{2,\dots,k\}$ using these sets as follows: let $S_- = S'_-$, $S = S'\setminus S'_-$, $S_+ = S'_+\setminus (S'_-\cup S')$ and $S_{++}=\{2,\dots,k\}\setminus (S'_-\cup S'\cup S'_{+})$. Note that the analogue of \eqref{eq67} holds for the new sets. That is, we have
\begin{equation*}\label{eq68}
|S_+|+2|S_{++}|\le |S_-|.
\end{equation*}
Recall that
\begin{equation}\label{24}
C_{k}(R_1,\dots,R_{k+1})
\leq u_2(n^{x_1},n^{x_{2}})\prod_{2\leq i \leq k}\frac{u_2\left (n^{x_{i}},n^{x_{i+1}}\right)}{n^{x_i}}n^{2k\varepsilon}.
\end{equation}
Since the assumptions of Cases 1 and 2 do not hold, we have $2,k\in S$. Indeed, $2,k\not\equiv 0$ \mbox{(mod $3$)} and thus $2,k\notin S_+\cup S_{++}$. Further, if say $k\in S_-=S'_-$ then by the definition of $S'_-$ we either have $u_2(n^{x_{k+1}},n^{x_k}) = O(n)$, or $u_2(n^{x_k},n^{x_{k-1}})=O(n^{x_{k-1}})$. The first case cannot hold since the assumption of Case 1 does not hold. Further, the second case cannot hold either, since it would imply $x_k\le \frac {x_{k-1}}2\le \frac 12$, meaning $u_2(n^{x_{k+1}},n^{x_k}) = O(n)$.
Using $2,k\in S$ and expanding \eqref{24}, we obtain
{\small \begin{equation}\label{first}
C_{k}(R_1,\dots,R_{k+1})\leq
n^{2k\varepsilon}u_2(n^{x_1},n^{x_2})n^{-\frac{1}{3}x_2}n^{\frac{2}{3}x_{k+1}}\prod_{\substack{i\in S,\\i\neq 2}}n^{\frac{1}{3}x_i}\prod_{i\in S_+}n^{\frac{2}{3}x_i}\prod_{i\in S_{++}}n^{x_i}\prod_{i\in S_-}n^{-\frac{1}{3}x_i},
\end{equation}}
and
{\small \begin{equation}\label{second}
C_{k}(R_1,\dots,R_{k+1})\leq
n^{2k\varepsilon}u_2(n^{x_k},n^{x_{k+1}})n^{-\frac{1}{3}x_k}n^{\frac{2}{3}x_1}\prod_{\substack{i\in S,\\ i\neq k}}n^{\frac{1}{3}x_i}\prod_{i\in S_+}n^{\frac{2}{3}x_i}\prod_{i\in S_{++}}n^{x_i}\prod_{i\in S_-}n^{-\frac{1}{3}x_i}.
\end{equation}}
Taking the product of \eqref{first} and \eqref{second} we obtain
\begin{multline*}
C_{k}(R_1,\dots,R_{k+1})^2 \leq \\ n^{4k\varepsilon}\cdot u_2(n^{x_1},n^{x_2})u_2(n^{x_k},n^{x_{k+1}})n^{\frac{2}{3}(x_1+x_{k+1})}\left (\prod_{ \substack{i\in S,\\ i\neq 2,k}} n^{\frac{1}{3}x_i}\prod_{i\in S_+}n^{\frac{2}{3}x_i}\prod_{i\in S_{++}}n^{x_i}\prod_{i\in S_-}n^{-\frac{1}{3}x_i} \right)^2\\[10pt] \leq
n^{4k\varepsilon}\cdot u_2(n,n)^2\cdot n^{2\left(\frac{2}{3}+ \frac{1}{3}|S\setminus\{2,k\}|+\frac{2}{3}|S_+|+|S_{++}|\right ) }= u_2(n,n)^2\cdot n^{\frac{2(k-1)}{3}+4k{\varepsilon}}.
\end{multline*}
The last equality follows from $|S_+|+2|S_{++}|\le |S_-|$, which is equivalent to $\frac 23|S_+|+|S_{++}|\le \frac 13(|S_+|+|S_{++}|+|S_-|)$, and from the fact that $S$, $S_+$,$S_{++}$, and $S_-$ partition $\{2,\dots,k\}$. This finishes the proof.
\section{Bounds in \texorpdfstring{$\bm{\mathbb{R}^3}$}{R3}}
Similarly to the planar case, for a fixed $\bm\delta=(\delta_1,\dots,\delta_k)$ and $P_1,\dots,P_{k+1}\subseteq \mathbb{R}^3$ we denote by $\mathcal{C}_k^3(P_1,\dots,P_{k+1})$ the family of $(k+1)$-tuples $(p_1,\dots,p_{k+1})$ with $p_i\in P_i$ for all $i\in[k+1]$ and with $\|p_i-p_{i+1}\|=\delta_i$ for all $i\in[k]$. Let $C_k^3(P_1,\dots,P_{k+1})=|\mathcal{C}_k^{3}(P_1,\dots,P_{k+1})|$ and
\[C^3_k(n_1,\dots,n_{k+1})=\max C_k^{3}(P_1,\dots,P_{k+1}),\]
where the maximum is taken over all choices of $P_1,\ldots, P_{k+1}$ subject to $|P_i|\le n_i$ for all $i\in [k+1]$.
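For small instances the quantity $C_k^3(P_1,\dots,P_{k+1})$ can be computed by brute force; the following sketch (our own illustration, exponential in $k$ and feasible only for tiny point sets) is a direct transcription of the definition and works in any dimension.

```python
from itertools import product
from math import dist  # Euclidean distance between equal-length points, Python 3.8+

def count_chains(parts, deltas, tol=1e-9):
    """Count (k+1)-tuples (p_1,...,p_{k+1}) with p_i taken from parts[i-1]
    and ||p_i - p_{i+1}|| = deltas[i-1] for every consecutive pair."""
    return sum(
        all(abs(dist(t[i], t[i + 1]) - d) < tol for i, d in enumerate(deltas))
        for t in product(*parts)
    )
```

For example, with $P_1=\{(0,0,0)\}$, $P_2=\{(1,0,0),(0,1,0)\}$ and $\bm\delta=(1)$ there are exactly two $1$-chains.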
Similarly to the planar case it follows that $C^3_k(n)\leq C^3_k(n,\dots,n)\leq C^3_k\left((k+1)n\right)$. Since we are only interested in the order of magnitude of $C^3_k(n)$ for fixed $k$, sometimes we are going to work with $C^3_k(n,\dots,n)$ instead of $C^3_k(n)$.
\subsection{Lower bounds}
For completeness, we recall the constructions from \cite{Shef} for even $k\geq 2$. Let $\bm{\delta}=(\delta_1,\dots,\delta_k)$ be any given sequence. For every even $2\leq i\leq k$, let $P_i=\{p_i\}$ be a single point such that the sphere of radius $\delta_i$ centred at $p_i$ and the sphere of radius $\delta_{i+1}$ centred at $p_{i+2}$ intersect in a circle. Further, let $P_1$ be a set of $n$ points contained in the sphere of radius $\delta_1$ centred at $p_2$, and $P_{k+1}$ be a set of $n$ points contained in the sphere of radius $\delta_k$ centred at $p_k$.
Finally, for every odd $3\leq i \leq k-1$, let $P_i$ be a set of $n$ points contained in the intersection of the sphere of radius $\delta_{i-1}$ centred at $p_{i-1}$ and of the sphere of radius $\delta_{i}$ centred at $p_{i+1}$. Then $P_1\times\dots\times P_{k+1}$ contains $n^{\frac{k}{2}+1}$ many $k$-chains, since every element of $P_1\times\dots\times P_{k+1}$ is a $k$-chain, and $|P_1\times\dots\times P_{k+1}|=n^{\frac{k}{2}+1}$.
Next, we prove the lower bounds for odd $k\geq 3$ and $\bm\delta=(1,\dots,1)$ given in Proposition \ref{oddk}.
\begin{proof}[Proof of Proposition \ref{oddk}]
First we show that $C_k^3(n)=\Omega\left (\frac{u_3(n)^k}{n^{k-1}}\right )$. Take a set $P'\subset {\mathbb R}^3$ of size $n$ that contains $u_3(n)$ point pairs at unit distance apart. It is a standard exercise in graph theory to show that since $u_3(n)$ is superlinear, there is $P\subset P'$ such that $\frac{n}{2}\leq |P|\leq n$ and for every $p\in P$ there are at least $\frac{u_3(n)}{4n}$ points $p'\in P$ at distance $1$ from $p$. Then $P$ contains $\Omega\left (\frac{u_3(n)^k}{n^{k-1}}\right )$ many $k$-chains with $\bm\delta=(1,\dots,1)$.
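The `standard exercise' invoked above is the usual minimum-degree pruning; a minimal sketch (our own illustration, with the graph given as an adjacency dictionary):

```python
def prune_min_degree(adj, threshold):
    """Repeatedly delete vertices of degree < threshold; return the set of
    surviving vertices. If the graph has m edges, threshold ~ m/(2n) removes
    fewer than m/2 edges in total, so a nonempty subgraph of minimum degree
    at least the threshold survives."""
    adj = {v: set(ns) for v, ns in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) < threshold:
                for u in adj.pop(v):
                    adj[u].discard(v)
                changed = True
    return set(adj)
```

For instance, pruning a triangle with one pendant vertex at threshold $2$ leaves exactly the triangle.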
To prove $C_k^3(n)=\Omega\left (us_3(n)n^{(k-1)/2}\right)$, we modify and extend the construction used for $k-1$ as follows. Let $P_1,\dots,P_{k-1}$ be as in the construction for $(k-1)$-chains with $\bm\delta=(1,\dots,1)$ (from the even case). Further, let $P_{k}$ be a set of $n$ points on the unit sphere around $p_{k-1}$, and $P_{k+1}$ be a set of $n$ points such that $u_3(P_k,P_{k+1})=us_3(n)$. Since every $(p_1,\dots,p_{k+1})\in P_1\times \dots \times P_{k+1}$ with $\|p_k-p_{k+1}\|=1$ is a $k$-chain, we obtain that $P_1\times\dots\times P_{k+1}$ contains $\Omega\left (us_3(n)n^{(k-1)/2}\right)$ many $k$-chains with $\bm\delta=(1,\dots,1)$.
\end{proof}
\subsection{Upper bound}
We again fix $\bm\delta=(\delta_1,\dots,\delta_k)$ throughout the section.
The following result with $x=1$ implies the upper bound in Theorem~\ref{3d}.
\begin{thm}\label{3dbound} For any fixed integer $k\geq 0$ and $x\in [0,1]$, we have
\begin{equation*}
C^3_k(n^{x},n,\ldots,n)=\tilde O\left (
n^{\frac{k+1+x}{2}}\right ).
\end{equation*}
\end{thm}
\begin{proof}
The proof is by induction on $k$. For $k=0$ the bound is trivial, and for $k=1$ it follows from \eqref{eq3d}.
For $k\geq 2$ let $P_1,\dots,P_{k+1}\subseteq \mathbb{R}^3$ be sets of points satisfying $|P_1|=n^x$, and $|P_i|=n$ for $2\leq i \leq k+1$. Denote by $P_2^{\alpha}\subseteq P_2$ the set of those points in $P_2$ that are at least $n^{\alpha}$-rich but at most $2n^{\alpha}$-rich with respect to $P_1$ and $\delta_1$.
It is not hard to see that
\[\mathcal{C}^3_k(P_1,P_2\dots,P_{k+1})\subseteq \bigcup_{\alpha\in \Lambda} \mathcal{C}^3_k(P_1,P_2^{\alpha},P_3,\ldots,P_{k+1}),\]
where $\Lambda:= \{\frac{i}{\log n}: i= 0,1,\ldots, \lfloor \log n\rfloor\}$. Since $|\Lambda| = \tilde O(1)$, it is sufficient to prove that, for every $\alpha\in \Lambda,$ we have
\begin{equation*}\label{alpha3}
C^3_k(P_1,P_2^{\alpha},P_3,\ldots, P_{k+1})= \tilde O\left ( n^{\frac{k+1+x}2}\right ).
\end{equation*}
Assume that $|P^{\alpha}_2| = n^y.$ The number of $(k-1)$-chains in $P_2^{\alpha}\times P_3\times \dots\times P_{k+1}$ is at most $C^3_{k-1}(n^{y},n,\dots,n)$, and each of them may be extended in $2n^{\alpha}$ ways. By induction, we get
\[C^3_k(P_1,P_2^{\alpha},P_3,\ldots, P_{k+1})= \tilde O\left(n^{\alpha}\cdot n^{\frac{k+y}{2}}\right),\]
and we are done as long as \begin{equation}\label{eqtocheck}2\alpha+k+y\le k+1+x.\end{equation}
To show this, we need to consider several cases depending on the value of $\alpha.$
Note that $\alpha\le x$.
\vspace{0.1cm}
\begin{itemize}
\item If $\alpha\geq \frac{2x}{3}$, then by \eqref{richness} we have $y\leq x-\alpha$, and the LHS of \eqref{eqtocheck} is at most $\alpha+k+x\le 1+k+x$.
\vspace{0.1cm}
\item If $\frac x2 \le \alpha\le \frac{2x}{3}$ then by \eqref{richness} we have $y\leq 3x-4\alpha.$ The LHS of \eqref{eqtocheck} is at most $k+3x-2\alpha\le k+2x\le k+1+x$.
\vspace{0.1cm}
\item If $\alpha\le \frac x2$ then we use a trivial bound $y\leq 1$. The LHS of \eqref{eqtocheck} is at most $2\alpha+k+1\le x+k+1.$
\end{itemize}
\end{proof}
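The three cases above reduce to an elementary inequality in $x$, $\alpha$, $y$. The following numerical sanity check (our own; it encodes the $y$-bounds exactly as they are used in the proof, assuming the quoted richness estimates) confirms \eqref{eqtocheck} on a grid.

```python
def case_bound_holds(x, a, slack=1e-12):
    """y-bound from each case of the proof, followed by the claimed
    inequality 2a + k + y <= k + 1 + x (the k's cancel)."""
    if a >= 2 * x / 3:
        y = x - a
    elif a >= x / 2:
        y = 3 * x - 4 * a
    else:
        y = 1.0
    y = min(y, 1.0)          # trivial bound y <= 1
    return 2 * a + y <= 1 + x + slack

# check on a grid of (x, a) with 0 <= a <= x <= 1
N = 200
assert all(case_bound_holds(i / N, j / N)
           for i in range(N + 1) for j in range(i + 1))
```

The slack only absorbs floating-point error in the equality cases (e.g. $x=\alpha=1$).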
\section{Concluding remarks}
If $\bm{\delta}=(1,\dots,1)$, then the results of this paper concern the maximum number of paths of length $k$ that can be determined by a unit-distance graph on $n$ vertices. As a generalisation, one can study the maximum number of isomorphic copies of a given tree in unit distance graphs. More generally, we propose the following problem. Let $T=(V,E)$ be a tree with $V=\{v_1,\dots,v_{k+1}\}$ and $E=\{(v_{i_1},v_{j_1}),\dots,(v_{i_k},v_{j_k})\}$. For a fixed sequence $\bm{\delta}=\{\delta_1,\dots,\delta_k\},$ a $(k+1)$-tuple of distinct points $(p_1,\dots,p_{k+1})$ in $\mathbb{R}^d$ is a \emph{$T$-tree}, if $\|p_{i_\ell}-p_{j_\ell}\|=\delta_{\ell}$ for every $\ell=1,\ldots, k$. What is the maximum possible number $C_T^d(n)$ of $T$-trees in a set of $n$ points in ${\mathbb R}^d$? For $d=2$ we write $C_T^2(n)=C_T(n)$.
Note that, as in the case of chains, $C_T^d(n) = \Omega\big(n^{|V(T)|}\big)=\Omega(n^{k+1})$ for $d\ge 4$.
However, for $d=2$, $C_T(n)$ depends on $T$, not only on the number of vertices of $T$. Indeed, we saw that if $T$ is a path, then $C_T(n)$ is roughly $n^{k/3}$. At the same time, if $T$ is a star with $k$ leaves then $C_T(n)=\Theta(n^k)$. To see this, fix the centre of the star and distribute the remaining points equally on concentric circles of radii $\delta_1,\ldots, \delta_k$ around the centre of the star. One can similarly find examples for $d=3$ where $C_T^3(n)$ depends on the tree itself.
For some trees determining $C_T(n)$ trivially reduces to determining $C_k(n)$ for some $k$. However, for many other trees the problem seems challenging, and new ideas are needed to tackle it. Subdivisions of stars show that in some cases it might not be possible to determine $C_T(n)$ without knowing $u_2(n)$, even in terms of $u_2(n)$. To see this, let $T_{\ell,3}$ be a star-shaped tree on $3\ell+1$ vertices, with one (central) vertex of degree $\ell$, and $\ell$ paths on $3$ vertices joined to the central vertex. (This tree for $\ell =3$ is depicted on Figure \ref{fig2}, right.)
Generalising the lower bound constructions we used for the chains in two different ways, we can obtain $C_{T_{\ell,3}}(n)=\Omega(u_2(n)^{\ell})$ (by fixing the central vertex of the tree) and $C_{T_{\ell,3}}(n)=\Omega(n^{\ell+1})$ (by fixing all vertices that are neighbours of the leaves). This is illustrated on Figure \ref{fig1} for $\ell=3$. It would be interesting to prove that $C_{T_{\ell,3}}(n)$ is the maximum of these two lower bounds.
\begin{figure}[h]
\centering
{\includegraphics[width=10cm]{TwoTree.pdf}}
\caption{Star-shaped trees.}\label{fig2}
\end{figure}
\begin{figure}[h]
\centering
{\includegraphics[width=16.5cm]{2new.pdf}}
\caption{Examples providing lower bounds for the number of copies of $T_{\ell,3}$ with $\ell=3$. Left: $\Omega(n^{\ell+1})$ copies of $T_{\ell,3}$. Right: $\Omega(u_2(n)^{\ell})$ copies of $T_{\ell,3}$. Vertices of red colour indicate parts (corresponding to vertices of the tree) that consist of a single vertex.}\label{fig1}
\end{figure}
\begin{pro}\label{pro1}Is it true that $C_{T_{\ell,3}}(n)=\Theta(\max\{n^{\ell+1},u_2(n)^{\ell}\})$?
\end{pro}
Motivated by the constructions described before, we propose the following more general question.
\begin{pro}\label{treeconj}Is it true that for every tree $T$ there are integers $m,\ell$ such that $C_T(n)=\Theta(n^mu_2(n)^\ell)$?
\end{pro}
The smallest tree $T$ for which we cannot determine the order of magnitude of $C_T(n)$ is a star-shaped tree on $7$ vertices with one central vertex of degree $3$ and three paths on $2$ vertices joined to the central vertex (see the left tree of Figure \ref{fig2}).
\begin{pro}\label{small}
Is it true that for the star-shaped tree $T$ on $7$ vertices described above we have $C_T(n)=\Theta(n^3)$?
\end{pro}
Note that it may be easier (and also very interesting) to obtain upper bounds in Problems \ref{pro1}--\ref{small} with a poly-logarithmic or $n^{\varepsilon}$ error term, as we did in Theorem \ref{main}.
In a follow-up paper, we find almost sharp bounds for $C_T(n)$ for some `non-trivial' trees (i.e., those that cannot be reduced to the chains case). Further, we will describe a general lower bound construction that motivates Problem \ref{treeconj}. We will also provide some partial results for Problem \ref{small} and connect it to an interesting incidence problem.
Finally, it would be interesting to decide if the lower bounds in Proposition \ref{oddk} are sharp for $\bm\delta=(1,\dots,1)$. The first open case is $k=3$.
\begin{pro}
Is it true that $C_3^3(n)=\Theta\left (\max\left\{ \frac{u_3(n)^3}{n^2},us_3(n)n\right \}\right )$ for $\bm\delta=(1,\dots,1)$?
\end{pro}
\subsection*{Acknowledgements.} We thank Konrad Swanepoel and Peter Allen for helpful comments on the manuscript. We also thank Dömötör Pálvölgyi for suggesting Proposition \ref{Propprob}. The authors acknowledge the financial support from the Russian Government in the framework of MegaGrant no 075-15-2019-1926.
\bibliographystyle{amsplain}
% https://arxiv.org/abs/2112.05094
\title{Weak limits of consecutive projections and of greedy steps}
\begin{abstract}
Let $H$ be a Hilbert space. We investigate the properties of weak limit points of iterates of random projections onto $K\geq 2$ closed convex sets in $H$ and the parallel properties of weak limit points of residuals of random greedy approximation with respect to $K$ dictionaries. In the case of convex sets these properties imply weak convergence in all the cases known so far. In particular, we give a short proof of the theorem of Amemiya and Ando on weak convergence when the convex sets are subspaces. The question of the weak convergence in general remains open.
\end{abstract}
\section*{Introduction}
In what follows $H$ is a real Hilbert space
with scalar product $\langle\,\cdot\,,\cdot\,\rangle$ and norm $|\,\cdot\,|$.
Let $A_1,\dots,A_K$ be closed and convex sets in $H$, $K\geq 2$, so that $A_1\cap\dots\cap A_K=\{0\}$. Let $P_i$ denote the metric projection onto $A_i$. Let $i(n)\in \{1,\dots, K\}$ be a fixed sequence containing each $k\in \{1,\dots, K\}$ infinitely often. For $x_0\in H$, we consider the sequence
\begin{equation}\label{pr}
x_n=P_{i(n)} x_{n-1}, \qquad n=1,2,\dots
\end{equation}
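The iteration (\ref{pr}) is easy to experiment with numerically. Here is a minimal sketch (our own illustration, not from the paper) with two closed balls in the plane that meet only at the origin; since $0$ lies in both sets, each projection is $1$-Lipschitz and fixes $0$, so the norms $|x_n|$ are nonincreasing.

```python
def project_ball(x, c, r):
    """Metric projection of x onto the closed ball B(c, r) in the plane."""
    dx, dy = x[0] - c[0], x[1] - c[1]
    d = (dx * dx + dy * dy) ** 0.5
    if d <= r:
        return x
    return (c[0] + r * dx / d, c[1] + r * dy / d)

# A_1 = B((1,0),1) and A_2 = B((-1,0),1) are tangent: A_1 ∩ A_2 = {0}
balls = [((1.0, 0.0), 1.0), ((-1.0, 0.0), 1.0)]

x = (0.0, 3.0)
norms = [(x[0] ** 2 + x[1] ** 2) ** 0.5]
for n in range(200):
    c, r = balls[n % 2]          # alternating choice of indices i(n)
    x = project_ball(x, c, r)
    norms.append((x[0] ** 2 + x[1] ** 2) ** 0.5)

# monotonicity of the norms (up to floating-point error)
assert all(norms[i + 1] <= norms[i] + 1e-12 for i in range(len(norms) - 1))
```

The iterates creep toward the tangency point $0$; the convergence is slow because the two sets touch only tangentially.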
In the case when the $A_i$ are closed subspaces of $H$, the convergence properties of the sequence $\{x_n\}$ are well understood. If the sequence of indices $\{i(n)\}$ is periodic, then the sequence $\{x_n\}$ converges in norm \cite{N}, \cite{Ha}. The rate of convergence, depending on the position of the subspaces and on the initial point, is known \cite{BaGM1}, \cite{BaGM2}, \cite{BDH}, \cite{BK}. In this context an interplay with the convergence properties of the greedy algorithm was discovered recently \cite{BK}.
If no extra information about the indices or about the position of the subspaces is available, then already for $K=3$ divergence might occur \cite{P}, \cite{KM}, \cite{KP}, \cite{K20}. The sequence $\{x_n\}$, however, always converges weakly to zero \cite{AA}.
In the lack of linearity, when the sets $A_i$ are just closed and convex, the situation is different.
Already for $K=2$ the sequence $\{x_n\}$ might diverge in norm, although the sequence of indices is inevitably periodic \cite{H},\cite{K08},\cite{MR}.
Weak convergence is known only under additional conditions: when $K\leq 3$ \cite{DR}, or when the indices are periodic \cite{Bre}, or when the sets are ``somewhat symmetric" \cite{DR}.
We denote by
$W=W(x_0)$ the set of all partial weak limits of the sequence (\ref{pr}), and pose the following problem.
\begin{problem}\label{problem1}
Is it true that $W=\{0\}$?
\end{problem}
We investigate the structure of the set $W$ and give new short proofs of the weak convergence in all
of the cases mentioned above. In particular, we give a short proof of the theorem of Amemiya and Ando on weak convergence when the convex sets are subspaces.
The general case remains, however, open.
In the spirit of \cite{BK}, we establish an interplay with the weak convergence problem of greedy approximation with respect to $K$ dictionaries. The structural properties of the set of weak partial limits of this greedy approximation turn out to be the same. We have hit the same bounds of knowledge while seeking weak convergence.
\section{Projections on convex sets}\label{sec:convex}
Let $A_1,\dots,A_K$ be closed and convex sets in $H$, $K\geq 2$, so that $A_1\cap\dots\cap A_K=\{0\}$. Let $i(n)\in \{1,\dots, K\}$ be a fixed sequence containing each $k\in \{1,\dots, K\}$ infinitely often and let the sequence $\{x_n\}$ be defined by (\ref{pr}). We assume $i(n)\not=i(n+1)$ without loss of generality.
We study the structure of the set $W=W(x_0)$ of all partial weak limits of the sequence $\{x_n\}$.
Since the nearest point projection onto a closed convex set is a $1$-Lipschitz mapping, and each $P_i$ fixes the common point $0$, the norms
$|x_n|$ decrease and the set $W$ is always nonempty. We may assume that $|x_n|\searrow R>0$, as $R=0$ implies convergence in norm and hence $W=\{0\}$.
For $w\in W$, we denote by $J(w)$ the maximal subset of $\{1,\dots, K\}$ such that $w\in A_{J(w)}$. Here we use the notation $A_J=\cap_{j\in J}A_j$.
Since $|x_n-x_{n-1}|^2\leq |x_{n-1}|^2-|x_n|^2$ and $x_n\in A_{i(n)}$, we have
\begin{equation}\label{dist}
{\rm dist\,} (x_n,A_{i(n\pm m)})\to 0 \qquad (n\to \infty)
\end{equation}
for any fixed $m$. Therefore $|J(w)|\geq 2$ for each $w\in W$, and $W$ is a weakly closed subset of $\cup_{|J|\geq 2}A_J \cap B(0,R)$, where $B(0,R)$ is the ball of radius $R$ centered at $0$.
If $w\not=0$, then $|J(w)|<K$, since $\bigcap A_i=\{0\}$.
It also follows from (\ref{dist}) that in case $i(n)\equiv n ({\rm mod}\, K)$ of alternating projections we have $J(w)=\{1,\dots,K\}$ for each $w$, and hence $W=\{0\}$. In particular, if we have just two convex sets then the sequence $\{x_n\}$ converges weakly.
Next we show that if $W$ contains an element of maximal norm, then $W=\{0\}$.
\begin{theorem}\label{theorem1}
For each $w\in W$, $w\not=0$, one can find another element $w'\in W$ with the following properties:
\begin{itemize}
\item[(i)] $|J(w')\setminus J(w)|\geq 1$;
\item[(ii)] $|J(w')\cap J(w)|\geq 2$;
\item[(iii)] $|J(w')|\geq 3$;
\item[(iv)] $\langle w'-w,a\rangle\geq 0$ for every $a\in A_{J(w)}$.
\end{itemize}
In particular $|w'|>|w|$ in view of (i), since
$\langle w'-w,w\rangle\geq 0$, hence also $|w'|^2\geq |w|^2+|w-w'|^2$.
\end{theorem}
\begin{proof}
(i) Let
$$
x_{n_k}\rightharpoonup w, \qquad i(n_k)\in J(w).
$$
Taking a subsequence of $k$'s if needed, we can choose $q\notin J(w)$ with the following property:
for any $k$ there is a number $m_k\in (n_k, n_{k+1})$ with $i(m_k)=q$, so that
for any $n\in [n_k, m_k)$ we have $i(n)\in J(w)$, and hence $i(n)\not= q$.
Again taking a subsequence of $k$'s if needed, we get $x_{m_k}\rightharpoonup w'$, and this is the definition of $w'$.
Clearly $w'\in A_q$, hence $J(w')\ni q $ and (i) holds.
(ii) The numbers $i(m_k-1)$ and $i(m_k-2)$ belong to $J(w)$ and are distinct. We choose two different numbers $i,j\in J(w)$ so that $i(m_k-1)=i$ and $i(m_k-2)=j$ for infinitely many $k$'s. In view of (\ref{dist}) this implies $w'\in A_i\cap A_j$, hence $i,j \in J(w')\cap J(w) $ and (ii) holds.
The property (iii) follows from (i) and (ii).
(iv) For any $a\in A_{J(w)}$, we have
$$
\langle w'-w,a\rangle = \lim_{k\to \infty} \langle x_{m_k}-x_{n_k},a\rangle
$$
$$
= \lim_{k\to \infty} \sum_{n=n_k+1}^{m_k}\langle x_{n}-x_{n-1},a\rangle= \lim_{k\to \infty} \sum_{n=n_k+1}^{m_k-1}\langle x_{n}-x_{n-1},a\rangle
$$
$$
= \lim_{k\to \infty}\frac{1}{2} \sum_{n=n_k+1}^{m_k-1}(| x_{n-1}-a|^2-|x_n-a|^2 + |x_n|^2 -|x_{n-1}|^2)
$$
$$
=\frac{1}{2} \lim_{k\to \infty} \left(\sum_{n=n_k+1}^{m_k-1}(| x_{n-1}-a|^2-|P_{i(n)}x_{n-1}-P_{i(n)}a|^2 ) + |x_{m_k-1}|^2 -|x_{n_k}|^2\right)\geq 0,
$$
since every term in the sum is non-negative and $\lim_{k\to \infty}|x_{m_k-1}|=\lim_{k\to \infty}|x_{n_k}|= R$.
\end{proof}
\begin{remark}\label{remark1}
The inequality (iv) holds for $a\in A_{J(w,w')}$, where $J(w,w')=\{i(n): n\in [n_k,m_k-1], k=1,2,\dots\}$.
Since $J(w,w')\subset J(w)$, $A_{J(w,w')}$ can be strictly larger than $A_{J(w)}$.
\end{remark}
The following corollary is a special case of Theorem~2 of \cite{DR}; our proof is different.
\begin{corollary}\label{cor1.1}
If $K\leq 3$, then $W=\{0\}$.
\end{corollary}
\begin{proof}
The case $K=2$ was settled above, before Theorem~\ref{theorem1}.
Assume that $K=3$ and that there is $w\in W\setminus \{0\}$. By Theorem \ref{theorem1} there is $w'\in W$ with $|w'|>|w|$ and $J(w')=\{1,2,3\}$. Hence $w'=0$, which is a contradiction.
\end{proof}
Assume all the convex sets $A_i$ are cones. Assume, moreover, that the intersection of any triple
of these cones with the unit sphere has a positive distance to the intersection of any other triple.
Then $W=\{0\}$ according to the next corollary.
\begin{corollary}\label{cor1.2}
Suppose for every $r>0$ there exists $\delta(r)>0$ so that for any two different triples $\{i,j,k\}$
and $\{i,j,l\}$ and elements $u\in A_{\{i,j,k\}}\cap S(0,r)$, $v\in A_{\{i,j,l\}}\cap S(0,r)$ we have $|u-v|>\delta(r)$. Then $W=\{0\}$.
\end{corollary}
\begin{proof}
Suppose $W\not=\{0\}$. Using Theorem \ref{theorem1}, we construct a sequence $w_n\in W$ so that $w_1\not=0$, $w_{n+1}=w_n'$, $|J(w_n)|\ge 3$, $J(w_n)\not= J(w_{n+1})$ and $|J(w_n)\cap J(w_{n+1})|\ge 2$ for each $n$. So we get $w_n\in A_{\{i,j,k\}} $ and $w_{n+1}\in A_{\{i,j,l\}} $ for some $i$, $j$ and $k\not= l$ depending on $n$.
Since the sequence $|w_n|$ is bounded and increasing,
let $r=\lim_{n\to \infty} |w_n|$.
Hence, $u_n=rw_n/(2|w_n|)\in A_{\{i,j,k\}}$ for all sufficiently large $n$.
Denoting $w_n=(1+t_n)u_n$, $t_n>0$, for those $n$ we have
$$
\begin{array}{rcl}
|w_{n+1}|^2&\ge & |w_n|^2+|w_{n+1}-w_n|^2 =\\
&=& |w_n|^2+|(1+t_{n+1})u_{n+1}-(1+t_n)u_n|^2 \\
&\ge & |w_n|^2+|u_{n+1}-u_n|^2+2\langle u_{n+1}-u_n, t_{n+1}u_{n+1}-t_nu_n \rangle\\
&\ge &|w_n|^2+|u_{n+1}-u_n|^2+ 2(r/2)^2(t_{n+1}+t_n-(t_{n+1}+t_n))\\
&=&|w_n|^2+|u_{n+1}-u_n|^2>|w_n|^2+\delta(r/2)^2.
\end{array}
$$
That means, however, that $|w_n|$ is unbounded, a contradiction.
\end{proof}
\begin{theorem}\label{theorem2}
If $W\not=\{0\}$, then one can find two different elements $w,w'\in W$ so that
$$
w={\rm weak}\lim_{k\to \infty} x_{n_k}, \qquad w'={\rm weak}\lim_{k\to \infty} x_{m_k},
$$
where $n_1<m_1<n_2<m_2<\dots$, and $i(n)\in J(w)\cap J(w')$ for any $n\in \cup_k (n_k,m_k)$. Consequently,
$(i)$ $\langle w'-w,a\rangle\geq 0$ for every $a\in A_{J(w)}$,
$(ii)$ $\langle w'-w,b\rangle\geq 0$ for every $b\in A_{J(w')}$.
\end{theorem}
\begin{proof}
Both inequalities (i) and (ii) follow from the first statement of the Theorem. The proof follows that of (iv) of Theorem \ref{theorem1}: all projections between $n_k$ and $m_k$ have indices from $J(w)\cap J(w')$.
To prove the first statement we take $w$ and $w'$ from Theorem \ref{theorem1}. All indices in $J=\{i(n): n\in \cup_k (n_k,m_k)\}$ belong to $J(w)$ by the proof of Theorem \ref{theorem1}. If $J\subset J(w')$, we are done. Otherwise we define $\nu_k$ as the largest number $n\in (n_k, m_k)$ such that $i(n)\notin J(w')$.
By passing to a subsequence of $k$'s so that all the $i(\nu_k)$ coincide, we get $x_{\nu_k}\rightharpoonup v\not= w'$. Then we redefine $w:=v$, $n_k:=\nu_k$. The renewed set $J=\{i(n): n\in \cup_k (n_k,m_k)\}$ is now a subset of $J(w')$, and the number of its elements has decreased by at least one. If this new $J$ is also contained in the new $J(w)$, we stop. Otherwise we now choose $\nu_k$ as the least number $n\in (n_k, m_k)$ such that $i(n)\notin J(w)$, and redefine $w'$ and the $m_k$'s accordingly. Since $|J|$ decreases at each step, this oscillation process stops after finitely many steps: $|J|$ cannot become less than $2$. In the case $|J|=2$ obviously $J\subset J(w)\cap J(w')$.
\end{proof}
Dye and Reich used in \cite{DR} the so-called weak internal points (WIP) of a convex set to prove a
nonlinear result that properly contains the
original linear theorem of Amemiya and Ando: if all the $K$ closed convex sets are linear subspaces then the sequence $\{x_n\}$ converges weakly \cite{AA}. In our version of the theorem we assume that
zero is a WIP in each of the convex sets $A_k$. Again, the result is a special case of Theorem~5 of \cite{DR}; our proof is different.
\begin{corollary}\label{cor2.1}
Assume that zero is a weak internal point of each of the $K$ convex sets $A_k$: if $a\in A_k$ then
$-\lambda a\in A_k$ for some $\lambda=\lambda(a,k)>0$. Then $W=\{0\}$.
In particular, if all $A_k$ are closed linear subspaces of $H$ then the sequence (\ref{pr}) converges weakly.
\end{corollary}
\begin{proof}
Assuming $W\neq \{0\}$
we take the two different elements $w,w'\in W$ from Theorem \ref{theorem2}.
Using (i) of Theorem \ref{theorem2} for $a=w$ gives $\langle w'-w,w\rangle\geq 0$ and $|w'|^2\geq |w|^2+|w-w'|^2$.
Using (ii) of Theorem \ref{theorem2} for $b=-\lambda w'$ gives $\langle w'-w,-\lambda w'\rangle\geq 0$ and $|w|^2\geq |w'|^2+|w-w'|^2$.
Hence $w=w'$, which is a contradiction.
\end{proof}
\section{Parallels between projecting onto convex sets and greedy approximation}
A subset $D$ of the unit sphere $S(H)$ of the Hilbert space $H$ is called a {\em dictionary} if its span is dense in $H$.
Assume, moreover, that $D$ does not lie in a half-space: for any nonzero $v\in H$, there exists $g\in D$ such that $\langle v,g\rangle>0$.
The greedy approximation algorithm then generates for $D$ and for any element $x=x_0\in
H$ the sequence
\begin{equation}
\label{eq1}
x_{n+1}=x_n-\langle x_n,g_{n+1}\rangle g_{n+1}, \qquad n=0,1,\dots,
\end{equation}
where the element $g_{n+1}\in D$ is such that
$$
\langle x_n,g_{n+1}\rangle=\max\{\langle x_n,g\rangle\colon g\in D\}.
$$
The existence of $\max\{\langle x,g\rangle\colon g\in D\}$ for every $x\in H$ is an additional condition on $D$. If the maximum is attained on several elements of $D$, any of them is selected as $g_{n+1}$.
More precisely, this algorithm is called the \textit {pure greedy} algorithm, in contrast to other approximation algorithms whose names contain the word ``greedy'', see~\cite{T}.
For any symmetric dictionary $D$ the pure greedy algorithm converges in norm, see~\cite[Ch.~2]{T}.
That is, $x_n\to 0$ for any initial element $x=x_0$, and $x$ is represented as a norm-convergent series
$\sum_{n=0}^\infty \langle x_n,g_{n+1}\rangle g_{n+1}$. If $D$ is not symmetric, the greedy algorithm may diverge in norm~\cite{B21}, although it always converges weakly to zero~\cite{B20}.
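For a symmetric dictionary this norm convergence is easy to observe numerically; the sketch below (the dictionary, starting point, and the helper \texttt{pure\_greedy} are illustrative choices of ours, not taken from~\cite{T}) runs the pure greedy algorithm (\ref{eq1}) in $\mathbb{R}^2$:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def pure_greedy(x0, dictionary, n_steps):
    # x_{n+1} = x_n - <x_n, g_{n+1}> g_{n+1}, with g_{n+1} maximizing <x_n, g>.
    x = tuple(x0)
    for _ in range(n_steps):
        g = max(dictionary, key=lambda h: dot(x, h))
        t = dot(x, g)
        x = tuple(xi - t * gi for xi, gi in zip(x, g))
    return x

s = 1 / math.sqrt(2)
# Symmetric dictionary: g in D implies -g in D; its span is all of R^2.
D = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0), (s, s), (-s, -s)]
residual = pure_greedy((0.7, -0.3), D, 100)
```

Since $\pm e_1,\pm e_2\in D$, each step removes at least half of the squared norm of the residual, so $100$ steps suffice here.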
Several details of the divergence construction in~\cite{B21} turn out to be similar to those of~\cite{K08}. The ``bridge'' between these two seemingly different examples is the theorem of Moreau \cite{M}:
\begin{equation}\label{moreau}
P_A(x)=x-P_{A^*}(x)
\end{equation}
for any $x\in H$, any convex cone $A\subset H$ and its polar cone
$$
A^*=\{y\in H: \langle y, z\rangle\le 0 \quad \forall z\in A\}.
$$
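Moreau's identity can be checked directly on the simplest example: for the cone $A$ equal to the nonnegative orthant of $\mathbb{R}^3$, the polar cone $A^*$ is the nonpositive orthant, and both projections are componentwise clippings (an illustrative sketch of ours, not tied to the constructions of \cite{H} or \cite{K08}):

```python
def proj_nonneg(x):
    # Nearest point of the nonnegative orthant: clip negative coordinates.
    return [max(xi, 0.0) for xi in x]

def proj_nonpos(x):
    # Nearest point of the polar cone (the nonpositive orthant).
    return [min(xi, 0.0) for xi in x]

x = [1.5, -2.0, 0.25]
pA, pAstar = proj_nonneg(x), proj_nonpos(x)
# Moreau: x = P_A(x) + P_{A*}(x), with the two parts orthogonal.
decomposition_ok = all(abs(a - (b + c)) < 1e-12 for a, b, c in zip(x, pA, pAstar))
orthogonal = abs(sum(b * c for b, c in zip(pA, pAstar))) < 1e-12
```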
Recall that both papers~\cite{H} and \cite{K08} provide examples of convex cones $A_1,A_2\subset H$ so that $A_1\cap A_2=\{0\}$ and alternating projections on those cones diverge in norm for certain starting elements. The formula (\ref{moreau}) allows us to interpret this result as an example of a divergent greedy algorithm with respect to the dictionary $D=(A_1^*\cup A_2^*)\cap S(H)$. Indeed, $D$ does not lie in a half-space as $A_1\cap A_2=\{0\}$, and for any greedy residual $x_n$ lying in, say, $A_1$, we have
$$
\max\{\langle x_n,g\rangle\colon g\in D\}=\max\{\langle x_n,g\rangle\colon g\in A_2^*\cap S(H)\},
$$
so that $x_{n+1}=x_n-P_{A_2^*}(x_n)=P_{A_2}(x_n)\in A_2$. Thus the author of~\cite{B21} didn't have to reinvent the wheel:~\cite{H} and \cite{K08} both provided the example he needed.
However, the example in \cite{B21} is simpler than those of \cite{H} and \cite{K08}: it uses a discrete dictionary without the extra care needed to build it of convex cones.
The above parallels between projecting onto convex sets and greedy approximation have already been noticed in \cite{BK} in the special case of subspaces. In the context of this paper, these parallels
bring up the question of weak divergence of random greedy steps with respect to several dictionaries. This problem is considered in the next section. It turns out to have the same ``bounds of knowledge'' as the problem of the weak divergence of random projections onto several convex sets.
\section{Greedy approximation with respect to several dictionaries}
Let $K\ge 2$, $D_1,\dots,D_K$ be subsets of $S(H)$ so that
their union $\bigcup_{i=1}^K D_i$ is contained in no half-space: for any nonzero $v\in H$, there exists $g\in \bigcup_{i=1}^K D_i$ such that $\langle v,g\rangle>0$. This implies that the set $\bigcup_{i=1}^K D_i$ is spanning; we will call here the sets $D_i$ dictionaries.
Assume that for each $x\in H$ and each $i\in \{1,\dots,K\}$ the following condition holds: if $\sup_{g\in D_i} \langle x, g \rangle> 0$, then the supremum is attained on some element $g_i(x)\in D_i$. If it is attained at several elements of $D_i$, then
we denote by $g_i(x)$ any one of them. If $\sup_{g\in D_i} \langle x, g \rangle \le 0$, we put $g_i(x)=0$.
Clearly, our assumption means that the set $\Lambda(D_i)=\{\lambda g: \lambda\ge 0, g\in D_i\} $ is proximal, and the element $\langle x, g_i(x)\rangle g_i(x)$ belongs to the metric projection $P_{\Lambda(D_i)}(x)$.
Let $G_i$ denote the mapping corresponding to one step of the greedy algorithm with respect to the dictionary $D_i$:
$$
G_i(x)=x-\langle x, g_i(x)\rangle g_i(x).
$$
Note that
\begin{equation}\label{Pyth1}
|G_i(x)|^2= |x|^2- |x-G_i(x)|^2.
\end{equation}
Let $i(n)\in \{1,\dots, K\}$ be a fixed sequence containing each $k\in \{1,\dots, K\}$ infinitely often and such that $i(n)\neq i(n+1)$ for all $n\in \mathbb N$. For $x_0\in H$, we consider the sequence
$$
x_n=G_{i(n)} x_{n-1}, \qquad n=1,2,\dots.
$$
As we have already mentioned above, this sequence may diverge in norm even in the case of a single dictionary. Both examples in \cite{H} and \cite{K08} can be interpreted as norm divergence examples of residuals $x_n$ for alternating greedy steps with respect to two dictionaries. So we are interested in weak convergence, just as in the case of projections.
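A minimal finite-dimensional illustration of this iteration (the dictionaries, starting points, and helper names below are our ad hoc choices): three small dictionaries in $\mathbb{R}^2$ whose union lies in no half-space, with the idle step $g_i(x)=0$ occurring whenever $\sup_{g\in D_i}\langle x,g\rangle\le 0$:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def greedy_step(x, dictionary):
    # One step G_i: remove the best component if sup <x, g> > 0; otherwise
    # leave x unchanged (the convention g_i(x) = 0).
    g = max(dictionary, key=lambda h: dot(x, h))
    t = dot(x, g)
    if t <= 0:
        return x
    return tuple(xi - t * gi for xi, gi in zip(x, g))

# The union of the three dictionaries lies in no half-space of R^2.
dicts = [[(1.0, 0.0)], [(0.0, 1.0)], [(-1.0, 0.0), (0.0, -1.0)]]

def run(x0, n_steps):
    x = tuple(x0)
    for n in range(1, n_steps + 1):   # cyclic order i(n) = n mod 3
        x = greedy_step(x, dicts[n % 3])
    return x

r1 = run((0.7, -0.3), 12)
r2 = run((-0.4, 0.9), 12)
```

In this toy example the cyclic residuals reach $0$ after a few steps, several of them idle.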
Denoting $W=W(x_0)$ the set of all partial weak limits of the sequence $\{x_n\}$, we face
\begin{problem}\label{problem2}
Is it true that $W=\{0\}$?
\end{problem}
We may assume $x_n\not= x_{n-1}$ for all $n$, that is, $\sup_{g\in D_{i(n)}} \langle x_{n-1}, g \rangle> 0$.
According to (\ref{Pyth1}),
\begin{equation}\label{pyth}
|x_{n+1}|^2=|x_{n}|^2-|x_n-x_{n+1}|^2,
\end{equation}
hence the norms $|x_n|$ are decreasing. We may assume that $|x_n|\searrow R>0$, since $R=0$ implies $W=\{0\}$.
We define the closed convex cones
$$
A_i=\{y\in H: \langle y, g \rangle\le 0 \mbox{ for all } g\in D_i\}, \qquad i=1,\dots, K.
$$
Notice that $A_i$ is the polar cone of $\overline{{\rm conv\,}}\Lambda(D_i)$.
As in Section~\ref{sec:convex}, for $w\in W$, we denote by $J(w)$ the maximal subset of $\{1,\dots, K\}$ such that $w\in A_{J(w)}$, and again use the notation $A_J=\cap_{j\in J}A_j$. Let us nevertheless stress that
the set $W$ is here the result of greedy approximation with respect to the dictionaries $D_1, \dots, D_K$.
Let us prove that $|J(w)|\geq 2$ for each $w\in W$. The convergence $x_{n_j}\rightharpoonup w$ implies the convergence $x_{n_j+m}\rightharpoonup w$
for any fixed $m$, since $\lim_{i\to \infty}|x_i-x_{i+m}|=0$ by (\ref{pyth}). Suppose the sequence $i(n_j+m)$ contains some $k$ infinitely often. If $w\notin A_k$, then $ \langle w, g \rangle > \delta> 0$ for some $g\in D_k$, which yields $ \langle x_{n_j+m}, g \rangle > \delta$ for all sufficiently large $j$, so that $|x_{n_j+m+1}|^2\leq |x_{n_j+m}|^2- \delta^2$ for such $j$ with $i(n_j+m)=k$, contradicting $|x_n|\searrow R>0$. So we get $w\in A_k$, and since one can find at least two such $k$'s using different $m$'s, we arrive at $|J(w)|\geq 2$.
The same argument shows that in the case of the cyclic alternating greedy algorithm, $i(n)\equiv n \,({\rm mod}\, K)$, we have $J(w)=\{1,\dots,K\}$ for each $w$, and hence $W=\{0\}$.
Thus, $W$ is a weakly closed subset of $\cup_{|J|\ge 2}A_J \cap B(0,R)$.
\begin{theorem}\label{theorem3}
For each $w\in W$, $w\not=0$, one can find another element $w'\in W$ with the following properties:
\begin{itemize}
\item[(i)] $|J(w')\setminus J(w)|\geq 1$;
\item[(ii)] $|J(w')\cap J(w)|\geq 2$;
\item[(iii)] $|J(w')|\geq 3$;
\item[(iv)] $\langle w'-w,a\rangle\geq 0$ for every $a\in A_{J(w)}$.
\end{itemize}
In particular $|w'|>|w|$ in view of (i), since
$\langle w'-w,w\rangle\geq 0$, hence also $|w'|^2\geq |w|^2+|w-w'|^2$.
\end{theorem}
\begin{proof}
Theorem \ref{theorem3} is formally identical to Theorem \ref{theorem1}, and the proofs of (i)-(iii) follow the same reasoning.
The proof of (iv) is slightly different. As in the proof of Theorem \ref{theorem1}, we have
two alternating sequences $n_1<m_1<n_2<m_2<\dots$ so that
$$
x_{n_k}\rightharpoonup w, \qquad x_{m_k}\rightharpoonup w',
$$
and $i(n)\in J(w)$ for all $n\in \cup_k[n_k,m_k)$.
For any $a\in A_{J(w)}$, we have
\begin{equation}\notag
\begin{split}
\langle w'-w,a\rangle &= \lim_{k\to \infty} \langle x_{m_k}-x_{n_k},a\rangle \\
&= \lim_{k\to \infty} \sum_{n=n_k+1}^{m_k}\langle x_{n}-x_{n-1},a\rangle= \lim_{k\to \infty} \sum_{n=n_k+1}^{m_k-1}\langle x_{n}-x_{n-1},a\rangle \\
&= \lim_{k\to \infty}\sum_{n=n_k+1}^{m_k-1} (-1)\langle x_{n-1}, g_{i(n)}(x_{n-1})\rangle \langle g_{i(n)}(x_{n-1}),a\rangle
\geq 0.
\end{split}
\end{equation}
The last inequality holds since each of the summands is non-negative:
$\langle x, g_i(x)\rangle\ge 0$ for any $x$ and $i$ by the definition of $g_i$, and $\langle g_{i(n)}(x_{n-1}),a\rangle\le 0$ since $i(n)\in J(w)$ and $a\in A_{J(w)}$.
\end{proof}
\begin{remark}\label{remark2}
The inequality (iv) holds for $a\in A_{J(w,w')}$, where $J(w,w')=\{i(n): n\in [n_k,m_k-1], k=1,2,\dots\}$.
Since $J(w,w')\subset J(w)$, $A_{J(w,w')}$ can be strictly larger than $A_{J(w)}$.
\end{remark}
\begin{corollary}\label{corol3.1.}
If $K\leq 3$, then $W=\{0\}$.
\end{corollary}
\begin{proof}
If $K=2$, we have an alternating greedy algorithm,
hence convergence, as explained above, before Theorem~\ref{theorem3}.
Assume that $K=3$ and that there is $w\in W\setminus \{0\}$. By Theorem \ref{theorem3} there is $w'\in W$ with $|w'|>|w|$ and $J(w')=\{1,2,3\}$. Hence $w'=0$, which is a contradiction.
\end{proof}
\begin{corollary}\label{cor3.2}
Suppose for any four indices $i,j,k,l\in \{1,\dots,K\}$ the inequality
\begin{equation}\label{ijkl}
\inf_{s\in S(H)}\sup_{g\in D_i\cup D_j\cup D_k\cup D_l} \langle s, g\rangle>0
\end{equation}
holds. Then $W=\{0\}$.
\end{corollary}
\begin{proof}
The inequalities (\ref{ijkl}) provide $\delta>0$ so that for any distinct $i,j,k,l$ and $u\in A_{\{i,j,k\}}\cap S(H)$ there exists $g\in D_l$ such that $\langle u,g \rangle>\delta$. Hence, for any two different triples $\{i,j,k\}$
and $\{i,j,l\}$ and unit elements $u\in A_{\{i,j,k\}}$, $v\in A_{\{i,j,l\}}$ we have $|u-v|>\delta$:
$$
|u-v|\ge \langle u-v, g \rangle \ge \langle u,g \rangle>\delta.
$$
Further we repeat the proof of Corollary~\ref{cor1.2}.
Suppose $W\not=\{0\}$. By Theorem \ref{theorem3}, we can produce a sequence $w_n\in W$ so that $w_{n+1}=w_n'$, $|J(w_n)|\ge 3$, $J(w_n)\not= J(w_{n+1})$ and $|J(w_n)\cap J(w_{n+1})|\ge 2$ for each $n$. So we get $w_n\in A_{\{i,j,k\}} $ and $w_{n+1}\in A_{\{i,j,l\}} $ for some $i$, $j$ and $k\not= l$ depending on $n$. Therefore, using that the sets $A_i$ are cones, we can refine the inequality from Theorem \ref{theorem3}:
$$
|w_{n+1}|^2\ge |w_n|^2+|w_{n+1}-w_n|^2\ge |w_n|^2+\delta^2|w_n|^2/2.
$$
That, however, means that $|w_n|$ is unbounded, a contradiction.
\end{proof}
\begin{theorem}\label{theorem4}
If $W\not=\{0\}$, then one can find two different elements $w,w'\in W$ so that
$$
w={\rm weak}\lim_{k\to \infty} x_{n_k}, \qquad w'={\rm weak}\lim_{k\to \infty} x_{m_k},
$$
where $n_1<m_1<n_2<m_2<\dots$, and $i(n)\in J(w)\cap J(w')$ for any $n\in \cup_k (n_k,m_k)$. Consequently,
$(i)$ $\langle w'-w,a\rangle\geq 0$ for every $a\in A_{J(w)}$,
$(ii)$ $\langle w'-w,b\rangle\geq 0$ for every $b\in A_{J(w')}$.
\end{theorem}
\begin{proof}
We repeat the proof of Theorem~\ref{theorem2}; it is purely combinatorial. The inequalities follow from the first statement as in the proof of part (iv) of Theorem~\ref{theorem3}.
\end{proof}
\begin{corollary}\label{cor4.1}
If all $D_k$ are symmetric, then $W=\{0\}$.
\end{corollary}
\begin{proof}
Assume that $W\neq \{0\}$.
We take the two different elements $w,w'\in W$ from Theorem~\ref{theorem4}.
Using (i) of Theorem~\ref{theorem4} for $a=w$ gives $\langle w'-w,w\rangle\geq 0$, hence $|w'|^2\geq |w|^2+|w-w'|^2$.
Using (ii) of Theorem~\ref{theorem4} for $b=-\lambda w'$ gives $\langle w'-w,-\lambda w'\rangle\geq 0$, hence $|w|^2\geq |w'|^2+|w-w'|^2$.
Thus we get $w=w'$, which is a contradiction.
\end{proof}
| {
"timestamp": "2021-12-10T02:26:30",
"yymm": "2112",
"arxiv_id": "2112.05094",
"language": "en",
"url": "https://arxiv.org/abs/2112.05094",
"abstract": "Let $H$ be a Hilbert space. We investigate the properties of weak limit points of iterates of random projections onto $K\\geq 2$ closed convex sets in $H$ and the parallel properties of weak limit points of residuals of random greedy approximation with respect to $K$ dictionaries. In the case of convex sets these properties imply weak convergence in all the cases known so far. In particular, we give a short proof of the theorem of Amemiya and Ando on weak convergence when the convex sets are subspaces. The question of the weak convergence in general remains open.",
"subjects": "Functional Analysis (math.FA)",
"title": "Weak limits of consecutive projections and of greedy steps",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9840936082881853,
"lm_q2_score": 0.8152324826183822,
"lm_q1q2_score": 0.802265075413659
} |
https://arxiv.org/abs/1007.4508 | Finite Size Percolation in Regular Trees | In the context of percolation in a regular tree, we study the size of the largest cluster and the length of the longest run starting within the first d generations. As d tends to infinity, we prove almost sure and weak convergence results. | \section{Introduction}
Fix a positive integer $r$ and let $\mathbb{T}$ be the infinite $r$-ary tree, rooted at $\rho_0$.
We consider a Bernoulli percolation on $\mathbb{T}$.
Formally, to each node $v \in \mathbb{T}$, we associate a random variable $X_v$, where the variables $\{X_v:v \in \mathbb{T}\}$ are i.i.d.~Bernoulli with $\pr{X_v = 1} = 1 -\pr{X_v = 0} = p \in (0,1)$.
For a subset $A \subset \mathbb{T}$, let $X_A = \prod_{v \in A} X_v$. We say that $A$ is {\it open} if $X_A = 1$.
\subsection{The size of the largest cluster.}
We use the term {\it cluster} to denote a connected component (i.e.~subtree) of $\mathbb{T}$ when undirected. Let $\mathcal{K}$ denote the set of clusters in $\mathbb{T}$.
For a node $v \in \mathbb{T}$, let ${\rm gen}(v)$ be its generation, i.e.~the number of nodes in the shortest path from the root $\rho_0$ to $v$, not counting $\rho_0$. Note that ${\rm gen}(\rho_0) = 0$.
Let $\mathbb{T}_d$ be the set of nodes with generation not exceeding $d$, namely $\mathbb{T}_d = \{v \in \mathbb{T}: {\rm gen}(v) \leq d\}$.
For a cluster $A \in \mathcal{K}$, we let $|A|$ denote its size (i.e.~number of nodes) and $\rho(A)$ its root, namely $\rho(A) = \argmin\{{\rm gen}(v): v \in A\}$.
For $d \in \mathbb{N}$, define $K_d$ to be the size of the largest open cluster with root of generation not exceeding $d$:
$$K_d = \max\{|A|: A \in \mathcal{K},\, \rho(A) \in \mathbb{T}_d,\, X_A = 1\}.$$
In particular, $K_0$ is the size of the largest open cluster containing the root $\rho_0$.
In this paper we study the limit behavior of $K_d$, as $d \to \infty$.
In the context of the one-dimensional lattice $\mathbb{Z}$, the corresponding results are often referred to as the Erd\H{o}s-R\'enyi Law~\cite{MR0272026} and, in that context, our approach follows that of Arratia, Goldstein and Gordon~\cite{MR972770}.
In higher dimensions, the problem is much more intricate and many questions remain unanswered. For a sample of sophisticated results, see e.g.~\cite{MR1868996,MR1372330,MR1880230}. The book by Grimmett~\cite{MR1707339} is a standard reference on percolation. For references more specific to trees, we refer the reader to a survey paper by Pemantle~\cite{MR1368099} and the book of Lyons and Peres~\cite{tree-book}. Though the literature on percolation is vast, most of it focuses on the existence of an infinite cluster and its characteristics when it exists.
On the applications side, Patil and Taillie~\cite{MR2109372} identify regions of interest in a network by thresholding the response from each site in the network and computing connected components, which amounts to extracting the open clusters. One imagines that the largest cluster might receive the most attention. In particular, they mention monitoring water quality in a network of freshwater streams, where each stream may be modeled as a tree, though an irregular one.
It is well-known that, in the supercritical setting where $p > 1/r$, the cluster at the origin has positive probability of being infinite, and, in fact, $\pr{K_d = \infty}$ tends to 1 as $d$ increases.
We restrict our attention to the subcritical and critical cases, i.e.~$p < 1/r$ and $p = 1/r$ respectively, where the cluster at the origin is finite with probability one.
We start with the critical case, where we show that $K_d$ behaves like the maximum of $r^d$ independent random variables with distribution the total progeny of a Galton-Watson process with offspring distribution $\text{Bin}(r, 1/r)$. Let $\log_r$ denote the logarithm in base $r$.
\begin{thm} \label{thm:cluster-cri}
Assume $p=1/r$. Then with probability one,
$$\frac{\log_r K_d}{d} \rightarrow 2, \quad d \to \infty.$$
Moreover, for any $x \in \mathbb{R}$,
\[
\pr{\log_r K_d \leq 2d +x} \to \exp(- \Cl{cluster-cri}\, r^{-x/2}), \quad d \to \infty, \quad \Cr{cluster-cri} := \frac{2}{\sqrt{2 \pi r (r-1)}}.
\]
\end{thm}
For the subcritical case, we obtain similar results without the logarithmic transformation. Here, a Poisson approximation applies, showing that $K_d$ behaves like the maximum of $|\mathbb{T}_d| = (r^{d+1}-1)/(r-1)$ independent random variables with distribution the total progeny of a Galton-Watson process with offspring distribution $\text{Bin}(r, p)$.
Define
\begin{equation}
\label{kappa}
\kappa = p (1-p)^{r-1}\ \frac{r^r}{(r-1)^{r-1}}.
\end{equation}
Note that $\kappa < 1$ for all $p < 1/r$. Let $[x]$ denote the integer part of $x \in \mathbb{R}$.
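Indeed, $p\mapsto p(1-p)^{r-1}$ increases on $(0,1/r)$ and $\kappa=1$ at $p=1/r$, which gives $\kappa<1$ in the subcritical range. A quick numerical confirmation of (\ref{kappa}) (the values of $r$ and the grid of $p$'s are arbitrary illustrative choices):

```python
def kappa(p, r):
    # kappa = p (1-p)^{r-1} * r^r / (r-1)^{r-1}
    return p * (1 - p) ** (r - 1) * r ** r / (r - 1) ** (r - 1)

# At the critical point p = 1/r the expression equals 1 exactly.
critical_values = [kappa(1.0 / r, r) for r in (2, 3, 5, 10)]
# Strictly below the critical point, kappa < 1.
subcritical_ok = all(
    kappa(p, r) < 1.0
    for r in (2, 3, 5, 10)
    for p in (0.1 / r, 0.5 / r, 0.9 / r, 0.99 / r)
)
```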
\begin{thm}
\label{thm:cluster-sub}
Assume $p<1/r$. Then with probability one,
$$\frac{K_d}{d} \rightarrow \frac{1}{\log_r(1/ \kappa)}, \quad d \to \infty.$$
Moreover, the sequence of random variables $(K_d -\mu_d: d \geq 0)$ is tight, where $\mu_d := \frac{d -\frac{3}{2} \log_r d}{\log_r(1/ \kappa)}$. In addition, a subsequence $K_d -\mu_d$ converges weakly if, and only if, $a := \lim_{d \to \infty} (\mu_d -[\mu_d])$ exists, in which case the weak limit is $[Z+a] -a$, where
\[
\pr{Z \leq z} = \exp\left(-\Cl{cluster}\, \kappa^{z}\right),
\]
for an explicit constant $\Cr{cluster} > 0$ depending only on $(p,r)$.
\end{thm}
The behavior of the size of the largest open cluster in the subcritical regime is therefore similar in the context of the regular tree and in the context of the one-dimensional lattice, the latter corresponding to the length of the longest perfect head run in a sequence of coin tosses~\cite[Ex.~3]{MR972770}.
\subsection{The length of the longest run.}
We use the term {\it run} for a path in $\mathbb{T}$ when directed away from the root $\rho_0$.
Note that runs are special clusters.
Let $\mathcal{R}$ denote the set of runs and define $R_d$ to be the length of the longest open run with root of generation not exceeding $d$:
$$R_d = \max\{|A|: A \in \mathcal{R},\, \rho(A) \in \mathbb{T}_d,\, X_A = 1\}.$$
Of course, runs and clusters coincide in the one-dimensional lattice $\mathbb{Z}$. For a general reference on runs in dimension one, see~\cite{MR1882476}. Using the Chen-Stein method, Chen and Huo~\cite{MR2268049} proved results on the longest left-right run in a thin two-dimensional lattice of the form $([0,d]\times[0,a]) \cap \mathbb{Z}^2$, with the width $a$ remaining constant. We also mention the work of Arias-Castro, Donoho and Huo~\cite{MR2275244} who used a statistic based on the longest run in a particular, non-planar graph to detect filaments in point-clouds.
The results we obtain for runs are parallel to those we obtain for clusters.
In the critical case, we show that $R_d$ behaves like the maximum of $r^d$ independent random variables with distribution the height of a Galton-Watson process with offspring distribution $\text{Bin}(r, 1/r)$.
\begin{thm} \label{thm:run-cri}
Assume $p=1/r$. Then with probability one,
$$\frac{\log_r R_d}{d} \rightarrow 1, \quad d \to \infty.$$
Moreover, for any $x \in \mathbb{R}$,
\[
\pr{\log_r R_d \leq d + x} \to \exp(- \Cl{run-cri}\, r^{-x}), \quad d \to \infty, \quad \Cr{run-cri} := \frac{2 rp}{r-1}.
\]
\end{thm}
In the subcritical case, we show that $R_d$ behaves like the maximum of $|\mathbb{T}_d|$ independent random variables with distribution the height of a Galton-Watson process with offspring distribution $\text{Bin}(r, p)$. Again, a Poisson approximation applies. The constant that appears in the exponent is only defined implicitly.
\begin{thm}
\label{thm:run-sub}
Assume $p<1/r$. Then with probability one,
$$\frac{R_d}{d} \rightarrow \frac{1}{\log_r (1/p) -1}, \quad {\rm as} \ d
\rightarrow \infty.$$
Moreover, the sequence of random variables $(R_d -\nu_d: d \geq 0)$ is tight, where $\nu_d := \frac{d}{\log_r (1/p) -1}$. In addition, a subsequence $R_d -\nu_d$ converges weakly if, and only if, $a := \lim_{d \to \infty} (\nu_d -[\nu_d])$ exists, in which case the weak limit is $[Z+a] -a$, where
\[
\pr{Z \leq z} = \exp\left(-\Cl{run}\, (rp)^{z}\right),
\]
for an explicit constant $\Cr{run} > 0$ depending only on $(p,r)$.
\end{thm}
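The rate $(rp)^z$ can be probed through the exact recursion for the height tail (our notation $h_k$ and helper \texttt{h}; a numerical sketch, not used in the proofs): a node $v$ starts an open run of length $\geq k$ if, and only if, $X_v=1$ and some child starts an open run of length $\geq k-1$, so $h_k = p\,(1-(1-h_{k-1})^r)$ with $h_1=p$, and in the subcritical regime $h_{k+1}/h_k \to rp$:

```python
def h(k, p, r):
    # h_j = P(the longest open run from a fixed node has length >= j):
    # h_1 = p,  h_j = p * (1 - (1 - h_{j-1})^r)   (children are independent).
    v = p
    for _ in range(k - 1):
        v = p * (1 - (1 - v) ** r)
    return v

p, r = 0.3, 2                      # subcritical: rp = 0.6 < 1
ratio = h(41, p, r) / h(40, p, r)  # geometric decay rate, close to rp
```

Balancing $|\mathbb{T}_d|\, h_k \asymp 1$ with $h_k \asymp (rp)^k$ then recovers the almost sure growth rate $k \approx d/(\log_r(1/p)-1)$ of Theorem~\ref{thm:run-sub}.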
\subsection{Contents.}
The rest of the paper is devoted to proving our results. In \secref{proof-cluster} we prove \thmref{cluster-cri} and \thmref{cluster-sub}. In \secref{proof-run} we prove \thmref{run-cri} and \thmref{run-sub}.
\subsection{Additional Notation.}
Let $\partial \mathbb{T}_d = \{v \in \mathbb{T}: {\rm gen}(v) = d\}$.
For a cluster $A$, let $\underline{A}$ denote the set of nodes not in $A$ whose parents belong to $A$, and if $\rho(A) \neq \rho_0$, let $\mathring{A}$ denote the parent of $\rho(A)$.
Also, define $(1 -X)_A = \prod_{v \in A} (1 - X_v)$.
For two sequences of real numbers $(a_n)$ and $(b_n)$, we use the notation $a_n \sim b_n$ to indicate that $a_n/b_n \to 1$ and $a_n \asymp b_n$ to indicate that the ratio $a_n/b_n$ is bounded away from zero and infinity, both understood as $n \to \infty$.
Throughout the paper $C$ denotes a finite, positive constant depending only on $r$ and $p$, whose value may change with each appearance.
\section{The size of the largest open cluster}
\label{sec:proof-cluster}
In this section, we prove \thmref{cluster-cri} and \thmref{cluster-sub}.
We start with some notation.
For a vertex $v \in \mathbb{T}$, let $K(v)$ be the size of the largest open cluster with root $v$,
$$K(v) = \max\{|A|: A \in \mathcal{K},\, \rho(A) = v,\, X_A = 1\}.$$
In particular,
$$K_d = \max\{K(v): v \in \mathbb{T}_d\}.$$
The distribution of $K(v)$ does not depend on $v \in \mathbb{T}$, and, in fact, given $X_v = 1$, coincides with that of the total progeny of a Galton-Watson tree starting with one individual and with offspring distribution Bin$(r,p)$.
Define
\[
\psi_n = \pr{K(v) = n}, \quad \Psi_n = \pr{K(v) > n}.
\]
Applying a well-known identity by Dwass~\cite{MR0253433} (called the Otter-Dwass formula in~\cite{tree-book}), we get
\begin{eqnarray*}
\psi_n
&=& \frac{p}{n} \pr{\xi_1 + \cdots + \xi_n = n-1}, \text{ where } \xi_1, \dots, \xi_n \stackrel{\rm i.i.d.}{\sim} \text{Bin}(r, p) \\
&=& \frac{p}{n} \pr{\text{Bin}(n r, p) = n-1} \\
&=& {\rm Cat}_{n}\, p^{n} (1-p)^{n (r-1) + 1},
\end{eqnarray*}
where
\[
{\rm Cat}_{n} := \frac{1}{n} {n r \choose n-1} = \frac{1}{(r-1) n + 1} {r n \choose n}
\]
is the {\it $n$th generalized Catalan number}~\cite{hil91}, which among other interpretations, is the number of subtrees of $\mathbb{T}$ of size $n$ rooted at the origin, i.e.~
\[
{\rm Cat}_{n} = |\{A \in \mathcal{K}: \rho(A) = \rho_0,\, |A| = n\}|.
\]
We could have obtained the expression for $\psi_n$ using this definition of ${\rm Cat}_n$. Indeed, for $n > 0$, $K(v) = n$ if, and only if, there is a (unique) subtree $A$ with $|A| = n$, $\rho(A) = v$ and $X_A (1 -X)_{\underline{A}} = 1$, so that $A$ cannot be extended and still be an open cluster. We then use the fact that a subtree of size $n$ has exactly $(r-1)n+1$ children.
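These combinatorial identities are easy to verify numerically. The sketch below (the helpers \texttt{catalan} and \texttt{log\_psi} are our names) checks that the two closed forms of ${\rm Cat}_n$ agree, and that in a subcritical example the masses $\psi_n$ sum to $p = \pr{X_v = 1}$, as they must: $K(v)\geq 1$ exactly when $X_v=1$, and the cluster is then a.s.\ finite.

```python
import math

def catalan(n, r):
    # The two closed forms of the n-th generalized Catalan number.
    c1, rem = divmod(math.comb(r * n, n - 1), n)
    c2 = math.comb(r * n, n) // ((r - 1) * n + 1)
    assert rem == 0 and c1 == c2
    return c1

def log_psi(n, p, r):
    # log psi_n = log Cat_n + n log p + (n(r-1)+1) log(1-p), via lgamma
    # so that large n stays within floating-point range.
    log_cat = (math.lgamma(r * n + 1) - math.lgamma(n + 1)
               - math.lgamma((r - 1) * n + 1) - math.log((r - 1) * n + 1))
    return log_cat + n * math.log(p) + (n * (r - 1) + 1) * math.log(1 - p)

cats_ok = all(catalan(n, r) > 0 for r in (2, 3) for n in range(1, 41))
# Subcritical example: r = 2, p = 0.3 (kappa = 0.84 < 1), so the series
# sum_{n >= 1} psi_n converges geometrically to p.
total = sum(math.exp(log_psi(n, 0.3, 2)) for n in range(1, 601))
```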
With the use of Stirling's formula, we arrive at the following conclusions; see also~\cite{MR0386042, ney}.
\begin{lem}
\label{lem:Kv}
In the critical case $p =1/r$,
\[
\Psi_n \sim \frac{\Cr{cluster-cri}}{\sqrt{n}}.
\]
In the subcritical case $p < 1/r$,
\[
\Psi_n \sim \Cl{cluster-aux}\, \frac{\kappa^{n+1}}{n^{3/2}}, \quad \Cr{cluster-aux} := \frac{1}{\sqrt{2 \pi}(1-\kappa)} \frac{(1-p) r^{1/2}}{(r-1)^{3/2}}.
\]
\end{lem}
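As a numerical sanity check of the critical constant (not part of the proofs): the local limit behavior behind the Lemma gives $\psi_n \sim \tfrac{1}{2}\Cr{cluster-cri}\, n^{-3/2}$ with $\Cr{cluster-cri} = 2/\sqrt{2\pi r(r-1)}$, and summing this tail over $m>n$ recovers $\Psi_n \sim \Cr{cluster-cri}\, n^{-1/2}$. The sketch below (helper \texttt{log\_psi} is our name) evaluates $\psi_n n^{3/2}$ for $r=2$, $p=1/2$ at a large $n$ and compares it with $\tfrac{1}{2}\Cr{cluster-cri} = 1/(2\sqrt{\pi})$:

```python
import math

def log_psi(n, p, r):
    # log psi_n = log Cat_n + n log p + (n(r-1)+1) log(1-p).
    log_cat = (math.lgamma(r * n + 1) - math.lgamma(n + 1)
               - math.lgamma((r - 1) * n + 1) - math.log((r - 1) * n + 1))
    return log_cat + n * math.log(p) + (n * (r - 1) + 1) * math.log(1 - p)

r, p, n = 2, 0.5, 10 ** 5
c_hat = math.exp(log_psi(n, p, r)) * n ** 1.5
c_expected = 1.0 / (2.0 * math.sqrt(math.pi))  # = C/2, C = 2/sqrt(2 pi r (r-1))
```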
\subsection{Proof of \thmref{cluster-cri}}
\label{sec:proof-cluster-cri}
Define
\[
K^\partial_d := \max\{K(v): v \in \partial \mathbb{T}_d\}.
\]
We first prove that the conclusions of \thmref{cluster-cri} hold for $K^\partial_d$.
For $x \in \mathbb{R}$, let $n_d(x) = [r^{2 d + x}].$
As $K^\partial_d$ only involves independent random variables, we have
\begin{equation} \nonumber
\pr{\log_r K^\partial_d \leq 2d +x} = \pr{K^\partial_d \leq n_d(x)} = (1 -\Psi_{n_d(x)})^{r^d} = \exp(- \Cr{cluster-cri}\, r^{-x/2} + O(r^{-d -x})).
\end{equation}
Letting $d \to \infty$, we obtain the weak convergence, and by choosing $x = \varepsilon d$, with $\varepsilon > -2$ fixed, and applying the Borel-Cantelli Lemma, we obtain the almost sure convergence.
It therefore suffices to show that $K_d = (1 + o_P(1)) K^\partial_d$. Clearly, $K_d \geq K^\partial_d$, so we focus on the upper bound.
Define
\[
B_d = \{v \in \partial \mathbb{T}_d: K(v) > r^d/d\}, \quad B^2_d = \{v \in \partial \mathbb{T}_d: K(v) > r^{2d}/d\}.
\]
For any open cluster $A$ with $\rho(A) \in \mathbb{T}_d$, we have
\[
|A| = |A \cap \mathbb{T}_{d-1}| + \sum_{v \in A \cap \partial \mathbb{T}_d} K(v) \leq r^d + r^{2d}/d + \sum_{v \in A \cap B_d} K(v).
\]
We turn to bounding the sum. We first show that, with probability tending to one, there is no open cluster $A$ containing three or more nodes in $B_d$. Indeed, take $v_1, v_2, v_3 \in \partial \mathbb{T}_d$ distinct. Let $w$ denote their most recent common ancestor and let $k = d -{\rm gen}(w)$. Either the paths $v_j \to \rho_0$ meet at $w$ for the first time or two of the paths meet at a node $u$ with ${\rm gen}(u) > {\rm gen}(w)$, in which case we let $\ell = d -{\rm gen}(u)$. Now, the nodes $v_1, v_2, v_3$ belong to the same open cluster if, and only if, the smallest subtree containing $w$ and $v_1, v_2, v_3$ is open, and this subtree is of size $\ell + 2 k + 1$, and therefore, the probability that they belong to the same open cluster is $p^{\ell + 2k + 1}$. In addition, the number of such triplets is bounded by
\[
\left(r^{d-k} {r^k \choose 3}\right) \cdot \left(r^k r^{k -\ell} {r^\ell \choose 2}\right) {r^k \choose 3}^{-1} \asymp r^{d +k +\ell}.
\]
The first factor comes from the fact that the three nodes are leaves of a subtree with root at generation $d-k$. Given that, the second factor comes from the fact that two of them belong to a subtree of that subtree with root at (relative) generation $k -\ell$.
Hence, remembering that $p = 1/r$ and using \lemref{Kv}, we have
\begin{eqnarray*}
\pr{\exists A \in \mathcal{K}: X_A = 1,\, |A \cap B_d| \geq 3}
&\leq& C\, \pr{K(v) > r^d/d}^3 \cdot \sum_{k=0}^d \sum_{\ell=0}^k r^{d +k +\ell} p^{\ell + 2k + 1} \\
&\leq& C\, (r^d/d)^{-3/2} r^{d} \asymp d^{3/2} r^{-d/2}.
\end{eqnarray*}
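To see that the double sum in the display above is $O(r^d)$, note that $p = 1/r$ makes each summand collapse to a power of $r$:
\[
\sum_{k=0}^d \sum_{\ell=0}^k r^{d +k +\ell}\, p^{\ell + 2k + 1} = \sum_{k=0}^d (k+1)\, r^{d -k -1} \leq r^{d-1} \sum_{k \geq 0} (k+1)\, r^{-k} = O(r^{d}),
\]
which, combined with $\pr{K(v) > r^d/d}^3 \asymp (r^d/d)^{-3/2}$, gives the stated bound.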
By the same token, with probability tending to one (the exceptional probability being of order at most $d/r^d$), there is no open cluster $A$ containing two or more nodes in $B^2_d$.
Now, when $|A \cap B_d| \leq 2$ and $|A \cap B^2_d| \leq 1$, we have
\[
\sum_{v \in A \cap B_d} K(v) \leq \max_{v \in A \cap B_d} K(v) + r^{2d}/d \leq K^\partial_d + r^{2d}/d.
\]
In the end, with probability tending to one,
\[
|A| \leq r^d + 2\, r^{2d}/d + K^\partial_d,
\]
for any open cluster $A$ with $\rho(A) \in \mathbb{T}_d$. Hence,
\[
K_d \leq K^\partial_d + O_P(r^{2d}/d),
\]
and we conclude by the fact that $K^\partial_d$ is of order exceeding $r^{2d}/d$ with probability tending to one.
\subsection{Proof of \thmref{cluster-sub}}
\label{sec:proof-cluster-sub}
The proof of the almost sure convergence may be obtained following the arguments provided in \secref{proof-cluster-cri} or using the bounds we are about to prove below. We omit details.
The proof of the weak convergence is based on the Chen-Stein method for Poisson approximation as formulated by Arratia, Goldstein and Gordon~\cite{MR972770}.
Define
$$Y_A = \left\{\begin{array}{ll}
X_A (1 -X)_{\underline{A}}, & \rho(A) = \rho_0,\\[.05in]
X_A (1-X)_{\mathring{A}} (1 -X)_{\underline{A}}, & \rho(A) \neq \rho_0;
\end{array}\right.$$
Also, let $\mathcal{K}_{d,n}$ be the set of clusters of size exceeding $n$ with root in $\mathbb{T}_d$, and define
$$W_{d,n} = \sum_{A \in \mathcal{K}_{d,n}} Y_A.$$
\vspace{-.1in}
By definition,
$$\{K_d \leq n\} = \{Y_A = 0,\, \forall A \in \mathcal{K}_{d,n}\} = \{W_{d,n} = 0\}.$$
We approximate the law of $W_{d,n}$ by the Poisson distribution with the same mean $\lambda_{d,n} = \expect{W_{d,n}}$.
We start by estimating $\lambda_{d,n}$ using \lemref{Kv}, obtaining
\begin{eqnarray*}
\lambda_{d,n}
& = & \sum_{A \in \mathcal{K}_{d,n}} \pr{Y_A = 1} \\
& = & \pr{K(\rho_0) > n} + (1-p) \sum_{v \in \mathbb{T}_d, v \neq \rho_0} \pr{K(v) > n} \\
& = & \Psi_n + (1-p) (|\mathbb{T}_d| -1) \Psi_n.
\end{eqnarray*}
In particular, as $n, d \to \infty$,
\[
\lambda_{d,n} \sim \Cr{cluster}\ r^d n^{-3/2} \kappa^{n+1}, \quad \Cr{cluster} := \frac{\Cr{cluster-aux} (1-p) r}{r-1}.
\]
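To see this, insert $|\mathbb{T}_d| = \frac{r^{d+1}-1}{r-1}$ and the subcritical tail asymptotics $\Psi_n \sim \Cr{cluster-aux}\, n^{-3/2} \kappa^{n+1}$ (as used here from \lemref{Kv}) into the previous display:
\[
\lambda_{d,n} \sim (1-p)\, |\mathbb{T}_d|\, \Psi_n \sim \frac{(1-p)\, r}{r-1}\; r^d \cdot \Cr{cluster-aux}\, n^{-3/2} \kappa^{n+1} = \Cr{cluster}\, r^d n^{-3/2} \kappa^{n+1}.
\]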
For a cluster $A \in \mathcal{K}_{d,n}$, define its neighborhood $\mathcal{B}(A)$ as the set of clusters $B \in \mathcal{K}_{d,n}$ such that
$$(\mathring{B} \cup B \cup \underline{B}) \ \cap \ (\mathring{A} \cup A \cup \underline{A}) \neq \emptyset.$$
Define the following sums
\begin{eqnarray*}
F_{d,n} & = & \sum_{A \in \mathcal{K}_{d,n}} \sum_{B \in \mathcal{B}(A)} \pr{Y_A = 1} \pr{Y_B = 1},\\
G_{d,n} & = & \sum_{A \in \mathcal{K}_{d,n}} \sum_{B \in \mathcal{B}(A), B \neq A} \pr{Y_A = Y_B = 1},\\
H_{d,n} & = & \sum_{A \in \mathcal{K}_{d,n}} \expect{\left|\expect{Y_A - \expect{Y_A}|Y_B, B \notin \mathcal{B}(A)}\right|}.
\end{eqnarray*}
Then by the second part of~\cite[Th.~1]{MR972770},
$$\left|\pr{W_{d,n} = 0} - \exp(-\lambda_{d,n})\right| \leq F_{d,n} + G_{d,n} + H_{d,n}.$$
For $x \in \mathbb{R}$, define $n_d(x) = \left[\mu_d +x\right]$.
When $x$ is fixed and $d \to \infty$, $\lambda_{d,n_d(x)} \asymp 1$, with
$$\lambda_{d,n_d(x)} \to \Cr{cluster}\, \kappa^{[a+x] -a +1}, \ \text{ when } \mu_d -[\mu_d] \to a, \ x -[x] \neq 1-a,$$
so that
$$\pr{[Z+a] -a \leq x} = \exp(-\Cr{cluster}\, \kappa^{[a+x] -a +1}).$$
Therefore, to conclude it suffices to prove that $F_{d,n}, G_{d,n}, H_{d,n} \to 0$ when $d,n \to \infty$ in such a way that $\lambda_{d,n} \asymp |\mathbb{T}_d| \Psi_n \asymp 1$.
First, $H_{d,n} = 0$ by independence of $Y_A$ and $Y_B, B \notin \mathcal{B}(A)$.
For $G_{d,n}$, the only pairs $A,B \in \mathcal{K}_{d,n}$ that contribute to the sum satisfy either $\mathring{B} \in \underline{A}$ or $\mathring{A} \in \underline{B}$, and in both cases
$$\pr{Y_A = Y_B = 1} = (1-p)^{-1} \pr{Y_A = 1} \pr{Y_B = 1}.$$
Hence, using the fact that there are ${\rm Cat}_{m}$ subtrees of size $m$ with a given root, each with $(r-1)m + 1$ children, and then \lemref{Kv}, we have
\begin{eqnarray*}
G_{d,n}
& \leq & 2 (1-p)^{-1} |\mathbb{T}_d| \sum_{m > n}\ {\rm Cat}_{m}\, p^m (1-p)^{(r-1)m+1} ((r-1)m + 1) \cdot \Psi_n \\
& \leq & C\, \lambda\, \sum_{m > n} m \psi_{m} = C\, \lambda\, \left((n+1) \Psi_n + \sum_{m > n} \Psi_{m}\right)
\asymp n^{-1/2} \kappa^n \to 0, \quad n \to \infty.
\end{eqnarray*}
For $F_{d,n}$, the only pairs $A,B \in \mathcal{K}_{d,n}$ that contribute to the sum satisfy either $\mathring{B} \in \mathring{A} \cup A \cup \underline{A}$ or $\mathring{A} \in \mathring{B} \cup B \cup \underline{B}$. The computations are then similar.
\section{The length of the longest open run}
\label{sec:proof-run}
The arguments are parallel to those provided in \secref{proof-cluster}.
For $A \subset \mathbb{T}$, define its height as $\tau(A) = \sup\{{\rm gen}(v): v \in A\} -{\rm gen}(\rho(A))$.
For a vertex $v \in \mathbb{T}$, let $R(v)$ be the length of the longest run with root $v$,
\[
R(v) = 1 + \max\{\tau(A): A \in \mathcal{K}, \rho(A) = v\}.
\]
In particular,
\[
R_d = \max\{R(v): v \in \mathbb{T}_d\}.
\]
The distribution of $R(v)$ does not depend on $v \in \mathbb{T}$, and, in fact, given $X_v = 1$, coincides with that of the height (plus one), i.e.~extinction time, of a Galton-Watson tree with offspring distribution Bin$(r,p)$.
Define
\[
\phi_h = \pr{R(v) = h}, \quad \Phi_h = \pr{R(v) > h}.
\]
We have the following results on the asymptotic behavior of $\Phi_h$~\cite{ney}.
\begin{lem}
\label{lem:Rv}
In the critical case $p =1/r$,
\[
\Phi_h \sim \frac{\Cr{run-cri}}{h}.
\]
In the subcritical case $p < 1/r$, there is an implicit constant $\Cl{run-aux} > 0$ such that
\[
\Phi_h \sim \Cr{run-aux}\, (rp)^h.
\]
\end{lem}
Let ${\rm Cat}_{n,h}$ denote the number of subtrees rooted at the origin, of size $n$ and height $h$. See~\cite{MR1249127} for some results on ${\rm Cat}_{n,h}$.
As in \secref{proof-cluster}, we can argue that
\[
\phi_h = \sum_{n > h} {\rm Cat}_{n,h}\, p^n (1-p)^{(r-1)n +1}, \ \text{ implying } \ \Phi_h = \sum_{\ell > h} \sum_{n > \ell} {\rm Cat}_{n,\ell}\, p^n (1-p)^{(r-1)n +1}.
\]
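Both regimes of \lemref{Rv} are easy to check numerically: conditioning on the root and on its $r$ independent subtrees gives the exact recursion $\Phi_h = p\,(1 - (1-\Phi_{h-1})^r)$ with $\Phi_0 = p$, the standard extinction-time recursion for the Galton--Watson tree described above. A minimal sketch; the algebraically equivalent product form is used only to avoid floating-point cancellation when $\Phi_h$ is tiny:

```python
def phi_tail(r, p, h_max):
    # Phi_h = P(R(v) > h): the root must be open and some child subtree
    # must carry a run longer than h-1, so
    #   Phi_h = p * (1 - (1 - Phi_{h-1})^r),  Phi_0 = p,
    # rewritten as p * Phi_{h-1} * sum_{j<r} (1 - Phi_{h-1})^j.
    phi = p
    out = [phi]
    for _ in range(h_max):
        phi = p * phi * sum((1.0 - phi) ** j for j in range(r))
        out.append(phi)
    return out

# Critical case p = 1/r: h * Phi_h stabilizes; for Bin(2, 1/2) offspring,
# Kolmogorov's asymptotics give h * Phi_h -> p * 2 / sigma^2 = 2,
# with sigma^2 = r * p * (1 - p) = 1/2.
crit = phi_tail(2, 0.5, 100000)

# Subcritical case p < 1/r: the ratio Phi_{h+1} / Phi_h settles at r*p.
sub = phi_tail(2, 0.3, 300)
```

The same recursion evaluated at $h = [r^{d+x}]$ reproduces the limit in the next subsection for moderate $d$.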
\subsection{Proof of \thmref{run-cri}}
The proof is based on the following observation
\[
R^\partial_d \leq R_d \leq R^\partial_d + d, \quad R^\partial_d := \max\{R(v): v \in \partial \mathbb{T}_d\},
\]
where the $d$ term bounds the length of any run in $\mathbb{T}_{d-1}$.
As $R^\partial_d$ only involves independent random variables,
\begin{equation} \label{eq:partial-Rd}
\pr{R^\partial_d \leq h} = (1 -\Phi_{h})^{r^d}.
\end{equation}
Choosing $h = r^{[d/2]}$ and using \lemref{Rv}, we obtain
\[
\pr{R^\partial_d \leq r^{[d/2]}} \leq \exp(- C\, r^{d/2}), \ \text{ for some } C > 0,
\]
so that, applying the Borel-Cantelli Lemma, $R^\partial_d \geq r^{d/2}$ eventually, with probability one. Hence, $\log_r R_d = (1 +o_P(1)) \log_r R^\partial_d$, and it is therefore enough to prove the results for $R^\partial_d$ in place of $R_d$.
The almost sure convergence is obtained in a similar way by choosing $h = r^{(1+\varepsilon) d}$ with $\varepsilon$ fixed, either positive or negative.
For the weak convergence, fix $x$ and let $h_d(x) = [r^{d+x}]$. By \lemref{Rv} and \eqref{eq:partial-Rd}, we have
\[
\pr{\log_r R^\partial_d \leq d + x} \to \exp(- \Cr{run-cri}\, r^{-x}), \quad d \to \infty.
\]
\subsection{Proof of \thmref{run-sub}}
We again omit the details of the proof of the almost sure convergence and focus on proving the weak convergence. Let $\mathcal{K}_{d,h}$ denote the set of clusters with root in $\mathbb{T}_d$ and height exceeding $h$. We use the notation introduced in \secref{proof-cluster-sub}, with $\mathcal{K}_{d,h}$ in place of $\mathcal{K}_{d,n}$.
By definition,
$$\{R_d \leq h\} = \{W_{d,h} = 0\}.$$
Using \lemref{Rv}, we obtain
\begin{eqnarray*}
\lambda_{d,h}
& = & \sum_{A \in \mathcal{K}_{d,h}} \pr{Y_A = 1}\\
& = & \pr{R(\rho_0) > h} + (1-p) \sum_{v \in \mathbb{T}_d,\, v \neq \rho_0} \pr{R(v) > h}\\
& = & \Phi_h + (1-p) (|\mathbb{T}_d| -1) \Phi_h.
\end{eqnarray*}
In particular, as $h, d \to \infty$,
\[
\lambda_{d,h} \sim \Cr{run}\, r^d (rp)^{h+1}, \quad \Cr{run} := \frac{\Cr{run-aux}\, (1-p)}{p(r-1)}.
\]
For $x \in \mathbb{R}$, define $h_d(x) = [\nu_d + x]$. When $x$ is fixed and $d \to \infty$, we have $\lambda_{d,h_d(x)} \asymp 1$, with
$$\lambda_{d,h_d(x)} \to \Cr{run} (rp)^{[a+x] -a +1}, \ \text{ when } \nu_d -[\nu_d] \to a, \ x -[x] \neq 1-a.$$
It then suffices to show that $F_{d,h}, G_{d,h}, H_{d,h} \to 0$ when $d,h \to \infty$ in such a way that $\lambda_{d,h} \asymp |\mathbb{T}_d| \Phi_h \asymp 1$, and the computations are parallel to those in \secref{proof-cluster-sub}.
We focus on $G_{d,h}$. Fix $\tilde{p} \in (p, 1/r)$ and let $\tilde{\Phi}_h$ be defined as $\Phi_h$, with $\tilde{p}$ in place of $p$. For $h$ large enough, we then have
\begin{eqnarray*}
G_{d,h}
&\leq& 2 \sum_{A \in \mathcal{K}_{d,h}}\ \sum_{B \in \mathcal{B}(A),\, \mathring{B} \in \underline{A}} (1-p)^{-1} \pr{Y_A = 1} \pr{Y_B = 1} \\
& \leq & C\, |\mathbb{T}_d| \sum_{\ell > h} \sum_{n > \ell} {\rm Cat}_{n,\ell}\ p^n (1-p)^{(r-1)n + 1} ((r-1)n+1) \cdot \Phi_h \\
& \leq & C\, \lambda\, \sum_{\ell > h} \sum_{n > \ell} {\rm Cat}_{n,\ell}\ \tilde{p}^n (1-\tilde{p})^{(r-1)n + 1} \\
& = & C\, \lambda\, \tilde{\Phi}_h
\asymp (r \tilde{p})^h \to 0, \quad h \to \infty.
\end{eqnarray*}
\subsection*{Acknowledgements}
The author would like to thank Philippe Flajolet for fruitful conversations and Jason Schweinsberg for reading an early version of the manuscript, pointing out some errors and helping with the proof of \thmref{cluster-cri}.
This work was partially supported by a grant from the National Science Foundation (DMS-0603890) and a grant from the Office of Naval Research (N00014-09-1-0258).
{\small
\bibliographystyle{abbrv}
% Source: https://arxiv.org/abs/2209.03325
\title{Pancyclicity of Hamiltonian graphs}
\begin{abstract}
An $n$-vertex graph is Hamiltonian if it contains a cycle that covers all of its vertices, and it is pancyclic if it contains cycles of all lengths from $3$ up to $n$. In 1972, Erd\H{o}s conjectured that every Hamiltonian graph with independence number at most $k$ and at least $n = \Omega(k^2)$ vertices is pancyclic. In this paper we prove this old conjecture in a strong form by showing that if such a graph has $n = (2+o(1))k^2$ vertices, it is already pancyclic, and this bound is asymptotically best possible.
\end{abstract}
\section{Introduction}
Hamiltonicity is one of the most central notions in graph theory, and it has been extensively studied by numerous researchers. The problem of deciding Hamiltonicity of a graph is NP-complete and therefore, a central theme in Combinatorics is to derive sufficient conditions for this property. The most classical one is Dirac’s theorem \cite{dirac1952some} which dates back to 1952 and states that every $n$-vertex graph with minimum degree at least $n/2$ contains a Hamilton cycle. Since then, many other interesting results about various aspects of Hamiltonicity have been obtained, see e.g. \cite{ajtai1985first,chvatal1972note,kuhn2013hamilton,krivelevich2011critical,krivelevich2014robust,MR3545109,ferber2018counting, cuckler2009hamiltonian, posa1976hamiltonian}, and the surveys \cite{gould2014recent, MR3727617}.
A notion related to Hamiltonicity is that of pancyclicity. An $n$-vertex graph is said to be \emph{pancyclic} if it contains cycles of all lengths from $3$ up to $n$. Trivially, pancyclicity is a stronger property than Hamiltonicity, and one might ask how much stronger it really is. In 1973, Bondy \cite{bondy10pancyclic} stated his celebrated meta-conjecture, indicating that the former should be only slightly stronger than the latter. Indeed, he claimed that any non-trivial condition which implies that a graph is Hamiltonian should also imply that it is pancyclic (up to a certain collection of simple exceptional graphs). As an example, Bondy \cite{bondy1971pancyclic} himself first showed that every $n$-vertex graph with minimum degree at least $n/2$ is either pancyclic or isomorphic to the complete bipartite graph $K_{n/2,n/2}$, thus strengthening Dirac's theorem. His meta-conjecture sparked a lot of research which in turn has led to various appealing results and methods. For example, Bauer and Schmeichel \cite{bauer1990hamiltonian}, relying on previous results of Schmeichel and Hakimi \cite{schmeichel1988cycle}, showed that the sufficient conditions for Hamiltonicity given by Bondy \cite{bondy1980longest}, Chvátal \cite{chvatal1972hamilton} and Fan \cite{fan1984new} all imply pancyclicity, up to a certain small family of exceptional graphs. Furthermore, much like Dirac's theorem, the classical result of Chvátal and Erdős \cite{chvatal1972hamilton} that a graph whose connectivity number $\kappa(G)$ is at least as large as its independence number $\alpha(G)$ is Hamiltonian has also been addressed.
Namely, in 1990, Jackson and Ordaz \cite{jackson1990chvatal} conjectured that if $\kappa(G) > \alpha(G)$, then $G$ must be pancyclic; an approximate form of this was proven by Keevash and Sudakov \cite{keevash2010pancyclicity}, who showed that $\kappa(G) \geq 600\alpha(G)$ is already sufficient.
Bondy's meta-conjecture is about conditions for Hamiltonicity which imply pancyclicity. A natural and closely related question in a similar direction
was first studied by Erd\H{o}s in the 1970s. Let $G$ be a Hamiltonian graph; under which assumptions can we guarantee that $G$ is also pancyclic or, more generally, that it has many cycle lengths? One example of such a problem was suggested by Jacobson and Lehel at the 1999 conference ``Paul Erd\H{o}s and His Mathematics''. They asked for the minimal number of cycle lengths in a $k$-regular $n$-vertex Hamiltonian graph. They conjectured (see Verstraëte \cite{verstraete2016extremal} for a stronger conjecture) that already when $k \geq 3$ there are $\Omega(n)$ distinct cycle lengths. Improving on the previously best known lower bound of $\Omega(\sqrt{n})$ by Milans et al. \cite{milans2012cycle}, recently Buci\'c, Gishboliner and Sudakov \cite{bucic2021cycles} showed that any Hamiltonian graph with minimum degree at least $3$ has $n^{1-o(1)}$ different cycle lengths.
As we already mentioned above, the earliest question of this flavor was studied by Erd\H{o}s about 50 years ago. In 1972 he asked the following in \cite{erdos1972some}. Given an $n$-vertex Hamiltonian graph with independence number $\alpha(G)\leq k$, how large does $n$ have to be in terms of $k$ in order to guarantee that $G$ is pancyclic? Erd\H{o}s \cite{erdos1972some} proved that it is enough to have $n=\Omega(k^4)$ and conjectured that already $n=\Omega(k^2)$ should be enough. A simple construction shows that this is best possible. Let $C_1,\ldots, C_k$ be disjoint
cliques of size $2k-2$, and let each $C_i$ have two distinguished vertices $a_i$ and $b_i$. Let $G$ be the graph obtained
by connecting $a_i$ and $b_{i+1}$ by an edge for each $i$ (taking addition modulo
$k$). Notice that this graph has $n = 2k^2- 2k$ vertices, is Hamiltonian and its independence number is $k$. On the other hand, it is easy to check that it does not contain a cycle of length $2k - 1$, and thus it is not pancyclic. Indeed, observe that every cycle must be either a subgraph of
one of the cliques $C_i$, or contain all the vertices $a_i,b_i$ for each $i$. The first type of cycles all have
length at most $2k - 2$ and the latter have length at least $2k$.
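The construction is small enough to verify mechanically for a concrete $k$. The sketch below (with a hypothetical vertex labelling: clique $C_i$ occupies a contiguous block, $a_i$ being its first and $b_i$ its second vertex) builds the graph for $k = 3$, so $n = 12$, and checks by brute force that it is Hamiltonian, has independence number exactly $k$, and misses precisely the cycle length $2k-1 = 5$:

```python
from itertools import combinations

def erdos_construction(k):
    # k cliques of size 2k-2; clique i occupies vertices
    # [i*(2k-2), (i+1)*(2k-2)), with a_i its first vertex and b_i its
    # second.  Connector edges a_i -- b_{i+1} (indices mod k), as in the text.
    s = 2 * k - 2
    adj = {v: set() for v in range(k * s)}
    for i in range(k):
        for u, v in combinations(range(i * s, (i + 1) * s), 2):
            adj[u].add(v); adj[v].add(u)
    for i in range(k):
        a, b = i * s, ((i + 1) % k) * s + 1
        adj[a].add(b); adj[b].add(a)
    return adj

def has_cycle_of_length(adj, L):
    # Backtracking search for a simple cycle on exactly L >= 3 vertices,
    # anchored at its minimum vertex to avoid redundant work.
    def extend(start, v, depth, used):
        if depth == L:
            return start in adj[v]
        return any(extend(start, w, depth + 1, used | {w})
                   for w in adj[v] if w > start and w not in used)
    return any(extend(s, s, 1, {s}) for s in adj)

k = 3
adj = erdos_construction(k)
n = len(adj)  # n = 2k^2 - 2k = 12
lengths = {L for L in range(3, n + 1) if has_cycle_of_length(adj, L)}

def independent(S):
    return all(v not in adj[u] for u, v in combinations(S, 2))
```

The computed set of cycle lengths is $\{3,4\} \cup \{2k, \ldots, n\}$: short cycles live inside a clique, and any other cycle must traverse all three connector edges.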
In the last 50 years, there have been several improvements upon Erd\H{o}s's initial result. Firstly, Keevash and Sudakov \cite{keevash2010pancyclicity} proved that $n=\Omega(k^{3})$ vertices are enough to guarantee pancyclicity. Then, Lee and Sudakov \cite{lee2012hamiltonicity} improved this to $n=\Omega(k^{7/3})$, and more recently Dankovics \cite{dankovics2020low} showed that $n=\Omega(k^{11/5})$ vertices suffice. In this paper we completely resolve the conjecture of Erd\H{o}s in the following strong form.
\begin{thm}\label{thm:main}
Every Hamiltonian graph $G$ with $\alpha(G)\leq k$ and at least $2k^2+o(k^2)$ vertices is pancyclic.
\end{thm}
\noindent As shown by the previous construction, our result is tight up to the $o(k^2)$ error term. The rest of this paper is organized as follows. In \Cref{sec:preliminarieas}, we state some well-known tools and introduce some useful definitions. There we also prove the two key propositions that are used in the proof of Theorem~\ref{thm:main}, which is given in \Cref{sec:proof}. Finally, in \Cref{sec:concludingrem} we make some concluding remarks and mention some open questions; we also show how to get a short proof of the conjecture of Erd\H{o}s, that is, a proof of \Cref{thm:main} with a sufficiently large constant $C$ instead of the precise factor of $2$.
\section{Preliminaries}\label{sec:preliminarieas}
\subsection{Notation and definitions}
We mostly use standard graph theoretic notation. Let $G$ be a finite graph. Denote by $V(G)$ its vertex set, and let $S_1,S_2\subseteq V(G)$. We denote by $G[S_1]$ the subgraph of $G$ induced by $S_1$, and by $E[S_1,S_2]$ the set of edges with one endpoint in $S_1$ and the other in $S_2$. Let $H$ be a subgraph of $G$. We denote by $G[H]$ the graph $G[V(H)]$.
A path $P=(x_0,x_1,\ldots,x_\ell)$ of length $\ell$ is a graph on vertex set $\{x_0,x_1,\ldots,x_\ell\}$ with an edge between $x_{i-1}$ and $x_{i}$ for all $i\in[\ell]$. We say that $x_0$ and $x_\ell$ are the endpoints of $P$, and we call $P$ an $x_0x_\ell$-path.
If the vertices of the graph $G$ come with a given ordering, then we say that a path $P=(x_0,x_1,\ldots,x_\ell)$ contained in $G$ is increasing if $x_0<x_1<\ldots<x_\ell$.
We denote by $\alpha(G)$ the independence number of $G$. Given a digraph $D$, its independence number $\alpha(D)$ is defined as the independence number of the underlying graph.
Given sets $A_1,A_2\subset \mathbb N$, we denote by $A_1+A_2$ the set of integers $c$ such that $c=a_1+a_2$ for some $a_1\in A_1$ and $a_2\in A_2.$ Throughout the paper we omit floor and ceiling signs for clarity of presentation, whenever it does not impact the argument.
\begin{defn}
Let $a,b,p$ be positive real numbers. Given a graph $G$, and two vertices $x$ and $y$, we say that the pair $xy$ is $p$-dense in the interval $[a,b]$ if for every subinterval $[a',b']$ with $b'-a'\geq
p$ there is an integer $\ell\in[a',b']$ and an $xy$-path in $G$ of length $\ell$.
\end{defn}
\subsection{Standard tools}
\noindent Here we state and prove some standard facts which we use in our proof. We start with the following well-known result about directed graphs of Gallai and Milgram \cite{gallai1960verallgemeinerung}. A \emph{path cover} in a directed graph is a partition of its vertex set into directed paths, and its size is the number of such paths.
\begin{lem}[\cite{gallai1960verallgemeinerung}]\label{lem:partition}
Every directed graph $D$ has a path cover of size at most $\alpha(D)$.
\end{lem}
\noindent We also use the celebrated Ramsey's theorem.
\begin{thm}\label{lem:ramsey}
For every two positive integers $k,t$, there exists an integer $n$ such that for any $k$-coloring of the edges of $K_n$, there is a monochromatic copy of $K_t$ in $K_n$.\end{thm}
\noindent The next lemma shows that we can partition a large proportion of the vertex set of a graph into sets with small diameter, such that there are no edges between the parts.
\begin{lem}\label{lem:BFSpartition}
Let $G$ be an $n$-vertex graph and let $0<\gamma <\frac{1}{2}$. Then, there exists a collection of vertices $v_1, v_2, \ldots, v_r$ and disjoint sets $U_1, U_2, \ldots, U_r \subseteq V(G)$ such that the following hold.
\begin{enumerate}
\item $v_j \in U_j$ for all $j$, and $\left| \bigcup_j U_j \right| \geq (1-\gamma)n$.
\item Every vertex $u \in U_j$ has $\text{dist}(v_j,u) \leq \log_{1+\gamma} n$.
\item There is no edge between two sets $U_j, U_{j'}$ with $j \neq j'$.
\end{enumerate}
\end{lem}
\begin{proof}
We find the required sets and vertices with the following process. We start with an arbitrary vertex $v_1 \in V(G)$ and consider the breadth-first-search tree rooted at $v_1$, that is, consider the sets $V^{(1)}_0,V^{(1)}_1,V^{(1)}_2, \ldots \subseteq V(G)$ defined as $V^{(1)}_i := \{u \in G : \text{dist}(v_1,u) = i\}$. Now, define $i_1 \geq 0$ to be the minimal $i$ such that $|V^{(1)}_{i+1}| \leq \gamma \left|V^{(1)}_0 \cup \ldots \cup V^{(1)}_i \right|$ and let $U_1 := V^{(1)}_0 \cup \ldots \cup V^{(1)}_{i_1}$. We now continue the process and do the same on the graph $G' := G \setminus \left(U_1 \cup V^{(1)}_{i_1 + 1} \right)$. More precisely, take an arbitrary vertex $v_2 \in G'$ and consider again the breadth-first-search tree in $G'$ rooted at $v_2$. Like before, this gives an $i_2$ and sets $V^{(2)}_0, \ldots, V^{(2)}_{i_2+1}, U_2$. We then repeat this on the graph $G'' := G' \setminus \left(U_2 \cup V^{(2)}_{i_2 + 1} \right)$ and continue doing this until we have no vertices left. By construction, the desired properties hold. Indeed, the only vertices not contained in $\bigcup_j U_j$ are in some $V^{(j)}_{i_j+1}$, hence there are at most $\gamma n$ of them. For the second property, observe that each $U_j$ is of size at least $(1+\gamma)^{i_j}$, so
$i_j\leq \log_{1+\gamma}n$. The third condition holds by construction, since we deleted all the neighbors of $U_j$ before defining $U_{j+1}$.
\end{proof}
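The proof above is effectively an algorithm. A minimal sketch (the adjacency structure is a dict of sets; each center is an arbitrary remaining vertex, here the smallest label), which also checks the three properties of Lemma~\ref{lem:BFSpartition} on a random graph:

```python
import math, random

def bfs_partition(adj, gamma):
    # Grow a BFS ball from a remaining vertex; stop at the first layer that
    # is gamma-small relative to the current ball, discard that layer as a
    # buffer, and repeat on what is left.
    remaining = set(adj)
    parts = []  # (center v_j, part U_j, radius i_j)
    while remaining:
        v = min(remaining)
        layers, ball = [{v}], {v}
        while True:
            nxt = {w for u in layers[-1] for w in adj[u]
                   if w in remaining and w not in ball}
            if len(nxt) <= gamma * len(ball):
                break
            layers.append(nxt)
            ball |= nxt
        parts.append((v, ball, len(layers) - 1))
        remaining -= ball | nxt
    return parts

# Sanity check on a sparse random graph.
random.seed(1)
n, gamma = 500, 0.25
adj = {v: set() for v in range(n)}
for u, w in ((u, w) for u in range(n) for w in range(u + 1, n)):
    if random.random() < 3.0 / n:
        adj[u].add(w); adj[w].add(u)
parts = bfs_partition(adj, gamma)
covered = set().union(*(U for _, U, _ in parts))
```

The coverage and radius bounds hold deterministically, exactly as in the proof: each discarded buffer layer has size at most $\gamma$ times its part, and each part of radius $i$ has more than $(1+\gamma)^i$ vertices.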
\noindent Finally, we state a trivial observation used throughout the proof of Theorem \ref{thm:main}. It will be used to state that appropriate combinations of internally vertex-disjoint paths of different lengths result in the construction of cycles of many different lengths.
\begin{obs}\label{obs:combining}
Let $G$ be a graph whose vertex set contains $t$ disjoint sets $S_1,\ldots, S_t$ and another set of $t$ vertices $v_1,\ldots,v_t$ outside of $\bigcup_{i=1}^t S_i$. For each $i\in[t]$, let $A_i\subset \mathbb N$ and suppose that for every $i$ the induced subgraph $G[\{v_i\}\cup S_i\cup \{v_{i+1}\}]$ contains a $v_iv_{i+1}$-path of length $\ell$ for each $\ell\in A_i$ (with $v_{t+1}=v_1$). Then for every $\ell\in A_1+\ldots+A_t$, the graph $G$ contains a cycle of length $\ell$.
\end{obs}
\subsection{Finding consecutive path lengths}
Next we show that in a graph with small independence number, we can find two vertices between which there exist paths of almost all `possible' lengths. We believe that this result is of independent interest and pose a problem related to it after its proof.
\begin{prop}\label{lem:pathlengthsininterval}
Let $G$ be an $n$-vertex graph with $\alpha(G) = k$ and let $0<\gamma <1/2$. Then there exist two vertices $u,v \in V(G)$ such that for every $\ell\in[\log_{1+\gamma}n, (1-\gamma)\frac{n}{k}]$ there is a $uv$-path of length $\ell$.
\end{prop}
\begin{proof}
First, we apply Lemma \ref{lem:BFSpartition} to $G$ to get vertices $v_i$ and sets $U_i$ for all $i\leq r$. Fix the graph $H := G[U_1 \cup \ldots \cup U_r]$ and let us orient the edges of $H$ in the following manner. For an edge $xy$ in $H$ with $x,y \in U_j$ (recall property \textit{3} of the sets $U_j$) orient it as $x \rightarrow y$ if $\text{dist}(v_j,x) < \text{dist}(v_j,y)$ and as $y \rightarrow x$ if $\text{dist}(v_j,x) > \text{dist}(v_j,y)$. In the case that $\text{dist}(v_j,x) = \text{dist}(v_j,y)$, orient the edge arbitrarily.
Now, since $\alpha(H) \leq \alpha(G) = k$ and $|H| \geq (1-\gamma)n$, by Lemma \ref{lem:partition} there must exist a directed path $\overrightarrow{P} = x_1 \rightarrow x_2 \rightarrow \ldots \rightarrow x_m$ in $H$ with $m \geq \frac{|H|}{\alpha(H)} \geq (1-\gamma)\frac{n}{k}$ vertices. Let $j$ be such that $\overrightarrow{P} \subseteq U_j$, and for each $i\in[m]$ denote $d_i:=\text{dist}(v_j,x_i)$ and note that $d_i\leq \log_{1+\gamma}n$.
Now we show that for all $\ell \in \left[d_m, m + d_1 \right] \supseteq \left[\log_{1+\gamma} n, m\right]$ there is a path in $G$ of length $\ell$ between $v_j$ and $x_m$. For each $i\in[m]$ look at the $v_jx_m$-path $P_i$ obtained by concatenating the shortest $v_jx_i$-path with the path $x_{i+1}x_{i+2}\ldots x_m$. Since, by the choice of the orientation, $d_i \leq d_{i'}$ whenever $i < i'$, these two paths are vertex-disjoint and their union is indeed a path as well. Moreover, for each $i$, we have that $|P_i|-1\leq|P_{i+1}|\leq |P_i|$ since $x_i \rightarrow x_{i+1}$ is an edge and thus $d_i \leq d_{i+1} \leq d_i +1$. Because $|P_1|=d_1+m$ and $|P_m|=d_m$, each path length in $\left[d_m, m + d_1 \right]$ is then attained by at least one of the constructed paths. Since $d_m\leq \log_{1+\gamma}n$ and $m \geq (1-\gamma)\frac{n}{k}$, this finishes the proof.
\end{proof}
Before moving on to the next section, it is worth noting that the above proposition is asymptotically tight. Indeed, an $n$-vertex graph $G$ with $\alpha(G) = k$ does not even necessarily need to contain a path of length larger than $\frac{n}{k}$, as we can see from a disjoint union of cliques of size $n/k$.
Since the proof of this proposition uses a result about directed paths, we further ask whether the following directed variant of it might be true as well.
\begin{prob}
Does Proposition \ref{lem:pathlengthsininterval} generalize to directed graphs? If $G$ is a directed graph, how large an interval $I \subseteq [0,\frac{n}{k}]$ can we guarantee for which there are vertices $u,v$ with a directed $uv$-path of length $\ell$ for all $\ell \in I$?
\end{prob}
\subsection{Path shortening}
In this section we show that if a graph has small independence number and contains a long path $P$, then we can find a slightly shorter path $P'$ with the same endpoints, which satisfies certain additional properties. In a graph with independence number $k$, a path can clearly be shortened by considering $2k+1$ consecutive vertices on the path and observing that there must be an edge (not contained in the path) between two non-consecutive vertices among them. This is the statement of the next simple lemma.
\begin{lem}\label{lem:easyjump}
Let $G$ be a graph with independence number $k$, and let $P$ be a path in $G$ with endpoints $x,y$ such that $|P| > 2k$. Then, there is an
$xy$-path $P'$ with $V(P') \subseteq V(P)$ and $|P| - 2k \leq |P'| < |P|$.
\end{lem}
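The shortening in Lemma~\ref{lem:easyjump} is constructive: among the first $2k+1$ vertices of $P$ there must be a chord $\{x_i,x_j\}$ with $j \geq i+2$, since otherwise $x_0, x_2, \ldots, x_{2k}$ would be $k+1$ pairwise non-adjacent vertices, and shortcutting along such a chord removes at most $2k-1$ vertices. A minimal sketch (the path is a vertex list, the graph an adjacency dict, and $k$ the assumed bound on the independence number):

```python
def shorten_path(adj, path, k):
    # Among the first 2k+1 path vertices some chord {x_i, x_j}, j >= i+2,
    # must exist: otherwise x_0, x_2, ..., x_{2k} would be k+1 pairwise
    # non-adjacent vertices.  Shortcut along the first chord found.
    window = path[: 2 * k + 1]
    for i in range(len(window) - 2):
        for j in range(i + 2, len(window)):
            if window[j] in adj[window[i]]:
                return path[: i + 1] + path[j:]
    raise ValueError("no chord found: independence number exceeds k")

# Example: K_6 has independence number 1, so any path on > 2 vertices
# shortens via the chord {x_0, x_2}.
K6 = {v: set(range(6)) - {v} for v in range(6)}
P = list(range(6))
P2 = shorten_path(K6, P, 1)
```

The returned path keeps both endpoints, stays simple, and loses between $1$ and $2k-1$ vertices.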
\noindent The following proposition is one of the central results of this paper. It shows that we can shorten a path only by a little while also preserving a pre-specified set of vertices in the newly obtained path.
\begin{prop}\label{lem:jumpwithzigzag}
Let $G$ be an $n$-vertex graph with independence number $k$, let $P$ be a path in $G$ with endpoints $x,y$, let $c \in \mathbb{N}$ and $U \subseteq V(P)$. Then there is an
$xy$-path $P'$ with the following properties.
\begin{enumerate}
\item $U \subseteq V(P') \subseteq V(P)$.
\item $|P| - (4c-3) \leq |P'| < |P|$, provided that $c \left(\frac{|P|-(4c-1)|U|}{2k}-1 \right)>k$.
\end{enumerate}
\end{prop}
\noindent Before we give a proof of Proposition~\ref{lem:jumpwithzigzag}, we introduce some useful concepts. The key idea to prove the proposition is to find a certain structure in our graph which can be used to shorten a path in a graph with low independence number. The properties of this structure are captured by the notion of a \emph{special edge set}, defined below.
\begin{defn}\label{def:special}
Given a graph $H$ with ordered vertex set $(1,2,\ldots, n)$, we say that a sequence of vertices $(v_1,\ldots,v_{\ell})$ is \emph{special} if the following hold (see Figure \ref{fig:special set}).
\begin{itemize}
\item $v_{i+1} > v_{i}$ for all $i\in[\ell-1]$.
\item $(v_{i},v_{i+1}+1)$ is an edge in $H$ for all $i\in [\ell-1]$. We call the set formed by these edges a \emph{special edge set}.
\end{itemize}
\end{defn}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.5,main node/.style={circle,draw,color=black,fill=black,inner sep=0pt,minimum width=3pt}]
\tikzset{cross/.style={cross out, draw=black, fill=none, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, cross/.default={2pt}}
\tikzset{rectangle/.append style={draw=brown, ultra thick, fill=red!30}}
\foreach \i in {1,...,46}
{
\node[main node, scale=0.4] (aux) at (\i*0.2-0.6,0){};
}
\node[rectangle, color=red, scale=0.4] (a) at (0,0) [label=below:$v_1$]{};
\node[rectangle, color=red, scale=0.4] (a1) at (2,0)[label=below:$v_2$]{};
\node[rectangle, color=red, scale=0.4] (a2) at (3,0)[label=below:$v_3$]{};
\node[rectangle, color=red, scale=0.4] (a3) at (5.4,0)[label=below:$v_4$]{};
\node[rectangle, color=red, scale=0.4] (a4) at (8,0)[label=below:$v_5$]{};
\node[main node] (a4) at (8.2,0){};
\node[main node] (b1) at (2.2,0){};
\node[main node] (b2) at (3.2,0){};
\node[main node] (b3) at (5.6,0){};
\draw[line width= 1 pt] (a) to [bend left=60](b1);
\draw[line width= 1 pt] (a1) to [bend left=60](b2);
\draw[line width= 1 pt] (a2) to [bend left=60](b3);
\draw[line width= 1 pt] (a3) to [bend left=60](a4);
\end{tikzpicture}
\caption{An illustration of a special edge set. The special vertices are colored red.}
\label{fig:special set}
\end{figure}
\noindent We can now find special vertex sequences in graphs in the following manner.
\begin{lem}\label{lem:zigzag}
Let $H$ be a graph on the vertex set $[n]$ with independence number $k$, and let $U' \subseteq [n]$. Then, there exists a special vertex sequence with at least $\frac{n-|U'|}{2k}$ vertices in $[n]\setminus U'$.
\end{lem}
\begin{proof}
Let $H'$ be the graph obtained from $H$ by removing all edges $\{u,v\}$ with $|u-v| = 1$. Since the removed edges form a graph with chromatic number at most 2, we must have that $\alpha (H') \leq 2 \alpha(H) \leq 2k$. Let us also direct the edges of $H'$ so that an edge $ij$ is oriented from $i$ to $j$ if $i < j$. Applying Lemma~\ref{lem:partition} to this directed graph, we obtain a partition of $[n]$ into at most $2k$ directed paths. Denote the digraph formed by the union of those paths by $F$. An important property of $F$ that we are going to use is that every vertex of $F$ has in-degree and out-degree at most one.
Having obtained this path cover, we now want to decompose the edges of $F$ into a small number of special edge sets. Simply take $\mathcal{M}$ to be a smallest collection of edge-disjoint special edge sets which decompose the edges of $F$ (this exists, as one such collection is the set of all single edges of $F$). If $(v_1,v_2,\ldots,v_\ell)$ is the special sequence corresponding to a special edge set $M$ in $\mathcal{M}$, then note that $v_\ell$ must be a vertex of out-degree $0$ in $F$. Indeed, if this is not the case, then there exists $u > v_\ell$ such that $v_\ell \rightarrow u$ is an edge of $F$. Let $M' \in \mathcal{M}$ be the special edge set which contains it and note that then $v_\ell$ is the first vertex of this special edge set, since otherwise it would contain the edge $\{v_{\ell -1}, v_{\ell}+1\} \in M$ and contradict the edge-disjointness of the special edge sets in $\mathcal{M}$. But now we can ``concatenate'' $M$ and $M'$ to form a larger special edge set $M \cup M'$, which contradicts the minimality of $\mathcal{M}$.
Notice that the special edge sets in $\mathcal{M}$ have disjoint corresponding special vertex sequences. Indeed, since these special edge sets are non-empty and edge-disjoint, the only possibility for a common vertex in two different special vertex sequences would be if some vertex $v$ were the first vertex of one sequence and the last vertex of another one. In turn, as shown in the previous paragraph, this would contradict the minimality of $\mathcal{M}$. Now, a consequence of the disjointness of the special vertex sequences in $\mathcal{M}$ and of the previous paragraph is that each of these sequences contains a unique vertex of out-degree $0$, namely its last vertex. Moreover, let $S \subseteq V(H)$ denote the set of vertices which do not belong to any special vertex sequence of $\mathcal{M}$, and notice that every $v \in S$ also has out-degree $0$ in $F$. In turn, since $F$ is a partition of $[n]$ into at most $2k$ paths, there are at most $2k$ vertices of out-degree zero, and thus $|\mathcal{M}| + |S| \leq 2k$. Hence, there exists one special vertex sequence in $\mathcal{M} \cup S$ (allowing a single vertex to count as a special vertex sequence) with at least $\frac{n - |U'|}{|\mathcal{M}|+|S|} \geq \frac{n-|U'|}{2k}$ vertices not in $U'$, which finishes the proof.
\end{proof}
\noindent We can now give the announced proof of our proposition.
\begin{proof}[Proof of Proposition \ref{lem:jumpwithzigzag}]
Let $(1,2,\ldots,|P|)$ be an ordering of the vertices of $P$, with the endpoints $x=1$ and $y=|P|$.
Let $U_c$ be the set of vertices $w$ in $P$ such that there is a $u\in U$ with $|w-u|\leq 2c-1$. By applying Lemma~\ref{lem:zigzag} to $G[V(P)]$ and the set $U_c$, we obtain a special vertex sequence $(v_1,\ldots,v_\ell)$ corresponding to a special edge set $M$, with at least $\frac{|P|-|U_c|}{2k}$ vertices outside of $U_c$. First, notice that for each $v_{i+1}\notin U_c$ we may assume that $v_{i+1}-v_i\geq 2c$. Indeed, if that were not the case, then we would obtain the desired path $P'$ from $P$ by replacing the interval $[v_i,v_{i+1}+1]$ with the edge $(v_i,v_{i+1}+1)$, noting that there are no vertices from $U$ inside the removed interval, since otherwise $v_{i+1}$ would be in $U_c$, a contradiction.
Now we define, for each special vertex $v = v_i$ which is not in $U_c\cup \{v_1\}$, the $c$-element set $S_v:=\{v-1, v-3,\ldots, v-(2c-1)\}$ contained in the $(2c-1)$-element interval $I_v:=[v-(2c-1),v-1]$, which is disjoint from $U$. Since $v_{i}-v_{i-1}\geq 2c$, all of these sets are pairwise disjoint and, moreover, disjoint from $U$. Now, the union $S$ of those sets has size at least $\left(\frac{|P|-|U_c|}{2k}-1 \right)c$. Therefore, since $|U_c|\leq (4c-1)|U|$, we get that $|S|\geq k+1$ by our assumption on $c$. Hence, since the independence number of $G$ is $k$, there exists an edge of $G$ spanned by $S$. We may assume that this edge does not lie inside a single $S_v$, as otherwise we can again obtain the desired path $P'$ by using this edge instead of the interval which it bridges, avoiding at least one and at most $2c-1$ vertices, none of which are in $U$. Hence, the edge $ab$ we found, with $a<b$, runs between two distinct sets $S_{v_i}$ and $S_{v_j}$. Now we can find the required path $P'$ as shown in Figure~\ref{fig:n-1}, avoiding at most $4c-3$ vertices.
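Explicitly, combining the two estimates above, the assumption on $c$ yields
$$|S|\;\geq\;\left(\frac{|P|-|U_c|}{2k}-1\right)c\;\geq\;\left(\frac{|P|-(4c-1)|U|}{2k}-1\right)c\;\geq\;k+1.$$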
\noindent More precisely, we obtain the required path $P'$ as the union of the following paths:
\begin{itemize}
\item The part of the path $P$ which connects $x$ to $a$, plus the edge $(a,b)$.
\item The increasing path $P_2$ obtained by the following iterative procedure. First, initialize $P_2$ to be the edge $(v_i,v_i+1)$. Repeat the following. Let $r$ be the last vertex of $P_2$. If $r$ is a special vertex $r=v_t$, then add the edge $(r,v_{t+1}+1)$ to $P_2$, and update $r=v_{t+1}+1$. If $r$ is not a special vertex, add the edge $(r,r+1)$ to $P_2$, and update $r=r+1$. We stop when either $r=b$ or $r=v_j+1$.
\item The increasing path $P_3$ obtained by the following iterative procedure. First we initialize $P_3$ to be the edge $(v_i,v_{i+1}+1)$. Repeat the following (exactly as for the previous path). Let $r$ be the last vertex of $P_3$. If $r$ is a special vertex $r=v_t$, then add the edge $(r,v_{t+1}+1)$ to $P_3$, and update $r=v_{t+1}+1$. If $r$ is not a special vertex, add the edge $(r,r+1)$ to $P_3$, and update $r=r+1$. We stop when either $r=b$ or $r=v_j+1$.
\item The part of the path $P$ which connects $v_j+1$ to $y$.
\end{itemize}
It is easy to see that this path contains only vertices of $P$ and that it does not contain the vertex $v_j$. Furthermore, the only other vertices from $P$ which the new path $P'$ avoids are the vertices in $I_{v_i}$ which are strictly larger than $a$, and the vertices in $I_{v_j}$ which are strictly larger than $b$. So in total, the new path $P'$ avoids at most $1+(|I_{v_i}|-1)+(|I_{v_j}|-1)=4c-3$ vertices of $P$, which completes the proof.
\end{proof}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1,main node/.style={circle,draw,color=black,fill=black,inner sep=0pt,minimum width=7pt}]
\tikzset{rectangle/.append style={draw=brown, ultra thick, fill=red!30}}
\node[rectangle, color=black, scale = 0.3] (s1) at (0,0) {$x$};
\node[rectangle, color=black, scale = 0.3] (s2) at (17,0) {$y$};
\node[rectangle, scale=0.6, color=red, opacity=1] (a1) at (1,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a2) at (4,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a3) at (6,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a4) at (9,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a5) at (11,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a6) at (13,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a7) at (16,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b0) at (1.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b1) at (4.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b2) at (6.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b3) at (9.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b4) at (11.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b5) at (13.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b6) at (16.3,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p1) at (3.7,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p11) at (3.4,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p111) at (3.1,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p3) at (8.7,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p4) at (8.4,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p5) at (8.1,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p6) at (12.7,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p6) at (12.4,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p6) at (12.1,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p7) at (15.7,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p5) at (15.4,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p9) at (15.1,0){};
\draw[line width= 1 pt] (a1) to [bend left=50](b1);
\foreach \i in {2,...,6}
{
\draw[line width= 2 pt, color = blue] (a\i) to [bend left=50](b\i);
}
\draw[line width= 2 pt, color = blue] (p11) to [bend right=28](p9);
\draw[line width= 2 pt, color = blue] (p11) to (s1);
\draw[line width= 2 pt, color = blue] (a2) to (a3);
\draw[line width= 2 pt, color = blue] (b2) to (a4);
\draw[line width= 2 pt, color = blue] (b3) to (a5);
\draw[line width= 2 pt, color = blue] (b4) to (a6);
\draw[line width= 2 pt, color = blue] (b5) to (p9);
\draw[line width= 2 pt, color = blue] (b6) to (s2);
\node[rectangle, scale=0.6, color=red, opacity=1] (a1) at (1,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a2) at (4,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a3) at (6,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a4) at (9,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a5) at (11,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a6) at (13,0){};
\node[rectangle, scale=0.6, color=red, opacity=1] (a7) at (16,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b0) at (1.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b1) at (4.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b2) at (6.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b3) at (9.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b4) at (11.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b5) at (13.3,0){};
\node[main node, scale=0.6, color=black, opacity=1] (b6) at (16.3,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p1) at (3.7,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p11) at (3.4,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p111) at (3.1,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p3) at (8.7,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p4) at (8.4,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p5) at (8.1,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p6) at (12.7,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p6) at (12.4,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p6) at (12.1,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p7) at (15.7,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p5) at (15.4,0){};
\node[main node, scale=0.6, fill=white, opacity=1] (p9) at (15.1,0){};
\node[scale=1] (a) at (0, -0.3){$x$};
\node[scale=1] (a) at (17,-0.3){$y$};
\node[scale=1] (a) at (3.2,0.4){$S_{v_i}$};
\node[scale=1] (a) at (15.4,0.3){$S_{v_j}$};
\node[scale=1] (a) at (3.4,-0.2){$a$};
\node[scale=1] (a) at (15.1,-0.2){$b$};
\end{tikzpicture}
\caption{
The shorter path $P'$ is drawn in blue, the special vertices are represented by red squares, and the vertices after them on $P$ are represented by black dots. The vertices in the sets $S_v$ are represented by white dots. Also note the following about the black dots and red squares which are between $a$ and $b$ on $P$: it could be that one of these black dots and the red square after it in our drawing are the same vertex, but the construction of the path $P'$ remains the same (for example, the fourth black dot and the fifth red square could be the same vertex). Notice also in the drawing that some of the red squares do not have a corresponding set of white dots; these are precisely the special vertices which are in $U_c \cup \{v_1\}$.
}
\label{fig:n-1}
\end{figure}
\section{Proof of Theorem \ref{thm:main}}\label{sec:proof}
Let $\varepsilon>0$ be a small enough constant, and let $k$ be sufficiently large in terms of $\varepsilon$. Let $G$ be a Hamiltonian graph with $\alpha(G)\leq k$ on $n \geq (2+\varepsilon)k^2$ vertices. Our goal is to prove that $G$ is pancyclic. It will be convenient for us to consider different ranges of cycle lengths, and for each range we have a separate subsection which deals with it.
\subsection{Lower range: from $3$ to $(2+\varepsilon)k$}
Showing that $G$ contains all cycles of lengths between $3$ and $(2+\varepsilon)k$ only requires the fact that $G$ has no independent set of size $k+1$. Indeed, this boils down to the study of cycle-complete Ramsey numbers. Namely, the cycle-complete Ramsey number $r(C_\ell,K_s)$ is the smallest number $N$ such that every graph on $N$ vertices either contains a copy of $C_\ell$ or an independent set of size $s$.
The following result of Erd\H{o}s, Faudree, Rousseau and Schelp \cite{erdos1978cycle}, along with a more recent result by Keevash, Long and Skokan \cite{keevash2021cycle}, covers the range of cycle lengths we need.
\begin{thm}[\cite{erdos1978cycle}]\label{thm:erdos cycle-complete}
Let $\ell\geq 3$ and $s\geq 2$. Then $r(C_\ell, K_s) \leq\left((\ell-2)(s^{1/x}+2)+1\right)(s-1)$, where $x=\lfloor \frac{\ell-1}{2}\rfloor$.
\end{thm}
The next result by Keevash, Long and Skokan gives the precise behaviour of cycle-complete Ramsey numbers in a wide range of parameters, and proves a conjecture from \cite{erdos1978cycle}.
\begin{thm}[\cite{keevash2021cycle}]\label{precise cycle-complete}
There exists $C \geq 1$ so that $r(C_\ell , K_s) = (\ell - 1)(s- 1) + 1$ for $s \geq 3$ and $\ell\geq C \frac{\log s}{\log\log s}$.
\end{thm}
Now, note that since $G$ contains no independent set of size $k+1$, Theorem~\ref{thm:erdos cycle-complete} implies the existence of a cycle of length $\ell$ for every $\ell\in[3,\log k]$, while Theorem~\ref{precise cycle-complete} covers the range of $[\log k,(2+\varepsilon)k]$.
\subsection{Upper range: from $\frac{1000}{\varepsilon^2}k$ to $n$}\label{sec:upperrange}
First, note that all cycle lengths in $[2k^2+2k,n]$ can be obtained by iteratively applying Proposition~\ref{lem:jumpwithzigzag} with $c=1$ and $U=\emptyset$, each time shortening the cycle by one vertex. Indeed, let $P$ be a Hamilton path contained in a Hamilton cycle of $G$, and let $x$ and $y$ be its endpoints. As long as the current path has more than $2k^2+2k$ vertices, applying the mentioned proposition yields a path with the same endpoints which is shorter than $P$ by exactly one vertex (recall that $c=1$), so adding the edge $xy$ to it creates a cycle of length $n-1$. We remove the remaining vertex and repeat. This gives all cycle lengths in $[2k^2+2k,n]$.
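To see where the threshold comes from, note that with $c=1$ and $U=\emptyset$ the condition $c\left(\frac{|P|}{2k}-1\right)>k$ under which Proposition~\ref{lem:jumpwithzigzag} applies reads
$$\frac{|P|}{2k}-1\;>\;k \quad\Longleftrightarrow\quad |P|\;>\;2k(k+1)\;=\;2k^2+2k,$$
which is exactly the assumption on the length of the current path.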
Now we turn to the cycle lengths in $\left[\frac{1000}{\varepsilon^2}k,2k^2+2k\right]$. For this we need the following lemma.
\begin{lem}\label{lem:partitionintomatchingcycle}
Let $G$ be a Hamiltonian graph on $n$ vertices with independence number $k$. Then, there is a partition of the vertices of $G$ into a cycle $C$ and a set $S$ of size $|S|= \frac{\varepsilon n}{20}$, such that there is a matching $M \subseteq E[C,S]$ which covers $S$.
\end{lem}
\begin{proof}
To show this claim, we apply Proposition~\ref{lem:jumpwithzigzag} iteratively $\varepsilon n/20$ times as follows. We always have $c=1$, and in the beginning we set $U_M=S=\emptyset$, and we set $M$ to be an empty matching and $C$ a Hamilton cycle in $G$.
During the procedure $C$ and $S$ are always disjoint and partition the vertices of $G$, $M$ is a matching in $E[C,S]$ which covers $S$, and $U_M$ is the set of endpoints of $M$ in $C$.
In the first step we apply Proposition~\ref{lem:jumpwithzigzag} with the mentioned values of $c=1$ and $U=\emptyset$, to get a cycle $C'$ of length $n-1$, and a vertex $v$ which is not on $C'$. Let $v'$ be a neighbor of $v$ in the cycle $C$. Now we set $C=C'$, $S = \{v\}$, $M=\{vv'\}$ and $U_M=\{v'\}$.
In the $i$-th step of the procedure, we let $U$ denote the set of vertices which are either in $U_M$, or adjacent to a vertex in $U_M$ on the current cycle $C$.
We apply Proposition~\ref{lem:jumpwithzigzag} with $c=1$ and $U$ to the graph $G[C]$, to get a cycle $C'$ of length $|C|-1$ and a vertex $v \in C \setminus C'$ which is not in $U$. Again, we denote by $v'$ the neighbor of $v$ in $C$, and we set $M:=M\cup \{vv'\}$, $S := S \cup \{v\}$, $U_M=U_M \cup\{v'\}$ and $C=C'$. Notice that since we only perform $\varepsilon n/20$ steps, at each point we have that $|U_M|\leq \varepsilon n/20$, and that $|C|\geq n-\varepsilon n/20$. Together with the fact that $|U|\leq 3|U_M|$, this gives that $\frac{|C|-3|U|}{2k}-1> k$, so we can always successfully apply Proposition~\ref{lem:jumpwithzigzag} with $c=1$. The resulting matching is then of size $\frac{\varepsilon n}{20}$, and evidently satisfies the given requirements.
\end{proof}
We are ready to show how to get the cycle lengths in $\left[\frac{1000}{\varepsilon^2}k,2k^2+2k\right]$. We first apply the above lemma to get a cycle $C$ of length $n-\frac{\varepsilon n}{20}\geq 2k^2+\frac{2\varepsilon k^2}{3}$ and, outside of it, a set $S$ of size $|S|=\frac{\varepsilon n}{20}$, together with a matching $M$ between them which covers $S$. Split the cycle $C$ into $4/\varepsilon$ intervals of (almost) equal size. By the pigeonhole principle, at least one of those intervals contains at least $\frac{\varepsilon^2n}{80}$ endpoints of $M$. Let $S'$ be the subset of $S$ of vertices corresponding to those endpoints in $M$. We apply Proposition~\ref{lem:pathlengthsininterval} to the graph $G[S']$ with, say, $\gamma=1/100$, and conclude that there are two vertices $x'$ and $y'$ in $G[S']$ between which there exists a path of length $\ell$ in $G[S']$, for every $\ell\in [\frac{\varepsilon^2k}{100}, \frac{\varepsilon^2k}{50}]$.
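The pigeonhole count here is simply that the $\frac{\varepsilon n}{20}$ endpoints of $M$ are distributed among $\frac{4}{\varepsilon}$ intervals, so some interval receives at least
$$\frac{\varepsilon n/20}{4/\varepsilon}\;=\;\frac{\varepsilon^2 n}{80}$$
of them.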
Let $x$ and $y$ be the vertices in $C$ corresponding to $x'$ and $y'$ in $M$. Let $P$ be the longer path in $C$ which connects $x$ and $y$, so that by the choice of $S'$ we have $|P|\geq |C|-\frac{\varepsilon n}{4}>2k^2+2k$.
Now we use Proposition~\ref{lem:jumpwithzigzag} to show that in $G[P]$ the pair $(x,y)$ is $\frac{\varepsilon^2 k}{100}$-dense in $\left[\frac{900k}{\varepsilon^2},2k^2+2k\right]$. Set $P_0:=P$, and obtain the path $P_i$ from the path $P_{i-1}$ by applying Proposition~\ref{lem:jumpwithzigzag} with $c=\frac{\varepsilon^2k}{400}$ and $U=\emptyset$. We do this until $|P_i|<\frac{900k}{\varepsilon^2}$, and then we stop the procedure. Notice that we could perform each step of the procedure since we always had $c \left(\frac{|P_i|}{2k}-1 \right)>k$. Hence we obtain a sequence of $xy$-paths of decreasing lengths, where $|P_i|\geq|P_{i-1}|-4c+3\geq |P_{i-1}|-\frac{\varepsilon^2k}{100}$. Since $|P_0|>2k^2+2k$ and the last path has length at most $\frac{900k}{\varepsilon^2}$, we indeed get that in $G[P]$ the pair $(x,y)$ is $\frac{\varepsilon^2 k}{100}$-dense in $\left[\frac{900k}{\varepsilon^2},2k^2+2k\right]$.
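To verify the applicability condition quantitatively: every application happens while the current path has length at least $\frac{900k}{\varepsilon^2}$, so with $c=\frac{\varepsilon^2k}{400}$ we have
$$c\left(\frac{|P_i|}{2k}-1\right)\;\geq\;\frac{\varepsilon^2k}{400}\left(\frac{450}{\varepsilon^2}-1\right)\;=\;\frac{9k}{8}-\frac{\varepsilon^2k}{400}\;>\;k,$$
using that $\varepsilon$ is small.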
Now, applying \Cref{obs:combining} to the graph $G[P\cup S']$ with $v_1=x$ and $v_2=y$, gives all cycle lengths in $\left[\frac{1000k}{\varepsilon^2},2k^2+2k\right]$.
\subsection{Middle range: from $(2+\varepsilon) k$ to $\frac{1000}{\varepsilon^2}k$}\label{sec:middlerange}
By Lemma \ref{lem:partitionintomatchingcycle}, there exists a partition of the vertices of $G$ into a cycle $C$ and a set $S$ such that $|S| = \varepsilon n/20$, along with a matching $M \subseteq E[C,S]$ which covers $S$. Denote the vertices along the cycle $C$ by $(1,\ldots, N)$. For each vertex $x \in S$, we denote by $m(x)$ the vertex in $C$ matched to $x$ in $M$. We now remove from $S$ the at most $\frac{1000k}{\varepsilon^2}$ vertices $x$ which have $m(x)\in \{1,2,\ldots,\frac{1000k}{\varepsilon^2}\}$. Hence, there are at least $\varepsilon n/22$ vertices remaining in $S$.
We let $B_1:=S$ and apply Proposition \ref{lem:pathlengthsininterval} with $\gamma=1/100$ to the graph $G[B_1]$, to find
a pair of vertices $x_1,y_1$ such that for all $\ell \in \left[\frac{\varepsilon k}{100}, \frac{\varepsilon k}{50}\right]$, there is an $x_1y_1$-path of length $\ell$ in $G[S]$; we set $B_2:=B_1-\{x_1,y_1\}$.
We repeat this $t= \varepsilon k^2/40$ times, i.e., we apply Proposition~\ref{lem:pathlengthsininterval} to $B_i$ to obtain vertices $x_i,y_i$ such that for all $\ell \in \left[\frac{\varepsilon k}{100}, \frac{\varepsilon k}{50}\right]$, there is an $x_iy_i$-path of length $\ell$ in $G[B_i]\subseteq G[S]$, and we set $B_{i+1}:=B_i-\{x_i,y_i\}$.
Note that each $B_i$ is of size at least $|S|-2t\geq \frac{\varepsilon n}{22} - 2t \geq \frac{\varepsilon k^2}{11} - \frac{\varepsilon k^2}{20} \geq \frac{\varepsilon k^2}{30}$, so we always can successfully apply Proposition~\ref{lem:pathlengthsininterval}.
Before we make our first crucial observation, we introduce some notation. First, relabeling if necessary, let us suppose that for each $i$ we have $m(x_i) < m(y_i)$. Now, for each $i$, let $P_i$ denote the subpath $\left(m(x_i)-\frac{1000k}{\varepsilon^2}, m(x_i)-\frac{1000k}{\varepsilon^2}+1,\ldots, m(x_i) -1,m(x_i)\right)$ of length $\frac{1000k}{\varepsilon^2}$. Notice that, since $m(x_i)>\frac{1000k}{\varepsilon^2}$, none of those paths contains the vertex $1$.
\begin{lem}
\label{induced}
If there is an $i$ such that $G[P_i]$ does not contain an increasing path of length $\varepsilon^3k$ as an induced subgraph with $m(x_i)$ as an endpoint\footnote{Recall that a path $P$ is an induced subgraph of $G[P_i]$ if its vertices belong to $V(P_i)$ and, except for the edges of $P$, $G[V(P)]$ contains no other edges.}, then $G$ contains all cycle lengths in $\left[(2+\varepsilon)k,\frac{1000k}{\varepsilon^2}\right]$.
\end{lem}
\begin{proof}
Suppose that $G[P_i]$ does not contain such a path. Then, the following holds.
\begin{claim*}
In the graph $G[P_i]$, the endpoints of $P_i$ are $\varepsilon^3k$-dense in $\left[0,\frac{1000k}{\varepsilon^2}\right]$.
\end{claim*}
\begin{proof}[Proof of Claim]
Consider the following procedure. We begin with $P_i$; by assumption, there exists a chord among the last $\varepsilon^3 k$ vertices of $P_i$, that is, the segment ending at $m(x_i)$; otherwise, these vertices would induce an increasing path in $G[P_i]$ with $m(x_i)$ as an endpoint.
Thus we obtain a path $P_i'$ by adding this chord to $P_i$ in place of the interval between its endpoints. We repeat this procedure, each time finding a chord in the newly obtained path, until our path has length at most $\varepsilon^3k$. Hence, we obtain a sequence of paths such that the lengths of two consecutive paths are at most $\varepsilon^3k$ apart, while the last path has length at most $\varepsilon^3k$. Since the endpoints always remain the same, this implies the statement of the claim.
\end{proof}
\noindent Consider now the path $P$ contained in the cycle $C$ and spanned by the vertices in the interval $[m(y_i),N]\cup\left[1,m(x_i)-\frac{1000k}{\varepsilon^2}\right]$. By Lemma \ref{lem:easyjump}, there exists a path $P'$ with the following properties: $V(P') \subseteq V(P)$, the endpoints of $P'$ are the same as those of $P$, and $|P'| \leq 2k$.
Then, in order to finish, recall that $x_i,y_i$ are such that for all $\ell \in \left[\frac{\varepsilon k}{100}, \frac{\varepsilon k}{50}\right]$, there exists an $x_iy_i$-path of length $\ell$ in $G[S]$. Further, by the above claim, in the graph $G[P_i]$ the endpoints of $P_i$ are $\varepsilon^3k$-dense in $\left[0,\frac{1000k}{\varepsilon^2}\right]$.
Hence we can use \Cref{obs:combining} on the graph $G$ with $v_1=m(x_i), v_2=m(x_i)-\frac{1000k}{\varepsilon^2}$ and $v_3=m(y_i)$, where $S_1$ are the internal vertices of $P_i$, $S_2$ the internal vertices of $P$ and $S_3=S$ to obtain all cycle lengths in $\left[(2+\varepsilon)k,\frac{1000}{\varepsilon^2}k\right]$.
\end{proof}
We have $t=\varepsilon k^2/40$ paths $P_i$ of length $1000k/\varepsilon^2$, each of which corresponds to an interval of vertices in $C$ and therefore intersects at most $2000k/\varepsilon^2$ other such paths. Thus, we can choose a collection of $r=\frac{t}{2000k/\varepsilon^2+1}\geq \sqrt{k}$ of those paths which are pairwise disjoint. After relabeling, we may assume that those paths are $P_1,\ldots,P_r$.
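Quantitatively, the bound on $r$ holds since $k$ is sufficiently large in terms of $\varepsilon$:
$$r\;=\;\frac{t}{2000k/\varepsilon^2+1}\;\geq\;\frac{\varepsilon k^2/40}{3000k/\varepsilon^2}\;=\;\frac{\varepsilon^3 k}{120000}\;\geq\;\sqrt{k}.$$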
Using Lemma \ref{induced}, we may also assume that there are induced increasing subpaths $Q_1, \dots,Q_r$ of $G[P_1],\ldots,G[P_r]$ with endpoints $m(x_1),\ldots,m(x_r)$ respectively and of length $\varepsilon^3 k$.
Let us now define an auxiliary colored complete graph $H$ on $[r]$ in the following manner. For each $i \in [r]$, partition $Q_i$ into three consecutive subpaths $Q^{3}_i,Q^{2}_i,Q^{1}_i$ of size $|Q_i|/3=\varepsilon^3 k/3$, with $Q^{1}_i$ containing $m(x_i)$. Now, for $i,j \in [r]$, we color the edge $ij$ in $H$ \emph{red} if in $G$ both $E[Q^{1}_i,Q^{1}_j]$ and $ E[Q^{3}_i,Q^{3}_j]$ are non-empty. We color it \emph{blue} if $E[Q^{1}_i,Q^{1}_j] = \emptyset$, and in the remaining case, we color it \emph{green}.
\begin{claim*}
There are no blue or green cliques in $H$ of size larger than $6/\varepsilon^3$.
\end{claim*}
\begin{proof}
Suppose there exists a blue clique $\{i_1, \ldots, i_\ell\}$ in $H$. Since each $Q^{1}_{i_j}$ is an induced path, its odd vertices form an independent set of size at least $|Q^{1}_{i_j}|/2$. Moreover, by assumption, there are no edges between two $Q^{1}_{i_j}$'s, and therefore the set $\bigcup_{1 \leq j \leq \ell} V(Q^{1}_{i_j})$ must contain an independent set of size at least $$\sum_{1 \leq j \leq \ell} \frac{|Q^{1}_{i_j}|}{2} \geq \ell \cdot \left(\varepsilon^3 k/6 \right) .$$ Since $\alpha(G) \leq k$, this implies that $\ell \leq 6/\varepsilon^3$. An analogous argument deals with green cliques.
\end{proof}
Given the above claim and Theorem \ref{lem:ramsey}, and since $H$ has $r\geq\sqrt{k}$ vertices, where we chose $k$ large enough in terms of $\varepsilon$, there exists a red clique in $H$ of size at least $4\varepsilon^{-7}$.
Denote by $I$ the vertices/indices contained in this clique, so that for all $i,j\in I$ there is an edge $e^1_{ij}$ between $Q^{1}_i$ and $Q^{1}_j$ and an edge $e^3_{ij}$ between $Q^{3}_i$ and $Q^{3}_j$. For simplicity of notation, w.l.o.g.\ we may assume that the indices in $I$ are $\{1, 2, 3, \ldots, |I|\}$ according to the ordering of the vertices $m(x_i)$ for $i \in I$, that is, we now have $m(x_1) < m(x_2) < \ldots < m(x_{|I|})$. Denote by $z$ the endpoint of $Q_1$ which is not $m(x_1)$.
In order to complete the proof, we will need the following lemma.
\begin{lem}\label{lem:jump with Q}
The path $Q$, defined by the interval $\left[z,m(x_{|I|})\right]$, is such that its endpoints are $3\varepsilon^3k$-dense in $\left[0,\frac{k}{\varepsilon^4}\right]$ in the graph $G[Q]$.
\end{lem}
\begin{proof}
Denote by $R_1$ the path consisting only of vertex $z$. We now recursively define for each $i< |I|$ a path $R_i$ whose one endpoint is $z$ and the other endpoint $z_i$ lies in either $Q_i^1$ or $Q_i^3$ (see Figure~\ref{fig:jumpswithQ} for an illustration). First, let $z_1=z$.
Suppose $z_i$ is in $Q_i^a$ for some $a\in\{1,3\}$, and let $b\in\{1,3\}\setminus\{a\}$; let $R_{i+1}$ be the path obtained from $R_{i}$ by concatenating to it the path contained in $Q_i$ which starts at $z_i$, goes through $Q_i^2$ and ends at the endpoint of the edge $e^b_{i,i+1}$ in $Q_i^b$, and then adding the edge $e^b_{i,i+1}$ itself.
Now we also define paths $R_i'$ for each $i \leq |I|-1$, obtained from $R_i$ as follows. If $z_i\in Q^{a}_i$, we again let $b\in\{1,3\}$ with $b\neq a$. Let $R_i'$ be the path obtained by concatenating to $R_i$ the path starting at $z_i$ and going through $Q_i$ until the edge $e^b_{i,|I|}$, then adding the edge $e^b_{i,|I|}$ itself, together with the path in $Q_{|I|}$ which connects the endpoint of this edge to $m(x_{|I|})$.
Note that the lengths of two consecutive paths $R_i'$ and $R_{i+1}'$ differ by at most $|Q_i|+|Q_{i+1}|+|Q_{|I|}|\leq 3\varepsilon^3k$, since the only vertices which belong to exactly one of these two paths are contained in $Q_i\cup Q_{i+1}\cup Q_{|I|}$. Furthermore, the length of the first path $R'_1$ obtained by our procedure is at most $|Q_1| + |Q_{|I|}| \leq 2 \varepsilon^3 k$, and the length of the last path $R_{|I|-1}'$ is at least $(|I|-1)\varepsilon^3k/3\geq \frac{k}{\varepsilon^4}$, since it contains the path $Q_i^2$ for every $i\leq |I|-1$. This implies that the path $Q$, defined as the path between $z=z_1$ and $m(x_{|I|})$, is such that its endpoints are $3\varepsilon^3k$-dense in $\left[0,\frac{k}{\varepsilon^4}\right]$ in the graph $G[Q]$.
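For completeness, the lower bound on the length of the last path uses $|I|\geq 4\varepsilon^{-7}$ from the red clique we found:
$$(|I|-1)\cdot\frac{\varepsilon^3k}{3}\;\geq\;\left(4\varepsilon^{-7}-1\right)\frac{\varepsilon^3k}{3}\;=\;\frac{4k}{3\varepsilon^{4}}-\frac{\varepsilon^3k}{3}\;\geq\;\frac{k}{\varepsilon^4}.$$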
\end{proof}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.62,main node/.style={circle,draw,color=black,fill=black,inner sep=0pt,minimum width=7pt}]
\tikzset{rectangle/.append style={draw=brown, ultra thick, fill=blue!30}}
\foreach \i in {1,...,4}
{
\node[main node, scale=0.5, color=black, opacity=1] (a\i) at (5.7*\i+1,0){};
\node[main node, scale=0.5, color=black, opacity=1] (b\i) at (5.7*\i+2,0){};
\node[main node, scale=0.5, color=black, opacity=1] (c\i) at (5.7*\i+3,0){};
\node[main node, scale=0.5, color=black, opacity=1] (d\i) at (5.7*\i+4,0){};
\draw[line width= 1 pt] (a\i) to (d\i);
\node[main node, scale=0.5, color=blue, opacity=1] (z1) at (6.7,0){};
\node[main node, scale=0.5, color=blue, opacity=1] (t1) at (9.2,0){};
\draw[line width= 2 pt, color=blue] (z1) to (t1);
\ifthenelse {\i>1 \and \i<5}
{
\pgfmathtruncatemacro\aa{(-1)^\i}
\node[main node, scale=0.5, color=blue, opacity=1] (z\i) at (5.7*\i+2.5+\aa,0){};
\node[main node, scale=0.5, color=blue, opacity=1] (t\i) at (5.7*\i+2.5-\aa,0){};
\draw[line width= 2 pt, color=blue] (z\i) to (t\i);
}
{}
\ifthenelse {\i<5 \and \i>1}
{
\pgfmathtruncatemacro\aa{\i-1}
\draw[line width= 2 pt, color=blue] (z\i) to [bend right = 50] (t\aa);
}
{}
}
\foreach \i in{1,...,4}
{
\node[main node, scale=0.5, color=black, opacity=1] (a\i) at (5.7*\i+1,0){};
\node[main node, scale=0.5, color=black, opacity=1] (b\i) at (5.7*\i+2,0){};
\node[main node, scale=0.5, color=black, opacity=1] (c\i) at (5.7*\i+3,0){};
\node[main node, scale=0.5, color=black, opacity=1] (d\i) at (5.7*\i+4,0){};
\node[scale=0.8, color=black, opacity=1] (t) at (5.7*\i+1.5,-0.6){$Q_\i^3$};
\node[scale=0.8, color=black, opacity=1] (t) at (5.7*\i+2.5,-0.6){$Q_\i^2$};
\node[scale=0.8, color=black, opacity=1] (t) at (5.7*\i+3.5,-0.6){$Q_\i^1$};
}
\node[main node, scale=0.5, color=black, opacity=1] (a) at (31,0){};
\node[main node, scale=0.5, color=black, opacity=1] (b) at (32,0){};
\node[main node, scale=0.5, color=black, opacity=1] (c) at (33,0){};
\node[main node, scale=0.5, color=black, opacity=1] (d) at (34,0){};
\node[scale=0.8, color=black, opacity=1] (t) at (31.5,-0.6){$Q_{|I|}^3$};
\node[scale=0.8, color=black, opacity=1] (t) at (32.5,-0.6){$Q_{|I|}^2$};
\node[scale=0.8, color=black, opacity=1] (t) at (33.5,-0.6){$Q_{|I|}^1$};
\draw[line width= 1 pt] (a) to (d);
\draw[color=red,line width= 2 pt, opacity =0.8] (t4) to [ bend left = 70](31.5,0);
\draw[color=red,line width= 2.5 pt, opacity=0.8] (34,0) to (31.5,0);
\node[main node, scale=0.5, color=black, opacity=1] (a) at (31,0){};
\node[main node, scale=0.5, color=black, opacity=1] (b) at (32,0){};
\node[main node, scale=0.5, color=black, opacity=1] (c) at (33,0){};
\node[main node, scale=0.5, color=black, opacity=1] (d) at (34,0){};
\node[scale=0.8, color=black, opacity=1] (t) at (6.7,0.6){$z_1$};
\node[scale=0.8, color=black, opacity=1] (t) at (14.9,0.6){$z_2$};
\node[scale=0.8, color=black, opacity=1] (t) at (18.6,0.6){$z_3$};
\node[scale=0.8, color=black, opacity=1] (t) at (26.3,0.6){$z_4$};
\node[scale=0.8, color=black, opacity=1] (t) at (34.3,0.6){$m(x_{|I|})$};
\node[scale=1.2, color=black, opacity=1] (t) at (28.7,0){$\ldots$};
\node[scale=1, color=black, opacity=1] (t) at (28.3,2.85){$e_{4,|I|}^3$};
\node[scale=0.8, color=black, opacity=1] (t) at (10.7,0){$m(x_1)$};
\end{tikzpicture}
\caption{
The thick blue path represents path $R_4$, and adding to it the red path creates $R_4'$.
}
\label{fig:jumpswithQ}
\end{figure}
Let $Q^*$ be the path in $C$ spanned by the interval $[1,z]\cup \left[m(y_{|I|}),N\right]$. By applying Lemma~\ref{lem:easyjump}, we get a path $Q'$ of length at most $2k$ in $G[Q^*]$ with the same endpoints as $Q^*$. Recalling that the pair of vertices $x_{|I|},y_{|I|}$ is connected by paths of all lengths in $\left[\frac{\varepsilon k}{100},\frac{\varepsilon k}{50}\right]$ in the subgraph $G[S]$, and using Lemma~\ref{lem:jump with Q} above, we are done by \Cref{obs:combining}. Indeed, we apply it to $G$ with $v_1=z$, $v_2=m(y_{|I|})$ and $v_3=m(x_{|I|})$, where $S_1$ is the set of internal vertices of $Q^*$, $S_2=S$ and $S_3$ is the set of internal vertices of $Q$, to obtain all cycle lengths in the interval $\left[(2+\varepsilon) k,\frac{1000}{\varepsilon^2}k\right]$.
\hfill \qedsymbol
\section{Concluding remarks}\label{sec:concludingrem}
In this paper we proved that every Hamiltonian graph on $n \geq 2k^2+o(k^2)$ vertices with independence number $k$ is pancyclic, which is tight up to the $o(k^2)$ error term. Furthermore, our methods allow us to give a short proof of Erd\H{o}s's conjecture that $n = \Omega(k^2)$ vertices are enough for $G$ to be pancyclic. For this, we first note that, while obtaining a bound of $n = \Omega(k^3)$, Keevash and Sudakov \cite{keevash2010pancyclicity} implicitly proved the following result.
\begin{lem}[\cite{keevash2010pancyclicity}]\label{lem:ks}
There exists a large constant $C$ such that every Hamiltonian graph on $n \geq Ck^2$ vertices with independence number $k$, contains all cycle lengths in $[3,n/C]$.
\end{lem}
\noindent This reduces Erd\H{o}s's conjecture to the following problem.
\begin{itemize}
\item[($\ast$)] Does there exist $C' >0$ such that every Hamiltonian graph on $n \geq C'k^2$ vertices with independence number $k$, contains a cycle of length $n-1$?
\end{itemize}
\noindent Indeed, suppose that the above is true for some large constant $C'$ and let $G$ be a Hamiltonian graph on $n$ vertices with independence number $k$. Then, combining this with the above lemma of Keevash and Sudakov, it follows that if $n \geq CC'k^2$ then $G$ is pancyclic, thus proving Erd\H{o}s's conjecture. Indeed, note that by the lemma above, $G$ contains all cycle lengths up to $n/C$ and one can see that it contains all cycle lengths from $n/C$ to $n$ by iteratively applying the assumption that whenever $n' \geq C'k^2$, there is a cycle of length $n'-1$.
The previous results
\cite{keevash2010pancyclicity}, \cite{lee2012hamiltonicity} and \cite{dankovics2020low} are all improvements towards question ($\ast$) above. As discussed in the beginning of Section \ref{sec:upperrange}, applying Proposition \ref{lem:jumpwithzigzag} with $c=1$ and $U = \emptyset$ solves this problem in the following stronger form.
\begin{thm}\label{thm:outlinen-1}
Every Hamiltonian graph on $n > 2k^2+2k$ vertices with independence number $k$, contains a cycle of length $n-1$.
\end{thm}
Let us note that although \Cref{thm:outlinen-1} shows the existence of a cycle of length $n-1$ already with $n>2k^2+2k$, this is not sufficient to prove that $n>2k^2+\varepsilon k^2$ implies pancyclicity. At this threshold, \Cref{lem:ks} does not apply, so one needs a different argument to find the cycle lengths in the interval $[3,2k^2+2k]$. It turns out that in this setting, the cycle lengths which are hardest to find are those around $2k$, that is, precisely the cycle lengths which are missed by the lower bound construction given in the introduction. Finding them is the most technical part of our proof, given in \Cref{sec:middlerange}.
A very interesting open question is to understand the best bound on the number of vertices $n$ in ($\ast$) which guarantees the cycle of length $n-1$. Here the answer might be linear in $n$ as the following question asked in \cite{keevash2010pancyclicity} suggests.
\begin{prob}
Does there exist a constant $C$ such that every Hamiltonian graph with independence number $k$ and $n \geq Ck$ vertices contains a cycle of length $n-1$?
\end{prob}
| {
"timestamp": "2022-09-08T02:21:05",
"yymm": "2209",
"arxiv_id": "2209.03325",
"language": "en",
"url": "https://arxiv.org/abs/2209.03325",
"abstract": "An $n$-vertex graph is Hamiltonian if it contains a cycle that covers all of its vertices, and it is pancyclic if it contains cycles of all lengths from $3$ up to $n$. In 1972, Erdős conjectured that every Hamiltonian graph with independence number at most $k$ and at least $n = \\Omega(k^2)$ vertices is pancyclic. In this paper we prove this old conjecture in a strong form by showing that if such a graph has $n = (2+o(1))k^2$ vertices, it is already pancyclic, and this bound is asymptotically best possible.",
"subjects": "Combinatorics (math.CO)",
"title": "Pancyclicity of Hamiltonian graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9898303410461385,
"lm_q2_score": 0.8104789086703225,
"lm_q1q2_score": 0.8022366145798474
} |
https://arxiv.org/abs/2212.08285 | When is a numerical semigroup a quotient? | A natural operation on numerical semigroups is taking a quotient by a positive integer. If $\mathcal S$ is a quotient of a numerical semigroup with $k$ generators, we call $\mathcal S$ a $k$-quotient. We give a necessary condition for a given numerical semigroup $\mathcal S$ to be a $k$-quotient, and present, for each $k \ge 3$, the first known family of numerical semigroups that cannot be written as a $k$-quotient. We also examine the probability that a randomly selected numerical semigroup with $k$ generators is a $k$-quotient. | \section{Introduction}
\label{sec:intro}
We denote $\NN=\{0,1,2,\dots\}$, and we define a \emph{numerical semigroup} to be a set $\nsg\subseteq\NN$ that is closed under addition and contains~0. A numerical semigroup can be defined by a set of generators,
\[\langle a_1,\ldots,a_n\rangle = \{a_1x_1+\cdots+a_nx_n:\ x_i\in\NN\},\]
and if $a_1,\ldots,a_n$ form the minimal set of generators of $\nsg$, we say that $\nsg$ has \emph{embedding dimension} $\mathsf e(\nsg) = n$. For example,
\[\langle 3,5\rangle = \{0,3,5,6,8,9,10,\ldots\}\]
has embedding dimension 2.
If $\nsg$ is a numerical semigroup, then an interesting way to create a new numerical semigroup
is by taking the \emph{quotient}
\[
\frac{\nsg}{d} = \{ t \in \NN:\ dt \in \nsg\}
\]
by some positive integer $d$. Note that $\frac{1}{d}\nsg$ is itself a numerical semigroup, one that in particular satisfies $\nsg \subseteq \frac{1}{d}\nsg \subseteq \NN$. For example,
\[
\frac{\langle 3,5\rangle}{2}=\{0,3,4,5,\ldots\}=\langle 3,4,5\rangle.
\]
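This example is easy to check computationally. The sketch below (the `semigroup` and `quotient` helpers and the truncation bound are ad hoc choices, not notation from the text) verifies $\frac{\langle 3,5\rangle}{2}=\langle 3,4,5\rangle$ up to a finite bound:

```python
def semigroup(gens, limit):
    """Elements of <gens> up to limit, via a simple reachability table."""
    hit = [False] * (limit + 1)
    hit[0] = True
    for v in range(1, limit + 1):
        hit[v] = any(v >= g and hit[v - g] for g in gens)
    return {v for v in range(limit + 1) if hit[v]}

def quotient(gens, d, limit):
    """Elements of <gens>/d = {t : d*t in <gens>} up to limit."""
    S = semigroup(gens, d * limit)
    return {t for t in range(limit + 1) if d * t in S}

assert quotient([3, 5], 2, 30) == semigroup([3, 4, 5], 30)
```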
Quotients of numerical semigroups have appeared throughout the literature over the past couple of decades~\cite{symmetriconeelement,symmetricquotient}, as well as in recent work~\cite{harrisquotient,nsquotientgens}; see~\cite[Chapter~5]{numerical} for a thorough overview.
\begin{defn}\label{def:quotientrank}
We say a numerical semigroup $\nsg$ is a \emph{$k$-quotient} if $\nsg=\langle a_1,\ldots,a_k\rangle/d$ for some positive integers $d, a_1, \ldots, a_k$. The \emph{quotient rank} of $\nsg$ is the smallest $k$ such that $\nsg$ is a $k$-quotient, and we say $\nsg$ has \emph{full quotient rank} if its quotient rank is $\mathsf e(\nsg)$ (since $\nsg=\frac{\nsg}{1}$, its quotient rank is at most $\mathsf e(\nsg)$).
\end{defn}
Numerical semigroups of quotient rank 2 are precisely the \emph{proportionally modular} numerical semigroups~\cite{openmodularns}, which have been well-studied~\cite{propmodtree,propmodular}.
This includes arithmetical numerical semigroups (whose generators have the form $a, a + d, \ldots, a + kd$ with $\gcd(a,d) = 1$), which have a rich history in the numerical semigroup literature~\cite{diophantinefrob,setoflengthsets,nsfreeresarith}. In fact, generalized arithmetical numerical semigroups~\cite{omidalirahmati}, whose generating sets have the form $a, ah + d, \ldots, ah + kd$, can also be shown to have quotient rank 3.
For quotient rank $k \ge 3$, much less is known. Finding a numerical semigroup of quotient rank at least 4 was identified as an open problem in~\cite{nsgproblems}; at that point, no such semigroup had been proven to exist.
Since then, the only progress in this direction is~\cite{ksquashed}, wherein it is shown that there exist infinitely many numerical semigroups with quotient rank at least 4, though no explicit examples are given.
With this in mind, we state the main question of the present paper.
\begin{mainprob}\label{mainprob:whenaquotient}
When is a given numerical semigroup $\nsg$ a $k$-quotient?
\end{mainprob}
Our main structural results, which are stated in Section~\ref{sec:necessary}, are as follows.
\begin{itemize}
\item
We prove a sufficient condition for full quotient rank (Theorem~\ref{thm:necessary}), which we use to obtain, for each $k$, a numerical semigroup of embedding dimension $k + 1$ that is not a $k$-quotient (Theorem~\ref{thm:noquotient}). When $k \ge 3$, this is the first known example of a numerical semigroup that is not a $k$-quotient. We~also construct, for each $k$, a numerical semigroup that cannot be written as an intersection of $k$-quotients (Theorem~\ref{thm:nointersection}), settling a conjecture posed in~\cite{ksquashed}.
\item
We prove quotient rank is sub-additive whenever the denominators are coprime.
This provides a new method of proving a given numerical semigroup is a quotient:\ partition its generating set, and prove that each subset generates a quotient, e.g.,
\begin{align*}
\langle 11,12,13,17,18,19,20 \rangle
&= \langle 11,12,13 \rangle + \langle 17,18,19,20 \rangle
= \frac{\langle 11,13\rangle}{2}+\frac{\langle 17,20\rangle}{3}
\\
&= \frac{3\langle 11,13\rangle + 2\langle 17,20\rangle}{2 \cdot 3}
= \frac{\langle 33,34,39,40\rangle}{6}.
\end{align*}
We~use this result to prove that any numerical semigroup with \emph{maximal embedding dimension} (that is, the smallest generator equals the embedding dimension) fails to have full quotient rank (Theorem~\ref{thm:maxembdim}).
\end{itemize}
Our remaining results are probabilistic in nature. We examine two well-studied models for ``randomly selecting'' a numerical semigroup:\ the ``box'' model, where the number of generators and a bound on the generators are fixed~\cite{expectedfrob,arnoldfrob,burgeinsinaifrob}; as well as a model where the smallest generator and the number of gaps are fixed~\cite{kaplancounting}, whose prior study has yielded connections to enumerative combinatorics~\cite{kunzcoords} and polyhedral geometry~\cite{kunzfaces1,kunz}.
We prove that under the first model, asymptotically all semigroups have full quotient rank (Theorem~\ref{thm:numericalbox}), while under the second model, asymptotically no semigroups have full quotient rank (Theorem~\ref{thm:maxembdim}).
Our results also represent partial progress on the following question, which has proved difficult.
\begin{prob}\label{prob:algorithm}
Given a numerical semigroup $\nsg$ and a positive number $k$, is there an algorithm to determine whether $\nsg$ is a $k$-quotient?
\end{prob}
\begin{remark}\label{rem:relprime}
Some texts require that the generators of a numerical semigroup be relatively prime, so that $\NN\setminus \nsg$ is finite. This assumption is harmless, since any numerical semigroup can be written as $m\nsg$, where the generators of $\nsg$ are relatively prime, and it also doesn't affect $k$-quotientability: given a positive integer $d$, one can readily check that
\[
\frac{m\nsg}{d} = m' \left(\frac{\nsg}{d'}\right),
\]
where $m' = m/\gcd(m,d)$ and $d' = d/\gcd(m,d)$.
\end{remark}
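As a sanity check of this identity (with hypothetical small values $m=2$, $d=4$, $\nsg=\langle 3,5\rangle$, so $m'=1$ and $d'=2$; helpers and truncation bound are ad hoc), one can compare both sides up to a finite bound:

```python
from math import gcd

def semigroup(gens, limit):
    """Elements of <gens> up to limit, via a simple reachability table."""
    hit = [False] * (limit + 1)
    hit[0] = True
    for v in range(1, limit + 1):
        hit[v] = any(v >= g_ and hit[v - g_] for g_ in gens)
    return {v for v in range(limit + 1) if hit[v]}

def quotient(gens, d, limit):
    """Elements of <gens>/d up to limit."""
    S = semigroup(gens, d * limit)
    return {t for t in range(limit + 1) if d * t in S}

m, d = 2, 4                       # hypothetical choices with gcd(m, d) = 2
g = gcd(m, d)
mp, dp = m // g, d // g           # m' = 1, d' = 2
limit = 30
lhs = quotient([m * a for a in (3, 5)], d, limit)    # (2*<3,5>)/4
rhs = {mp * t for t in quotient([3, 5], dp, limit)}  # m'*(<3,5>/d')
assert lhs == rhs
```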
\section{When is $\nsg$ not a $k$-quotient?}
\label{sec:necessary}
In this section, we give two structural results.
The first (Theorem~\ref{thm:necessary}) is a necessary condition for a given numerical semigroup $\nsg$ to be a $k$-quotient, which forms the backbone of the constructions in Section~\ref{sec:fullquotientrank} and the probabilistic results in Section~\ref{sec:randomsgps}. The second (Theorem~\ref{thm:sums}) is a constructive proof that quotient rank is sub-additive, provided the denominators are relatively prime.
In what follows, we write $[p] = \{1,2,\dots,p\}$ for any positive integer $p$, and given a collection of vectors $\{\vec v_i\}$ and a set of indices $I$, we define $\vec v_I=\sum_{i\in I}\vec v_i$.
\begin{thm} \label{thm:necessary}
Suppose
\[\nsg =\frac{\langle b_1,\ldots,b_k\rangle}{d}\]
for some $b_i\in \NN$ and positive integer $d$. Given any elements $s_1,\ldots,s_p \in \nsg$ with $p > k$, there exists a nonempty subset $I\subseteq [p]$ such that $s_I/2\in \nsg$.
\end{thm}
\begin{proof}
Let $\vec b=(b_1,\ldots,b_k)$. For $1\le i\le p$, let $\vec c_i=(c_{i1},\ldots,c_{ik})\in\NN^k$ be such that
\[ds_i = c_{i1}b_1+\cdots+c_{ik}b_k,\]
which exist since $s_i\in\nsg$.
For a vector $\vec v\in\ZZ^k$, define $\vec v\bmod 2\in\ZZ_2^k$ to be the coordinate-wise reduction of $\vec v$ modulo 2. For $J\subseteq [p]$, examine $\vec c_J \bmod 2$. There are $2^p$ possible $J$ and $2^k$ possible values for $\vec c_J \bmod 2$, with $p>k$, so there must be two distinct $J_1$ and $J_2$ such that
\[\vec c_{J_1}\bmod 2=\vec c_{J_2}\bmod 2.\]
Let $I=(J_1\setminus J_2)\cup (J_2\setminus J_1)$ be their symmetric difference, which is nonempty. Then
\[\vec c_{I}\bmod 2=\vec c_{J_1}+\vec c_{J_2}-2\vec c_{J_1\cap J_2}\bmod 2=\vec 0,\]
so $\vec c_{I}$ has even coordinates. Let $\vec c_I = (2q_1,\ldots,2q_k)$ where $q_i\in\NN$. Then
\begin{align*}
d\cdot(s_I/2)&=\sum_{i\in I}\big(c_{i1}b_1+\cdots+c_{ik}b_k\big)/2
= \sum_{j=1}^kb_j\sum_{i\in I}c_{ij}/2
=\sum_{j=1}^k q_jb_j
\end{align*}
is an element of $\langle b_1,\ldots,b_k\rangle$, so $s_I/2$ is an element of $\nsg$, as desired.
\end{proof}
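The pigeonhole argument in this proof is constructive and can be sketched directly in code (the function name and the sample vectors below are illustrative only):

```python
from itertools import combinations

def even_subset(vectors):
    """Follow the proof: find a nonempty index set I whose coordinatewise
    sum is even, assuming len(vectors) > dimension of the vectors."""
    p, k = len(vectors), len(vectors[0])
    seen = {}  # parity pattern of c_J mod 2  ->  first subset J realizing it
    for r in range(p + 1):
        for J in combinations(range(p), r):
            pattern = tuple(sum(vectors[i][j] for i in J) % 2 for j in range(k))
            if pattern in seen:
                # collision: the symmetric difference has an even sum
                return sorted(set(seen[pattern]) ^ set(J))
            seen[pattern] = J
    return None  # unreachable when p > k, by pigeonhole

c = [(1, 0, 1), (0, 1, 1), (1, 1, 0), (0, 0, 1)]  # p = 4 > k = 3
I = even_subset(c)
assert I and all(sum(c[i][j] for i in I) % 2 == 0 for j in range(3))
```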
\begin{cor} \label{cor:necessary}
Let $\nsg = \langle a_1, \dots, a_n \rangle$ be a numerical semigroup. If $\nsg$ does not have full quotient rank, then there exists $I \subseteq [n]$ such that
\[ a_I \in\langle a_j:\ j\notin I \rangle.\]
\end{cor}
\begin{proof}
By applying Theorem~\ref{thm:necessary} to the generating set $\{a_1, \dots, a_n\}$, we obtain that for some $J \subseteq [n]$, $a_J/2 \in \nsg$. So there exist $c_r\in\NN$ such that
\[
\sum_{j\in J} a_j=\sum_{r\in R} 2c_r a_r
\]
where $R=\{r:\ c_r>0\}$.
Letting $I=J\setminus R$ and subtracting each $a_j$ with $j\in J\cap R$ from both sides, we have
\[
a_I=\sum_{i\in I} a_i = \sum_{r\in J\cap R} (2c_r-1) a_r + \sum_{r\in R\setminus J}2c_r a_r
\]
is an element of $\langle a_j:\ j\notin I\rangle$,
as desired. Note that $I$ is nonempty, as otherwise
\[
0 = a_I = \sum_{r\in J\cap R} (2c_r-1) a_r + \sum_{r\in R\setminus J}2c_r a_r \ge \sum_{r\in J\cap R} a_r = \sum_{r\in J} a_r > 0
\]
since $J$ is nonempty,
which is a contradiction.
\end{proof}
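The corollary gives a computable necessary condition. The brute-force sketch below (helper names are mine) tests it on the 2-quotient $\langle 3,4,5\rangle=\frac{\langle 3,5\rangle}{2}$, and on $\langle 9,10,12\rangle$, which Theorem~\ref{thm:noquotient} (with $k=2$, $a=4$) shows has full quotient rank:

```python
from itertools import combinations

def member(v, gens):
    """Is v an element of <gens>? (simple reachability check)"""
    hit = [False] * (v + 1)
    hit[0] = True
    for x in range(1, v + 1):
        hit[x] = any(x >= g and hit[x - g] for g in gens)
    return hit[v]

def dependent_subset(gens):
    """Search for I with a_I in <a_j : j not in I>, or return None."""
    n = len(gens)
    for r in range(1, n + 1):
        for I in combinations(range(n), r):
            rest = [gens[j] for j in range(n) if j not in I]
            s = sum(gens[i] for i in I)
            if rest and member(s, rest):
                return [gens[i] for i in I]
    return None

assert dependent_subset([3, 4, 5]) == [3, 5]   # 3 + 5 = 2*4
assert dependent_subset([9, 10, 12]) is None   # necessary condition fails
```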
\begin{thm} \label{thm:sums}
If $\nsg$ and $\nsgtwo$ are numerical semigroups and $\gcd(c,d) = 1$, then
\[
\frac{\nsg}{c} + \frac{\nsgtwo}{d} = \frac{d\nsg + c \nsgtwo}{cd}.
\]
\end{thm}
\begin{proof}
First suppose that $x \in \frac{1}{c}\nsg + \frac{1}{d}\nsgtwo$. Then $x = s+t$ where $cs \in \nsg$ and $dt \in \nsgtwo$, so
\[
cdx = d(cs) + c(dt) \in d \nsg + c \nsgtwo
\]
which implies $x \in \frac{1}{cd}(d\nsg + c \nsgtwo)$. Note this containment does not require $\gcd(c,d) = 1$.
On the other hand, suppose $cdx \in d \nsg + c \nsgtwo$, so
\begin{equation} \label{eq:sums}
cdx = ds + ct
\qquad \text{for some} \qquad
s \in \nsg, t \in \nsgtwo.
\end{equation}
In particular, $ct = d(cx-s)$ is a multiple of $d$. Since $c$ and $d$ are relatively prime, this implies that $t$ is a multiple of $d$, say $t = bd$. Since $t \in \nsgtwo$, we conclude that $b \in \frac{1}{d}\nsgtwo$. Similarly, we can write $s = ac$ for some $a$ and so $a \in \frac{1}{c}\nsg$.
Substituting $t=bd$ and $s=ac$ into \eqref{eq:sums}, we obtain
\[
cdx = dac + cbd = cd(a+b).
\]
By cancellation, we obtain $x = a + b$ with $a \in \frac{1}{c}\nsg$ and $b \in \frac{1}{d}\nsgtwo$, as desired.
\end{proof}
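Theorem~\ref{thm:sums} can be spot-checked on the example from the introduction, $\frac{\langle 11,13\rangle}{2}+\frac{\langle 17,20\rangle}{3}=\frac{\langle 33,34,39,40\rangle}{6}$; the sketch below (ad hoc helpers, finite truncation) compares both sides elementwise:

```python
def semigroup(gens, limit):
    """Elements of <gens> up to limit, via a simple reachability table."""
    hit = [False] * (limit + 1)
    hit[0] = True
    for v in range(1, limit + 1):
        hit[v] = any(v >= g and hit[v - g] for g in gens)
    return {v for v in range(limit + 1) if hit[v]}

def quotient(gens, d, limit):
    """Elements of <gens>/d up to limit."""
    S = semigroup(gens, d * limit)
    return {t for t in range(limit + 1) if d * t in S}

limit = 120
lhs = {x + y
       for x in quotient([11, 13], 2, limit)
       for y in quotient([17, 20], 3, limit)
       if x + y <= limit}
rhs = quotient([33, 34, 39, 40], 6, limit)  # (3<11,13> + 2<17,20>)/6
assert lhs == rhs
```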
Given the ease of proving Theorem~\ref{thm:sums}, it is surprising how much more difficult the statement becomes when the denominators do have a common factor. In a follow-up to this current paper, we will translate the quotient operation into a geometric setting, which will allow us to generalize Theorem~\ref{thm:sums} by dropping the ``coprime denominators'' hypothesis. Intriguingly, the translation can cause a large blow-up in the numbers, e.g.,
\[
\frac{\langle 11,13\rangle}{2} + \frac{\langle 17,19\rangle}{2}
= \frac{\langle 2416656, 2894591, 3441983, 3869571 \rangle}{25357536}.
\]
Based on experimentation, this blow-up seems necessary.
\section{Some families of numerical semigroups with full quotient rank}
\label{sec:fullquotientrank}
In this section, we produce two families of numerical semigroups:\ those in the first have embedding dimension $k+1$ but are not $k$-quotients, so in particular have full quotient rank (Theorem~\ref{thm:noquotient}); and those in the second are not even \emph{intersections} of $k$-quotients (Theorem~\ref{thm:nointersection}).
\begin{thm} \label{thm:noquotient}
Given a positive integer $k$, let $a\ge 2^k$ be an integer. Define $a_i=2a+2^i$ for $i=0,1,\dots,k$. Then the numerical semigroup
\[\nsg=\langle a_0,a_1,\ldots,a_k\rangle\]
is not a $k$-quotient.
\end{thm}
\begin{proof}
For $1\le j\le 2^k-1$, let $b_j=\omega(j)a+j$,
where $\omega(j)$ is the number of 1's in the binary representation of $j$. We first prove that, if $\nsgtwo$ is \emph{any} $k$-quotient that contains $a_0,\ldots,a_k$ (so $\nsgtwo=\nsg$ will be an example), then there exists $j$ ($1\le j\le 2^k-1$) such that $b_j\in \nsgtwo$. Indeed, we apply Theorem~\ref{thm:necessary}. We know that there exists a nonempty $I\subseteq\{0,1,\ldots,k\}$ such that $a_I/2\in \nsgtwo$. If $0\in I$, then $a_I$ is odd and $a_I/2$ is not an integer, so we know $I\subseteq\{1,\ldots,k\}$. Let
\[j=\sum_{i\in I}2^{i-1}.\]
We have that $1\le j\le 2^k-1$, and
\[a_I/2=\sum_{i\in I}\left(2a+2^i\right)/2 = \abs{I}a + \sum_{i\in I}2^{i-1} = \omega(j)a+j=b_j,\]
so $b_j\in \nsgtwo$.
Now we apply this to $\nsgtwo=\nsg$. Seeking a contradiction, suppose $\nsg$ is a $k$-quotient, and therefore we have some $b_j\in \nsg$, that is, $b_j=\sum_{i=0}^ka_ix_i$ with $x_i\in\NN$. Examining this sum modulo $a$, and noting that $b_j\equiv j\pmod a$ and $a_i\equiv 2^i\pmod a$, we see that
\[\sum_{i=0}^k x_i\ge \omega(j).\] But a sum of $\omega(j)$ generators of $\nsg$ is too large:
\[\omega(j)a+j=b_j\ge \omega(j)\cdot a_0 = \omega(j)(2a+1)\ge \omega(j)a+a\ge \omega(j)a+2^k,\]
a contradiction. Therefore $b_j\notin \nsg$, and so $\nsg$ cannot be a $k$-quotient.
\end{proof}
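The key step of the proof, that no $b_j$ lies in $\nsg$, can be verified for the smallest instance $k=2$, $a=2^k=4$ (so $\nsg=\langle 9,10,12\rangle$); the membership helper below is an ad hoc reachability check:

```python
def member(v, gens):
    """Is v an element of <gens>? (simple reachability check)"""
    hit = [False] * (v + 1)
    hit[0] = True
    for x in range(1, v + 1):
        hit[x] = any(x >= g and hit[x - g] for g in gens)
    return hit[v]

k, a = 2, 4                                    # smallest case: a = 2^k
gens = [2 * a + 2 ** i for i in range(k + 1)]  # a_i = 2a + 2^i: [9, 10, 12]
for j in range(1, 2 ** k):
    b_j = bin(j).count("1") * a + j            # b_j = omega(j)*a + j
    assert not member(b_j, gens)               # no b_j lies in S
```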
\begin{thm} \label{thm:nointersection}
Given a positive integer $k\ge 2$, let $a\ge k2^k$ be an integer. As before, define $a_i=2a+2^i$ and $b_j=\omega(j)a+j$,
where $\omega(j)$ is the number of 1's in the binary representation of $j$. Let $N=(2k+1)a$. Then
\[\nsg=\langle a_0,a_1,\ldots,a_k,N-b_1,N-b_2,\ldots,N-b_{2^k-1}\rangle\]
cannot be written as an intersection of $k$-quotients.
\end{thm}
\begin{proof}
Suppose, seeking a contradiction, that $\nsg=\bigcap_{\ell=1}^p \nsg_\ell$, where the $\nsg_\ell$ are $k$-quotients.
Each $\nsg_\ell$ must contain $a_0,a_1,\ldots,a_k$, and we noted in the proof of Theorem~\ref{thm:noquotient} that this implies that $\nsg_\ell$ must contain $b_j$ for some $j$. But then $\nsg_\ell$ contains both $b_j$ and $N-b_j$, and so additive closure implies that it contains $N$. This means $N \in \bigcap_{\ell=1}^p \nsg_\ell = \nsg$. Let
\begin{equation}\label{eq:sumforN}
N=\sum_{i=1}^k a_ix_i + \sum_{j=1}^{2^k-1}(N-b_j)y_j,
\end{equation}
where $x_i,y_j \in \NN$.
We break into three cases.
\begin{itemize}
\item
If $\sum_{j}y_j\ge 2$, then~\eqref{eq:sumforN} would be too large, as for some $j_1,j_2$,
\begin{align*}(2k+1)a
= N
&\ge (N-b_{j_1})+(N-b_{j_2})\\
&= 2N-(\omega(j_1)+\omega(j_2))a -(j_1+j_2)\\
&> 2\cdot (2k+1)a -2ka-2\cdot 2^k\\
&= (2k+2)a-2^{k+1},
\end{align*}
which is impossible since $a\ge 2^{k+1}$.
\item
If $\sum_{j}y_j=1$, then~\eqref{eq:sumforN} uses exactly one $N-b_j$. But then $N=(N-b_j)+b_j$ implies that $b_j\in \langle a_0,a_1\ldots,a_k\rangle$, which we saw was impossible in the proof of Theorem~\ref{thm:noquotient} since $a\ge 2^k$.
\item
If $\sum_{j}y_j=0$, then $N = \sum_{i} a_ix_i$. If $\sum_i x_i \le k$, then
\[(2k+1)a=N\le k(2a+2^k),\]
which is impossible since $a>k2^k$. On the other hand, if $\sum_i x_i > k$, then
\[(2k+1)a=N\ge (k+1)(2a+1)>(2k+1)a,\]
which is also impossible.
\end{itemize}
In each case, we obtain a contradiction.
\end{proof}
\section{How often do numerical semigroups have full quotient rank?}
\label{sec:randomsgps}
In this section, we consider the question ``how likely is a randomly selected numerical semigroup to have full quotient rank?'' We consider two sampling methods. The first is the ``box'' method, wherein a fixed number of generators are selected uniformly and independently from an interval $[1,M]$. Numerical semigroups selected under this model have high probability (i.e., approaching 1 as $M \to \infty$) of having full quotient rank.
\begin{thm} \label{thm:numericalbox}
Fix a positive integer $n$. If $\nsg = \langle a_1, \dots, a_n \rangle$ where $a_1, \ldots, a_n \in [M]$ are uniformly and independently chosen, then the probability that $\nsg$ has full quotient rank tends to 1 as $M \to \infty$. More precisely, this probability is $1 - O(M^{-\frac{1}{n}})$.
\end{thm}
\begin{proof}
By Corollary~\ref{cor:necessary}, it suffices to bound the probability that there exists $I \subseteq [n]$ such that $a_I \in \langle a_j:\ j\notin I\rangle$. Let $A$ be this event, and let $B$ be the event that $a_i \leq M^{\frac{n-1}{n}}$ for some $i$. We will use that
\[
\Pr(A)
= \Pr(B)\Pr(A \mid B) + \Pr(B^c)\Pr(A \mid B^c)
\leq \Pr(B) + \Pr(A \mid B^c).
\]
For the first term, the union bound gives us that
\[
\Pr(B)
\leq n \left( \frac{M^{\frac{n-1}{n}}}{M} \right)
= \frac{n}{M^{\frac{1}{n}}}.
\]
For the second term, fix a nontrivial subset $I \subsetneq [n]$ and $b_i \in \NN$ for $i \notin I$. If $b_i > nM^{\frac{1}{n}}$ for some $i \notin I$, then since every $a_i$ is greater than $M^{\frac{n-1}{n}}$, we have
\[
\sum_{j \notin I} b_ja_j
\geq b_ia_i
> \left( nM^{\frac{1}{n}} \right) M^\frac{n-1}{n}
= nM.
\]
But $a_I$ cannot be this large because it is the sum of at most $n-1$ integers that are each at most $M$. So we need only consider $b_i \le nM^{\frac{1}{n}}$. Letting $i^\ast = \min(I)$ and $m = nM^{\frac{1}{n}}$,
\begin{align*}
\Pr(A \mid B^c)
&\le \sum_{\substack{I \subsetneq [n] \\ I \ne \emptyset}} \sum_{\substack{b_j \le m \\ j \notin I}} \Pr \bigg( \sum_{i \in I}a_i = \sum_{i \notin I}b_ia_i \biggm\vert a_1, \ldots, a_n > M^{\frac{n-1}{n}} \bigg)
\\
& = \sum_{\substack{I \subsetneq [n] \\ I \ne \emptyset}} \sum_{\substack{b_j \le m \\ j \notin I}} \Pr \bigg( a_{i^\ast} = \sum_{i \notin I}b_ia_i - \!\!\! \sum_{i \in I \setminus \{i^\ast\}} \!\!\! a_i \biggm\vert a_1, \ldots, a_n > M^{\frac{n-1}{n}} \bigg)
\\
& \le \sum_{\substack{I \subsetneq [n] \\ I \ne \emptyset}} \sum_{\substack{b_j \le m \\ j \notin I}} \frac{1}{M-M^{\frac{n-1}{n}}}
\le \frac{\left( 2^n - 2 \right) \left( nM^{\frac{1}{n}} \right)^{n-1}}
{M-M^{\frac{n-1}{n}}}
= \frac{\left( 2^n - 2 \right)n^{n-1}}{M^{\frac{1}{n}}-1},
\end{align*}
where the second inequality comes from the fact that for any choice of the $a_i$ with $i \neq i^\ast$, there is at most one choice of $a_{i^\ast}$ that makes the linear equation hold. Thus,
\[
\Pr(A) \leq \frac{\left( 2^n - 2 \right)n^{n-1}}{M^{\frac{1}{n}}-1} + \frac{n}{M^{\frac{1}{n}}-1} = O(M^{-\frac{1}{n}}),
\]
which completes the proof.
\end{proof}
\begin{remark}\label{rem:numericalbox}
The ``minimally generated'' and ``finite complement'' conditions, which are often imposed on numerical semigroups, do not affect Theorem~\ref{thm:numericalbox}.
Indeed, under this ``box'' probability model, the chosen generators $a_1, \dots, a_n$ need not form a minimal generating set. Since the quotient rank is at most the embedding dimension, the (asymptotically rare) event that the quotient rank of $\nsg$ is less than $n$ contains the event that the chosen generating set is not minimal.
Additionally, the probability that $a_1,\ldots, a_n$ are relatively prime approaches the positive constant $1/\zeta(n)$ by~\cite{Nym}, where $\zeta(n)$ is the Riemann zeta function $\sum_{i=1}^{\infty}1/i^n$. Therefore, even if one restricts to those $a_1,\ldots, a_n$ that are relatively prime, the conditional probability that the quotient rank of the resulting numerical semigroup is less than $n$ still tends to 0.
\end{remark}
Under the second model, a numerical semigroup $\nsg$ is selected uniformly at random from among the (finitely many) numerical semigroups with fixed smallest generator $m$ and number of gaps~$g$. Such numerical semigroups have high probability (i.e., tending to 1 as $g \to \infty$) of having embedding dimension $m$ (such numerical semigroups are said to have \emph{maximal embedding dimension}). We prove that maximal embedding dimension numerical semigroups never have full quotient rank, illustrating a stark contrast in asymptotic behavior with the first model.
We first recall a characterization of quotient rank 2 numerical semigroups, which appears in~\cite{numerical} as a characterization of proportionally modular numerical semigroups in the case $\gcd(\nsg) = 1$. Our statement here is more general, thanks to Remark~\ref{rem:relprime}.
\begin{thm}\label{thm:pmcriterion}
A numerical semigroup $\nsg$ with $\gcd(\nsg) = D$ has quotient rank 2 if and only if there exists an ordering $b_1, \ldots, b_n$ of its minimal generators such that:
\begin{enumerate}[(a)]
\item $\gcd(b_i, b_{i+1}) = D$ for $1 \leq i \leq n-1$; and
\item $b_{i-1} + b_{i+1}$ is divisible by $b_i$ for $2 \leq i \leq n-1$.
\end{enumerate}
\end{thm}
\begin{lemma}\label{lem:plusminusone}
For any $a, b, m \ge 1$, the numerical semigroup $\nsg = \langle m,am-1, bm+1 \rangle$ is a 2-quotient.
\end{lemma}
\begin{proof}
If $\mathsf e(\nsg) \le 2$, then $\nsg$ is clearly a 2-quotient. Otherwise, letting $b_1 = am-1$, $b_2 = m$, and $b_3 = bm+1$, it is clear that $\gcd(b_1, b_2) = \gcd(b_2, b_3) = 1$ and that $b_2 \mid (b_1 + b_3)$. As such, $\nsg$ is a 2-quotient by Theorem \ref{thm:pmcriterion}.
\end{proof}
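For a concrete instance (hypothetical values $m=4$, $a=2$, $b=2$, so $\nsg=\langle 4,7,9\rangle$), the ordering $b_1,b_2,b_3 = 7,4,9$ of the generators satisfies the two conditions of Theorem~\ref{thm:pmcriterion} with $D=1$:

```python
from math import gcd

m, a, b = 4, 2, 2                     # hypothetical instance of the lemma
b1, b2, b3 = a * m - 1, m, b * m + 1  # ordering 7, 4, 9 of the generators
assert gcd(b1, b2) == 1 and gcd(b2, b3) == 1  # condition (a), with D = 1
assert (b1 + b3) % b2 == 0                    # condition (b): 4 divides 16
```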
\begin{thm}\label{thm:maxembdim}
If $m = \min(\nsg \setminus \{0\})$, then $\nsg$ is an $(m-1)$-quotient. In particular, if $\mathsf e(\nsg) = m$, then $\nsg$ does not have full quotient rank.
\end{thm}
\begin{proof}
If $\nsg=\frac{\nsg}{1}$ has embedding dimension less than $m$, then the proof is immediate. If not, then $\nsg$ has $m$ minimal generators, and so they must all have distinct residues modulo $m$. That is,
\[ \nsg = \langle m, b_1m+1, \dots, b_{m-1}m + (m-1) \rangle \]
for some positive integers $b_1, \dots, b_{m-1}$. Write $\nsg = \nsg_1 + \nsg_2$ where
\[ \nsg_1 = \langle m, b_1m + 1, b_{m-1}m + (m-1) \rangle, \: \nsg_2 = \langle b_2m + 2, \dots, b_{m-2}m + (m-2) \rangle. \]
Now by Lemma~\ref{lem:plusminusone}, $\nsg_1$ is a 2-quotient, and $\nsg_2 = \frac{\nsg_2}{1}$ is trivially an $(m-3)$-quotient. Since $1$ is coprime to every integer, Theorem~\ref{thm:sums} implies $\nsg$ is an $(m-1)$-quotient.
\end{proof}
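The decomposition in this proof can be illustrated with $m=5$ and $b_1=\cdots=b_4=1$, i.e., $\nsg=\langle 5,6,7,8,9\rangle$, split as $\nsg_1=\langle 5,6,9\rangle$ and $\nsg_2=\langle 7,8\rangle$ (the truncation bound is an ad hoc choice):

```python
def semigroup(gens, limit):
    """Elements of <gens> up to limit, via a simple reachability table."""
    hit = [False] * (limit + 1)
    hit[0] = True
    for v in range(1, limit + 1):
        hit[v] = any(v >= g and hit[v - g] for g in gens)
    return {v for v in range(limit + 1) if hit[v]}

limit = 60
S  = semigroup([5, 6, 7, 8, 9], limit)  # maximal embedding dimension: e(S) = 5
S1 = semigroup([5, 6, 9], limit)        # a 2-quotient by the lemma
S2 = semigroup([7, 8], limit)           # trivially a quotient of itself
assert S == {x + y for x in S1 for y in S2 if x + y <= limit}
```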
\section*{Acknowledgements}
Tristram Bogart was supported by internal research grant INV-2020-105-2076 from the Faculty of Sciences of the Universidad de los Andes.
| {
"timestamp": "2022-12-19T02:06:32",
"yymm": "2212",
"arxiv_id": "2212.08285",
"language": "en",
"url": "https://arxiv.org/abs/2212.08285",
"abstract": "A natural operation on numerical semigroups is taking a quotient by a positive integer. If $\\mathcal S$ is a quotient of a numerical semigroup with $k$ generators, we call $\\mathcal S$ a $k$-quotient. We give a necessary condition for a given numerical semigroup $\\mathcal S$ to be a $k$-quotient, and present, for each $k \\ge 3$, the first known family of numerical semigroups that cannot be written as a $k$-quotient. We also examine the probability that a randomly selected numerical semigroup with $k$ generators is a $k$-quotient.",
"subjects": "Commutative Algebra (math.AC); Combinatorics (math.CO)",
"title": "When is a numerical semigroup a quotient?",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978384664716301,
"lm_q2_score": 0.8198933337131076,
"lm_q1q2_score": 0.8021710644080291
} |
https://arxiv.org/abs/2206.05396 | A systematic approach on some relevant theorems that follows from Kolmogorov's axioms | A selection of the relevant theorems of Probability Theory that comes directly from Kolmogorov's axioms, Set Theory basic results, definitions and rules of inference are listed and proven in a systematic approach, aiming the student who seeks a self-contained account on the matter before moving to more advanced material. | \section{Introduction}
\label{introduction}
Most Probability Theory and Statistics books present the rules of probability as consequences of Andrei Kolmogorov's axioms \cite{ross2010,rozanov1969,magalhaes2006,degroot1989,shiryaev1996,sinai1992,gnedenko1997}. Although they prove the most relevant relations between probabilities of different kinds of events, either directly or through exercises, I have found no systematic list of these relations and their proofs. Some generalizations are also lacking, so I have made a selection that includes the most common and relevant theorems (and their consequences) that arise, directly or indirectly, from the axioms. The list is neither complete nor fundamentally rigorous, but it provides a secure base from which the student can conduct research and prove more theorems even before the introduction of random variables. At the end of this paper, in section \ref{probdiagram}, a diagram relating the axioms and the main results is presented to give a broad view of the connections among them.
The discussion is intentionally didactic, in order to help the student follow the reasoning. It demands only some prior contact with proof theory and logic, alongside the Set Theory relations listed briefly in section \ref{sets}.
It is perhaps important to emphasize that the axiomatic system proposed by Kolmogorov, somewhat inspired by the frequentist view of statistics \cite{shafer2006}, was not the only one proposed, as described, for instance, by Terenin and Draper \cite{terenin2017}.
\section{Sets}
\label{sets}
For the proofs to follow, some relations between sets are necessary. I will present them here without proof, since they are not the subject of this paper; they can easily be found in introductions to mathematical proof \cite{wohlgemuth2011}, for instance, and in Probability Theory textbooks:
\begin{itemize}
\item \emph{Empty set}: The empty set, $\emptyset$, has the following properties for any set $A$: $A\cup \emptyset = A$ and $A\cap \emptyset = \emptyset$;
\item \emph{Space set}: The space set, $\Omega$, is the union of all possible sets. In other words, the properties $A \subset \Omega$, $A\cup \Omega = \Omega$ and $A\cap \Omega = A$ hold for any set $A$;
\item \emph{Complementary set}: The complementary set of $A$, denoted $\overline{A}$, has the following properties: $A \cup \overline{A}= \Omega$ and $A \cap \overline{A}= \emptyset$;
\item \emph{Associative laws for sets}: Given three sets $A$, $B$ and $C$, it can be proven that: $(A\cup B) \cup C = A\cup (B \cup C)$ and $(A\cap B) \cap C = A\cap (B \cap C)$;
\item \emph{Distributive laws for sets}: Given three sets $A$, $B$ and $C$, the following equations are valid: $A\cup(B\cap C) =(A\cup B)\cap (A\cup C)$ and $A\cap(B\cup C) =(A\cap B)\cup (A\cap C)$.
\end{itemize}
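All of these set identities can be checked on a small concrete example (the sets below are arbitrary illustrative choices):

```python
Omega = frozenset(range(6))
A, B, C = frozenset({0, 1, 2}), frozenset({1, 3}), frozenset({2, 3, 4})
comp = lambda X: Omega - X  # complement relative to the space set

assert A | frozenset() == A and A & frozenset() == frozenset()    # empty set
assert A <= Omega and A | Omega == Omega and A & Omega == A       # space set
assert A | comp(A) == Omega and A & comp(A) == frozenset()        # complement
assert (A | B) | C == A | (B | C) and (A & B) & C == A & (B & C)  # associative
assert A | (B & C) == (A | B) & (A | C)                           # distributive
assert A & (B | C) == (A & B) | (A & C)
```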
\section{Probabilities}
\label{probabilities}
\begin{defi}
$\Omega$ is the set that represents the sample space, the space of all possible events.
\label{definition1}
\end{defi}
\begin{defi}
Events related to the sample space are all subsets of $\Omega$.
\label{definition2}
\end{defi}
\begin{defi}
Two events $A$ and $B$ are pairwise mutually exclusive (PME) if $A\cap B=\emptyset$, that is, if they are disjoint sets.
\label{definition3}
\end{defi}
\begin{defi}
A class of subsets of $\Omega$, represented by $\mathcal{F}$, is considered a $\sigma$-algebra if it has the following properties \cite{magalhaes2006,sinai1992}:
\begin{enumerate}
\item $\Omega\in\mathcal{F}$.
\item If $A\in\mathcal{F}$, then $\overline{A}\in\mathcal{F}$.
\item (\emph{Closure with respect to countable unions}). If a countable collection $\{A_1,A_2,...\}=\{A_i\}_{i=1}^{\infty}$ of sets $A_i$ is such that $A_i\in\mathcal{F}$ for all $i$, then $\displaystyle\bigcup_{i=1}^{\infty}A_i\in\mathcal{F}$.
\end{enumerate}
\label{definition4}
\end{defi}
\begin{defi}
A partition of the sample space $\Omega$ is defined according to the following property:
\begin{equation}
\displaystyle\bigcup_{i=1}^{n}A_i=\Omega
\label{equation1}
\end{equation}
where the sets $A_i$ and $A_j$ are mutually exclusive (ME) for all $i\neq j$.
\label{definition5}
\end{defi}
\begin{defi}
Probability $P$ is a function of the subsets of the sample space corresponding to $\mathcal{F}$: $P=P(\mathcal{F})$. Additionally, it should obey the axioms that follow.
\label{definition6}
\end{defi}
\begin{center}
\fbox{\parbox{15cm}{
\begin{ax}[Non-negativity]
$P(A_i)\geq 0$, for every event $A_i\in\mathcal{F}$.
\label{axiom1}
\end{ax}
\begin{ax}[Normalization]
$P(\Omega)=1$.
\label{axiom2}
\end{ax}
\begin{ax}[Countable additivity]
If $A_1, A_2, \ldots \in \mathcal{F}$ are PME events (disjoint sets), that is, $A_i\cap A_j=\emptyset$ for all $i\neq j$, then $P\displaystyle\left(\bigcup_{i=1}^{\infty}A_i\right)=\displaystyle\sum_{i=1}^{\infty}P(A_i)$.
\label{axiom3}
\end{ax}
} }
\end{center}
\begin{defi}
If $\mathcal{F}$ is a $\sigma$-algebra of set $\Omega$, and $P$ a function of $\mathcal{F}$ with the properties described by axioms \ref{axiom1}, \ref{axiom2} and \ref{axiom3}, then the triple $\{\Omega,\mathcal{F},P\}$ is called \emph{probability space}.
\end{defi}
\begin{thm}
$P(\emptyset)=0$.
\label{theorem1}
\end{thm}
\begin{proof}
In axiom \ref{axiom3}, choose the sets $A_i$ for $i\geq 2$ such that $A_i=\emptyset_i=\emptyset$. Consequently:
\begin{equation}
P\left(\bigcup_{i=1}^{\infty}A_i\right)=P\left[A_1\cup\left(\bigcup_{i=2}^{\infty}A_i\right)\right]=P\left[A_1\cup\left(\bigcup_{i=2}^{\infty}\emptyset_i\right)\right]=P(A_1)+P\left(\bigcup_{i=2}^{\infty}\emptyset_i\right)
\label{equation2}
\end{equation}
Considering that $\displaystyle\bigcup_{i=2}^{\infty}\emptyset_i=\emptyset$, it follows that:
\begin{equation}
P(A_1\cup\emptyset)=P(A_1)+P(\emptyset)
\label{equation3}
\end{equation}
Given the property of the empty set $A_i\cup\emptyset=A_i$ for all $i$, then $P(A_1\cup\emptyset)=P(A_1)$ which can be substituted in Eq. \ref{equation3}:
\begin{equation}
P(A_1)=P(A_1)+P(\emptyset)
\label{equation4}
\end{equation}
Finally proving that:
\begin{equation}
P(\emptyset)=0
\label{equation5}
\end{equation}
\end{proof}
\subsection{Combination of events}
\label{combinationsofevents}
\begin{thm}
If the events $A_1$, $A_2$, \ldots, $A_n$ are PME, that is, $A_i\cap A_j=\emptyset$ for all $i\neq j$, then $P\displaystyle\left(\bigcup_{i=1}^{n}A_i\right)=\displaystyle\sum_{i=1}^{n}P(A_i)$ for every $n\geq 1$.
\label{theorem2}
\end{thm}
\begin{proof}
From axiom \ref{axiom3}, consider that from $i=1$ to $i=n$ we have the sets $A_1$, $A_2$, \ldots, $A_n$, and for $i>n$ we set $A_i=\emptyset_i$. Therefore:
$$P\left(\bigcup_{i=1}^{\infty}A_i\right)=P\left[\left(\bigcup_{i=1}^{n}A_i\right)\cup\left(\bigcup_{i=n+1}^{\infty}A_i\right)\right]=P\left[\left(\bigcup_{i=1}^{n}A_i\right)\cup\left(\bigcup_{i=n+1}^{\infty}\emptyset_i\right)\right]$$
\begin{equation}
P\left[\left(\bigcup_{i=1}^{n}A_i\right)\cup\left(\bigcup_{i=n+1}^{\infty}\emptyset_i\right)\right]=\sum_{i=1}^{n}P(A_i)+\sum_{i=n+1}^{\infty}P(\emptyset_i)
\label{equation6}
\end{equation}
Since $\bigcup_{i=n+1}^{\infty}\emptyset_i=\emptyset$ and $A\cup\emptyset=A$ for every $A$, including $A=\bigcup_{i=1}^{n}A_i$, and since theorem \ref{theorem1} gives $P(\emptyset_i)=P(\emptyset)=0$, it follows that:
\begin{equation}
P\left[\left(\bigcup_{i=1}^{n}A_i\right)\cup\left(\bigcup_{i=n+1}^{\infty}\emptyset_i\right)\right]=P\left(\bigcup_{i=1}^{n}A_i\right)=\sum_{i=1}^{n}P(A_i)
\label{equation7}
\end{equation}
\end{proof}
\begin{thm}[Normalization condition]
Let the sets $A_1$, $A_2$, \ldots, $A_n$ be a partition of $\Omega$. Then:
\begin{equation}
\displaystyle\sum_{i=1}^nP(A_i)=1
\label{equation8}
\end{equation}
\end{thm}
\begin{proof}
By definition \ref{definition5}, $A_i\cap A_j=\emptyset$ for all $i\neq j$. Therefore, by theorem \ref{theorem2}:
\begin{equation}
\displaystyle\sum_{i=1}^nP(A_i)=P\left(\bigcup_{i=1}^nA_i\right)=P(\Omega)=1
\label{equation9}
\end{equation}
The first equality in Eq. \ref{equation9} follows from theorem \ref{theorem2}, the second from definition \ref{definition5}, and the third from axiom \ref{axiom2}.
\end{proof}
\begin{lem}
The sets $A_1$, $A_2$, ..., $A_n$ are pairwise mutually exclusive (PME) if and only if every combination of two or more of these sets (including all of them) has empty intersection, i.e., if and only if they are mutually exclusive as a whole (ME).
\label{lemma1}
\end{lem}
\begin{proof}
Suppose first that $A_i\cap A_j=\emptyset$ for all $i\neq j$ in the presented sequence of sets. The intersection of any combination of two or more of these sets is contained in $A_i\cap A_j$ for any pair $A_i$, $A_j$ belonging to the combination, and the only subset of $\emptyset$ is $\emptyset$ itself, so every such intersection is empty. In particular:
\begin{equation}
A_1\cap A_2\cap A_3\cap ... \cap A_n=\emptyset
\label{equation10}
\end{equation}
Conversely, suppose the sets are ME as a whole, that is, every combination of two or more of them has empty intersection. The pairs are themselves combinations of two sets, so in particular, for any $i\neq j$:
\begin{equation}
A_i\cap A_j=\emptyset
\label{equation12}
\end{equation}
That is, the sets are PME. (Note that the weaker condition $\bigcap_{i=1}^{n}A_i=\emptyset$ alone would not suffice: the sets $\{1,2\}$, $\{2,3\}$ and $\{1,3\}$ have empty triple intersection but are not pairwise disjoint.)
\end{proof}
\begin{lem}
If $A$ and $B$ are events, then:
\begin{equation}
P(A\cup B)=P(A)+P(B)-P(A\cap B)
\label{equation13}
\end{equation}
\label{lemma2}
\end{lem}
\begin{proof}
By using the relations $A\cup B=A\cup(\overline{A}\cap B)$ and $B=(\overline{A}\cap B)\cup (A\cap B)$, and noting that $A$ and $\overline{A}\cap B$ are PME, as are $\overline{A}\cap B$ and $A\cap B$, we can apply theorem \ref{theorem2} for $n=2$ to obtain the relations:
\begin{equation}
P(A\cup B)=P(A)+P(\overline{A}\cap B)
\label{equation14}
\end{equation}
\begin{equation}
P(B)=P(\overline{A}\cap B)+P(A\cap B)
\label{equation15}
\end{equation}
Subtracting Eq. \ref{equation15} from \ref{equation14} leads to $P(A\cup B)=P(A)+P(B)-P(A\cap B)$.
\end{proof}
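Lemma \ref{lemma2} can be verified concretely on a finite sample space with the uniform measure, where $P(E)=|E|/|\Omega|$. The following Python sketch (the 12-point sample space and the events $A$, $B$ are illustrative choices of ours, not part of the text) checks the identity with exact rational arithmetic:

```python
from fractions import Fraction

# Uniform measure on a finite sample space: P(E) = |E| / |Omega|
omega = set(range(1, 13))                  # illustrative 12-point sample space
P = lambda e: Fraction(len(e), len(omega))

A = {x for x in omega if x % 2 == 0}       # "the outcome is even"
B = {x for x in omega if x > 8}            # "the outcome exceeds 8"

# Lemma: P(A u B) = P(A) + P(B) - P(A n B)
lhs = P(A | B)
rhs = P(A) + P(B) - P(A & B)
assert lhs == rhs
print(lhs)  # 2/3
```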
\begin{thm}[Rule of addition of probabilities, or inclusion-exclusion principle, or Poincaré's theorem]
For any events $A_1$, $A_2$, \ldots, $A_n$:
\begin{equation}
P\left(\displaystyle\bigcup_{i=1}^{n}A_i\right)=\sum_{i=1}^{n}P(A_i)-\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}P(A_i\cap A_j)+\ldots+(-1)^{n-1}P\left(\displaystyle\bigcap_{i=1}^{n}A_i\right)
\label{equation16}
\end{equation}
\label{theorem4}
\end{thm}
\begin{proof}
The proof follows by mathematical induction. Eq. \ref{equation16} is referred to as proposition $Q(n)$. The case $n=2$, that is, $Q(2)$, was already proved (lemma \ref{lemma2}), after changing the notation to $A=A_1$ and $B=A_2$. The case $Q(1)$ is trivial: $P(A_1)=P(A_1)$. We can prove the validity of $Q(n+1)$ assuming the validity of $Q(n)$ or, equivalently, we may prove $Q(n)$ from $Q(n-1)$. But first we prove $Q(3)$, to better understand the structure of $Q(n)$ and of its terms. As already proven:
\begin{equation}
P(A_1\cup A_2)=P(A_1)+P(A_2)-P(A_1\cap A_2)
\label{equation17}
\end{equation}
In order to demonstrate $Q(3)$, we first use $Q(2)$ (lemma \ref{lemma2}) and the associative properties of sets:
\begin{equation}
P(A_1\cup A_2\cup A_3)=P[A_1\cup (A_2\cup A_3)]=P(A_1)+P(A_2\cup A_3)-P[A_1\cap (A_2\cup A_3)]
\label{equation18}
\end{equation}
Then using the distributive property, $A_1\cap (A_2\cup A_3)=(A_1\cap A_2)\cup(A_1\cap A_3)$, and two more applications of $Q(2)$:
$$P(A_2\cup A_3)=P(A_2)+P(A_3)-P(A_2\cap A_3)$$
$$P[A_1\cap (A_2\cup A_3)]=P(A_1\cap A_2)+P(A_1\cap A_3)-P(A_1\cap A_2\cap A_3)$$
\begin{eqnarray}
P(A_1\cup A_2\cup A_3)
& = & P(A_1)+P(A_2)+P(A_3) \nonumber \\
& - & P(A_1\cap A_2)-P(A_1\cap A_3)-P(A_2\cap A_3) \nonumber \\
& + & P(A_1\cap A_2\cap A_3) \nonumber \\
\label{equation19}
\end{eqnarray}
Eq. \ref{equation19} can be written in terms of summations:
\begin{equation}
P\displaystyle\left(\bigcup_{i=1}^{3}A_i\right)=\sum_{i=1}^{3}P(A_i)-\sum_{i=1}^{3-1}\sum_{j=i+1}^{3}P(A_i\cap A_j)+\sum_{i=1}^{3-2}\sum_{j=i+1}^{3-1}\sum_{k=j+1}^{3}P(A_i\cap A_j\cap A_k)
\label{equation20}
\end{equation}
For $n$ events this can be generalized to:
\begin{eqnarray}
P\displaystyle\left(\bigcup_{i=1}^{n}A_i\right)
& = & \sum_{i=1}^{n}P(A_i)-\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}P(A_i\cap A_j)+\ldots \nonumber \\
& + & (-1)^{L-1}\sum_{i=1}^{n-(L-1)}\sum_{j=i+1}^{n-(L-2)}...\sum_{l=m+1}^{n-(L-L)}P(A_i\cap A_j\cap ...\cap A_m\cap A_l)+\ldots \nonumber \\
& + & (-1)^{n-1}\sum_{i=1}^{n-(n-1)}\sum_{j=i+1}^{n-(n-2)}...\sum_{e=d+1}^{n-(n-n)}P(A_i\cap A_j\cap ...\cap A_d\cap A_e) \nonumber \\
\label{equation21}
\end{eqnarray}
In a simpler notation, $Q(n)$ reads:
\begin{eqnarray}
P\displaystyle\left(\bigcup_{i=1}^{n}A_i\right)
& = & \sum_{1\leq i\leq n}P(A_i)-\sum_{1\leq i < j\leq n}P(A_i\cap A_j)+\ldots \nonumber \\
& + & (-1)^{L-1}\sum_{1\leq i < j < ...< m < l\leq n}P(A_i\cap A_j\cap ...\cap A_m\cap A_l) +\ldots \nonumber \\
\label{equation22}
\end{eqnarray}
The equation corresponding to $Q(n-1)$, applied to the $n-1$ sets $A_2$, \ldots, $A_n$, can be written as:
\begin{eqnarray}
P\displaystyle\left(\bigcup_{i=2}^{n}A_i\right)
& = & \sum_{2\leq i\leq n}P(A_i)-\sum_{2\leq i < j\leq n}P(A_i\cap A_j)+\ldots \nonumber \\
& + & (-1)^{L-1}\sum_{2\leq i < j < ...< m < l\leq n}P(A_i\cap A_j\cap ...\cap A_m\cap A_l)+\ldots \nonumber \\
\label{equation23}
\end{eqnarray}
Assuming $Q(n-1)$, we must prove $Q(n)$ in order to complete the proof by induction. From the left side of Eq. \ref{equation16}:
\begin{equation}
P\left(\bigcup_{i=1}^{n}A_i\right)=P\left[A_{1}\cup\left(\bigcup_{i=2}^{n}A_i\right)\right]
\label{equation24}
\end{equation}
By applying $Q(2)$:
\begin{eqnarray}
P\left(\bigcup_{i=1}^{n}A_i\right)
& = & P(A_{1})+P\left(\bigcup_{i=2}^{n}A_i\right) - P\left[A_{1}\cap\left(\bigcup_{i=2}^{n}A_i\right)\right] \nonumber \\
& = & P(A_{1})+P\left(\bigcup_{i=2}^{n}A_i\right) - P\left[\bigcup_{i=2}^{n}\left(A_1\cap A_i\right)\right] \nonumber \\
\label{equation25}
\end{eqnarray}
Applying $Q(n-1)$ to the last two terms on the right of Eq. \ref{equation25}:
\begin{eqnarray}
P\left(\displaystyle\bigcup_{i=1}^{n}A_i\right)
& = & P(A_1)+\displaystyle\sum_{2\leq i\leq n}P(A_i)-\displaystyle\sum_{2\leq i < j\leq n}P(A_i\cap A_j)+\ldots \nonumber \\
& + & (-1)^{L-1}\sum_{2\leq i < j < ...< m < l\leq n}P(A_i\cap A_j\cap ...\cap A_m\cap A_l) +\ldots \nonumber \\
& - & \left\{\displaystyle\sum_{2\leq i \leq n}P(A_1\cap A_i)+\ldots+(-1)^{L-2}\sum_{2\leq i < j < ...< m \leq n}P(A_1\cap A_i\cap ...\cap A_m)+...\right\} \nonumber \\
\label{equation26}
\end{eqnarray}
Performing the substitutions:
\begin{equation}
P(A_1)+\displaystyle\sum_{2\leq i\leq n}P(A_i)=\sum_{1\leq i\leq n}P(A_i)
\label{equation27}
\end{equation}
\begin{equation}
-\displaystyle\sum_{2\leq i \leq n}P(A_1\cap A_i)-\displaystyle\sum_{2\leq i < j\leq n}P(A_i\cap A_j)=-\displaystyle\sum_{1\leq i < j\leq n}P(A_i\cap A_j)
\label{equation28}
\end{equation}
\begin{center}
\ldots
\end{center}
\begin{eqnarray}
\displaystyle -(-1)^{L-2}\sum_{2\leq i < j < ...< m \leq n}P(A_1\cap A_i\cap ...\cap A_m)\nonumber \\
+(-1)^{L-1}\sum_{2\leq i < j < ...< m < l\leq n}P(A_i\cap A_j\cap ...\cap A_m\cap A_l)\nonumber \\
=(-1)^{L-1}\sum_{1\leq i < j < ...< m < l\leq n}P(A_i\cap A_j\cap ...\cap A_m\cap A_l) \nonumber \\
\label{equation29}
\end{eqnarray}
We reach $Q(n)$, and the theorem is proved by mathematical induction.
\end{proof}
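The inclusion-exclusion formula of theorem \ref{theorem4} can also be checked by brute force on a finite sample space: the alternating sum over all $k$-fold intersections must reproduce the probability of the union. A minimal Python sketch (the sample space and the three events are illustrative choices):

```python
from fractions import Fraction
from itertools import combinations

omega = set(range(20))                     # illustrative 20-point sample space
P = lambda e: Fraction(len(e), len(omega))

# Three overlapping events (illustrative choices)
events = [{x for x in omega if x % 2 == 0},
          {x for x in omega if x % 3 == 0},
          {x for x in omega if x >= 12}]

union = set().union(*events)

# Alternating sum over all k-fold intersections (the right-hand side of Q(n))
rhs = Fraction(0)
for k in range(1, len(events) + 1):
    for combo in combinations(events, k):
        rhs += (-1) ** (k - 1) * P(set.intersection(*combo))

assert P(union) == rhs
print(rhs)  # 4/5
```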
\begin{lem}
If $A$ and $B$ are mutually exclusive events, then:
\begin{equation}
P(A\cup B)=P(A)+P(B)
\label{equation30}
\end{equation}
\label{lemma3}
\end{lem}
\begin{proof}[Proof 1 (without theorem \ref{theorem2})]
Since the sets are PME, definition \ref{definition3} and theorem \ref{theorem1} give $P(A\cap B)=P(\emptyset)=0$. Lemma \ref{lemma2} then implies $P(A\cup B)=P(A)+P(B)-P(A\cap B)=P(A)+P(B)$.
\end{proof}
\begin{proof}[Proof 2 (by using theorem \ref{theorem2})]
Apply theorem \ref{theorem2} for $n=2$, writing $A_1=A$ and $A_2=B$ (which are PME by hypothesis), to obtain $P(A\cup B)=P(A)+P(B)$.
\end{proof}
\begin{lem}[Rule of addition of a finite number of ME events]
Let the events $A_1$, $A_2$, \ldots, $A_n$ be ME. The following relation is valid:
\begin{equation}
P\left(\displaystyle\bigcup_{i=1}^{n}A_i\right)=\sum_{i=1}^{n}P(A_i)
\label{equation31}
\end{equation}
\label{lemma4}
\end{lem}
\begin{proof}
Groups of events that are ME as a whole are PME (lemma \ref{lemma1}). Then we can apply theorem \ref{theorem2} in order to finish the proof.
\end{proof}
\begin{lem}
\begin{equation}
P(\overline{A})=1-P(A)
\label{equation32}
\end{equation}
\label{lemma5}
\end{lem}
\begin{proof}[Proof 1 (based on lemma \ref{lemma3} and one axiom)]
Given that $A$ and $\overline{A}$ are PME, lemma \ref{lemma3} gives $P(A\cup\overline{A})=P(A)+P(\overline{A})$. Considering the property of complementary events, $A\cup\overline{A}=\Omega$, we can use axiom \ref{axiom2}: $P(A\cup\overline{A})=P(\Omega)=1=P(A)+P(\overline{A})$, therefore $P(\overline{A})=1-P(A)$.
\end{proof}
\begin{proof}[Proof 2 (based on lemma \ref{lemma2} and one axiom)]
Take $B=\overline{A}$ in lemma \ref{lemma2}: $P(A\cup\overline{A})=P(A)+P(\overline{A})-P(A\cap\overline{A})$. Since $A\cup\overline{A}=\Omega$, axiom \ref{axiom2} gives $P(A\cup\overline{A})=1$, and since $A$ and $\overline{A}$ are PME (another property of complementary sets), $P(A\cap\overline{A})=P(\emptyset)=0$ (definition \ref{definition3} and theorem \ref{theorem1}). Therefore $1=P(A)+P(\overline{A})$, leading to Eq. \ref{equation32}.
\end{proof}
\begin{lem}
If $A\subset B$, then $P(A)\leq P(B)$.
\label{lemma6}
\end{lem}
\begin{proof}
If $A\subset B$, then $B=A\cup(\overline{A}\cap B)$. Since $A$ and $\overline{A}\cap B$ are PME, lemma \ref{lemma3} applies and $P(B)=P(A)+P(\overline{A}\cap B)$. By axiom \ref{axiom1}, $P(\overline{A}\cap B)\geq 0$, so $P(B)\geq P(A)$ and the proof is complete.
\end{proof}
\begin{lem}
\begin{equation}
P(A)\leq 1
\label{equation33}
\end{equation}
\label{lemma7}
\end{lem}
\begin{proof}
By the definition of the sample space, $A\subset\Omega$, and from lemma \ref{lemma6} $P(A)\leq P(\Omega)$. Direct application of axiom \ref{axiom2} leads to the result: $P(A)\leq P(\Omega)=1$.
\end{proof}
\subsection{Dependency among events}
\label{dependencyevents}
\begin{defi}
The conditional probability of the event $A$ given the event $B$, $P(A|B)$, is defined for $P(B)>0$ as:
\begin{equation}
P(A|B)=\displaystyle\frac{P(A\cap B)}{P(B)}
\label{equation34}
\end{equation}
\label{definition8}
\end{defi}
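Definition \ref{definition8} can be illustrated with two fair dice under the uniform measure $P(E)=|E|/36$. In the Python sketch below (the events ``the sum is 8'' and ``the first die shows 3'' are illustrative choices), conditioning on $B$ changes the probability of $A$ from $5/36$ to $1/6$:

```python
from fractions import Fraction
from itertools import product

# Two fair dice, uniform measure on the 36 outcomes
omega = set(product(range(1, 7), repeat=2))
P = lambda e: Fraction(len(e), len(omega))

A = {w for w in omega if w[0] + w[1] == 8}   # "the sum is 8"
B = {w for w in omega if w[0] == 3}          # "the first die shows 3"

P_A_given_B = P(A & B) / P(B)    # definition of conditional probability
print(P_A_given_B)               # 1/6
print(P(A))                      # 5/36: conditioning on B changed the probability
```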
\begin{lem}
Given $A$ and $B$ with $P(B) > 0$, we have $0\leq P(A|B)\leq 1$.
\label{lemma8}
\end{lem}
\begin{proof}
By definition, $P(A|B)=P(A\cap B)/P(B)$ with $P(B)>0$. Axiom \ref{axiom1} gives $P(A\cap B)\geq 0$, so $P(A|B)\geq 0$. Since $A\cap B\subset B$, lemma \ref{lemma6} gives $P(A\cap B)\leq P(B)$, so $P(A|B)\leq 1$, which proves the lemma. In the extreme cases: when $P(A\cap B)=P(B)$, $P(A|B)=1$; and if $P(A\cap B)=0$, that is, if the events $A$ and $B$ are PME, then $P(A|B)=0$.
\end{proof}
\begin{prop}
If $A$ and $B$ are PME events, then $P(A|B)=0$.
\label{proposition1}
\end{prop}
\begin{proof}
If $A$ and $B$ are PME, then $A\cap B=\emptyset$ (definition \ref{definition3}), and $P(A\cap B)=P(\emptyset)=0$ (theorem \ref{theorem1}). From the definition of $P(A|B)$ (Eq. \ref{equation34}), we then have $P(A|B)=P(A\cap B)/P(B)=0$.
\end{proof}
\begin{prop}
If the event $B$ implies the event $A$, that is, $B\subset A$, then $P(A|B)=1$.
\label{proposition2}
\end{prop}
\begin{proof}
If $B\subset A$, then $A\cap B=B$ and hence $P(A\cap B)=P(B)$. By definition, $P(A|B)=P(A\cap B)/P(B)$, hence $P(A|B)=P(B)/P(B)=1$.
\end{proof}
\begin{prop}[Rule of addition for conditional probabilities]
If $A_1$, $A_2$, ..., $A_n$ are ME with union $A=\displaystyle\bigcup_{i=1}^nA_i$, and $P(B)>0$, then:
\begin{equation}
P(A|B)=\displaystyle\sum_{i=1}^nP(A_i|B)
\label{equation35}
\end{equation}
\label{proposition3}
\end{prop}
\begin{proof}
According to the definition of $P(A|B)$ (definition \ref{definition8}), with $P(B)>0$:
\begin{equation}
P(A|B)=\displaystyle\frac{P(A\cap B)}{P(B)}=\frac{1}{P(B)}P\displaystyle\left(B\cap\bigcup_{i=1}^{n}A_i\right)=\frac{1}{P(B)}P\displaystyle\left[\bigcup_{i=1}^{n}(A_i\cap B)\right]
\label{equation36}
\end{equation}
Since $A_i$ and $A_j$ are PME for $i\neq j$, the sets $A_i\cap B$ and $A_j\cap B$ are also PME:
\begin{equation}
A_i\cap A_j=\emptyset \Rightarrow (A_i\cap A_j)\cap B=\emptyset\cap B =\emptyset\Rightarrow (A_i\cap B)\cap (A_j\cap B)=\emptyset
\label{equation37}
\end{equation}
Therefore we can apply lemma \ref{lemma4}:
\begin{equation}
P(A|B)=\displaystyle\frac{1}{P(B)}P\displaystyle\left[\bigcup_{i=1}^{n}(A_i\cap B)\right]=\displaystyle\frac{1}{P(B)}\sum_{i=1}^{n}P(A_i\cap B)=\sum_{i=1}^{n}\frac{P(A_i\cap B)}{P(B)}=\displaystyle\sum_{i=1}^{n}P(A_i|B)
\label{equation38}
\end{equation}
\end{proof}
\begin{defi}
The event $A$ is said to be independent of the event $B$, or statistically independent (SI), if and only if $P(A|B)=P(A)$.
\label{definition9}
\end{defi}
\begin{thm}[Rule of the product of probabilities]
Let the events $A_1$, $A_2$, ..., $A_n$ be such that $P\left(\displaystyle\bigcap_{i=1}^{k}A_i\right)> 0$ for $1\leq k < n$, so that every conditional probability below is well defined. Then:
\begin{equation}
P\left(\displaystyle\bigcap_{i=1}^{n}A_i\right)=P(A_1)\times P(A_2|A_1)\times \ldots \times P(A_n|A_1\cap ...\cap A_{n-1})
\label{equation39}
\end{equation}
\label{theorem5}
\end{thm}
\begin{proof}
Using the definition of $P(A|B)$ repeatedly for each factor in the product:
$$P\displaystyle\left(\bigcap_{i=1}^{n}A_i\right)=\frac{P\displaystyle\left(\bigcap_{i=1}^{n}A_i\right)}{P\displaystyle\left(\bigcap_{i=1}^{n-1}A_i\right)}\times\frac{P\displaystyle\left(\bigcap_{i=1}^{n-1}A_i\right)}{P\displaystyle\left(\bigcap_{i=1}^{n-2}A_i\right)}\times ...\times \frac{P(A_1\cap A_2)}{P(A_1)}P(A_1)$$
\begin{equation}
P\displaystyle\left(\bigcap_{i=1}^{n}A_i\right)=P\left[A_n|\displaystyle\left(\bigcap_{i=1}^{n-1}A_i\right)\right]\times P\left[A_{n-1}|\displaystyle\left(\bigcap_{i=1}^{n-2}A_i\right)\right]\times ... \times P(A_2|A_1)P(A_1)
\label{equation40}
\end{equation}
\end{proof}
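Theorem \ref{theorem5} is what one uses implicitly when drawing without replacement. As an illustrative check in Python (the standard 52-card deck with $A_i$ = ``the $i$th card drawn is an ace'' is our choice of example), the chain rule and direct counting give the same answer:

```python
from fractions import Fraction
from math import comb

# A_i = "the i-th card drawn is an ace", drawing without replacement
# Chain rule: P(A1 n A2 n A3) = P(A1) P(A2|A1) P(A3|A1 n A2)
p = Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50)

# Cross-check by direct counting: C(4,3) favorable hands out of C(52,3)
assert p == Fraction(comb(4, 3), comb(52, 3))
print(p)  # 1/5525
```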
\begin{lem}
If $A$ and $B$ are SI, and both $P(A)$ and $P(B)$ are nonzero, then $P(A\cap B)=P(A)P(B)$.
\label{lemma9}
\end{lem}
\begin{proof}
The definition $P(A|B)=P(A\cap B)/P(B)$ also applies with the roles of $A$ and $B$ exchanged: $P(B|A)=P(A\cap B)/P(A)$. If $A$ and $B$ are SI, then by definition \ref{definition9} $P(A|B)=P(A)$ and $P(B|A)=P(B)$. Thus in both cases $P(A\cap B)=P(A)P(B)$, since $P(A\cap B)=P(A|B)P(B)=P(A)P(B)$ and $P(A\cap B)=P(B|A)P(A)=P(B)P(A)$.
\end{proof}
\begin{defi}
The events $A_1$, $A_2$, ..., $A_n$ are defined as mutually independent (MI) if $P\displaystyle\left(\bigcap_{i=1}^{n}A_i\right)=\prod_{i=1}^{n}P(A_i)$ and the analogous identity holds for every combination of the sets between $1$ and $n$.
\label{definition10}
\end{defi}
\begin{lem}
If the events $A_1$, $A_2$, ..., $A_n$ are MI, then any pair $A_i$ e $A_j$, for $i\neq j$, are SI.
\label{lemma10}
\end{lem}
\begin{proof}
According to definition \ref{definition10}, $A_i$ and $A_j$ must be SI, since the independence holds for any combination of the sets, including pairs. Hence $P(A_i\cap A_j)=P(A_i)P(A_j)$ for any $i\neq j$ when the events $A_1$, $A_2$, ..., $A_n$ are MI. Notice that the converse is not necessarily true: assuming $P(A_i\cap A_j)=P(A_i)P(A_j)$ for every pair $(i,j)$ does not prove $P(A_i\cap A_j\cap A_k)=P(A_i)P(A_j)P(A_k)$ for all $(i,j,k)$, nor the corresponding identities for higher-order groups.
\end{proof}
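The final remark of the proof can be made concrete with a classical example (often attributed to Bernstein) of four equally likely outcomes, in which every pair of events is SI but the three events are not MI. A Python sketch:

```python
from fractions import Fraction

# Four equally likely outcomes
omega = {1, 2, 3, 4}
P = lambda e: Fraction(len(e), len(omega))

A, B, C = {1, 2}, {1, 3}, {1, 4}

# Every pair is statistically independent...
assert P(A & B) == P(A) * P(B)   # 1/4 == 1/2 * 1/2
assert P(A & C) == P(A) * P(C)
assert P(B & C) == P(B) * P(C)

# ...but the three events are not mutually independent:
print(P(A & B & C))              # 1/4
print(P(A) * P(B) * P(C))        # 1/8
```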
\begin{lem}
If the sets $A$ and $B$, with $P(A)>0$ and $P(B)>0$, are SI, then they are not PME, and vice versa.
\label{lemma11}
\end{lem}
\begin{proof}
If $A$ and $B$ are SI, then $P(A\cap B)=P(A)P(B)>0$ (lemma \ref{lemma9}). PME events are such that $A\cap B=\emptyset$, so $P(A\cap B)=P(\emptyset)=0$ according to theorem \ref{theorem1}. Thus PME events ($P(A\cap B)=0$) cannot be SI ($P(A\cap B)=P(A)P(B)>0$), and vice versa. Notice that this argument uses the hypotheses $P(A)>0$ and $P(B)>0$ (lemma \ref{lemma9}).
\end{proof}
\begin{lem}
If the set $\{C_1,...,C_n\}$ is a partition of the sample space $\Omega$, with $P(C_i)>0$ for every $i$, then for any event $A$:
\begin{equation}
P(A)=\displaystyle\sum_{i=1}^{n}P(A|C_i)P(C_i)
\label{equation41}
\end{equation}
\label{lemma12}
\end{lem}
\begin{proof}
The definition of $P(A|C_i)$ (Eq. \ref{equation34}) implies $P(A|C_i)P(C_i)=P(A\cap C_i)$. Summing from $i=1$ to $n$ on both sides:
\begin{equation}
\displaystyle\sum_{i=1}^nP(A|C_i)P(C_i)=\sum_{i=1}^nP(A\cap C_i)
\label{equation42}
\end{equation}
Since $C_i$ and $C_j$ are disjoint for any $i\neq j$, it is also true that $(A\cap C_i)\cap (A\cap C_j)=\emptyset$ (see the proof of proposition \ref{proposition3}). Therefore, by theorem \ref{theorem2}:
\begin{eqnarray}
\displaystyle\sum_{i=1}^nP(A|C_i)P(C_i)
& = & P\left[\bigcup_{i=1}^n (A\cap C_i)\right]=P\left[A\cap \left(\bigcup_{i=1}^n C_i\right)\right]\nonumber \\
& = & P(A\cap \Omega) = P(A|\Omega)P(\Omega)=P(A) \nonumber \\
\label{equation43}
\end{eqnarray}
Here we used the definition of a partition of $\Omega$ (definition \ref{definition5}), axiom \ref{axiom2}, the definition of conditional probability, and the identity $P(A|\Omega)=P(A)$ for any $A$ ($A\subset \Omega$, so every element of $A$ is also an element of the sample space).
\end{proof}
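Lemma \ref{lemma12} (the law of total probability) can again be checked with exact arithmetic on a finite space. In the sketch below (the partition $\{C_1,C_2,C_3\}$ and the event $A$ are illustrative choices), the weighted sum $\sum_i P(A|C_i)P(C_i)$ reproduces $P(A)$:

```python
from fractions import Fraction

omega = set(range(12))                     # illustrative sample space
P = lambda e: Fraction(len(e), len(omega))

# A partition {C1, C2, C3} of omega and an arbitrary event A
C = [set(range(0, 4)), set(range(4, 9)), set(range(9, 12))]
A = {1, 3, 4, 8, 10}

# Law of total probability: P(A) = sum_i P(A|C_i) P(C_i)
total = sum((P(A & Ci) / P(Ci)) * P(Ci) for Ci in C)
assert total == P(A)
print(total)  # 5/12
```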
\begin{thm}[Bayes' theorem]
Given an event $A$ with $P(A)>0$, and a set $\{C_1, ...,C_n\}$ that defines a partition of $\Omega$, with $P(C_i)>0$ for every $i$, we have:
\begin{equation}
P(C_i|A)=\displaystyle\frac{P(A|C_i)P(C_i)}{\displaystyle\sum_{j=1}^{n}P(A|C_j)P(C_j)}
\label{equation44}
\end{equation}
\label{theorem6}
\end{thm}
\begin{proof}
Due to the fact that $P(C_i|A)=P(A\cap C_i)/P(A)$ and $P(A|C_i)=P(A\cap C_i)/P(C_i)$ (definition \ref{definition8}), then:
\begin{equation}
P(C_i|A)=\frac{P(A|C_i)P(C_i)}{P(A)}
\label{equation45}
\end{equation}
We can substitute the result of lemma \ref{lemma12} for $P(A)$ in Eq. \ref{equation45} to prove the theorem.
\end{proof}
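A standard use of theorem \ref{theorem6} is the diagnostic-test computation, with the partition $\{C_1,C_2\}=\{D,\overline{D}\}$ (disease present or absent). The numbers below (prevalence, sensitivity, false-positive rate) are hypothetical, chosen only for illustration:

```python
from fractions import Fraction

# Hypothetical numbers for a diagnostic test (illustration only)
P_D      = Fraction(1, 100)    # prior prevalence, P(D)
P_pos_D  = Fraction(95, 100)   # sensitivity, P(+|D)
P_pos_nD = Fraction(5, 100)    # false-positive rate, P(+|not D)

# Denominator: total probability of a positive result over the partition {D, not D}
P_pos = P_pos_D * P_D + P_pos_nD * (1 - P_D)

# Bayes' theorem: posterior probability of disease given a positive test
P_D_pos = P_pos_D * P_D / P_pos
print(P_D_pos)                 # 19/118, roughly 0.161
```

Despite the high sensitivity, the low prior makes the posterior modest, which is exactly the information that Eq. \ref{equation44} organizes.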
\subsection{Probability theorems diagram}
\label{probdiagram}
In order to represent the main content of this paper, namely the most relevant equations presented so far and the relations among them, the diagram in Fig. \ref{diagram} was prepared. It omits some results and assumptions, focusing on the fundamental relations between axioms and results. A line divides the figure into two parts: below it are the results proved in section \ref{combinationsofevents}, and above it the results from section \ref{dependencyevents} are listed and interrelated.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth]{diagram}
\caption{Main results from Kolmogorov's axioms (in gray boxes). Those above the dashed line correspond to dependent events, and those below the line are related to combinations of events.}
\label{diagram}
\end{figure}
\section{Conclusion}
\label{conclusion}
I hope that the set of proofs and the choice of theorems, lemmas, propositions and definitions, as well as the order in which they are presented, help students of mathematics, statistics, engineering, chemistry and physics to see a broad picture of Kolmogorov's axiomatic system, even if in a simplified and incomplete form.
\section{Acknowledgements}
\label{acknowledgements}
I must thank CNPq and CAPES for the PhD scholarship which allowed me to investigate the theme.
% Source: https://arxiv.org/abs/2206.05396
% Title: A systematic approach on some relevant theorems that follows from Kolmogorov's axioms
% Source: https://arxiv.org/abs/math-ph/0511032
% Title: A second eigenvalue bound for the Dirichlet Schr\"odinger operator

\begin{abstract}
Let $\lambda_i(\Omega,V)$ be the $i$th eigenvalue of the Schr\"odinger operator with Dirichlet boundary conditions on a bounded domain $\Omega \subset \mathbb{R}^n$ and with the positive potential $V$. Following the spirit of the Payne-P\'olya-Weinberger conjecture and under some convexity assumptions on the spherically rearranged potential $V_\star$, we prove that $\lambda_2(\Omega,V) \le \lambda_2(S_1,V_\star)$. Here $S_1$ denotes the ball, centered at the origin, that satisfies the condition $\lambda_1(\Omega,V) = \lambda_1(S_1,V_\star)$. Further we prove, under the same convexity assumptions on a spherically symmetric potential $V$, that $\lambda_2(B_R, V) / \lambda_1(B_R, V)$ decreases when the radius $R$ of the ball $B_R$ increases. We conclude with several results about the first two eigenvalues of the Laplace operator with respect to a measure of Gaussian or inverted Gaussian density.
\end{abstract}

\section{Introduction}
In an earlier publication \cite{AB2}, Ashbaugh and one of us have proven the Payne-P\'olya-Weinberger (PPW) conjecture,
which states that the first two eigenvalues $\lambda_1, \lambda_2$ of the Dirichlet-Laplacian on a bounded domain
$\Omega\subset \mathbb R^n$ ($n\ge 2$) obey the bound
\begin{equation} \label{EqPPW}
\lambda_2 / \lambda_1 \le j_{n/2,1}^2/j_{n/2-1,1}^2.
\end{equation}
Here $j_{\nu,k}$ stands for the $k$th positive zero of the Bessel function $J_\nu$. Thus the right hand side of
(\ref{EqPPW}) is just the ratio of the first two eigenvalues of the Dirichlet-Laplacian on an $n$-dimensional ball of
arbitrary radius. This result is optimal in the sense that equality holds in (\ref{EqPPW}) if and only if $\Omega$ is a
ball.
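For orientation, the numerical value of the right hand side of (\ref{EqPPW}) is easy to compute. The self-contained Python sketch below (our illustration, not part of the argument) approximates $J_\nu$ for integer $\nu$ by its integral representation and locates $j_{0,1}$ and $j_{1,1}$ by bisection, giving the PPW constant for $n=2$:

```python
import math

def bessel_j(nu, x, panels=2000):
    """J_nu(x) for integer nu via the integral representation
    J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt (trapezoidal rule)."""
    s = 0.0
    for k in range(panels + 1):
        t = math.pi * k / panels
        w = 0.5 if k in (0, panels) else 1.0
        s += w * math.cos(nu * t - x * math.sin(t))
    return s / panels

def first_zero(nu, lo, hi, iters=60):
    """Bisection for a zero of J_nu inside a sign-changing bracket [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bessel_j(nu, lo) * bessel_j(nu, mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

j01 = first_zero(0, 2.0, 3.0)   # j_{0,1} ~ 2.404826
j11 = first_zero(1, 3.0, 4.5)   # j_{1,1} ~ 3.831706
print(j11**2 / j01**2)          # ~ 2.539, the PPW bound for n = 2
```

For odd $n$ the orders $\nu = n/2, n/2-1$ are not integers, so this particular integral representation does not apply and a different evaluation of $J_\nu$ would be needed.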
The proof of the PPW conjecture has been generalized in several ways. In \cite{AB1} a corresponding theorem has been
established for the Laplacian operator on a domain $\Omega$ that is contained in a hemisphere of the $n$-dimensional
sphere $\mathbb S^n$. More precisely, it has been shown that $\lambda_2(\Omega) \le \lambda_2(S_1)$, where $S_1$ is the
$n$-dimensional geodesic ball in $\mathbb S^n$ that has $\lambda_1(\Omega)$ as its first Dirichlet eigenvalue.
A further variant of the PPW conjecture has been considered by Haile. In \cite{H} he compares the second eigenvalue
$\lambda_2(\Omega,kr^\alpha)$ of the Schr\"o\-din\-ger operator with the potential $V = kr^\alpha$ ($k>0, \alpha\ge 2$)
with $\lambda_2(S_1,kr^\alpha)$, where $S_1$ is the ball, centered at the origin, that satisfies the condition
$\lambda_1(\Omega,kr^\alpha) = \lambda_1(S_1,kr^\alpha)$. Here and in the following we denote by $\lambda_i(\Omega,V)$
the $i$th eigenvalue of the Schr\"o\-din\-ger operator $-\Delta + V(\vec r)$ with Dirichlet boundary conditions on a
bounded domain $\Omega \subset \mathbb{R}^n$.
We have to mention a gap in \cite{H}, which occurs in the proof of Lemma 3.2. The author claims (and uses) that all
derivatives of the function $Z(\theta)$ (which is equal to $T'(\theta)$ where $T(\theta)=0$) coincide with the
derivatives of $T'(\theta)$ at the points where $T(\theta)=0$. This is not proven, and there seems to be no reason why
it should be true. The same problem occurs in the proof of Lemma 3.3. In the present paper we will prove a theorem that
includes Haile's theorem as a special case and thus remedies the situation.
One very important difference between the original PPW conjecture and the extended problems in \cite{AB1,H} is that in
the later cases the ratio $\lambda_2/\lambda_1$ is not scaling invariant anymore. While $\lambda_2/\lambda_1$ is the
same for any ball in $\mathbb R^n$, it is an increasing function of the radius for balls in $\mathbb S^n$ \cite{AB1}.
On the other hand, we will see that $\lambda_2(B_R,V)/\lambda_1(B_R,V)$ on the ball $B_R$ is a decreasing function of
the radius $R$, if $V$ has certain convexity properties. This raises the question of which is the `right size' of the
comparison ball in the PPW estimate. We will make some remarks on this problem below.
The main objective of the present work is to prove a PPW type result for a Schr\"odinger operator with a positive
potential. We will state the corresponding theorem in the following section. In Section \ref{SectionGauss} we will
transfer our results to the case of a Laplacian operator with respect to a metric of Gaussian or inverted Gaussian
measure, the two cases of which are closely related to the harmonic oscillator. The rest of the article will be devoted
to the proofs of our results.
\section{Main Results} \label{SectionResults}
Let $\Omega \subset \mathbb{R}^n$ (with $n\ge 2$) be some bounded domain and $V : \Omega \rightarrow \mathbb{R}^+$ some positive
potential such that the Schr\"odinger operator $-\Delta + V$ (subject to Dirichlet boundary conditions) is self-adjoint
in $L^2(\Omega)$. We call $\lambda_i(\Omega,V)$ its $i$th eigenvalue. Further, we denote by $V_\star$ the radially
increasing rearrangement of $V$. Then the following PPW type estimate holds:
\begin{Theorem} \label{TheoremPPW}
Let $S_1 \subset \mathbb{R}^n$ be a ball centered at the origin and of radius $R_1$ and let $\tilde V: S_1 \rightarrow \mathbb{R}^+$ be some
radially symmetric positive potential such that $\tilde V(r) \le V_\star(r)$ for all $0 \le r \le R_1$ and
$\lambda_1(\Omega,V) = \lambda_1(S_1,\tilde V)$. If $\tilde V(r)$ satisfies the conditions
\begin{enumerate}
\item[a)] $\tilde V(0) = \tilde V'(0)=0$ and
\item[b)] $\tilde V'(r)$ exists and is increasing and convex,
\end{enumerate}
then
\begin{equation}\label{EqTheorem}
\lambda_2(\Omega,V) \le \lambda_2(S_1,\tilde V).
\end{equation}
\end{Theorem}
If $V$ is such that $V_\star$ satisfies the convexity conditions stated in the theorem, the best bound is obtained by
choosing $\tilde V = V_\star$. In this case the theorem is a typical PPW result and optimal in the sense that equality holds
in (\ref{EqTheorem}) if $\Omega$ is a ball and $V = V_\star$. For a general potential $V$ we still get a non-trivial
bound on $\lambda_2(\Omega,V)$ though it is not sharp anymore. To show that our Theorem \ref{TheoremPPW} contains
Haile's result \cite{H} as a special case, we state the following corollary:
\begin{Corollary}
Let $\tilde V: \mathbb{R}^n \rightarrow \mathbb{R}^+$ be a radially symmetric positive potential that satisfies the conditions a) and b) of
Theorem \ref{TheoremPPW} and let $S_1 \subset \mathbb{R}^n$ be the ball (centered at the origin) such that
$\lambda_1(\Omega,\tilde V) = \lambda_1(S_1,\tilde V)$. Then
$$\lambda_2(\Omega,\tilde V) \le \lambda_2(S_1,\tilde V).$$
\end{Corollary}
The proof of Theorem \ref{TheoremPPW} follows the lines of the proof in \cite{AB2} and will be presented in Section
\ref{SectionProof}. Let us make a few remarks on the conditions that $\tilde V$ has to satisfy. Condition a) is not a very
serious restriction, because any bounded potential can be shifted such that $V_\star(0) = 0$. Also $V_\star'(0) = 0$
holds if $V$ is somewhat regular where it takes the value zero. Moreover, our method relies heavily on the fact that
\begin{equation} \label{EqLambda}
\lambda_2(B_R,\tilde V) \ge \left(1+\frac 2n\right) \lambda_1(B_R,\tilde V),
\end{equation}
which is a byproduct of our proof and holds for any ball $B_R$ and any potential $\tilde V$ that satisfies the conditions of
Theorem \ref{TheoremPPW}. The conditions a) and b) will be needed to show the above inequality, which is equivalent to
$q''(0) \le 0$ for a function $q$ to be defined in the proof. Numerical studies indicate that b) is somewhat sharp in
the sense that, for example, a potential $r^{2-\epsilon}$ (which violates b) only `slightly') does not satisfy
(\ref{EqLambda}) for every $R$. In this case the statement of Theorem \ref{TheoremPPW} may still be true, but the
typical scheme of the PPW proof will fail. Furthermore, conditions a) and b) will allow us to employ the crucial
Baumgartner-Grosse-Martin (BGM) inequality \cite{BGM,AB3}: From a) and b) we see that $V(r) + rV'(r)$ is increasing.
Consequently $rV(r)$ is convex, which is just the condition needed to apply the BGM inequality.
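The constant $1+2/n$ in (\ref{EqLambda}) is moreover attained in a limiting case; the following observation (added here for illustration) concerns the harmonic oscillator potential $\tilde V(r)=r^2$, which satisfies conditions a) and b):

```latex
As $R\to\infty$ the Dirichlet eigenvalues of $-\Delta + r^2$ on $B_R$
decrease to the eigenvalues $2N+n$ ($N=0,1,2,\dots$) of the harmonic
oscillator on all of $\mathbb R^n$, so that
$$\frac{\lambda_2(B_R,r^2)}{\lambda_1(B_R,r^2)}
  \longrightarrow \frac{n+2}{n} = 1+\frac 2n
  \qquad \mbox{as } R\to\infty,$$
i.e.\ the bound (\ref{EqLambda}) is saturated in the limit of large balls,
which indicates that the constant $1+2/n$ is sharp within this class of
potentials.
```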
As mentioned above, one has to choose carefully the size of the comparison ball in a PPW estimate if
$\lambda_2/\lambda_1$ is a non-constant function of the ball's radius. In the case of the Laplacian on $\mathbb S^n$,
one compares the second eigenvalues on $\Omega$ and $S_1$, the ball that has the same first eigenvalue as $\Omega$. By
the Rayleigh-Faber-Krahn (RFK) inequality for $\mathbb S^n$ it is clear that $S_1 \subset \Omega^\star$, where
$\Omega^\star$ is the spherically symmetric rearrangement of $\Omega$. It has also been shown in \cite{AB1} that
$\lambda_2/\lambda_1$ on a geodesic ball in $\mathbb S^n$ is an increasing function of the ball's radius. One can
conclude from these two facts that in $\mathbb S^n$ an estimate of the type (\ref{EqTheorem}) is stronger than the
inequality
\begin{equation} \label{EqLL}
\lambda_2(\Omega)/\lambda_1(\Omega) \le \lambda_2(\Omega^\star) / \lambda_1(\Omega^\star).
\end{equation}
It has also been argued in \cite{AB3} why the situation is different in the hyperbolic space $\mathbb H^n$. Here an estimate of
the type (\ref{EqLL}) is not possible, for the following reason: One can show that $\lambda_2/\lambda_1$ on geodesic
balls in $\mathbb H^n$ is a decreasing function of the radius. Now suppose, for example, that $\Omega$ is the ball $B_R$ with
very long and thin tentacles attached to it. Then the first and the second eigenvalue of the Laplacian on $\Omega$ and
$B_R$ are almost the same, while the ratio $\lambda_2/\lambda_1$ on $\Omega^\star$ can be considerably less than on
$B_R$ (and thus on $\Omega$). We will prove a PPW inequality of the type $\lambda_2(\Omega) \le \lambda_2(S_1)$ for
$\mathbb H^n$ and the monotonicity of $\lambda_2/\lambda_1$ on geodesic balls in a future publication.
To shed light on the question of which is the right type of PPW inequality for the Schr\"odinger operator on $\Omega$, we
state
\begin{Theorem} \label{TheoremMonotonicity}
Let $V: \mathbb{R}^n \rightarrow \mathbb{R}^+$ be a spherically symmetric potential that satisfies the conditions of Theorem
\ref{TheoremPPW}, i.e.
\begin{enumerate}
\item[a)] $V(0) = V'(0)=0$ and
\item[b)] $V'(r)$ exists and is increasing and convex.
\end{enumerate}
Then the ratio
$$\frac{\lambda_2(B_R, V)}{\lambda_1(B_R, V)}$$
is a decreasing function of $R$.
\end{Theorem}
This theorem shows that one cannot replace equation (\ref{EqTheorem}) in our Theorem \ref{TheoremPPW} by an inequality
of the type (\ref{EqLL}), following the same reasoning as in the case of the Laplacian on $\mathbb H^n$. Theorem
\ref{TheoremMonotonicity} will be proven in Section \ref{SectionProof2}.
\section{Connection to the Laplacian operator in Gaussian space} \label{SectionGauss}
Recently, there has been some interest in isoperimetric inequalities in $\mathbb{R}^n$ endowed with a measure of Gaussian ($\,\mathrm{d}
\mu_- = e^{-r^2}\,\mathrm{d}^nr$) or inverted Gaussian ($\,\mathrm{d} \mu_+ = e^{+r^2}\,\mathrm{d}^nr$) density. For the Gaussian space it
has been known for several years that a classical isoperimetric inequality holds. Yet the ratio of Gaussian perimeter
and Gaussian measure is minimized by half-spaces instead of spherical domains \cite{B}. The `inverted Gaussian' case,
i.e., $\mathbb{R}^n$ with the measure $\mu_+$, is more similar to the Euclidean case: It has been shown recently that a
classical isoperimetric inequality holds and that the minimizers are balls centered at the origin \cite{MPB}.
We consider the Dirichlet-Laplacians $-\Delta_\pm$ on $L^2(\Omega,\,\mathrm{d}\mu_\pm)$, where $\Omega\varsubsetneqq\mathbb{R}^n$ is a
domain of finite measure $\,\mathrm{d}\mu_\pm(\Omega)$. These two operators are defined by their quadratic forms
\begin{equation} \label{EqQuadForm}
h_\pm[\Psi] = \int_\Omega |\nabla \Psi(\vec r)|^2 \,\mathrm{d} \mu_\pm, \quad \Psi \in W^{1,2}_0(\Omega,\,\mathrm{d} \mu_\pm).
\end{equation}
The eigenfunctions $\Psi^\pm_i$ and eigenvalues $\lambda^\pm_i(\Omega)$ in question are determined by the
differential equation
\begin{equation} \label{EqEVProblem}
-\sum\limits_{k=1}^n \frac{\partial}{\partial r_k} \left(e^{\pm r^2} \frac{\partial \Psi^\pm_i}{\partial r_k}\right) =
\lambda^\pm_i(\Omega) e^{\pm r^2} \Psi^\pm_i(\vec r).
\end{equation}
There is a tight connection between the operators $-\Delta_\pm$ on a domain $\Omega$ and the harmonic oscillator
$-\Delta+r^2$ restricted to $\Omega$. Their eigenfunctions and eigenvalues are related by \cite{BCF}
\begin{eqnarray}
\Psi^\pm_i(\vec r) &=& \Psi_i(\vec r) \cdot e^{\mp r^2/2} \quad \textmd{and} \nonumber\\
\lambda^\pm_i(\Omega) &=& \lambda_i(\Omega, r^2) \pm n, \label{EqCon}
\end{eqnarray}
denoting by $\Psi_i$ the Dirichlet eigenfunctions of $-\Delta + r^2$ on $\Omega$.
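For the reader's convenience we sketch the standard ground-state substitution behind (\ref{EqCon}): writing $\Psi^\pm_i = \Psi_i\, e^{\mp r^2/2}$, inserting this into (\ref{EqEVProblem}) and simplifying, one is left with
$$-\Delta \Psi_i + \Bigl(\tfrac14\bigl|\nabla(\pm r^2)\bigr|^2 + \tfrac12\,\Delta(\pm r^2)\Bigr)\Psi_i = -\Delta\Psi_i + (r^2 \pm n)\,\Psi_i = \lambda^\pm_i(\Omega)\,\Psi_i,$$
which is the eigenvalue equation of $-\Delta + r^2$ on $\Omega$ with eigenvalue $\lambda^\pm_i(\Omega) \mp n$.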
There is an equivalent of the RFK inequality in Gaussian space \cite{BCF} stating that $\lambda^-_1(\Omega)$ is
minimized for given $\mu_-(\Omega)$ if $\Omega$ is a half-space. The corresponding fact for the `inverted' Gaussian
space is that $\lambda^+_1(\Omega)$ is minimized for given $\mu_+(\Omega)$ by the ball centered at the origin. This can
be seen from the RFK inequality for Schr\"odinger operators \cite{L} in combination with (\ref{EqCon}).
Concerning the second eigenvalue, we will now show what our results from Section \ref{SectionResults} imply for the
operators $-\Delta_\pm$. We state
\begin{Theorem} \label{TheoremMonotonicity2}
For the operator $-\Delta_+$ on a ball $B_R$ of radius $R$ (centered at the origin) the ratio
$\lambda_2^+(B_R)/\lambda_1^+(B_R)$ is a strictly decreasing function of $R$.
\end{Theorem}
In Section \ref{SectionProof3} we will derive Theorem \ref{TheoremMonotonicity2} from Theorem \ref{TheoremMonotonicity}
in a purely algebraic way using only the relation (\ref{EqCon}). Repeating the argument for $\mathbb H^n$ from the
previous section, we see that by Theorem \ref{TheoremMonotonicity2} the best PPW result we can expect to get is
\begin{Theorem} \label{TheoremPPW2}
Let $S_1$ be the ball (centered at the origin) that satisfies the condition $\lambda^+_1(S_1) = \lambda^+_1(\Omega)$. Then
\begin{equation*}
\lambda_2^+(\Omega) \le \lambda_2^+(S_1).
\end{equation*}
\end{Theorem}
Theorem \ref{TheoremPPW2} follows immediately from Theorem \ref{TheoremPPW} and (\ref{EqCon}). In the same way we
easily get the corresponding version for $-\Delta_-$:
\begin{Theorem} \label{TheoremPPW3}
Let $S_1$ be the ball (centered at the origin) that satisfies the condition $\lambda^-_1(S_1) = \lambda^-_1(\Omega)$. Then
\begin{equation*}
\lambda_2^-(\Omega) \le \lambda_2^-(S_1).
\end{equation*}
\end{Theorem}
Yet in this case it is not clear anymore whether $S_1$ is the optimal comparison ball: First, in contrast to the
`inverted' Gaussian case, the ratio $\lambda_2^-(B_R)/\lambda_1^-(B_R)$ is not a decreasing function of $R$.
This can be seen by comparing the values of $\lambda_2^-(B_R)/\lambda_1^-(B_R)$ for $R\rightarrow 0$ and $R\rightarrow
\infty$: For small $R$ the ratio is close to the Euclidean value ($\approx 2.539$) while for large $R$ it approaches
infinity (by (\ref{EqCon})). Second, the RFK inequality in Gaussian space states that $\lambda_1^-(\Omega)$ is
minimized by half-spaces, not balls. This means that for general $\Omega$ we don't know whether $\Omega^\star$ is
bigger or smaller than $S_1$. Given these differences, it remains unclear what the most natural way is to generalize the
PPW conjecture to Gaussian space.
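As a quick numerical aside (our illustration, not part of the original argument): the Euclidean value $\approx 2.539$ quoted above is, for $n=2$, the classical PPW ratio $j_{1,1}^2/j_{0,1}^2$ of squared Bessel-function zeros. A self-contained sketch recomputing it from the power series of $J_0$ and $J_1$:

```python
import math

def bessel_j(n, x, terms=40):
    """Power series for the Bessel function J_n(x); converges fast for moderate x."""
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2) ** (2 * k + n) for k in range(terms))

def first_zero(n, lo, hi):
    """Bisect for a zero of J_n in [lo, hi]; the bracket must contain one sign change."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if bessel_j(n, lo) * bessel_j(n, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

j01 = first_zero(0, 2.0, 3.0)      # ~2.40483; lambda_1 on the unit disk is j01^2
j11 = first_zero(1, 3.0, 4.5)      # ~3.83171; lambda_2 on the unit disk is j11^2
print(round((j11 / j01) ** 2, 3))  # -> 2.539
```

The brackets $[2,3]$ and $[3,4.5]$ are chosen by hand to isolate the first positive zeros of $J_0$ and $J_1$.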
\section{A monotonicity lemma} \label{SectionLemma}
In our proof of Theorem \ref{TheoremPPW} we will need
\begin{Lemma}[Monotonicity of $g$ and $B$] \label{LemmaMonotonicityOfgAndB}
Let $\tilde V$, $S_1$ and $R_1$ be as in Theorem \ref{TheoremPPW} and call $z_1(r)$ and $z_2(r)$ the radial parts (both
chosen positive) of the first two Dirichlet eigenfunctions of $-\Delta + \tilde V$ on $S_1$. Set
\begin{eqnarray*}
g(r) &=& \frac{z_2(r)}{z_1(r)} \quad \textmd{and}\\
B(r) &=& g'(r)^2 + (n-1)\frac{g(r)^2}{r^2}
\end{eqnarray*}
for $0 < r < R_1$. Then $g(r)$ is increasing on $(0,R_1)$ and $B(r)$ is decreasing on $(0,R_1)$.
\end{Lemma}
\begin{proof} \cite{H,AB0}
In this section we abbreviate $\lambda_i = \lambda_i(S_1,\tilde V)$. The functions $z_1$ and $z_2$ are solutions of the
differential equations
\begin{eqnarray}
-z''_{1} - \frac{n-1}{r} z'_{1} + \left(\tilde V -\lambda_1\right) z_{1} &=& 0, \label{EquN1}\\
-z''_{2} - \frac{n-1}{r} z'_{2} + \left(\frac{n-1}{r^2} + \tilde V - \lambda_2\right) z_{2} &=& 0 \nonumber
\end{eqnarray}
with the boundary conditions
\begin{equation}\label{EqBoundary}
z'_{1}(0) = 0, \quad z_{1}(R_1) = 0, \quad z_{2}(0) = 0, \quad z_{2}(R_1) = 0.
\end{equation}
That the second eigenfunction has this radial form is assured by the BGM inequality \cite{AB0,BGM}, which is applicable because $r\tilde V$ is convex. As in \cite{AB0} we
define the function
$$q(r) := \frac{rg'(r)}{g(r)}.$$
Proving the lemma is thus reduced to showing that $0 < q(r) < 1$ and
$q'(r) < 0$ for $r \in [0,R_1]$: since $g' = qg/r$, positivity of $q$ makes $g$ increasing, while a direct computation shows that $q \le 1$ together with $q' \le 0$ forces $g'' \le 0$ and hence $B' = 2g'g'' + 2(n-1)g r^{-3}(rg'-g) \le 0$. Using the definition of $g$ and the equations (\ref{EquN1}), one can show that $q(r)$ is
a solution of the Riccati differential equation
\begin{equation} \label{EqRic}
q' = (\lambda_1-\lambda_2) r + \frac{(1-q)(q+n-1)}r - 2q\frac{z_1'}{z_1}.
\end{equation}
It is straightforward to establish the boundary behavior
$$q(0) = 1, \quad q'(0) = 0, \quad q''(0) = \frac 2n \left(\left(1+\frac 2n\right)\lambda_1 - \lambda_2\right)$$
and
$$q(R_1) = 0.$$
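Indeed, near the origin the boundary conditions (\ref{EqBoundary}) and the equations (\ref{EquN1}) force the expansions $z_1(r) = c_1 + \mathcal{O}(r^2)$ and $z_2(r) = c_2 r + \mathcal{O}(r^3)$ with $c_1, c_2 > 0$, so that
$$g(r) = \frac{c_2}{c_1}\,r + \mathcal{O}(r^3) \quad\textmd{and}\quad q(r) = \frac{rg'(r)}{g(r)} = 1 + \mathcal{O}(r^2),$$
which gives $q(0) = 1$ and $q'(0) = 0$; the value of $q''(0)$ follows by carrying the expansions one order further with the help of (\ref{EqRic}).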
\begin{Fact} \label{Fact1}
For $0 \le r \le R_1$ we have $q(r) \ge 0$.
\end{Fact}
\begin{proof}
Assume the contrary. Then there exist two points $0 < r_1 < r_2 \le R_1$ such that $q(r_1) = q(r_2) = 0$ but $q'(r_1)
\le 0$ and $q'(r_2) \ge 0$. If $r_2 < R_1$ then the Riccati equation (\ref{EqRic}) yields
$$0 \ge q'(r_1) = (\lambda_1-\lambda_2) r_1 + \frac{n-1}{r_1} > (\lambda_1-\lambda_2) r_2 + \frac{n-1}{r_2} = q'(r_2)
\ge 0,$$ which is a contradiction. If $r_2 = R_1$ then we get a contradiction in a similar way by
$$0 \ge q'(r_1) = (\lambda_1-\lambda_2) r_1 + \frac{n-1}{r_1} > (\lambda_1-\lambda_2) R_1+ \frac{n-1}{R_1} = 3q'(R_1)
\ge 0.$$
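The factor $3$ in the last display can be seen as follows: near $R_1$ we have $z_1(r) \approx z_1'(R_1)(r-R_1)$ and $q(r) \approx q'(R_1)(r-R_1)$, so that the last term in (\ref{EqRic}) does not vanish in the limit but satisfies
$$-2q(r)\,\frac{z_1'(r)}{z_1(r)} \;\longrightarrow\; -2\,q'(R_1) \quad \textmd{as } r \uparrow R_1.$$
Evaluating (\ref{EqRic}) in this limit gives $q'(R_1) = (\lambda_1-\lambda_2)R_1 + (n-1)R_1^{-1} - 2q'(R_1)$, i.e. $(\lambda_1-\lambda_2)R_1 + (n-1)R_1^{-1} = 3\,q'(R_1)$.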
\end{proof}
In the following we will analyze the behavior of $q'$ according to (\ref{EqRic}), considering $r$ and $q$ as two
independent variables. For the sake of a compact notation we will make use of the following abbreviations:
\begin{equation*}
\begin{array}{rclrcl}
p(r) &=& z_1'(r) /z_1(r) \quad \quad & N_y &=& y^2 - n + 1 \cr %
\nu &=& n-2 & M_y &=& N_y^2/(2y) - \nu^2y/2 \cr %
E &=& \lambda_2-\lambda_1 & Q_y &=& 2y\lambda_1 + EN_yy^{-1} - 2E
\end{array}
\end{equation*}
We further define the function
\begin{equation}\label{EqT}
T(r,y) := -2 p(r) y - \frac{\nu y + N_y}{r} - E r.
\end{equation}
Then we can write (\ref{EqRic}) as
\begin{equation*}
q'(r) = T(r,q(r)).
\end{equation*}
For $r$ going to zero, $p$ is $\mathcal{O}(r)$ and thus
$$T(r,y) = \frac{1}{r}\left((\nu+1+y)(1-y)\right) + \mathcal{O}(r) \quad \textmd{for }y\textmd{ fixed}.$$
Consequently,
\begin{equation*}
\begin{array}{rcll}
\lim_{r\rightarrow 0} T(r,y) &=& +\infty \quad \quad & \textmd{for } 0 \le y < 1 \,\textmd{ fixed,}\cr%
\lim_{r\rightarrow 0} T(r,y) &=& 0 & \textmd{for }\, y = 1 \textmd{ and }\cr%
\lim_{r\rightarrow 0} T(r,y) &=& -\infty & \textmd{for }\, y > 1 \,\textmd{ fixed.}
\end{array}
\end{equation*}
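The expansion above rests on the factorization
$$\nu y + N_y = (n-2)y + y^2 - n + 1 = (y-1)(y+n-1) = -(1-y)(\nu+1+y),$$
which also makes the three limits immediate.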
For $r$ approaching $R_1$, the function $p(r)$ goes to minus infinity, while all other terms in (\ref{EqT}) are
bounded. Therefore
$$\lim_{r\rightarrow R_1} T(r,y) = +\infty \quad \textmd{ for } y > 0 \textmd{ fixed}.$$
The partial derivative of $T(r,y)$ with respect to $r$ is given by
\begin{equation} \label{EqTprime}
T' = \frac{\partial}{\partial r} T(r,y) = -2yp' + \frac{\nu y}{r^2} + \frac{N_y}{r^2}- E.
\end{equation}
In the points $(r,y)$ where $T(r,y) = 0$ we have, by (\ref{EqT}),
\begin{equation}\label{EqPAtTZero}
p|_{T=0} = -\frac {\nu}{2r}-\frac{N_y}{2yr}-\frac{Er}{2y}.
\end{equation}
From (\ref{EquN1}) we get the Riccati equation
\begin{equation}\label{EqRicp}
p'+p^2+\frac{\nu+1}{r}p+\lambda_1-\tilde V=0.
\end{equation}
Putting (\ref{EqPAtTZero}) into (\ref{EqRicp}) and the result into (\ref{EqTprime}) yields
\begin{equation}
T'|_{T=0} = \frac{M_y}{r^2} + \frac{E^2r^2}{2y}+Q_y-2y\tilde V.
\end{equation}
If we define the function
$$Z_y(r) := \frac{M_y}{r^2} + \frac{E^2r^2}{2y}+Q_y-2y\tilde V,$$
it is clear that $T'(r,y) = Z_y(r)$ for any $r,y$ with $T(r,y)=0$. The behavior of $Z_y(r)$ at $r=0$ is determined by
$M_y$. From the definition of $M_y$ we get
\begin{equation}\label{EqMMM}
yM_y = \frac 12(y^2-1)\cdot[(y-1)-(n-2)]\cdot[(y+1)+(n-2)].
\end{equation}
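Identity (\ref{EqMMM}) follows by factoring the difference of squares
$$yM_y = \tfrac12\bigl(N_y^2 - \nu^2 y^2\bigr) = \tfrac12\,(N_y - \nu y)(N_y + \nu y)$$
together with $N_y - \nu y = (y+1)\bigl(y-(n-1)\bigr)$ and $N_y + \nu y = (y-1)(y+n-1)$.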
This implies that
\begin{eqnarray*}
M_y &>& 0 \quad \textmd{for } 0<y<1,\\
M_1 &=& 0,
\end{eqnarray*}
and therefore
\begin{eqnarray*}
\lim_{r\rightarrow 0} Z_y(r) &=& \infty \quad \textmd{for } 0<y<1.
\end{eqnarray*}
\begin{Fact} \label{Fact2}
There is some $r_0>0$ such that $q(r) \le 1$ for $0<r<r_0$ and $q(r_0) < 1$.
\end{Fact}
\begin{proof}
Suppose the contrary, i.e., $q(r)$ first increases away from $r=0$. Then, because $q(0) = 1$ and $q(R_1) = 0$ and because
$q$ is continuous and differentiable, we can find two points $r_1 < r_2$ such that $\hat q := q(r_1) = q(r_2) > 1$ and
q'(r_1) > 0 > q'(r_2)$. Even more, we can choose $r_1$ and $r_2$ such that $\hat q$ is arbitrarily close to one.
Writing $\hat q = 1 + \epsilon$ with $\epsilon > 0$, we can calculate from the definition of $Q_y$ that
$$Q_{1+\epsilon} = Q_1 + \epsilon n\left(\lambda_2 - \left(1-2/n\right)\lambda_1\right) + {\mathcal O}(\epsilon^2).$$
The term in brackets can be estimated by
$$\lambda_2 - (1-2/n)\lambda_1 > \lambda_2 - \lambda_1 >0.$$
We can also assume that $Q_1 \ge 0$, because otherwise $q''(0) = \frac{2}{n^2}Q_1 < 0$ and Fact \ref{Fact2} is
immediately true. Thus, choosing $r_1$ and $r_2$ such that $\epsilon$ is sufficiently small, we can make sure that
$Q_{\hat q} > 0.$ We further note that, in view of (\ref{EqMMM}), the constant $M_{\hat q}$ can be positive or negative
(depending on $n$), but not zero because $1<\hat q<2$.
Now consider the function $T(r,\hat q)$. We have $T(r_1,\hat q) > 0 > T(r_2,\hat q)$ and the boundary behavior
$T(0,\hat q) = -\infty$ and $T(R_1,\hat q) = +\infty$. Thus $T(r,\hat q)$ changes its sign at least thrice on
$[0,R_1]$. Consequently, we can find three points $0 < \hat r_1 < \hat r_2 < \hat r_3 < R_1$ such that
\begin{equation}\label{EqZBeh}
Z_{\hat q}(\hat r_1) \ge 0, \quad Z_{\hat q}(\hat r_2) \le 0, \quad Z_{\hat q}(\hat r_3) \ge 0.
\end{equation}
Let us define
$$h(r) = \frac{E^2 r^2}{2 \hat q} - 2 \hat q\tilde V(r).$$
Then
\begin{equation} \label{EqZg}
Z_{\hat q}(r) = \frac{M_{\hat q}}{r^2} + Q_{\hat q} + h(r).
\end{equation}
By condition b) on $\tilde V$, the function $h'(r)$ is concave. Also $h(0) = h'(0) = 0$. We conclude that if $h'(r_0) < 0$ or
$h(r_0) < 0$ for some $r_0> 0$, then $h'(r)$ is negative and decreasing for all $r>r_0$. We will now show that $Z_{\hat
q}$ cannot have the properties (\ref{EqZBeh}), a contradiction that proves Fact \ref{Fact2}:
Case 1: Assume $M_{\hat q} > 0$. Then from $Z_{\hat q}(\hat r_2) \le 0$ we see that
$$-h(\hat r_2) \ge \frac{M_{\hat q}}{\hat r_2^2} + Q_{\hat q} > 0.$$
By what has been said above about $h(r)$, we conclude that $-h(r)$ is a strictly increasing function on $[\hat r_2,\hat
r_3]$. Therefore
$$-h(\hat r_3) > -h(\hat r_2) \ge \frac{M_{\hat q}}{\hat r_2^2} + Q_{\hat q} > \frac{M_{\hat q}}{\hat r_3^2} + Q_{\hat q},$$
such that $Z_{\hat q}(\hat r_3) < 0$, contradicting (\ref{EqZBeh}).
Case 2: Assume $M_{\hat q} < 0$. Then from $Z_{\hat q}(\hat r_1) \ge 0 \ge Z_{\hat q}(\hat r_2)$ it follows that $Z_{\hat
q}'(\hat r) \le 0$ for some $\hat r\in [\hat r_1,\hat r_2]$. In view of (\ref{EqZg}) we have $h'(\hat r) < 0$. But this
means by our above concavity argument that $h'(r)$ is decreasing and thus $h'(r) < 0$ for all $r>\hat r$. Then
$Z'_{\hat q}$ is strictly decreasing for $r\ge \hat r$. Together with $ Z_{\hat q}(\hat r_2) \le 0$ and $Z'_{\hat
q}(\hat r) \le 0$ this implies that $ Z_{\hat q}(\hat r_3) < 0$, a contradiction to (\ref{EqZBeh}).
\end{proof}
\begin{Fact} \label{Fact3}
For all $0\le r\le R_1$ the inequality $q'(r) \le 0$ holds.
\end{Fact}
\begin{proof}
Assume the contrary. Then there are three points $r_1<r_2<r_3$ in $(0,R_1)$ with $0 < \hat q := q(r_1) = q(r_2) =q(r_3)
< 1$ and $q'(r_1) < 0$, $q'(r_2)>0$, $q'(r_3) < 0$. Consider the function $T(r,\hat q)$, which is equal to $q'(r)$ at
$r_1,r_2,r_3$. Taking into account its boundary behavior at $r = 0$ and $r=R_1$, it is clear that $T(r,\hat q)$ must
have at least the sign changes positive-negative-positive-negative-positive. Thus $T(r,\hat q)$ has at least four zeros
$\hat r_1 < \hat r_2 < \hat r_3 <\hat r_4$ with the properties
$$Z_{\hat q}(\hat r_1) \le 0, \quad Z_{\hat q}(\hat r_2) \ge 0, \quad Z_{\hat q}(\hat r_3) \le 0, \quad Z_{\hat q}(\hat r_4) \ge 0.$$
We also know that $Z_{\hat q}(0) = +\infty$. To satisfy all these requirements, $Z_{\hat q}$ must either have at least
three extremal points where $Z_{\hat q}'$ crosses zero or $Z_{\hat q}$ must vanish on a finite interval. But we have
$$Z'_{\hat q}(r) = -\frac{2M_{\hat q}}{r^3} + \frac{E^2r}{\hat q} -2 \hat q\tilde V'(r),$$
which is a strictly concave function (recall $M_{\hat q} > 0$ for $0<\hat q<1$). A strictly concave function can only
cross zero twice and not be zero on a finite interval, which is a contradiction that proves Fact \ref{Fact3}.
\end{proof}
Altogether we have shown that $0 < q(r) < 1$ and $q'(r) \le 0$ for all $r\in[0,R_1]$, proving Lemma
\ref{LemmaMonotonicityOfgAndB}.
\end{proof}
\section{Proof of Theorem \ref{TheoremPPW}} \label{SectionProof}
\begin{proof}[Proof of Theorem \ref{TheoremPPW}]
We start from the basic gap inequality
\begin{equation} \label{EqGap}
\lambda_2(\Omega,V) - \lambda_1(\Omega,V) \le \frac{\int_\Omega |\nabla P|^2 u_1^2 \,\mathrm{d}^nr}{\int_\Omega P^2 u_1^2 \,\mathrm{d}^n
r},
\end{equation}
where $u_1$ is the first Dirichlet eigenfunction of $-\Delta + V$ on $\Omega$ and $P$ is a suitable test function that
satisfies the condition $\int_\Omega Pu_1^2 \,\mathrm{d}^n r = 0$. We set
\begin{equation}
P_i(\vec r) = g(r) \frac{r_i}{r} \quad \textmd{for } i = 1,2,...,n,
\end{equation}
where
\begin{equation}
g(r) = \left\{\begin{array}{ll} \frac{z_2(r)}{z_1(r)} & \textmd{for } r < R_1\cr %
\lim\limits_{t \uparrow R_1} g(t) & \textmd{for } r \ge R_1.\end{array} \right.
\end{equation}
Here $z_1$ and $z_2$ are the radial parts (both chosen positive) of the first two eigenfunctions of $-\Delta + \tilde V$ on
$S_1$. More precisely, $z_2(r) r_i r^{-1}$ for $i=1,\dots,n$ is a basis of the space of second eigenfunctions. It
follows from the convexity of $r\tilde V$ and the BGM inequality \cite{AB0, BGM} that the second eigenfunctions can be
written in that way.
According to an argument in \cite{AB2} one can always choose the origin of the coordinate system such that $\int_\Omega
P_iu_1^2 \,\mathrm{d}^n r = 0$ is satisfied for all $i$. Putting the functions $P_i$ into (\ref{EqGap}) and summing over all $i$
yields
\begin{equation} \label{EqDifLambda}
\lambda_2(\Omega,V) - \lambda_1(\Omega,V) \le \frac{\int_\Omega B(r) u_1^2 \,\mathrm{d}^nr}{\int_\Omega g(r)^2 u_1^2 \,\mathrm{d}^nr}
\end{equation}
with
\begin{equation*}
B(r) = g'(r)^2 + (n-1)\frac{g(r)^2}{r^2}.
\end{equation*}
By Lemma \ref{LemmaMonotonicityOfgAndB} we know that $B$ is a decreasing and $g$ an increasing function of $r$. Thus,
denoting by $u_1^\star$ the spherically decreasing rearrangement of $u_1$ with respect to the origin, we have
\begin{eqnarray} \label{EqChain1}
\int_\Omega B(r) u_1^2 \,\mathrm{d}^nr &\le& \int_{\Omega^\star} B^\star(r)\, {u_1^\star}^2 \,\mathrm{d}^nr\\
&\le& \int_{\Omega^\star} B(r) \,{u_1^\star}^2 \,\mathrm{d}^nr \le \int_{S_1} B(r)\, z_1^2 \,\mathrm{d}^nr \nonumber
\end{eqnarray}
and
\begin{eqnarray} \label{EqChain2}
\int_\Omega g(r)^2 u_1^2 \,\mathrm{d}^nr &\ge& \int_{\Omega^\star} g_\star(r)^2\, {u_1^\star}^2 \,\mathrm{d}^nr\\
&\ge& \int_{\Omega^\star} g(r)^2 \,{u_1^\star}^2 \,\mathrm{d}^nr \ge \int_{S_1} g(r)^2\, z_1^2 \,\mathrm{d}^nr \nonumber
\end{eqnarray}
In each of the above chains of inequalities the first step follows from general properties of rearrangements and the
second from the monotonicity properties of $g$ and $B$. The third step is justified by a comparison result that we
state below and the monotonicity of $g$ and $B$ again. Putting (\ref{EqChain1}) and (\ref{EqChain2}) into
(\ref{EqDifLambda}) we get
$$\lambda_2(\Omega,V) - \lambda_1(\Omega,V) \le \frac{\int_{S_1} B(r)\, z_1^2 \,\mathrm{d}^nr}{\int_{S_1} g(r)^2\, z_1^2 \,\mathrm{d}^nr} =
\lambda_2(S_1,\tilde V) - \lambda_1(S_1,\tilde V).$$%
Keeping in mind that $\lambda_1(\Omega,V) = \lambda_1(S_1,\tilde V)$, Theorem \ref{TheoremPPW} is proven by this last
inequality.
\end{proof}
\begin{Lemma}[Chiti Comparison result] \label{LemmaChiti}
Let $u_1^\star$ be the radially decreasing rearrangement of the first eigenfunction of $-\Delta + V$ on $\Omega$ and
$z_1$ the first eigenfunction of $-\Delta + \tilde V$ on $S_1$. Assume both functions to be positive and normalized in
$L^2(\Omega^\star)$. Then there exists an $r_0$ such that
\begin{eqnarray*}
u_1^\star(r) &\le& z_1(r) \quad \textmd{for } r \le r_0 \textmd{ and}\\
u_1^\star(r) &\ge& z_1(r) \quad \textmd{for } r_0 < r \le R_1.
\end{eqnarray*}
\end{Lemma}
\begin{proof}
By a version of the RFK inequality for Schr\"odinger operators \cite{L} and by domain monotonicity of the first
eigenvalue it is clear that $S_1 \subset \Omega^\star$. This is why we can view $z_1(r)$ as a function in
$L^2(\Omega^\star)$, setting $z_1(r) = 0$ for $r > R_1$.
Both $u_1^\star$ and $z_1$ are positive and spherically symmetric. Moreover, $u_1^\star(r)$ and $z_1(r)$ are decreasing
functions of $r$. For $u_1^\star$ this is clear by definition of the rearrangement. For $z_1$ it follows from a simple
comparison argument using $z_1^\star$ as a test function in the Rayleigh quotient for $\lambda_1$. (Here and in the
sequel we write short-hand $\lambda_1 = \lambda_1(\Omega,V) = \lambda_1(S_1,\tilde V)$.)
We introduce a change of variables via $s = C_n r^n$ and write $u_1^\#(s) \equiv u_1^\star(r)$, $z_1^\#(s) \equiv
z_1(r)$ and $\tilde V_\#(s) \equiv \tilde V(r)$.
\begin{Fact} \label{FactDE}
For the functions $u_1^\#(s)$ and $z_1^\#(s)$ we have
\begin{eqnarray}
-\frac{\,\mathrm{d} u_1^\#}{\,\mathrm{d} s} &\le& n^{-2}C_n^{-2/n}s^{2/n-2} \int_0^s (\lambda_1 - \tilde V_\#(w))\, u_1^\#(w) \,\mathrm{d} w, \label{EqFact1}\\
-\frac{\,\mathrm{d} z_1^\#}{\,\mathrm{d} s} &=& n^{-2}C_n^{-2/n}s^{2/n-2} \int_0^s (\lambda_1 - \tilde V_\#(w))\, z_1^\#(w) \,\mathrm{d} w.
\label{EqFact2}
\end{eqnarray}
\end{Fact}
\begin{proof}
We integrate both sides of $-\Delta u_1 + Vu_1 = \lambda_1 u_1$ over the level set $\Omega_t := \{\vec r\in \Omega:
u_1(\vec r) > t\}$ and use Gauss' Divergence Theorem to obtain
\begin{equation} \label{EqHH}
\int_{\partial\Omega_t} |\nabla u_1| H_{n-1}(\,\mathrm{d} r) = \int_{\Omega_t} (\lambda_1-V(\vec r))\, u_1(\vec r) \,\mathrm{d}^n r,
\end{equation}
where $\partial\Omega_t = \{\vec r\in \Omega: u_1(\vec r) = t\}$. Now we define the distribution function $\mu(t) =
|\Omega_t|$. Using the coarea formula, the Cauchy-Schwarz inequality and the classical isoperimetric inequality,
Talenti derives (\cite{T}, p.709, eq. (32))
\begin{equation} \label{EqHH2}
\int_{\partial\Omega_t} |\nabla u_1| H_{n-1}(\,\mathrm{d} r) \ge - n^2C_n^{2/n} \frac{\mu(t)^{2-2/n}}{\mu'(t)}.
\end{equation}
The left sides of (\ref{EqHH}) and (\ref{EqHH2}) are the same, thus
\begin{eqnarray*}
- n^2C_n^{2/n} \frac{\mu(t)^{2-2/n}}{\mu'(t)} &\le& \int_{\Omega_t} (\lambda_1-V(\vec r))\,u_1(\vec r) \,\mathrm{d}^n r\\
&\le& \int_{\Omega_t^\star} (\lambda_1 - V_\star(\vec r))\, u_1^\star(\vec r) \,\mathrm{d}^n r\\
&\le& \int_{\Omega_t^\star} (\lambda_1 - \tilde V(\vec r))\, u_1^\star(\vec r) \,\mathrm{d}^n r\\
&=& \int_0^{(\mu(t)/C_n)^{1/n}} n C_n r^{n-1} (\lambda_1 - \tilde V(r)) u_1^\star(r) \,\mathrm{d} r.\\
\end{eqnarray*}
Now we perform the change of variables $r \rightarrow s$ on the right hand side of the above chain of inequalities. We
also choose $t$ to be $u_1^\#(s)$. Using the fact that $u_1^\#$ and $\mu$ are essentially inverse functions to one
another, this means that $\mu(t) = s$ and $\mu'(t)^{-1} = (u_1^\#)'(s)$. The result is (\ref{EqFact1}). Equation
(\ref{EqFact2}) is proven analogously.
\end{proof}
Fact \ref{FactDE} enables us to prove Lemma \ref{LemmaChiti}. We have $u_1^\#(|S_1|) > z_1^\#(|S_1|) = 0$. Being
equally normalized, $u_1^\star$ and $z_1$ must have at least one intersection on $[0,R_1]$. Thus $u_1^\#$ and $z_1^\#$
have at least one intersection on $[0,|S_1|]$. Now assume that they intersect at least twice. Then there is an interval
$[s_1,s_2] \subset [0,|S_1|]$ such that $u_1^\#(s) > z_1^\#(s)$ for $s \in (s_1,s_2)$, $u_1^\#(s_2) = z_1^\#(s_2)$ and
either $u_1^\#(s_1) = z_1^\#(s_1)$ or $s_1=0$. There is also an interval $[s_3,s_4] \subset [s_2,|S_1|]$ with
$u_1^\#(s) < z_1^\#(s)$ for $s \in (s_3,s_4)$, $u_1^\#(s_3) = z_1^\#(s_3)$ and $u_1^\#(s_4) = z_1^\#(s_4)$. Let further
$\tilde s$ be the point where $\tilde V_\#(s) - \lambda_1(S_1,\tilde V)$ crosses zero (set $\tilde s = |S_1|$ if $\tilde V_\#(s) -
\lambda_1$ doesn't cross zero on $[0,|S_1|]$). To keep our notation compact we will write
$$I_a^b[u] = \int_a^b (\lambda_1-\tilde V_\#(w))\,u(w)\,\mathrm{d} w.$$
\emph{Case 1:} Assume $\tilde s \ge s_2$. Then $\tilde V_\#(s) - \lambda_1(S_1,\tilde V)$ is negative for $s<s_2$. Set
\begin{equation*}
v(s) = \left\{ \begin{array}{ll} u_1^\#(s) & \textmd{on } [0,s_1] \textmd{ if } I_0^{s_1}[u_1^\#] > I_0^{s_1}[z_1^\#],\cr%
z_1^\#(s) & \textmd{on } [0,s_1] \textmd{ if } I_0^{s_1}[u_1^\#] \le I_0^{s_1}[z_1^\#],\cr %
u_1^\#(s) & \textmd{on } [s_1,s_2]\cr %
z_1^\#(s) & \textmd{on } [s_2,|S_1|]\cr %
\end{array}\right.
\end{equation*}
Using Fact \ref{FactDE}, one can check that then $v(s)$ fulfills the inequality
\begin{equation} \label{Eqv}
-\frac{\,\mathrm{d} v}{\,\mathrm{d} s} \le n^{-2}C_n^{-2/n}s^{2/n-2} \int_0^s (\lambda_1 - \tilde V_\#(w)) v(w) \,\mathrm{d} w.
\end{equation}
\emph{Case 2:} Assume $\tilde s < s_2$. Then $\tilde V_\#(s) - \lambda_1(S_1,\tilde V)$ is positive for $s\ge s_3$. Set
\begin{equation*}
v(s) = \left\{ \begin{array}{ll} u_1^\#(s) & \textmd{on } [0,s_3] \textmd{ if } I_0^{s_3}[u_1^\#] > I_0^{s_3}[z_1^\#],\cr%
z_1^\#(s) & \textmd{on } [0,s_3] \textmd{ if } I_0^{s_3}[u_1^\#] \le I_0^{s_3}[z_1^\#],\cr %
u_1^\#(s) & \textmd{on } [s_3,s_4]\cr %
z_1^\#(s) & \textmd{on } [s_4,|S_1|]\cr %
\end{array}\right.
\end{equation*}
Again using Fact \ref{FactDE}, one can check that also in this case $v(s)$ fulfills the inequality (\ref{Eqv}).
Now define the test function
$$\Psi(\vec r) = v(C_nr^n) = v(s).$$
Then we use the Rayleigh characterization of $\lambda_1$, equation (\ref{Eqv}) and integration by parts to calculate
\begin{eqnarray*}
\lambda_1 {\int_{S_1} \Psi(\vec r)^2 \,\mathrm{d}^n r} &<& \int_{S_1} \left(|\nabla \Psi|^2 + \tilde V(\vec r) \Psi^2\right) \,\mathrm{d}^n r\\
&=& \int_0^{|S_1|} \left( v'(s)^2 n^2 s^{2-2/n} C_n^{2/n} + \tilde V_\#(s) v^2(s)\right) \,\mathrm{d} s\\
&\le& \int_0^{|S_1|} \left(-v'(s) \int_0^s (\lambda_1-\tilde V_\#(w)) v(w) \,\mathrm{d} w + \tilde V_\#(s) v^2(s)\right) \,\mathrm{d} s\\
&=& \int_0^{|S_1|} \left(v(s) (\lambda_1 - \tilde V_\#(s)) v(s) + \tilde V_\#(s) v^2(s) \right) \,\mathrm{d} s\\
&=& \lambda_1 {\int_{S_1} \Psi(\vec r)^2 \,\mathrm{d}^n r}.
\end{eqnarray*}
This contradicts our original assumption that $u_1^\#(s)$ and $z_1^\#(s)$ have more than one intersection,
thus proving Lemma \ref{LemmaChiti}.
\end{proof}
\section{Proof of Theorem \ref{TheoremMonotonicity}} \label{SectionProof2}
\begin{proof}[Proof of Theorem \ref{TheoremMonotonicity}]
The first eigenfunction of $-\Delta + V$ on $B_R$ is radially symmetric and will be called $z_1(r)$. Further, a
standard separation of variables and the Baumgartner-Grosse-Martin \cite{BGM, AB3} inequality imply that we can write a
basis of the space of second eigenfunctions in the form $z_2(r) \cdot r_i \cdot r^{-1}$. The radial parts $z_1$ and
$z_2$ of the first and the second eigenfunction, which we assume to be positive, solve the differential equations
\begin{eqnarray}
-z''_1(r) - \frac{n-1}{r} z'_1(r) + \left(V(r) -\lambda_1\right) z_1(r) &=& 0, \label{EqDEs}\\
-z''_2(r) - \frac{n-1}{r} z'_2(r) + \left(\frac{n-1}{r^2} + V(r)- \lambda_2\right) z_2(r) &=& 0 \nonumber
\end{eqnarray}
with the boundary conditions
\begin{equation}\label{EquM3}
z'_1(0) = 0, \quad z_1(R) = 0, \quad z_2(0) = 0, \quad z_2(R) = 0.
\end{equation}
We define the rescaled functions $\tilde z_{1/2}(r) = z_{1/2}(\beta r)$. Putting $\beta r$ (with $\beta>0$) instead of
$r$ into the equations (\ref{EqDEs}) and multiplying by $\beta^2$ yields the rescaled equations
\begin{eqnarray*}
-\tilde z''_1(r) - \frac{n-1}{r} \tilde z'_1(r) + \left(\beta^2 V(\beta r) -\beta^2 \lambda_1\right) \tilde z_1(r) &=& 0, \nonumber\\
-\tilde z''_2(r) - \frac{n-1}{r} \tilde z'_2(r) + \left(\frac{n-1}{r^2} + \beta^2 V(\beta r)- \beta^2 \lambda_2\right)
\tilde z_2(r) &=& 0. \nonumber
\end{eqnarray*}
We conclude that $\tilde z_1$ and $\tilde z_2$ are the radial parts of the first two eigenfunctions of $-\Delta +
\beta^2 V(\beta r)$ on $B_{R/\beta}$ to the eigenvalues $\beta^2 \lambda_1$ and $\beta^2\lambda_2$. Consequently, if we
replace $R$ by $R/\beta$ and $V(r)$ by $\beta^2 V(\beta r)$, then the ratio $\lambda_2/\lambda_1$ doesn't change.
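In other words, for $i = 1,2$ we have the scaling identity
$$\lambda_i\bigl(B_{R/\beta},\; \beta^2 V(\beta\,\cdot)\bigr) = \beta^2\,\lambda_i\bigl(B_R, V\bigr),$$
so the quotient $\lambda_2/\lambda_1$ is invariant under this rescaling.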
For the rest of this section we shall write $\lambda_{1/2}(R,V)$ instead of $\lambda_{1/2}(B_R,V)$. We also fix two
radii $0 < R_1 < R_2$ and let $\rho(\beta)$ for $\beta > 1$ be the function defined implicitly by
\begin{equation} \label{EqLam}
\lambda_1(\rho(\beta), V(r)) = \lambda_1(R_2/\beta, \beta^2 V(\beta r)).
\end{equation}
Then we have $\rho(1) = R_2$. By domain monotonicity of $\lambda_1$ and because $V(r)$ is increasing and positive we
see that the right hand side of (\ref{EqLam}) is increasing in $\beta$. Therefore, again by domain monotonicity,
$\rho(\beta)$ must be decreasing in $\beta$. One can also check that $\rho(\beta)$ is a continuous function and that
$\rho(\beta)$ goes to zero for $\beta \rightarrow \infty$. Thus we can find $\beta_0 > 1$ such that $\rho(\beta_0) =
R_1$. Then we can apply Theorem \ref{TheoremPPW}, with $B_{R_2/\beta_0}$ for $\Omega$ and $B_{\rho(\beta_0)}$ for
$S_1$, as well as $\beta_0^2 V(\beta_0 r)$ for $V$ and $V(r)$ for $\tilde V$, to get
\begin{equation} \label{EqLam2}
\lambda_2(R_2/\beta_0, \beta_0^2 V(\beta_0 r)) \le \lambda_2(\rho(\beta_0), V(r)) = \lambda_2(R_1, V(r)).
\end{equation}
But by what has been said above about the scaling properties of the problem, we have
\begin{equation} \label{EqLam3}
\frac{\lambda_2(R_2/\beta_0, \beta_0^2 V(\beta_0 r))}{\lambda_1(R_2/\beta_0, \beta_0^2 V(\beta_0 r))} =
\frac{\lambda_2(R_2, V(r))}{\lambda_1(R_2, V(r))}.
\end{equation}
Combining (\ref{EqLam}) for $\beta=\beta_0$, (\ref{EqLam2}) and (\ref{EqLam3}), we get
\begin{equation} \label{EqLam4}
\frac{\lambda_2(R_1,V(r))}{\lambda_1(R_1, V(r))} \ge \frac{\lambda_2(R_2, V(r))}{\lambda_1(R_2, V(r))}.
\end{equation}
Because $R_1$ and $R_2$ were chosen arbitrarily, this proves Theorem \ref{TheoremMonotonicity}.
\end{proof}
\section{Proof of Theorem \ref{TheoremMonotonicity2}} \label{SectionProof3}
Before we prove Theorem \ref{TheoremMonotonicity2} we need to state the following technical Lemma:
\begin{Lemma}\label{LemmaAlgebraic}
Let $a,b,c,d > 0$ with $a\ge b$, $d\ge b$ and $\frac a b < \frac cd$. Then
\begin{equation}
\frac{a+x}{b+x} < \frac{c+x}{d+x}
\end{equation}
holds for any $x>0$.
\end{Lemma}
\begin{proof}
Define the function
$$f(x) := \frac{c+x}{d+x} - \frac{a+x}{b+x}.$$
Then $f(0) > 0$. A straightforward calculation shows that $f$ has exactly one zero, at
$$x_0 = -\frac{bc-ad}{b+c-a-d}.$$
The numerator $bc-ad$ in the expression for $x_0$ is positive because of the condition $\frac a b < \frac cd$. For the
denominator we get
$$b+c-a-d > c+b-\frac{bc}{d}-d = \frac{(d-b)(c-d)}{d} \ge 0.$$
This means that $x_0<0$, such that $f(x)>0$ for all $x>0$.
\end{proof}
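As a sanity check (our illustration, with sample values not taken from the text), both the hypothesis bookkeeping and the conclusion of Lemma \ref{LemmaAlgebraic} can be verified numerically:

```python
# Sample values chosen to satisfy the hypotheses of the lemma:
# a >= b, d >= b and a/b < c/d.
a, b, c, d = 3.0, 2.0, 5.0, 2.5
assert a >= b and d >= b and a / b < c / d

# The unique zero x0 of f(x) = (c+x)/(d+x) - (a+x)/(b+x) is negative...
x0 = -(b * c - a * d) / (b + c - a - d)
assert x0 < 0  # here x0 = -2.5/1.5

# ...so the claimed inequality holds for every x > 0.
for x in [0.1, 1.0, 10.0, 1000.0]:
    assert (a + x) / (b + x) < (c + x) / (d + x)
print("lemma verified for sample values")
```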
\begin{proof}[Proof of Theorem \ref{TheoremMonotonicity2}]
Choose some $x > 0$. From Theorem \ref{TheoremMonotonicity} we know that
$$\frac{\lambda_2(B_{R+x},r^2)}{\lambda_1(B_{R+x},r^2)} < \frac{\lambda_2(B_{R},r^2)}{\lambda_1(B_{R},r^2)} \quad \textmd{for }x>0.$$ %
Moreover, $\lambda_1(B_R,r^2) \ge \lambda_1(B_{R+x},r^2)$ and $\lambda_2(B_{R+x},r^2) > \lambda_1(B_{R+x},r^2)$. Thus
we can apply first (\ref{EqCon}), then Lemma \ref{LemmaAlgebraic} and then (\ref{EqCon}) again, to get
\begin{equation*}
\frac{\lambda^+_2(B_{R+x})}{\lambda^+_1(B_{R+x})} = \frac{\lambda_2(B_{R+x},r^2) + n}{\lambda_1(B_{R+x},r^2) + n}
< \frac{\lambda_2(B_{R},r^2) + n}{\lambda_1(B_{R},r^2) + n} = \frac{\lambda^+_2(B_{R})}{\lambda^+_1(B_{R})}.%
\end{equation*}
\end{proof}
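Although the argument above is analytic, the monotonicity of $\lambda_2/\lambda_1$ for $V(r)=r^2$ can be illustrated numerically. The sketch below (illustrative only; one space dimension, standard second-order finite differences, grid sizes chosen arbitrarily) discretizes $-u''+x^2u$ on $(-R,R)$ with Dirichlet conditions and checks that the ratio decreases as $R$ grows, from near $4$ (pure Laplacian limit) toward $3$ (harmonic oscillator limit).

```python
import numpy as np

def dirichlet_ratio(R, N=600):
    """lambda_2/lambda_1 for -u'' + x^2 u on (-R, R) with Dirichlet
    boundary conditions, via a second-order finite-difference matrix."""
    x = np.linspace(-R, R, N + 2)[1:-1]      # interior grid points
    h = x[1] - x[0]
    main = 2.0 / h**2 + x**2                 # diagonal: Laplacian + potential
    off = -np.ones(N - 1) / h**2             # off-diagonal of the Laplacian
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    lam = np.linalg.eigvalsh(H)              # ascending eigenvalues
    return lam[1] / lam[0]

r_small = dirichlet_ratio(0.5)   # close to the Laplacian ratio 4
r_large = dirichlet_ratio(4.0)   # close to the oscillator ratio 3
assert r_small > r_large         # the ratio decreases with R
```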
| {
"timestamp": "2005-11-09T23:22:23",
"yymm": "0511",
"arxiv_id": "math-ph/0511032",
"language": "en",
"url": "https://arxiv.org/abs/math-ph/0511032",
"abstract": "Let $\\lambda_i(\\Omega,V)$ be the $i$th eigenvalue of the Schrödinger operator with Dirichlet boundary conditions on a bounded domain $\\Omega \\subset \\R^n$ and with the positive potential $V$. Following the spirit of the Payne-Pólya-Weinberger conjecture and under some convexity assumptions on the spherically rearranged potential $V_\\star$, we prove that $\\lambda_2(\\Omega,V) \\le \\lambda_2(S_1,V_\\star)$. Here $S_1$ denotes the ball, centered at the origin, that satisfies the condition $\\lambda_1(\\Omega,V) = \\lambda_1(S_1,V_\\star)$.Further we prove under the same convexity assumptions on a spherically symmetric potential $V$, that $\\lambda_2(B_R, V) / \\lambda_1(B_R, V)$ decreases when the radius $R$ of the ball $B_R$ increases.We conclude with several results about the first two eigenvalues of the Laplace operator with respect to a measure of Gaussian or inverted Gaussian density.",
"subjects": "Mathematical Physics (math-ph)",
"title": "A second eigenvalue bound for the Dirichlet Schroedinger operator",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986777179025415,
"lm_q2_score": 0.8128673110375458,
"lm_q1q2_score": 0.802118912107604
} |
https://arxiv.org/abs/1908.09375 | Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization | While deep learning is successful in a number of applications, it is not yet well understood theoretically. A satisfactory theoretical characterization of deep learning however, is beginning to emerge. It covers the following questions: 1) representation power of deep networks 2) optimization of the empirical risk 3) generalization properties of gradient descent techniques --- why the expected error does not suffer, despite the absence of explicit regularization, when the networks are overparametrized? In this review we discuss recent advances in the three areas. In approximation theory both shallow and deep networks have been shown to approximate any continuous functions on a bounded domain at the expense of an exponential number of parameters (exponential in the dimensionality of the function). However, for a subset of compositional functions, deep networks of the convolutional type can have a linear dependence on dimensionality, unlike shallow networks. In optimization we discuss the loss landscape for the exponential loss function and show that stochastic gradient descent will find with high probability the global minima. To address the question of generalization for classification tasks, we use classical uniform convergence results to justify minimizing a surrogate exponential-type loss function under a unit norm constraint on the weight matrix at each layer -- since the interesting variables for classification are the weight directions rather than the weights. Our approach, which is supported by several independent new results, offers a solution to the puzzle about generalization performance of deep overparametrized ReLU networks, uncovering the origin of the underlying hidden complexity control. | \section{Introduction}
\dropcap{I}n the last few years, deep learning has been tremendously
successful in many important applications of machine
learning. However, our theoretical understanding of deep
learning, and thus the ability of developing principled
improvements, has lagged behind. A satisfactory theoretical
characterization of deep learning is emerging. It covers the
following areas: 1) {\it approximation} properties of deep
networks 2) {\it optimization} of the empirical risk 3) {\it
generalization} properties of gradient descent techniques
-- why the expected error does not suffer, despite the
absence of explicit regularization, when the networks are
overparametrized?
\subsection{When Can Deep Networks Avoid the Curse of Dimensionality?}
We start with the first set of questions,
summarizing results in \cite{HierarchicalKernels2015,Hierarchical2015,poggio2015December},
and \cite{Mhaskaretal2016,MhaskarPoggio2016}. The main result is that deep networks have
the theoretical guarantee, which shallow networks do not have, that
they can avoid the {\it curse of dimensionality} for an important class of problems,
corresponding to {\it compositional functions}, that is functions of
functions. An especially interesting subset of such compositional
functions are {\it hierarchically local compositional
functions} where all the constituent functions are local in the
sense of bounded small dimensionality. The deep networks that can
approximate them without the curse of dimensionality are of the deep
convolutional type -- though, importantly, weight sharing is not necessary.
Implications of the theorems likely to be relevant in practice are:
a) {\it Deep convolutional architectures} have the theoretical
guarantee that they can be {\it much better} than one layer architectures such
as kernel machines for certain classes of problems;
b) the problems for which certain deep networks are guaranteed to avoid
the {\it curse of dimensionality} (see for a nice review
\cite{Donoho00high-dimensionaldata}) correspond to input-output
mappings that are {\it compositional with local constituent
functions};
c) the key aspect of convolutional networks that can give them an
exponential advantage is {\it not weight sharing} but {\it locality} at each
level of the hierarchy.
\subsection{Related Work}
Several papers in the '80s focused on the approximation power and
learning properties of one-hidden layer networks (called shallow
networks here). Very little appeared on multilayer networks (but see
\cite{mhaskar1993approx, mhaskar1993neural, chui1994neural, chui1996, Pinkus1999}).
By now, several papers \cite{poggio03mathematics,MontufarBengio2014,
DBLP:journals/corr/abs-1304-7045} have appeared. \cite{Anselmi2014,anselmi2015theoretical,poggioetal2015,
LiaoPoggio2016, Mhaskaretal2016} derive new upper bounds for
the approximation by deep networks of certain important classes of
functions which avoid the curse of dimensionality. The upper bound for
the approximation by shallow networks of general functions was well
known to be exponential. It seems natural to assume that, since there
is no general way for shallow networks to exploit a compositional
prior, lower bounds for the approximation by shallow networks of
compositional functions should also be exponential. In fact, examples
of specific functions that cannot be represented efficiently by
shallow networks have been given, for instance in
\cite{Telgarsky2015, SafranShamir2016, Theory_I}. An interesting
review of approximation of univariate functions by deep networks has
recently appeared \cite{2019arXiv190502199D}.
\begin{figure}
\centering
\includegraphics[trim=10 33 30 73, width=0.9\linewidth,clip]{Figures/Example_2_functions.pdf}
\caption{The top graphs are associated to {\it functions}; each of the
bottom diagrams depicts the ideal {\it network} approximating the
function above. In a) a shallow universal network in 8 variables and
$N$ units approximates a generic function of $8$ variables
$f(x_1, \cdots, x_8)$. Inset b) shows a hierarchical
network at the bottom in $n=8$ variables, which approximates well
functions of the form
$f(x_1, \cdots, x_8) = h_3(h_{21}(h_{11} (x_1, x_2), h_{12}(x_3,
x_4)), \allowbreak h_{22}(h_{13}(x_5, x_6), h_{14}(x_7, x_8))) $ as
represented by the binary graph above. In the approximating network
each of the $n-1$ nodes in the graph of the function corresponds to
a set of $Q =\frac{N}{n-1}$ ReLU units computing the ridge function
$\sum_{i=1}^Q a_i(\scal{\mathbf{v}_i}{\mathbf{x}}+t_i)_+$, with
$\mathbf{v}_i, \mathbf{x} \in \R^2$, $a_i, t_i\in\R$. Each term in
the ridge function corresponds to a unit in the node (this is
somewhat different from today's deep networks, but equivalent to
them \cite{Theory_I}). Similar to the shallow network, a hierarchical network is
universal, that is, it can approximate any continuous function; the
text proves that it can approximate compositional functions
exponentially better than a shallow network. Redrawn from
\cite{MhaskarPoggio2016}.
}
\label{example_functions}
\end{figure}
\subsection{Degree of approximation}\label{approxsect}
The general paradigm is as follows. We are interested in determining
how complex a network ought to be to {\it theoretically guarantee}
approximation of an unknown target function $f$ up to a given accuracy
$\epsilon>0$. To measure the accuracy, we need a norm $\|\cdot\|$ on
some normed linear space $\mathbb{X}$. As we will see, the norm used in
the results of this paper is the $\sup$ norm, in keeping with the
standard choice in approximation theory. As it turns out, the results
of this section require the $\sup$ norm in order to be independent of
the unknown distribution of the input data.
Let $V_N$ be the set of all networks of a given kind with $N$ units
(which we take as our measure of the complexity of the approximant
network). The \textit{degree of approximation} is defined by
$\mathsf{dist}(f, V_N)=\inf_{P\in V_N}\|f-P\|.$ For example, if
$\mathsf{dist}(f, V_N)=\O(N^{-\gamma})$ for some $\gamma>0$, then a
network with complexity $N=\O(\epsilon^{-\frac{1}{\gamma}})$ will be
sufficient to guarantee an approximation with accuracy at least
$\epsilon$. The only a priori information on the class of target
functions $f$, is codified by the statement that $f\in W$ for some
subspace $W\subseteq \mathbb{X}$. This subspace is a smoothness and compositional class,
characterized by the parameters $m$ and $d$ ($d=2$ in the example of
Figure \ref{example_functions}; it is the size of the kernel
in a convolutional network).
\subsection{Shallow and deep networks}
\label{subprevious}
This section characterizes conditions under which deep networks are
``better'' than shallow networks in approximating functions. Thus we
compare shallow (one-hidden layer) networks with deep networks as
shown in Figure \ref{example_functions}. Both types of networks use
the same small set of operations -- dot products, linear combinations,
a fixed nonlinear function of one variable, possibly convolution and
pooling. Each node in the networks corresponds to
a node in the graph of the function to be approximated, as shown in
the Figure. A unit is a neuron which computes
$(\scal{x}{w}+b)_+$, where $w$ is the vector of weights on the vector input
$x$. Both $w$ and the real number $b$ are parameters tuned by
learning. We assume here that each node in the networks computes the
linear combination of $r$ such units $\sum_{i=1}^r c_i
(\scal{x}{w_i}+b_i)_+$. Notice that in our main example of a network corresponding to a
function with a binary tree graph, the resulting architecture is an
idealized version of deep convolutional neural networks described in
the literature. In particular, it has only one output at the top
unlike most of the deep architectures with many channels and many
top-level outputs. Correspondingly, each node computes a single value
instead of multiple channels, using the combination of several
units. However our results hold also for these more
complex networks (see \cite{Theory_I}).
\noindent
The sequence of results is as follows.
\begin{itemize}
\item {\it Both shallow (a) and deep (b) networks are universal}, that
  is, they can approximate arbitrarily well any continuous function of
$n$ variables on a compact domain. The result for shallow networks
is classical.
\item We consider a special class of functions of $n$ variables on a
compact domain that are {\it hierarchical compositions of local
functions}, such as
$f(x_1, \cdots, x_8) = h_3(h_{21}(h_{11} (x_1, x_2), h_{12}(x_3,
x_4)), \allowbreak h_{22}(h_{13}(x_5, x_6), h_{14}(x_7, x_8))) $
\noindent The structure of the function in Figure \ref{example_functions} b)
is represented by a graph of the binary tree type, reflecting dimensionality $d=2$ for the constituent functions $h$. In
general, $d$ is arbitrary but fixed and independent of the
dimensionality $n$ of the compositional function $f$.
\cite{Theory_I} formalizes the more general compositional case using
directed acyclic graphs.
\item The approximation of functions with a {\it compositional
  structure} can be achieved with the same degree of accuracy by
  deep and shallow networks, but the number of parameters is much
  smaller for the deep networks than for the shallow network with
  equivalent approximation accuracy.
\end{itemize}
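The binary-tree compositional structure used above can be made concrete in a few lines; the constituent functions below are hypothetical, chosen only to illustrate that each node sees $d=2$ inputs while $f$ depends on all $n=8$ variables.

```python
# Hypothetical constituent functions of two variables (d = 2);
# any smooth h's would do -- these are for illustration only.
h11 = h12 = h13 = h14 = lambda u, v: u * v
h21 = h22 = lambda u, v: u + v
h3 = lambda u, v: max(u, v)

def f(x1, x2, x3, x4, x5, x6, x7, x8):
    """Hierarchically local compositional function with a binary-tree
    graph: each node depends on only d = 2 inputs, yet f depends on all
    n = 8 variables."""
    return h3(h21(h11(x1, x2), h12(x3, x4)),
              h22(h13(x5, x6), h14(x7, x8)))

assert f(1, 2, 3, 4, 5, 6, 7, 8) == max(1*2 + 3*4, 5*6 + 7*8)
```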
We approximate functions with networks in which the activation
nonlinearity is a smoothed version of the so-called ReLU, originally
called {\it ramp} by Breiman and given by
$\sigma (x) = x_+ = \max(0, x)$. The architecture of the deep
networks reflects the function graph with each node $h_i$ being a
ridge function, comprising one or more neurons.
Let $I^n=[-1,1]^n$, $\mathbb{X}=C(I^n)$ be the space of all continuous
functions on $I^n$, with $\|f\|=\max_{x\in I^n}|f(x)|$.
Let
$\mathcal{S}_{N,n}$ denote the class of all shallow networks with $N$
units of the form
$$
x\mapsto\sum_{k=1}^N a_k\sigma(\scal{{w}_k}{x}+b_k),
$$
where ${w}_k\in\R^n$, $b_k, a_k\in\R$. The number of trainable
parameters here is $(n+2)N \sim nN$. Let $m\ge 1$ be an integer, and
$W_m^n$ be the set of all functions of $n$ variables with continuous
partial derivatives of orders up to $m < \infty$ such that $\|f\|+\sum_{1\le
|\k|_1\le m} \|D^\k f\| \le 1$, where $D^\k$ denotes the partial
derivative indicated by the multi-integer $\k\ge 1$, and $|\k|_1$ is
the sum of the components of $\k$.
For the hierarchical binary tree network, the analogous spaces are
defined by considering the compact set $W_m^{n,2}$ to be the class of
all compositional functions $f$ of $n$ variables with a binary tree
architecture and constituent functions $h$ in $W_m ^2$. We define the
corresponding class of deep networks $\mathcal{D}_{N,2}$ to be the set
of all deep networks with a binary tree architecture, where each of
the constituent nodes is in $\mathcal{S}_{M,2}$, where $N=|V|M$, $V$
being the set of non-leaf vertices of the tree. We note that in the
case when $n$ is an integer power of $2$, the total number of
parameters involved in a deep network in $\mathcal{D}_{N,2}$ is $4N$.
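The $4N$ count follows from simple bookkeeping: each unit of a node in $\mathcal{S}_{M,2}$ has a weight $w\in\R^2$, a bias and an outer coefficient, i.e. $4$ parameters, and there are $|V|=n-1$ nodes. A sketch of this accounting:

```python
def deep_binary_tree_params(n, M):
    """Parameter count of a deep binary-tree network in D_{N,2}:
    n inputs (a power of 2), each of the n-1 non-leaf nodes a shallow
    network in S_{M,2} with (2+2)*M parameters (w in R^2, bias, outer
    coefficient, per unit). Returns (total parameters, N = |V|*M)."""
    nodes = n - 1
    per_node = (2 + 2) * M
    N = nodes * M
    return nodes * per_node, N

total, N = deep_binary_tree_params(n=8, M=10)
assert total == 4 * N   # matches the 4N count stated in the text
```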
The first theorem is about shallow networks.
\begin{theorem}
\label{optneurtheo}
Let $\sigma :\R\to \R$ be infinitely differentiable, and not a polynomial. For $f\in W_m^n$ the complexity of shallow networks that
provide accuracy at least $\epsilon$ is
\begin{equation}
N= \O(\epsilon^{-n/m}),
\end{equation}
and this estimate is the best possible.
\end{theorem}
The estimate of Theorem \ref{optneurtheo} is the best possible if the only a priori
information we are allowed to assume is that the target function
belongs to $W_m^n$. The exponential dependence on the
dimension $n$ of the number $\epsilon^{-n/m}$ of parameters needed to
obtain an accuracy $\O(\epsilon)$ is known as the {\it curse of
dimensionality}. Note that the constants involved in $\O$ in the theorems will depend upon
the norms of the derivatives of $f$ as well as $\sigma$.
Our second and main theorem is about deep networks with smooth
activations (preliminary versions appeared in
\cite{poggio2015December,Hierarchical2015,Mhaskaretal2016}). We
formulate it in the binary tree case for simplicity but it extends
immediately to functions that are compositions of constituent
functions of a fixed number of variables $d$ (in convolutional
networks $d$ corresponds to the size of the kernel).
\begin{theorem}
\label{deeptheo}
For $f\in W_m^{n,2}$ consider a deep network with the same
compositional architecture and with an activation function
$\sigma :\R\to \R$ which is infinitely differentiable, and
not a polynomial. The complexity of the
network to provide approximation with
accuracy at least $\epsilon$ is
\begin{equation}
N =\mathcal{O}((n-1)\epsilon^{-2/m}).
\label{deepnetapprox}
\end{equation}
\end{theorem}
The proof is in \cite{Theory_I}. The assumptions on $\sigma$ in the
theorems are not satisfied by the ReLU function $x\mapsto x_+$, but
they are satisfied by smoothing the function in an arbitrarily small
interval around the origin. The result of the
theorem can be extended to the non-smooth ReLU \cite{Theory_I}.
In summary, when the only a priori assumption on the target function
is about the number of derivatives, then to {\it guarantee} an
accuracy of $\epsilon$, we need a shallow network with
$\O(\epsilon^{-n/m})$ trainable parameters. If we assume a
hierarchical structure on the target function as in
Theorem~\ref{deeptheo}, then the corresponding deep network yields a
guaranteed accuracy of $\epsilon$ with $\O(\epsilon^{-2/m})$ trainable
parameters. Note that Theorem~\ref{deeptheo} applies to all $f$ with a
compositional architecture given by a graph which corresponds to, or is
a subgraph of, the graph associated with the deep network -- in this
case the graph corresponding to $W_m^{n,d}$.
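Plugging sample values into the two estimates makes the gap concrete (a sketch that ignores the constants hidden in the $\O$ notation):

```python
def shallow_units(eps, n, m):
    """Theorem 1: N = O(eps^(-n/m)) for f in W_m^n (constants ignored)."""
    return eps ** (-n / m)

def deep_units(eps, n, m):
    """Theorem 2: N = O((n-1) * eps^(-2/m)) for f in W_m^{n,2}
    (constants ignored)."""
    return (n - 1) * eps ** (-2 / m)

eps, n, m = 0.01, 8, 2
print(shallow_units(eps, n, m))   # ~1e8 units
print(deep_units(eps, n, m))      # 700 units
assert deep_units(eps, n, m) < shallow_units(eps, n, m)
```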
\section{ The Optimization Landscape of Deep
Nets with Smooth Activation Function}
\label{BezoutBoltzman}
The main question in the optimization of deep networks concerns the landscape
of the empirical loss, in terms of its global minima and the local critical
points of the gradient.
\subsection{Related work}
There are many recent papers studying optimization
in deep learning. For optimization we mention work based on the idea
that noisy gradient descent \cite{DBLP:journals/corr/Jin0NKJ17,
DBLP:journals/corr/GeHJY15, pmlr-v49-lee16, s.2018when} can find a
global minimum. More recently, several authors studied the dynamics of
gradient descent for deep networks with assumptions about the input
distribution or on how the labels are generated. They obtain global
convergence for some shallow neural networks
\cite{Tian:2017:AFP:3305890.3306033, s8409482,
Li:2017:CAT:3294771.3294828, DBLP:conf/icml/BrutzkusG17,
pmlr-v80-du18b, DBLP:journals/corr/abs-1811-03804}. Some local
convergence results have also been proved
\cite{Zhong:2017:RGO:3305890.3306109,
DBLP:journals/corr/abs-1711-03440, 2018arXiv180607808Z}. The most
interesting such approach is \cite{DBLP:journals/corr/abs-1811-03804},
which focuses on minimizing the training loss and proving that
randomly initialized gradient descent can achieve zero training loss
(see also \cite{NIPS2018_8038, du2018gradient,
DBLP:journals/corr/abs-1811-08888}). In summary, there is by now an extensive
literature on optimization that formalizes and refines, for different
special cases and for the discrete domain, our results of \cite{theory_II, theory_IIb}.
\subsection{Degeneracy of global and local minima under the
exponential loss}
The {\it first part} of the argument of this section relies on the
obvious fact (see \cite{theory_III}), that for ReLU networks under the
hypothesis of an exponential-type loss function, there are {\it no
local minima that separate the data} -- the only critical points of
the gradient that separate the data are the global minima.
Notice that the global minima are at $\rho = \infty$, when the
exponential is zero. As a consequence, the Hessian is identically zero
with all eigenvalues being zero. On the other hand any point of the
loss at a finite $\rho$ has nonzero Hessian: for instance in the
linear case the Hessian is proportional to $\sum_{n=1}^N x_n x_n^T$.
The local minima which are not global minima must
misclassify. How degenerate are they?
Simple arguments \cite{theory_III} suggest that the critical points
which are not global minima cannot be completely degenerate. We thus have the following
\begin{property}
Under the exponential loss, global minima are completely degenerate
with all eigenvalues of the Hessian ($W$ of them with $W$ being the
number of parameters in the network) being zero. The other critical
points of the gradient are less degenerate, with at least one -- and typically $N$
-- nonzero eigenvalues.
\end{property}
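In the linear case the Hessian claim can be verified directly: for $L(w)=\sum_n e^{-y_n w^\top x_n}$ the Hessian is $\sum_n e^{-y_n w^\top x_n}\, x_n x_n^\top$ (since $y_n^2=1$), which is nonzero, indeed positive definite for generic data, at any finite $w$. A numerical cross-check on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                     # 20 data points in R^3
y = np.sign(X @ np.array([1.0, -0.5, 2.0]))      # linearly separable labels

def loss(w):
    """Exponential loss L(w) = sum_n exp(-y_n <w, x_n>)."""
    return np.sum(np.exp(-y * (X @ w)))

def hessian(w):
    """Analytic Hessian: sum_n exp(-y_n <w, x_n>) x_n x_n^T."""
    e = np.exp(-y * (X @ w))
    return (X * e[:, None]).T @ X

def num_hessian(w, h=1e-4):
    """Central finite-difference Hessian, to cross-check the formula."""
    d = len(w)
    I = np.eye(d)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            H[i, j] = (loss(w + h*I[i] + h*I[j]) - loss(w + h*I[i] - h*I[j])
                       - loss(w - h*I[i] + h*I[j]) + loss(w - h*I[i] - h*I[j])) / (4*h*h)
    return H

w = np.array([0.3, -0.2, 0.5])
assert np.allclose(hessian(w), num_hessian(w), rtol=1e-3, atol=1e-4)
assert np.linalg.eigvalsh(hessian(w)).min() > 0   # nonzero Hessian at finite w
```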
For the general case of non-exponential loss and smooth
nonlinearities instead of the ReLU, the following conjecture has been
proposed \cite{theory_III}:
\begin{conjecture}: For appropriate overparametrization, there are a
large number of global zero-error minimizers which are degenerate;
the other critical points -- saddles and local minima -- are
generically (that is with probability one) degenerate on a set of
much lower dimensionality.
\end{conjecture}
\subsection{SGD and Boltzmann Equation}
The second part of our argument (in \cite{theory_IIb}) is that SGD
concentrates in probability on the most degenerate minima. The argument is based on the
similarity between a Langevin equation and SGD and on the fact
that the Boltzmann distribution is exactly the asymptotic ``solution'' of the
stochastic differential Langevin equation and also of SGDL, defined as
SGD with added white noise (see for instance
\cite{raginskyetal17}). The Boltzmann distribution is
\begin{equation}
p(f) = \frac{1}{Z}e^{-\frac{L}{T}},
\label{Bolzman}
\end{equation}
\noindent where $Z$ is a normalization constant, $L(f)$ is the loss
and $T$ reflects the noise power. The equation implies that SGDL
prefers degenerate minima relative to non-degenerate ones of the same
depth. In addition, among two minimum basins of equal depth, the one
with a larger volume is much more likely in high dimensions as shown
by the simulations in \cite{theory_IIb}. Taken together, these two
facts suggest that SGD selects degenerate minimizers corresponding to
larger isotropic flat regions of the loss. Then SGDL shows concentration --
{\it because of the high dimensionality} -- of its asymptotic
distribution Equation \ref{Bolzman}.
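The volume effect can be illustrated by integrating the Boltzmann density of Equation \ref{Bolzman} over two basins of equal depth; the one-dimensional loss below is made up purely for illustration and is unrelated to the simulations of \cite{theory_IIb}.

```python
import numpy as np

T = 0.1
x = np.linspace(-4.0, 4.0, 200001)
dx = x[1] - x[0]

# Made-up 1D loss with two global minima of equal depth (both 0):
# a narrow basin at x = -2 and a wide, flatter basin at x = +2.
loss = np.minimum(50.0 * (x + 2.0) ** 2, 2.0 * (x - 2.0) ** 2)

p = np.exp(-loss / T)
p /= p.sum() * dx                        # normalized Boltzmann density

mass_narrow = p[x < 0].sum() * dx
mass_wide = p[x > 0].sum() * dx
assert mass_wide > mass_narrow           # the flatter basin carries more mass
```

For these quadratic basins the mass ratio is $\sqrt{50/2}=5$, independent of $T$, so the wider basin dominates even though the depths are identical.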
Together \cite{theory_II} and \cite{theory_III} suggest the following
\begin{conjecture}: For appropriate
overparametrization of the deep network, SGD selects with high
probability the global minimizers of the empirical loss, which are
highly degenerate.
\end{conjecture}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth ]{Figures/SGD_vs_SGDL.pdf}
\caption{ Stochastic Gradient Descent and Langevin Stochastic Gradient
Descent (SGDL) on the $2$D potential function shown above leads to
an asymptotic distribution with the histograms shown on the left. As
expected from the form of the Boltzmann distribution, both dynamics
prefer degenerate minima to non-degenerate minima of the same
depth. From \cite{theory_III}.
}
\label{wedge_rbf_sgdl}
\end{figure}
\section{Generalization}
\label{generalization}
Recent results by \cite{2017arXiv171010345S} illuminate the apparent
absence of ``overfitting'' (see Figure \ref{no-overfitting}) in the
special case of linear networks for binary classification. They prove
that minimization of loss functions such as the logistic, the
cross-entropy and the exponential loss yields asymptotic convergence
to the maximum margin solution for linearly separable datasets,
independently of the initial conditions and without explicit
regularization. Here we discuss the case of nonlinear multilayer DNNs
under exponential-type losses, for several variations of the basic
gradient descent algorithm. The main results are:
\begin{itemize}
\item classical uniform convergence bounds for generalization suggest
a form of complexity control on the dynamics of the weight {\it
directions $V_k$}: minimize a surrogate loss subject to a
unit $L_p$ norm constraint;
\item gradient descent on the exponential loss with an explicit $L_2$
unit norm constraint is equivalent to a well-known gradient descent
  algorithm, {\it weight normalization}, which is closely related to
batch normalization;
\item unconstrained gradient descent on the exponential loss yields a
dynamics with the same critical points as weight normalization: the
  dynamics implicitly respect an $L_2$ unit norm constraint on the
directions of the weights $V_k$.
\end{itemize}
We observe that several of these results {\it directly apply to kernel
machines} for the exponential loss under the
separability/interpolation assumption, because kernel machines are
one-homogeneous.
\subsection{Related work}
A number of papers have studied gradient descent for deep networks
\cite{NIPS2017_6836, DBLP:journals/corr/abs-1811-04918,
Arora2019FineGrainedAO}. Close to the approach summarized here
(details are in \cite{theory_III}) is the paper
\cite{Wei2018OnTM}. Its authors study generalization assuming a
regularizer because they are -- like us -- interested in normalized
margin. Unlike their assumption of an explicit regularization, we show
here that commonly used techniques, such as weight and batch
normalization, in fact minimize the surrogate loss margin while
controlling the complexity of the classifier without the need to add a
regularizer or to use weight decay. Surprisingly, we will show that
even standard gradient descent on the weights implicitly controls the
complexity through an ``implicit'' unit $L_2$ norm constraint. Two
very recent papers (\cite{2019arXiv190507325S} and
\cite{DBLP:journals/corr/abs-1906-05890}) develop an elegant but
complicated margin-maximization-based approach which leads to some of
the same results of this section (and many more). The important question of
which conditions are necessary for gradient descent to converge to the
maximum of the
margin of $\tilde{f}$ is studied in \cite{2019arXiv190507325S} and
\cite{DBLP:journals/corr/abs-1906-05890}. Our approach does not need the
notion of maximum margin but our theorem \ref{margin-maxTheorem}
establishes a connection with it and thus with the results of
\cite{2019arXiv190507325S} and
\cite{DBLP:journals/corr/abs-1906-05890}. Our main goal here (and in
\cite{theory_III}) is to achieve a simple understanding of where the
complexity control underlying generalization is hiding in the training
of deep networks.
\subsection{Deep networks: definitions and properties}
We define a deep network with $K$ layers with the usual
coordinate-wise scalar activation function
$\sigma: \mathbf{R} \to \mathbf{R}$ as the set of functions
$f(W;x) = \sigma (W^K \sigma (W^{K-1} \cdots \sigma (W^1 x)))$, where
the input is $x \in \mathbf{R}^d$, the weights are given by the
matrices $W^k$, one per layer, with matching dimensions. We sometimes
use the symbol $W$ as a shorthand for the set of $W^k$ matrices
$k=1,\cdots,K$. For simplicity we consider here the case of binary
classification in which $f$ takes scalar values, implying that the
last layer matrix $W^K$ is $W^K \in \mathbf{R}^{1\times h_{K-1}}$. The labels
are $y_n\in\{-1,1\}$. The weights of hidden layer $l$ are collected in
a matrix of size $h_l\times h_{l-1}$. There are no biases apart from
the input layer where the bias is instantiated by one of the input
dimensions being a constant. The activation function in this section
is the ReLU activation.
For ReLU activations the following
important positive one-homogeneity property holds
$\sigma(z)=\frac{\partial \sigma(z)}{\partial z} z$.
A consequence of one-homogeneity is a structural lemma (Lemma 2.1 of
\cite{DBLP:journals/corr/abs-1711-01530}) $\sum_{i,j} W^{i,j}_k \left(\frac{\partial f(x)}{\partial
W^{i,j}_k}\right)= f(x)$ where $W_k$ is here the vectorized representation of the weight
matrices $W_k$ for each of the different layers (each matrix is a
vector).
For the network, homogeneity implies
$f(W;x)=\prod_{k=1}^K \rho_k f(V_1,\cdots,V_K; x)$, where
$W_k=\rho_k V_k$ with the matrix norm $||V_k||_p=1$. Another property
of the Rademacher complexity of ReLU networks that follows from
homogeneity is
$\mathbb{R}_N(\mathbb{F}) = \rho \mathbb{R}_N(\tilde{\mathbb{F}})$
where $\rho=\prod_{k=1}^K \rho_k$ and $\mathbb{F}$ is the class of
neural networks described above.
We define $f= \rho \tilde{f}$; $\tilde{\mathbb{F}}$ is the
associated class of normalized neural networks (we call
$f(V;x)=\tilde{f}(x)$ with the understanding that $f(x)=f(W;x)$).
Note that
$\frac{\partial f}{\partial \rho_k} = \frac{\rho}{\rho_k}\tilde{f}
\label{rho}$ and that the definitions of $\rho_k$, $V_k$ and
$\tilde{f}$ all depend on the choice of the norm used in
normalization.
In the case of training data that can be separated by the network,
$f(x_n) y_n>0 \quad \forall n=1,\cdots,N$. We will sometimes write
$f(x_n)$ as a shorthand for $y_n f(x_n)$.
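The homogeneity property $f(W;x)=\prod_k \rho_k\, f(V_1,\cdots,V_K;x)$ is easy to check numerically on a small ReLU network (a sketch with random weights; biases and the top nonlinearity are omitted for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# A small 3-layer ReLU network f(W; x) with random weights, used only
# to check positive one-homogeneity with respect to each layer.
W1 = rng.normal(size=(5, 4))
W2 = rng.normal(size=(5, 5))
W3 = rng.normal(size=(1, 5))
x = rng.normal(size=4)

def f(W1, W2, W3, x):
    return float(W3 @ relu(W2 @ relu(W1 @ x)))

rho1, rho2, rho3 = 2.0, 0.5, 3.0
lhs = f(rho1 * W1, rho2 * W2, rho3 * W3, x)
rhs = rho1 * rho2 * rho3 * f(W1, W2, W3, x)
assert np.isclose(lhs, rhs)   # f(W; x) = (prod_k rho_k) * f(V; x)
```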
\subsection{Uniform convergence bounds: minimizing a surrogate loss
under norm constraint}
\label{Early stopping}
Classical {\it generalization bounds for regression}
\cite{Bousquet2003} suggest that minimizing the empirical loss of a
loss function such as the cross-entropy
subject to constrained {\it complexity of the minimizer} is a way
to attain generalization, that is, an expected loss which is close to the
empirical loss:
\begin{proposition}
The following generalization bounds
apply to $\forall f \in \mathbb{F}$ with probability at least
$(1-\delta)$:
\begin{equation}
L(f) \leq \hat{L}(f) + c_1\mathbb{R}_N(\mathbb{F}) + c_2 \sqrt
\frac{\ln(\frac{1}{\delta})}{2N}
\label{bound}
\end{equation}
\end{proposition}
\vskip0.1in
\noindent where $L(f) = \mathbf E [\ell(f(x), y)]$ is the expected
loss, $\hat{L}(f)$ is the empirical loss, $\mathbb{R}_N(\mathbb{F})$
is the empirical Rademacher average of the class of functions
$\mathbb{F}$, measuring its complexity; $c_1, c_2$ are constants that
depend on properties of the Lipschitz constant of the loss function,
and on the architecture of the network.
Thus minimizing a surrogate function such as the
cross-entropy (which becomes the logistic loss in the binary
classification case) under a constraint on the
Rademacher complexity will minimize an upper bound on the expected classification
error, because such surrogate functions are upper bounds on the $0$-$1$
loss. We can choose a class of functions $\mathbf{\tilde{F}}$ with
normalized weights and write $f(x)=\rho \tilde{f}(x)$ and
$\mathbb{R}_N(\mathbb{F})=\rho \mathbb{R}_N(\mathbb{\tilde{F}})$. One
can choose any fixed $\rho$ as an (Ivanov) regularization-type
tradeoff.
In summary, the problem of generalization may be approached by minimizing
the exponential loss -- more generally, an exponential-type loss such
as the logistic and the cross-entropy -- under a unit norm constraint on
the weight matrices, since we are interested in the directions of the weights:
\begin{equation}
\lim_{\rho \to \infty} \arg\min_{||V_k||=1, \ \forall k} L(\rho \tilde{f})
\label{UnitNormMin}
\end{equation}
\noindent where we write $f(W) = \rho \tilde{f}(V)$ using the
homogeneity of the network. As it will become clear later, gradient
descent techniques on the exponential loss automatically increase
$\rho$ to infinity. We will typically consider the sequence of
minimizations over $V_k$ for a sequence of increasing $\rho$. The key
quantity for us is $\tilde{f}$ and the associated weights $V_k$;
$\rho$ is in a certain sense an auxiliary variable, a constraint that
is progressively relaxed.
In the following we explore the implications for deep networks of this classical
approach to generalization.
\subsubsection{Remark: minimization of an exponential-type loss implies margin maximization
}
Though not critical for our approach to the question of generalization
in deep networks it is interesting that constrained minimization of
the exponential loss implies margin maximization. This property
relates our approach to the results of several recent papers
\cite{2017arXiv171010345S,
2019arXiv190507325S,DBLP:journals/corr/abs-1906-05890}. Notice that
our theorem \ref{margin-maxTheorem} as in
\cite{DBLP:conf/nips/RossetZH03} is a {\it sufficient condition for
margin maximization}. Necessity is not true for general loss
functions.
To state the margin property more formally, we adapt to our setting a
different result due to \cite{DBLP:conf/nips/RossetZH03} (they
consider for a linear network a vanishing $\lambda$ regularization
term, whereas we have for nonlinear networks a set of unit norm
constraints). First we recall the definition of the empirical loss
$L(f)=\sum_{n=1}^N \ell(y_n f(x_n))$ with an exponential loss function
$\ell(yf)= e^{-yf}$. We define $\eta(f)$ as the {\it margin} of $f$,
that is $\eta(f)=\min_n y_n f(x_n)$.
Then our margin maximization theorem (proved in \cite{theory_III}) takes the form
\begin{theorem}
Consider the set of $V_k, k=1,\cdots, K$ corresponding to
\begin{equation}
\min_{{||V_k||}=1} L(f(\rho_k, V_k))
\label{V(rho)}
\end{equation}
\noindent where the norm $||V_k||$ is a chosen $L_p$ norm and
$L(f(\rho_k, V_k)) = L(\rho \tilde{f}) = \sum_{n} \ell(y_n \rho \tilde{f}(V; x_n))$ is the
empirical exponential loss. For each layer consider a sequence of increasing
$\rho_k$. Then the associated sequence of $V_k$ defined by Equation
\ref{V(rho)}, converges for $\rho \to \infty$ to the maximum
margin of $\tilde{f}$, that is to
$\max_{||V_k|| \leq 1} \eta(\tilde{f})$ .
\label{margin-maxTheorem}
\end{theorem}
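As a toy illustration of the theorem (the 2-D data and the grid search below are invented for this sketch and are not from the paper), one can minimize the exponential loss over unit-norm directions $v$ for increasing $\rho$ and observe the minimizer approaching the maximum-margin direction:

```python
import numpy as np

# Toy 2-D data; all labels are y = 1 for simplicity, so eta(v) = min_n v.x_n.
X = np.array([[1.0, 0.2], [0.9, -0.1], [0.3, 1.0]])

# Candidate unit-norm directions v on a fine angular grid.
angles = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
V = np.stack([np.cos(angles), np.sin(angles)], axis=1)
margins = (V @ X.T).min(axis=1)          # margin eta(v) for each candidate v
v_maxmargin = V[margins.argmax()]        # maximum-margin direction

def best_direction(rho):
    """argmin over unit-norm v of the exponential loss sum_n exp(-rho v.x_n)."""
    loss = np.exp(-rho * (V @ X.T)).sum(axis=1)
    return V[loss.argmin()]

# As rho grows, the loss minimizer converges to the max-margin direction.
gap_small_rho = np.linalg.norm(best_direction(1.0) - v_maxmargin)
gap_large_rho = np.linalg.norm(best_direction(100.0) - v_maxmargin)
```

The grid search is of course only feasible in this tiny example; it serves to make the limit in the theorem concrete.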
\subsection{Minimization under unit norm constraint: weight normalization}
\label{OurWeightNormalization}
The approach is then to minimize the loss function
$ L(f(W))=\sum_{n=1}^N e^{- f(W;x_n) y_n }= \sum_{n=1}^N e^{- \rho
\tilde{f}(V;x_n) y_n }$, with $\rho= \prod_k \rho_k$, subject to
$||V_k||^p_p =1 \ \forall k$, that is, under a unit norm constraint for
the weight matrix at each layer (if $p=2$ then
$\sum_{i,j} (V_k)_{i,j}^2= 1$ is the Frobenius norm), since $V_k$ are
the directions of the weights which are the relevant quantity for
classification. The minimization is understood as a sequence of
minimizations for a sequence of increasing $\rho_k$. Clearly these
constraints imply the constraint on the norm of the product of weight
matrices for any $p$ norm (because any induced operator norm is a
sub-multiplicative matrix norm). The standard choice of loss
function is an exponential-type loss such as the cross-entropy, which for
binary classification becomes the logistic loss. We study here the
exponential because it is simpler and retains all the basic
properties.
There are several gradient descent techniques that given the
unconstrained optimization problem transform it into a {\it
constrained} gradient descent problem. To provide the background let us
formulate the standard unconstrained gradient descent problem for the
exponential loss as it is used in practical training of deep networks:
\begin{equation}
\dot{W}^{i,j}_k = -\frac{\partial L}{\partial W^{i,j}_k}= \sum_{n=1}^N y_n \frac{\partial{f(x_n; W)}} {\partial W^{i,j}_k} e^{- y_n
f(x_n;W)}
\label{standardynamicsW}
\end{equation}
\noindent where $W_k$ is the weight matrix of layer $k$. Notice that,
since the structural property implies that at a critical point we have
$\sum_{n=1}^N y_n f(x_n; W) e^{- y_n f(x_n;W)} = 0$, the only critical
points of this dynamics that separate the data (i.e.
$y_n f(x_n; W)>0 \ \forall n$) are global minima at infinity. Of
course for separable data, while the loss decreases asymptotically to
zero, the norm of the weights $\rho_k$ increases to infinity, as we
will see later. Equations \ref{standardynamicsW} define a dynamical
system in terms of the gradient of the exponential loss $L$.
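As a minimal illustration of this dynamics (a toy sketch with invented, linearly separable 2-D data and a one-layer linear $f(x;w)=w^Tx$, not the paper's experiments), plain gradient descent on the exponential loss drives the loss toward zero while the norm of the weights keeps growing:

```python
import numpy as np

# Linearly separable toy data: f(x; w) = w.x, labels y in {+1, -1}.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.5], [-1.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def loss_and_grad(w):
    m = y * (X @ w)                  # margins y_n f(x_n; w)
    e = np.exp(-m)
    return e.sum(), -(y * e) @ X     # L and dL/dw

w = np.array([0.1, -0.2])            # small arbitrary initialization
norms, losses = [], []
for t in range(5000):
    L, g = loss_and_grad(w)
    w -= 0.02 * g                    # plain (unconstrained) gradient descent
    norms.append(np.linalg.norm(w))
    losses.append(L)
```

After training, all margins are positive (the data are separated), the loss is close to zero, and $||w||$ has grown monotonically, matching the behavior described above.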
The set of gradient-based algorithms enforcing a unit-norm constraint
\cite{845952} comprises several techniques that are equivalent for small
values of the step size. They are all good approximations of the true
gradient method. One of them is the {\it Lagrange multiplier method};
another is the {\it tangent gradient method} based on the following theorem:
\begin{theorem} \cite{845952} Let $||u||_p$ denote a vector norm that is
differentiable with respect to the elements of $u$ and let $g(t)$ be
any vector function with finite $L_2$ norm. Then, calling
$\nu(t)=\frac{\partial ||u||_p}{\partial u}\big|_{u=u(t)}$, the equation
\begin{equation}
\dot{u}=h_g(t)=Sg(t)= (I-\frac{\nu \nu^T}{||\nu||_2^2}) g(t)
\label{dot_u}
\end{equation}
\noindent with $||u(0)||_p =1$, describes the flow of a vector $u$ that
satisfies $||u(t)||_p=1$ for all $t \ge 0$.
\label{Theorem1}
\end{theorem}
In particular, a form for $g$ is $g(t)= \mu(t) \nabla_u L$, the
gradient update in a gradient descent algorithm. We call $Sg(t)$ the
tangent gradient transformation of $g$. In the case of $p=2$ we replace $\nu$ in Equation
\ref{dot_u} with $u$ because
$\nu(t)=\frac{\partial ||u||_2}{\partial u}=u$. This gives
$S= I-\frac{u u^T}{||u||_2^2}$ and $\dot{u}=Sg(t).$
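A small numerical sketch of the tangent gradient transformation for $p=2$ (the vectors below are arbitrary; this only checks the algebra): $Sg$ is orthogonal to $u$, $S$ is idempotent, and a small Euler step changes the unit norm only to second order in the step size.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=6)
u /= np.linalg.norm(u)               # ||u||_2 = 1
g = rng.normal(size=6)               # arbitrary "gradient" vector g(t)

# Tangent gradient transformation for p = 2 (here nu = u).
S = np.eye(6) - np.outer(u, u)
h = S @ g

radial_component = u @ h             # should vanish: h is tangent to the sphere
projector_error = np.linalg.norm(S @ S - S)

# One small Euler step changes the norm only to second order in the step size.
eta = 1e-4
drift = abs(np.linalg.norm(u + eta * h) - 1.0)
```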
Consider now the empirical loss $L$ written in terms of $V_k$ and
$\rho_k$ instead of $W_k$, using the change of variables defined by
$W_k=\rho_k V_k$ but without imposing a unit norm constraint on $V_k$.
The flows in $\rho_k,V_k$ can be computed as
$\dot{\rho_k}= -\frac{\partial L}{\partial \rho_k} = -V_k^T \frac{\partial
L}{\partial W_k} = V_k^T \dot{W_k}$ and
$\dot{V_k}= -\frac{\partial L}{\partial V_k} = -\rho_k \frac{\partial L}{\partial W_k} =
\rho_k \dot{W_k}$, with $\dot{W_k}$
given by Equation \ref{standardynamicsW}.
We now enforce the unit norm constraint on $V_k$ by using the tangent gradient
transform on the $V_k$ flow. This yields
\begin{equation}
\dot{\rho_k}= V_k^T \dot{W_k} \quad
\dot{V_k}= S_k \rho_k \dot{W_k}.
\label{v-flow-withunitnorm}
\end{equation}
Notice that the dynamics above follows from the classical approach of
controlling the Rademacher complexity of $\tilde{f}$ during optimization (suggested
by bounds such as Equation \ref{bound}). The approach and the
resulting dynamics for the directions of the weights may seem different from the standard
unconstrained approach in training deep networks. It turns out,
however, that the dynamics described by Equations
\ref{v-flow-withunitnorm} is the same dynamics of {\it Weight
Normalization}.
The technique of {\it Weight normalization} \cite{SalDied16} was
originally proposed as a small improvement on standard gradient descent
``to reduce covariate shifts''. It was defined for each layer in
terms of $w=g \frac{v}{||v||}$, as
\begin{equation}
\dot{g}=\frac{v}{||v||} \frac{\partial L}{\partial w} \quad
\dot{v}=\frac{g}{||v||} S \frac{\partial L}{\partial w}
\label{W-normalization}
\end{equation}
\noindent with $S=I- \frac{v v^T}{||v||^2}$.
It is easy to see that Equations \ref{v-flow-withunitnorm} are the
same as the weight normalization Equations \ref{W-normalization}, if
$||v||_2=1$. We now observe, multiplying the $\dot{v}$ equation in
\ref{W-normalization} by $v^T$, that $v^T \dot{v}=0$ because
$v^T S=0$, implying that $||v||^2$ is constant in time, with a constant
that can be taken to be $1$. Thus the two dynamics are the
same.
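The equivalence can also be checked numerically. In the sketch below (arbitrary made-up vectors; `grad_w` stands in for $\partial L/\partial w$), the weight-normalization update of \cite{SalDied16} coincides with the constrained flow of Equations \ref{v-flow-withunitnorm} whenever $||v||_2=1$:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4
v = rng.normal(size=dim)
v /= np.linalg.norm(v)                           # unit-norm direction, ||v|| = 1
g_scale = 3.0                                    # the scalar g of WN (plays the role of rho)
grad_w = rng.normal(size=dim)                    # stands in for dL/dw at w = g_scale * v

S = np.eye(dim) - np.outer(v, v) / (v @ v)

# Weight-normalization flow (Salimans & Kingma):
g_dot_wn = (v / np.linalg.norm(v)) @ grad_w
v_dot_wn = (g_scale / np.linalg.norm(v)) * (S @ grad_w)

# Constrained flow: rho_dot = v.grad, v_dot = S * rho * grad
g_dot_con = v @ grad_w
v_dot_con = g_scale * (S @ grad_w)
```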
\subsection{Generalization with hidden
complexity control}
\label{nounitnorm}
Empirically it appears that GD and SGD converge to solutions that can
generalize even without batch or weight normalization. Convergence
may be difficult for quite deep networks and generalization may not be
as good as with batch normalization but it still occurs. How is this
possible?
We study the dynamical system for
$\dot{W}^{i,j}_k$ under the reparametrization
$W^{i,j}_k = \rho_k V^{i,j}_k$ with $||V_k||_2=1$. For each
weight matrix $W_k$ we consider the corresponding ``vectorized''
representation, treating its entries as a vector that, with a slight abuse
of notation, we still denote $W_k$. We use the following definitions
and properties (for a vector $w$):
\begin{itemize}
\item Define $\frac{w}{||w||_2}=\tilde{w}$; thus $w=||w||_2\tilde{w}$ with
$||\tilde{w}||_2=1$. Also define $S={I-\tilde{w}\tilde{w}^T}=I- \frac {w
w^T}{||w||_2^2}$.
\item The following relations are easy to check:
\begin{enumerate}
\item $\frac{\partial ||w||_2}{\partial w}=\tilde{w}$
\item $\frac{\partial \tilde{w}}{\partial
w}=\frac{S}{||w||_2}$.
\item $Sw=S \tilde{w}=0$
\item $S^2=S$
\label{Relations}
\end{enumerate}
\end{itemize}
The gradient descent dynamic system used in training deep networks for
the exponential loss is given by Equation \ref{standardynamicsW}.
Following the chain rule {\it for the time derivatives}, the dynamics
for $W_k$ is exactly (see \cite{theory_III}) equivalent to the
following dynamics for $||W_k||=\rho_k$ and $V_k$:
\begin{equation}
\dot{\rho_k}= \frac{\partial ||W_k||}{\partial W_k} \frac{\partial
W_k}{\partial t}= V^T_k \dot{W_k}
\label{rhodot}
\end{equation}
\noindent and
\begin{equation}
\dot{V_k}= \frac{\partial V_k}{\partial W_k} \frac{\partial
W_k}{\partial t}= \frac {S_k}{\rho_k} \dot{W_k}
\label{vdot}
\end{equation}
\noindent where $S_k= I- V_kV_k^T$. We used property 1 in
\ref{Relations} for Equation \ref{rhodot} and property 2 for Equation
\ref{vdot}.
The key point here is that the dynamics of $\dot{V_k}$ includes a unit
$L_2$ norm constraint: using the tangent gradient transform will not
change the equation because $S^2=S$.
As separate remarks, notice that if $f$ separates all the data for
$t>t_0$, then $\dot{\rho_k} >0$, that is, $\rho$ diverges to
$\infty$ with $\lim_{t \to \infty}\dot{\rho}=0$. In the 1-layer
network case the dynamics yields $\rho \approx \log t$
asymptotically. For deeper networks the situation is
different: \cite{theory_III} shows (for one support vector) that the
product of the layer norms diverges faster than logarithmically,
while each individual layer norm diverges more slowly than in the 1-layer case.
The squared norm of each layer grows at the same rate, i.e., $\dot{\rho_k^2}$ is
independent of $k$. The $V_k$ dynamics has stationary or critical points given by
\begin{equation}
\sum_{n=1}^N \alpha_n(\rho(t))
\left(\frac{\partial{\tilde{f}(x_n)}} {\partial
V_k^{i,j}}-V_k^{i,j} \tilde{f}(x_n) \right) = 0,
\label{wdot4}
\end{equation}
\noindent where $\alpha_n= e^{-y_n \rho(t) \tilde{f}(x_n)}$.
We examine later the linear one-layer case $\tilde{f}(x)=v^T x$, in
which the stationary points of the gradient are given by $\sum_{n} \alpha_n(\rho(t))
\left(x_n - v v^T x_n\right) = 0$ and of course coincide with the solutions obtained
with Lagrange multipliers. In the general case the
critical points correspond, for $\rho \to \infty$, to degenerate zero
``asymptotic minima'' of the loss.
To understand whether there exists an implicit complexity control in
standard gradient descent of the weight directions, we check whether there exists an
$L_p$ norm for which unconstrained normalization is equivalent to
constrained normalization.
From Theorem \ref{Theorem1} we expect
the constrained case to be given by the action of the following
projector onto the tangent space:
\begin{equation}
S_{p} = I-\frac{\nu \nu^T}{||\nu||_2^2} \quad\textnormal{with}\quad \nu_i=\frac{\partial ||w||_p}{\partial w_i} = \textnormal{sign}(w_i)\left(\frac{|w_i|}{||w||_p}\right)^{p-1}.
\end{equation}
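A numerical sanity check of this projector (a sketch with arbitrary vectors, shown here for $p=3$): $S_p g$ is tangent to the unit $L_p$ sphere, so a small Euler step preserves the $p$-norm to first order, and for $p=2$ the normal vector $\nu$ reduces to $w$ itself.

```python
import numpy as np

def nu_p(w, p):
    """Gradient of the L_p norm: nu_i = sign(w_i) (|w_i| / ||w||_p)^(p-1)."""
    return np.sign(w) * (np.abs(w) / np.linalg.norm(w, ord=p)) ** (p - 1)

def S_p(w, p):
    nv = nu_p(w, p)
    return np.eye(len(w)) - np.outer(nv, nv) / (nv @ nv)

rng = np.random.default_rng(3)
w = rng.normal(size=5)
w /= np.linalg.norm(w, ord=3)        # put w on the unit L_3 sphere
g = rng.normal(size=5)

h = S_p(w, 3) @ g
# h is tangent to the L_3 unit sphere: the norm is preserved to first order.
eta = 1e-5
drift = abs(np.linalg.norm(w + eta * h, ord=3) - 1.0)

# Sanity check: for p = 2 on the unit L_2 sphere, nu reduces to w itself.
w2 = w / np.linalg.norm(w)
nu2_error = np.linalg.norm(nu_p(w2, 2) - w2)
```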
The constrained Gradient Descent is then
\begin{equation}
\dot{\rho_k}= V^T_k \dot{W_k} \quad
\dot{V_k} = \rho_k S_p \dot{W_k}.
\label{ConstrainedGradP}
\end{equation}
On the other hand, reparametrization of
the unconstrained dynamics in the $p$-norm gives (following Equations \ref{rhodot} and \ref{vdot})
\begin{equation}
\begin{split}
\dot{\rho_k}&= \frac{\partial ||W_k||_p}{\partial W_k} \cdot \frac{\partial
W_k}{\partial t}= \nu^T \dot{W_k} \\
\dot{V_k}&= \frac{\partial V_k}{\partial W_k} \frac{\partial
W_k}{\partial t}= \frac {I - V_k\, \nu^T}{||W_k||_p}\, \dot{W_k},
\end{split}
\end{equation}
\noindent where $\nu$ is the elementwise gradient of the $p$-norm defined above, evaluated at $W_k$.
These two dynamical systems are clearly different for generic
$p$ reflecting the presence or absence of a regularization-like
constraint on the dynamics of $V_k$.
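A quick numerical check of this claim (a sketch with made-up vectors; the derivative of $V_k(W_k)$ is taken by finite differences rather than from the formulas above, so the check does not presuppose them):

```python
import numpy as np

w = np.array([1.0, 2.0, -1.5, 0.5, 1.0])   # stands in for (vectorized) W_k
g = np.array([0.3, -1.0, 0.2, 0.7, -0.4])  # stands in for W_k_dot

def nu_p(w, p):
    # gradient of the L_p norm: sign(w_i) (|w_i| / ||w||_p)^(p-1)
    return np.sign(w) * (np.abs(w) / np.linalg.norm(w, ord=p)) ** (p - 1)

def v_of(w, p):
    return w / np.linalg.norm(w, ord=p)

eps = 1e-6
agree = {}
for p in (2, 3):
    # V_k_dot of the reparametrized unconstrained dynamics, via finite differences
    dv_reparam = (v_of(w + eps * g, p) - v_of(w, p)) / eps
    # V_k_dot of the constrained (tangent gradient) dynamics, rescaled by 1/rho^2
    nv = nu_p(w, p)
    S = np.eye(len(w)) - np.outer(nv, nv) / (nv @ nv)
    dv_constrained = (S @ g) / np.linalg.norm(w, ord=p)
    agree[p] = bool(np.allclose(dv_reparam, dv_constrained, atol=1e-4))
```

Up to the $\rho_k^2$ factor discussed below, the two flows coincide for $p=2$ but not for $p=3$.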
As we have seen however, for $p=2$ the 1-layer dynamical system obtained by
minimizing $L$ in $\rho_k$ and $V_k$ with $W_k=\rho_k V_k$ under the constraint
$||V_k||_2=1$, is the weight normalization dynamics
\begin{equation}
\dot{\rho_k}=V_k^T \dot{W_k} \quad
\dot{V_k}= S \rho_k \dot{W_k} ,
\label{WN}
\end{equation}
\noindent which is quite similar to the standard gradient
equations
\begin{equation}
\dot{\rho_k}= V_k^T \dot{W_k} \quad
\dot{v} =\frac{S}{\rho_k} \dot{W_k}.
\label{StandardGrad}
\end{equation}
\begin{figure*}[t!]\centering
\includegraphics[width=1.0\textwidth]{Figures/fig2.pdf}
\caption{\it The top left graph shows testing vs
training cross-entropy loss for networks each
trained on the same data set (CIFAR-10) but with
different initializations, yielding zero
classification error on the training set but different
testing errors. The top right graph shows the same
data, that is testing vs training loss for the same
networks, now normalized by dividing each weight by
the Frobenius norm of its layer. Notice that all
points have zero classification error at
training. The red point on the top right refers to a
network trained on the same CIFAR-10 data set but
with randomized labels. It shows zero classification
error at training and test error at chance
level. The top line is a square-loss regression of
slope $1$ with positive intercept. The bottom line
is the diagonal at which training and test loss are
equal. The networks are 3-layer convolutional
networks. The left panel can be considered as a
visualization of Equation \ref{bound} when the
Rademacher complexity is not controlled. The right
panel is a visualization of the same relation
for normalized networks, that is
$L(\tilde{f}) \leq \hat{L}(\tilde{f}) +
c_1\mathbb{R}_N(\tilde{\mathbb{F}}) + c_2 \sqrt
{\frac{\ln(\frac{1}{\delta})}{2N}}$. Under our
conditions for $N$ and for the architecture of the
network, the terms
$c_1\mathbb{R}_N(\tilde{\mathbb{F}}) + c_2 \sqrt
{\frac{\ln(\frac{1}{\delta})}{2N}}$ represent a small
offset. From
\cite{DBLP:journals/corr/abs-1807-09659}.
}
\label{main}
\end{figure*}
The two dynamical systems differ only by a $\rho_k^2$ factor
in the $\dot{V_k}$ equations. However, the critical points of
the gradient for the $V_k$ flow, that is the points for which
$\dot{V_k}=0$, are the same in both cases since for any $t>0$
$\rho_k(t)>0$ and thus $\dot{V_k}=0$ is equivalent to
$S\dot{W_k}=0$. Hence, gradient descent with unit $L_p$-norm
constraint is equivalent to the standard, unconstrained
gradient descent but only when $p = 2$. Thus
\begin{fact} The standard dynamical system used in deep
learning, defined by
$\dot{W_k}=-\frac{\partial L}{\partial W_k}$, implicitly
respects a unit $L_2$ norm constraint on $V_k$ with
$\rho_k V_k =W_k$. Thus, under an exponential loss, if the
dynamics converges, the $V_k$ represent the minimizer under
the $L_2$ unit norm constraint.
\label{w(T)}
\end{fact}
Thus standard GD implicitly enforces the $L_2$ norm constraint on
$V_k=\frac{W_k}{||W_k||_2}$, consistently with Srebro's results on
implicit bias of GD. Other minimization techniques such as coordinate
descent may be biased towards different norm constraints.
\subsection{Linear networks and rates of convergence}
The case of linear networks ($f(x)=\rho v^T x$)
\cite{2017arXiv171010345S} is an interesting example for our analysis
in terms of the $\rho$ and $v$ dynamics. We start with unconstrained
gradient descent, that is with the dynamical system
\begin{equation}
\dot{\rho}= \frac{1}{\rho} \sum_{n=1}^N e^{-
\rho v^Tx_n} v^Tx_n \quad \dot{v}=\frac{1}{\rho}\sum_{n=1}^N e^{- \rho v^T x_n}
(x_n- v v^T x_n).
\label{Ttt}
\end{equation}
If gradient descent in $v$ converges to $\dot{v}=0$ in finite time,
$v$ satisfies $ v v^T x = x$, where $x= \sum_{j=1}^C \alpha_j x_j$
with positive coefficients $\alpha_j$ and the $x_j$ are the $C$ support
vectors (see \cite{theory_III}). A solution $v^T = ||x|| x^\dagger$
then {\it exists} ($x^\dagger$, the pseudoinverse of $x$, since $x$ is
a vector, is given by $x^\dagger= \frac{x^T}{||x||^2}$). On the other
hand, the operator $T$ in $v(t+1)=T v(t)$ associated with Equation
\ref{Ttt} is non-expanding, because $||v||=1,\ \forall t$. Thus there
is a fixed point $v \propto x$ which is {\it independent of initial
conditions} \cite{Ferreira1996} and unique (in the linear case).
The rates of convergence of the solutions $\rho(t)$ and $v(t)$,
derived in a different way in \cite{2017arXiv171010345S}, may be read
off from the equations for $\rho$ and $v$. It is easy to check that a
general solution for $\rho$ is of the form $\rho \approx C \log t$. A
similar estimate for the exponential term gives
$e^{- \rho v^T x_n} \propto \frac{1}{t}$. Assume for simplicity a
single support vector $x$ with $||x||=1$. We claim that the
error $\epsilon= v-x$ (recall that $v$ converges to $x$) behaves as
$\frac{1}{\log t}$. In fact, writing $v =x+\epsilon$ and plugging it into
the equation for $v$ in \ref{Ttt}, we obtain, to first order in $\epsilon$,
\begin{equation}
\dot{\epsilon}=\frac{1}{\rho} e^{- \rho v^T x} \left(x- (x+\epsilon)
(x+\epsilon)^T x\right) \approx -\frac{1}{\rho} e^{- \rho v^T x} \left( x
\epsilon^T x + \epsilon x^T x\right),
\label{T}
\end{equation}
\noindent which, for $\epsilon$ along the direction of $x$, has the form
$\dot{\epsilon}=-\frac{1}{t \log t}\, 2\epsilon$. Plugging in the ansatz
$\epsilon \propto \frac{1}{\log t}$ makes both sides proportional to
$\frac{1}{t \log^2 t}$. Thus the error
indeed converges as
$\epsilon \propto \frac{1}{\log t}$.
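The logarithmic growth of $\rho$ and the alignment of $v$ with the support vector can be observed in a small simulation (a sketch with an invented unit vector $x$ and a single data point with label $y=1$, not the paper's code): the increase of $\rho$ between step $T$ and step $2T$ approaches $\log 2$, as expected from $\rho \approx \log t$.

```python
import numpy as np

# One support vector x with ||x|| = 1, label y = +1: w_dot = exp(-w.x) x.
x = np.array([0.6, 0.8])
w = np.array([0.05, -0.1])             # small arbitrary initialization
eta = 0.1

rho_T = rho_2T = None
for t in range(1, 200001):
    w = w + eta * np.exp(-(w @ x)) * x  # unconstrained gradient descent step
    if t == 100000:
        rho_T = np.linalg.norm(w)
    if t == 200000:
        rho_2T = np.linalg.norm(w)

cosine = (w @ x) / np.linalg.norm(w)    # alignment of v = w/||w|| with x
log_growth = rho_2T - rho_T             # should be close to log 2
```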
A similar analysis for the weight normalization equations \ref{WN}
considers the same dynamical system with a change in the equation for
$v$, which becomes
\begin{equation}
\dot{v} \propto e^{-\rho} \rho (I- v v^T) x.
\label{T-WN}
\end{equation}
This equation differs by a factor $\rho^2$ from
Equation \ref{T}. As a consequence, Equation \ref{T-WN} is of the form
$\dot{\epsilon}=-\frac{\log t}{t} \epsilon$, with a general solution
of the form $\epsilon \propto t^{-\frac{1}{2}\log t}$. In
summary, {\it GD with weight normalization converges faster to the
same equilibrium than standard gradient descent: the rate for
$\epsilon= v- x$ is $t^{-\frac{1}{2} \log t}$ vs $\frac{1}{\log t}$.}
Our goal was to find
$\lim_{\rho \to \infty} \arg \min_{||V_k||=1, \ \forall k} L(\rho
\tilde{f}) $. We have seen that various forms of gradient descent
enforce different paths in increasing $\rho$ that empirically have
different effects on convergence rate. It will be an interesting
theoretical and practical challenge to find the optimal way, in terms
of generalization and convergence rate, to grow $\rho\rightarrow \infty$.
Our analysis of simplified batch normalization \cite{theory_III}
suggests that several of the considerations we used for
weight normalization should apply to BN as well (in the linear one-layer case BN is
identical to WN). However, BN differs from WN in the multilayer case
in several ways: for instance, it normalizes each unit separately,
that is, each row of the weight matrix at each layer.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Figures/no-overfitting.pdf}
\caption{Empirical and expected error on CIFAR-10 as a
function of the number of neurons in a 5-layer convolutional
network. The expected classification error does not increase
when the number of parameters grows beyond the size of
the training set, in the range we tested.}
\label{no-overfitting}
\end{figure}
\section{Discussion}
A main difference between shallow and deep networks is in terms of
{\it approximation} power or, in equivalent words, of the ability to
learn good representations from data based on the compositional
structure of certain tasks. Unlike shallow networks, deep local
networks -- in particular convolutional networks -- can avoid the
curse of dimensionality in approximating the class of hierarchically
local compositional functions. This means that for such class of
functions deep local networks represent an appropriate hypothesis
class that allows good approximation with a minimum number of
parameters. It is not clear, of course, why many problems encountered
in practice should match the class of compositional functions. Though
we and others have argued that the explanation may be in either the
physics or the neuroscience of the brain, these arguments are not
rigorous. Our conjecture at present is that compositionality is
imposed by the wiring of our cortex and, critically, is reflected in
language. Thus compositionality of some of the most common visual
tasks may simply reflect the way our brain works.
{\it Optimization} turns out to be surprisingly easy to perform for
overparametrized deep networks because SGD will converge with high
probability to global minima that are typically much more degenerate for
the exponential loss than other local critical points.
More surprisingly, gradient descent yields {\it generalization} in
classification performance, despite overparametrization and even in
the absence of explicit norm control or regularization, because
standard gradient descent in the weights enforces an implicit unit
($L_2$) norm constraint on the {\it directions of the weights} in the
case of exponential-type losses.
In summary, it is tempting to conclude that the practical success of
deep learning has its roots in the almost magic synergy of unexpected
and elegant theoretical properties of several aspects of the
technique: the deep convolutional network architecture itself, its
overparametrization, the use of stochastic gradient descent, the
exponential loss, and the homogeneity of the ReLU units and of the
resulting networks.
Of course many problems remain open on the way to develop a full
theory and, especially, in translating it to new architectures. More
detailed results are needed in approximation theory, especially for
densely connected networks. Our framework for optimization is missing
at present a full classification of local minima and their dependence
on overparametrization for general loss functions. The analysis of
generalization should include an analysis of convergence of the
weights for multilayer networks (see \cite{2019arXiv190507325S} and
\cite{DBLP:journals/corr/abs-1906-05890}). A full theory would also
require an analysis of the trade-off between
approximation and estimation error, relaxing the separability
assumption.
\showmatmethods{}
\acknow{We are grateful to Sasha Rakhlin and Nate Srebro for useful
suggestions about the structural lemma and about separating critical
points. Part of the funding is from the Center for Brains, Minds and
Machines (CBMM), funded by NSF STC award CCF-1231216, and part by
C-BRIC, one of six centers in JUMP, a Semiconductor Research
Corporation (SRC) program sponsored by DARPA.}
\showacknow{}
% arXiv:1801.01692 -- Algebraic stories from one and from the other pockets
% Abstract: We present a number of questions in commutative algebra posed on the problem solving seminar in algebra at Stockholm University during the period Fall 2014 - Spring 2017.
\section{The Waring problem for complex-valued forms}
The following famous result on binary forms was proven by J.~J.~Sylvester in 1851. (Below we use the terms ``forms" and ``homogeneous polynomials" as synonyms.)
\begin{Theorem}[Sylvester's Theorem \cite{Syl2}]\label{th:Sylv}
\rm{(i)} A general binary form $f$ of odd degree $k=2s-1$ with complex coefficients can be written as
$$ f(x, y) =\sum_{j=1}^s(\al_jx+\be_jy)^k.$$
\noindent
\rm{(ii)} A general binary form $f$ of even degree $k=2s$ with complex coefficients can be written as
$$ f(x,y)=\la x^k +\sum_{j=1}^s(\al_jx+\be_jy)^k.$$
\end{Theorem}
Sylvester's result was the starting point of the study of the so-called \emph{Waring problem for polynomials} which we discuss below.
\medskip
Let $S = \bC[x_1,\ldots,x_n]$ be the polynomial ring in $n$ variables with complex coefficients. With respect to the standard grading, we have $S = \bigoplus_{d\geq 0} S_d$, where $S_d$ denotes the vector space of all forms of degree $d$.
\begin{defi+}\label{def1}
Let $f$ be a form of degree $k$ in $S$. A presentation of $f$ as a sum of $k$-th powers of linear forms, i.e., $f=l_1^k+\ldots+l_s^k$, where $l_1,\ldots,l_s \in S_1$, is called a \emph{Waring decomposition} of $f$.
The minimal length of such a decomposition is called the \emph {Waring rank} of $f$, and we denote it as $\rk(f)$.
By $\rk^\circ(k,n)$ we denote the Waring rank of a {\em general} complex-valued form of degree $k$ in $n$ variables.
\end{defi+}
\begin{Rmk}
Besides being a natural question from the point of view of algebraic geometry, the Waring problem for polynomials is partly motivated by its celebrated prototype, i.e., the Waring problem for natural numbers. The latter was posed in 1770 by the British number theorist E. Waring,
who claimed that, for any positive integer $k$, there exists a minimal number $g(k)$ such that every natural number can be written as a sum of at most $g(k)$ $k$-th powers of positive integers. The famous four-squares theorem of Lagrange (1770) gives $g(2) = 4$, while the existence of $g(k)$ for every integer $k\ge 2$ is due to D. Hilbert (1909). Exact values of $g(k)$ are currently known only in a few cases.
\end{Rmk}
\subsection{Generic $k$-rank}
In terms of Definition~\ref{def1}, Sylvester's Theorem claims that the Waring rank of a general binary complex-valued form of degree $k$ equals $\left\lfloor \frac{k}{2} \right\rfloor$.
More generally, the important result of J. Alexander and A. Hirschowitz \cite{al-hi} completely describes the Waring rank $\rk^\circ(k,n)$ of general forms of any degree and in any number of variables.
\begin{Theorem}[Alexander-Hirschowitz Theorem, 1995]\label{th:AH} For all pairs of positive integers $(k,n)$, the generic Waring rank $\rk^\circ(k,n)$ is given by
\begin{equation}\label{rgen}
\rk^\circ(k,n)=\left\lceil \frac{\binom{n+k-1}{n-1}}{n} \right\rceil,
\end{equation}
except for the following cases:
\begin{enumerate}
\item $k = 2$, where $\rk^{\circ}(2,n) = n$;
\item $k=4,\; n=3,4,5$, and $k=3,\; n=5$, where $\rk^{\circ}(k,n)$ equals the right-hand side of \eqref{rgen} plus $1$.
\end{enumerate}
\end{Theorem}
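For instance, for $(k,n)=(3,3)$, which is not among the exceptional cases, formula \eqref{rgen} gives
$$\rk^\circ(3,3)=\left\lceil \frac{\binom{5}{2}}{3} \right\rceil = \left\lceil \frac{10}{3} \right\rceil = 4,$$
i.e., a general ternary cubic is a sum of four cubes of linear forms.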
Going further, R.F. and B.S. jointly with G.~Ottaviani considered the following natural version of the Waring problem for complex-valued forms, see \cite{FOS}.
\begin{defi+}
Let $k,d$ be positive integers. Given a form $f$ of degree $kd$, a {\em $k$-Waring decomposition} is a presentation of $f$ as a sum of $k$-th powers of forms of degree $d$, i.e., $f = g_1^k + \cdots + g_s^k$, with $g_i \in S_d$. The minimal length of such an expression is called the {\em $k$-rank} of $f$ and is denoted by $\rk_k(f)$.
We denote by $\rk_k^\circ(kd,n)$ the $k$-rank of a general complex-valued form of degree $kd$ in $n$ variables.
\end{defi+}
In this notation, the case $d = 1$ corresponds to the classical Waring rank, i.e., if $k = \deg(f)$, then $\rk(f) = \rk_{k}(f)$ and $\rk^\circ(k,n)=\rk_k^\circ(k,n)$. Since the case $k = 1$ is trivial, we assume below that $k \geq 2$.
\begin{problem}
Given a triple of positive integers $(k,d,n)$, calculate $\rk_k^\circ(kd,n).$
\end{problem}
The main result of \cite{FOS} states that, for any triple $(k,d,n)$ as above,
\begin{equation}\label{ineq:main}
\rk^\circ_k(kd,n)\le k^{n-1}.
\end{equation}
At the same time, by a simple parameter count, one has an obvious lower bound for $\rk^\circ_k(kd,n)$ given by
\begin{equation}\label{ineq:lower bound}
\rk^\circ_k(kd,n)\geq \left\lceil \frac{\binom{n+kd-1}{n-1}} {\binom{n+d-1}{n-1}} \right\rceil.
\end{equation}
\medskip
A remarkable fact about the upper bound given by \eqref{ineq:main} is that it is independent of $d$. Therefore, since the right-hand side of \eqref{ineq:lower bound} tends to $k^{n-1}$ as $d \to \infty$, the bound in \eqref{ineq:main} is actually sharp for large values of $d$. As a consequence of this remark, for any fixed $n\ge 1$ and $k\ge 2$, there exists a positive integer $d_{k,n}$ such that $\rk^\circ_k(kd,n)= k^{n-1},$ for all $d\ge d_{k,n}$.
In the case of binary forms, it has been proven that \eqref{ineq:lower bound} is actually an equality \cite{Re1, LuOnReSh}. Exact values of $d_{k,n}$, and the behaviour of $\rk_k^\circ(kd,n)$ for $d \leq d_{k,n}$, have also been computed in a few other cases, see \cite[Section 3.3]{On}. These results agree with the following illuminating conjecture suggested by G.~Ottaviani in 2014.
\begin{conjecture}\label{conj:main}
The $k$-rank of a general form of degree $kd$ in $n$ variables is given by
\begin{equation}\label{eq:main}
\rk_k^\circ(kd,n)=\begin{cases} \min \left\{s\ge 1 ~|~ s\binom{n+d-1}{n-1}-\binom {s}{2}\ge \binom{n+2d-1}{n-1}\right\}, & \text{ for } k=2;\\
\min \left\{s\ge 1 ~|~ s\binom{n+d-1}{n-1}\ge \binom{n+kd-1}{n-1}\right\}, & \text{ for } k\ge 3.
\end{cases}
\end{equation}
\end{conjecture}
Observe that, for $k\ge 3$, Conjecture~\ref{conj:main} claims that the na\"ive bound \eqref{ineq:lower bound} obtained by a parameter count is actually sharp, while, for $k = 2$, due to an additional group action there are many {\it defective} cases where the inequality is strict.
\begin{Rmk}
Problems about additive decompositions including the above Waring problems can be usually rephrased geometrically in terms of {\em secant varieties}. In the case of $k$-Waring decompositions, we need to consider the {\em variety of powers} $V_{n,kd}^{(k)}$, i.e., the variety of $k$-th powers of forms of degree $d$ inside the (projective) space of forms of degree $kd$. The {\em $s$-th secant variety} $\sigma_s(V_{n,kd}^{(k)})$ is the Zariski closure of the union of all linear spaces spanned by $s$-tuples of points lying on $V_{n,kd}^{(k)}$. In other words, it is the closure of the set of forms whose $k$-rank is at most $s$. Since the variety of powers is non-degenerate, i.e., it is not contained in any proper linear subspace of the space of forms of degree $kd$, the sequence of secant varieties stratifies the latter space and coincides with it for all sufficiently large $s$. Hence, the $k$-rank of a general form is the smallest value of $s$ for which the $s$-th secant variety of $V_{n,kd}^{(k)}$ coincides with the space of all forms of degree $kd$. Hence, Conjecture \ref{conj:main} can be rephrased as a conjecture about the dimensions of the secant varieties of $V_{n,kd}^{(k)}$. (We refer to \cite[Section 1.3.3]{On} for more details.)
\end{Rmk}
\subsection{Maximal $k$-rank}
A harder problem which is largely open even in the classical case of Waring decompositions, deals with the computation of the $k$-rank of an {\em arbitrary} complex-valued form of degree divisible by $k$.
\begin{defi+}
Given a triple $(k,d,n)$, denote by $\rk^{\max}_k(kd,n)$ the minimal number of terms such that {\em every} form of degree $kd$ in $n+1$ variables can be represented as the sum of at most $\rk^{\max}_k(kd,n)$ $k$-th powers of forms of degree $d$. The number $\rk^{\max}_k(kd,n)$ is called the {\it maximal $k$-rank}.
(Similarly to the above, we omit the subscript when considering the classical Waring rank, i.e., for $d = 1$.)
\end{defi+}
In \cite[Theorem 5.4]{Re1}, B. Reznick proved that the maximal Waring rank of binary forms of degree $k$ equals $k$. Moreover, the maximal value $k$ is attained exactly on the binary forms representable as $\ell_1 \ell_2^{k-1},$ where $\ell_1$ and $\ell_2$ are any two
non-proportional
linear binary forms. (Apparently these claims have been known much earlier, but have never been carefully written down with a complete proof.)
\begin{problem}
Given a triple of positive integers $(k,d,n)$, calculate $\rk_k^{\max}(kd,n).$
\end{problem}
At the moment, we have an explicit conjecture about the maximal $k$-rank only in the case of binary forms.
\begin{conjecture}\label{conj:maxrank}
For any positive integers $k,d$, the maximal $k$-rank $\rk^{\max}_k(kd,2)$ of binary forms equals $k$. Additionally, in the above notation, binary forms representable by $\ell_1 \ell_2^{kd-1}$, where $\ell_1$ and $\ell_2$ are non-proportional linear forms, have the latter maximal $k$-rank.
\end{conjecture}
Conjecture~\ref{conj:maxrank} is obvious for $k=2$ since, for any binary form $f$ of degree $2d$, we can write
\begin{equation} \label{eq:k2}
f = g_1 g_2 = \left(\frac{1}{2} (g_1+g_2)\right)^2 + \left(\frac{i}{2} (g_1-g_2)\right)^2 \text{ with } g_1,g_2 \in S_d.
\end{equation}
The first non-trivial case is the one of binary sextics, i.e., $k=3, d=2$, which has been settled in \cite{LuOnReSh}, where it has also been shown that the $4$-rank of $x_1x_2^7$ is equal to $4$.
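The identity \eqref{eq:k2} behind the case $k=2$ is easy to verify numerically. The following Python sketch is ours; dehomogenized binary forms are stored as dense coefficient lists, and the check is performed on an arbitrarily chosen pair $g_1,g_2$:

```python
def poly_mul(p, q):
    """Product of two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    m = max(len(p), len(q))
    p = p + [0] * (m - len(p))
    q = q + [0] * (m - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    return [c * a for a in p]

g1 = [1, 2, 3]        # 1 + 2x + 3x^2
g2 = [4, -1, 5]       # 4 -  x + 5x^2
lhs = poly_mul(g1, g2)
h1 = scale(0.5, poly_add(g1, g2))            # (g1 + g2) / 2
h2 = scale(0.5j, poly_add(g1, scale(-1, g2)))  # i (g1 - g2) / 2
rhs = poly_add(poly_mul(h1, h1), poly_mul(h2, h2))
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```

The check amounts to the algebraic identity $\frac14\left[(g_1+g_2)^2-(g_1-g_2)^2\right]=g_1g_2$.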
\begin{Rmk} The best known general result about maximal ranks is due to G. Blekherman and Z. Teitler, see \cite{BT}, where they prove that the maximal rank is always at most twice as big as the generic rank. (This fact is true both for the classical ($d=1$) and for the higher ($d\geq 2$) Waring ranks.)
In the classical case of Waring ranks, this bound is (almost) sharp for binary forms, but in many other cases it is rather crude. At present, better bounds are known only in a few special cases of low degrees \cite{BD13, Je14}. To the best of our knowledge, the exact values of the maximal Waring rank are only known for binary forms (classical, see \cite{Re1}), quadrics (classical), ternary cubics (see \cite{Seg42, LT10}), ternary quartics \cite{Kl99}, ternary quintics \cite{DeP15} and quaternary cubics \cite{Seg42}.
\end{Rmk}
\subsection{The $k$-rank of monomials}
Let $m = x_1^{a_1} \cdots x_n^{a_n}$ be a monomial
with $0 < a_1 \leq a_2 \leq \cdots \leq a_n.$ It has been shown in \cite{ca-ch-ge} that the classical Waring rank of $m$ is equal to $\frac{1}{(a_1+1)}\prod_{i=1,\ldots,n} (a_i+1)$.
\medskip
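The closed formula above is immediate to evaluate; the following sketch is ours (the function name is a hypothetical helper):

```python
def waring_rank_monomial(exponents):
    """Classical Waring rank of x_1^{a_1} ... x_n^{a_n}, all a_i > 0:
    prod(a_i + 1) / (a_1 + 1), where a_1 is the smallest exponent."""
    a = sorted(exponents)
    prod = 1
    for e in a:
        prod *= e + 1
    return prod // (a[0] + 1)
```

For example, it gives rank $8$ for $x_1x_2^7$, in agreement with Reznick's description of binary forms of maximal rank as $\ell_1\ell_2^{k-1}$.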
Later E. Carlini and A.O. settled the case of the $2$-rank, see \cite{ca-on}. Namely, if $m$ is a monomial of degree $2d$, then we can write $m=m_1 m_2$, where $m_1$ and $m_2$ are monomials of degree $d$. From identity (\ref{eq:k2}), it follows that the $2$-rank of $m$ is at most two. On the other hand, $m$ has rank one exactly when we can choose $m_1 = m_2$, i.e., when the power of each variable in $m$ is even.
While the cases $k=1$ and $k=2$ are solved, for $k\ge 3$ the question about the $k$-rank of monomials of degree $kd$ is still open. At present, we are only aware of two general results in this direction. Namely, \cite{ca-on} contains the bound
$\rk_k(m) \leq 2^{k-1}$, and recently, S.L., A.O., B.S., together with B. Reznick, have shown that $\rk_k(m) \leq k$ when $d \geq n(k-2)$, see \cite{LuOnReSh}.
Thus, for fixed $k$ and $n$, all but a finite number of monomials of degree divisible by $k$ have $k$-rank at most $k$.
\begin{problem} \label{prob:monrk}
Given $k \geq 3$ and a monomial $m$ of degree $kd$, determine the monomial $k$-rank $\rk_k(m)$.
\end{problem}
In the case of binary forms, a bit more is currently known, which motivates the following question.
\begin{problem}\label{prob:C}
Given $k \geq 3$ and a monomial $x^a y^b$ of degree $a+b=kd$, it is known that $\rk_k(x^{a}y^{b}) \leq \max(s,t)+1$, where $s$ and $t$ are the remainders of the division of $a$ and $b$ by $k$, see \cite{ca-on}. Is it true that the latter inequality is, in fact, an equality?
\end{problem}
\subsection{Degree of the Waring map}
Here again, we concentrate on the case of binary forms (i.e., $n=2$). As we mentioned above, in this case, it is proven that
$$\rk_k^\circ(kd,2)= \left \lceil \frac{\dim S_{kd}}{\dim S_{d}}\right \rceil= \left \lceil\frac{kd+1}{d+1}\right \rceil.$$
\begin{defi+}
We say that a pair $(k,d)$ is {\it perfect} if $\frac{kd+1}{d+1}$ is an integer.
\end{defi+}
All perfect pairs are easy to describe.
\begin{Lemma}\label{lm:perfect} The set of all pairs $(k,d)$ for which
$\frac{kd+1}{d+1}\in\bN$ splits
into the disjoint sequences
$E_j := \{(jd+j+1,d) ~|~ d = 1,2,\ldots \}$. In each $E_j$, the corresponding quotient equals $jd+1$.
\end{Lemma}
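Lemma~\ref{lm:perfect} amounts to the observation that $d\equiv -1 \pmod{d+1}$, hence $kd+1\equiv 1-k \pmod{d+1}$, so $(k,d)$ is perfect exactly when $k\equiv 1 \pmod{d+1}$, i.e., $k=j(d+1)+1=jd+j+1$. A quick numerical confirmation (our sketch):

```python
def is_perfect(k, d):
    """(k, d) is perfect iff (kd + 1) / (d + 1) is an integer."""
    return (k * d + 1) % (d + 1) == 0

# The lemma's parametrization: k = j*(d+1) + 1 with quotient j*d + 1.
for d in range(1, 25):
    for j in range(1, 25):
        k = j * (d + 1) + 1
        assert is_perfect(k, d)
        assert (k * d + 1) // (d + 1) == j * d + 1

# Conversely, perfectness forces k = 1 mod (d+1).
for k in range(2, 60):
    for d in range(1, 30):
        assert is_perfect(k, d) == (k % (d + 1) == 1)
```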
Given a perfect pair $(k,d)$, set $s:=\frac{kd+1}{d+1}$. Consider the map
$$W_{k,d}: S_d \times \ldots \times S_d \to S_{kd}, ~~(g_1,\ldots,g_s) \mapsto g_1^k + \ldots + g_s^k.$$
Let $\widetilde{W}_{k,d}$ be the same map, but defined up to a permutation of the $g_i$'s. We call it the {\it Waring map}. By \cite[Theorem 2.3]{LuOnReSh}, $\widetilde{W}_{k,d}$ is a generically finite map of complex linear spaces of the same dimension. By definition, its {\it degree} is the cardinality of the inverse image of a generic form in $S_{kd}$.
\begin{problem}\label{pr:deg}
Calculate the degree of $\widetilde{W}_{k,d}$ for perfect pairs $(k,d)$.
\end{problem}
For the classical Waring decomposition ($d = 1$), we have a perfect pair if and only if $k$ is odd. From Sylvester's Theorem, we know that in this case the degree of the Waring map is $1$, i.e., the general binary form of odd degree has a {\it unique} Waring decomposition, up to a permutation of its summands.
\begin{Rmk}
For the case of the classical Waring decomposition, the latter problem has also been considered in the case of more variables. In modern terminology, the cases where the general form of a given degree has a unique decomposition up to a permutation of the summands are called {\it identifiable}. Besides the case of binary forms of odd degree, some other identifiable cases are classically known. These are the quaternary cubics (Sylvester's Pentahedral Theorem \cite{Syl2}) and the ternary quintics \cite{Hilb, Pal03, Ri04, MM13}. Recently, F.~Galuppi and M.~Mella proved that these are the only possible identifiable cases, \cite{GM16}.
\end{Rmk}
\begin{Rmk}
Problems dealing with additive decompositions of homogeneous polynomials similar to those we consider in this section, have a very long story going back to J.~J.~Sylvester and the Italian school of algebraic geometry of the late 19-th century. In the last decades, these problems received renewed attention due to their potential applications. Namely, homogeneous polynomials can be naturally identified with {\it symmetric tensors} and in several applied branches of science where such tensors are used, for example, to encode multidimensional data sets, {\it additive decompositions of tensors} play a crucial role as an efficient way to code those.
We refer to \cite{Lan} for an extensive exposition of these connections.
\end{Rmk}
\section{Ideals of generic forms}
Let $I$ be a homogeneous ideal in $S$, i.e., an ideal generated by homogeneous polynomials. The ideal $I$ and the quotient algebra $R = S/I$ inherit the {\it grading} of the polynomial ring.
\begin{defi+}
Given a homogeneous ideal $I \subset S$, we call the function $$
\HF_R(i) := \dim_{\bC} R_i = \dim_{\bC} S_i - \dim_\bC I_i
$$
the {\it Hilbert function} of $R.$
The power series
$$
\HS_R(t) := \sum_{i\in\bN} \HF_R(i) t^i \in \bC[[t]]
$$
is called the {\em Hilbert series} of $R$.
\end{defi+}
Let $I$ be a homogeneous ideal generated by forms $f_1, \dots, f_r$ of degrees $d_1, \dots, d_r$, respectively.
It was shown in \cite{fr-lo} that, for fixed parameters $(n,d_1,\dots, d_r)$, there exists only a finite number of possible Hilbert series for $S/I$, and that
there is a Zariski open subset in the space of coefficients of the $f_i$'s
on which the Hilbert series of $S/I$ is one and the same and, in the appropriate sense, it is minimal among all possible Hilbert series, see below. We call algebras with this Hilbert series {\it generic}.
There is a longstanding conjecture about this minimal Hilbert series formulated by the first author, see \cite{fr}.
\begin{conjecture}[Fr\"oberg's Conjecture, 1985]\label{conj:fr}
Let $f_1, \dots, f_r$ be generic forms of degrees
$d_1, \dots, d_r$, respectively. Then the Hilbert series of the quotient algebra $R = S/(f_1,\ldots,f_r)$ is given by
\begin{equation}\label{eq:RALF}
\HS_R(t)=\left[\frac{\prod_{i=1}^r(1-t^{d_i})}{(1-t)^n}\right]_+.
\end{equation}
Here
$[\sum_{i\ge0}a_it^i]_+:=\sum_{i\ge0}b_it^i$,
with $b_i=a_i$ if $a_j>0$ for all $j\le i$, and $b_i=0$ otherwise. In other words, $[\sum_{i\ge0}a_it^i]_+$ is the truncation of a power series at its first non-positive coefficient.
\end{conjecture}
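The truncated series \eqref{eq:RALF} can be computed with exact integer power-series arithmetic. The sketch below is ours (`froberg_series` is a hypothetical name); it multiplies out the numerator, divides by $(1-t)^n$ via repeated prefix sums, and truncates at the first non-positive coefficient:

```python
def froberg_series(n, degrees, prec):
    """Coefficients of [prod_i (1 - t^{d_i}) / (1 - t)^n]_+ up to degree prec - 1."""
    num = [0] * prec
    num[0] = 1
    for d in degrees:                 # multiply by (1 - t^d)
        new = num[:]
        for i in range(prec - d):
            new[i + d] -= num[i]
        num = new
    ser = num
    for _ in range(n):                # divide by (1 - t): running prefix sums
        for i in range(1, prec):
            ser[i] += ser[i - 1]
    out = []                          # truncate at first non-positive coefficient
    for c in ser:
        if c <= 0:
            break
        out.append(c)
    return out

# Two generic binary quadrics form a complete intersection:
# (1 - t^2)^2 / (1 - t)^2 = (1 + t)^2 = 1 + 2t + t^2.
assert froberg_series(2, [2, 2], 8) == [1, 2, 1]
```

For example, four generic ternary quadrics give the truncated series $1+3t+2t^2$, matching the count $\binom{4}{2}-4=2$ in degree two.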
Conjecture~\ref{conj:fr} has been
proven in the following cases: for $r\le n$ (an easy exercise, since in this case $I$ is a complete intersection); for $n\le2$, \cite{fr}; for $n=3$,
\cite{an3}; and for
$r=n+1$, which follows from \cite{st}. Additionally, in \cite{ho-la} it has been proven that \eqref{eq:RALF}
is correct in the first nontrivial degree $\min_{i=1}^r(d_i+1)$. There are also other special results in the case $d_1=\dots =d_r$, see
\cite{ni, au, fr-ho, mi-mi, ne}.
We should also mention that \cite{fr-lu} contains a survey of the existing results on the generic series for various algebras and also studies the (opposite) problem of finding the maximal Hilbert series for fixed parameters $(n,d_1,\ldots,d_r)$.
\smallskip
It is known that the actual Hilbert series of the quotient ring of any ideal with the same numerical parameters is lexicographically larger than or equal to the conjectured one. This fact implies that if, for given discrete data $(n, d_1, \ldots, d_r)$, one finds just a single example of an algebra with the Hilbert series as in \eqref{eq:RALF}, then Conjecture~\ref{conj:fr} is settled in this case.
\medskip
Although algebras with the minimal Hilbert series constitute a Zariski open set, they are hard to find constructively. We are only aware of two explicit constructions giving the minimal series in the special case $r=n+1$, namely R.~Stanley's choice $x_1^{d_1}, \ldots,x_n^{d_n}, (x_1+\cdots + x_n)^{d_{n+1}}$, and C. Gottlieb's choice $x_1^{d_1}, \ldots,x_n^{d_n}, h_{d_{n+1}}$, where $h_{d}$ denotes the complete homogeneous symmetric polynomial of degree $d$ (private communication). To the best of our knowledge, already in the next case $r=n+2$ there is no concrete guess about how to construct a similar example. There is, however, substantial computer-based evidence pointing towards the possibility of replacing generic forms of degree $d$ by a product of generic forms of much smaller degrees. We present some problems and conjectures related to such pseudo-concrete constructions below.
\subsection{Hilbert series of generic power ideals.} In contrast to the situation occurring in R.~Stanley's result, if we consider ideals generated by more than $n+1$ powers of generic linear forms, there are known examples of $(n,d_1,...,d_r)$ for which such algebras fail to have the Hilbert series as in \eqref{eq:RALF}.
\medskip
Recall that ideals generated by powers of linear forms are usually called {\it power ideals}. Due to their appearance in several areas of algebraic geometry, commutative algebra and combinatorics, they have been studied quite thoroughly. In the next section, we will discuss their relation with the so-called {\it fat points}. (For a more extensive survey of power ideals, we refer to a nice paper by F. Ardila and A. Postnikov \cite{AP10}.)
\medskip
Studying Hilbert functions of generic power ideals, A.~Iarrobino formulated the following conjecture, usually referred to as the Fr\"oberg-Iarrobino Conjecture, see \cite{ia, Ch}.
\begin{conjecture}[Fr\"oberg-Iarrobino Conjecture]\label{conj:fr-ia}
Given generic linear forms $\ell_1, \ldots, \ell_r$ and a positive integer $d$, let $I$ be the power ideal generated by $\ell_1^d,\ldots,\ell_r^d$. Then the Hilbert function of $R = S/I$ is as in \eqref{eq:RALF}, except for the cases $(n,r) = (3,7), (3,8), (4,9), (5,14)$ and possibly for $r = n+2$ and $r = n+3$.
\end{conjecture}
This conjecture is still largely open. In \cite{fr-ho} R.F. and J. Hollman checked it for low degrees and a low number of variables using the first version of the software package {\it Macaulay2}. In the last decades, some progress has been made in reformulating Conjecture~\ref{conj:fr-ia} in terms of ideals of {\it fat points} and {\it linear systems}. We will return to this topic in the next section.
\subsection{Hilbert series of other classes of ideals}
Computer experiments suggest that, in order to generically obtain the Hilbert series as in \eqref{eq:RALF}, we need to replace power ideals by slightly less special ideals.
\medskip
For example, given a partition $\mu = (\mu_1,\ldots,\mu_k) \vdash d$, we call an ideal generated by forms of the type $({\bf l}_1^\mu,\ldots,{\bf l}_r^\mu)$ a {\it $\mu$-power ideal}, where ${\bf l}_i^\mu=l_{i,1}^{\mu_1}\cdots l_{i,k}^{\mu_k}$ and the $l_{i,j}$'s are distinct linear forms.
\begin{problem}\label{problem: F}
For $\mu \neq (d)$, does a generic $\mu$-power ideal have the same Hilbert function as in \eqref{eq:RALF}?
\end{problem}
Computer experiments suggest a positive answer to the latter problem. L. Nicklasson has also conjectured that ideals generated by powers of generic forms of degree $\geq 2$ have the Hilbert series as in \eqref{eq:RALF}.
\begin{conjecture}[\cite{ni}] \label{conj:nic}
For generic forms $g_1,\ldots,g_r$ of degree $d>1$, the ideal $(g_1^k,\ldots,g_r^k)$
has the same Hilbert series as the one generated by $r$ generic forms of degree $dk$. \end{conjecture}
It was observed in \cite[Theorem A.3]{LuOnReSh} that Conjecture \ref{conj:nic} implies Conjecture \ref{conj:main}, connecting the two first sections of the present paper. It was also shown that Conjecture \ref{conj:nic} holds in the case of binary forms by specializing the $g_i$'s to be $d$-th powers of linear forms and applying the fact that generic power ideals in two variables have the generic Hilbert series \cite{GeSh}. The same idea gives a positive answer to Problem \ref{problem: F} in the case of binary forms, by specializing $l_{i,1} = \ldots = l_{i,k}$, for $i = 1,\ldots,r$.
\subsection{Lefschetz properties of graded algebras}
We say that a graded algebra $A$ has the {\it weak Lefschetz property} (WLP) (respectively, the {\it strong Lefschetz property} (SLP)) if the multiplication map $\times l:A_i\rightarrow A_{i+1}$ (respectively, $\times l^k:A_i\rightarrow A_{i+k}$) has the maximal rank, i.e., it is either injective or surjective, for a generic linear form $l$ and all $i$ (resp., for all $i$ and $k$).
(For more references and open problems about the Lefschetz properties, see \cite{MiNa}.)
\begin{problem} It has been conjectured that each complete intersection $R=S/(f_1,\ldots,f_n)$ satisfies the WLP and also the SLP, see \cite{HMNW}. Does the same hold for $R=S/(f_1,\ldots,f_r)$, with $f_1,\ldots,f_r$ being generic forms, and $r > n$?
\end{problem}
It follows from \cite{St} that monomial complete intersections satisfy the SLP. In \cite{BFL},
the following situation has been studied: for the ring $T_{n,d,k}=S/(x_1^d,\ldots,x_n^d)^k$, it is shown that $T_{n,d,k}$ fails the WLP whenever $k\ge d^{n-2}$, $n\ge3$, and $(n,d)\ne(3,2)$.
For $n=3$, there is an explicit conjecture describing when the WLP holds. Additionally, there is some partial information for $n>3$.
\begin{problem} When are the WLP and the SLP true for $T_{n,d,k}$?
\end{problem}
We now introduce the concept of the \emph{$\mu$-Lefschetz properties}.
Let $\mu=(\mu_1,\ldots,\mu_k)$ be a partition of $d$, i.e., $\sum_{i=1}^k\mu_i=d$. We say that an algebra has the {\it $\mu$-Lefschetz property} if $\times{\bf l}^\mu:A_i\rightarrow A_{i+d}$ has maximal rank for all $i$, where ${\bf l}^\mu=l_1^{\mu_1}\cdots l_k^{\mu_k}$, and $l_i$'s are
generic linear forms.
\begin{problem}
For $R= S/(f_1,\ldots,f_r)$, where $f_1,\ldots,f_r$ are generic forms, does $R$ satisfy the $\mu$-Lefschetz property for all partitions $\mu$?
\end{problem}
\section{Symbolic powers}
For a prime ideal $\wp$ in a Noetherian ring $R$, define its {\it $m$-th symbolic power} $\wp^{(m)}$ as $$\wp^{(m)}=\wp^mR_\wp\cap R.$$
It is the $\wp$-primary component of $\wp^m$. For a general ideal $I$ in $R$, its {\it $m$-th symbolic power} is defined as
$I^{(m)}=\cap_{\wp\in{\rm Ass}(I)}(I^mR_\wp\cap R)$.
\subsection{Hilbert functions of fat points.}
Let $I_X$ be the ideal in $\bC[x_1,\ldots,x_n]$ defining a scheme of reduced points $X = P_1 + \ldots + P_s$ in $\mathbb P^{n-1}$, say $I_X = \wp_1 \cap \ldots \cap \wp_s$, where $\wp_i$ is the prime ideal defining the point $P_i$. Then, the $m$-th symbolic power is the ideal $I_X^{(m)} = \wp_1^m \cap \ldots \cap \wp_s^m$, which defines the scheme of {\it fat points} $mP_1 + \ldots + mP_s$.
Ideals of $0$-dimensional schemes have been classical objects of study since the beginning of the last century, and their Hilbert functions are of particular interest. The study of these ideals and the calculation of their Hilbert functions can often be related to the so-called {\it polynomial interpolation problem}. Indeed, the homogeneous part of degree $d$ of the ideal $I_X^{(m)}$ is the space of hypersurfaces of degree $d$ in $\mathbb P^{n-1}$ vanishing at the $P_i$'s to order at least $m$, i.e., the space of polynomials of degree $d$ whose partial derivatives up to order $m-1$ vanish at every $P_i$.
It is well-known that the Hilbert function of a $0$-dimensional scheme is strictly increasing until it reaches the {\it multiplicity} of the scheme, see \cite[Theorem 1.69]{IK06}. Hence, since the degree of an $m$-fat point in $\mathbb P^{n-1}$ is ${n-1+m-1 \choose n-1}$, the expected Hilbert function is $$\HF_{S/I_X}(d) = \min\left\{{n-1+d \choose n-1}, s{n-1+m-1 \choose n-1}\right\}.$$ In the case of simple generic points, i.e., for $m=1$, it is known that the actual Hilbert function is as expected.
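The expected Hilbert function above is a one-liner to evaluate; in the sketch below (ours; the function name is a hypothetical helper) the comment records the classical Alexander-Hirschowitz exceptional case of five double points in the plane, where the actual value in degree $4$ is $14$ rather than the expected $15$, because the double conic through the five points is an unexpected quartic:

```python
from math import comb

def expected_hf_fat_points(n, s, m, d):
    """Expected value in degree d of the Hilbert function of s generic
    m-fat points in P^{n-1}, where n is the number of variables."""
    return min(comb(n - 1 + d, n - 1), s * comb(n + m - 2, n - 1))

# Five generic double points in P^2 against quartics: expected 15,
# but the actual Hilbert function equals 14 (Alexander-Hirschowitz).
assert expected_hf_fat_points(3, 5, 2, 4) == 15
```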
In the case of double points ($m=2$), counterexamples have been known since the end of the 19-th century. In 1995, after a series of important papers, J. Alexander and A. Hirschowitz proved that the classically known examples are the only counterexamples. For higher multiplicity, very little is known at present. In the case of the projective plane, a series of equivalent conjectures has been given by B. Segre \cite{Seg61}, B. Harbourne \cite{Har86}, A. Gimigliano \cite{Gim87} and A. Hirschowitz \cite{Hir89}. These are known as the {\it SHGH-Conjecture}, see \cite{Har00} for a survey of this topic.
\smallskip
{\it Apolarity Theory} is a very useful tool in the study of ideals of fat points, and it connects all the algebraic stories we have told above. In particular, the following lemma is crucial.
(We refer to \cite{IK06} and \cite{Ger96} for an extensive description of this issue.)
\begin{Lemma}[Apolarity Lemma]
Let $X = P_1+\ldots+P_s$ be a scheme of reduced points in $\mathbb P^{n-1}$ and let $L_{1},\ldots, L_{s}$ be linear forms in $\bC[x_1,\ldots,x_n]$ such that, for any $i$, the coordinates of $P_i$ are the coefficients of $L_i$. Then, for every $d\geq m$,
$$
\HF_{S/I^{(m)}_X}(d) = \dim_{\bC}[(L_1^{d-m+1},\ldots,L_s^{d-m+1})]_d.
$$
\end{Lemma}
Using this statement, we obtain that the calculation of the Hilbert function of a scheme of fat points is equivalent to the calculation of the Hilbert function of the corresponding power ideal. In particular, the Fr\"oberg-Iarrobino Conjecture (Conjecture \ref{conj:fr-ia}) can be rephrased as a conjecture about the Hilbert function of ideals of generic fat points.
\medskip
Recently, R.F. raised the question of what happens in the case of ideals of generic fat points in a multi-graded space. A point in a multi-projective space $P \in \mathbb P^{n_1-1}\times\ldots\times \mathbb P^{n_t-1}$ is defined by a prime ideal $\wp$ in the multi-graded polynomial ring $S = \bC[x_{1,1},\ldots,x_{1,n_1};\ldots;x_{t,1},\ldots,x_{t,n_t}] = \bigoplus_{I \in \bN^t} S_I$, where $S_I$ is the vector space of multi-graded polynomials of multi-degree $I = (i_1,\ldots,i_t) \in \bN^t$. A scheme of fat points $X = mP_1+\ldots+mP_s$ is the scheme associated with the multi-graded ideal $\wp_1^m\cap\ldots\cap\wp_s^m$.
\begin{problem}
Given a scheme of generic fat points $X \subset \mathbb P^{n_1-1}\times\ldots\times \mathbb P^{n_t-1}$, what is the multi-graded Hilbert function $\HF_{S/I_X}(I)$, for $I \in \bN^t$?
\end{problem}
This question was first considered by M. V. Catalisano, A. V. Geramita and A. Gimigliano who solved it in the case of double points, i.e., for $m=2$ in $\mathbb P^1 \times \ldots \times \mathbb P^1$. Recently, A.O. jointly with E. Carlini and M. V. Catalisano resolved the case of triple points ($m=3$) in $\mathbb P^1 \times \mathbb P^1$ and computed the Hilbert function for an arbitrary multiplicity except for a finite region in the space of multi-indices, see \cite{CCO17}.
\subsection{Symbolic powers vs. ordinary powers.}
As we mentioned above, if $I$ is the ideal defining a set $X$ of points, the $m$-th symbolic power of $I$ is the ideal of polynomials vanishing up to order $m-1$ at all points in $X$ or, in other words, the space of hypersurfaces which are singular at all points in $X$ up to order $m-1$. For this reason, symbolic powers are interesting from a geometrical point of view, but they are more difficult to study compared to the usual powers which carry less geometrical information. Hence, it is important to find relations between them. Observe that the inclusion $I^m \subset I^{(m)}$ is trivial.
\smallskip
{\it Containment problems} between the ordinary and the symbolic powers of ideals of points have been studied in substantial detail. One particularly interesting question is to understand for which pairs of positive integers $(m,r)$ one has $I^{(m)} \subset I^r$. A very important result in this direction is the fact that, for any ideal $I$ of reduced points in $\mathbb P^n$ and any $r > 1$, we have $I^{(nr)}\subset I^r$. This statement was proven in \cite{ELS01} by L. Ein, R. Lazarsfeld and K. Smith in characteristic $0$ and by M. Hochster and C. Huneke in positive characteristic, see \cite{HH02}. At present, the important question is whether the bound in the latter statement is sharp. In \cite{DSTG13}, M. Dumnicki, T. Szemberg and H. Tutaj-Gasi\'nska provided the first example of a configuration of points such that $I^{(3)} \not\subset I^2$. (We refer to \cite{SS17} for a complete account on this topic.)
\medskip
In the recent paper \cite{GGSVT16}, F. Galetto, A. V. Geramita, Y. S. Shin and A. Van Tuyl defined the {\it $m$-th symbolic defect of an ideal} as the number of minimal generators of the quotient ideal $I^{(m)} / I^m$. If $I$ defines a set of general points in projective space, it was already known that $I^{(m)} = I^m$ if and only if $I$ is a complete intersection. Additionally, in \cite[Theorem 6.3]{GGSVT16} the authors characterize all cases of $s$ points in $\mathbb P^2$ having the $2$-nd symbolic defect equal to $1$. These cases are exactly $s = 3,5,7,8$.
\begin{problem} For the ideal $I$ of $s$ general points in $\mathbb P^{n-1}$, what is the difference between the Hilbert series of the $m$-th symbolic power and the $m$-th ordinary power?
\end{problem}
\section{Miscellanea}
\subsection{Hilbert series of numerical semigroup rings}
Let $\mathcal{S}=\langle s_1,\ldots,s_k\rangle$ be a numerical semigroup, i.e. $\mathcal{S}$ consists of all linear combinations with non-negative integer coefficients of the positive integers
$s_i$, and let $k[x^{s_1},\ldots,x^{s_k}]=k[\mathcal{S}]$ be the semigroup ring. The Hilbert series of $k[\mathcal{S}]$ is of the form $p(t)/q(t)$, where $p,q$ are polynomials with integer coefficients.
A semigroup is called \emph{cyclotomic} if the polynomial $p(t)$ has all its roots in the closed unit disc (which, in fact, implies that they lie on the unit circle). (Detailed information about numerical semigroups can be found in, e.g., \cite{Ci}.)
\begin{conjecture} $\mathcal{S}$ is cyclotomic if and only if $k[\mathcal{S}]$ is a complete intersection.
\end{conjecture}
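One can experiment with this conjecture by computing the numerator $p(t)=(1-t)\sum_{s\in\mathcal S}t^s = 1+(t-1)\sum_{g\notin\mathcal S}t^g$ from the (finitely many) gaps of $\mathcal S$; the following Python sketch is ours:

```python
def semigroup_polynomial(gens, bound=200):
    """Coefficient list of p(t) = (1 - t) * sum_{s in S} t^s for the numerical
    semigroup S generated by gens. Assumes gcd(gens) = 1, so that S has
    finitely many gaps, all below `bound`."""
    reach = [False] * (bound + 1)
    reach[0] = True
    for i in range(bound + 1):        # mark all semigroup elements up to bound
        if reach[i]:
            for g in gens:
                if i + g <= bound:
                    reach[i + g] = True
    gaps = [x for x in range(bound + 1) if not reach[x]]
    # p(t) = 1 + (t - 1) * sum_{g in gaps} t^g
    coeffs = [0] * ((max(gaps) + 2) if gaps else 1)
    coeffs[0] = 1
    for g in gaps:
        coeffs[g] -= 1
        coeffs[g + 1] += 1
    return coeffs
```

For $\mathcal S=\langle2,3\rangle$ (a complete intersection) one gets $1-t+t^2$, whose roots are primitive sixth roots of unity, while for $\mathcal S=\langle3,4,5\rangle$ (not a complete intersection) one gets $1-t+t^3$, which has a real root of absolute value larger than $1$, so that semigroup is not cyclotomic, consistently with the conjecture.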
\subsection{Non-negative forms} The next circle of problems is related to the celebrated article \cite{Hi} of D.~Hilbert and to a number of results formulated in \cite{CLR}.
Denote by $P_{n,m}$ the set of all non-negative real forms, i.e., real homogeneous polynomials of (an even) degree $m$ in $n$ variables which never attain negative values; denote by $\Sigma_{n,m}\subseteq P_{n,m}$ the subset of non-negative forms which can be represented as sums of squares of real forms of degree $\frac{m}{2}$.
(In \cite{Hi} D.~Hilbert proved that $\Delta_{n,m}=P_{n,m}\setminus\Sigma_{n,m}$ is non-empty unless the pair $(n,m)$ is of the form $(n,2)$, $(2,m)$ or $(3,4)$.) Finally, if $\Z(p)$ stands for the real zero locus of a real form $p$, denote by $B_{n,m}$ (resp. $B'_{n,m}$) the supremum of $|\Z(p)|$ over $p\in P_{n,m}$ such that $|\Z(p)|<\infty$
(resp. over $p\in\Sigma_{n,m}$ such that $|\Z(p)|<\infty$). In other words, $B_{n,m}$ is the supremum of the number of zeros of non-negative forms under the assumption that all these zeros are isolated (and similarly for
$B'_{n,m}$). Obviously, $B'_{n,m}\le B_{n,m}$.
\medskip
The following basic question was posed in \cite{CLR}.
\begin{problem} Are $B_{n,m}$ and $B'_{n,m}$ finite for any pair $(n,m)$ with even $m$? \end{problem}
In \cite{CLR} it was shown that the answer to this problem is positive for $n=2,3$ and for the pair $(4,4)$. Relatively recently, in \cite{Stu} the following upper bound for $B_{n,m}$ was established
$$B_{n,m}\le 2\frac{(m-1)^{n+1}-1}{m-2}.$$
However this bound cannot be sharp, as shown in \cite{Ko}. In the case of $B'_{n,m}$, the following guess seems quite plausible and is proven for $n=3$.
\begin{conjecture} For any given pair $(n,m)$ with even $m$, $B'_{n,m}=\left(\frac{m}{2}\right)^{n-1}$.
\end{conjecture}
For $B_{n,m}$, no similar guess is known, but some intriguing information is available in the case $n=3$, see \cite{CLR}. The following problem is related to the classical Petrovskii-Oleinik upper bound on the number of real ovals of real plane algebraic curves.
\begin{problem} Determine $\lim_{m\to\infty}\frac{B_{3,m}}{m^2}$. \end{problem}
The latter limit exists and lies in the interval $\left[\frac{5}{18},\frac{1}{2}\right]$, see \cite{CLR}.
\subsection{Polynomial generation}
Let $p$ be a prime number and let $\mathbb{F}_p$ denote the field with $p$ elements. Consider the two maps
$$\phi: \mathbb{F}_p[x_1,\ldots,x_n] \to \mathbb{F}_p[x_1,\ldots,x_n], f \mapsto \sum_{a \in Z(f)} x^a,$$
$$\psi: \mathbb{F}_p[x_1,\ldots,x_n] \to \mathbb{F}_p[x_1,\ldots,x_n], f \mapsto \sum_{a \in \mathbb{F}_p^n} f(a) x^a.$$
Here $x^a := x_1^{a_1} \cdots x_n^{a_n}$, where each $a_i$ is regarded as an integer, and $Z(f)$ is the zero locus of $f$ in $\mathbb{F}_p^n$, i.e., $Z(f) := \{a \in \mathbb{F}_p^n \,|\, f(a) = 0\}$.
When $p = 2$, $\phi$ is a bijective map on the vector space of polynomials of degree at most one in each variable, and $\phi^4(f) = f$, see \cite{lu}.
The map $\psi$, suggested by M. Boij, is a linear bijective map on the vector space of polynomials with degree at most $p-1$ in each variable, and when $p = 2$, these two maps are closely related in the sense that
$\phi(f) = \psi(f) + \sum_{a \in \mathbb{F}_2^n} x^a.$
Consider now the case $n = 1$ and $p > 2$. The map $\phi$ is no longer a bijection, but the sequence $\phi(f),\phi^2(f),\ldots$ eventually becomes periodic. It is an easy exercise to show that $0 \mapsto 1 + x + \cdots + x^{p-1} \mapsto x \mapsto 1 \mapsto 0$. When $p \leq 17$, this is the only period, i.e., $\phi^{d(f)}(f) = 0$ for some $d(f)$. For $p=71$, we have found a period of length two:
$1 +x^{63}
\mapsto x^{23} + x^{26} + x^{34} + x^{39} + x^{41} + x^{51} + x^{70} \mapsto 1 + x^{63}.$ One can show that the length of a period is always an even number, but it is not clear which even numbers can occur as lengths of periods.
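For $n=1$ the map $\phi$ is easy to implement; the sketch below is ours, with polynomials of degree at most $p-1$ in $\mathbb{F}_p[x]$ encoded as coefficient lists of length $p$. It reproduces the $4$-cycle $0\mapsto 1+x+\cdots+x^{p-1}\mapsto x\mapsto 1\mapsto 0$, and the period of length two for $p=71$ can be checked the same way:

```python
def phi(f, p):
    """phi(f) = sum of x^a over the zeros a of f in F_p; a polynomial of
    degree <= p-1 is encoded as a coefficient list of length p."""
    g = [0] * p
    for a in range(p):
        if sum(c * pow(a, i, p) for i, c in enumerate(f)) % p == 0:
            g[a] = 1
    return g

p = 5
zero = [0] * p
f1 = phi(zero, p)            # every a is a zero of 0:  1 + x + ... + x^{p-1}
f2 = phi(f1, p)              # only a = 1 is a zero:    x
f3 = phi(f2, p)              # only a = 0 is a zero:    1
assert f1 == [1] * p and f2 == [0, 1, 0, 0, 0] and f3 == [1, 0, 0, 0, 0]
assert phi(f3, p) == zero    # 1 has no zeros, so the 4-cycle closes
```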
\begin{problem}
For $n=1$ and given $p$, what are the (lengths of the) possible periods of $\phi$?
\end{problem}
Let us now turn to the map $\psi$ and the case $n=1$. For $p =3$, $\psi^8(f) = f$ for all polynomials $f$ in $\mathbb{F}_3[x]$ of degree at most two. For $p = 5$, the least $i$ such that $\psi^{i} = {\rm Id}$ on the space of polynomials of degree at most four is equal to $124$. For $p = 7$, the corresponding number is $1368$.
\begin{problem}
For $n = 1$ and given $p$, find the minimal positive integer $i$ such that $\psi^i$ is the identity map on the space of polynomials of degree at most $p-1$.
\end{problem}
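Since $\psi$ is linear, its order can be found by iterating it on the monomial basis $1,x,\ldots,x^{p-1}$; a Python sketch (ours, with hypothetical helper names):

```python
def psi(f, p):
    """psi(f) = sum_a f(a) x^a, on coefficient lists of length p."""
    return [sum(c * pow(a, i, p) for i, c in enumerate(f)) % p for a in range(p)]

def psi_order(p):
    """Least i >= 1 with psi^i = Id; by linearity it suffices to track
    the monomial basis. Reported orders: p = 3 -> 8, p = 5 -> 124, p = 7 -> 1368."""
    basis = [[1 if j == i else 0 for j in range(p)] for i in range(p)]
    cur, i = [b[:] for b in basis], 0
    while True:
        cur, i = [psi(f, p) for f in cur], i + 1
        if cur == basis:
            return i
```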
\subsection{Exterior algebras}
Let $f$ be a generic form of even degree $d$ in the exterior algebra $E$ over $\mathbb{C}$ with $n$ generators. G.~Moreno-Soc\'ias and J.~Snellman showed that the Hilbert series of $E/(f)$ is equal to the expected series $[(1+t)^n(1-t^d)]_+$, see \cite{M-S}. When the degree of $f$ is odd, we have $(f)\subseteq{\rm Ann}(f)$. This annihilator ideal shows an unexpected behaviour. The most striking case is when
$(n,d) = (9,3)$. It turns out that $\dim_{\mathbb{C}}({\rm Ann}(f))_3 = 4$, see \cite{lu-ni}, while a naive guess would be that $({\rm Ann}(f))_3$ is spanned by $f$ alone.
Additionally, computer experiments suggest that $(f)$ and ${\rm Ann}(f)$ agree in low degrees.
\begin{problem} Let $f$ be a form of odd degree $d$ in $E$. Is it true that $({\rm Ann}(f))_i = (f)_i$, for $i < (n-d)/2$? \end{problem}
\medskip
We finish our list of problems with the following conjecture stated in \cite{cr-lu-ne}, which connects the question about the Hilbert series of generic forms in the exterior algebra with the Hilbert series of power ideals in the commutative setting.
\begin{conjecture}
Let $f$ and $g$ be generic quadratic forms in $E$ and let $\ell_1$ and $\ell_2$ be two generic linear forms in $S$. Then the Hilbert series of $E/(f,g)$ is equal to the Hilbert series of $S/(x_1^2,\ldots,x_n^2,\ell_1^2,\ell_2^2)$ and is given by
$1 + a(n,1) t + a(n,2) t^2 + \cdots + a(n,s)t^s + \cdots$, where $a(n,s)$ is the number of lattice paths inside the rectangle $(n+2-2s)\times (n+2)$ starting from the bottom-left corner and ending at the top-right corner by using only moves of two types: $(x,y)\rightarrow (x+1,y+1)$ or $(x,y)\rightarrow (x-1,y+1)$.
\end{conjecture}
\medskip\noindent
{\bf Acknowledgements.} The authors want to thank all the participants of the problem-solving seminar at Stockholm University for their contributions and patience. The
fourth author is sincerely grateful to Dr.~Kh.~Khozhasov for pointing out references \cite{Stu, Ko}.
\section*{Reconstructing Point Sets from Distance Distributions}
\noindent arXiv:1804.02465, https://arxiv.org/abs/1804.02465
\medskip

\noindent{\bf Abstract.} We address the problem of reconstructing a set of points on a line or a loop from their unassigned noisy pairwise distances. When the points lie on a line, the problem is known as the turnpike; when they are on a loop, it is known as the beltway. We approximate the problem by discretizing the domain and representing the $N$ points via an $N$-hot encoding, which is a density supported on the discretized domain. We show how the distance distribution is then simply a collection of quadratic functionals of this density and propose to recover the point locations so that the estimated distance distribution matches the measured distance distribution. This can be cast as a constrained nonconvex optimization problem which we solve using projected gradient descent with a suitable spectral initializer. We derive conditions under which the proposed distance distribution matching approach locally converges to a global optimizer at a linear rate. Compared to the conventional backtracking approach, our method jointly reconstructs all the point locations and is robust to noise in the measurements. We substantiate these claims with state-of-the-art performance across a number of numerical experiments. Our method is the first practical approach to solve the large-scale noisy beltway problem where the points lie on a loop.
\subsection{Related Work}
In the noiseless case, Lemke and Werman \cite{Lemke:1988} addressed the turnpike problem via polynomial factorization. Namely, the polynomial $Q_\mathcal{D}(a) = N+\sum_{k=1}^K(a^{d_k}+a^{-d_k})$ is invariant to permutations of the pairwise distances. If one can factorize it as $Q_\mathcal{D}(a)=R(a)R(a^{-1})$ where $R(a)=\sum_{n=1}^N a^{u_n}$, then the point locations can be read off from the exponents. When the distances are all integers, the factorization runs in time polynomial in the degree of $Q_\mathcal{D}(a)$ \cite{Lenstra1982}, which is the largest pairwise distance. However, this approach quickly becomes impractical, and it is brittle in the presence of noise.
The more practical backtracking algorithm by Skiena et al. \cite{Skiena:1990} produces a solution for typical instances in time $\mathcal{O}(n^2\log n)$. It progressively finds the assignment for the remaining largest unassigned distance in $\mathcal{D}$, and adopts the branch-and-bound search strategy to recover the point locations in a depth-first manner. However, there exist examples with exponential runtime \cite{Skiena:1990,ZhangExp:1994}. Abbas and Bahig \cite{Abbas2016} later demonstrated that some of the worst-case scenarios could be avoided by performing a breadth-first search instead. An alternative to clever combinatorial search is to formulate the problem as a binary integer program \cite{Miller:1960,IBARAKI197639,Papadimitriou:1982}, and then relax it to obtain a convex semidefinite program \cite{Dakic:2000}. One drawback of this scheme is that it is computationally infeasible for large-scale problems. In this paper we propose to relax the integer program to a constrained nonconvex optimization problem that can be solved efficiently using projected gradient descent with a spectral initializer.
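To make the backtracking idea concrete, here is a minimal noiseless sketch (not the optimized implementation of Skiena et al.; it assumes integer distances and $N\geq 2$): the largest unassigned distance must be realized by a new point at distance $d$ from $0$ or from the width, and the search backtracks when a candidate's distances are not contained in the remaining multiset.

```python
from collections import Counter

def turnpike(D):
    """Noiseless backtracking sketch: D is the multiset of C(N,2) pairwise
    distances; returns one consistent point set (as a sorted list) or None."""
    D = Counter(D)
    width = max(D)               # the largest distance spans the whole point set
    D[width] -= 1
    if D[width] == 0:
        del D[width]
    X = {0, width}               # anchor the two extreme points

    def place(D):
        if not D:
            return sorted(X)
        d = max(D)               # largest unassigned distance
        for cand in (d, width - d):      # it reaches either 0 or the width
            dists = Counter(abs(cand - x) for x in X)
            if all(D[k] >= v for k, v in dists.items()):
                X.add(cand)
                sol = place(D - dists)   # Counter subtraction drops zeros
                if sol:
                    return sol
                X.remove(cand)           # backtrack
        return None

    return place(D)
```

For the point set $\{0,2,4,7\}$ the distance multiset is $\{2,2,3,4,5,7\}$; the sketch returns a point set with exactly this distance multiset (possibly the mirrored solution).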
To address the noisy case where the turnpike problem becomes NP-hard \cite{Cieliebak:2004}, Skiena and Sundaram proposed a modification of the backtracking algorithm where an interval is associated with each recovered point to account for the uncertainty \cite{Skiena:1994}. As a consequence, the number of backtracking paths could grow exponentially large. Pruning can be performed on the paths when the relative errors in the distances are small; however, it requires careful adaptive tuning and can sometimes fail to return a solution. Our approach naturally incorporates noise into the problem formulation, thus exhibiting better performance compared to the current state-of-the-art backtracking approach.
We mention that the turnpike problem is also related to the problem of string reconstruction from substring compositions which arises in protein mass spectrometry \cite{Acharya2015StringRF,Bulteau:2014,LEE2013}. The advances presented here for the turnpike problem might inspire similar approaches to solve its string variant.
The beltway problem is more difficult than the turnpike problem \cite{Skiena:1990, Lemke2003}. Due to the loop structure, it can no longer be formulated as a polynomial factorization problem. It is also impossible for the backtracking approach to rely on the remaining largest unassigned distance to find the point locations progressively \cite{Lemke2003}. For small problems, Fomin \cite{Fomin:2016:1,Fomin:2016:2} proposed to avoid an exhaustive search in the noiseless case by further removing the redundant distances from $\mathcal{H}$ sequentially, and later extended it to handle noisy measurements \cite{Fomin:2019:3}. To the best of our knowledge, our work in this paper offers an alternative by providing the first practical approach to solve the large-scale noisy beltway problem.
\subsection{Uniqueness}
One complication with the turnpike problem is that the solution is not necessarily unique (up to a relabeling of the points and up to a congruence). Fortunately, the solution to the uDGP in any dimension is known to be \textit{generically} unique, in the sense made precise in the form of the reconstructability tests for the point configurations by Boutin and Kemper in \cite[Theorem 2.6 and Proposition 2.11]{BOUTIN2004709}. For example, if the points are sampled iid from an absolutely continuous probability distribution, then almost surely the distance distribution specifies their geometry uniquely (up to a relabeling and a congruence).
Boutin and Kemper worked with complete distance measurements. Gortler et al. \cite{GUGR2018} later relaxed the completeness assumption and only required the underlying graph to be generically globally rigid \cite{CGGR2010}. Under this sufficient condition, they proved that the reconstruction of a generic point configuration is unique.
Importantly, beyond uniqueness, Boutin and Kemper \cite{BOUTIN2004709} showed that when the multiset $\mathcal{D}$ in the turnpike problem contains only distinct distances, there is a suitably defined neighborhood around each uniquely reconstructable point configuration such that all configurations within the neighborhood are also uniquely reconstructable, and the forward and backward mappings between the different distance multisets are continuous. To the best of our knowledge, there has not been much work on the uniqueness of beltway reconstructions. In the remainder of this paper, we will assume that the measured distances correspond to a uniquely reconstructable configuration.
\subsection{Our Approach}
The combinatorial turnpike problem can be formulated as an assignment problem \cite{Assignment:1957,Burkard:2009} or a general integer program (when the domain is discrete) \cite{Miller:1960,IBARAKI197639,Papadimitriou:1982}. Most of the prior approaches described above try to first find the correct assignments of the distances to pairs of points $\alpha(k)$, and then recover the point locations $u_n$. On the other hand, the approach by Daki\'c \cite{Dakic:2000} adopts the integer programming formulation where the point locations are represented by a binary vector in the noiseless case, and directly recovered via semidefinite programming (SDP). Assignments are then a byproduct of the process.
We proceed along the line of integer programming to solve the turnpike problem and the beltway problem in Sections \ref{sec:main_turnpike} and \ref{sec:main_beltway}, respectively. Instead of relaxing the integer program to a convex SDP, we relax it to a constrained minimization of a nonconvex objective, which leads to the proposed approach that is more efficient and suitable for large-scale problems. Importantly, measurement noise is naturally incorporated into our formulation by smoothing the target distance distribution. A key ingredient in the proposed approach is a suitably constructed initializer inspired by the spectral initialization strategy \cite{Netrapalli2015:RPAM,WF:2015}. We analyze the convergence of the projected gradient method to a global optimum. In order to have a fast method, we also propose a computationally efficient projection onto the relaxed constraint set.
Both the turnpike and beltway problems can be formulated in similar ways. Starting with the easier turnpike problem, we shall present the proposed approach in detail in Section \ref{sec:main_turnpike}, and then demonstrate how it can be adapted to solve the beltway problem as well in Section \ref{sec:main_beltway}. Numerical experiments in Section \ref{sec:exp} show that our method achieves state-of-the-art performance on turnpike recovery, and is the first practical approach to solve the large-scale noisy beltway problem. We conclude this paper with a discussion of our results in Section \ref{sec:con}. The proofs of the formal results can be found in the Appendix.
\subsection{Distance Distribution Matching}
\label{subsec:ddm}
\begin{figure*}[tbp]
\centering
\subfigure[]{
\label{fig:prob_approx_cont}
\includegraphics[width=2.5in]{figures/prob_gaussian_convolve}}
\subfigure[]{
\label{fig:prob_approx_disc}
\includegraphics[width=2.5in]{figures/prob_gaussian_convolve_histogram}}
\caption{(a) The approximated distribution $p(d)$ based on the distance multiset $\mathcal{D}$; (b) The discretized distance distribution $p(y)$ from $p(d)$.}
\label{fig:prob_dist_discretization}
\end{figure*}
Depending on how the distance $d_k$ is measured in various applications, a variety of noise models for $w_k$ may be appropriate \cite{Oppenheim:2009,Vaseghi:2006}. Here we model $w_k$ as iid zero-mean Gaussian noise with variance $\xi^2$
\[w_k\sim\mathcal{N}(0,\xi^2)=\frac{1}{\sqrt{2\pi\xi^2}}\exp\left(-\frac{1}{2\xi^2}w_k^2\right)\,.\]
Ideally, we would like to find a set of point locations so that the \emph{estimated} distance distribution matches the \emph{oracle} distance distribution $g(d)$
\begin{align}
\label{eq:oracle_dist}
g(d)=\frac{1}{K}\cdot\sum_{k=1}^{K}\mathcal{N}\left(d\ \left|\ s_k,\xi^2\right.\right) :=\frac{1}{K}\sum_{k=1}^K\frac{1}{\sqrt{2\pi\xi^2}}\exp\left(-\frac{(d-s_k)^2}{2\xi^2}\right)\,.
\end{align}
Since $g(d)$ is in general unknown, we approximate it here using the following distribution $p(d)$ based on the measured distances in $\mathcal{D}$.
\begin{align}
\label{eq:approx_dist}
p(d)=\frac{1}{K}\cdot\sum_{k=1}^{K}\mathcal{N}\left(d\ \left|\ d_k, \sigma^2\right.\right)\,,
\end{align}
where $\sigma^2$ should be tuned according to an a priori estimate of the level of noise in the data and the grid resolution. As shown in Fig. \ref{fig:prob_dist_discretization}, the distribution $p(d)$ is further discretized to $p(y)$ in order to perform distribution matching with respect to the quantized distance $y$.
\begin{align}
\label{eq:obs_dist}
p(y)=\int_{(y-0.5)\Delta l}^{(y+0.5)\Delta l}p(d)\ \textnormal{d}d\,.
\end{align}
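The construction of $p(y)$ above can be sketched in a few lines; this is a minimal illustration (function name and grid conventions are ours) that evaluates the bin integral with the Gaussian CDF:

```python
import math
import numpy as np

def discretized_distance_distribution(distances, sigma, delta_l, M):
    """p(y): a Gaussian mixture centred at the measured distances d_k,
    integrated over each quantization bin [(y-0.5)*dl, (y+0.5)*dl]."""
    Phi = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))
    y = np.arange(M)
    lo = (y - 0.5) * delta_l
    hi = (y + 0.5) * delta_l
    p = np.zeros(M)
    for dk in distances:
        # mass of N(dk, sigma^2) falling into each bin
        p += Phi((hi - dk) / sigma) - Phi((lo - dk) / sigma)
    return p / len(distances)
```

With a small $\sigma$ relative to $\Delta l$, essentially all of the mass of each mixture component falls into the bin containing its centre, so $p(y)$ approaches the normalized histogram of the quantized distances.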
Similar to \eqref{eq:quad_form_p}, the estimated distribution $q(y)$ can also be expressed in terms of the solution ${\bm{z}}$
\begin{align}
\label{eq:quad_form_q}
q(y)=\frac{1}{K}\cdot{\bm{z}}^T{\bm{A}}_y{\bm{z}}\,,
\end{align}
where $z_m$ is the estimated (unnormalized) probability that a point is located at $l_m$. We can solve for ${\bm{z}}$ by minimizing the mean-squared error between $q(y)$ and $p(y)$, subject to the constraints in \eqref{eq:relaxed_constraint} and \eqref{eq:l1_constraint}:
\begin{align}
\label{eq:constrained_nonconvex}
\begin{split}
\min_{{\bm{z}}}\quad &f({\bm{z}})=\frac{1}{M}\sum_{y=0}^{M-1}\big(q(y)-p(y)\big)^2
\end{split}\\
\label{eq:relaxed_constraint}
\textnormal{subject to}\quad &0\leq z_m\leq 1,\ \forall\ m\in\set{1,\cdots,M}\\
\label{eq:l1_constraint}
&\|{\bm{z}}\|_1 = N\,.
\end{align}
\subsection{Extracting Point Locations from the Estimated Distribution}
In general, the recovered vector ${\bm{z}}$ will not be supported on exactly $N$ indices. In the following we discuss how to extract the $N$ point location estimates when this is the case.
\paragraph{Noiseless case.} If we assume that there is no measurement noise in $d_k$ and no quantization error in the quantized distance $y_k=\left\lfloor\frac{d_k}{\Delta l}\right\rceil$, the vector ${\bm{x}}$ is then binary: ${\bm{x}}\in\set{0,1}^M$. Suppose ${\bm{z}}^\dagger$ is one of the global optimizers of \eqref{eq:constrained_nonconvex} that is different from ${\bm{x}}$ and $f({\bm{z}}^\dagger)=0$. We then have from $q(y=0)=p(y=0)$ that
\[
\|{\bm{z}}^\dagger\|_2^2={{\bm{z}}^\dagger}^T{\bm{A}}_0{\bm{z}}^\dagger = {\bm{x}}^T{\bm{A}}_0{\bm{x}}=\|{\bm{x}}\|_2^2=N\,.
\]
From \eqref{eq:l1_constraint}, we can get
\begin{align}
\label{eq:noiseless_global_opt}
\|{\bm{z}}^\dagger\|_2^2=\|{\bm{z}}^\dagger\|_1=N\,.
\end{align}
If the $m$-th entry $z^\dagger_m\in(0,1)$, then $\|{\bm{z}}^\dagger\|_2^2<\|{\bm{z}}^\dagger\|_1$ which is in contradiction with \eqref{eq:noiseless_global_opt}. Hence $z^\dagger_m\notin(0,1)$, and the global optimizer is integer-valued, ${\bm{z}}^\dagger\in\set{0,1}^M$. The points are estimated at the segments that correspond to the $1$-entries in ${\bm{z}}^\dagger$.
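The inequality used above, $\|{\bm{z}}\|_2^2\leq\|{\bm{z}}\|_1$ for ${\bm{z}}\in[0,1]^M$ with equality exactly at binary vectors, is easy to check numerically (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.random(10) * 0.98 + 0.01   # strictly fractional entries in (0, 1)
assert np.sum(z**2) < np.sum(z)    # any entry in (0,1) makes it strict
b = np.round(z)                    # a binary vector in {0,1}^M
assert np.isclose(np.sum(b**2), np.sum(b))   # equality at binary vectors
```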
If the solution ${\bm{z}}$ is not a global optimizer, then ${\bm{z}}\in[0,1]^M$. The point locations can be extracted in the same way as in the noisy case, which we describe next.
\begin{figure*}[tbp]
\centering
\subfigure{
\label{fig:z_vec}
\includegraphics[width=\textwidth]{figures/z_vec.png}}\\
\subfigure{
\label{fig:h_clustering}
\includegraphics[width=\textwidth]{figures/h_clustering.png}}
\caption{Illustration of agglomerative clustering for $N=5$. The agglomerative clustering produces $8$ clusters; only the centroids of the $5$ clusters with the highest weights are taken as the point locations.}
\label{fig:hierarchical_clustering}
\end{figure*}
\paragraph{Noisy case.} In the noisy case we have ${\bm{x}}\in[0,1]^M$. The $m$-th entry $z_m$ of ${\bm{z}}$ is the \emph{estimated} probability that a point is located at the $m$-th segment $l_m$.
Extracting $N$ point locations from ${\bm{z}}$ can be posed as a clustering problem. As illustrated in Fig. \ref{fig:hierarchical_clustering}, each $l_m$ is viewed as a cluster with the weight $z_m$. We can cluster the $M$ segments using the agglomerative clustering approach \cite{Rokach2005} summarized in Algorithm \ref{alg:agg}. The centroids of the $N$ clusters with the largest weights are taken as the estimated point locations.
\begin{algorithm}[tbp]
\caption{Extracting the point locations via agglomerative clustering }
\label{alg:agg}
\begin{algorithmic}[1]
\REQUIRE The solution ${\bm{z}}$, the smallest distance between two different points $d_{\min}$.
\STATE Treat each segment $l_m$ with a nonzero weight $w_m=z_m$ as one cluster $C_m=\set{l_m}$
\STATE Compute the centroid $c_m$ of every cluster $C_m\in\mathcal{C}=\set{C_1,C_2,\cdots}$
\WHILE{$|\mathcal{C}|>N$}
\STATE Merge the two closest clusters\footnotemark\ $\set{C_i,C_j}$ whose weights satisfy $w_i<1$ and $w_j<1$ and whose centroids satisfy $\|c_i-c_j\|<d_{\min}$ into one cluster $C_i$
\STATE Update the weight $w_i$ and the centroid $c_i$ of the new cluster $C_i$
\IF{the clusters cannot be merged further}
\STATE \textbf{break}
\ENDIF
\ENDWHILE
\STATE {\bfseries Return} the set of centroids $\set{c_1,c_2,\cdots}$
\end{algorithmic}
\end{algorithm}
\footnotetext{Randomly pick a pair of clusters in case of a draw.}
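A simplified one-dimensional version of this extraction step can be sketched as follows (a hypothetical helper, not a faithful reproduction of Algorithm \ref{alg:agg}: it merges the nearest mergeable pair greedily and resolves draws by scan order):

```python
def extract_points(z, grid, N, d_min):
    """Greedy agglomerative extraction: merge nearby low-weight clusters,
    then return the sorted centroids of the N heaviest clusters."""
    clusters = [(grid[m], z[m]) for m in range(len(z)) if z[m] > 0]  # (centroid, weight)
    while len(clusters) > N:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                (ci, wi), (cj, wj) = clusters[i], clusters[j]
                gap = abs(ci - cj)
                if wi < 1 and wj < 1 and gap < d_min and (best is None or gap < best[0]):
                    best = (gap, i, j)
        if best is None:
            break                         # no mergeable pair left
        _, i, j = best
        (ci, wi), (cj, wj) = clusters[i], clusters[j]
        merged = ((ci * wi + cj * wj) / (wi + wj), wi + wj)   # weighted centroid
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    clusters.sort(key=lambda c: -c[1])    # heaviest clusters first
    return sorted(c for c, _ in clusters[:N])
```

For example, mass split over the cells $\{0,1\}$ and $\{5,6\}$ collapses into two weighted centroids, one per underlying point.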
\subsection{Projected Gradient Descent}
Let $\mathcal{S}$ denote the convex set defined by the constraints \eqref{eq:relaxed_constraint},\eqref{eq:l1_constraint}. Given a proper initialization ${\bm{z}}_0$, we propose to solve \eqref{eq:constrained_nonconvex} via the projected gradient descent method:
\begin{align}
\label{eq:pgd_update}
{\bm{z}}_{t+1} = \mathscr{P}_\mathcal{S}\big({\bm{z}}_t-\eta\cdot\nabla f({\bm{z}}_t)\big)\,,
\end{align}
where $\eta>0$ is the step size, $\mathscr{P}_\mathcal{S}(\cdot)$ is the projection of the gradient descent update onto $\mathcal{S}$, and $\nabla f({\bm{z}}_t)$ is the gradient
\begin{align}
\nabla f({\bm{z}}_t) = \frac{2}{MK^2}\sum_{y=0}^{M-1}\left({\bm{z}}_t^T{\bm{A}}_y{\bm{z}}_t-{\bm{x}}^T{\bm{A}}_y{\bm{x}}\right)\cdot\left({\bm{A}}_y+{\bm{A}}_y^T\right){\bm{z}}_t\,,
\end{align}
where both $q(y)$ and $p(y)$ are replaced with their quadratic forms in \eqref{eq:quad_form_p} and \eqref{eq:quad_form_q}. An adaptive strategy can be used to determine some suitable step size $\eta>0$ to minimize the objective function. The proposed approach is finally summarized by Algorithm \ref{alg:pdp_pgd}.
\begin{algorithm}[tbp]
\caption{The noisy turnpike problem via projected gradient descent}
\label{alg:pdp_pgd}
\begin{algorithmic}[1]
\REQUIRE the distance multiset $\mathcal{D}$, the number of points $N$, quantization step size $\Delta l$, gradient descent step size $\eta$, adaptive rate $\beta\in(0,1)$, convergence threshold $\epsilon$, the maximum number of iterations $T$.
\STATE Compute the discretized distance distribution $p(y)$ from $\mathcal{D}$
\STATE Compute the spectral initializer ${\bm{z}}_0$
\FOR{$t=\{0,1,\cdots,T\}$}
\WHILE{true}
\STATE Compute the projected gradient descent update ${\bm{z}}_{t+1}=\mathscr{P}_\mathcal{S}\big({\bm{z}}_t-\eta\cdot\nabla f({\bm{z}}_t)\big)$
\IF {$f({\bm{z}}_{t+1})\leq f({\bm{z}}_t)$}
\STATE Increase the step size $\eta=\frac{1}{\beta}\cdot\eta$
\STATE \textbf{break}
\ELSE
\STATE Decrease the step size $\eta=\beta\cdot\eta$
\ENDIF
\ENDWHILE
\IF {$\frac{\|{\bm{z}}_{t+1}-{\bm{z}}_t\|_2}{\|{\bm{z}}_t\|_2}<\epsilon$}
\STATE Convergence is reached, set ${\bm{z}}={\bm{z}}_{t+1}$
\STATE \textbf{break}
\ENDIF
\ENDFOR
\STATE {\bfseries Return} ${\bm{z}}$
\end{algorithmic}
\end{algorithm}
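The main loop can be sketched as follows, under the assumption that ${\bm{A}}_y$ is the $y$-th superdiagonal shift matrix (so ${\bm{A}}_0={\bm{I}}$ and ${\bm{x}}^T{\bm{A}}_y{\bm{x}}$ counts index pairs at quantized distance $y$ for an $N$-hot ${\bm{x}}$; this matrix form is our reading of \eqref{eq:quad_form_p}). The projection here is a simple bisection stand-in rather than the closed-form projection of Algorithm \ref{alg:ep_box}:

```python
import numpy as np

def make_A(M):
    """Assumed quadratic-form matrices: A_y is the y-th superdiagonal shift."""
    return [np.eye(M, k=y) for y in range(M)]

def project(z, N):
    """Bisection on kappa: project z onto {0 <= s <= 1, ||s||_1 = N}."""
    lo, hi = z.min() - 1.0, z.max() + 1.0
    for _ in range(80):
        kappa = 0.5 * (lo + hi)
        if np.clip(z - kappa, 0.0, 1.0).sum() > N:
            lo = kappa          # threshold too small: too much mass survives
        else:
            hi = kappa
    return np.clip(z - 0.5 * (lo + hi), 0.0, 1.0)

def objective(z, p, A, K):
    return np.mean([(z @ A[y] @ z / K - p[y]) ** 2 for y in range(len(A))])

def gradient(z, p, A, K):
    M = len(A)
    g = np.zeros_like(z)
    for y in range(M):
        g += (2.0 / (M * K)) * (z @ A[y] @ z / K - p[y]) * ((A[y] + A[y].T) @ z)
    return g

def pgd(p, A, N, K, z0, eta=0.1, beta=0.5, T=300, eps=1e-12):
    """Projected gradient descent with the adaptive step-size rule."""
    z = z0.copy()
    for _ in range(T):
        while True:
            z_new = project(z - eta * gradient(z, p, A, K), N)
            if objective(z_new, p, A, K) <= objective(z, p, A, K):
                eta /= beta     # accepted: grow the step size
                break
            eta *= beta         # rejected: shrink the step size
        if np.linalg.norm(z_new - z) <= eps * max(1.0, np.linalg.norm(z)):
            return z_new
        z = z_new
    return z
```

By construction the line search only accepts non-increasing objective values, so the iterates descend monotonically; starting close to the true ${\bm{x}}$ (as the spectral initializer is meant to provide), they converge to it in this toy setting.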
\subsubsection{Spectral Initialization}
\label{subsec:spec_init}
\begin{figure*}[tbp]
\centering
\includegraphics[width=\textwidth]{figures/spec_init_1000}
\caption{An example of the obtained spectral initializer ${\bm{z}}_0$. The entries corresponding to the neighbourhood of the true point locations (illustrated by vertical lines) in general have larger values, indicating higher confidence in those locations.}
\label{fig:spec_init}
\end{figure*}
A suitable initialization is needed to solve the constrained nonconvex problem in \eqref{eq:constrained_nonconvex} via projected gradient descent. Here we can borrow ideas from another problem with quadratic measurements, the phase retrieval problem \cite{Gerchberg:72,Fienup:82}. In phase retrieval, the task is to compute a complex signal ${\bm{v}}\in\mathbb{C}^M$ from quadratic measurements of the form $\mu_i = |\langle{\bm{v}}, {\bm{a}}_i\rangle|^2$ for $1 \leq i \leq I$. Since $\mu_i = {\bm{v}}^* {\bm{a}}_i {\bm{a}}_i^* {\bm{v}}$, spectral initialization for phase retrieval is based on a weighted sum of the rank-1 measurement matrices ${\bm{a}}_i {\bm{a}}_i^*$. Namely, using matrix concentration results, Netrapalli et al. \cite{Netrapalli2015:RPAM} showed that the leading eigenvector of $\sum_{i=1}^I \mu_i {\bm{a}}_i {\bm{a}}_i^*$ is close to the true ${\bm{v}}$. Similar arguments can be used for quadratic systems of full-rank random matrices \cite{QuadFR:2019}.
In our formulation of the turnpike problem \eqref{eq:quad_form_p}, the rank-1 matrices ${\bm{a}}_i {\bm{a}}_i^*$ are replaced by the matrices ${\bm{A}}_y$, which are deterministic and need be neither PSD nor rank-1. We can nevertheless use the spectral initialization strategy. As we shall see from the numerical experiments in Section \ref{sec:exp}, this strategy works well empirically, although a rigorous proof remains an open question.
Let $\beta_y = \frac{p(y)\cdot K}{\|{\bm{A}}_y\|_F}$ and ${\bm{H}}_y=\frac{{\bm{A}}_y}{\|{\bm{A}}_y\|_F}$. We can rewrite \eqref{eq:quad_form_p} as
\begin{align}
\beta_y={\bm{x}}^T{\bm{H}}_y{\bm{x}}=\langle{\bm{H}}_y,\ {\bm{x}}\vx^T\rangle =: \langle {\bm{H}}_y,\ {\bm{X}}\rangle\,.
\end{align}
The set $\left\{{\bm{H}}_y,\ 0\leq y\leq M-1\right\}$ can be viewed as an orthonormal basis for the matrix subspace $\mathrm{span} \, \set{{\bm{H}}_0, \ldots, {\bm{H}}_{M-1}}$,
\begin{align}
\langle {\bm{H}}_i,\ {\bm{H}}_j \rangle=\left\{\begin{array}{l}
1\\
0
\end{array}
\quad
\begin{array}{l}
\textnormal{if }i=j\\
\textnormal{if }i\neq j\,
\end{array}.
\right.
\end{align}
With this interpretation, $\beta_y$ becomes the expansion coefficient of ${\bm{X}}$ in the direction of ${\bm{H}}_y$. The least squares estimate of ${\bm{X}}$ is then
\begin{align}
\widehat{{\bm{X}}}=\sum_{y=0}^{M-1}\beta_y\cdot{\bm{H}}_y\,,
\end{align}
which is nothing but the orthogonal projection of ${\bm{X}}$ onto the subspace spanned by the ${\bm{H}}_y$. Finally, we find the spectral initializer ${\bm{z}}_0$ so that ${\bm{z}}_0 {\bm{z}}_0^T$ is close to $\widehat{{\bm{X}}}$ in Frobenius norm, subject to the constraint that $\|{\bm{z}}_0\|^2_2=N$. Writing the spectral initializer as ${\bm{z}}_0=\sqrt{N}\,{\bm{e}}_{\max}$, where $\|{\bm{e}}_{\max}\|_2=1$, we have
\begin{align}
\begin{split}
{\bm{e}}_{\max}&=\mathrm{argmin}_{{\bm{e}}}\ \|\widehat{{\bm{X}}}-N{\bm{e}}\ve^T\|_F^2\\
&=\mathrm{argmin}_{{\bm{e}}}\ \|\widehat{{\bm{X}}}\|_F^2+N^2\|{\bm{e}}\ve^T\|_F^2-2N\langle\widehat{{\bm{X}}},{\bm{e}}\ve^T\rangle\\
&=\mathrm{argmax}_{{\bm{e}}}\ {\bm{e}}^T\widehat{{\bm{X}}}{\bm{e}}\,.
\end{split}
\end{align}
\begin{itemize}
\item When $\widehat{{\bm{X}}}$ is symmetric, ${\bm{e}}_{\max}$ is the eigenvector of $\widehat{{\bm{X}}}$ that corresponds to the largest eigenvalue.
\item When $\widehat{{\bm{X}}}$ is not symmetric, we use the method of Lagrange multipliers and find the stationary points of the Lagrangian $\mathcal{L}({\bm{e}},\lambda)={\bm{e}}^T\widehat{{\bm{X}}}{\bm{e}}-\lambda({\bm{e}}^T{\bm{e}}-1)$. Setting the gradients to $0$, we have
\begin{align}
\left(\widehat{{\bm{X}}}+\widehat{{\bm{X}}}^T\right){\bm{e}}&=2\lambda{\bm{e}}\\
{\bm{e}}^T{\bm{e}}&=1
\end{align}
The stationary points are given by the eigenvectors of $\widehat{{\bm{X}}}+\widehat{{\bm{X}}}^T$, with ${\bm{e}}_{\max}$ being the one that corresponds to the largest eigenvalue. In practice we find it via the power iteration.
\end{itemize}
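A compact sketch of this initializer, given the matrices ${\bm{A}}_y$ explicitly (in the test below we assume the superdiagonal-shift form; the sign flip at the end is a heuristic of ours to resolve the eigenvector's sign ambiguity):

```python
import numpy as np

def spectral_init(p, A, N, K, iters=300, seed=0):
    """Least-squares estimate Xhat = sum_y beta_y H_y, then the leading
    eigenvector of Xhat + Xhat^T via power iteration, scaled so that
    ||z0||_2^2 = N.  (Power iteration returns the eigenvalue of largest
    magnitude, taken here as a proxy for the largest eigenvalue.)"""
    M = len(A)
    Xhat = np.zeros((M, M))
    for y in range(M):
        nrm = np.linalg.norm(A[y])                 # Frobenius norm
        Xhat += (p[y] * K / nrm) * (A[y] / nrm)    # beta_y * H_y
    S = Xhat + Xhat.T
    e = np.random.default_rng(seed).standard_normal(M)
    for _ in range(iters):
        e = S @ e
        e /= np.linalg.norm(e)
    if e.sum() < 0:                                # fix the sign ambiguity
        e = -e
    return np.sqrt(N) * e
```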
\remove{
\begin{algorithm}[tbp]
\caption{Spectral initialization}
\label{alg:spec_init_max}
\begin{algorithmic}[1]
\REQUIRE the least square estimate $\widehat{{\bm{X}}}$.
\STATE Initialize ${\bm{e}}_0$ with all $1$s.
\FOR{$t=\{0,1,\cdots,T\}$}
\STATE Compute ${\bm{e}}_{t+1}=\frac{\left(\widehat{{\bm{X}}}+\widehat{{\bm{X}}}^T\right){\bm{e}}_t}{\left\|\left(\widehat{{\bm{X}}}+\widehat{{\bm{X}}}^T\right){\bm{e}}_t\right\|_2}$
\IF {Convergence is reached}
\STATE Set ${\bm{z}}_0=\sqrt{{\bm{e}}_{t+1}^T\widehat{{\bm{X}}}{\bm{e}}_{t+1}}\cdot{\bm{e}}_{t+1}$
\STATE \textbf{break}
\ENDIF
\ENDFOR
\STATE {\bfseries Return} ${\bm{z}}_0$
\end{algorithmic}
\end{algorithm}
}
\subsubsection{Efficient Projection onto the $l_1$-ball with Box Constraints}
\begin{figure*}[tbp]
\centering
\includegraphics[height=2.5in]{figures/projection}
\caption{ The gradient descent update ${\bm{z}}={\bm{z}}_t-\eta \nabla f({\bm{z}}_t)$ is projected back to the convex set $\mathcal{S}$.}
\label{fig:projection}
\end{figure*}
As shown in Fig. \ref{fig:projection}, the gradient descent update ${\bm{z}}={\bm{z}}_t-\eta\cdot\nabla f({\bm{z}}_t)$ is projected back onto the convex set $\mathcal{S}$, which is the $l_1$-ball with box constraints defined by \eqref{eq:relaxed_constraint} and \eqref{eq:l1_constraint}. The projection is the solution to the following convex problem
\begin{align}
\label{eq:projection_box_constrains}
\begin{split}
\min_{\bm{s}}\quad&\frac{1}{2}\|{\bm{s}}-{\bm{z}}\|_2^2\\
\textnormal{subject to}\quad&0\leq s_m\leq 1,\ \forall\ m\in[M]\\
&\|{\bm{s}}\|_1=N\,.
\end{split}
\end{align}
Duchi et al. \cite{projection06,Duchi:2008} proposed an efficient algorithm to compute the projection onto the $l_1$-ball when $s_m$ is only lower-bounded by $0$. Gupta et al. \cite{l1_box10, l1_box12} later extended that approach to handle projections with box constraints, when $s_m$ is both lower-bounded and upper-bounded. However, their approach is based on a sequential search for an optimal threshold $\kappa$, which is inefficient and cannot be parallelized for large-scale problems. Building on the work of \cite{projection06}, we address these issues by deriving a closed-form expression for the optimal $\kappa$ in \eqref{eq:kappa_compute} in terms of the entry index $r$ of a sorted ${\bm{s}}$.
Specifically, the Lagrangian of \eqref{eq:projection_box_constrains} is:
\begin{align}
\mathcal{L}=\frac{1}{2}\|{\bm{s}}-{\bm{z}}\|_2^2+\kappa\left(\|{\bm{s}}\|_1-N\right)-{\boldsymbol\zeta}^T\cdot{\bm{s}}+{\boldsymbol\xi}^T\cdot({\bm{s}}-\boldsymbol 1)\,,
\end{align}
where $\kappa\in\mathbb{R}$ is a real Lagrange multiplier, ${\boldsymbol\zeta}\in\mathbb{R}_+^M$, ${\boldsymbol\xi}\in\mathbb{R}_+^M$ are the nonnegative Lagrange multipliers. Taking the subgradient of $\mathcal{L}$ w.r.t. ${\bm{s}}$, and setting it to $0$, we have
\begin{align}
\frac{\partial\mathcal{L}}{\partial s_m}=s_m-z_m+\kappa-\zeta_m+\xi_m=0\,.
\end{align}
Since $\mathcal{S}$ is a closed convex set, the projection solution ${\bm{s}}$ exists and is unique. We need to consider the following two cases.
\begin{enumerate}[label={\arabic*)}]
\item If the solution ${\bm{s}}$ contains only $\set{0,1}$-entries, there are $N$ entries in ${\bm{s}}$ that equal $1$, and their indices correspond to the top $N$ entries of ${\bm{z}}$.
\item If at least one entry of ${\bm{s}}$ is between $0$ and $1$, the complementary slackness KKT condition indicates that when $0< s_m < 1$, the Lagrange multipliers $\zeta_m=\xi_m=0$. We then have:
\begin{align}
\label{eq:project_one}
s_m=z_m-\kappa\quad\quad\textnormal{if } 0< s_m < 1\,.
\end{align}
The above \eqref{eq:project_one} gives us an efficient way to compute $s_m$ if it happens to be between $0$ and $1$: simply subtract the threshold $\kappa$ from $z_m$. In order to find the optimal solution ${\bm{s}}$, we need to compute $\kappa$ and identify the three types of entries of ${\bm{s}}$: those that equal $0$, those that equal $1$, and those that are between $0$ and $1$. We will make use of the following lemma from \cite{projection06} about the entries of ${\bm{s}}$ that equal $0$:
\begin{lemma}
\label{lemma:lower_bound}
\cite {projection06} Let ${\bm{s}}$ be the optimal solution to the minimization problem in (\ref{eq:projection_box_constrains}). Let $i$ and $j$ be two indices such that $z_i>z_j$. If $s_i=0$ then $s_j$ must be $0$ as well.
\end{lemma}
Similarly, we can prove the following lemma about the entries of ${\bm{s}}$ that equal $1$ (the proof can be found in Appendix \ref{proof:lemma:upper_bound}).
\begin{lemma}
\label{lemma:upper_bound}
Let ${\bm{s}}$ be the optimal solution to the minimization problem in (\ref{eq:projection_box_constrains}). Let $i$ and $j$ be two indices such that $z_i>z_j$. If $s_j=1$ then $s_i$ must be $1$ as well.
\end{lemma}
Since reordering the entries of ${\bm{z}}$ does not change the value of (\ref{eq:projection_box_constrains}), and adding a constant to ${\bm{z}}$ does not change the solution of (\ref{eq:projection_box_constrains}), without loss of generality we can assume that the entries of ${\bm{z}}$ are sorted in non-increasing order and satisfy $z_1\geq z_2\geq\cdots\geq z_M\geq N$. Lemmas \ref{lemma:lower_bound} and \ref{lemma:upper_bound} imply that for the optimal solution ${\bm{s}}$:
\begin{itemize}
\item The entries of ${\bm{s}}$ are in a non-increasing order.
\item The first $\rho$ entries of ${\bm{s}}$ satisfy $0<s_m\leq 1$; the rest of the entries are $0$s.
\end{itemize}
Since at least one $s_m\in(0,1)$, we have $\rho>N$, and at most $N-1$ entries of ${\bm{s}}$ can equal $1$. Suppose the first $r-1$ entries of ${\bm{s}}$ are all $1$s; then the following must hold for $1\leq r\leq N<\rho$
\begin{align}
\label{eq:cst_r}
&0<z_r-\kappa< 1\\
\label{eq:cst_rm1}
&1\leq z_{r-1}-\kappa,\quad\textnormal{if }2\leq r\leq N<\rho\,.
\end{align}
\begin{algorithm}[tbp]
\caption{Efficient projection onto the $l_1$-ball with box constraints}
\label{alg:ep_box}
\begin{algorithmic}[1]
\STATE Shift ${\bm{z}}$ s.t. $z_m\geq N$, $\forall\ m\in\set{1,\cdots,M}$; and sort ${\bm{z}}$ in a non-increasing order.
\FOR{$r=1:N$}
\STATE Construct ${\bm{v}}$ out of ${\bm{z}}$ by removing the first $r-1$ entries, and compute $\rho_v$ according to \cite{Duchi:2008}:\\
$\rho_v=\max\left\{l\in[N-r+1]\,:\, v_l-\textstyle\frac{1}{l}\left(\textstyle\sum_{m=1}^lv_m-(N-r+1)\right)>0\right\} $
\STATE Compute $\kappa_v=\frac{1}{\rho_v}\left(\sum_{m=1}^{\rho_v}v_m-(N-r+1)\right)$
\STATE Check if $(\rho_v, \kappa_v)$ satisfy \eqref{eq:cst_r} by examining $\widehat{s}_r=z_r-\kappa_v$
\IF{$0<\widehat{s}_r< 1$}
\IF{$r=1$}
\STATE Set $\kappa=\kappa_v$, $\rho=\rho_v+r-1$ and \textbf{break}
\ELSE
\STATE Check if $(\rho_v, \kappa_v)$ satisfy \eqref{eq:cst_rm1} by examining $\widehat{s}_{r-1}=z_{r-1}-\kappa_v$
\IF{$\widehat{s}_{r-1}\geq 1$}
\STATE Set $\kappa=\kappa_v$, $\rho=\rho_v+r-1$ and \textbf{break}
\ENDIF
\ENDIF
\ELSE
\STATE \textbf{continue}
\ENDIF
\ENDFOR
\IF{$(r,\rho,\kappa)$ can be found}
\STATE Compute ${\bm{s}}=\max\{{\bm{z}}-\kappa, 0\}$ followed by ${\bm{s}}=\min\{{\bm{s}},1\}$
\ELSE
\STATE Compute ${\bm{s}}$ by setting the top $N$ entries of ${\bm{z}}$ to $1$ and the remaining entries to $0$
\ENDIF
\STATE {\bfseries Return} ${\bm{s}}$
\end{algorithmic}
\end{algorithm}
We can write the sum of ${\bm{s}}$ as follows:
\begin{align}
\sum_{m=1}^Ms_m=\sum_{m=1}^\rho s_m=(r-1)+\sum_{m=r}^\rho (z_m-\kappa)=N\,.
\end{align}
We then have:
\begin{align}
\label{eq:kappa_compute}
\kappa=\frac{1}{\rho-r+1}\left(\sum_{m=r}^\rho z_m-(N-r+1)\right)\,.
\end{align}
Finally, we can write the entries of the minimizing ${\bm{s}}$ as
\begin{align}
\label{eq:min_s}
s_m=\left\{
\begin{array}{l}
1, \\
z_m-\kappa, \\
0,
\end{array} \quad
\begin{array}{l}
\textnormal{if }m\leq r-1\\
\textnormal{if }r\leq m\leq \rho\\
\textnormal{if }\rho+1\leq m\leq M\,.
\end{array}
\right.
\end{align}
If $r$ is known, we can find the value of $\rho$ efficiently using the approach in \cite{projection06,Duchi:2008}, and thus identify the three types of entries in ${\bm{s}}$. The threshold $\kappa$ and the solution ${\bm{s}}$ can be then computed using \eqref{eq:kappa_compute} and \eqref{eq:min_s} respectively. According to the following lemma (proved in Appendix \ref{proof:lemma:unique_r}), we can find $r$ by checking the integers in the set $\set{1,\ldots,N}$ one by one or in parallel until the computed $(\rho, \kappa)$ satisfy the two constraints \eqref{eq:cst_r} and \eqref{eq:cst_rm1}.
\begin{lemma}
\label{LEMMA:UNIQUE_R}
If the solution ${\bm{s}}$ has at least one entry $s_m\in(0,1)$, there is one and only one $r\in\set{1,\ldots,N}$ that produces the $(\rho,\kappa)$ satisfying \eqref{eq:cst_r} and \eqref{eq:cst_rm1}.
\end{lemma}
\end{enumerate}
In practice we do not know beforehand what the solution ${\bm{s}}$ looks like. Given the uniqueness of the solution, we can start by looking for the right $(r,\rho,\kappa)$-values. If they can be found, ${\bm{s}}$ is computed using \eqref{eq:min_s}. If they cannot be found, ${\bm{s}}$ contains only $\set{0,1}$-entries and can be obtained straightforwardly. The proposed approach to perform efficient projection onto the $l_1$-ball with box constraints is summarized in Algorithm \ref{alg:ep_box}.
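The case analysis can be exercised numerically. The sketch below replaces the efficient search of Algorithm \ref{alg:ep_box} with a brute-force scan over $(r,\rho)$: each candidate pair yields a $\kappa$ from the closed-form sum constraint \eqref{eq:kappa_compute}, and the KKT-derived ordering checks accept only the true projection. A bisection projection serves as the reference:

```python
import numpy as np

def project_bisect(z, N):
    """Reference projection onto {0 <= s <= 1, ||s||_1 = N} by bisection on kappa."""
    lo, hi = z.min() - 1.0, z.max() + 1.0
    for _ in range(100):
        kappa = 0.5 * (lo + hi)
        if np.clip(z - kappa, 0.0, 1.0).sum() > N:
            lo = kappa
        else:
            hi = kappa
    return np.clip(z - 0.5 * (lo + hi), 0.0, 1.0)

def project_closed_form(z, N, tol=1e-10):
    """Brute-force closed-form route: for each (r, rho), compute kappa from the
    sum constraint and accept when the KKT ordering constraints hold."""
    order = np.argsort(-z)      # indices sorting z in non-increasing order
    zs = z[order]
    M = len(z)
    for r in range(1, N + 1):                 # entries 1..r-1 equal 1
        for rho in range(r, M + 1):           # entries r..rho lie in (0, 1)
            kappa = (zs[r - 1:rho].sum() - (N - r + 1)) / (rho - r + 1)
            ok = tol < zs[r - 1] - kappa < 1.0 - tol        # largest interior entry
            ok = ok and zs[rho - 1] - kappa > tol           # smallest interior entry
            if r >= 2:
                ok = ok and zs[r - 2] - kappa >= 1.0 - tol  # entries set to 1
            if rho < M:
                ok = ok and zs[rho] - kappa <= tol          # entries set to 0
            if ok:
                s = np.clip(zs - kappa, 0.0, 1.0)
                out = np.empty(M)
                out[order] = s                 # undo the sorting
                return out
    # no interior entry: the solution is binary with 1s at the top-N entries
    out = np.zeros(M)
    out[order[:N]] = 1.0
    return out
```

Because \eqref{eq:projection_box_constrains} is a projection onto a closed convex set, its solution is unique, so any $(r,\rho,\kappa)$ passing all of the checks must reproduce the bisection result.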
\subsection{Convergence Analysis}
\label{subsec:cvg}
In this section we study the convergence behavior of the proposed approach in the neighbourhood $\mathcal{E}(\tau)$ around a global optimizer ${\bm{x}}$, defined as
\[\mathcal{E}(\tau)=\set{{\bm{z}}\ |\ \|{\bm{z}}-{\bm{x}}\|_2<\tau\,,\ {\bm{z}}\in\mathcal{S}}\,.\]
\paragraph{Noiseless recovery.} In this case we have ${\bm{x}}\in\set{0,1}^M$. There exists some $\tau>0$ that depends on ${\bm{x}}$, such that if the $t$-th iterate ${\bm{z}}_t\in\mathcal{E}(\tau)$, the projected gradient descent update in \eqref{eq:pgd_update} is guaranteed to converge linearly to ${\bm{x}}$.
\begin{theorem}
\label{THM:CVG_SED}
In the noiseless case, let ${\bm{h}}={\bm{z}}_t-{\bm{x}}$ and ${\bm{B}}_y={\bm{A}}_y+{\bm{A}}_y^T$. If ${\bm{z}}_t$ satisfies
\[
\|{\bm{h}}\|_2=\|{\bm{z}}_t-{\bm{x}}\|_2<\tau=\left(2-\frac{1}{q}\right)\cdot\sqrt{\frac{\lambda_{\bm{E}}}{4}}\,,
\]
where $q\in\big(\frac{1}{2},1\big)$ is some fixed constant and $\lambda_{\bm{E}}>0$ depends on the matrix ${\bm{E}}=\sum_{y=0}^{M-1}{\bm{B}}_y{\bm{x}}\vx^T{\bm{B}}_y^T$, the projected gradient descent update in \eqref{eq:pgd_update} converges linearly to ${\bm{x}}$,
\begin{equation}
\label{eq:converge}
\|{\bm{z}}_{t+1}-{\bm{x}}\|_2<\mu^{\frac{1}{2}}\|{\bm{z}}_t-{\bm{x}}\|_2\,,
\end{equation}
where $\mu\in(0,1)$ and the step size $\eta>0$ both depend on $\left\{q,\ \tau,\ {\bm{z}}_t\right\}$.
\end{theorem}
As proved in Appendix \ref{proof:thm:cvg_sed}, the size of the convergence neighbourhood varies for different signals ${\bm{x}}$. According to Lemma \ref{LEMMA:E_MIN}, $\lambda_{\bm{E}}$ can be computed via the following convex program:
\begin{align}
\begin{split}
\lambda_{\bm{E}} &= \min_{\widehat{{\bm{h}}}\in\mathcal{G}}\ \widehat{{\bm{h}}}^T{\bm{E}}\widehat{{\bm{h}}}=\min_{\widehat{{\bm{h}}}\in\mathcal{G}}\ \textstyle\sum_{y=0}^{M-1}\left(\widehat{{\bm{h}}}^T{\bm{B}}_y{\bm{x}}\right)^2\,,
\end{split}
\end{align}
where $\widehat{{\bm{h}}}=\frac{{\bm{z}}-{\bm{x}}}{\|{\bm{z}}-{\bm{x}}\|_1}$, ${\bm{z}}\in\mathcal{S}$, ${\bm{z}}\neq{\bm{x}}$, and $\mathcal{G}$ is the convex set defined in Lemma \ref{LEMMA:E_MIN}. Note that $\lambda_{\bm{E}}>0$ in the noiseless case. To see this, let us assume that $\lambda_{\bm{E}}=0$. We then have
\begin{align}
\label{eq:noiseless_lambda_E_zero}
({\bm{z}}-{\bm{x}})^T{\bm{B}}_y{\bm{x}}=0,\ \forall\ y\in\set{0,\cdots,M-1}.
\end{align}
Using ${\bm{B}}_0=2{\bm{I}}$, where ${\bm{I}}$ is the identity matrix, we obtain ${\bm{z}}^T{\bm{x}}={\bm{x}}^T{\bm{x}}=N$. Since ${\bm{z}}\in\mathcal{S}$ and ${\bm{x}}$ is a binary vector containing exactly $N$ entries equal to $1$, the vector ${\bm{z}}$ must equal ${\bm{x}}$ to ensure ${\bm{z}}^T{\bm{x}}=N$. This contradicts the assumption that ${\bm{z}}\neq{\bm{x}}$\footnote{If ${\bm{z}}={\bm{x}}$, then we already have a global optimal solution.}, hence $\lambda_{\bm{E}}\neq 0$. Since ${\bm{E}}$ is positive semidefinite, we have $\lambda_{\bm{E}}\geq 0$, and therefore $\lambda_{\bm{E}}>0$ and $\tau>0$.
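As a sanity check, $\lambda_{\bm{E}}$ can be computed numerically on a small toy instance. The sketch below is an illustration only, not our implementation: it assumes the shift structure $({\bm{A}}_y)_{i,i+y}=1$ (consistent with ${\bm{B}}_0=2{\bm{I}}$ used above) and minimizes $\widehat{{\bm{h}}}^T{\bm{E}}\widehat{{\bm{h}}}$ over the set $\mathcal{G}$ of Lemma \ref{LEMMA:E_MIN} with a generic solver.

```python
import numpy as np
from scipy.optimize import minimize

M = 6
x = np.array([1., 0., 1., 0., 0., 1.])        # toy binary signal, N = 3
A = [np.eye(M, k=y) for y in range(M)]        # assumed shift structure; A_0 = I, so B_0 = 2I
B = [a + a.T for a in A]
E = sum(np.outer(b @ x, b @ x) for b in B)    # E = sum_y B_y x x^T B_y^T

r = np.where(x == 0, 1.0, -1.0)               # sign vector from the lemma
bounds = [(0.0, 0.5) if xi == 0 else (-0.5, 0.0) for xi in x]
cons = [{"type": "eq", "fun": lambda h: h.sum()},      # sum_i h_i = 0
        {"type": "eq", "fun": lambda h: r @ h - 1.0}]  # ||h||_1 = r^T h = 1
h0 = np.array([-0.25, 0.25, -0.25, 0.25, 0.0, 0.0])    # feasible starting point
res = minimize(lambda h: h @ E @ h, h0, method="SLSQP",
               bounds=bounds, constraints=cons)
lam_E = res.fun   # strictly positive, as argued above
```

Note that the ${\bm{B}}_0$ term alone already forces $\widehat{{\bm{h}}}^T{\bm{E}}\widehat{{\bm{h}}}\geq(2{\bm{x}}^T\widehat{{\bm{h}}})^2=1$ for every feasible $\widehat{{\bm{h}}}$, so the solver must return a value of at least $1$.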
\paragraph{Noisy recovery.} In this case we have ${\bm{x}}\in[0,1]^M$. From the proof of Theorem \ref{THM:CVG_SED}, we know that $\lambda_{\bm{E}}>0$ is a sufficient condition for the convergence neighbourhood to exist. We next show how the signal ${\bm{x}}$ plays a role in ensuring $\lambda_{\bm{E}}>0$. First, let us assume $\lambda_{\bm{E}}=0$. From \eqref{eq:noiseless_lambda_E_zero}, we have
\begin{align}
\label{eq:noisy_lambda_E_zero}
{\bm{x}}^T{\bm{B}}_y{\bm{z}} = {\bm{x}}^T{\bm{B}}_y{\bm{x}},\quad\forall\ y\in\set{0,\cdots,M-1}.
\end{align}
Letting ${\bm{s}}_y^T={\bm{x}}^T{\bm{B}}_y$ and ${\bm{S}}^T=\left[{\bm{s}}_0\ {\bm{s}}_1\ \cdots\ {\bm{s}}_{M-1}\right]$, we can rewrite \eqref{eq:noisy_lambda_E_zero} as follows:
\begin{align}
\label{eq:noisy_R_z_x}
{\bm{S}}({\bm{z}}-{\bm{x}})={\bm{0}}\,.
\end{align}
Let $\mathcal{V}$ and $\mathcal{T}$ denote the following two sets
\begin{align}
\mathcal{V} &= \textnormal{Null}({\bm{S}})\ \backslash\ \set{{\bm{0}}}\\
\mathcal{T} &= \left\{{\bm{h}}\ |\ {\bm{h}}={\bm{z}}-{\bm{x}}\right\}\,.
\end{align}
If $\mathcal{V}\cap\mathcal{T}=\varnothing$, then \eqref{eq:noisy_R_z_x} forces ${\bm{z}}-{\bm{x}}={\bm{0}}$, i.e. ${\bm{z}}={\bm{x}}$. This contradicts the assumption that ${\bm{z}}\neq{\bm{x}}$, hence $\lambda_{\bm{E}}\neq0$. Since ${\bm{E}}$ is positive semidefinite, we further have $\lambda_{\bm{E}}>0$ and $\tau>0$. Thus $\mathcal{V}\cap\mathcal{T}=\varnothing$ is a sufficient condition for the convergence neighbourhood to exist.
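The condition can be checked numerically in a simple special case: if ${\bm{S}}$ has full rank, then $\textnormal{Null}({\bm{S}})=\set{{\bm{0}}}$, so $\mathcal{V}$ is empty and the condition holds trivially. A toy sketch, assuming for concreteness the shift structure $({\bm{A}}_y)_{i,i+y}=1$ (consistent with ${\bm{B}}_0=2{\bm{I}}$ above) and a degenerate single-entry signal:

```python
import numpy as np

M = 6
A = [np.eye(M, k=y) for y in range(M)]      # assumed shift structure: ones on the y-th superdiagonal
B = [a + a.T for a in A]                    # B_0 = 2I, matching the text
x = np.zeros(M); x[0] = 1.0                 # toy signal with a single unit entry
S = np.stack([b @ x for b in B])            # rows s_y^T = x^T B_y
full_rank = np.linalg.matrix_rank(S) == M   # full rank => Null(S) = {0} => V is empty
```

For this particular ${\bm{x}}$ the rows of ${\bm{S}}$ are $2{\bm{e}}_0,{\bm{e}}_1,\cdots,{\bm{e}}_{M-1}$, so ${\bm{S}}$ is full rank; for a general ${\bm{x}}\in[0,1]^M$ the rank test must be run per instance.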
\subsection{Noiseless Partial Digestion on Real Genome Data}
\begin{figure*}[t]
\centering
\includegraphics[height=2.5in]{figures/partial_digestion}
\caption{Partial digestion of a DNA strand with a restriction enzyme when $N=5$.}
\label{fig:partial_digestion}
\end{figure*}
As illustrated in Fig. \ref{fig:partial_digestion}, the DNA strands are partially digested at $N$ restriction sites by the restriction enzymes, producing all $\tbinom{N}{2}$ possible fragments. Here we perform experiments on \emph{E. Coli} K12 MG1655 genome data from the GenBank$^{\tiny{\text{\textregistered}}}$ assembly \cite{E:Coli:K12:MG1655}, a nucleotide sequence of length $4,641,652$. The four letters \texttt{A}, \texttt{C}, \texttt{G}, \texttt{T} represent the four nucleotide bases of the DNA strand \cite{NucleotidesSymbol:1985}. The restriction enzymes used in the experiments and the numbers of restriction sites (including the two ends \texttt{5'} and \texttt{3'} as dummy restriction sites) are listed in Table \ref{tab:restriction_enzyme}. Note that the recognition sequence could also appear in reverse order, depending on the direction in which the nucleotide sequence is read.
Since individual nucleotide bases cannot be further digested, the unlabeled pairwise distances are all integers in this case, and the DNA sequence has a total of $M=4,641,653$ equally spaced possible locations for the restriction sites. Note that the matrix ${\bm{A}}_y$ has a simple structure and thus need not be stored during computation. Using our proposed approach, we can successfully reconstruct all the locations of the sites in Table \ref{tab:restriction_enzyme}.
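The measurement multiset of a partial digestion is easy to generate: with the two ends included as dummy sites, every pair of cut sites contributes one fragment. A minimal sketch with hypothetical site positions (counted in bases, so all lengths are integers):

```python
from itertools import combinations

sites = [0, 7, 12, 30, 45]   # hypothetical restriction-site positions (N = 5, ends included)
fragments = sorted(b - a for a, b in combinations(sites, 2))
# C(5, 2) = 10 integer fragment lengths; the largest is the full strand length
```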
\begin{table}[tbp]
\caption{The list of restriction enzymes used in the partial digest experiments.}
\label{tab:restriction_enzyme}
\centering
\begin{tabular}{llllll}
\toprule
Enzyme &Recognition sequence &$N$ &Enzyme &Recognition sequence &$N$ \\ \midrule
\multirow{2}{*}{SmaI} &\texttt{5'---CCC \textcolor{magenta}{|} GGG---3'} & \multirow{2}{*}{$495$} &\multirow{2}{*}{BamHI} &\texttt{5'---G \textcolor{magenta}{|} GATCC---3'} & \multirow{2}{*}{$512$}\\
&\texttt{3'---GGG \textcolor{magenta}{|} CCC---5'} & & &\texttt{3'---CCTAG \textcolor{magenta}{|} G---5'} &\\ \bottomrule
\end{tabular}
\vspace{-3mm}
\end{table}
\subsection{Turnpike Recovery on Simulated Data}
\label{sec:exp:turnpike}
In the turnpike recovery experiments, where the points are located on a line, we compare the proposed approach with the state-of-the-art backtracking approach of \cite{Skiena:1994} through simulated noisy recovery experiments. We first uniformly sample $N=10$ points from the interval $[0,1]$ with the minimum pairwise distance between two different points set to $d_{\min}=1e^{-2}$ and the maximum pairwise distance set to $d_{\max}=1$. The length $L$ of the line $\boldsymbol l$ thus equals $d_{\max}$. The quantization step is set to $\Delta l=1e^{-3}$ to balance quantization error against computational complexity, creating $M=\frac{L}{\Delta l}=1e^3$ possible locations for the $10$ points. The distance measurement $d_k$ is corrupted with white Gaussian noise, $w\sim\mathcal{N}(0,\xi^2)$. We control the noise level by varying the standard deviation of the noise: $\xi\in\{0,1e^{-3}, 3e^{-3}, 5e^{-3}, 7e^{-3}, 9e^{-3}\}$. The results obtained when $\xi=0$ correspond to the case where there is only quantization error and no measurement noise.
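The measurement model of this experiment can be sketched as follows (hypothetical code; the rescaling step that enforces $d_{\max}=1$ is our assumption about how the sampling is realized):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
N, d_min, dl, xi = 10, 1e-2, 1e-3, 1e-3

# rejection-sample N points in [0, 1] until all pairwise gaps exceed d_min
while True:
    pts = np.sort(rng.uniform(0.0, 1.0, N))
    if np.min(np.diff(pts)) >= d_min:
        break
pts = (pts - pts[0]) / (pts[-1] - pts[0])    # enforce d_max = 1 (assumed rescaling)

d = np.array([b - a for a, b in combinations(pts, 2)])  # C(N, 2) unlabeled distances
d_noisy = d + rng.normal(0.0, xi, d.size)               # white Gaussian measurement noise
locations = np.round(pts / dl).astype(int)              # quantized onto the grid of step dl
```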
For the proposed approach, the unlabeled pairwise distance measurements are collected and extended to form the multiset $\mathcal{D}$. As discussed in section \ref{subsec:ddm}, the parameter $\sigma$ in the approximated distribution $p(d)$ is unknown and can be tuned in practice to obtain the best performance. In the experiments, $\sigma$ is tuned in the interval $(0, d_{\min}=1e^{-2})$, producing a candidate solution for each $\sigma$. We choose the solution whose distance distribution is closest to the observed distance distribution in terms of the earth mover's distance \cite{EMD:2001}. The final recovered point locations $\{\widehat{u}_1,\widehat{u}_2,\cdots,\widehat{u}_N\}$ are obtained using the agglomerative clustering method described in section \ref{subsec:ddm}. For each noise level specified by $\xi$, $100$ random simulations are performed and the number of correctly recovered points is recorded for each random run.
For the backtracking approach, the search for every distance $d_k$ is performed within the interval $[d_k-\Delta d,\ d_k+\Delta d]$. To make a fair comparison, we ensure that both approaches evaluate the distance $d_k$ within roughly the same range, choosing $\Delta d=5\sigma_{\max}=5e^{-2}$, where $\sigma_{\max}$ is the largest $\sigma$ tuned by the proposed approach. We note that the best results are obtained by choosing $\Delta d=1$, i.e. the maximum pairwise distance; however, this essentially amounts to an exhaustive search over all possible paths and is impractical when the number of points $N$ and the number of possible locations $M$ are large. Since there are only $10$ points to be recovered in this case, we also compute the solution obtained via exhaustive search as a comparison, which corresponds to the best solution one can hope to achieve given noisy measurements.
\begin{figure*}[tbp]
\centering
\subfigure{
\label{fig:n10_compare}
\includegraphics[height=2.2in]{figures/compare_methods_10.pdf}}
\subfigure{
\label{fig:n100_compare}
\includegraphics[height=2.2in]{figures/compare_methods_100.pdf}}
\caption{The distribution and the mean of the number of correctly recovered points across $100$ random runs in the \emph{``turnpike''} recovery experiments using the proposed approach ({\bfseries P}), the backtracking approach ({\bfseries B}) and the exhaustive search ({\bfseries E}). In each random run, $N$ points are uniformly sampled from the interval $[0,1]$. When $N=10$, the smallest distance between two different points is set to $d_{\min}=1e^{-2}$. When $N=100$, we set $d_{\min}=1e^{-4}$. The distances are further corrupted with white Gaussian noise $w\sim\mathcal{N}(0,\xi^2)$, where we control $\xi<d_{\min}$.}
\label{fig:tp_compare}
\end{figure*}
\begin{figure*}[tbp]
\centering
\subfigure{
\label{fig:n10_compare_bw}
\includegraphics[height=2.2in]{figures/compare_methods_10_bw.pdf}}
\subfigure{
\label{fig:n100_compare_bw}
\includegraphics[height=2.2in]{figures/compare_methods_100_bw.pdf}}
\caption{The distribution and the mean of the number of correctly recovered points across $100$ random runs in the \emph{``beltway''} recovery experiments using the proposed approach ({\bfseries P}). In each random run, $N$ points are uniformly sampled from a loop of length $L=d_{\min}+d_{\max}$, where the largest pairwise distance $d_{\max}$ is set to $1$, the smallest distance $d_{\min}$ between two different points is set to $1e^{-2}$ when $N=10$ and $1e^{-4}$ when $N=100$. The distances are further corrupted with white Gaussian noise $w\sim\mathcal{N}(0,\xi^2)$, where we control $\xi<d_{\min}$.}
\label{fig:bw_compare}
\end{figure*}
The recovered point locations can be matched to the true point locations efficiently using the Hungarian algorithm \cite{Hungarian:1955}. If the distance between a recovered location $\widehat{u}_n$ and the true location $u_n$ is less than half the smallest pairwise distance $d_{\min}$, the recovery of the $n$-th point is considered a success. The recovery results when $N=10$ are shown in Fig. \ref{fig:tp_compare}: the distribution and the mean of the number of correctly recovered points across $100$ random runs are shown for each approach. We can see that the proposed approach is significantly more robust to noise than the conventional backtracking approach, and offers a competitive alternative to the exhaustive search.
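The matching and scoring step can be sketched with \texttt{scipy}'s Hungarian-algorithm implementation (a minimal illustration with made-up locations):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def count_correct(u_hat, u, d_min):
    """Match recovered to true locations, then count matches within d_min / 2."""
    cost = np.abs(u_hat[:, None] - u[None, :])
    rows, cols = linear_sum_assignment(cost)    # Hungarian algorithm
    return int(np.sum(cost[rows, cols] < d_min / 2))

u_true = np.array([0.0, 0.3, 0.31, 0.9])        # made-up ground-truth locations
u_rec  = np.array([0.301, 0.9002, 0.001, 0.5])  # made-up recovered locations
n_ok = count_correct(u_rec, u_true, d_min=1e-2) # 3 points land within d_min / 2 of their match
```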
To test how the two approaches scale to larger problems, we then uniformly sample $N=100$ points from the interval $[0,1]$ as before, with the minimum pairwise distance set to $d_{\min}=1e^{-4}$ and the maximum pairwise distance set to $d_{\max}=1$. The distance measurement $d_k$ is also corrupted with white Gaussian noise $w\sim\mathcal{N}(0,\xi^2)$, where $\xi\in\{0,1e^{-5}, 3e^{-5}, 5e^{-5}, 7e^{-5}, 9e^{-5}\}$. The quantization step is set to $\Delta l=1e^{-5}$, creating $M=\frac{L}{\Delta l}=1e^5$ possible locations for the $100$ points.
For the proposed approach, the standard deviation $\sigma$ in the noise model is tuned in the interval $\sigma\in(0,d_{\min}=1e^{-4})$. For the backtracking approach, the search interval half-width is chosen as $\Delta d=5\sigma_{\max}=5e^{-4}$. Since $N$ and $M$ are much larger in this case, we are not able to perform an exhaustive search for comparison. The recovery results when $N=100$ are shown in Fig. \ref{fig:tp_compare}. We can see that the proposed approach is more robust and has a greater advantage over the backtracking approach on large-scale problems. When the noise level is high, the backtracking approach is not able to produce solutions, whereas the proposed approach can still produce partially correct solutions.
\subsection{Beltway Recovery on Simulated Data}
\label{sec:exp:beltway}
We next use the proposed approach to perform the beltway recovery experiments, where the points lie on a loop. To the best of our knowledge, our approach is the first practical approach that can solve the large-scale beltway problem efficiently. Note that the exhaustive search is impractical even when $N$ is small but $M$ is large \cite{Lemke2003}. Hence we only present the recovery results obtained using the proposed approach here. We uniformly sample $N$ points from a loop of length $L=d_{\min}+d_{\max}$, where $d_{\min}$ is the minimum distance between two different points and $d_{\max}$ is the maximum pairwise distance. When $N=10$, we set $d_{\min}=1e^{-2}$ and $d_{\max}=1$. The distance $d_k$ is also corrupted with white Gaussian noise: $w_k\sim\mathcal{N}(0,\xi^2)$, where $\xi\in\set{0,1e^{-3}, 3e^{-3}, 5e^{-3},7e^{-3},9e^{-3}}$. The quantization step is set to $\Delta l=1e^{-3}$, creating $M=\frac{L}{\Delta l}=1.01e^3$ possible locations for the $10$-point case. When $N=100$, we set $d_{\min}=1e^{-4}$ and $d_{\max}=1$. The standard deviation of the white Gaussian noise is chosen from $\xi\in\set{0,1e^{-5},3e^{-5},5e^{-5},7e^{-5},9e^{-5}}$ as before, and the quantization step is set to $\Delta l=1e^{-5}$, creating $M=\frac{L}{\Delta l}=1.0001e^5$ possible locations for the $100$-point case. The recovery results are shown in Fig. \ref{fig:bw_compare}. We can see that the proposed approach is able to reconstruct all the point locations correctly when there is only quantization error and no measurement noise, i.e. $\xi=0$. When measurement noise is added to the distances, the proposed approach can still produce a partial reconstruction.
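For reference, the beltway measurement model differs from the turnpike one only in that each pairwise distance is taken along the shorter arc of the loop. A hypothetical sketch:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
L, N = 1.01, 10                                # loop length d_min + d_max for the N = 10 case
u = rng.uniform(0.0, L, N)                     # points on the loop
d = np.array([min(abs(a - b), L - abs(a - b))  # shorter-arc distance between each pair
              for a, b in combinations(u, 2)])
```

Note that a shorter-arc distance can never exceed half the loop length, which is why the measurement range differs from the turnpike setting.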
\subsection{Comparison of Initialization Schemes}
\begin{figure*}[tbp]
\centering
\subfigure{
\label{fig:init_tp_n100_compare}
\includegraphics[width=.48\textwidth]{figures/compare_methods_init_100.pdf}}
\subfigure{
\label{fig:init_bw_n100_compare}
\includegraphics[width=.48\textwidth]{figures/compare_methods_init_100_bw.pdf}}
\caption{The distribution and the mean of the number of correctly recovered points across $100$ random runs in the turnpike and beltway recoveries comparing the \emph{``three initialization schemes''}: the spectral initialization ({\bfseries S}), the random initialization ({\bfseries R}), and the uniform initialization ({\bfseries U}). In each random run, $N=100$ points are uniformly sampled from the interval $[0,1]$, with the smallest distance between two different points set to $d_{\min}=1e^{-4}$. The distances are further corrupted with white Gaussian noise $w\sim\mathcal{N}(0,\xi^2)$, where we control $\xi<d_{\min}$.}
\label{fig:init_compare}
\end{figure*}
A spectral initialization scheme is adopted in the proposed approach to solve the nonconvex turnpike and beltway recoveries. It is meant to provide a good initializer that highlights the possible point locations. Here we put it to the test and compare it with two other initialization schemes, i.e. the ``random'' initialization and the ``uniform'' initialization. In the random initialization scheme, the entries of the initializer ${\bm{z}}_0$ are generated independently according to the Gaussian distribution $\mathcal{N}(0,0.01)$. In the uniform initialization scheme, the entries of ${\bm{z}}_0$ are set to all ones. We should note that the initializers from all three schemes are projected to the convex set $\mathcal{S}$ defined by \eqref{eq:relaxed_constraint} and \eqref{eq:l1_constraint} before they can be used with the projected gradient descent. Following the same $N=100$ settings as in sections \ref{sec:exp:turnpike} and \ref{sec:exp:beltway}, we perform simulated noisy turnpike and beltway recoveries using the three initialization schemes. The recovery results are shown in Fig. \ref{fig:init_compare}. For the turnpike recovery, the spectral initialization is more robust than the other two schemes. For the beltway recovery, the spectral initialization and the random initialization perform almost equally well, and they both perform better than the uniform initialization.
\section{Proofs for Projected Gradient Descent}
\subsection{Proof of Lemma \ref{lemma:upper_bound}}
\label{proof:lemma:upper_bound}
Suppose, for contradiction, that $s_i<1$. We construct a vector $\widetilde{{\bm{s}}}\in\mathbb{R}^M$ from ${\bm{s}}$ by swapping the entries $s_i$ and $s_j$, i.e. $\widetilde{s}_i=s_j$ and $\widetilde{s}_j=s_i$; $\widetilde{{\bm{s}}}$ also satisfies the constraints in (\ref{eq:projection_box_constrains}). Since $z_i>z_j$ and $s_j=1$, we then have:
\begin{align}
\begin{split}
\|{\bm{s}}-{\bm{z}}\|_2^2-\|\widetilde{{\bm{s}}}-{\bm{z}}\|_2^2&=(s_i-z_i)^2+(s_j-z_j)^2-(s_j-z_i)^2-(s_i-z_j)^2\\
&=2(1-s_i)(z_i-z_j)\\
&> 0\,.
\end{split}
\end{align}
This is in contradiction with the fact that ${\bm{s}}$ is the minimizer of (\ref{eq:projection_box_constrains}). Hence $s_i$ must be $1$.
\subsection{Proof of Lemma \ref{LEMMA:UNIQUE_R}}
\label{proof:lemma:unique_r}
Let $\mathcal{S}$ denote the convex set defined by the constraints $0\leq s_m\leq 1$, $\forall\ 1\leq m\leq M$ and $\|{\bm{s}}\|_1=N$. Note that the entries of ${\bm{z}}$ and ${\bm{x}}$ are sorted in non-increasing order. We proceed in the following two steps:
\begin{enumerate}[label={\arabic*)}]
\item Since $\mathcal{S}$ is non-empty, the projection onto it exists, i.e. there is one $r\in\set{1,\cdots,N}$ that produces the $(\rho,\kappa)$ that satisfy $1\leq z_{r-1}-\kappa$ if $2\leq r\leq N<\rho$ and $0<z_r-\kappa<1$. In fact, since $\mathcal{S}$ is a closed convex set, the projection is also unique.
\item Without loss of generality, suppose that there are two different $r_1<r_2\in\set{1,\cdots,N}$ that produce the two pairs $(\rho_1,\kappa_1|r_1)$ and $(\rho_2,\kappa_2|r_2)$ that satisfy the constraints (\ref{eq:cst_r}) and (\ref{eq:cst_rm1}). We have:
\begin{gather*}
r_1<r_2 \,\Rightarrow\, r_1\leq r_2-1 \,\Rightarrow\, z_{r_2-1}-\kappa_1\leq z_{r_1}-\kappa_1< 1 \,\Rightarrow\, z_{r_2-1}-1< \kappa_1\\
1< z_{r_2-1}-\kappa_2 \,\Rightarrow\, \kappa_2< z_{r_2-1}-1\,.
\end{gather*}
Hence $\kappa_2<\kappa_1$. We further have:
\begin{gather*}
z_{\rho_1}-\kappa_1>0 \,\Rightarrow\, z_{\rho_1}>\kappa_1\\
z_{\rho_2+1}-\kappa_2\leq 0 \,\Rightarrow\, z_{\rho_2+1}\leq\kappa_2\\
z_{\rho_2+1}\leq\kappa_2<\kappa_1< z_{\rho_1} \,\Rightarrow\, z_{\rho_2+1} < z_{\rho_1}\,.
\end{gather*}
Hence $\rho_2+1>\rho_1\,\Rightarrow\,\rho_2\geq\rho_1$.
\begin{enumerate}
\item If $r_2\leq\rho_1$, we can find the upper bound for the sum of the first $\rho_1$ entries of ${\bm{s}}_1$:
\begin{align}
\label{eq:r_rho_neq_1}
\begin{split}
r_1-1+\sum_{m=r_1}^{\rho_1}(z_m-\kappa_1)=\,&r_1-1+\sum_{m=r_1}^{r_2-1}(z_m-\kappa_1)+\sum_{m=r_2}^{\rho_1}(z_m-\kappa_1)\\
<\,&r_1-1+\sum_{m=r_1}^{r_2-1}1+\sum_{m=r_2}^{\rho_1}(z_m-\kappa_1)\\
<\,&r_2-1+\sum_{m=r_2}^{\rho_1}(z_m-\kappa_2)\\
\leq\,&r_2-1+\sum_{m=r_2}^{\rho_2}(z_m-\kappa_2)\,.
\end{split}
\end{align}
\item If $r_2>\rho_1$, we can compute:
\begin{align}
\label{eq:r_rho_neq_2}
\begin{split}
r_1-1+\sum_{m=r_1}^{\rho_1}(z_m-\kappa_1)\,\leq\, &r_1-1+\sum_{m=r_1}^{\rho_1}1\\
=\,&\rho_1\\
\leq\,&r_2-1\\
<\,&r_2-1+\sum_{m=r_2}^{\rho_2}(z_m-\kappa_2)\,.
\end{split}
\end{align}
\end{enumerate}
Let ${\bm{s}}_1,{\bm{s}}_2$ denote the solutions of \eqref{eq:projection_box_constrains} produced by $r_1,r_2$ respectively. Both (\ref{eq:r_rho_neq_1}) and (\ref{eq:r_rho_neq_2}) show that $\|{\bm{s}}_1\|_1<\|{\bm{s}}_2\|_1$. This contradicts the fact that they have the same $\ell_1$ norm, i.e. $\|{\bm{s}}_1\|_1=\|{\bm{s}}_2\|_1=N$. Hence $r_1=r_2$: there is only one $r\in\set{1,\cdots,N}$ that produces a pair $(\rho,\kappa)$ satisfying the constraints (\ref{eq:cst_r}) and (\ref{eq:cst_rm1}).
\end{enumerate}
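The unique $(\rho,\kappa)$ above can also be found numerically: since $\sum_m\max(0,\min(1,z_m-\kappa))$ is non-increasing in $\kappa$, a simple bisection on $\kappa$ recovers the same projection. The sketch below is an alternative to the closed-form search over $r$, not the procedure in the lemma:

```python
import numpy as np

def project_onto_S(z, N, iters=100):
    """Euclidean projection onto {s : 0 <= s <= 1, ||s||_1 = N} by bisecting on kappa."""
    lo, hi = z.min() - 1.0, z.max()          # sum is >= M >= N at lo and 0 <= N at hi
    for _ in range(iters):
        kappa = 0.5 * (lo + hi)
        if np.clip(z - kappa, 0.0, 1.0).sum() > N:
            lo = kappa                        # raise the threshold to shrink the sum
        else:
            hi = kappa
    return np.clip(z - 0.5 * (lo + hi), 0.0, 1.0)

s = project_onto_S(np.array([2.0, 0.4, 0.3, -1.0]), N=2)   # kappa = -0.15 here
```

In this example the bisection converges to $\kappa=-0.15$, giving $s=(1,0.55,0.45,0)$, which satisfies both the box and the $\ell_1$ constraint.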
\section{Proofs for Convergence Analysis}
\subsection{Lemma~\ref{LEMMA:E_MIN}}
\label{proof:lemma:e_min}
\begin{lemma}
\label{LEMMA:E_MIN}
Let ${\bm{B}}_y={\bm{A}}_y+{\bm{A}}_y^T$, ${\bm{E}}=\sum_{y=0}^{M-1}{\bm{B}}_y{\bm{x}}{{\bm{x}}}^T{\bm{B}}_y^T$, and let $\mathcal{S}$ be the convex set defined by the constraints \eqref{eq:relaxed_constraint} and \eqref{eq:l1_constraint}. The following problem is convex for all ${\bm{z}}\in\mathcal{S}$ with ${\bm{z}}\neq{\bm{x}}$:
\begin{align}
\label{eq:lambda_min_val}
\begin{split}
\lambda({\bm{E}}) &= \min_{{\bm{z}}\in\mathcal{S}, {\bm{z}}\neq{\bm{x}}}\,\frac{1}{\|{\bm{z}}-{\bm{x}}\|_1^2}({\bm{z}}-{\bm{x}})^T{\bm{E}}({\bm{z}}-{\bm{x}})\\
&=\min_{\widehat{{\bm{h}}}\in\mathcal{G}}\,\widehat{{\bm{h}}}^T{\bm{E}}\widehat{{\bm{h}}}\,,
\end{split}
\end{align}
where $\lambda({\bm{E}})>0$, $\widehat{{\bm{h}}}=\frac{{\bm{z}}-{\bm{x}}}{\|{\bm{z}}-{\bm{x}}\|_1}$, and $\mathcal{G}$ is the convex set defined by the following constraints:
\begin{align}
\label{eq:hh_con_2}
&\sum_{i=1}^M\widehat{h}_i=0\\
\label{eq:hh_con_3}
&\widehat{h}_i\in[0,\,0.5]\quad\textnormal{if $x_i=0$}\\
\label{eq:hh_con_4}
&\widehat{h}_i\in[-0.5,\,0]\quad\textnormal{if $x_i=1$}\\
\label{eq:hh_con_1}
&\|\widehat{{\bm{h}}}\|_1={\bm{r}}^T\widehat{{\bm{h}}}=1\,,
\end{align}
where ${\bm{r}}\in\{-1,1\}^M$ depends on ${\bm{x}}$ and is defined as follows:
\begin{align}
r_i=\left\{
\begin{array}{l}
1\\
-1
\end{array}
\quad
\begin{array}{l}
\textnormal{if $x_i=0$}\\
\textnormal{if $x_i=1$}\,.
\end{array}
\right.
\end{align}
\end{lemma}
\begin{proof}
Since ${\bm{E}}=\sum_{y=0}^{M-1}{\bm{B}}_y{\bm{x}}{{\bm{x}}}^\textrm{T}{\bm{B}}_y^\textrm{T}$, we can see that $\widehat{{\bm{h}}}^T{\bm{E}}\widehat{{\bm{h}}}=\sum_y\left(\widehat{{\bm{h}}}^T{\bm{B}}_y{\bm{x}}\right)^2\geq 0$, $\forall\ \widehat{{\bm{h}}}\in\mathbb{R}^M$. Hence ${\bm{E}}$ is positive-semidefinite. We define the following set $\mathcal{H}$:
\begin{align}
\label{eq:H_set_def}
\mathcal{H}=\left\{\widehat{{\bm{h}}}\,\left|\,\widehat{{\bm{h}}}=\frac{1}{\|{\bm{z}}-{\bm{x}}\|_1}({\bm{z}}-{\bm{x}}),\quad\forall {\bm{z}}\in\mathcal{S}\,,{\bm{z}}\neq{\bm{x}}\right.\right\}\,,
\end{align}
where $\mathcal{S}$ is the convex set defined by \eqref{eq:relaxed_constraint} and \eqref{eq:l1_constraint}. We then have
\begin{align}
\begin{split}
\lambda({\bm{E}}) &= \min_{{\bm{z}}\in\mathcal{S}, {\bm{z}}\neq{\bm{x}}}\,\frac{1}{\|{\bm{z}}-{\bm{x}}\|_1^2}({\bm{z}}-{\bm{x}})^T{\bm{E}}({\bm{z}}-{\bm{x}})\\
& = \min_{\widehat{h}\in\mathcal{H}}\,\widehat{{\bm{h}}}^T{\bm{E}}\widehat{{\bm{h}}}\,.
\end{split}
\end{align}
\begin{enumerate}[label={\arabic*)}]
\item We first prove that $\mathcal{H}$ is a convex set. Let $\widehat{{\bm{h}}}^{(1)},\widehat{{\bm{h}}}^{(2)}\in\mathcal{H}$. We have $\textnormal{sign}(\widehat{h}_i^{(1)})=\textnormal{sign}(\widehat{h}_i^{(2)})$ if $\widehat{h}_i^{(1)}\neq0,\ \widehat{h}_i^{(2)}\neq0$. Let $\widehat{{\bm{h}}}^{(3)}=(1-\rho)\widehat{{\bm{h}}}^{(1)}+\rho\widehat{{\bm{h}}}^{(2)}$, where $\rho\in(0,1)$. We have:
\begin{align}
\label{eq:l1_norm_z3}
\begin{split}
\|\widehat{{\bm{h}}}^{(3)}\|_1 & = \left\|(1-\rho)\widehat{{\bm{h}}}^{(1)}+\rho\widehat{{\bm{h}}}^{(2)}\right\|_1\\
&=\sum_{i=1}^M\left|(1-\rho)\widehat{h}_i^{(1)}+\rho \widehat{h}_i^{(2)}\right|\\
&=\sum_{i=1}^M\left(\left|(1-\rho)\widehat{h}_i^{(1)}\right|+\left|\rho \widehat{h}_i^{(2)}\right|\right)\\
&=(1-\rho)\left\|\widehat{{\bm{h}}}^{(1)}\right\|_1+\rho\left\|\widehat{{\bm{h}}}^{(2)}\right\|_1=1\,.
\end{split}
\end{align}
Let $\nu_1=\frac{1-\rho}{\left\|{\bm{z}}^{(1)}-{\bm{x}}\right\|_1}$, $\nu_2=\frac{\rho}{\left\|{\bm{z}}^{(2)}-{\bm{x}}\right\|_1}$. We have
\begin{align}
\begin{split}
\widehat{{\bm{h}}}^{(3)}&=(1-\rho)\widehat{{\bm{h}}}^{(1)}+\rho \widehat{{\bm{h}}}^{(2)}\\
&=\frac{1-\rho}{\left\|{\bm{z}}^{(1)}-{\bm{x}}\right\|_1}\left({\bm{z}}^{(1)}-{\bm{x}}\right)+\frac{\rho}{\left\|{\bm{z}}^{(2)}-{\bm{x}}\right\|_1}\left({\bm{z}}^{(2)}-{\bm{x}}\right)\\
&=\left(\nu_1+\nu_2\right)\left(\frac{\nu_1}{\nu_1+\nu_2}{\bm{z}}^{(1)}+\frac{\nu_2}{\nu_1+\nu_2}{\bm{z}}^{(2)}-{\bm{x}}\right)\\
&=(\nu_1+\nu_2)({\bm{z}}^{(3)}-{\bm{x}})\,.
\end{split}
\end{align}
Using \eqref{eq:l1_norm_z3}, we can see that $\nu_1+\nu_2=\frac{1}{\|{\bm{z}}^{(3)}-{\bm{x}}\|_1}$. Since ${\bm{z}}^{(1)},{\bm{z}}^{(2)}\in\mathcal{S}$ and $\mathcal{S}$ is convex, we have ${\bm{z}}^{(3)}\in\mathcal{S}$. Therefore $\widehat{{\bm{h}}}^{(3)}$ can be written in the form given in \eqref{eq:H_set_def}:
\begin{align}
\widehat{{\bm{h}}}^{(3)}=\frac{1}{\|{\bm{z}}^{(3)}-{\bm{x}}\|_1}({\bm{z}}^{(3)}-{\bm{x}})\,.
\end{align}
Hence $\widehat{{\bm{h}}}^{(3)}\in\mathcal{H}$, and $\mathcal{H}\subset\mathbb{R}^M$ is a convex set. Minimizing $\widehat{{\bm{h}}}^\textrm{T}{\bm{E}}\widehat{{\bm{h}}}$ with respect to $\widehat{{\bm{h}}}\in\mathcal{H}$ is a convex problem.
\item We next prove that $\lambda({\bm{E}})$ in \eqref{eq:lambda_min_val} is strictly positive. If $\widehat{{\bm{h}}}^T{\bm{E}}\widehat{{\bm{h}}}=\sum_y\left(\widehat{{\bm{h}}}^T{\bm{B}}_y{\bm{x}}\right)^2=0$, we have $({\bm{z}}-{\bm{x}})^T{\bm{B}}_y{\bm{x}}=0$, $\forall\ y\in\set{0,\cdots,M-1}$. For $y=0$ we have ${\bm{B}}_0=2{\bm{I}}$, where ${\bm{I}}$ is the identity matrix, so ${\bm{z}}^T{\bm{x}}={{\bm{x}}}^T{\bm{x}}=N$. Since ${\bm{z}}\in\mathcal{S}$ and ${\bm{x}}\in\set{0,1}^M$ in the noiseless case, we have ${\bm{z}}={\bm{x}}$. This contradicts the assumption ${\bm{z}}\neq{\bm{x}}$, hence $\widehat{{\bm{h}}}^T{\bm{E}}\widehat{{\bm{h}}}>0$, $\forall\ \widehat{{\bm{h}}}\in\mathcal{H}$.
\item We finally prove that $\mathcal{H}$ coincides with the set $\mathcal{G}$ defined by the constraints \eqref{eq:hh_con_2}--\eqref{eq:hh_con_1} in the statement of the lemma.
\begin{itemize}
\item It is easy to verify that if $\widehat{{\bm{h}}}\in\mathcal{H}$, \eqref{eq:hh_con_2} and \eqref{eq:hh_con_1} hold. Since $z_i\in[0,1]$ and $x_i\in\{0,1\}$, if $x_i=0$, $\widehat{h}_i\geq 0$; if $x_i=1$, $\widehat{h}_i\leq 0$. On the other hand, if $|\widehat{h}_i|>0.5$, from \eqref{eq:hh_con_2} we have $\sum_{j\neq i}|\widehat{h}_j|\geq|\sum_{j\neq i}\widehat{h}_j|=|-\widehat{h}_i|>0.5$. This means that $\|\widehat{{\bm{h}}}\|_1=|\widehat{h}_i|+\sum_{j\neq i}|\widehat{h}_j|>1$, which contradicts \eqref{eq:hh_con_1}. Hence $|\widehat{h}_i|\leq 0.5$, \eqref{eq:hh_con_3} and \eqref{eq:hh_con_4} hold. This proves that $\widehat{{\bm{h}}}\in\mathcal{G}$.
\item If $\widehat{{\bm{h}}}\in\mathcal{G}$, we can construct $\widehat{{\bm{z}}}={\bm{x}}+\widehat{{\bm{h}}}$. It is easy to verify that $\widehat{{\bm{z}}}\in\mathcal{S}$ and $\|\widehat{{\bm{z}}}-{\bm{x}}\|_1=\|\widehat{{\bm{h}}}\|_1=1$. Hence $\widehat{{\bm{h}}} = \frac{1}{\|\widehat{{\bm{z}}}-{\bm{x}}\|_1}(\widehat{{\bm{z}}}-{\bm{x}})\in\mathcal{H}$.
\end{itemize}
\end{enumerate}
Computing $\lambda({\bm{E}})=\min_{\widehat{{\bm{h}}}\in\mathcal{G}}\widehat{{\bm{h}}}^T{\bm{E}}\widehat{{\bm{h}}}\,>0$ is thus a convex problem, and can be efficiently solved via quadratic programming.
\end{proof}
\subsection{Proof of Theorem~\ref{THM:CVG_SED}}
\label{proof:thm:cvg_sed}
Let ${\bm{B}}_y={\bm{A}}_y+{\bm{A}}_y^T$. The objective function $f({\bm{z}})$ in \eqref{eq:constrained_nonconvex} can be written as
\begin{align}
f({\bm{z}}) = \frac{1}{4MK^2}\sum_{y=0}^{M-1}\left({\bm{z}}^T{\bm{B}}_y{\bm{z}}-{\bm{x}}^T{\bm{B}}_y{\bm{x}}\right)^2\,.
\end{align}
The gradient $\nabla f({\bm{z}})$ is
\begin{align}
\begin{split}
\nabla f({\bm{z}}) &= \frac{1}{MK^2}\sum_{y=0}^{M-1}{\bm{B}}_y{\bm{z}}\cdot\left({\bm{z}}^T{\bm{B}}_y{\bm{z}}-{\bm{x}}^T{\bm{B}}_y{\bm{x}}\right)\\
&= \frac{1}{MK^2}\sum_{y=0}^{M-1}{\bm{B}}_y{\bm{z}}\cdot({\bm{z}}-{\bm{x}})^T{\bm{B}}_y({\bm{z}}+{\bm{x}})\,.
\end{split}
\end{align}
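The expressions for $f$ and $\nabla f$ can be checked numerically against central finite differences. The sketch below uses toy sizes and assumes, for concreteness, the shift structure $({\bm{A}}_y)_{i,i+y}=1$ (so that ${\bm{B}}_0=2{\bm{I}}$, as in the text):

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 5, 1.0
A = [np.eye(M, k=y) for y in range(M)]   # assumed shift structure; B_0 = 2I as in the text
B = [a + a.T for a in A]
x = np.array([1.0, 0.0, 1.0, 0.0, 1.0])

def f(z):
    return sum((z @ b @ z - x @ b @ x) ** 2 for b in B) / (4 * M * K**2)

def grad_f(z):
    return sum((b @ z) * (z @ b @ z - x @ b @ x) for b in B) / (M * K**2)

z = rng.uniform(0.0, 1.0, M)
eps = 1e-6
g_fd = np.array([(f(z + eps * e) - f(z - eps * e)) / (2 * eps) for e in np.eye(M)])
# the analytic gradient agrees with the finite-difference estimate
```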
In order to prove the conditions under which the projected gradient descent updates converge to a global optimum, we first establish the conditions on the gradient descent updates.
\paragraph{Step 1:} When the distance between the current solution ${\bm{z}}_t$ and a global optimum ${\bm{x}}$ is less than some $\tau>0$, i.e. $\|{\bm{z}}_t-{\bm{x}}\|_2<\tau$, we would like to show that the gradient descent update ${\bm{z}}_t-\eta\nabla f({\bm{z}}_t)$ converges linearly to ${\bm{x}}$. In other words, we need to show that the quantity in \eqref{eq:diff} is negative for some $\mu\in(0,1)$ and $\eta>0$:
\begin{align}
\label{eq:diff}
\begin{split}
\left\|{\bm{z}}_t-\eta\nabla f({\bm{z}}_t)-{\bm{x}} \right\|^2_2-\mu\|{\bm{z}}_t-{\bm{x}}\|^2_2
=\eta^2\|\nabla f({\bm{z}}_t)\|_2^2-2\eta\langle{\bm{z}}_t-{\bm{x}},\, \nabla f({\bm{z}}_t)\rangle+(1-\mu)\|{\bm{z}}_t-{\bm{x}}\|_2^2\,.
\end{split}
\end{align}
We then have:
\begin{align}
\label{eq:first_term_lb}
\begin{split}
\|\nabla f({\bm{z}}_t) \|_2^2& = \frac{1}{K^4}\left\|\frac{1}{M}\sum_y{\bm{B}}_y{\bm{z}}_t\cdot({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y({\bm{z}}_t+{\bm{x}})\right\|_2^2\\
&\leq \frac{1}{MK^4}\sum_y\left\|{\bm{B}}_y{\bm{z}}_t\cdot({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y({\bm{z}}_t+{\bm{x}})\right\|_2^2\\
&= \frac{1}{MK^4}\sum_y\left\|{\bm{B}}_y{\bm{z}}_t\right\|_2^2\cdot\left(({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y({\bm{z}}_t+{\bm{x}})\right)^2\\
&\leq \frac{1}{MK^4}\sum_y\sigma_{\max}^2\left({\bm{B}}_y\right)\|{\bm{z}}_t\|_2^2\cdot\left(({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y({\bm{z}}_t+{\bm{x}})\right)^2\\
&\leq \frac{4}{MK^4}\|{\bm{z}}_t\|_2^2\sum_y\left(({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y({\bm{z}}_t+{\bm{x}})\right)^2\\
&= \frac{16}{K^2}\|{\bm{z}}_t\|_2^2\cdot f({\bm{z}}_t)\,,
\end{split}
\end{align}
where $\sigma_{\max}^2\left({\bm{B}}_y\right)\leq 4$ for all $y\in\{0,1,\cdots,M-1\}$ according to Schur's bound \cite{Schur1911}.
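In the turnpike setting the ${\bm{A}}_y$ can be taken (this is an assumption about their structure, suggested by the identity $\sum_y{\bm{B}}_y={\bm{1}}_\textnormal{mat}+{\bm{I}}$ used later in the proof) to be $0/1$ lag-selection matrices with $A_y(i,j)=1$ iff $j-i=y$. Under that assumption, Schur's bound is easy to verify numerically:

```python
import numpy as np

n = 16
# Hypothetical lag-y selection matrices: A_y(i,j) = 1 iff j - i = y,
# so B_y = A_y + A_y^T.
B = [np.eye(n, k=y) + np.eye(n, k=y).T for y in range(n)]

# Schur's bound: every row and column of B_y sums to at most 2,
# so sigma_max(B_y) <= 2 and hence sigma_max^2(B_y) <= 4.
assert all(np.linalg.norm(By, 2) <= 2 + 1e-9 for By in B)

# These matrices also satisfy sum_y B_y = 1_mat + I.
S = sum(B)
assert np.array_equal(S, np.ones((n, n)) + np.eye(n))
```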
Using the Cauchy-Schwarz inequality, we also have that
\begin{align}
\label{eq:second_term}
\begin{split}
\langle{\bm{z}}_t-{\bm{x}},\, \nabla f({\bm{z}}_t)\rangle&=\frac{1}{MK^2}\sum_y({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y{\bm{z}}_t\cdot({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y({\bm{z}}_t+{\bm{x}})\\
&=4f({\bm{z}}_t)-\frac{1}{MK^2}\sum_y({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y({\bm{z}}_t+{\bm{x}})\cdot({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y{\bm{x}}\\
&\geq 4f({\bm{z}}_t)-\sqrt{4f({\bm{z}}_t)}\sqrt{\frac{1}{MK^2}\sum_y\left(({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y{\bm{x}}\right)^2}\,.
\end{split}
\end{align}
We proceed by further lower-bounding the above \eqref{eq:second_term}. Let ${\bm{h}}={\bm{z}}_t-{\bm{x}}$. For some $\frac{1}{2}<q<1$, we have
\begin{align}
\label{eq:second_term_lb1}
\begin{split}
&q^2\sum_y\left(({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y({\bm{z}}_t+{\bm{x}})\right)^2-\sum_y\left(({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y{\bm{x}}\right)^2\\
=\ &q^2\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y({\bm{h}}+2{\bm{x}})\right)^2-\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{x}}\right)^2\\
=\ &q^2\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{h}}\right)^2+4q^2\sum_y{\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{h}}\cdot{\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{x}}+(4q^2-1)\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{x}}\right)^2\\
\geq \ &q^2\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{h}}\right)^2-4q^2\sqrt{\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{h}}\right)^2}\sqrt{\sum_y\left({\bm{h}}^T{\bm{B}}_y{\bm{x}}\right)^2}+(4q^2-1)\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{x}}\right)^2\\
=\ &\left(q\sqrt{\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{h}}\right)^2}-2q\sqrt{\sum_y\left({\bm{h}}^T{\bm{B}}_y{\bm{x}}\right)^2}\right)^2-\left(\sqrt{\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{x}}\right)^2}\right)^2\\
=\ &\left(q\sqrt{\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{h}}\right)^2}-(2q-1)\sqrt{\sum_y\left({\bm{h}}^T{\bm{B}}_y{\bm{x}}\right)^2}\right)\left(q\sqrt{\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{h}}\right)^2}-(2q+1)\sqrt{\sum_y\left({\bm{h}}^T{\bm{B}}_y{\bm{x}}\right)^2}\right)\,.
\end{split}
\end{align}
To make \eqref{eq:second_term_lb1} greater than $0$, either of the following two inequalities should hold:
\begin{align}
\label{eq:second_term_lb1_one}
\sqrt{\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{h}}\right)^2}&>(2+\frac{1}{q})\sqrt{\sum_y\left({\bm{h}}^T{\bm{B}}_y{\bm{x}}\right)^2}\\
\label{eq:second_term_lb1_two}
\sqrt{\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{h}}\right)^2}&<(2-\frac{1}{q})\sqrt{\sum_y\left({\bm{h}}^T{\bm{B}}_y{\bm{x}}\right)^2}\,.
\end{align}
We can obtain an upper bound on $\|{\bm{h}}\|_2$ to make \eqref{eq:second_term_lb1_two} hold. Specifically, the left-hand side of \eqref{eq:second_term_lb1_two} can be upper bounded via:
\begin{align}
\label{eq:ub_hDh}
\begin{split}
\sum_y\left({\bm{h}}^\textrm{T}{\bm{B}}_y{\bm{h}}\right)^2&=\|{\bm{h}}\|_2^4\cdot\sum_y\left({\bm{u}}^\textrm{T}{\bm{B}}_y{\bm{u}}\right)^2\\
&=\|{\bm{h}}\|_2^4\cdot\sum_y\|{\bm{B}}_y\|_{op}^2\cdot\left(\frac{|{\bm{u}}^T{\bm{B}}_y{\bm{u}}|}{\|{\bm{B}}_y\|_{op}}\right)^2\\
&\leq \|{\bm{h}}\|_2^4\cdot\sum_y\|{\bm{B}}_y\|_{op}^2\cdot\left(\frac{|{\bm{u}}^T{\bm{B}}_y{\bm{u}}|}{\|{\bm{B}}_y\|_{op}}\right)\\
&=\|{\bm{h}}\|_2^4\cdot\sum_y\|{\bm{B}}_y\|_{op}\cdot|{\bm{u}}^T{\bm{B}}_y{\bm{u}}|\\
&\leq \|{\bm{h}}\|_2^4\cdot\sum_y\|{\bm{B}}_y\|_{op}\cdot |{\bm{u}}|^T{\bm{B}}_y|{\bm{u}}|\\
&=\|{\bm{h}}\|_2^4\cdot\sum_y\sigma_{\max}({\bm{B}}_y)\cdot |{\bm{u}}|^T{\bm{B}}_y|{\bm{u}}|\\
&\leq 2\|{\bm{h}}\|_2^4\cdot\sum_y|{\bm{u}}|^T{\bm{B}}_y|{\bm{u}}|\\
&=2\|{\bm{h}}\|_2^2\cdot|{\bm{h}}|^T\sum_y{\bm{B}}_y|{\bm{h}}|\\
&=2\|{\bm{h}}\|_2^2\cdot|{\bm{h}}|^T({\bm{1}}_\textnormal{mat}+{\bm{I}})|{\bm{h}}|\\
&=2\|{\bm{h}}\|_2^2\cdot(\|{\bm{h}}\|_1^2+\|{\bm{h}}\|_2^2)\\
&\leq 4\|{\bm{h}}\|_2^2\cdot\|{\bm{h}}\|_1^2\,,
\end{split}
\end{align}
where ${\bm{u}}=\frac{1}{\|{\bm{h}}\|_2}{\bm{h}}$, ${\bm{1}}_\textnormal{mat}$ is the all-ones matrix and ${\bm{I}}$ is the identity matrix. The first inequality in \eqref{eq:ub_hDh} follows from $|{\bm{u}}^T{\bm{B}}_y{\bm{u}}|=|\langle{\bm{u}},{\bm{B}}_y{\bm{u}}\rangle|\leq\|{\bm{u}}\|_2\|{\bm{B}}_y{\bm{u}}\|_2\leq\|{\bm{B}}_y\|_{op}$, hence $\frac{|{\bm{u}}^T{\bm{B}}_y{\bm{u}}|}{\|{\bm{B}}_y\|_{op}}\leq 1$. The second inequality follows from the entrywise nonnegativity of ${\bm{B}}_y$: $|{\bm{u}}^\textrm{T}{\bm{B}}_y{\bm{u}}|=|\sum_{ij}B_y(i,j)u_iu_j|\leq\sum_{ij}B_y(i,j)|u_i||u_j|=|{\bm{u}}|^\textrm{T}{\bm{B}}_y|{\bm{u}}|$. Choosing the operator norm $\|\cdot\|_{op}$ to be the spectral norm gives $\|{\bm{B}}_y\|_{op}=\sigma_{\max}({\bm{B}}_y)\leq 2$. The last inequality uses $\|{\bm{h}}\|_2\leq\|{\bm{h}}\|_1$.
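Assuming, as in the turnpike setting, that the ${\bm{B}}_y$ are symmetrized $0/1$ lag matrices satisfying $\sum_y{\bm{B}}_y={\bm{1}}_\textnormal{mat}+{\bm{I}}$ (an assumption about the structure of ${\bm{A}}_y$), the bound \eqref{eq:ub_hDh} can be spot-checked on random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
# Hypothetical lag matrices with sum_y B_y = 1_mat + I (turnpike structure).
B = [np.eye(n, k=y) + np.eye(n, k=y).T for y in range(n)]

for _ in range(100):
    h = rng.standard_normal(n)
    lhs = sum((h @ By @ h) ** 2 for By in B)
    rhs = 4 * np.sum(h**2) * np.sum(np.abs(h)) ** 2
    # The chain of inequalities gives lhs <= 4 * ||h||_2^2 * ||h||_1^2.
    assert lhs <= rhs + 1e-9
```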
The right-hand side of \eqref{eq:second_term_lb1_two} can be lower-bounded as:
\begin{align}
\label{eq:lb_hdx}
\begin{split}
\sum_y\left({\bm{h}}^T{\bm{B}}_y{\bm{x}}\right)^2&=\|{\bm{h}}\|_1^2\cdot{\bm{v}}^\textrm{T}\left(\sum_y{\bm{B}}_y{\bm{x}}{{\bm{x}}}^\textrm{T}{\bm{B}}_y^\textrm{T}\right){\bm{v}}=\|{\bm{h}}\|_1^2\cdot{\bm{v}}^\textrm{T}{\bm{E}}{\bm{v}}\\
&\geq\|{\bm{h}}\|_1^2\cdot\lambda_{\bm{E}}\,,
\end{split}
\end{align}
where ${\bm{v}}=\frac{1}{\|{\bm{h}}\|_1}{\bm{h}}$, ${\bm{E}}=\sum_{y=0}^{M-1}{\bm{B}}_y{\bm{x}}{{\bm{x}}}^T{\bm{B}}_y^T$ and $\lambda_{\bm{E}}>0$ can be computed using Lemma \ref{LEMMA:E_MIN}. Combining \eqref{eq:second_term_lb1_two}, \eqref{eq:ub_hDh} and \eqref{eq:lb_hdx}, we can see that as long as the following \eqref{eq:h_bd} holds, \eqref{eq:second_term_lb1_two} will also hold.
\begin{align}
\label{eq:h_bd}
\|{\bm{h}}\|_2<\tau=\left(2-\frac{1}{q}\right)\cdot\sqrt{\frac{\lambda_{\bm{E}}}{4}}\,.
\end{align}
The above \eqref{eq:h_bd} guarantees that \eqref{eq:second_term_lb1} is always greater than $0$. We then have
\begin{align}
\label{eq:lb_gradient_align}
-\sqrt{\frac{1}{MK^2}\sum_y\left(({\bm{z}}_t-{\bm{x}})^\textrm{T}{\bm{B}}_y{\bm{x}}\right)^2}>-q\sqrt{4f({\bm{z}}_t)}\,.
\end{align}
Plugging \eqref{eq:lb_gradient_align} into \eqref{eq:second_term}, we have:
\begin{align}
\label{eq:second_term_lb}
\langle{\bm{z}}_t-{\bm{x}},\, \nabla f({\bm{z}}_t)\rangle > 4(1-q)f({\bm{z}}_t)\,.
\end{align}
Plugging \eqref{eq:first_term_lb}, \eqref{eq:h_bd} and \eqref{eq:second_term_lb} into \eqref{eq:diff}, we have:
\begin{align}
\label{eq:diff2}
\begin{split}
&\left\|{\bm{z}}_t-\eta\nabla f({\bm{z}}_t)-{\bm{x}} \right\|^2_2-\mu\|{\bm{z}}_t-{\bm{x}}\|^2_2\\
<&\frac{16}{K^2}\|{\bm{z}}_t\|_2^2f({\bm{z}}_t)\cdot\eta^2-8(1-q)f({\bm{z}}_t)\cdot\eta+(1-\mu)\tau^2\\
=&\frac{16}{K^2}\|{\bm{z}}_t\|_2^2f({\bm{z}}_t) \cdot \left(\left(\eta-\frac{(1-q)K^2}{4\|{\bm{z}}_t\|_2^2}\right)^2-\frac{(1-q)^2K^4}{16\|{\bm{z}}_t\|_2^4}+\frac{(1-\mu)K^2\tau^2}{16\|{\bm{z}}_t\|_2^2f({\bm{z}}_t)}\right)\,.
\end{split}
\end{align}
In order to make \eqref{eq:diff2} strictly less than $0$, the following should hold:
\begin{align}
\label{eq:condition1}
\left(\eta-\frac{(1-q)K^2}{4\|{\bm{z}}_t\|_2^2}\right)^2 < \frac{(1-q)^2K^4}{16\|{\bm{z}}_t\|_2^4}-\frac{(1-\mu)K^2\tau^2}{16\|{\bm{z}}_t\|_2^2f({\bm{z}}_t)}\,.
\end{align}
The right-hand side of \eqref{eq:condition1} must be strictly positive so that a valid $\eta$ exists. This requires
\begin{align}
\label{eq:mu_lb}
\mu>1-\frac{(1-q)^2K^2}{\|{\bm{z}}_t\|_2^2}\frac{f({\bm{z}}_t)}{\tau^2}\,.
\end{align}
We also need $\mu\in(0,1)$ to ensure convergence towards the global optimum ${\bm{x}}$. Hence
\begin{align}
\max\left(0,\ 1-\frac{(1-q)^2K^2}{\|{\bm{z}}_t\|_2^2}\frac{f({\bm{z}}_t)}{\tau^2}\right)<\mu<1\,.
\end{align}
The step size $\eta$ is then:
\begin{align}
\frac{(1-q)K^2}{4\|{\bm{z}}_t\|_2^2}-\nu<\eta<\frac{(1-q)K^2}{4\|{\bm{z}}_t\|_2^2}+\nu\,,
\end{align}
where $\nu=\sqrt{\frac{(1-q)^2K^4}{16\|{\bm{z}}_t\|_2^4}-\frac{(1-\mu)K^2\tau^2}{16\|{\bm{z}}_t\|_2^2f({\bm{z}}_t)}}$.
\paragraph{Step 2:} We use ${\bm{z}}={\bm{z}}_t-\eta\nabla f({\bm{z}}_t)$ to denote the gradient descent update, and ${\bm{z}}_{t+1}=\mathscr{P}_\mathcal{S}({\bm{z}})\in\mathcal{S}$ is the projected gradient descent update. In step 1 we established conditions under which \eqref{eq:diff}$<0$, i.e.
\begin{align}
\|{\bm{z}}-{\bm{x}}\|_2^2<\mu\|{\bm{z}}_t-{\bm{x}}\|_2^2\,.
\end{align}
Let ${\bm{s}}$ be a point on the line through ${\bm{z}}_{t+1}$ and a global optimizer ${\bm{x}}$, chosen such that
\begin{align}
\label{eq:linear_comb}
{\bm{x}}-{\bm{z}}_{t+1} = a({\bm{z}}_{t+1}-{\bm{s}})\,,
\end{align}
where $a\in\mathbb{R}$, $a\neq 0$ is some constant. We can always find an ${\bm{s}}$ such that the following holds,
\begin{align}
\label{eq:perpendicular}
({\bm{s}}-{\bm{z}})^T({\bm{s}}-{\bm{z}}_{t+1})=0\,.
\end{align}
\begin{enumerate}[label={\arabic*.}]
\item If ${\bm{z}}={\bm{z}}_{t+1}$, then ${\bm{z}}\in\mathcal{S}$ and \eqref{eq:converge} holds.
\item If ${\bm{x}}={\bm{z}}_{t+1}$, \eqref{eq:converge} naturally holds.
\item Otherwise, we can choose $a=\frac{\|{\bm{x}}-{\bm{z}}_{t+1}\|_2^2}{\left({\bm{z}}_{t+1}-{\bm{z}}\right)^T\left({\bm{x}}-{\bm{z}}_{t+1}\right)}$. From \eqref{eq:perpendicular}, we can get:
\begin{align}
\label{eq:sum_four}
\|{\bm{s}}\|_2^2-{\bm{z}}^T{\bm{s}}-{\bm{s}}^T{\bm{z}}_{t+1} = -{\bm{z}}^T{\bm{z}}_{t+1}\,.
\end{align}
We also have
\begin{align}
\label{eq:sum_one}
\|{\bm{z}}-{\bm{z}}_{t+1}\|_2^2 &= \|{\bm{z}}\|_2^2 + \|{\bm{z}}_{t+1}\|_2^2-2{\bm{z}}^T{\bm{z}}_{t+1}\\
\label{eq:sum_two}
\|{\bm{z}}-{\bm{s}}\|_2^2&=\|{\bm{z}}\|_2^2+\|{\bm{s}}\|_2^2-2{\bm{z}}^T{\bm{s}}\\
\label{eq:sum_three}
\|{\bm{s}}-{\bm{z}}_{t+1}\|_2^2 &= \|{\bm{s}}\|_2^2 + \|{\bm{z}}_{t+1}\|_2^2 - 2{\bm{s}}^T{\bm{z}}_{t+1}\,.
\end{align}
Combining \eqref{eq:sum_four}-\eqref{eq:sum_three}, we get that
\begin{align}
\label{eq:sum_five}
\|{\bm{z}}-{\bm{z}}_{t+1}\|_2^2=\|{\bm{z}}-{\bm{s}}\|_2^2+\|{\bm{s}}-{\bm{z}}_{t+1}\|_2^2\,.
\end{align}
Using \eqref{eq:linear_comb}, we have
\begin{align}
\label{eq:sz_xs}
{\bm{s}}-{\bm{z}}_{t+1} = \frac{1}{a+1}({\bm{s}}-{\bm{x}})\,.
\end{align}
Plugging \eqref{eq:sz_xs} into \eqref{eq:perpendicular}, we have
\begin{align}
({\bm{s}}-{\bm{z}})^T({\bm{s}}-{\bm{x}})=0\,.
\end{align}
Similarly, we can get that
\begin{align}
\label{eq:sum_six}
\|{\bm{z}}-{\bm{x}}\|_2^2=\|{\bm{z}}-{\bm{s}}\|_2^2+\|{\bm{s}}-{\bm{x}}\|_2^2\,.
\end{align}
\begin{enumerate}[label*={\arabic*)}]
\item If ${\bm{s}}\in\mathcal{S}$, since ${\bm{z}}_{t+1}$ is the projection of ${\bm{z}}$ in $\mathcal{S}$, we have $\|{\bm{z}}-{\bm{z}}_{t+1}\|_2^2\leq\|{\bm{z}}-{\bm{s}}\|_2^2$. Using \eqref{eq:sum_five}, we have:
\begin{align}
\|{\bm{s}}-{\bm{z}}_{t+1}\|_2^2=0\,.
\end{align}
Hence ${\bm{s}}$ and ${\bm{z}}_{t+1}$ are the same point. From \eqref{eq:sum_six}, we can get:
\begin{align}
\|{\bm{z}}-{\bm{x}}\|_2^2=\|{\bm{z}}-{\bm{z}}_{t+1}\|_2^2+\|{\bm{z}}_{t+1}-{\bm{x}}\|_2^2\,.
\end{align}
Since ${\bm{z}}\notin\mathcal{S}$, we have $\|{\bm{z}}-{\bm{z}}_{t+1}\|_2^2>0$. Hence $\|{\bm{z}}-{\bm{x}}\|_2^2>\|{\bm{z}}_{t+1}-{\bm{x}}\|_2^2$.
\item If ${\bm{s}}\notin\mathcal{S}$, we have:
\begin{align}
\label{eq:sum_seven}
\begin{split}
\|{\bm{s}}-{\bm{x}}\|_2^2&=\|{\bm{s}}-{\bm{z}}_{t+1}+{\bm{z}}_{t+1}-{\bm{x}}\|_2^2\\
&=\|{\bm{s}}-{\bm{z}}_{t+1}\|_2^2+\|{\bm{z}}_{t+1}-{\bm{x}}\|_2^2+2\left({\bm{s}}-{\bm{z}}_{t+1}\right)^\textrm{T}\left({\bm{z}}_{t+1}-{\bm{x}}\right)\,.
\end{split}
\end{align}
\begin{itemize}
\item If $a\in[0, \infty)$, from \eqref{eq:linear_comb}, we have $\left({\bm{s}}-{\bm{z}}_{t+1}\right)^\textrm{T}\left({\bm{z}}_{t+1}-{\bm{x}}\right)\geq 0$. From \eqref{eq:sum_seven}, we have $\|{\bm{s}}-{\bm{x}}\|_2^2\geq\|{\bm{z}}_{t+1}-{\bm{x}}\|_2^2$. Using \eqref{eq:sum_six}, we have $\|{\bm{z}}-{\bm{x}}\|_2^2\geq\|{\bm{z}}_{t+1}-{\bm{x}}\|_2^2$.
\item If $a\in(-1,0)$, from \eqref{eq:linear_comb}, we have ${\bm{z}}_{t+1}-{\bm{x}}=\frac{a}{-1-a}({\bm{x}}-{\bm{s}})$. Since $\frac{a}{-1-a}>0$, $({\bm{z}}_{t+1}-{\bm{x}})^T({\bm{x}}-{\bm{s}})>0$. We then have:
\begin{align}
\begin{split}
\|{\bm{z}}_{t+1}-{\bm{s}}\|_2^2&=\|{\bm{z}}_{t+1}-{\bm{x}}+{\bm{x}}-{\bm{s}}\|_2^2\\
&=\|{\bm{z}}_{t+1}-{\bm{x}}\|_2^2+\|{\bm{x}}-{\bm{s}}\|_2^2+2({\bm{z}}_{t+1}-{\bm{x}})^T({\bm{x}}-{\bm{s}})\\
&>\|{\bm{x}}-{\bm{s}}\|_2^2\,.
\end{split}
\end{align}
Using \eqref{eq:sum_five} and \eqref{eq:sum_six}, we have:
\begin{align}
\|{\bm{z}}-{\bm{z}}_{t+1}\|_2^2>\|{\bm{z}}-{\bm{x}}\|_2^2\,.
\end{align}
This contradicts the assumption that ${\bm{z}}_{t+1}$ is the projection of ${\bm{z}}$ onto $\mathcal{S}$, i.e., the closest point in $\mathcal{S}$ to ${\bm{z}}$ in the $\ell_2$ norm, which requires $\|{\bm{z}}-{\bm{z}}_{t+1}\|_2^2\leq\|{\bm{z}}-{\bm{x}}\|_2^2$. Hence $a\notin(-1,0)$.
\item If $a\in(-\infty, -1]$, from \eqref{eq:linear_comb}, we have ${\bm{s}}=-\frac{1}{a}{\bm{x}}+(1+\frac{1}{a}){\bm{z}}_{t+1}$. Since $-\frac{1}{a}\in(0,1]$, ${\bm{s}}$ is a convex combination of ${\bm{x}}$ and ${\bm{z}}_{t+1}$, so ${\bm{s}}\in\mathcal{S}$ by the convexity of $\mathcal{S}$. This contradicts the assumption ${\bm{s}}\notin\mathcal{S}$; hence $a\notin(-\infty,-1]$.
\end{itemize}
\end{enumerate}
In summary, we have
\begin{align}
\|{\bm{z}}_{t+1}-{\bm{x}}\|_2^2\leq\|{\bm{z}}-{\bm{x}}\|_2^2<\mu\|{\bm{z}}_t-{\bm{x}}\|_2^2\,.
\end{align}
\end{enumerate}
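The two-step argument above can be exercised end to end. The following is a minimal sketch assuming $\mathcal{S}$ is the scaled simplex $\{{\bm{z}} : {\bm{z}}\geq 0,\ \sum_i z_i = K\}$ (one common relaxation of a $K$-hot encoding; the paper's exact constraint set may differ) and turnpike-style lag matrices ${\bm{B}}_y$, both of which are assumptions of this sketch:

```python
import numpy as np

def project_scaled_simplex(z, K):
    """Euclidean projection onto {v : v >= 0, sum(v) = K} (sort-based)."""
    u = np.sort(z)[::-1]
    css = np.cumsum(u) - K
    rho = np.nonzero(u - css / np.arange(1, len(z) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1)
    return np.maximum(z - theta, 0.0)

rng = np.random.default_rng(2)
n, K = 10, 3
# Hypothetical turnpike-style lag matrices B_y.
B = [np.eye(n, k=y) + np.eye(n, k=y).T for y in range(n)]
M = len(B)

x = np.zeros(n)
x[[1, 4, 8]] = 1.0  # a K-hot global optimum, feasible for S

def grad_f(z):
    return sum((By @ z) * (z @ By @ z - x @ By @ x) for By in B) / (M * K**2)

# At the optimum the gradient vanishes, so x is a fixed point of the update.
assert np.allclose(grad_f(x), 0.0)

# One projected-gradient step from a nearby point stays feasible.
z = project_scaled_simplex(x + 0.05 * rng.standard_normal(n), K)
z_next = project_scaled_simplex(z - 0.1 * grad_f(z), K)
assert z_next.min() >= 0 and abs(z_next.sum() - K) < 1e-9
```

The sorting-based projection is the standard $O(n\log n)$ Euclidean projection onto a simplex; any exact projection onto the true $\mathcal{S}$ can be substituted.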
\vspace{0.2cm}
\section{Introduction}
\label{sec:intro}
\input{1_intro.tex}
\section{The Noisy Turnpike Problem}
\label{sec:main_turnpike}
\input{2_propose_turnpike.tex}
\section{The Noisy Beltway Problem}
\label{sec:main_beltway}
\input{3_propose_beltway.tex}
\section{Experimental Results}
\label{sec:exp}
\input{4_experiments.tex}
\section{Conclusion}
\label{sec:con}
\input{5_conclusion.tex}
\bibliographystyle{IEEEbib}
% arXiv:1804.02465: "Reconstructing Point Sets from Distance Distributions"
% arXiv:1802.01621
\title{Un-unzippable Convex Caps}
\begin{abstract}
An unzipping of a polyhedron $P$ is a cut-path through its vertices that unfolds $P$ to a non-overlapping shape in the plane. It is an open problem to decide if every convex $P$ has an unzipping. Here we show that there are nearly flat convex caps that have no unzipping. A convex cap is a ``top'' portion of a convex polyhedron; it has a boundary, i.e., it is not closed by a base.
\end{abstract}
\section{Introduction}
\seclab{Introduction}
We define an \emph{unzipping} of a polyhedron ${\mathcal P}$ in ${\mathbb{R}}^3$ to be a
non-overlapping, single-piece unfolding
of the surface to the plane that results from cutting a continuous path
${\gamma}$ through all the vertices of ${\mathcal P}$.
The cut-path ${\gamma}$ need not follow the edges of ${\mathcal P}$, nor even
be polygonal, but it must include
every vertex of ${\mathcal P}$, passing through $n-2$ vertices and beginning and
ending at the other two vertices, where $n$ is the total number of vertices.
If ${\gamma}$ does follow edges of ${\mathcal P}$, we call it an \emph{edge-unzipping}.
Edge-unzippings are special cases of edge-unfoldings, where the cuts follow
a tree of edges that span the $n$ vertices.
The interest in edge-unfoldings stems largely from what has
become known as D\"urer's problem~\cite{do-gfalop-07,o-dp-13}:
Does every convex polyhedron have an edge-unfolding?
The emphasis here is on a non-overlapping result, what is often
called a \emph{net} for the polyhedron.
This question was first formally raised by Shephard in~\cite{s-cpcn-75}.
In that paper, he already investigated the special case where the
cut edges form a Hamiltonian path of the $1$-skeleton of ${\mathcal P}$:
Hamiltonian unfoldings. These are exactly what I'm calling edge-unzippings.
Shephard noted that the rhombic dodecahedron does
not have an edge-unzipping because its $1$-skeleton has no
Hamiltonian path.
The attractive ``zipping'' terminology stems from the paper~\cite{lddss-zupc-10},
which defined \emph{zipper unfoldings} to be what I'm shortening to unzippings.
They showed that all the Platonic and the Archimedean solids have
edge-unzippings.
And they posed a fascinating question:
\begin{center}
\fbox{\textbf{Open Problem}: Does every convex polyhedron have an unzipping?}
\end{center}
\subsection{Nonconvex Polyhedra}
\seclab{NonconvexPolyhedra}
First we note that not every nonconvex polyhedron has an unzipping.
This has been a ``folk theorem'' for years, but has apparently not been
explicitly stated in the literature.\footnote{
The closest is~\cite{bdekms-upcf-03}, which notes that
``the neighborhood of a negative-curvature vertex ... requires two or more
cuts to avoid self-overlap.''}
In any case, it is not difficult to see.
Consider the polyhedron illustrated in Fig.~\figref{heartsfan}.
The central vertex $v$ has more than $4 \pi$ incident surface angle.
In fact, it has well more than $8 \pi$ incident angle, but we only need $> 4 \pi$.
An unzipping cut-path ${\gamma}$ cannot terminate at $v$, because the neighborhood
of $v$ in the unfolding has more than $2\pi$ incident angle, and so would
overlap in the planar development.
Nor can ${\gamma}$ pass through $v$, because partitioning the $> 4 \pi$ angle would
leave more than $2\pi$ to one side or the other, again forcing overlap
in the neighborhood of at least one of the two planar images of $v$.
Therefore, no polyhedron with a vertex having more than $4 \pi$ incident angle
has an unzipping.
Indeed, as Stefan Langerman observed,\footnote{
Personal communication, Aug. 2017}
similar reasoning shows that for
any degree ${\delta}$ there is a polyhedron that cannot be unfolded without overlap
by a cut tree of maximum degree ${\delta}$.
The polyhedron in Fig.~\figref{heartsfan} requires degree $> 4$ at $v$
to partition the more than $8 \pi$ angle into $< 2\pi$ pieces.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\linewidth]{Figures/heartsfan}
\caption{A polyhedron that cannot be unzipped.
Based on Fig.~24.14, p.370 in~\protect\cite{do-gfalop-07}.}
\figlab{heartsfan}
\end{figure}
\subsection{Open Problem: Conjecture}
This negative result for nonconvex polyhedra increases the interest in the open problem for convex polyhedra.
In~\cite{o2015spiral} I conjectured the answer is {\sc no}, but it seems
far from clear how to settle the problem.
For that reason, here we turn to a very special case.
\subsection{Convex Caps}
\seclab{ConvexCaps}
The special case is unzipping ``convex caps.''
I quote the definition from~\cite{o-eunfcc-17}:
\begin{quotation}
\noindent
``Let ${\mathcal P}$ be a convex polyhedron, and let $\phi(f)$ be
the angle the normal to face $f$ makes with the $z$-axis.
Let $H$ be a halfspace whose bounding plane is orthogonal to the $z$-axis, and includes points
vertically above that plane.
Define a \emph{convex cap} ${\mathcal C}$ of angle ${\Phi}$ to be $C={\mathcal P} \cap H$
for some ${\mathcal P}$ and $H$, such that ${\phi}(f) \le {\Phi}$ for all $f$ in ${\mathcal C}$. [...]
Note that ${\mathcal C}$ is not a closed polyhedron; it has no ``bottom,''
but rather a boundary ${\partial \mathcal C}$.''
\end{quotation}
\noindent
The result of this note is:
\begin{theorem}
For any ${\Phi} > 0$, there is a convex cap ${\mathcal C}$ that has no unzipping.
\thmlab{ZipCex}
\end{theorem}
\noindent
Because this holds for any ${\Phi} > 0$,
there are arbitrarily flat convex caps that cannot be unzipped.
(${\Phi}$ will not otherwise play a role in the proof.)
\section{Proof of Theorem~1}
\seclab{proof}
The convex caps used to prove the theorem are all variations on the cap
shown in Fig.~\figref{ZipCex3D}.
The base ${\partial \mathcal C} = (b_1,b_2,b_3)$ forms a unit side-length equilateral triangle in the $xy$-plane.
The three vertices $a_1,a_2,a_3$ are also the corners of an equilateral triangle,
lifted a small amount $z_a$ above the base.
In projection to the $xy$-plane, the ``apron'' of quadrilaterals between
$\triangle b_1 b_2 b_3$ and $\triangle a_1 a_2 a_3$ has width ${\varepsilon} > 0$.
The vertex $c$, at height $z_c > z_a$, sits over the common centroid of the equilateral triangles.
The shape of the cap is controlled by three parameters: ${\varepsilon}, z_a, z_c$.
Keeping ${\varepsilon}$ fixed and varying $z_a$ and $z_c$ permits controlling the
curvatures ${\omega}_a$ at $a_i$ and ${\omega}_c$ at $c$.
In Fig.~\figref{ZipCex3D}, ${\varepsilon}=0.1$
and $z_a,z_c = 0.02, 0.1$
leads to ${\omega}_a=1.9^\circ$ and ${\omega}_c=5.6^\circ$.
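The curvature at the apex is the angle deficit $2\pi$ minus the sum of the three face angles incident to $c$. The sketch below computes it under one plausible reading of the construction (inner triangle of circumradius $1/\sqrt{3} - 2{\varepsilon}$ at height $z_a$, apex over the centroid at height $z_c$); since the exact parameterization is not fixed here, the numbers need not reproduce the values quoted above.

```python
import numpy as np

def apex_curvature(eps, z_a, z_c):
    """Angle deficit at the apex c, for one assumed reading of the cap:
    inner equilateral triangle of circumradius 1/sqrt(3) - 2*eps at
    height z_a, apex over the centroid at height z_c."""
    r = 1 / np.sqrt(3) - 2 * eps        # circumradius of triangle a1 a2 a3
    s = np.sqrt(3) * r                   # its side length
    slant = np.hypot(r, z_c - z_a)       # edge length |c a_i|
    apex_angle = 2 * np.arcsin(s / (2 * slant))
    return 2 * np.pi - 3 * apex_angle

w = apex_curvature(0.1, 0.02, 0.1)
assert 0 < w < np.radians(20)            # a small positive curvature
# Flattening the cap (z_c -> z_a) drives the curvature toward zero.
assert apex_curvature(0.1, 0.02, 0.021) < w
```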
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/ZipCex3D}
\caption{A convex cap ${\mathcal C}$ that has no unzipping.}
\figlab{ZipCex3D}
\end{figure}
A typical attempt at an unzipping (of a variant of
Fig.~\figref{ZipCex3D}) is shown in Fig.~\figref{Layout_caaa}.
In general we will only display what are labeled $L$ and $R$ in this figure, rather
than the full unfolding.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\linewidth]{Figures/Layout_caaa}
\caption{An overlapping unfolding of a convex cap
(a variant of Fig.~\protect\figref{ZipCex3D})
from cut-path
${\gamma} =(c, a_2, a_3, a_1, b_1)$.
Compare Fig.~\protect\figref{Devel_caaa_linear_5_10} ahead.}
\figlab{Layout_caaa}
\end{figure}
From now on we will illustrate cut-paths and unzippings in the plane, starting
from Fig.~\figref{EqTris} (and not always repeating all the labels).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\linewidth]{Figures/EqTris}
\caption{Projection of Fig.~\protect\figref{ZipCex3D} to $xy$-plane.}
\figlab{EqTris}
\end{figure}
\subsection{Constraints on the cut-path}
\seclab{Constraints}
Any point $p$ in the relative interior of ${\gamma}$ (i.e., not an endpoint)
develops in the plane to two points $p'$ and $p''$,
with right and left incident surface angles ${\rho}={\rho}(p)$ and ${\lambda}={\lambda}(p)$.
If $p$ is not at a vertex of ${\mathcal C}$, then ${\lambda}+{\rho}=2\pi$.
If $p$ is at a vertex of curvature ${\omega}={\omega}(p)$, then ${\lambda}+{\rho}+{\omega}=2\pi$.
We will show the development ${\mathcal C}$ as cut by ${\gamma}$ by drawing two directed paths
$R$ and $L$, each determined by the ${\rho}$ and ${\lambda}$ angles,
which deviate by ${\omega}(v)$ at each vertex $v \in {\gamma}$.
The surface of ${\mathcal C}$ is right of $R$ and left of $L$
(see Fig.~\figref{Layout_caaa}), but not explicitly
depicted in subsequent figures.
The constraints on ${\gamma}$ to be an unzipping are:
\begin{enumerate}
\setlength{\itemsep}{0pt}
\item ${\gamma}$ must be a path, by definition of unzipping.
\item ${\gamma}$ must start at one of the vertices $\{c,a_1,a_2,a_3\}$
and terminate on ${\partial \mathcal C}$.
\item ${\gamma}$ does not have to include any of the vertices $\{b_1,b_2,b_3\}$,
it just needs to exit ${\mathcal C}$ at some point of ${\partial \mathcal C}$.
\item ${\gamma}$ can only touch ${\partial \mathcal C}$ at one point, for if it touches at two
or more points, the unfolding would be disconnected into more than one piece.
\item Between vertices, ${\gamma}$ can follow any path on ${\mathcal C}$, as long as ${\gamma}$
does not self-cross, which would again result in more than one piece.
\item And of course, the developments of $R$ and $L$ must not cross in the plane,
for $R / L$ crossings imply overlap.\footnote{
The reverse is not always true: It could be that $R$ and $L$ do not cross, but
other portions of the surface away from the cut ${\gamma}$ are forced to overlap
by, for example, large curvature openings.}
\end{enumerate}
We think of ${\gamma}$ as directed from its root start vertex
to ${\partial \mathcal C}$; the path opens from the root to its boundary exit.
The main constraint we exploit is item~4: ${\gamma}$ can only touch ${\partial \mathcal C}$ at one point.
We will see that only by leaving ${\mathcal C}$ and returning could the unzipping avoid overlap.
Due to the symmetry of ${\mathcal C}$---in particular, the equivalence of $\{a_1,a_2,a_3\}$---there are only four combinatorially distinct possible cut-paths ${\gamma}$,
where we use $b$ to represent any point on ${\partial \mathcal C}$:
\begin{enumerate}
\setlength{\itemsep}{0pt}
\item ${\gamma} = (c, a_1, a_2, a_3, b) = caaab$.
\item ${\gamma} = (a_1, c, a_2, a_3, b) = acaab$.
\item ${\gamma} = (a_1, a_2, c, a_3, b) = aacab$.
\item ${\gamma} = (a_1, a_2, a_3, c, b) = aaacb$.
\end{enumerate}
We abbreviate the path structure with strings $acaab$ and so on, with the obvious meaning.
It turns out that the location of $b$, the point at which ${\gamma}$ exits ${\mathcal C}$, plays
little role in the proof.
We will display the structure of ${\gamma}$ and the developments of $R$ and $L$
as in
Fig.~\figref{Path_acaa_linear}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/Path_acaa_linear}
\caption{%
Left: path ${\gamma} = (a_1, c, a_2, a_3, b_3) = acaab$.
Right: $R$ and $L$ developed.
$\{{\omega}_a,{\omega}_c\}=\{5^\circ,10^\circ\}$.}
\figlab{Path_acaa_linear}
\end{figure}
Here ${\gamma}$ is shown following straight segments between vertices, and
the developments overlap substantially.
But as per item~5 above, ${\gamma}$ can follow potentially any
(non-self-intersecting) curve between vertices.
However, the developed images of the vertices are independent of the
shape of the path
between vertices, a condition we exploit in the proof.
So once the combinatorial structure of the cut ${\gamma}$ is fixed, the developed
locations of the vertex images are determined.
We will continue to use $\{{\omega}_a,{\omega}_c\}=\{5^\circ,10^\circ\}$ for illustration,
although any smaller curvatures also work in the proofs.
\subsection{Radial Monotonicity: Intuition}
\seclab{RM}
Before beginning the proof details, we provide the intuition behind it.
That intuition depends on the notion of a ``radially monotone'' curve,
a concept used in~\cite{o-ucprm-16} and~\cite{o-eunfcc-17}.
A directed polygonal chain $P$ in the plane with vertices $u_1, u_2, \ldots, u_k$ is
\emph{radially monotone with respect to $u_1$} if the distance from $u_1$ to
every point $p \in P$ increases monotonically as $p$ moves out along the chain.
$P$ is \emph{radially monotone} if it is radially monotone with respect to each
vertex $u_i$: concentric circles centered on each $u_i$ are crossed just
once by the chain beyond $u_i$.
If both the $R$ and $L$ developments are radially monotone, then $L$ and $R$
do not intersect except at their common ``root'' vertex, a fact proved in the
cited papers.\footnote{
There are some curvature bound assumptions to this claim that are not relevant here.}
This suggests that ${\gamma}$ should be chosen so that $R$ and $L$ are radially monotone.
However, if $R$ or $L$ or both are not radially monotone, they do not necessarily
overlap: radial monotonicity is sufficient for non-overlap but not necessary.
Nevertheless, striving for radial monotonicity makes sense.
The sharp turns necessary to span the vertices of ${\mathcal C}$
(visible in Fig.~\figref{Path_acaa_linear}) should be avoided,
for they violate radial monotonicity.
(Any angle $\angle u_{i-1}, u_i, u_{i+1}$ smaller than $90^\circ$ implies
non-monotonicity at $u_i$ with respect to $u_{i-1}$.)
Avoiding these sharp turns forces ${\gamma}$ to exit ${\mathcal C}$ before spanning the vertices.
Although radial monotonicity is not used in the proofs to follow,
it is the intuition behind the proofs.
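The radial-monotonicity test is easy to implement for polygonal chains, using the fact that the distance from a fixed point to a point moving along a segment is unimodal (decreasing then increasing), so it increases on the whole segment iff it increases at the segment's start. This is a sketch, not code from the cited papers.

```python
import numpy as np

def is_radially_monotone(chain):
    """Check radial monotonicity of a polygonal chain: for every vertex u_i,
    the distance from u_i must be non-decreasing along the chain beyond u_i.
    On a segment from p0 to p1, |p(t) - c| is non-decreasing on [0,1]
    iff (p0 - c) . (p1 - p0) >= 0."""
    P = np.asarray(chain, dtype=float)
    for i in range(len(P)):
        for j in range(i, len(P) - 1):
            if np.dot(P[j] - P[i], P[j + 1] - P[j]) < 0:
                return False
    return True

assert is_radially_monotone([(0, 0), (1, 0), (2, 0.5)])
# A turn sharper than 90 degrees violates monotonicity at that vertex.
assert not is_radially_monotone([(0, 0), (1, 0), (0.5, 0.1)])
```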
\subsection{Lemmas~1,2,3,4}
\seclab{Lemmas}
Of the four possible types of ${\gamma}$, $acaab$ is the ``closest'' to being
unzippable, so we start with this type.
\begin{lemma}
For sufficiently small ${\omega}_a$, ${\omega}_c$, and ${\varepsilon}$,
any cut-path ${\gamma}$ of type $acaab$ must leave and reenter ${\mathcal C}$
to avoid overlap. Therefore, ${\mathcal C}$ cannot be unzipped with
this type of cut-path.
\lemlab{acaab}
\end{lemma}
\begin{proof}
We have already seen in Fig.~\figref{Path_acaa_linear} that straight
connections between the vertices lead to overlap.
Fig.~\figref{RMlogic4frames}(a) repeats the set-up of that figure, with
added notation. Let $R_1,R_2,R_3$ be the portions of the
right development $R$ between vertices, and similarly for $L_i$.
We now imagine that $R_i$ and $L_i$ are arbitrary cuts between their vertex
endpoints. We concentrate on $R_3$ and $L_3$.
From the fact that the images of the vertices, and in particular, $a_2$,
are in their correct developed planar locations, we can derive
constraints on the shape of the $R_3$ and $L_3$ paths.
The shape of $R_i$ determines $L_i$ and vice versa, because for all
non-vertex points of ${\gamma}$, ${\rho}+{\lambda} = 2\pi$.
Thus $R_i$ and $L_i$ are congruent as curves, but rigidly rotated differently
by the curvatures along ${\gamma}$.
There are only two topological possibilities for $R_3$ and $L_3$
to avoid
crossing earlier portions of $R$ and $L$,
illustrated in Fig.~\figref{RMlogicTopology}.
In~(a) of the figure, $R_3$ passes right of $a''_2$ on its way
counterclockwise to $a'_3$, and
in~(b), $L_3$ passes right of $a'_2$ on its way
clockwise to $a''_3$.
The situations are analogous in the neighborhood of $a_2$,
and we concentrate only on the former more direct route.
Knowing that $R_3$ passes to the right of $a''_2$ determines the vector
displacement of the tightest possible prefix $(a'_2,a''_2)$ of $R_3$, but not the shape of that
prefix. This vector displacement forms an effective angle
$\angle c',a'_2,a''_2$ much larger than the near-$30^\circ$ necessary
to stay on the narrow ${\varepsilon}$-apron.
Fig.~\figref{RMlogic4frames}(a) and~(b) show that this angle is
nearly $70^\circ$, well beyond $30^\circ$.
(And the angle is larger if $R_3$ passes further to the right of $a''_2$.)
This $70^\circ$ turn implies an effective surface angle ${\rho} = 290^\circ$ to the right of ${\gamma}$
on ${\mathcal C}$ at $a_2$, ``effective'' because the exact shape of ${\gamma}$ is unknown.
The exact angle and length of vector displacement depend on $\{{\omega}_a,{\omega}_c\}$,
but for any given curvatures, we can choose an ${\varepsilon}$ small enough so
that the prefix forces ${\gamma}$ to step exterior to ${\mathcal C}$.
Thus ${\gamma}$ must leave ${\mathcal C}$ to
avoid overlap before it completes its tour of the vertices.
Although this proves the lemma, we continue the analysis
below to reveal a deeper structure.
\end{proof}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/RMlogic4frames}
\caption{Analysis of ${\gamma}$ of type $acaab$.
(a)~Opening at $a_1$ and $c$ causes $R_3 / L_3$ overlap.
(b)~$R_3$ bends around $a''_2$.
(c)~$L_3$ complements $R_3$, which again intersects $L_3$.
(d)~$L_3$ complements $R_3$. $R_3$ is following the arc centered on $x$.}
\figlab{RMlogic4frames}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\linewidth]{Figures/RMlogicTopology}
\caption{Possible paths from $a_2$ to $a_3$.
(a)~$R_3$ skirts $a''_2$.
(b)~$L_3$ skirts $a'_2$.}
\figlab{RMlogicTopology}
\end{figure}
Given the constraint just derived on the prefix of $R_3$, we
know $L_3$ must complement $R_3$ on that prefix,
which leads to Fig.~\figref{RMlogic4frames}(c).
Again there is an overlap further along $R_3$. Altering $R_3$ to again bend around $L_3$
leads to Fig.~\figref{RMlogic4frames}(d).
Continuing this process of incrementally determining constraints on $R_3$ that
are mirrored in $L_3$
leads to the conclusion that $R_3$ must follow (or be outside of) the arc of a circle
centered at $x$, where $x$ is the combined center of rotation of
the rotations at $a_1$ and at $c$.
This is why we can be sure the angle at $a_2$ is well beyond $30^\circ$
independent of $\{{\omega}_a,{\omega}_c\}$:
it is determined by (an approximation to) the tangent to this circle.
Finally, $R_3$ and $L_3$ are opened further by the curvature ${\omega}_a$ at $a_2$, which does
not alter the previous analysis.
Here we pause to discuss the ``combined center of rotation'' just used.
Any pair of rotations about two distinct points is equivalent to a single rotation
about a combined center. In our situation, the two rotations are
${\omega}_a$ about $a_1$ and ${\omega}_c$ about $c$.
For small rotations, they are equivalent to a rotation by ${\omega}_a+{\omega}_c$
about the weighted center
$$
x= \frac{ {\omega}_a a_1 + {\omega}_c c }{ {\omega}_a+{\omega}_c} \;.
$$
This point is indicated in Figs.~\figref{RMlogic4frames}(a) and~(d).
This result on combining rotations is proved in both~\cite{o2017addendum} and~\cite{barvinok2017pseudo}
(and likely elsewhere).
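As a numeric sanity check of this fact (our own sketch, not part of the cited proofs), one can compose two small rotations in the complex plane and compare the exact fixed point of the composition with the weighted center above:

```python
import cmath

def composed_center(alpha, p, beta, q):
    """Exact fixed point of (rotate by beta about q) after (rotate by alpha about p),
    in the complex plane: solve z = q + e^{i beta}(p + e^{i alpha}(z - p) - q)."""
    ea, eb = cmath.exp(1j * alpha), cmath.exp(1j * beta)
    return (q + eb * (p - q) - eb * ea * p) / (1 - eb * ea)

# small curvatures: omega_a about a_1 and omega_c about c
omega_a, omega_c = 0.01, 0.02
a1, c = 0 + 0j, 1 + 0j
exact = composed_center(omega_a, a1, omega_c, c)
weighted = (omega_a * a1 + omega_c * c) / (omega_a + omega_c)
print(abs(exact - weighted))  # discrepancy is O(omega): it vanishes for small rotations
```

The composition is an exact rotation by ${\omega}_a + {\omega}_c$ about the fixed point, and the fixed point tends to the weighted center as the angles shrink.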
The above analysis suggests that ${\mathcal C}$ can be unzipped if the apron were large enough
to include the circle arc that ${\gamma}$ must follow from $a_2$ to $a_3$. And indeed
Fig.~\figref{Devel_acaa_arc_5_10} shows that this is true.
An interesting consequence of this unzipping is that,
even with a small apron, if we close the convex cap ${\mathcal C}$
by adding an equilateral triangle base to form a closed convex polyhedron ${\mathcal P}$,
then ${\mathcal P}$ does have an unzipping. Follow the path shown
in Fig.~\figref{Devel_acaa_arc_5_10}, and complete it by
extending ${\gamma}$ to cut $(b_3, b_1, b_2)$, leaving $b_2 b_3$ uncut. Then the
arc illustrated would lie on the unfolding of the base $\triangle b_1 b_2 b_3$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/Devel_acaa_arc_5_10}
\caption{An $acaab$ unzipping of ${\mathcal C}$ extending outside ${\partial \mathcal C}$.
Right: The $R$ and $L$ developments do not cross.}
\figlab{Devel_acaa_arc_5_10}
\end{figure}
We now turn to the other three types of cut-paths ${\gamma}$.
The proof for type $caaab$ is similar to Lemma~\lemref{acaab},
and so will only be sketched.
\begin{lemma}
For sufficiently small ${\omega}_a$, ${\omega}_c$, and ${\varepsilon}$,
any cut-path ${\gamma}$ of type $caaab$ must leave and reenter ${\mathcal C}$
to avoid overlap. Therefore, ${\mathcal C}$ cannot be unzipped with
this type of cut-path.
\lemlab{caaab}
\end{lemma}
\begin{proof}
The cut-path with straight segments overlaps at two spots in development,
as shown in Fig.~\figref{Devel_caaa_linear_5_10}
(cf.~Fig.~\figref{Layout_caaa}).
Using the same reasoning as in Lemma~\lemref{acaab},
except that the rotations are centered on $c$ (rather than on both $a_1$
and $c$), leads to the conclusion that $R_2$ and $R_3$ must both
deviate from the $30^\circ$ turn at $a_2$ and the $60^\circ$ turn at $a_3$
needed to stay on an arbitrarily thin apron. In fact, $R_2$ and $R_3$
must follow circle arcs centered on $c$.
Doing so would in fact allow ${\mathcal C}$ to be unzipped if the apron were large enough, as shown
in Fig.~\figref{Devel_caaa_arcs_5_10}.
But for an arbitrarily thin ${\varepsilon}$-apron, ${\gamma}$ must exit ${\mathcal C}$ before visiting
all vertices, and so cannot be unzipped with this type of cut-path.
\end{proof}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/Devel_caaa_linear_5_10}
\caption{The cut-path type $caaab$ leads to overlap with straight segments.}
\figlab{Devel_caaa_linear_5_10}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/Devel_caaa_arcs_5_10}
\caption{A $caaab$ unzipping of ${\mathcal C}$ extending outside ${\partial \mathcal C}$.
Right: The $R$ and $L$ developments do not cross.}
\figlab{Devel_caaa_arcs_5_10}
\end{figure}
The third type of cut-path, $aacab$ (Fig.~\figref{Devel_aaca_linear_5_10}),
is different in that not
even following arcs outside of ${\mathcal C}$ would suffice to unzip it without overlap.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/Devel_aaca_linear_5_10}
\caption{The cut-path type $aacab$ leads to $L / R$ overlap.}
\figlab{Devel_aaca_linear_5_10}
\end{figure}
\begin{lemma}
For sufficiently small ${\omega}_a$, ${\omega}_c$, and ${\varepsilon}$,
any cut-path ${\gamma}$ of type $aacab$ cannot visit all vertices
without overlap in the development.
Therefore, ${\mathcal C}$ cannot be unzipped with
this type of cut-path.
\lemlab{aacab}
\end{lemma}
\begin{proof}
We analyze the constraints on $R_2$ in
Fig.~\figref{RMlogic_aaca}.
The rotation at $a_1$ determines the prefix of $R_2$ following the same
reasoning as in Lemma~\lemref{acaab},
and again already in Fig.~\figref{RMlogic_aaca}(b) we have an angle
at $a_2$ much larger than the $30^\circ$ turn required to reach $c$.
This already establishes the cut-path cannot be an unzipping.
But in fact, it is clear that $R_2$ must follow the circle arc shown in
Fig.~\figref{RMlogic_aaca}(d), centered on $a_1$. Following this arc
makes it impossible for $R_2$ to reach $c$:
the path is forced toward $a_3$ instead.
\end{proof}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/RMlogic_aaca}
\caption{Analysis of ${\gamma}$ of type $aacab$.
(a)~Opening at $a_1$ causes $R_2 / L_2$ overlap.
(b)~$R_2$ bends around $a''_2$.
(c)~$L_2$ complements $R_2$, which again intersects $L_2$.
(d)~$L_2$ complements $R_2$. $R_2$ is following the arc centered on $a_1$.}
\figlab{RMlogic_aaca}
\end{figure}
\noindent
The last combinatorial cut-path type, $aaacb$, mixes themes from the
others: first, ${\gamma}$ must go outside ${\mathcal C}$, which already establishes there
is no unzipping, and second, even if the apron were large enough, ${\gamma}$ cannot
reach $c$. We rely just on the first impediment.
\begin{lemma}
For sufficiently small ${\omega}_a$, ${\omega}_c$, and ${\varepsilon}$,
any cut-path ${\gamma}$ of type $aaacb$ cannot visit all vertices
without leaving and re-entering ${\mathcal C}$.
Therefore, ${\mathcal C}$ cannot be unzipped with
this type of cut-path.
\lemlab{aaacb}
\end{lemma}
\begin{proof}
Fig.~\figref{Devel_aaac_linear_5_10} shows there is overlap when ${\gamma}$ is composed of straight segments.
By now familiar reasoning, the portion of ${\gamma}$ from $a_2$ to $a_3$ must follow a
circular arc centered on $a_1$. This is illustrated in
Fig.~\figref{Devel_aaac_arc_5_10}, and already steps outside
an ${\varepsilon}$-thin apron in the neighborhood of $a_2$, where it
makes an angle of approximately $90^\circ$ rather than the necessary $60^\circ$.
This establishes the claim of the lemma.
\end{proof}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/Devel_aaac_linear_5_10}
\caption{The cut-path type $aaacb$ leads to $L / R$ overlap with straight segments.}
\figlab{Devel_aaac_linear_5_10}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{Figures/Devel_aaac_arc_5_10}
\caption{The cut-path type $aaacb$ extends outside ${\mathcal C}$ on the $(a_2,a_3)$ arc
(and still $R$ crosses $L$ near $a_3$).
\figlab{Devel_aaac_arc_5_10}
\end{figure}
\medskip
\noindent
We restate Theorem~1 in more detail:
\setcounter{theorem}{0}
\begin{theorem}
Convex caps ${\mathcal C}$ with sufficiently small
$\{{\omega}_a,{\omega}_c, {\varepsilon}\}$, as depicted in Fig.~\figref{ZipCex3D},
have no unzipping: they are un-unzippable.
Thus there are arbitrarily flat convex caps that cannot be unzipped.
\end{theorem}
\begin{proof}
We argued that only four combinatorial types of cut-paths ${\gamma}$
are possible on ${\mathcal C}$.
Lemmas~\lemref{acaab}, \lemref{caaab}, \lemref{aacab}, \lemref{aaacb}
established that for sufficiently small
curvatures $\{{\omega}_a,{\omega}_c\}$ and a sufficiently thin ${\varepsilon}$-apron,
each of these cut-path types fails to unzip ${\mathcal C}$.
Because the arguments are independent of the exact values of
$\{{\omega}_a,{\omega}_c\}$, only requiring a sufficiently small ${\varepsilon}$ to match,
the claim holds for arbitrarily flat convex caps.
\end{proof}
\section{Discussion}
\seclab{Discussion}
It is tempting to hope that the negative result of Theorem~\thmref{ZipCex}
can somehow be used to address the open problem for convex polyhedra.
However, as mentioned earlier (Sec.~\secref{Lemmas}), closing the convex cap in Fig.~\figref{ZipCex3D}
by adding a base creates a polyhedron that can in fact be unzipped.
Perhaps this is not surprising, as the proofs rely crucially
on the fact that ${\mathcal C}$ has a boundary ${\partial \mathcal C}$.
I have also explored using several un-unzippable convex caps to tile
a closed convex polyhedron, but so far to no avail.
The open problem from~\cite{lddss-zupc-10}
quoted in Sec.~\secref{Introduction} remains open.
\newpage
\bibliographystyle{alpha}
% arXiv:1802.01621 --- Un-unzippable Convex Caps (cs.CG; cs.DM), 2018
% arXiv:2005.07846 --- Jordan--Landau theorem for matrices over finite fields

\section{Introduction}
\hspace{3mm} The purpose of this paper is to give some concrete arithmetic applications of the following fact: for any finitely many distinct $d_{1}, \dots, d_{r} \in \bZ_{\geq 1}$, the distribution of numbers of degree $d_{1}, \dots, d_{r}$ irreducible factors of the characteristic polynomial $f_{A}(t)$ of a matrix $A$ chosen uniformly at random from the set $\Mat_{n}(\bF_{q})$ of $n \times n$ matrices over $\bF_{q}$ converges to the distribution of numbers of length $d_{1}, \dots, d_{r}$ cycles of a random $\sigma \in S_{n}$, as $q$ goes to infinity.
\
\begin{prop}\label{main} Let $d_{1}, \dots, d_{r} \in \bZ_{\geq 1}$ be distinct and $k_{1}, \dots, k_{r} \in \bZ_{\geq 0}$ not necessarily distinct. We have
$$\lim_{q \ra \infty}\Prob_{A \in \Mat_{n}(\bF_{q})}
\left(\begin{array}{c}
f_{A}(t) \text{ has } k_{j} \text{ degree } d_{j} \\
\text{irreducible factors for } 1 \leq j \leq r \\
\text{counting with multiplicity}
\end{array}\right)
= \Prob_{\sigma \in S_{n}}\left(\begin{array}{c}
\sigma \text{ has } k_{j} \text{ disjoint} \\
d_{j}\text{-cycles for } 1 \leq j \leq r
\end{array}\right).$$
\
where $f_{A}(t)$ is the characteristic polynomial of $A \in \Mat_{n}(\bF_{q})$.\footnote{The event on the right-hand side means that when we write $\sigma$ as a unique product of disjoint cycles, there are $k_{j}$ cycles of length $d_{j}$ in the product for $1 \leq j \leq r$. The special case $r = 1$ is indirectly mentioned in the ``Conclusion'' section of \cite{Sto}.}
\end{prop}
\
\begin{rmk} After the previous version of this paper appeared, it was pointed out to us that Proposition \ref{main} is known due to Hansen and Schmutz (\cite{HS}, Theorem 1.1). Their result is for $\GL_{n}(\bF_{q})$ rather than $\Mat_{n}(\bF_{q})$, but the two are easily interchangeable. Indeed, the last line of their proof (but not the statement), together with a well-known fact regarding some statistics of degree $n$ monic polynomials in $\bF_{q}[t]$ for large $q$, implies Proposition \ref{main}. The proof of Hansen and Schmutz uses Stong's work \cite{Sto}, on which our proof also depends, and they even compute some asymptotic errors with respect to $q$ and $r$. Since our work only considers the limit in $q$, our proof is simpler, so we decided to keep it in the paper.
\
\hspace{3mm} In the following subsections, we discuss two applications of Proposition \ref{main}. The first one, Theorem \ref{p-adic}, considers the distribution of the cokernel of a random $n \times n$ matrix over $\bZ_{p}$, the ring of $p$-adic integers, with respect to the Haar (probability) measure on $\Mat_{n}(\bZ_{p}) = {\bZ_{p}}^{n^{2}}$, for large $p$ and $n$. To the best of our knowledge, and after consulting several experts, Theorem \ref{p-adic} is new. The second one, Theorem \ref{JL}, is a matrix version of Landau's theorem in number theory; it estimates the number of irreducible factors of a random characteristic polynomial for large $n$ when $q \ra \infty$. Just as Proposition \ref{main} can be deduced from the work of Hansen and Schmutz \cite{HS}, Theorem \ref{JL} can also be deduced from their work if we apply another result of S. Cohen in \cite{Coh}. The interesting difference in our exposition is that we instead use Jordan's result in \cite{Jor} about the statistics of a random permutation in $S_{n}$ for large $n$, rather than the statistics of degree $n$ monic polynomials in $\bF_{q}[t]$ for large $q$ and $n$, in order to deduce Theorem \ref{JL}.
\end{rmk}
\
\subsection{The distribution of cokernels of Haar $\bZ_{p}$-random matrices with large $p$} Let $\bZ_{p}$ be the ring of $p$-adic integers, namely the inverse limit of the system of projection maps $\cdots \tra \bZ/(p^{3}) \tra \bZ/(p^{2}) \tra \bZ/(p)$.
\
\begin{thm}\label{p-adic} Fix any distinct $d_{1}, \dots, d_{r} \in \bZ_{\geq 1}$. For any $d \in \bZ_{\geq 1}$ denote by $\mc{M}(d, p)$ the set of all monic irreducible polynomials of degree $d$ in $\bF_{p}[t]$. Then the limit
$$\lim_{p \ra \infty} \Prob_{A \in \Mat_{n}(\bZ_{p})}
\left(\begin{array}{c}
\coker(P(A)) = 0 \\
\text{ for all } \bar{P} \in \bigsqcup_{j=1}^{r}\mc{M}(d_{j}, p)
\end{array}\right)$$
\
exists, where for $P(A)$, we use any lift of $\bar{P}(t) \in \bF_{p}[t]$ in $\bZ_{p}[t]$ under the reduction modulo $p$ and the probability is according to the Haar measure on $\Mat_{n}(\bZ_{p})$. Moreover, we have
$$\lim_{n \ra \infty} \lim_{p \ra \infty} \Prob_{A \in \Mat_{n}(\bZ_{p})}
\left(\begin{array}{c}
\coker(P(A)) = 0 \\
\text{ for all } \bar{P} \in \bigsqcup_{j=1}^{r}\mc{M}(d_{j}, p)
\end{array}\right)
= e^{-1/d_{1}} \cdots e^{-1/d_{r}}.$$
\end{thm}
\
\begin{rmk} The particular order in which the two limits are taken matters in Theorem \ref{p-adic}. For instance, we are unsure whether we can change the order of these limits, which may lead to an interesting consequence regarding the Cohen-Lenstra measure introduced in \cite{CL}. It is worth noting that our proof can be easily modified so that one may replace $\bZ_{p}$ with $\bF_{q}\llb t \rrb$, reduce modulo $(t)$ in place of $p$, and let $q \ra \infty$ instead of $p \ra \infty$ in Theorem \ref{p-adic}, although we will just focus on $\bZ_{p}$. The reader does not need much expertise with the Haar measure on $\Mat_{n}(\bZ_{p})$ to follow our proof. The only property of the Haar measure on $\Mat_{n}(\bZ_{p})$ we will use is that each fiber of the mod $p$ projection map $\Mat_{n}(\bZ_{p}) \tra \Mat_{n}(\bF_{p})$ has the same measure, which is necessarily $1/p^{n^{2}}$.
\end{rmk}
\
\hspace{3mm} We conjecture that the large $p$ limit in Theorem \ref{p-adic} has to do with independent Poisson random variables with means $1/d_{1}, \dots, 1/d_{r}$ in more generality. However, we have little empirical evidence, so it would be interesting to see more examples that either support or disprove this conjecture.
\
\begin{conj} Fix any distinct $d_{1}, \dots, d_{r} \in \bZ_{\geq 1}$ and not necessarily distinct $k_{1}, \dots, k_{r} \in \bZ_{\geq 0}$. Then the limit
$$\lim_{p \ra \infty} \Prob_{A \in \Mat_{n}(\bZ_{p})}
\left(\begin{array}{c}
|\coker(P_{j}(A))| = p^{d_{j}k_{j}} \\
\text{ for all } (\bar{P}_{1}, \dots, \bar{P}_{r}) \in \prod_{j=1}^{r}\mc{M}(d_{j}, p)
\end{array}\right)$$
\
must exist. Moreover, we must have
$$\lim_{n \ra \infty}\lim_{p \ra \infty} \Prob_{A \in \Mat_{n}(\bZ_{p})}
\left(\begin{array}{c}
|\coker(P_{j}(A))| = p^{d_{j}k_{j}} \\
\text{ for all } (\bar{P}_{1}, \dots, \bar{P}_{r}) \in \prod_{j=1}^{r}\mc{M}(d_{j}, p)
\end{array}\right)
= \frac{e^{-1/d_{1}}(1/d_{1})^{k_{1}}}{k_{1}!} \cdots \frac{e^{-1/d_{r}}(1/d_{r})^{k_{r}}}{k_{r}!}.$$
\end{conj}
\
\subsection{Jordan-Landau theorem for matrices over finite fields} It is folklore that the cycle decomposition of a random permutation is analogous to the prime decomposition of a random positive integer. For instance, Granville (Section 2.1 of \cite{Gra}) notes that given $k \in \bZ_{\geq 1}$, a theorem of Jordan (p.161 of \cite{Jor}) says
$$\Prob_{\sigma \in S_{n}}(\sigma \text{ has } k \text{ disjoint cycles}) \sim \frac{(\log n)^{k-1}}{(k-1)! n}$$
\
for large $n \in \bZ_{\geq 1}$, while a theorem of Landau (p.211 of \cite{Lan} or Theorem 437 on p.491 of \cite{HW}) says
\begin{align*}
\Prob_{1 \leq N \leq x}\left(\begin{array}{c}
N \text{ has } k \text{ distinct} \\
\text{prime factors}
\end{array}\right) &\sim \Prob_{1 \leq N \leq x}\left(\begin{array}{c}
N \text{ has } k \text{ prime factors} \\
\text{counting with multiplicity}
\end{array}\right)\\
&\sim \frac{(\log \log x)^{k-1}}{(k-1)!\log x}
\end{align*}
\
for large $x \in \mathbb{R}_{\geq 1}$. The first probability is given uniformly at random from the set $S_{n}$ of permutations on $n$ letters. The second probability is given by choosing the integer $N$ uniformly at random from the set $\{1, 2, \dots, \lfloor x \rfloor\}$. We wrote ``$f(x) \sim g(x)$ for large $x$'' to mean that $f(x)/g(x) \ra 1$ as $x \ra \infty$. This is a generalization of the Prime Number Theorem (i.e., the case $k = 1$).
\
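Jordan's estimate can be compared with the exact distribution: the number of permutations in $S_{n}$ with exactly $k$ cycles is the unsigned Stirling number of the first kind $c(n,k)$, which satisfies the standard recurrence $c(n,k) = c(n-1,k-1) + (n-1)c(n-1,k)$. The following sketch (ours, for illustration only) computes the exact probability and the asymptotic main term:

```python
from fractions import Fraction
from math import factorial, log

def stirling_cycle_row(n):
    """row[k] = #{sigma in S_n with exactly k cycles} (unsigned Stirling numbers, first kind)."""
    row = [1]  # n = 0: the empty permutation has zero cycles
    for m in range(1, n + 1):
        row = [(row[k - 1] if k >= 1 else 0) + (m - 1) * (row[k] if k < len(row) else 0)
               for k in range(m + 1)]
    return row

n, k = 1000, 2
row = stirling_cycle_row(n)
exact = Fraction(row[k], factorial(n))                 # P(sigma has exactly k cycles)
jordan = log(n) ** (k - 1) / (factorial(k - 1) * n)    # (log n)^{k-1} / ((k-1)! n)
print(float(exact) / jordan)  # ratio approaches 1, slowly, at speed O(1/log n)
```

For $k = 2$ the exact probability is $H_{n-1}/n$, so the ratio to Jordan's main term is roughly $1 + \gamma/\log n$, which is why the convergence is slow.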
\hspace{3mm} By the Prime Number Theorem, for large $x$, we may expect one prime in every interval of length $\log x$. For large $n$, we expect one cycle in $n$ permutations (since there are $(n-1)!$ $n$-cycles while there are far fewer cycles of shorter lengths), so Jordan's result is analogous to Landau's result. We take this analogy further. In particular, summing the identity given in Proposition \ref{main} over all tuples $(d_{1}, \dots, d_{r}) \in (\bZ_{\geq 1})^{r}$ and $(k_{1}, \dots, k_{r}) \in (\bZ_{\geq 0})^{r}$ such that
\begin{itemize}
\item $r \in \bZ_{\geq 0}$,
\item $n = \sum_{j=1}^{r}k_{j}d_{j}$, and
\item $k = \sum_{j=1}^{r} k_{j}$,
\end{itemize}
\
we have
$$\lim_{q \ra \infty}\Prob_{A \in \Mat_{n}(\bF_{q})}
\left(\begin{array}{c}
f_{A}(t) \text{ has } k \text{ irreducible factors} \\
\text{counting with multiplicity}
\end{array}\right)
= \Prob_{\sigma \in S_{n}}(\sigma \text{ has } k \text{ disjoint cycles}),$$
\
which is asymptotically $(\log n)^{k-1}/((k-1)!n)$ for large $n$, as noted above. Using the facts that almost all matrices $A \in \Mat_{n}(\bF_{q})$ have square-free characteristic polynomials and almost all monic polynomials of degree $n$ are square-free when $q \ra \infty$ (see Section \ref{Landau}), we may obtain the following.
\
\begin{thm}[Jordan-Landau theorem for $\bF_{q}$-matrices]\label{JL} Given $k \in \bZ_{\geq 1}$, for large $n \in \mathbb{Z}_{\geq 1}$, we have
\begin{align*}
\lim_{q \ra \infty}\Prob_{A \in \Mat_{n}(\bF_{q})}
\left(\begin{array}{c}
f_{A}(t) \text{ has } k \text{ distinct} \\
\text{irreducible factors}
\end{array}\right) &= \lim_{q \ra \infty}\Prob_{A \in \Mat_{n}(\bF_{q})}
\left(\begin{array}{c}
f_{A}(t) \text{ has } k \text{ irreducible factors} \\
\text{counting with multiplicity}
\end{array}\right) \\
&\sim \frac{(\log n)^{k-1}}{(k-1)! n}.
\end{align*}
\end{thm}
\
\begin{rmk} There is an analogue of Theorem \ref{JL}, due to S. Cohen \cite{Coh}, where the sample space is the set $\bA^{n}(\bF_{q})$ of all degree $n$ monic polynomials in $\bF_{q}[t]$. It says that given $k \in \bZ_{\geq 1}$, for large $n \in \mathbb{Z}_{\geq 1}$, we have
\begin{align*}
\lim_{q \ra \infty}\Prob_{f \in \bA^{n}(\bF_{q})}
\left(\begin{array}{c}
f(t) \text{ has } k \text{ distinct} \\
\text{irreducible factors}
\end{array}\right) &\sim \lim_{q \ra \infty}\Prob_{f \in \bA^{n}(\bF_{q})}
\left(\begin{array}{c}
f(t) \text{ has } k \text{ irreducible factors} \\
\text{counting with multiplicity}
\end{array}\right) \\
&\sim \frac{(\log n)^{k-1}}{(k-1)! n}.
\end{align*}
\end{rmk}
\
With these two results in mind, we note that pushing forward along the map $\Mat_{n}(\bF_{q}) = \bA^{n^{2}}(\bF_{q}) \tra \bA^{n}(\bF_{q})$ given by taking the characteristic polynomial of a matrix does not change the probability of the event we compute in $\bA^{n}(\bF_{q})$, when we take $q \ra \infty$. This can also be taken as another example to add to the list of similarly behaving combinatorial examples in \cite{ABT}, a survey paper by Arratia, Barbour, and Tavar\'e.
\
\subsection{Fulman's generalization and further directions} Recently, we have also realized that Fulman generalized Proposition \ref{main} to certain linear algebraic groups $G$ over $\bF_{q}$, of which $G = \GL_{n}$ is the special case relevant to Proposition \ref{main} (\cite{Ful99}, Theorem 15); here $S_{n}$ occurs as the Weyl group of $G(\ol{\bF_{q}}) = \GL_{n}(\ol{\bF_{q}})$. Fulman's argument needs to assume that the probability that a random element in $G(\bF_{q}) \subset G(\ol{\bF_{q}})$ is regular semisimple converges to $1$ as $q \ra \infty$, which we previously regarded as a mere heuristic. There is a statement in Fulman's paper (\cite{Ful99}, Theorem 5 and Theorem 7) that proves a similar property related to this hypothesis for $G = \GL_{n}$, but it takes $n \ra \infty$ before we can let $q \ra \infty$. However, for $g \in \GL_{n}(\bF_{q})$, saying that $g$ is regular semisimple turns out to mean that the characteristic polynomial of $g$ is square-free in $\bF_{q}[t]$, so using an argument similar to Lemma \ref{sfmat}, we see that $G = \GL_{n}$ satisfies Fulman's hypothesis because the number of degree $n$ monic square-free polynomials $f(t) \in \bF_{q}[t]$ such that $f(0) \neq 0$ is precisely
$$q^{n} - 2q^{n-1} + 2q^{n-2} - \cdots + (-1)^{n-1}2q + (-1)^{n},$$
\
which replaces Lemma \ref{sf} in the argument. We are not sure whether the above count is explicitly written in literature, but this is certainly well-known (e.g., more general phenomena are dealt in various works such as \cite{BE}, \cite{Kim}, and \cite{VW}). More specifically, one can obtain this count by counting the $\bF_{q}$-points of the unordered configuration space $\Conf^{n}(\bA^{1} \sm \{0\})$ of $n$ points on the affine line minus the origin over $\bF_{q}$. The generating function for such count $|\Conf^{n}(\bA^{1} \sm \{0\})(\bF_{q})|$ is given by
$$\sum_{n=0}^{\infty} |\Conf^{n}(\bA^{1} \sm \{0\})(\bF_{q})| t^{n} = \frac{Z_{\bA^{1} \setminus \{0\}}(t)}{Z_{\bA^{1} \setminus \{0\}}(t^{2})} = \frac{1 - qt^{2}}{(1 + t)(1 - qt)},$$
\
where $Z_{X}(t)$ denotes the zeta series of an algebraic variety $X$ over $\bF_{q}$; expanding the right-hand side, one may obtain the desired count. We do not know whether this hypothesis is satisfied by other linear algebraic groups $G$ over $\bF_{q}$. The reader should note that the reasons this works for $G = \GL_{n}$ are as follows:
\be
\item we are aware of the size of each fiber of the map $\Mat_{n}(\bF_{q}) = \bA^{n^{2}}(\bF_{q}) \rightarrow \bA^{n}(\bF_{q})$ given by taking the characteristic polynomials and
\item the preimage of the set of degree $n$ monic polynomials with nonzero constant terms under this map is precisely $\GL_{n}(\bF_{q})$.
\ee
Nevertheless, it would be extremely interesting if one could identify which algebraic groups $G$ over $\bF_{q}$, other than $\GL_{n}$, produce sets $G(\bF_{q})$ of $\bF_{q}$-points that can replace $\Mat_{n}(\bF_{q})$, with the Weyl groups of $G(\ol{\bF_{q}})$ replacing $S_{n}$, in Proposition \ref{main}. If such algebraic groups are given as a sequence $G_{n}$ in $n \in \mathbb{Z}_{\geq 1}$, then we can hope that studying the corresponding sequence $W_{n}$ of Weyl groups asymptotically in $n$ may produce statements analogous to Theorem \ref{p-adic} and Theorem \ref{JL}.
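The alternating count and the series expansion above can be cross-checked mechanically for small numeric values of $q$. The sketch below (ours, for illustration only) expands $(1-qt^{2})/((1+t)(1-qt))$ using the closed form for the coefficients of $1/((1+t)(1-qt))$:

```python
def series_coeffs(q, N):
    """First N+1 coefficients of (1 - q t^2) / ((1 + t)(1 - q t))."""
    # 1/((1+t)(1-qt)) has n-th coefficient sum_{j=0}^{n} (-1)^(n-j) q^j
    a = [sum((-1) ** (n - j) * q ** j for j in range(n + 1)) for n in range(N + 1)]
    return [a[n] - (q * a[n - 2] if n >= 2 else 0) for n in range(N + 1)]

def alternating_count(q, n):
    """q^n - 2q^(n-1) + 2q^(n-2) - ... + (-1)^(n-1) 2q + (-1)^n for n >= 1, and 1 for n = 0."""
    if n == 0:
        return 1
    return q ** n + sum((-1) ** (n - j) * 2 * q ** j for j in range(1, n)) + (-1) ** n

for q in (2, 3, 5):
    assert series_coeffs(q, 8) == [alternating_count(q, n) for n in range(9)]
print("series coefficients match the alternating count")
```

For instance, over $\bF_{2}$ there are exactly $3$ square-free monic cubics with nonzero constant term, matching the $n = 3$, $q = 2$ coefficient.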
\
\subsection{Organization for the rest} In Section \ref{Landau}, we will show that Proposition \ref{main} implies Theorem \ref{JL}. In Section \ref{Haar}, we will show that Proposition \ref{main} implies Theorem \ref{p-adic}. We will provide a crucial formula due to Stong in Section \ref{Stong}, but our proof is slightly different from the original one, as we use direct counting of Young diagrams. Stong's formula will be used in proving Proposition \ref{main} in Section \ref{mainproof}. Finally, in Section \ref{permproof}, we will provide another proof of Lemma \ref{perm}, an influential result of Shepp and Lloyd, which states that the number of length-$d$ cycles of a random permutation in $S_{n}$ asymptotically follows the Poisson distribution with mean $1/d$ when $n$ is large, and that these counts are asymptotically independent for finitely many different choices of $d$. It is interesting that our proof is entirely combinatorial, while the original proof was probabilistic. Since the result is quite popular in the literature, our alternative proof of Lemma \ref{perm} may be known to experts, although we were unable to locate it.
\
\subsection{Acknowledgment} We thank Yifeng Huang, Nathan Kaplan, and Jeff Lagarias for helpful conversations. G. Cheong was supported by NSF grant DMS-1162181 and the Korea Institute for Advanced Study for his visits to the institution regarding this research. M. Yu was supported by a KIAS Individual Grant (SP075201) via the Center for Mathematical Challenges at Korea Institute for Advanced Study. He was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1C1C1A01007604).
\
\section{Proposition \ref{main} implies Theorem \ref{JL}}\label{Landau}
\hspace{3mm} We already know that Proposition \ref{main} implies that
$$\lim_{q \ra \infty}\Prob_{A \in \Mat_{n}(\bF_{q})}
\left(\begin{array}{c}
f_{A}(t) \text{ has } k \text{ irreducible factors} \\
\text{counting with multiplicity}
\end{array}\right) \sim \frac{(\log n)^{k-1}}{(k-1)! n}$$
\
for large $n$ in the statement of Theorem \ref{JL}. It remains to justify the statement about distinct irreducible factors. We record two lemmas, well known to experts, to provide a complete exposition.
\
\begin{lem}\label{sf} Let $\bA^{n}(\bF_{q})$ be the set of all monic polynomials of degree $n$ in $\bF_{q}[t]$. For $n \geq 2$, we have
$$|\{f \in \bA^{n}(\bF_{q}): f(t) \text{ is square-free in } \bF_{q}[t]\}| = q^{n}(1 - q^{-1}).$$
\
In particular, we have
$$\lim_{q \ra \infty}\Prob_{f \in \bA^{n}(\bF_{q})}(f(t) \text{ is square-free in } \bF_{q}[t]) = 1.$$
\end{lem}
\
\begin{proof} This is a well-known fact in the literature. For a proof, see Lemma 3.4 of \cite{CWZ}. A more combinatorial proof using partitions is also available (e.g., Proposition 5.9 in \cite{VW} with $a = 2$).
\end{proof}
\
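The count $q^{n} - q^{n-1}$ in Lemma \ref{sf} can be checked independently by exhaustion over small prime fields (our own sketch, not part of the proof; polynomials are coefficient lists over $\bF_{p}$, lowest degree first):

```python
from itertools import product

def polymul(a, b, p):
    """Multiply coefficient lists (lowest degree first) over F_p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def divides(g, f, p):
    """True iff the monic polynomial g divides f over F_p (long division)."""
    r = list(f)
    while r and r[-1] == 0:
        r.pop()
    while len(r) >= len(g):
        c, shift = r[-1], len(r) - len(g)
        for i, y in enumerate(g):
            r[shift + i] = (r[shift + i] - c * y) % p
        while r and r[-1] == 0:
            r.pop()
    return not r

def count_squarefree(p, n):
    """Count monic square-free degree-n polynomials over F_p by brute force:
    f is square-free iff no monic g with deg g >= 1 satisfies g^2 | f."""
    squares = [polymul(list(g) + [1], list(g) + [1], p)
               for d in range(1, n // 2 + 1)
               for g in product(range(p), repeat=d)]
    return sum(1 for low in product(range(p), repeat=n)
               if not any(divides(s, list(low) + [1], p) for s in squares))

for p, n in [(2, 3), (2, 4), (3, 2), (3, 3)]:
    assert count_squarefree(p, n) == p ** n - p ** (n - 1)
print("matches q^n - q^(n-1)")
```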
\begin{lem}\label{sfmat} For any $n \in \bZ_{\geq 1}$, we have
$$\lim_{q \ra \infty}\Prob_{A \in \Mat_{n}(\bF_{q})}(f_{A}(t) \text{ is square-free in } \bF_{q}[t]) = 1.$$
\end{lem}
\
\begin{proof} Fix any monic square-free polynomial
$$f(t) = P_{1}(t) \cdots P_{r}(t),$$
\
where $P_{i}(t)$ are distinct monic irreducible polynomials in $\bF_{q}[t]$. By Theorem 2 in \cite{Rei}, the number of $A \in \Mat_{n}(\bF_{q})$ such that $f_{A}(t) = f(t)$ is $$q^{n^{2} - n} \cdot \frac{(1 - q^{-1})(1 - q^{-2}) \cdots (1 - q^{-n})}{(1 - q^{-\deg(P_{1})}) (1 - q^{-\deg(P_{2})}) \cdots (1 - q^{-\deg(P_{r})})} \geq q^{n^{2} - n} \cdot \frac{(1 - q^{-1})(1 - q^{-2}) \cdots (1 - q^{-n})}{(1 - q^{-n})^{r}}.$$
\
For $n = 1$, the map $A \mapsto f_{A}(t)$ is bijective, and thus letting $q \ra \infty$, we are done. For $n \geq 2$, by Lemma \ref{sf}, we have
$$\Prob_{A \in \Mat_{n}(\bF_{q})}(f_{A}(t) \text{ is square-free in } \bF_{q}[t]) \geq (1 - q^{-1}) \cdot \frac{(1 - q^{-1})(1 - q^{-2}) \cdots (1 - q^{-n})}{(1 - q^{-n})^{r}},$$
\
so letting $q \ra \infty$ gives the result.
\end{proof}
\
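Reiner's count used in the proof above can be verified exhaustively for $n = 2$ over a small prime field: a $2 \times 2$ matrix has characteristic polynomial $t^{2} - (\mathrm{tr}\,A)t + \det A$, and the factor degrees of a square-free monic quadratic over $\bF_{p}$ are read off from its roots. A brute-force sketch (ours, not part of the proof):

```python
from itertools import product
from fractions import Fraction

def reiner_count(q, n, degs):
    """Reiner's formula: #{A in Mat_n(F_q) : f_A = f} for a square-free monic f
    whose irreducible factors have the given degrees."""
    out = Fraction(q ** (n * n - n))
    for i in range(1, n + 1):
        out *= 1 - Fraction(1, q ** i)
    for d in degs:
        out /= 1 - Fraction(1, q ** d)
    return out

p = 3
counts = {}  # (trace, det) -> number of matrices in Mat_2(F_3)
for a, b, c, d in product(range(p), repeat=4):
    key = ((a + d) % p, (a * d - b * c) % p)
    counts[key] = counts.get(key, 0) + 1

for (tr, det), m in counts.items():
    roots = [t for t in range(p) if (t * t - tr * t + det) % p == 0]
    if len(roots) == 2:
        assert m == reiner_count(p, 2, (1, 1))  # two distinct linear factors
    elif len(roots) == 0:
        assert m == reiner_count(p, 2, (2,))    # irreducible quadratic
    # len(roots) == 1: double root, f not square-free, formula not applied
print("Reiner's count verified for Mat_2(F_3)")
```

For $p = 3$ the formula gives $6$ matrices per irreducible quadratic and $12$ per split square-free quadratic, consistent with conjugacy-class sizes in $\GL_{2}(\bF_{3})$.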
\begin{proof}[Proof that Proposition \ref{main} implies Theorem \ref{JL}] Again, we only need to show the first identity in the statement. By Lemma \ref{sfmat}, we have
\begin{align*}
\lim_{q \ra \infty}\Prob_{A \in \Mat_{n}(\bF_{q})}
\left(\begin{array}{c}
f_{A}(t) \text{ has } k \text{ distinct} \\
\text{irreducible factors}
\end{array}\right) &= \lim_{q \ra \infty}\Prob_{A \in \Mat_{n}(\bF_{q})}
\left(\begin{array}{c}
f_{A}(t) \text{ is square-free in } \bF_{q}[t] \text{ and}\\
\text{has } k \text{ irreducible factors} \\
\text{counting with multiplicity}
\end{array}\right) \\
&= \lim_{q \ra \infty}\Prob_{A \in \Mat_{n}(\bF_{q})}
\left(\begin{array}{c}
f_{A}(t) \text{ has } k \text{ irreducible factors} \\
\text{counting with multiplicity}
\end{array}\right),
\end{align*}
\
because the number of monic non-square-free polynomials of degree $n$ is negligible as $q \ra \infty$.
\end{proof}
\
\section{Proposition \ref{main} implies Theorem \ref{p-adic}}\label{Haar}
\subsection{Useful lemmas} The following result, due to Shepp and Lloyd, will be crucial in applying Proposition \ref{main} to obtain Theorem \ref{p-adic}. In \cite{SL}, Shepp and Lloyd obtained the result by showing that the characteristic function of the joint distribution of the numbers of cycles of fixed lengths $d_{1}, \dots, d_{r}$ of a random permutation converges to the characteristic function of the distribution given by independent Poisson random variables with means $1/d_{1}, \dots, 1/d_{r}$, and then applying L\'evy's theorem. We will provide our own combinatorial proof of this result in Section \ref{permproof}.
\
\begin{notn}
For any permutation $\sigma \in S_{n}$, we denote by $m_{d}(\sigma)$ the number of $d$-cycles in the cycle decomposition of $\sigma$.
\end{notn}
\
\begin{lem}[cf. p.342 in \cite{SL}]\label{perm} Fix $r \in \bZ_{\geq 0}$. Given distinct $d_{1}, \dots, d_{r} \in \bZ_{\geq 1}$ and not necessarily distinct $k_{1}, \dots, k_{r} \in \bZ_{\geq 0}$, we have
$$\lim_{n \ra \infty}\Prob_{\sigma \in S_{n}}(m_{d_{j}}(\sigma) = k_{j} \text{ for } 1 \leq j \leq r) = \frac{e^{-1/d_{1}}(1/d_{1})^{k_{1}}}{k_{1}!} \cdots \frac{e^{-1/d_{r}}(1/d_{r})^{k_{r}}}{k_{r}!},$$
\
which means that the numbers of cycles of lengths $d_{1}, \dots, d_{r}$ of a random permutation of $n$ letters are asymptotically given by independent Poisson random variables with means $1/d_{1}, \dots, 1/d_{r}$ when $n$ is large.
\end{lem}
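Lemma \ref{perm} can be illustrated numerically by exact enumeration: already at $n = 8$, the probabilities are close to their Poisson limits. The following Python sketch (our illustration, not part of the proof) checks a few cases:

```python
import math
from itertools import permutations

def cycle_counts(perm):
    """m_d(perm): number of d-cycles in the cycle decomposition, for each d."""
    seen, counts = [False] * len(perm), {}
    for i in range(len(perm)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = perm[j], length + 1
            counts[length] = counts.get(length, 0) + 1
    return counts

n = 8  # exact enumeration over all 8! = 40320 permutations
total = math.factorial(n)
for d, k in [(1, 0), (1, 1), (2, 1), (3, 0)]:
    hits = sum(1 for s in permutations(range(n)) if cycle_counts(s).get(d, 0) == k)
    poisson = math.exp(-1 / d) * (1 / d) ** k / math.factorial(k)
    assert abs(hits / total - poisson) < 0.02  # already close at n = 8
```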
\
\hspace{3mm} We need the following lemma to deduce Theorem \ref{p-adic} from Proposition \ref{main}. This lemma contains some ideas used for proving Theorem C of \cite{CH}, but the proof here is simpler, since we only need a special case of their argument.
\
\begin{lem}\label{red} Let $P_{1}, \dots, P_{r} \in \bZ_{p}[t]$ be any monic polynomials whose images in $\bF_{p}[t]$ under the mod $p$ reduction are distinct irreducible polynomials. Then
$$\Prob_{A \in \Mat_{n}(\bZ_{p})}\left(\begin{array}{c}
\coker(P_{j}(A)) = 0 \\
\text{ for } 1 \leq j \leq r
\end{array}\right) = \Prob_{A \in \Mat_{n}(\bF_{p})}\left(\begin{array}{c}
A[\bar{P}_{j}^{\infty}] = 0 \\
\text{ for } 1 \leq j \leq r
\end{array}\right),$$
\
where $A[\bar{P}_{j}^{\infty}]$ means the $\bar{P}_{j}$-part of the $\bF_{p}[t]$-module structure given by the matrix action $A \acts \bF_{p}^{n}$.
\end{lem}
\begin{proof} First, note that for any $A \in \Mat_{n}(\bF_{p})$, we have $\dim_{\bF_{p}}\ker(\bar{P}_{j}(A)) = \dim_{\bF_{p}}\coker(\bar{P}_{j}(A))$ and $\ker(\bar{P}_{j}(A)) = A[\bar{P}_{j}^{\infty}]/\bar{P}_{j}(t)A[\bar{P}_{j}^{\infty}]$, as $\bF_{p}[t]$-modules where the action of $t$ is given by multiplying the matrix $A$ on the left. Hence, the following are equivalent:
\
\begin{itemize}
\item $A[\bar{P}_{j}^{\infty}] = 0$;
\item $\ker(\bar{P}_{j}(A)) = 0$;
\item $\coker(\bar{P}_{j}(A)) = 0$,
\end{itemize}
\
so it is enough to show that
$$\Prob_{A \in \Mat_{n}(\bZ_{p})}\left(\begin{array}{c}
\coker(P_{j}(A)) = 0 \\
\text{ for } 1 \leq j \leq r
\end{array}\right) = \Prob_{A \in \Mat_{n}(\bF_{p})}\left(\begin{array}{c}
\coker(\bar{P}_{j}(A)) = 0 \\
\text{ for } 1 \leq j \leq r
\end{array}\right).$$
\
For any $B \in \Mat_{n}(\bZ_{p})$, the right exactness of $(-) \otimes_{\bZ_{p}} \bF_{p}$ implies that $$\coker(B) \otimes_{\bZ_{p}} \bF_{p} \simeq \coker(\bar{B}),$$ where $\bar{B} \in \Mat_{n}(\bF_{p})$ is the reduction of $B$ mod $p$. Hence, by Nakayama's lemma, we have $\coker(B) = 0$ if and only if $\coker(\bar{B}) = 0$. Thus, if we consider the mod $p$ reduction map $\pi : \Mat_{n}(\bZ_{p}) \tra \Mat_{n}(\bF_{p})$, we have $$\pi^{-1}\left\{\begin{array}{c}
A \in \Mat_{n}(\bF_{p}) : \coker(\bar{P}_{j}(A)) = 0 \\
\text{ for } 1 \leq j \leq r
\end{array}\right\} = \left\{\begin{array}{c}
A \in \Mat_{n}(\bZ_{p}) : \coker(P_{j}(A)) = 0 \\
\text{ for } 1 \leq j \leq r
\end{array}\right\}.$$
\
Since each fiber under $\pi$ has the same measure $1/p^{n^{2}}$, this finishes the proof.
\end{proof}
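The equal-fiber argument admits a finite analogue that can be checked by brute force: replacing $\bZ_{p}$ by $\bZ/p^{2}$, the reduction map $\Mat_{2}(\bZ/p^{2}) \tra \Mat_{2}(\bZ/p)$ has fibers of equal size $p^{4}$, so the ``cokernel vanishes'' events have equal probability upstairs and downstairs. A Python sketch (our finite toy model, with $p = 3$; not part of the proof):

```python
from itertools import product

# Over Z/p^2, coker(A) = 0 iff A is invertible iff the mod-p reduction of A
# is invertible, and each fiber of the reduction map has size p^4.
p = 3

def invertible_mod_p(a, b, c, d):
    return (a * d - b * c) % p != 0

upstairs = sum(invertible_mod_p(a % p, b % p, c % p, d % p)
               for a, b, c, d in product(range(p * p), repeat=4))
downstairs = sum(invertible_mod_p(a, b, c, d)
                 for a, b, c, d in product(range(p), repeat=4))

# equal probabilities: upstairs / p^8 == downstairs / p^4
assert upstairs == downstairs * p ** 4
```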
\
\subsection{Proof of Theorem \ref{p-adic} given Proposition \ref{main}} We now prove Theorem \ref{p-adic}, assuming Proposition \ref{main}. Proposition \ref{main} will be proven in Section \ref{mainproof}.
\
\begin{proof}[Proof of Theorem \ref{p-adic} given Proposition \ref{main}] Let $\mc{M}(q, d)$ be the set of degree $d$ monic irreducible polynomials in $\bF_{q}[t]$, so that $M(q, d) = |\mc{M}(q, d)|$. By Lemma \ref{red}, applied to monic lifts in $\bZ_{p}[t]$ of the polynomials in $\bigsqcup_{j=1}^{r}\mc{M}(p, d_{j})$, we have
\begin{align*}\Prob_{A \in \Mat_{n}(\bZ_{p})}
\left(\begin{array}{c}
\coker(P(A)) = 0 \\
\text{ for all } P \in \bigsqcup_{j=1}^{r}\mc{M}(p, d_{j})
\end{array}\right) &= \Prob_{A \in \Mat_{n}(\bF_{p})}
\left(\begin{array}{c}
A[P^{\infty}] = 0 \\
\text{ for all } P \in \bigsqcup_{j=1}^{r}\mc{M}(p, d_{j})
\end{array}\right) \\
&= \Prob_{A \in \Mat_{n}(\bF_{p})}
\left(\begin{array}{c}
f_{A}(t) \text{ has no irreducible factors of} \\
\text{degrees } d_{1}, \dots, d_{r}
\end{array}\right).
\end{align*}
\
Hence, we may apply Proposition \ref{main} and Lemma \ref{perm} to finish the proof.
\end{proof}
\
\section{Stong's formula}\label{Stong}
\subsection{Set-up} Let $R$ be a PID, and say $\mf{m}$ is a maximal ideal of $R$ such that $R/\mf{m} = \bF_{q}$. Denote by $\mc{P}$ the set of all partitions including the empty partition $\es$. Then any finite length (or equivalently, finite size) $\mf{m}^{\infty}$-torsion $R$-module is isomorphic to
$$H_{\mf{m},\ld}^{R} := R/\mf{m}^{\ld_{1}} \op \cdots \op R/\mf{m}^{\ld_{l}},$$
\
for a unique partition $\ld = (\ld_{1}, \dots, \ld_{l}) \in \mc{P}$. We will always assume $\ld_{1} \geq \cdots \geq \ld_{l} > 0$, and we write $|\ld| := \ld_{1} + \cdots + \ld_{l}$. The case $l = 0$ will correspond to the empty partition $\ld = \es$. Note that
$$|H_{\mf{m},\ld}^{R}| = q^{\ld_{1} + \cdots + \ld_{l}} = q^{|\ld|}.$$
\
We will write $H_{\mf{m},\ld} := H_{\mf{m},\ld}^{R}$ if the meaning of $R$ is evident from the context. Denote by $m_{d}(\ld)$ the number of parts of size $d$ in $\ld$, and define
$$n(\ld) := 0 \cdot \ld_{1} + 1 \cdot \ld_{2} + 2 \cdot \ld_{3} + \cdots + (l-1) \cdot \ld_{l}.$$
\
\hspace{3mm} The number $|\Aut_{R}(H_{\mf{m},\ld}^{R})|$ of automorphisms of $H_{\mf{m},\ld}^{R}$ can be computed by noting the fact that
$$|\Aut_{R}(H_{\mf{m},\ld}^{R})| = |\Aut_{R_{\mf{m}}}(H_{\mf{m}R_{\mf{m}},\ld}^{R_{\mf{m}}})|$$
\
and the following formula due to Macdonald.
\
\begin{lem}[(1.6) on p.181 in \cite{Mac}]\label{Mac} Let $(R, \mf{m})$ be a DVR (discrete valuation ring) with the finite residue field $R/\mf{m} = \bF_{q}$. Then we have
$$|\Aut_{R}(H_{\mf{m},\ld})| = q^{|\ld|+2n(\ld)} \prod_{d = 1}^{\infty}\prod_{i=1}^{m_{d}(\ld)}(1 - q^{-i}).$$
\end{lem}
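Macdonald's formula can be verified by brute force for small modules, taking $R = \bZ_{p}$ and $\mf{m} = (p)$, so that $H_{\mf{m},\ld} = \bZ/p^{\ld_{1}} \op \cdots \op \bZ/p^{\ld_{l}}$. The following Python sketch (illustrative only; the test cases are our choices) counts automorphisms by enumerating generator images:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

def aut_brute(lam, p):
    """|Aut| of Z/p^{lam_1} + ... + Z/p^{lam_l}, by enumerating generator images."""
    mods = [p ** l for l in lam]
    elems = list(product(*[range(m) for m in mods]))
    add = lambda x, y: tuple((u + v) % m for u, v, m in zip(x, y, mods))
    smul = lambda k, x: tuple((k * u) % m for u, m in zip(x, mods))
    zero = (0,) * len(mods)
    count = 0
    for gens in product(elems, repeat=len(mods)):
        # the image of the i-th generator must be killed by p^{lam_i}
        if any(smul(m, g) != zero for m, g in zip(mods, gens)):
            continue
        image = set()
        for x in elems:
            y = zero
            for xi, g in zip(x, gens):
                y = add(y, smul(xi, g))
            image.add(y)
        if len(image) == len(elems):  # the endomorphism is bijective
            count += 1
    return count

def aut_macdonald(lam, q):
    """Macdonald: q^{|lam| + 2 n(lam)} * prod_d prod_{i<=m_d(lam)} (1 - q^{-i})."""
    n_lam = sum(i * part for i, part in enumerate(lam))
    val = Fraction(q) ** (sum(lam) + 2 * n_lam)
    for mult in Counter(lam).values():
        for i in range(1, mult + 1):
            val *= 1 - Fraction(1, q ** i)
    return val

for lam, p in [((1, 1), 2), ((2,), 3), ((2, 1), 2)]:
    assert aut_brute(lam, p) == aut_macdonald(lam, p)
```

For example, $\ld = (2,1)$ and $p = 2$ recovers the familiar $|\Aut(\bZ/4 \op \bZ/2)| = 8$.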
\
\hspace{3mm} To the best of our knowledge, the following result is due to Stong in \cite{Sto}, but we rephrase Stong's result in a more general setting and provide a slightly different proof. We will essentially go through some key ideas of Lemma 6 in \cite{Sto}, which relies on the Fine--Herstein theorem (namely, that the number of $n \times n$ nilpotent matrices over $\bF_{q}$ is $q^{n(n-1)}$), but unlike the reference, we will avoid the use of M\"obius inversion along with exponentiation, logarithm, differentiation, and integration of power series by counting partitions (i.e., Young diagrams) instead.
\
\begin{lem}[cf. Proposition 19 in \cite{Sto}]\label{key} Let $R$ be any Dedekind domain with a maximal ideal $\mf{m}$ such that $R/\mf{m} = \bF_{q}$. Then
$$\sum_{M \in \Mod_{R_{\mf{m}}}^{<\infty}} \frac{u^{q\dim_{\bF_q}(M)}}{|\Aut_{R}(M)|} = \prod_{i=1}^{\infty}\frac{1}{1 - q^{-i}u^{q}},$$
\
where $u$ is a complex variable with $|u| < q^{1/q}$. Equivalently, we have
$$\sum_{\ld \in \mc{P}} \frac{y^{|\ld|}}{|\Aut_{R}(H_{\mf{m},\ld})|} = \prod_{i=1}^{\infty}\frac{1}{1 - q^{-i}y},$$
\
where $y$ is a complex variable with $|y| < q$.
\end{lem}
\begin{proof} As explained in the beginning of this section, finite $\mf{m}^{\infty}$-torsion $R$-modules are parametrized by partitions. Hence, taking $y = u^{q}$, we see that the two given statements are equivalent. We will prove the second statement. Applying Lemma \ref{Mac}, this reduces our problem to the case where $R = \bF_{q}[t]$ and $\mf{m} = (t)$ by replacing $R_{\mf{m}}$ with $\bF_{q}[t]_{(t)}$ or $\bF_{q}\llb t \rrb$. This reduction lets us rewrite the left-hand side of the desired identity as
$$\sum_{n = 0}^{\infty}\sum_{\ld \vdash n}\frac{y^{n}}{|\Stab_{\GL_{n}(\bF_{q})}(J_{0,\ld})|},$$
\
where $J_{0,\ld}$ is the Jordan canonical form consisting of Jordan blocks with eigenvalue $0$ whose sizes are the parts of the partition $\ld$, and the action $\GL_{n}(\bF_{q}) \acts \Mat_{n}(\bF_{q})$ is given by conjugation. By the orbit-stabilizer theorem, we have
$$\frac{1}{|\Stab_{\GL_{n}(\bF_{q})}(J_{0,\ld})|} = \frac{|\GL_{n}(\bF_{q}) \cdot J_{0,\ld}|}{|\GL_{n}(\bF_{q})|}.$$
\
Recall that $n \times n$ matrices all of whose eigenvalues are $0$ are precisely nilpotent matrices. Thus, we have
\begin{align*}
\sum_{n = 0}^{\infty}\sum_{\ld \vdash n}\frac{y^{n}}{|\Stab_{\GL_{n}(\bF_{q})}(J_{0,\ld})|} &= \sum_{n = 0}^{\infty}\frac{|\Nil_{n}(\bF_{q})|y^{n}}{|\GL_{n}(\bF_{q})|} \\
&= 1 + \sum_{n = 1}^{\infty}\frac{q^{n(n-1)}y^{n}}{(q^{n} - 1)(q^{n} - q) \cdots (q^{n} - q^{n-1})} \\
&= 1 + \sum_{n = 1}^{\infty}\frac{(q^{-1}y)^{n}}{(1 - q^{-1})(1 - q^{-2}) \cdots (1 - q^{-n})} \\
&= \sum_{n = 0}^{\infty}(q^{-1}y)^{n} \left(\sum_{j_{1}=0}^{\infty} q^{-j_{1}}\right) \left(\sum_{j_{2}=0}^{\infty} q^{-2j_{2}}\right) \cdots \left(\sum_{j_{n}=0}^{\infty} q^{-nj_{n}}\right) \\
&= \sum_{n = 0}^{\infty}(q^{-1}y)^{n} \sum_{j_{1}=0}^{\infty} \sum_{j_{2}=0}^{\infty} \cdots \sum_{j_{n}=0}^{\infty} q^{-j_{1} - 2j_{2} - \cdots - nj_{n}} \\
&= \prod_{i=0}^{\infty} (1 + q^{-i}(q^{-1}y) + q^{-2i}(q^{-1}y)^{2} + \cdots) \\
&= \prod_{i=0}^{\infty} \frac{1}{1 - q^{-(i+1)}y} \\
&= \prod_{i=1}^{\infty} \frac{1}{1 - q^{-i}y},
\end{align*}
\
where for the second equality we used the elementary identity $|\GL_{n}(\bF_{q})| = (q^{n} - 1)(q^{n} - q) \cdots (q^{n} - q^{n-1})$ and the Fine--Herstein theorem (e.g., Theorem 1 of \cite{FH}), which gives the number $|\Nil_{n}(\bF_{q})|$ of nilpotent matrices in $\Mat_{n}(\bF_{q})$:
$$|\Nil_{n}(\bF_{q})| = q^{n(n-1)}.$$
\
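The Fine--Herstein count is easy to confirm exhaustively for $n = 2$, where a matrix is nilpotent if and only if it squares to zero (by Cayley--Hamilton). A Python sketch (our check, for a few small primes; not part of the proof):

```python
from itertools import product

# Exhaustive check of |Nil_2(F_p)| = p^{2(2-1)} = p^2 for small primes p.
for p in (2, 3, 5):
    nil = 0
    for a, b, c, d in product(range(p), repeat=4):
        # [[a,b],[c,d]]^2, entry by entry, mod p
        sq = ((a * a + b * c) % p, (a * b + b * d) % p,
              (c * a + d * c) % p, (c * b + d * d) % p)
        nil += sq == (0, 0, 0, 0)
    assert nil == p ** (2 * 1)  # q^{n(n-1)} with n = 2
```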
For the sixth equality, we have used the fact that the coefficient of $Y^{n}$ of the following product
$$\prod_{i=0}^{\infty}(1 + X^{i}Y + X^{2i}Y^{2} + \cdots)$$
\
is equal to
$$\sum_{j_{1}=0}^{\infty} \sum_{j_{2}=0}^{\infty} \cdots \sum_{j_{n}=0}^{\infty} X^{j_{1} + 2j_{2} + \cdots + nj_{n}},$$
\
where $X$ and $Y$ are complex numbers varying within the open unit disk centered at $0$ (i.e., $|X|, |Y| < 1$), so that we may take $X = q^{-1}$ and $Y = q^{-1}y$ with $|y| < q$ in the sixth equality of the chain above. Indeed, when we expand the given product, we have
\begin{align*}
\prod_{i=0}^{\infty}(1 + X^{i}Y + X^{2i}Y^{2} + \cdots) &= \sum_{
m_{0}, m_{1}, m_{2}, \dots \geq 0} X^{0 \cdot m_{0} + 1 \cdot m_{1} + 2 \cdot m_{2} + \cdots} Y^{m_{0} + m_{1} + m_{2} + \cdots}\\
&= \sum_{n=0}^{\infty} Y^{n} \sum_{\substack{
m_{0}, m_{1}, m_{2}, \dots \geq 0 \\
m_{0} + m_{1} + m_{2} + \cdots = n}} X^{m_{1} + 2m_{2} + \cdots},
\end{align*}
\
so it is enough to show that
$$\sum_{\substack{
m_{0}, m_{1}, m_{2}, \dots \geq 0 \\
m_{0} + m_{1} + m_{2} + \cdots = n}} X^{m_{1} + 2m_{2} + \cdots} = \sum_{j_{1}=0}^{\infty} \sum_{j_{2}=0}^{\infty} \cdots \sum_{j_{n}=0}^{\infty} X^{j_{1} + 2j_{2} + \cdots + nj_{n}}.$$
\
Note that we have a bijection
$$\{(m_{0}, m_{1}, m_{2}, \dots) \in \bZ_{\geq 0}^{\infty} : m_{0} + m_{1} + m_{2} + \cdots = n \} \lra \{(m_{1}, m_{2}, \dots) \in \bZ_{\geq 0}^{\infty} : m_{1} + m_{2} + \cdots \leq n \}$$
\
given by $(m_{0}, m_{1}, m_{2}, \dots) \mapsto (m_{1}, m_{2}, \dots)$. This reduces our problem to the following:
$$\sum_{\substack{
m_{1}, m_{2}, \dots \geq 0 \\
m_{1} + m_{2} + \cdots \leq n}} X^{m_{1} + 2m_{2} + \cdots} = \sum_{j_{1}=0}^{\infty} \sum_{j_{2}=0}^{\infty} \cdots \sum_{j_{n}=0}^{\infty} X^{j_{1} + 2j_{2} + \cdots + nj_{n}}.$$
\
If $n = 0$, both sides are $1$, so let $n \geq 1$. Let $A_{n}$ be the set of partitions with at most $n$ parts (i.e., whose Young diagrams have at most $n$ rows) and $B_{n}$ the set of partitions all of whose parts are $\leq n$ (i.e., whose Young diagrams have at most $n$ columns). Then conjugation (transposing Young diagrams) gives a bijection $A_{n} \lra B_{n}$, so in particular, we have
$$\sum_{\ld \in A_{n}}X^{|\ld|} = \sum_{\ld \in B_{n}}X^{|\ld|}.$$
\
Now, noting that
$$\sum_{\ld \in A_{n}}X^{|\ld|} = \sum_{\substack{
m_{1}, m_{2}, \dots \geq 0 \\
m_{1} + m_{2} + \cdots \leq n}} X^{m_{1} + 2m_{2} + \cdots}$$
\
and
$$\sum_{\ld \in B_{n}}X^{|\ld|} = \sum_{j_{1}=0}^{\infty} \sum_{j_{2}=0}^{\infty} \cdots \sum_{j_{n}=0}^{\infty} X^{j_{1} + 2j_{2} + \cdots + nj_{n}},$$
\
we finish the proof.
\end{proof}
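The coefficient identity extracted in this proof, $\sum_{\ld \vdash n} |\Aut(H_{\ld})|^{-1} = q^{n(n-1)}/|\GL_{n}(\bF_{q})|$, can be confirmed for small $n$ and $q$ with exact arithmetic. A Python sketch (illustrative only, not part of the argument):

```python
from collections import Counter
from fractions import Fraction

def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def aut_order(lam, q):
    """Macdonald's formula for |Aut(H_lam)|: q^{|lam|+2n(lam)} prod (1-q^{-i})."""
    n_lam = sum(i * part for i, part in enumerate(lam))
    val = Fraction(q) ** (sum(lam) + 2 * n_lam)
    for mult in Counter(lam).values():
        for i in range(1, mult + 1):
            val *= 1 - Fraction(1, q ** i)
    return val

def gl_order(n, q):
    val = 1
    for i in range(n):
        val *= q ** n - q ** i
    return val

# coefficient of y^n in Stong's identity, as in the proof:
# sum over partitions of n of 1/|Aut(H_lam)| equals |Nil_n(F_q)|/|GL_n(F_q)|
for q in (2, 3):
    for n in range(1, 6):
        lhs = sum(1 / aut_order(lam, q) for lam in partitions(n))
        assert lhs == Fraction(q ** (n * (n - 1)), gl_order(n, q))
```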
\
\section{Proof of Proposition \ref{main}}\label{mainproof}
\hspace{3mm} For our proof of Proposition \ref{main}, we need to analyze polynomials that encode some information about random matrices in $\Mat_{n}(\bF_{q})$ and random permutations in $S_{n}$. Such a polynomial is called a ``cycle index''.
\
\subsection{Cycle index} Given any permutation group $G \leqslant S_{n}$, we define the \textbf{cycle index} of $G$ as the following polynomial:
$$\mc{Z}(G, \bs{x}) := \frac{1}{|G|}\sum_{g \in G}x_{1}^{m_{1}(g)} \cdots x_{n}^{m_{n}(g)},$$
\
where we recall that $m_{d}(g)$ means the number of $d$-cycles in the cycle decomposition of $g$. Cycle indices have been used to solve various enumeration problems; to the best of our knowledge, they were introduced independently by Redfield \cite{Red} and P\'olya \cite{Pol}. (P\'olya's paper is in German, but there is an English translation \cite{PR} by Read.) We will use the cycle index $\mc{Z}(S_{n}, \bs{x})$ of the full symmetric group $S_{n}$. Note that $\sigma, \tau \in S_{n}$ are conjugate in $S_{n}$ if and only if $x_{1}^{m_{1}(\sigma)} \cdots x_{n}^{m_{n}(\sigma)} = x_{1}^{m_{1}(\tau)} \cdots x_{n}^{m_{n}(\tau)}$, so $\mc{Z}(S_{n}, \bs{x})$ captures information about the conjugation action $S_{n} \acts S_{n}$.
\
\hspace{3mm} We will also make use of the following polynomial:
$$\mc{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x}) := \frac{1}{|\GL_{n}(\bF_{q})|}\sum_{A \in \Mat_{n}(\bF_{q})}\prod_{P \in |\bA^{1}_{\bF_{q}}|}x_{P, \mu_{P}(A)},$$
\
where $\mu_{P}(A) = (\mu_{1}, \dots, \mu_{l})$ is the partition defined by the $P$-part of the $\bF_{q}[t]$-module structure defined by the matrix multiplication $A \acts \bF_{q}^{n}$:
$$A[P^{\infty}] \simeq \bF_{q}[t]/(P)^{\mu_{1}} \op \cdots \op \bF_{q}[t]/(P)^{\mu_{l}}.$$
\
We denote by $|\bA^{1}_{\bF_{q}}|$ the set of all monic irreducible polynomials in $\bF_{q}[t]$. For $P \in |\bA^{1}_{\bF_{q}}|$ and a nonempty partition $\ld$, the notation $x_{P,\ld}$ means a formal variable associated to the pair $(P, \ld)$. For the empty partition, we define $x_{P,\es} := 1$. We call $\mc{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})$ the \textbf{cycle index} of the conjugation action $\GL_{n}(\bF_{q}) \acts \Mat_{n}(\bF_{q})$. The terminology makes sense because each monomial $\prod_{P \in |\bA^{1}_{\bF_{q}}|}x_{P, \mu_{P}(A)}$ is determined by an orbit of this action: two matrices $A$ and $B$ lie in the same orbit if and only if $\prod_{P \in |\bA^{1}_{\bF_{q}}|}x_{P, \mu_{P}(A)} = \prod_{P \in |\bA^{1}_{\bF_{q}}|}x_{P, \mu_{P}(B)}$. The notion of $\mc{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})$ was introduced by Stong \cite{Sto}, generalizing a similar definition for the conjugation action $\GL_{n}(\bF_{q}) \acts \GL_{n}(\bF_{q})$ introduced by Kung \cite{Kun}. We will use the following factorization results for the generating functions of the cycle indices.
\
\begin{lem}[p.14 of \cite{PR}]\label{fac1} In the ring $\bQ [\bs{x}] \llb u \rrb$ of formal power series in $u$ over the polynomial ring $\bQ [\bs{x}] = \bQ[x_{1},x_{2}, \dots]$, we have
$$\sum_{n=0}^{\infty}\mc{Z}(S_{n}, \bs{x}) u^{n} = \prod_{d=1}^{\infty}e^{x_{d}u^{d}/d}.$$
\end{lem}
\begin{proof} Given $\ld \vdash n$ (i.e., a partition of $n$), the number of permutations in $S_{n}$ with cycle type $\ld$ is
$$\frac{n!}{m_{1}(\ld)!1^{m_{1}(\ld)} \cdots m_{n}(\ld)!n^{m_{n}(\ld)}},$$
\
so
$$\mc{Z}(S_{n}, \bs{x}) = \sum_{\ld \vdash n}\frac{x_1^{m_{1}(\ld)} \cdots x_{n}^{m_{n}(\ld)}}{m_{1}(\ld)!1^{m_{1}(\ld)} \cdots m_{n}(\ld)!n^{m_{n}(\ld)}}.$$
\
Thus, we have
\begin{align*}
\sum_{n=0}^{\infty}\mc{Z}(S_{n}, \bs{x})u^{n} &= \sum_{n=0}^{\infty}\sum_{\ld \vdash n}\frac{x_1^{m_{1}(\ld)} \cdots x_{n}^{m_{n}(\ld)} u^{1 \cdot m_{1}(\ld)} \cdots u^{n \cdot m_{n}(\ld)}}{m_{1}(\ld)!1^{m_{1}(\ld)} \cdots m_{n}(\ld)!n^{m_{n}(\ld)}} \\
&= \sum_{\ld \in \mc{P}} \prod_{d=1}^{\infty} \frac{ (x_{d}u^{d}/d)^{m_{d}(\ld)} }{m_{d}(\ld)!} \\
&= \prod_{d=1}^{\infty}\sum_{m=0}^{\infty}\frac{(x_{d}u^{d}/d)^{m}}{m!} \\
&= \prod_{d=1}^{\infty}e^{x_{d}u^{d}/d},
\end{align*}
\
as desired.
\end{proof}
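The count of permutations with a given cycle type used above can be confirmed by exhaustive enumeration. A Python sketch (our check, with $n = 6$; not part of the proof):

```python
import math
from collections import Counter
from itertools import permutations

def cycle_type(perm):
    """Cycle type of a permutation of {0, ..., n-1}, as a non-increasing tuple."""
    seen, lengths = [False] * len(perm), []
    for i in range(len(perm)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = perm[j], length + 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

n = 6
observed = Counter(cycle_type(p) for p in permutations(range(n)))
for lam, cnt in observed.items():
    denom = 1
    for d, m_d in Counter(lam).items():
        denom *= math.factorial(m_d) * d ** m_d
    # the number of permutations of cycle type lam is n! / (m_1! 1^{m_1} ...)
    assert cnt * denom == math.factorial(n)
```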
\
\begin{lem}[Lemma 1 (2) in \cite{Sto}]\label{fac2} In the ring $\bQ [\bs{x}] \llb u \rrb$ of formal power series in $u$ over the polynomial ring $\bQ [\bs{x}] = \bQ[x_{P,\ld}]_{P \in |\bA^{1}_{\bF_{q}}|, \ld \in \mc{P} \sm \{\es\}}$, we have
$$\sum_{n=0}^{\infty}\mc{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})u^n = \prod_{P \in |\bA^{1}_{\bF_{q}}|} \sum_{\ld \in \mc{P}} \frac{x_{P,\ld}u^{|\ld|\deg(P)}}{|\Aut_{\bF_{q}[t]}(H_{P, \ld})|},$$
\
where $H_{P, \ld} := H_{(P),\ld}^{\bF_{q}[t]}$, following the notation defined in the beginning of Section \ref{Stong}.
\end{lem}
\
\begin{rmk} Lemma \ref{fac2} says that on the left-hand side, the coefficient of $u^{n}$ is given by
$$\sum_{|\ld^{(1)}|\deg(P_{1}) + \cdots + |\ld^{(s)}|\deg(P_{s}) = n}\frac{x_{P_{1}, \ld^{(1)}} \cdots x_{P_{s}, \ld^{(s)}}}{|\Aut_{\bF_{q}[t]}(H_{P_{1}, \ld^{(1)}})| \cdots |\Aut_{\bF_{q}[t]}(H_{P_{s}, \ld^{(s)}})|},$$
\
where the sum is over distinct $P_{1}, \dots, P_{s} \in |\bA^{1}_{\bF_{q}}|$ and not necessarily distinct $\ld^{(1)}, \dots, \ld^{(s)}$ satisfying the stipulated condition. This is a finite sum, so we may substitute arbitrary complex numbers for the variables $x_{P, \ld}$ with $\ld \neq \es$ in the infinite product in the statement of Lemma \ref{fac2} and obtain an identity in $\bC \llb u \rrb$. (Recall that $x_{P, \es} = 1$.)
\end{rmk}
\
\subsection{Connection between two cycle indices} We define $\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})$ to be the polynomial given by specializing $x_{P,\ld} = x_{\deg(P)}^{|\ld|}$ in the cycle index $\mc{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})$, which also makes sense for the empty partition because $x_{P,\es} = 1 = x_{\deg(P)}^{0}$. More explicitly, we have
\begin{align*}
\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x}) &= \frac{1}{|\GL_{n}(\bF_{q})|}\sum_{A \in \Mat_{n}(\bF_{q})}\prod_{P \in |\bA^{1}_{\bF_{q}}|}x_{\deg(P)}^{|\mu_{P}(A)|} \\
&= \frac{1}{|\GL_{n}(\bF_{q})|}\sum_{A \in \Mat_{n}(\bF_{q})}x_{1}^{m_{1}(A)} \cdots x_{n}^{m_{n}(A)},
\end{align*}
\
where $m_{d}(A)$ is the number of degree $d$ monic irreducible polynomials, counted with multiplicity, in $\bF_{q}[t]$ dividing the characteristic polynomial $f_{A}(t)$ of $A$. Note that only $x_{1}, \dots, x_{n}$ occur in $\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})$ because $f_A(t)$ has degree $n$ for $A \in \Mat_{n}(\bF_{q})$, so $f_A(t)$ is not divisible by any irreducible polynomial of degree $> n$. In the concluding remark of \cite{Sto}, Stong essentially observed that there is a close relationship between $\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})$ and $\mc{Z}(S_{n}, \bs{x})$ when $q$ is large. We rigorously formulate what he might have meant.
\
\begin{lem}[cf. ``Conclusion'' in \cite{Sto}]\label{conv} We have
$$\lim_{q \ra \infty}\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x}) = \mc{Z}(S_{n}, \bs{x}),$$
\
meaning that the coefficients of $\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})$ converge to the coefficients of $\mc{Z}(S_{n}, \bs{x})$ as $q \ra \infty$.
\end{lem}
\begin{proof} Let $x_{d} \in \bC$ such that $|x_{d}| \leq 1$ for any $d \in \bZ_{\geq 1}$. Applying Lemma \ref{fac2} and then Lemma \ref{key}, we have
\begin{align*}
\sum_{n=0}^{\infty}\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})u^{n} &= \prod_{P \in |\bA^{1}_{\bF_{q}}|} \sum_{\ld \in \mc{P}} \frac{(x_{\deg(P)}u^{\deg(P)})^{|\ld|}}{|\Aut_{\bF_{q}[t]}(H_{P, \ld})|} \\
&= \prod_{P \in |\bA^{1}_{\bF_{q}}|} \prod_{i=1}^{\infty}(1 - x_{\deg(P)}(q^{-i}u)^{\deg(P)})^{-1} \\
&= \prod_{d=1}^{\infty}\prod_{i=1}^{\infty}(1 - x_{d}(q^{-i}u)^{d})^{-M(q,d)},
\end{align*}
\
for $|u| < 1$, where $M(q, d)$ is the number of monic irreducible polynomials in $\bF_{q}[t]$ of degree $d$. Since $|x_{d}| \leq 1$ for all $d \geq 1$, we have
$$|\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})|^{1/n} \leq \left(\prod_{i=1}^{n}\frac{1}{1 - q^{-i}}\right)^{1/n} \leq \frac{1}{1 - q^{-1}},$$
\
so the radius of convergence of the power series $\sum_{n=0}^{\infty}\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})u^{n}$ in $u$ is at least $1 - q^{-1} > 0$. Since our statement only involves finitely many of the $x_{d}$, we may set all but finitely many of them to $0$, say $x_{d} = 0$ for all $d > m$ for a fixed $m \in \bZ_{\geq 1}$. This gives
$$\sum_{n=0}^{\infty}\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})u^{n} = \prod_{d=1}^{m}\prod_{i=1}^{\infty}(1 - x_{d}(q^{-i}u)^{d})^{-M(q,d)}.$$
\
In what follows, we will use the fact that
$$\lim_{q \ra \infty}\frac{M(q,d)}{q^{d}/d} = 1,$$
\
for any $d \geq 1$, which can be found as Theorem 2.2 in \cite{Ros}. (Note that $M(q, 1) = q$, so we do not need to take the limit to see this for $d = 1$.) Now, for $|u| < 1 - q^{-1}$, we have
\begin{align*}
\lim_{q \ra \infty}\sum_{n=0}^{\infty}\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})u^{n} &= \prod_{d=1}^{m}\lim_{q \ra \infty}\prod_{i=1}^{\infty}(1 - x_{d}(q^{-i}u)^{d})^{-M(q,d)} \\
&= \prod_{d=1}^{m}\lim_{q \ra \infty}\prod_{i=1}^{\infty}\left((1 - x_{d}(q^{-i}u)^{d})^{-q^{d}/d}\right)^{M(q,d)d/q^{d}} \\
&= \prod_{d=1}^{m}\prod_{i=1}^{\infty}\lim_{q \ra \infty}\left((1 - x_{d}(q^{-i}u)^{d})^{-q^{d}/d}\right)^{M(q,d)d/q^{d}} \\
&= \prod_{d=1}^{\infty}\lim_{q \ra \infty}(1 - x_{d}(q^{-1}u)^{d})^{-q^{d}/d} \\
&= \prod_{d=1}^{\infty}\left(\lim_{q \ra \infty}\left(1 - \frac{x_{d}u^{d}}{q^{d}}\right)^{- q^{d}/(x_{d}u^{d})}\right)^{(x_{d}u^{d})/d} \\
&= \prod_{d=1}^{\infty}e^{x_{d}u^{d}/d}.
\end{align*}
\
Applying Lemma \ref{fac1} to the last expression, we see that
$$\lim_{q \ra \infty}f_{q}(u) = f(u),$$
\
where
$$f_{q}(u) := \sum_{n=0}^{\infty}\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})u^{n}$$
\
and
$$f(u) := \sum_{n=0}^{\infty}\mc{Z}(S_{n}, \bs{x})u^{n}$$
\
are holomorphic functions in $u$ defined for $|u| < 1-q^{-1}$. Take any $0 < \epsilon < 1-q^{-1}$ and write $C_{\eps} := \{u \in \bC : |u| = \eps\}$. By the Cauchy integral formula, we have
$$\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x}) = \frac{1}{2\pi i}\varointctrclockwise_{z \in C_{\eps}}\frac{f_{q}(z)}{z^{n+1}}dz$$
\
and
$$\mc{Z}(S_{n}, \bs{x}) = \frac{1}{2\pi i}\varointctrclockwise_{z \in C_{\eps}}\frac{f(z)}{z^{n+1}}dz.$$
\
Thus, by the Dominated Convergence Theorem, we have
\begin{align*}
\lim_{q \ra\infty} \bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x}) &= \lim_{q \ra \infty} \frac{1}{2\pi i}\varointctrclockwise_{z \in C_{\eps}}\frac{f_{q}(z)}{z^{n+1}}dz \\
&= \frac{1}{2\pi i}\varointctrclockwise_{z \in C_{\eps}}\frac{f(z)}{z^{n+1}}dz \\
&= \mc{Z}(S_{n}, \bs{x}).
\end{align*}
\
Since $x_{1}, \dots, x_{m}$ are arbitrary subject to the restriction $|x_{d}| \leq 1$, this suffices to prove the desired convergence of the coefficients in $x_{1}, \dots, x_{m}$, by the Cauchy integral formula for several variables (e.g., Theorem 2.1.3 of \cite{Fie}) together with the Dominated Convergence Theorem.
\end{proof}
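Lemma \ref{conv} can be observed numerically in the smallest nontrivial case $n = 2$: the coefficient of $x_{1}^{2}$ in $\bs{Z}([\Mat_{2}/\GL_{2}](\bF_{p}), \bs{x})$ approaches the coefficient $1/2$ of $x_{1}^{2}$ in $\mc{Z}(S_{2}, \bs{x})$ as $p$ grows. A Python sketch (our brute-force illustration for odd primes, not part of the proof):

```python
from itertools import product

def x1_squared_coeff(p):
    """Coefficient of x_1^2 in Z([Mat_2/GL_2](F_p), x): the number of matrices
    with split characteristic polynomial, divided by |GL_2(F_p)|."""
    squares = {(t * t) % p for t in range(p)}  # squares in F_p, including 0
    split = 0
    for a, b, c, d in product(range(p), repeat=4):
        tr, det = (a + d) % p, (a * d - b * c) % p
        # for odd p, t^2 - tr*t + det splits over F_p iff its discriminant
        # tr^2 - 4*det is a square in F_p
        if (tr * tr - 4 * det) % p in squares:
            split += 1
    return split / ((p * p - 1) * (p * p - p))

# the corresponding coefficient of Z(S_2, x) is 1/2; the error shrinks with p
errors = [abs(x1_squared_coeff(p) - 0.5) for p in (3, 7, 13)]
assert errors[0] > errors[1] > errors[2]
```

The convergence is slow (the error decays roughly like $1/p$), consistent with the limit being taken coefficientwise.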
\
\subsection{Proof of Proposition \ref{main}} Again, Lemma \ref{conv} says that the coefficients of the polynomial
$$\bs{Z}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x}) = \frac{1}{|\GL_{n}(\bF_{q})|}\sum_{A \in \Mat_{n}(\bF_{q})}x_{1}^{m_{1}(A)} \cdots x_{n}^{m_{n}(A)} \in \bQ[x_{1}, \dots, x_{n}]$$
\
converge to the coefficients of the polynomial
$$\mc{Z}(S_{n}, \bs{x}) = \frac{1}{|S_{n}|}\sum_{\sigma \in S_{n}}x_{1}^{m_{1}(\sigma)} \cdots x_{n}^{m_{n}(\sigma)} \in \bQ[x_{1}, \dots, x_{n}]$$
\
as $q \ra \infty$, where the convergence takes place in $\bR$ (or $\bC$), although the limits lie in $\bQ$. This implies that, setting $x_{d,0} := 1$ and letting $x_{d,m}$ be formal variables for $d, m \in \bZ_{\geq 1}$, the coefficients of
$$\hat{\bs{Z}}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x}) := \frac{1}{|\GL_{n}(\bF_{q})|}\sum_{A \in \Mat_{n}(\bF_{q})}x_{1, m_{1}(A)} \cdots x_{n, m_{n}(A)} \in \bQ
\left[\begin{array}{c}
x_{1,1}, \dots, x_{1,n}, \\
\cdots, \cdots, \cdots \\
x_{n,1}, \dots, x_{n,n}
\end{array}\right]$$
\
converge to the coefficients of
$$\hat{\mc{Z}}(S_{n}, \bs{x}) := \frac{1}{|S_{n}|}\sum_{\sigma \in S_{n}}x_{1, m_{1}(\sigma)} \cdots x_{n, m_{n}(\sigma)} \in \bQ
\left[\begin{array}{c}
x_{1,1}, \dots, x_{1,n}, \\
\cdots, \cdots, \cdots \\
x_{n,1}, \dots, x_{n,n}
\end{array}\right].$$
\
To refer back to this fact later, we record it as follows.
\
\begin{lem}\label{conv'} As $q \ra \infty$, the coefficients of the polynomial $\hat{\bs{Z}}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x})$ converge to those of $\hat{\mc{Z}}(S_{n}, \bs{x})$. In particular, for any evaluation of the variables $x_{d,m}$ in $\bC$, we have
$$\lim_{q \ra \infty}\hat{\bs{Z}}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x}) = \hat{\mc{Z}}(S_{n}, \bs{x}).$$
\end{lem}
\
\hspace{3mm} We are now ready to prove Proposition \ref{main}:
\begin{proof}[Proof of Proposition \ref{main}] Recall the statement of Proposition \ref{main}; we will use the notation in it, as well as the notation introduced in this section. If we evaluate
\begin{itemize}
\item $x_{d_{1},k_{1}} = \cdots = x_{d_{r},k_{r}} = 1$,
\item $x_{d_{j},m} = 0$ for all $m \neq 0, k_{j}$ for $1 \leq j \leq r$, and
\item $x_{d,m} = 1$ for all the other cases from above,
\end{itemize}
\
then we have
$$\hat{\bs{Z}}([\Mat_{n}/\GL_{n}](\bF_{q}), \bs{x}) = \frac{\left|\left\{\begin{array}{c}
A \in \Mat_{n}(\bF_{q}) : f_{A}(t) \text{ has } k_{j} \text{ irreducible} \\ \text{factors of degree } d_{j} \text{ for } 1 \leq j \leq r
\end{array}
\right\}\right|}{|\GL_{n}(\bF_{q})|},$$
\
where we count the factors with multiplicity, and
$$\hat{\mc{Z}}(S_{n}, \bs{x}) = \frac{\left|\left\{\begin{array}{c}
\sigma \in S_{n} : \sigma \text{ has } k_{j} \text{ cycles} \\ \text{of length } d_{j} \text{ for } 1 \leq j \leq r
\end{array}
\right\}\right|}{|S_{n}|}.$$
\
Thus, applying Lemma \ref{conv'} and noting that
$$\lim_{q \ra \infty}\frac{|\Mat_{n}(\bF_{q})|}{|\GL_{n}(\bF_{q})|} = \lim_{q \ra \infty}(1 - q^{-1})^{-1} \cdots (1 - q^{-n})^{-1} = 1,$$
\
we have
$$\lim_{q \ra \infty}\Prob_{A \in \Mat_{n}(\bF_{q})}\left(\begin{array}{c}
f_{A}(t) \text{ has } k_{j} \\ \text{irreducible factors of} \\
\text{degree } d_{j} \text{ for } 1 \leq j \leq r
\end{array}
\right) = \Prob_{\sigma \in S_{n}}\left(\begin{array}{c}
\sigma \text{ has } k_{j} \text{ cycles} \\ \text{of length } d_{j} \\
\text{for } 1 \leq j \leq r
\end{array}
\right),$$
\
as desired.
\end{proof}
\
\section{Combinatorial proof of Lemma \ref{perm}}\label{permproof}
\hspace{3mm} In this section, we give a proof of Lemma \ref{perm}, a result due to Shepp and Lloyd in \cite{SL}, originally proven by computing the characteristic functions of the distributions. Unlike their proof, we will directly compute the desired probability with a more combinatorial method. Most of our argument will be encoded in the following lemma.
\
\begin{lem}\label{issue} With the notation of Lemma \ref{perm}, fix $0 \leq s \leq r$ and take
\begin{itemize}
\item $x_{d_{j},m} = 0$ with $1 \leq j \leq s$ and $m \neq 0$,
\item $x_{d_{j},m} = 0$ with $s + 1 \leq j \leq r$ and $m \neq 0, k_{j}$, and
\item $x_{d,m} = 1$ for any other ones not on the above list.
\end{itemize}
Then $$\sum_{n=0}^{\infty} \hat{\mc{Z}}(S_{n}, \bs{x})u^{n} = \left( \frac{e^{-u^{d_1}/d_{1}} \cdots e^{-u^{d_r}/d_{r}}}{1 - u} \right) \cdot \left( \prod_{j=s+1}^{r} \left(1 + \frac{(u^{d_j}/d_{j})^{k_{j}}}{k_{j}!} \right) \right) \in \bC \llb u \rrb.$$
\end{lem}
\
\begin{rmk}\label{Tau} Before proving Lemma \ref{issue}, we first see how it implies Lemma \ref{perm}. We will make use of the following useful observation: for any $f(u) = c_{0} + c_{1}u + c_{2}u^{2} + \cdots \in \bC \llb u \rrb$, if $f(1) = c_{0} + c_{1} + c_{2} + \cdots$ exists, then the limit of the coefficient sequence of the power series
$$\frac{f(u)}{1 - u} = b_{0} + b_{1}u + b_{2}u^{2} + \cdots$$
\
is given by
$$\lim_{n \ra \infty} b_{n} = f(1),$$
\
because $(1 - u)^{-1} = 1 + u + u^{2} + \cdots$ so that $b_{n} = c_{0} + c_{1} + \cdots + c_{n}$, which is the coefficient of $u^{n}$ for the power series $f(u)(1 - u)^{-1}$.
\end{rmk}
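The observation in Remark \ref{Tau} is easy to see numerically, e.g., with $f(u) = e^{-u}$, whose coefficient partial sums converge to $f(1) = e^{-1}$. A Python sketch (our illustration):

```python
import math

# Partial sums of the Taylor coefficients of f(u) = e^{-u} are the
# coefficients of f(u)/(1-u); they converge to f(1) = e^{-1}.
coeffs = [(-1) ** j / math.factorial(j) for j in range(30)]
partial_sums = [sum(coeffs[: k + 1]) for k in range(30)]
assert abs(partial_sums[-1] - math.exp(-1)) < 1e-12
```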
\
\begin{proof}[Proof of Lemma \ref{perm}] Using Lemma \ref{issue} with $s = r$, we have
$$\sum_{n=0}^{\infty}\Prob_{\sigma \in S_{n}}\left( m_{d_{j}}(\sigma) = 0 \text{ for } 1 \leq j \leq r \right) u^{n} = \frac{e^{-u^{d_1}/d_{1}} \cdots e^{-u^{d_r}/d_{r}}}{1 - u}.$$
\
Using Remark \ref{Tau}, this implies that
$$\lim_{n \ra \infty}\Prob_{\sigma \in S_{n}}\left(m_{d_{j}}(\sigma) = 0 \text{ for } 1 \leq j \leq r\right) = e^{-1/d_{1}} \cdots e^{-1/d_{r}},$$
\
as claimed. For the general case, we induct on the number of nonzero integers among $k_{1}, \dots, k_{r}$. The base case, where this number is $0$, is exactly what we proved above. Now suppose that the result holds whenever at most $r - s - 1$ of the $k_{j}$ are nonzero, where $0 \leq s \leq r - 1$, and assume that exactly $r - s$ of them are nonzero. Permuting the indices if necessary, we may assume
$$k_{1} = \cdots = k_{s} = 0,$$
\
while
$$k_{s+1}, \dots, k_{r} \neq 0.$$
\
Applying Lemma \ref{issue}, we get
\begin{align*}
\sum_{n=0}^{\infty}&\Prob_{\sigma \in S_{n}}\left(
\begin{array}{c}
m_{d_{j}}(\sigma) = 0 \text{ for } 1 \leq j \leq s, \text{ and } \\
m_{d_{s+1}}(\sigma) \in \{0, k_{s+1}\}, \dots, m_{d_{r}}(\sigma) \in \{0, k_{r}\}
\end{array}
\right) u^{n} \\
&= \left( \frac{e^{-u^{d_1}/d_{1}} \cdots e^{-u^{d_r}/d_{r}}}{1 - u} \right) \cdot \left( \prod_{j=s+1}^{r} \left(1 + \frac{(u^{d_j}/d_{j})^{k_{j}}}{k_{j}!} \right) \right),
\end{align*}
\
so
\begin{align*}
\sum_{\eps_{s+1} \in \{0, k_{s+1}\}} \cdots \sum_{\eps_{r} \in \{0, k_{r}\}} & \lim_{n \ra \infty}\Prob_{\sigma \in S_{n}}\left(
\begin{array}{c}
m_{d_{1}}(\sigma) = \cdots = m_{d_{s}}(\sigma) = 0, \\
m_{d_{s+1}}(\sigma) = \eps_{s+1}, \dots, m_{d_{r}}(\sigma) = \eps_{r}
\end{array}
\right) \\
&= e^{-1/d_{1}} \cdots e^{-1/d_{r}} \prod_{j=s+1}^{r} \left(1 + \frac{(1/d_{j})^{k_{j}}}{k_{j}!} \right).
\end{align*}
\
Expanding the right-hand side of the above identity and applying the induction hypothesis to the terms on the left-hand side having some $\eps_{j} = 0$ implies that
$$\lim_{n \ra \infty}\Prob_{\sigma \in S_{n}}\left(
\begin{array}{c}
m_{d_{1}}(\sigma) = \cdots = m_{d_{s}}(\sigma) = 0, \\
m_{d_{s+1}}(\sigma) = k_{s+1}, \dots, m_{d_{r}}(\sigma) = k_{r}
\end{array} \right) = e^{-1/d_{1}} \cdots e^{-1/d_{r}} \frac{(1/d_{s+1})^{k_{s+1}}}{k_{s+1}!} \cdots \frac{(1/d_{r})^{k_{r}}}{k_{r}!},$$
\
which finishes the proof.
\end{proof}
\
\hspace{3mm} Finally, we provide a proof of Lemma \ref{issue}.
\
\begin{proof}[Proof of Lemma \ref{issue}] For now, let us not evaluate any variables in $\bs{x} = (x_{d,m})$. Recall that
$$\hat{\mc{Z}}(S_{n}, \bs{x}) = \frac{1}{|S_{n}|}\sum_{\sigma \in S_{n}}x_{1, m_{1}(\sigma)} \cdots x_{n, m_{n}(\sigma)},$$
\
where $x_{d,0} = 1$ and $x_{d,m}$ are defined to be formal variables for $d, m \geq 1$. We observe that
$$\hat{\mc{Z}}(S_{n}, \bs{x}) = \sum_{\ld \vdash n}\frac{x_{1,m_{1}(\ld)} \cdots x_{n,m_{n}(\ld)}}{m_{1}(\ld)!1^{m_{1}(\ld)} \cdots m_{n}(\ld)!n^{m_{n}(\ld)}},$$
\
so
$$\sum_{n=0}^{\infty}\hat{\mc{Z}}(S_{n}, \bs{x})u^{n} = \prod_{d=1}^{\infty}\sum_{m=0}^{\infty}\frac{x_{d,m}u^{dm}}{m!d^{m}} = \prod_{d=1}^{\infty}\sum_{m=0}^{\infty}\frac{x_{d,m}(u^d/d)^{m}}{m!}.$$
\
Note that taking all $x_{d,m} = 1$ in the identity (and using $\sum_{d=1}^{\infty}u^{d}/d = -\log(1-u)$ for $|u| < 1$), we get
$$\frac{1}{1 - u} = 1 + u + u^{2} + \cdots = \prod_{d=1}^{\infty}\sum_{m=0}^{\infty}\frac{(u^d/d)^{m}}{m!}.$$
Hence, with the given evaluation in the variables $\bs{x} = (x_{d,m})$, we get
\begin{align*}
\sum_{n=0}^{\infty}\hat{\mc{Z}}(S_{n}, \bs{x}) u^{n} &= \left( \prod_{d=1}^{\infty}\sum_{m=0}^{\infty}\frac{(u^d/d)^{m}}{m!} \right) \cdot \left( \prod_{j=1}^{r} \left( \sum_{m=0}^{\infty}\frac{(u^{d_j}/d_{j})^{m}}{m!} \right)^{-1} \right) \cdot \left( \prod_{j=s+1}^{r} \left(1 + \frac{(u^{d_j}/d_{j})^{k_{j}}}{k_{j}!} \right) \right) \\
&= \left( \frac{e^{-u^{d_1}/d_{1}} \cdots e^{-u^{d_r}/d_{r}}}{1 - u} \right) \cdot \left( \prod_{j=s+1}^{r} \left(1 + \frac{(u^{d_j}/d_{j})^{k_{j}}}{k_{j}!} \right) \right).
\end{align*}
Since these identities make sense as identities of holomorphic functions in $u \in \bC$ with $|u| < 1$, this finishes the proof.
\end{proof}
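The identity expressing $\hat{\mc{Z}}(S_{n}, \bs{x})$ as a sum over partitions rests on the standard count $n!/\prod_{d} m_{d}(\ld)!\, d^{m_{d}(\ld)}$ of permutations with cycle type $\ld$. As a sanity check, this count can be verified by brute force for small $n$; the Python sketch below (function names are ours, not from the paper) does so for $n = 5$:

```python
from collections import Counter
from itertools import permutations
from math import factorial

def cycle_type(perm):
    """Sorted cycle lengths of a permutation in one-line notation (1-based values)."""
    seen, lengths = [False] * len(perm), []
    for i in range(len(perm)):
        if not seen[i]:
            j, c = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j] - 1
                c += 1
            lengths.append(c)
    return tuple(sorted(lengths))

def predicted_count(n, lam):
    """n! / prod_d (m_d! * d**m_d), where m_d is the multiplicity of d in lam."""
    denom = 1
    for d, m_d in Counter(lam).items():
        denom *= factorial(m_d) * d ** m_d
    return factorial(n) // denom

# every cycle type of S_5 occurs exactly as often as the formula predicts
counts = Counter(cycle_type(p) for p in permutations(range(1, 6)))
assert all(c == predicted_count(5, lam) for lam, c in counts.items())
```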
\newpage
% Source: https://arxiv.org/abs/2208.08506
\title{On $d$-permutations and Pattern Avoidance Classes}

\begin{abstract}
Bonichon and Morel first introduced $d$-permutations in their study of multidimensional permutations. Such permutations are represented by their diagrams on $[n]^d$ such that there exists exactly one point per hyperplane $x_i = j$ for $i \in [d]$ and $j \in [n]$. Bonichon and Morel previously enumerated $3$-permutations avoiding small patterns, and we extend their results by first proving four conjectures, which exhaustively enumerate $3$-permutations avoiding any two fixed patterns of size $3$. We further provide an enumerative result relating $3$-permutation avoidance classes with their respective recurrence relations. In particular, we show a recurrence relation for $3$-permutations avoiding the patterns $132$ and $213$, which contributes a new sequence to the OEIS database. We then extend our results to completely enumerate $3$-permutations avoiding three patterns of size $3$.
\end{abstract}

\section{Introduction}
Starting with Knuth's work on permutations in 1973 \cite{knuth1973art}, pattern avoidance has been well studied in enumerative combinatorics. Simion and Schmidt carried out the first systematic enumeration of permutation avoidance classes in 1985 \cite{simion1985restricted}. Pattern avoidance can be defined as follows:
\begin{definition}
Let $\sigma \in S_{n}$ and $\pi \in S_{k}$, where $k \leq n$. We say that the permutation $\sigma$ \emph{contains} the pattern $\pi$ if there exist indices $c_1 < \dots < c_k$ such that $\sigma(c_1) \cdots \sigma(c_k)$ is order-isomorphic to $\pi$. We say a permutation \emph{avoids} a pattern if it does not contain it.
\end{definition}
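This definition translates directly into a brute-force test: a subsequence is order-isomorphic to $\pi$ exactly when its standardization equals $\pi$. A minimal Python sketch (function names are illustrative only, not from the literature):

```python
from itertools import combinations, permutations

def contains(sigma, pi):
    """True if the permutation sigma (one-line notation) contains the pattern pi."""
    k = len(pi)
    for idx in combinations(range(len(sigma)), k):
        window = [sigma[i] for i in idx]
        ranks = sorted(window)
        # the window is order-isomorphic to pi iff its standardization equals pi
        if tuple(ranks.index(v) + 1 for v in window) == tuple(pi):
            return True
    return False

def avoids(sigma, pi):
    return not contains(sigma, pi)

# sanity check: |S_4(123)| is the Catalan number C_4 = 14
assert sum(avoids(s, (1, 2, 3)) for s in permutations(range(1, 5))) == 14
```

The final assertion reflects the classical fact that permutations avoiding a single pattern of size $3$ are counted by the Catalan numbers.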
It is well-known that permutations avoiding certain patterns are in bijection with other combinatorial objects, such as Dyck paths \cite{krattenthaler2001permutations, reifegerste2003diagram}, maximal chains of lattices \cite{simion1985restricted}, and the Catalan and Schr{\"o}der numbers \cite{west1995generating}. In their work, Simion and Schmidt \cite{simion1985restricted} completely enumerated permutations avoiding any single pattern, two patterns, or three patterns of size $3$, paving the path for more work in the field of pattern avoidance.
More recently, Bonichon and Morel \cite{bonichon2022baxter} defined a multidimensional generalization of a permutation, called a $d$-permutation, which resembles the structure of a $(d-1)$-tuple of permutations. Tuples of permutations have been studied before \cite{gunby2019asymptotics, aldred2005permuting}, but $d$-permutations have not been thoroughly studied yet, mainly appearing in a few papers related to separable permutations \cite{gunby2019asymptotics, asinowski2010separable}. In particular, Asinowski and Mansour \cite{asinowski2010separable} presented a generalization of separable permutations that is similar to $d$-permutations and characterized these generalized permutations with sets of forbidden patterns. The study of pattern-avoidance classes of permutations has received much attention, and permutations avoiding sets of small patterns have been exhaustively enumerated \cite{simion1985restricted, mansour2020enumeration, knuth1973art}. However, the $d$-permutations introduced by Bonichon and Morel differ slightly from those introduced by Asinowski and Mansour \cite{asinowski2010separable} and coincide with classical permutations for $d=2$.
Similar to the enumeration Simion and Schmidt \cite{simion1985restricted} carried out in 1985, Bonichon and Morel \cite{bonichon2022baxter} started the enumeration of $d$-permutations avoiding small patterns and made many conjectures regarding the enumeration of $3$-permutations avoiding sets of two patterns. We present two main classes of results regarding the enumeration of $3$-permutations avoiding small patterns. We first completely enumerate $3$-permutations avoiding classes of two patterns and prove their respective recurrence relations, settling the conjectures presented by Bonichon and Morel \cite{bonichon2022baxter}. Further, we derive a recurrence relation for $3$-permutations avoiding $132$ and $213$, whose sequence does not appear in the OEIS database \cite{oeis} and about which Bonichon and Morel made no conjecture. We then initiate and completely enumerate $3$-permutations avoiding classes of three patterns, in the spirit of Simion and Schmidt's 1985 results \cite{simion1985restricted}.
This paper is organized as follows. In Section 2, we introduce preliminary definitions and notation. In Section 3, we completely enumerate sequences of $3$-permutations avoiding two patterns of size 3 and prove four conjectures of Bonichon and Morel \cite{bonichon2022baxter}. In addition, we prove a recurrence relation for an avoidance class whose counting sequence does not appear in the OEIS database \cite{oeis}, completing our enumeration. In Section 4, we extend our enumeration to $3$-permutations avoiding three patterns of size 3 and prove recurrence relations for their avoidance classes. We conclude with open problems in Section 5.
\section{Preliminaries} \label{sec:preliminaries}
Let $S_n$ denote the set of permutations of $[n] = \{ 1,2, \dots, n \}$. Note that we can represent each permutation $\sigma \in S_n$ as a sequence $\sigma(1) \cdots \sigma(n)$. Further, let $\mathrm{Id}_n$ denote the identity permutation $12 \cdots n$ of size $n$ and given a permutation $\sigma \in S_n$, let $\rev(\sigma)$ denote the reverse permutation $\sigma(n) \sigma(n-1) \cdots \sigma(1)$. We further say that a sequence $w$ is \emph{consecutively increasing} (respectively \emph{decreasing}) if for every index $i$, $w(i+1) = w(i)+1$ (respectively $w(i+1) = w(i)-1$).
For a sequence $w = w(1) \cdots w(n)$ with distinct real values, the \emph{standardization} of $w$ is the unique permutation with the same relative order. Note that once standardized, a consecutively-increasing sequence is the identity permutation and a consecutively-decreasing sequence is the reverse identity permutation. Moreover, we say that in a permutation $\sigma$, the elements $\sigma(i)$ and $\sigma(i+1)$ are \emph{adjacent} to each other. More specifically, $\sigma(i)$ is \emph{left-adjacent} to $\sigma(i+1)$ and similarly, the element $\sigma(i+1)$ is \emph{right-adjacent} to $\sigma(i)$. The following definitions in this section were introduced in \cite{bonichon2022baxter}.
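Standardization is likewise easy to compute; a one-function Python sketch (the name \texttt{standardize} is ours) replaces each entry by its rank:

```python
def standardize(w):
    """Return the unique permutation (as a tuple, 1 = smallest) with the
    same relative order as the sequence w of distinct real values."""
    order = sorted(w)
    return tuple(order.index(x) + 1 for x in w)

# a consecutively-increasing sequence standardizes to the identity,
# a consecutively-decreasing one to the reverse identity
assert standardize((5, 6, 7)) == (1, 2, 3)
assert standardize((7, 6, 5)) == (3, 2, 1)
```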
\begin{definition}
A \emph{$d$-permutation} $\boldsymbol{\sigma} := (\sigma_1 , \dots, \sigma_{d-1})$ of size $n$ is a tuple of permutations, each of size $n$. Let $S_{n}^{d-1}$ denote the set of $d$-permutations of size $n$. We say that $d$ is the \textit{dimension} of $\boldsymbol{\sigma}$. Moreover, the \textit{diagram} of $\boldsymbol{\sigma}$ is the set of points $(i, \sigma_1(i) ,\dots, \sigma_{d-1}(i))$ for all $i \in [n]$.
\end{definition}
Note that the identity permutation is implicitly included in the diagram of a $d$-permutation, which justifies why a $d$-permutation is a $(d-1)$-tuple of permutations. For a $d$-permutation $\boldsymbol{\sigma} = (\sigma_1 ,\dots, \sigma_{d-1}),$ let $\boldsymbol{\Bar{\sigma}} = (\mathrm{Id}_n, \sigma_1, \dots, \sigma_{d-1}).$ Further, with this definition, it is natural to consider the projections of the diagram of a $d$-permutation, which is useful in defining the notion of pattern avoidance for $d$-permutations.
\begin{definition}
Given $d' \in \mathbb N$ and $\boldsymbol{i} = i_1, \dots, i_{d'} \in [d]^{d'}$, the \emph{projection on $\boldsymbol{i}$} of some $d$-permutation $\boldsymbol{\sigma}$ is the $d'$-permutation $\mathrm{proj}_{\boldsymbol{i}}(\boldsymbol{\sigma}) = (\boldsymbol{\Bar{\sigma}}_{i_2} \circ \boldsymbol{\Bar{\sigma}}_{i_1}^{-1}, \dots, \boldsymbol{\Bar{\sigma}}_{i_{d'}} \circ \boldsymbol{\Bar{\sigma}}_{i_1}^{-1} ).$
\end{definition}
We say that a projection is \emph{direct} if $i_1 < \dots < i_{d'}$ and \emph{indirect} otherwise.
\begin{remark}
There are only three direct projections of dimension $2$ of a $3$-permutation $\boldsymbol{\sigma} = (\sigma, \sigma')$. Namely, they are $\sigma$, $\sigma'$, and $\sigma' \circ \sigma^{-1}$.
\end{remark}
In the remainder of the section, we use the projection of a $3$-permutation $\boldsymbol{\sigma} = (\sigma, \sigma')$ to refer to the projection $\sigma' \circ \sigma^{-1}$. Using direct projections, Bonichon and Morel \cite{bonichon2022baxter} introduced the following definition of pattern avoidance, which coincides with the existing concept of pattern avoidance for regular permutations.
\begin{definition}
Let $\boldsymbol{\sigma} = (\sigma_1 ,\dots, \sigma_{d-1}) \in S_{n}^{d-1}$ and $\boldsymbol{\pi} = (\pi_1, \dots, \pi_{d'-1}) \in S_{k}^{d'-1}$, where $k \leq n$. We say that the $d$-permutation $\boldsymbol{\sigma}$ \emph{contains} the pattern $\boldsymbol{\pi}$ if there exists a direct projection $\boldsymbol{\sigma'}$ of dimension $d'$ and indices $c_1 < \dots < c_k$ such that $\boldsymbol{\sigma'}_i(c_1) \cdots \boldsymbol{\sigma'}_i(c_k)$ is order-isomorphic to $\pi_i$ for all $i$. We say a $d$-permutation \emph{avoids} a pattern if it does not contain it.
\end{definition}
Given $m$ patterns $\boldsymbol{\pi_1}, \dots, \boldsymbol{\pi_m} \in S^{d'-1}_{n'}$, we write $S_{n}^{d-1}(\boldsymbol{\pi_1}, \dots, \boldsymbol{\pi_m})$ to mean the set of $d$-permutations of size $n$ that simultaneously avoid $\boldsymbol{\pi_1}, \dots, \boldsymbol{\pi_m}$.
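For small $n$, these avoidance classes can be enumerated directly from the definition: a $3$-permutation $(\sigma, \sigma')$ avoids a set of classical (dimension-$2$) patterns exactly when each of its three direct projections $\sigma$, $\sigma'$, and $\sigma' \circ \sigma^{-1}$ does. A brute-force Python sketch (function names are ours), checked against the first terms $1, 4, 20$ of the $123$-avoidance sequence:

```python
from itertools import combinations, permutations

def contains(sigma, pi):
    """True if the permutation sigma contains the classical pattern pi."""
    k = len(pi)
    for idx in combinations(range(len(sigma)), k):
        window = [sigma[i] for i in idx]
        ranks = sorted(window)
        if tuple(ranks.index(v) + 1 for v in window) == tuple(pi):
            return True
    return False

def projections(s, t):
    """The three direct 2-dimensional projections of the 3-permutation (s, t):
    s, t, and the composition t o s^{-1}."""
    inv = [0] * len(s)
    for i, v in enumerate(s):
        inv[v - 1] = i
    comp = tuple(t[inv[j]] for j in range(len(s)))
    return tuple(s), tuple(t), comp

def count_avoiders(n, patterns):
    """|S_n^2(patterns)|: 3-permutations all of whose direct projections
    avoid every pattern in `patterns`."""
    perms = list(permutations(range(1, n + 1)))
    return sum(
        1
        for s in perms
        for t in perms
        if all(not contains(p, pat) for p in projections(s, t) for pat in patterns)
    )

# first terms of the single-pattern 123 avoidance sequence: 1, 4, 20
assert [count_avoiders(n, [(1, 2, 3)]) for n in (1, 2, 3)] == [1, 4, 20]
```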
Bonichon and Morel \cite{bonichon2022baxter} also noted symmetries on $d$-permutations that correspond to symmetries on the $d$-dimensional cube. In particular, these symmetries are counted by signed permutation matrices of dimension $d$. Such a signed permutation matrix is a square matrix with entries in $\{-1, 0, 1\}$ such that each row and each column contains exactly one nonzero element. We call $\textit{d-Sym}$ the set of such signed permutation matrices of size $d$.
This allows us to extend the well-known definitions of Wilf-equivalence and trivial Wilf-equivalence to higher dimensions.
\begin{definition}
We say that two sets of patterns $\boldsymbol{\pi_1}, \dots, \boldsymbol{\pi_k}$ and $\boldsymbol{\tau_1}, \dots, \boldsymbol{\tau_\ell}$ are \emph{d-Wilf-equivalent} if $|S_{n}^{d-1}(\boldsymbol{\pi_1}, \dots, \boldsymbol{\pi_k})| = |S_n^{d-1}(\boldsymbol{\tau_1}, \dots, \boldsymbol{\tau_\ell})|$. Moreover, these patterns are \emph{trivially d-Wilf-equivalent} if there exists a symmetry $s \in \textit{d-Sym}$ that maps $S_{n}^{d-1}(\boldsymbol{\pi_1}, \dots, \boldsymbol{\pi_k})$ to $S_n^{d-1}(\boldsymbol{\tau_1}, \dots, \boldsymbol{\tau_\ell})$ bijectively.
\end{definition}
\section{Enumeration of Pattern Avoidance Classes of at most size 2}
\label{sec:enumeration}
Bonichon and Morel \cite{bonichon2022baxter} proposed the problem of enumerating sequences of $3$-permutations avoiding at most two patterns of size 2 or 3. They provided Table \ref{double avoidance}, conjecturing the recurrences in the last four rows and leaving the remainder as open problems.
\begin{table}[htp]
\centering
\begin{tabular}{|c | c | c | c | c|}
\hline
Patterns & \#TWE & Sequence & OEIS Sequence & Comment \\ [0.5ex]
\hline\hline
12 & 1 & $1,0,0,0,0,\dots$ & & \cite{bonichon2022baxter} \\
\hline
21 & 1 & $1,1,1,1,1,\dots$ & & \cite{bonichon2022baxter} \\
\hline
123 & 1 & $1,4,20,100,410,1224,2232, \dots$ & & Not in OEIS \\
\hline
132 & 2 & $1,4,21,116,646,3596,19981, \dots$ & & Not in OEIS \\
\hline
231 & 2 & $1,4,21,123,767,4994,35584, \dots$ & & Not in OEIS \\
\hline
321 & 1 & $1,4,21,128,850,5956,43235, \dots$ & & Not in OEIS \\
\hline
$123, 132$ & 2 & $1,4,8,8,0,0,0, \dots$ & & Terminates after $n=4$ \\
\hline
$123, 231$ & 2 & $1,4,9,6,0,0,0, \dots$ & & Terminates after $n=4$ \\
\hline
$123, 321$ & 1 & $1,4,8,0,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$132, 213$ & 1 & $1,4,12,28,58,114,220, \dots$ & & Theorem \ref{132,213} \\
\hline
$132, 231$ & 4 & $1,4,12,32,80,192,448, \dots$ & \href{http://oeis.org/A001787}{A001787} & Theorem \ref{132,231} \\
\hline
$132, 321$ & 2 & $1,4,12,27,51,86,134, \dots$ & \href{http://oeis.org/A047732}{A047732} & Theorem \ref{132,321} \\
\hline
$231, 312$ & 1 & $1,4,10,28,76,208,568, \dots$ & \href{http://oeis.org/A026150}{A026150} & Theorem \ref{231,312} \\
\hline
$231, 321$ & 2 & $1,4,12,36,108,324,972, \dots$ & \href{http://oeis.org/A003946}{A003946} & Theorem \ref{231,321} \\
\hline
\end{tabular}
\caption{Sequences of $3$-permutations avoiding at most two patterns of size 2 or 3. The second column indicates the number of trivially Wilf-equivalent patterns.}
\label{double avoidance}
\end{table}
In all of the following theorems, we take constructive approaches to prove recurrence relations. Given an element $\boldsymbol{\sigma}$ in $S_n^2(\pi_1, \pi_2)$, we attempt to construct elements in $S_{n+1}^2(\pi_1, \pi_2)$ by inserting a maximal element $n+1$ into the permutations in $\boldsymbol{\sigma}$. Note that if a permutation $\sigma \in S_n$ contains a pattern $\pi$, then the permutation obtained by inserting a new maximal element $n+1$ anywhere into $\sigma$ still contains $\pi$. Similarly, if a permutation $\sigma \in S_n$ avoids a pattern $\pi$, then the permutation obtained by removing the maximal element $n$ from $\sigma$ still avoids $\pi$.
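Both observations can be confirmed by brute force for a sample pattern; a minimal Python sketch (function names are ours):

```python
from itertools import combinations, permutations

def contains(sigma, pi):
    """True if the permutation sigma contains the pattern pi."""
    k = len(pi)
    for idx in combinations(range(len(sigma)), k):
        window = [sigma[i] for i in idx]
        ranks = sorted(window)
        if tuple(ranks.index(v) + 1 for v in window) == tuple(pi):
            return True
    return False

pi = (1, 3, 2)
for s in permutations(range(1, 5)):
    if contains(s, pi):
        # inserting a new maximal element anywhere preserves the occurrence
        assert all(contains(s[:i] + (5,) + s[i:], pi) for i in range(5))
    else:
        # removing the maximal element from an avoider preserves avoidance
        assert not contains(tuple(v for v in s if v != 4), pi)
```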
\begin{theorem}\label{132,231}
Let $a_n = |S_n^2(132,231)|$. Then $a_n$ satisfies the recurrence relation $a_{n+1} = 2a_n + 2^{n}$ with initial term $a_1 =1$, which corresponds with OEIS sequence \href{http://oeis.org/A001787}{A001787}.
\end{theorem}
\begin{proof}
Given any $\boldsymbol{\sigma} = (\sigma, \sigma') \in S^2_n(132,231)$, we construct an element of $S^2_{n+1}(132,231)$ by inserting a maximal element $n+1$ in both $\sigma$ and $\sigma'$. To avoid both $132$ and $231$, the maximal element $n+1$ must be inserted into either the beginning or end of $\sigma$ and $\sigma'$; otherwise if there are elements on both sides of $n+1$, then there must be either an occurrence of $132$ or $231$.
Appending a maximal element $n+1$ onto the left of both $\sigma$ and $\sigma'$ or onto the right of both $\sigma$ and $\sigma'$ also avoids $132$ and $231$. In other words, $(\sigma (n+1), \sigma'(n+1))$ and $((n+1)\sigma, (n+1)\sigma')$ both still avoid $132$ and $231$. This contributes $2a_{n}$ different $3$-permutations in $S_{n+1}^2(132,231)$.
We now make the following claims:
\begin{claim}
The $3$-permutation $(\sigma(n+1), (n+1)\sigma')$ avoids $132$ and $231$ if and only if $\sigma$ is $\mathrm{Id}_n$ and $\sigma' \in S_n^1(132,231)$.
\end{claim}
\begin{proof}
For the forwards direction, suppose that $(\sigma(n+1), (n+1)\sigma')$ avoids $132$ and $231$. Writing the projection $((n+1)\sigma') \circ (\sigma(n+1))^{-1} = (\sigma_L (n+1) \sigma_R)$ for some subpermutations $\sigma_L$ and $\sigma_R$, note that $\sigma_R$ is nonempty; hence, by the reasoning mentioned above, $\sigma_L$ must be empty, since otherwise $(\sigma_L (n+1) \sigma_R)$ would contain an occurrence of either $132$ or $231$. Thus $\sigma$ begins with the minimal element $1$, and because $\sigma$ must avoid the pattern $132$, it is forced to be consecutively increasing and hence is the identity permutation.
For the backwards direction, both $(\sigma(n+1))$ and $((n+1)\sigma')$ still avoid $132$ and $231$. Further, the projection $((n+1)\sigma') \circ (\sigma(n+1))^{-1}$ evaluates to $(n+1)\sigma'$, which also still avoids $132$ and $231$.
\end{proof}
\begin{claim}
The $3$-permutation $((n+1)\sigma, \sigma'(n+1))$ avoids $132$ and $231$ if and only if $\sigma$ is $\mathrm{rev}(\mathrm{Id}_n)$ and $\sigma' \in S_n^1(132,231)$.
\end{claim}
\begin{proof}
For the forwards direction, we write the projection $(\sigma'(n+1))\circ((n+1)\sigma)^{-1}$ in the form of $\sigma_L (n+1) \sigma_R$. As above, $\sigma_R$ is nonempty and hence, $\sigma_L$ must be empty to avoid the patterns $132$ and $231$. So we conclude that $\sigma$ must end with the minimal element $1$. Because $\sigma$ must avoid the pattern $231$, it is forced to be consecutively decreasing and hence is $\mathrm{rev}(\mathrm{Id}_n)$.
For the backwards direction, $((n+1)\sigma)$ and $(\sigma'(n+1))$ both still avoid $132$ and $231$. The projection $(\sigma'(n+1))\circ ((n+1)\sigma)^{-1}$ evaluates to $(n+1)\mathrm{rev}(\sigma')$. Because $132$ and $231$ are reverses of each other, $\mathrm{rev}(\sigma')$ still avoids $132$ and $231$, and thus $(n+1)\mathrm{rev}(\sigma')$ avoids these patterns as well.
\end{proof}
Thus we have shown that given any $3$-permutation $\boldsymbol{\sigma} = (\sigma, \sigma') \in S^2_n(132,231)$, we can construct two elements in $S^2_{n+1}(132,231)$; furthermore, we can construct two additional elements in $S^2_{n+1}(132,231)$ if and only if $\sigma' \in S^1_n(132,231)$ and $\sigma$ is $\Id_n$ or $\rev(\Id_n)$. Simion and Schmidt \cite{simion1985restricted} have shown that $|S^1_n(132,231)| = 2^{n-1}$. In the cases where $\sigma$ is $\Id_n$ or $\rev(\Id_n)$, it follows that $\boldsymbol{\sigma}$ avoids $132$ and $231$ if and only if $\sigma'$ avoids these patterns, and hence it follows that \begin{align*}
a_{n+1} = 2a_n + 2^n. & \qedhere
\end{align*}
\end{proof}
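As a numerical sanity check, iterating the recurrence just proved reproduces the tabulated sequence and agrees with the closed form $a_n = n \, 2^{n-1}$ of \href{http://oeis.org/A001787}{A001787} (a minimal Python sketch):

```python
def seq(n_terms):
    """Iterate a_{n+1} = 2 a_n + 2^n with a_1 = 1, for S_n^2(132, 231)."""
    a = [1]
    while len(a) < n_terms:
        n = len(a)  # a[-1] holds a_n
        a.append(2 * a[-1] + 2 ** n)
    return a

assert seq(7) == [1, 4, 12, 32, 80, 192, 448]  # tabulated row for {132, 231}
assert all(a == n * 2 ** (n - 1) for n, a in enumerate(seq(10), start=1))
```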
\begin{theorem}\label{132,321}
Let $a_n = |S^2_n(132,321)|$. Then $a_n$ follows the recurrence relation $a_{n+1} = a_n +n(n+2)$ with initial term $a_1 = 1$, which corresponds with the OEIS sequence \href{http://oeis.org/A047732}{A047732}.
\end{theorem}
\begin{proof}
Let us write $\boldsymbol{\sigma} = (\sigma, \sigma') \in S^2_n(132,321)$ in the form $(\sigma_L n \sigma_R, \sigma_L' n \sigma_R')$. We construct an element of $S^2_{n+1}(132,321)$ by inserting a maximal element $n+1$ in both $\sigma$ and $\sigma'$.
Inserting $n+1$ onto the end of $\sigma$ and $\sigma'$ always constructs a $132$-avoiding and $321$-avoiding $3$-permutation, and thus contributes $a_n$ different $3$-permutations to $S^2_{n+1}(132,321)$.
We also have the following three cases:
\begin{enumerate}
\item $\sigma_R$ and $\sigma_R'$ are both nonempty and $\sigma_L$, $\sigma_L'$, $\sigma_R$, and $\sigma_R'$ are all consecutively increasing. Moreover, every element of $\sigma_L$ and $\sigma_L'$ is greater than every element of $\sigma_R$ and $\sigma_R'$, respectively.
\item Exactly one of $\sigma_R$, $\sigma_R'$ is empty and the other is of the form $\sigma_L n \sigma_R$, where $\sigma_L$ and $\sigma_R$ are consecutively increasing and every element of $\sigma_L$ is greater than every element of $\sigma_R$.
\item Both $\sigma_R$, $\sigma_R'$ are empty.
\end{enumerate}
First we show that when $\boldsymbol{\sigma}$ falls into none of these cases, inserting a maximal element $n+1$ into $\boldsymbol{\sigma}$ cannot avoid these patterns. So let $\boldsymbol{\sigma} = (\sigma_L n \sigma_R, \sigma_L' n \sigma_R')$, where, without loss of generality, $\sigma_L$ is not consecutively increasing.
Every element of $\sigma_L$ and $\sigma_L'$ still must be greater than every element of $\sigma_R$ and $\sigma_R'$, respectively; otherwise, they would contain an occurrence of $132$. So because $\sigma_L$ is not consecutively increasing, there is an occurrence of $\ell (\ell-c)$ in $\sigma_L$.
If $\sigma_R$ contains elements in the interval $(\ell-c, n)$, then $\boldsymbol{\sigma}$ would contain an occurrence of $132$. Similarly, if $\sigma_R$ contains elements in the interval $(1, \ell-c)$, then $\boldsymbol{\sigma}$ would contain an occurrence of $321$. So $\sigma_R$ is empty. Inserting $n+1$ to the left of $\ell$ gives an occurrence of $321$. And inserting $n+1$ to the right of $\ell$ gives an occurrence of $132$. So nothing outside these cases avoids $132$ and $321$.
Now we present each case:
\begin{enumerate}
\item We claim that only $(\sigma_L n (n+1) \sigma_R, \sigma_L' n (n+1) \sigma_R')$ avoids $132$ and $321$.
Because $\sigma_L$ and $\sigma_R$ are consecutively increasing and $\sigma_R$ must start with $1$, in the projection $(\sigma_L' n (n+1) \sigma_R') \circ (\sigma_L n (n+1) \sigma_R)^{-1}$, either $n+1$ is right-adjacent to $n$, or the projection begins with $n+1$. In the former case, this projection avoids $132$ and $321$, and hence $\boldsymbol{\sigma}$ avoids these patterns too. In the latter case, the projection is of the form $(n+1) \sigma_R' \sigma_L' n$. But $\sigma_R' \sigma_L' n$ is strictly increasing, so the projection also avoids $132$ and $321$. Therefore the $3$-permutation $(\sigma_L n (n+1) \sigma_R, \sigma_L' n (n+1) \sigma_R')$ avoids these patterns too.
Now we show that inserting $n+1$ into $\boldsymbol{\sigma}$ anywhere else cannot result in an element in $S_{n+1}^2(132,321)$. In particular, we show that we are forced to insert $n+1$ at the end of $\sigma$ and $\sigma'$ or directly after $n$ in these two permutations. Otherwise, because $\sigma_L$, $\sigma_L'$, $\sigma_R$, and $\sigma_R'$ are all consecutively increasing, $\sigma$ or $\sigma'$ would contain $132$.
Now it is sufficient to show $(\sigma_L n \sigma_R (n+1), \sigma_L' n (n+1) \sigma_R')$ and $(\sigma_L n (n+1) \sigma_R, \sigma_L' n \sigma_R' (n+1))$ do not avoid $132$ and $321$.
To see the former, we take the projection $\rho$ and note that it must take one of the following three forms:
\begin{enumerate}
\item $\rev(\Id_{n+1})$.
\item $(\pi_1 c \pi_2 m \pi_3 (c+1))$, where $m >c$ for some $m$ and $c$.
\item $(\pi (n+1)c)$, where $c$ is the maximum element in $\sigma_R'$ and $\pi$ is a subpermutation.
\end{enumerate}
In Case (a), $\rho$ contains $321$, in Case (b), $\rho$ contains $132$, and in Case (c), $\rho$ contains $132$.
Similar reasoning shows that $(\sigma_L n (n+1) \sigma_R, \sigma_L' n \sigma_R' (n+1))$ does not avoid $132$ and $321$.
There are $(n-1)$ ways to choose $\sigma_L n \sigma_R$ and $\sigma_L' n \sigma_R'$, so this case contributes $(n-1)^2$ distinct $3$-permutations to $S^2_{n+1}(132,321)$.
\item Without loss of generality, let $\sigma_R$ be empty. Then we claim that only the $3$-permutations $((n+1)\sigma_L n, \sigma_L' n (n+1) \sigma_R')$ and $(\sigma_L n(n+1), \sigma_L' n (n+1) \sigma_R')$ avoid $132$ and $321$.
Checking that both of these $3$-permutations avoid $132$ and $321$ uses a similar argument to the previous case. Now we show that inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ cannot avoid the patterns $132$ and $321$. In particular, we must insert $n+1$ into the beginning or end of $\sigma$ and either right-adjacent to $n$ or at the end in $\sigma'$. So it is sufficient to show that $((n+1)\sigma_L n, \sigma_L' n \sigma_R' (n+1))$ does not avoid $132$ and $321$.
Now taking the projection gives us a permutation of the form $\pi (n+1) c$, where $c$ is the first element of $\sigma_L'$ and $\pi$ is some subpermutation. Because $\sigma_R'$ is nonempty, $\pi$ contains elements in $\sigma_R'$, and this composition contains an instance of $132$.
A similar argument holds for when $\sigma_R'$ is empty and $\sigma_R$ is nonempty. This case contributes $4(n-1)$ total $3$-permutations to $S^2_{n+1}(132,321)$.
\item Then $(\sigma_L n (n+1), (n+1) \sigma_L' n)$, $((n+1)\sigma_L n, (n+1) \sigma_L' n)$, and $((n+1)\sigma_L n, \sigma_L' n (n+1))$ all avoid $132$ and $321$.
Checking that these avoid $132$ and $321$ follow a similar reasoning to Case 1. Because we are forced to insert the maximal element $n+1$ to the beginning or end of $\sigma$ and $\sigma'$, any other insertion would not avoid $132$ and $321$.
This case contributes $3$ new elements in $S^2_{n+1}(132,321)$.
\end{enumerate}
And hence we conclude that \begin{align*}
a_{n+1} &= a_n + (n-1)^2 + 4(n-1) +3 \\
&= a_n + n(n+2). & \qedhere
\end{align*}
\end{proof}
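Iterating this recurrence likewise reproduces the tabulated sequence (a minimal Python sketch):

```python
def seq(n_terms):
    """Iterate a_{n+1} = a_n + n(n+2) with a_1 = 1, for S_n^2(132, 321)."""
    a = [1]
    while len(a) < n_terms:
        n = len(a)  # a[-1] holds a_n
        a.append(a[-1] + n * (n + 2))
    return a

assert seq(7) == [1, 4, 12, 27, 51, 86, 134]  # tabulated row for {132, 321}
```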
\begin{theorem}\label{231,312}
Let $a_n = |S^2_n(231,312)|$. Then $a_n$ follows the recurrence relation $a_{n+1} = 2a_n + 2a_{n-1}$ with initial terms $a_1 = 1$ and $a_2 =4$, which corresponds to the OEIS sequence \href{http://oeis.org/A026150}{A026150}.
\end{theorem}
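Before giving the proof, note that iterating the recurrence in the statement reproduces the tabulated sequence (a minimal Python sketch):

```python
def seq(n_terms):
    """Iterate a_{n+1} = 2 a_n + 2 a_{n-1} with a_1 = 1, a_2 = 4,
    for S_n^2(231, 312)."""
    a = [1, 4]
    while len(a) < n_terms:
        a.append(2 * a[-1] + 2 * a[-2])
    return a[:n_terms]

assert seq(7) == [1, 4, 10, 28, 76, 208, 568]  # tabulated row for {231, 312}
```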
\begin{proof}
Let $\boldsymbol{\sigma} = (\sigma, \sigma') \in S_n^2(231,312)$ and write $\boldsymbol{\sigma}$ in the form $(\sigma_L n \sigma_R, \sigma_L' n \sigma_R')$. Note that each element of $\sigma_L$ and $\sigma_L'$ is less than every element of $\sigma_R$ and $\sigma_R'$, respectively. Further, $\sigma_R$ and $\sigma_R'$ have to be consecutively decreasing. If $\sigma_R$ is nonempty, $n-1$ must be right-adjacent to $n$ in $\sigma$ to avoid occurrences of $231$ and $312$. We then have the following cases:
\begin{enumerate}
\item $\boldsymbol{\sigma}$ is of the form $(\sigma_L n, \sigma_L' n)$.
\item $\boldsymbol{\sigma}$ is of the form $(\sigma_L (n-1) n \sigma_R, \sigma_L' (n-1) n \sigma_R')$.
\item $\boldsymbol{\sigma}$ is of the form $(\sigma_L (n-1) n \sigma_R, \sigma_L' n (n-1) \sigma_R')$.
\item $\boldsymbol{\sigma}$ is of the form $(\sigma_L n (n-1) \sigma_R, \sigma_L' (n-1) n \sigma_R')$.
\item $\boldsymbol{\sigma}$ is of the form $(\sigma_L n (n-1) \sigma_R, \sigma_L' n (n-1) \sigma_R')$.
\end{enumerate}
Now we present each case:
\begin{enumerate}
\item $(\sigma, \sigma') = (\sigma_L n, \sigma_L' n)$.
The maximal element $n+1$ must be inserted adjacent to $n$ in both $\sigma$ and $\sigma'$. If not, then there would be an occurrence of $312$. Thus the following $3$-permutations all avoid $231$ and $312$: $(\sigma_L n (n+1), \sigma_L' n (n+1)), (\sigma_L n (n+1), \sigma_L' (n+1)n), (\sigma_L (n+1) n, \sigma_L' n (n+1)),$ and $(\sigma_L (n+1)n, \sigma_L' (n+1)n)$. So each instance of $\boldsymbol{\sigma}$ in this case contributes $4$ new $3$-permutations that avoid $231$ and $312$.
\item $(\sigma, \sigma')= (\sigma_L (n-1) n \sigma_R, \sigma_L' (n-1) n \sigma_R')$.
Then both $\sigma_R$ and $\sigma_R'$ are empty (or else there would be an occurrence of $231$). And this is counted in Case 1.
\item $(\sigma, \sigma')= (\sigma_L (n-1) n \sigma_R, \sigma_L' n (n-1) \sigma_R')$.
Then $\sigma_R$ must be empty and $n (n-1) \sigma_R'$ must be consecutively decreasing. Appending the maximal element $n+1$ onto the end of $\sigma$ and $\sigma'$ also avoids $231$ and $312$. In other words, $(\sigma_L (n-1) n (n+1), \sigma_L' n (n-1) \sigma_R' (n+1))$ avoids $231$ and $312$. In addition, the $3$-permutation $(\sigma_L (n-1) n (n+1), \sigma_L' (n+1) n (n-1) \sigma_R')$ also avoids $231$ and $312$.
To see this, we first evaluate the projection of $(\sigma_L (n-1) n, \sigma_L' n (n-1) \sigma_R')$. As shown in Figure 1, we can subdivide $\sigma_L$ into $\pi_L$ and $\pi_R$, where $|\pi_L| = |\sigma_L'|$. For the projection to avoid $312$, $\pi_R (n-1) n$ must be strictly increasing because $n (n-1) \sigma_R'$ is consecutively decreasing.
\begin{figure}[htp]
\centering
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (2,0) coordinate (x2)
--++ (0.6,0) coordinate (x3)
--++ (0.8,0) coordinate (x4)
--++ (0.8,0) coordinate (x5)
--++ (2,0) coordinate (x6)
--++ (0.8,0) coordinate (x7)
--++ (0.8,0) coordinate (x8);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L2-|x1) -- (L2-|x6) node[above,pos=0.5] {$\sigma_L$};
\node at (L2-|x7) {$n-1$};
\node at (L2-|x8) {$n$};
\draw[|-|] (L3-|x1) -- (L3-|x2) node[above,pos=0.5] {$\sigma_L'$};
\node at (L3-|x3) {$n$};
\node at (L3-|x4) {$n-1$};
\draw[|-|] (L3-|x5) -- (L3-|x8) node[above,pos=0.5] {$\sigma_R'$};
\draw[|-|,dashed] (L1-|x1) -- (L1-|x2) node[above,pos=0.5] {$\pi_L$};
\draw[|-|,dashed] (L1-|x6) -- (L1-|x2) node[above,pos=0.5] {$\pi_R$};
\end{tikzpicture}
\caption{The two-line notation used to evaluate $(\sigma_L' n (n-1) \sigma_R') \circ (\sigma_L (n-1)n)^{-1}$. The second line represents the first permutation in the $3$-permutation and the last line represents the second permutation in the $3$-permutation.}
\end{figure}
This projection is of the form $(\sigma_L' \circ \pi_L^{-1}) n (n-1) \sigma_R'$, which still avoids $231$ and $312$. A similar argument shows that $(\sigma_L (n-1) n (n+1), \sigma_L' (n+1) n (n-1) \sigma_R')$ also avoids $231$ and $312$ by noting that its projection is of the form $(\sigma_L' \circ \pi_L^{-1})(n+1)n (n-1) \sigma_R'$, which still avoids these patterns.
Now we show that inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ cannot produce a $3$-permutation that avoids $231$ and $312$. We must insert $n+1$ adjacent to $n$ in $\sigma$. If not, then inserting $n+1$ anywhere to the left of $n-1$ contains an occurrence of $312$. Similarly, we must insert $n+1$ left-adjacent to $n$ or at the end in $\sigma'$. Inserting $n+1$ anywhere to the right of $n-1$ contains an occurrence of $231$.
We first show that $(\sigma_L (n-1) (n+1) n, \sigma_L' (n+1) n (n-1) \sigma_R')$ cannot avoid these patterns.
As discussed above, we can subdivide $\sigma_L$ into $\pi_L$ and $\pi_R$, where $\pi_L$ is the same size as $\sigma_L'$ and $\pi_R(n-1)$ is consecutively increasing.
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (2,0) coordinate (x2)
--++ (0.8,0) coordinate (x3)
--++ (0.8,0) coordinate (x4)
--++ (0.8,0) coordinate (x5)
--++ (0.8,0) coordinate (x6)
--++ (0.8,0) coordinate (x7)
--++ (0.8,0) coordinate (x8)
--++ (1.2,0) coordinate (x9)
--++ (0.8,0) coordinate (xx);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L2-|x1) -- (L2-|x7) node[above,pos=0.5] {$\sigma_L$};
\node at (L2-|x8) {$n-1$};
\node at (L2-|x9) {$n+1$};
\node at (L2-|xx) {$n$};
\draw[|-|] (L3-|x1) -- (L3-|x2) node[above,pos=0.5] {$\sigma_L'$};
\node at (L3-|x5) {$n-1$};
\node at (L3-|x4) {$n$};
\node at (L3-|x3) {$n+1$};
\draw[|-|] (L3-|x6) -- (L3-|xx) node[above,pos=0.5] {$\sigma_R'$};
\draw[|-|,dashed] (L1-|x1) -- (L1-|x2) node[above,pos=0.5] {$\pi_L$};
\draw[|-|,dashed] (L1-|x7) -- (L1-|x2) node[above,pos=0.5] {$\pi_R$};
\end{tikzpicture}
\end{center}
Then the projection is of the form $(\sigma_L' \circ \pi_L^{-1}) \pi (r+2) r (r+1)$, where $r$ is the minimal element of $\sigma_R'$ and $\pi$ is a subpermutation. This contains an occurrence of $312$.
A similar calculation shows that $(\sigma_L (n-1) (n+1) n, \sigma_L' n (n-1) \sigma_R' (n+1))$ cannot avoid $231$ and $312$ either, because the projection is of the form $\pi (r+1) (n+1) r$ for a subpermutation $\pi$, which contains an occurrence of $231$. Hence each instance of $\boldsymbol{\sigma}$ in this case contributes $2$ new elements in $S_{n+1}^2(231,312)$.
\item $(\sigma, \sigma')= (\sigma_L n (n-1) \sigma_R, \sigma_L' (n-1) n \sigma_R')$.
Similar to the previous case, $n(n-1)\sigma_R$ must be consecutively decreasing and $\sigma_R'$ must be empty. As in the previous cases, $(\sigma_L n (n-1) \sigma_R (n+1), \sigma_L' (n-1) n (n+1))$ avoids $231$ and $312$. Moreover, $(\sigma_L (n+1) n (n-1) \sigma_R, \sigma_L' (n-1) n (n+1))$ also avoids these patterns. To see this, we first evaluate the projection of $(\sigma_L n (n-1) \sigma_R, \sigma_L' (n-1) n)$.
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (2,0) coordinate (x2)
--++ (0.6,0) coordinate (x3)
--++ (1.0,0) coordinate (x4)
--++ (0.8,0) coordinate (x5)
--++ (2,0) coordinate (x6)
--++ (0.8,0) coordinate (x7)
--++ (0.8,0) coordinate (xx);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L1-|x1) -- (L1-|x2) node[above,pos=0.5] {$\sigma_L$};
\node at (L1-|x3) {$n$};
\node at (L1-|x4) {$n-1$};
\draw[|-|] (L1-|x5) -- (L1-|xx) node[above,pos=0.5] {$\sigma_R$};
\draw[|-|] (L2-|x1) -- (L2-|x6) node[above,pos=0.5] {$\sigma_L'$};
\node at (L2-|x7) {$n-1$};
\node at (L2-|xx) {$n$};
\draw[|-|,dashed] (L3-|x1) -- (L3-|x2) node[above,pos=0.5] {$\pi_L$};
\draw[|-|,dashed] (L3-|x6) -- (L3-|x2) node[above,pos=0.5] {$\pi_R$};
\node at (L3-|x4) { };
\end{tikzpicture}
\end{center}
Because the projection of $(\sigma_L n (n-1) \sigma_R, \sigma_L' (n-1) n)$ is of the form $(\pi_L \circ \sigma_L^{-1}) n (n-1) \mathrm{rev}(\pi_R)$, we conclude that $n(n-1)\mathrm{rev}(\pi_R)$ must be consecutively decreasing, since this projection must avoid $231$ and $312$. We evaluate the projection of $(\sigma_L (n+1) n (n-1) \sigma_R, \sigma_L' (n-1) n (n+1))$.
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (2,0) coordinate (x2)
--++ (0.8,0) coordinate (x3)
--++ (1.0,0) coordinate (x4)
--++ (1.0,0) coordinate (x5)
--++ (0.8,0) coordinate (x6)
--++ (0.6,0) coordinate (x7)
--++ (0.8,0) coordinate (x8)
--++ (1.0,0) coordinate (x9)
--++ (1.0,0) coordinate (xx);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L1-|x1) -- (L1-|x2) node[above,pos=0.5] {$\sigma_L$};
\node at (L1-|x3) {$n+1$};
\node at (L1-|x4) {$n$};
\node at (L1-|x5) {$n-1$};
\draw[|-|] (L1-|x6) -- (L1-|xx) node[above,pos=0.5] {$\sigma_R$};
\draw[|-|] (L2-|x1) -- (L2-|x7) node[above,pos=0.5] {$\sigma_L'$};
\node at (L2-|x8) {$n-1$};
\node at (L2-|x9) {$n$};
\node at (L2-|xx) {$n+1$};
\draw[|-|,dashed] (L3-|x1) -- (L3-|x2) node[above,pos=0.5] {$\pi_L$};
\draw[|-|,dashed] (L3-|x7) -- (L3-|x2) node[above,pos=0.5] {$\pi_R$};
\node at (L3-|x8) { };
\end{tikzpicture}
\end{center}
Then the projection of the $3$-permutation $(\sigma_L (n+1) n (n-1) \sigma_R, \sigma_L' (n-1) n (n+1))$ is of the form $(\pi_L \circ \sigma_L^{-1}) (n+1) n (n-1) \mathrm{rev}(\pi_R)$, which also avoids $231$ and $312$.
Now we show that inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ cannot produce a $3$-permutation that avoids $231$ and $312$. Using similar logic to the previous case, it is sufficient to show that $(\sigma_L (n+1) n (n-1) \sigma_R, \sigma_L' (n-1) (n+1) n)$ and $(\sigma_L n (n-1) \sigma_R (n+1), \sigma_L' (n-1) (n+1) n)$ do not avoid $231$ and $312$.
For the former $3$-permutation, we take the projection:
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (2,0) coordinate (x2)
--++ (0.8,0) coordinate (x3)
--++ (1.0,0) coordinate (x4)
--++ (1.0,0) coordinate (x5)
--++ (0.8,0) coordinate (x6)
--++ (0.6,0) coordinate (x7)
--++ (0.8,0) coordinate (x8)
--++ (1.2,0) coordinate (x9)
--++ (1.0,0) coordinate (xx);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L1-|x1) -- (L1-|x2) node[above,pos=0.5] {$\sigma_L$};
\node at (L1-|x3) {$n+1$};
\node at (L1-|x4) {$n$};
\node at (L1-|x5) {$n-1$};
\draw[|-|] (L1-|x6) -- (L1-|xx) node[above,pos=0.5] {$\sigma_R$};
\draw[|-|] (L2-|x1) -- (L2-|x7) node[above,pos=0.5] {$\sigma_L'$};
\node at (L2-|x8) {$n-1$};
\node at (L2-|x9) {$n+1$};
\node at (L2-|xx) {$n$};
\draw[|-|,dashed] (L3-|x1) -- (L3-|x2) node[above,pos=0.5] {$\pi_L$};
\draw[|-|,dashed] (L3-|x7) -- (L3-|x2) node[above,pos=0.5] {$\pi_R$};
\node at (L3-|x8) { };
\end{tikzpicture}
\end{center}
The projection is of the form $(\pi_L \circ \sigma_L^{-1}) n (n+1) (n-1) \mathrm{rev}(\pi_R)$, which contains an occurrence of $231$.
For the latter $3$-permutation, a similar argument shows that this projection contains an occurrence of $312$. Hence each instance of $\boldsymbol{\sigma}$ in this case contributes $2$ new elements in $S_{n+1}^2(231,312)$.
\item $(\sigma, \sigma')= (\sigma_L n (n-1) \sigma_R, \sigma_L' n (n-1) \sigma_R')$.
Then $n(n-1)\sigma_R$ and $n(n-1)\sigma_R'$ must be consecutively decreasing. We claim $|\sigma_R| = |\sigma_R'|$. For the sake of contradiction, suppose that $|\sigma_R'|>|\sigma_R|$. Then the $3$-permutation has the following form:
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (2,0) coordinate (x2)
--++ (0.6,0) coordinate (x3)
--++ (1,0) coordinate (x4)
--++ (0.8,0) coordinate (x5)
--++ (2,0) coordinate (x6)
--++ (0.6,0) coordinate (x7)
--++ (1,0) coordinate (x8)
--++ (0.8,0) coordinate (x9)
--++ (2.5,0) coordinate (xx);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L1-|x1) -- (L1-|x6) node[above,pos=0.5] {$\sigma_L$};
\node at (L1-|x7) {$n$};
\node at (L1-|x8) {$n-1$};
\draw[|-|] (L1-|x9) -- (L1-|xx) node[above,pos=0.5] {$\sigma_R$};
\draw[|-|] (L2-|x1) -- (L2-|x2) node[above,pos=0.5] {$\sigma_L'$};
\node at (L2-|x3) {$n$};
\node at (L2-|x4) {$n-1$};
\draw[|-|] (L2-|x5) -- (L2-|xx) node[above,pos=0.5] {$\sigma_R'$};
\end{tikzpicture}
\end{center}
The projection is of the form $\pi_1 n \pi_2 r \pi_3 (r+c)$, where $r$ is the minimal element of $\sigma_R'$, $c$ is some positive integer, and $\pi_1$, $\pi_2$, $\pi_3$ are subpermutations. Hence the projection contains an occurrence of $312$, a contradiction.
A similar argument holds for $|\sigma_R'|<|\sigma_R|$, so $\sigma_R$ and $\sigma_R'$ must have the same size. Moreover, because both $n(n-1)\sigma_R$ and $n(n-1)\sigma_R'$ are consecutively decreasing, $\sigma_R = \sigma_R'$.
We immediately see that $(\sigma_L n (n-1) \sigma_R (n+1), \sigma_L' n (n-1) \sigma_R' (n+1))$ is in $S_{n+1}^2 (231,312)$.
Moreover, $(\sigma_L (n+1) n (n-1) \sigma_R, \sigma_L' (n+1) n (n-1) \sigma_R')$ also avoids $231$ and $312$, because the projection is of the form $(\sigma_L' \circ \sigma_L^{-1}) (\sigma_R' \circ \sigma_R^{-1}) (n-1) n (n+1)$.
Now we show that inserting the maximal element $n+1$ anywhere else cannot produce a $3$-permutation that avoids $231$ and $312$. In fact, $n+1$ can only be inserted either at the end of $\sigma$ and $\sigma'$ or left-adjacent to $n$. If $n+1$ is inserted anywhere in $\sigma_L$ or $\sigma_L'$, then there would be an occurrence of $312$. If $n+1$ is inserted anywhere to the right of $n$ and not at the end of the permutation, then there would be an occurrence of $231$.
So we show that the $3$-permutations $(\sigma_L (n+1) n (n-1) \sigma_R, \sigma_L' n (n-1) \sigma_R' (n+1))$ and $(\sigma_L n (n-1) \sigma_R (n+1), \sigma_L' (n+1) n (n-1) \sigma_R')$ cannot avoid $231$ and $312$.
For the first $3$-permutation, the projection looks like
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (2,0) coordinate (x2)
--++ (0.8,0) coordinate (x3)
--++ (0.8,0) coordinate (x4)
--++ (0.8,0) coordinate (x5)
--++ (0.8,0) coordinate (x6)
--++ (0.8,0) coordinate (x7)
--++ (0.8,0) coordinate (xx);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L1-|x1) -- (L1-|x2) node[above,pos=0.5] {$\sigma_L$};
\node at (L1-|x5) {$n-1$};
\node at (L1-|x4) {$n$};
\node at (L1-|x3) {$n+1$};
\draw[|-|] (L1-|x6) -- (L1-|xx) node[above,pos=0.5] {$\sigma_R$};
\draw[|-|] (L2-|x1) -- (L2-|x2) node[above,pos=0.5] {$\sigma_L'$};
\node at (L2-|x4) {$n-1$};
\node at (L2-|x3) {$n$};
\node at (L2-|xx) {$n+1$};
\draw[|-|] (L2-|x5) -- (L2-|x7) node[above,pos=0.5] {$\sigma_R'$};
\end{tikzpicture}
\end{center}
Evaluating the projection gives the form $(\sigma_L' \circ \sigma_L^{-1})(n+1) \pi (n-1)(n)$ for a subpermutation $\pi$, which contains $312$.
A similar argument shows that for the second $3$-permutation, the projection is of the form $(\sigma_L' \circ \sigma_L^{-1}) \pi (n-1)n(n+1)r$, where $r$ is the minimal element of $\sigma_R'$ and $\pi$ is a subpermutation. This contains $231$.
So each instance of $\boldsymbol{\sigma}$ in this case contributes $2$ new elements in $S_{n+1}^2(231,312)$.
\end{enumerate}
Now we show that $3$-permutations avoiding $231$ and $312$ must be one of the forms above. We have only one form left to consider, where exactly one of $\sigma_R$, $\sigma_R'$ is empty. Let $\sigma_R$ be empty and $\sigma_R'$ be nonempty. In particular, $(\sigma, \sigma')= (\sigma_L n, \sigma_L' n \sigma_R')$.
Now $n-1$ must be adjacent to $n$ in $\sigma'$. If $\sigma' = \sigma_L' (n-1) n \sigma_R'$, then $\sigma_R'$ must be empty to avoid an occurrence of $231$. Then Case 1 covers this. If $\sigma' = \sigma_L' n (n-1) \sigma_R'$, then we show that $n-1$ is adjacent to $n$ in $\sigma$. So suppose, for the sake of contradiction, that this is not the case. First, $n(n-1)\sigma_R'$ must be consecutively decreasing. Taking the projection $\sigma' \circ \sigma^{-1}$, we conclude that it is of the form $\pi_L n \pi_R k r$, where $k \neq r+1$ and $r$ is the minimal element in $\sigma_R'$. Now we consider where the element $r+1$ is in the permutation. If $r+1$ is in $\pi_L$, then there is an occurrence of $231$. If $r+1$ is in $\pi_R$, then if $k>r+1$, then there is an occurrence of $231$, and if $k<r+1$, then there is an occurrence of $312$. Hence $n-1$ must be adjacent to $n$ in $\sigma$, and a similar argument from Case 3 covers this case.
A similar argument also holds for the case in which $\sigma_R$ is nonempty and $\sigma_R'$ is empty. So we see that for every $3$-permutation $\boldsymbol{\sigma} = (\sigma, \sigma')$ in $S_n^2(231,312)$, inserting a maximal element $n+1$ onto the end of both $\sigma$ and $\sigma'$ always yields a $3$-permutation in $S_{n+1}^2(231,312)$; moreover, inserting a maximal element such that the parities of the two largest elements in $\sigma$ and $\sigma'$ are preserved also always yields another $3$-permutation.
This contributes $2a_n$ different $3$-permutations to $S_{n+1}^2(231,312)$. In the case that $\boldsymbol{\sigma}$ is of the form in Case 1 (where $\sigma$ and $\sigma'$ each end with the maximal element $n$), each $\boldsymbol{\sigma}$ constructs two elements of $S_{n+1}^2(231,312)$ in addition to the elements generated above, so this case contributes an additional $2a_{n-1}$ elements to $S_{n+1}^2(231,312)$. Hence \begin{align*}
a_{n+1} = 2a_n + 2a_{n-1}. & \qedhere
\end{align*}
\end{proof}
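The recurrence can be cross-checked by brute force for small $n$. The sketch below is illustrative rather than part of the proof: it assumes the convention that a $3$-permutation $(\sigma, \sigma')$ avoids a pattern exactly when $\sigma$, $\sigma'$, and the projection $\sigma' \circ \sigma^{-1}$ all avoid it, and the helper names (`std`, `avoids`, `count`) are our own.

```python
from itertools import combinations, permutations

def std(seq):
    """Rank-standardize seq: the smallest entry becomes 1, the next 2, etc."""
    order = sorted(seq)
    return tuple(order.index(x) + 1 for x in seq)

def avoids(seq, pats):
    """True if no subsequence of seq is order-isomorphic to any pattern in pats."""
    return all(std(sub) != pat for pat in pats
               for sub in combinations(seq, len(pat)))

def count(n, pats):
    """|S_n^2(pats)|: pairs (sigma, sigma') of n-permutations such that
    sigma, sigma', and the projection sigma' o sigma^{-1} all avoid pats."""
    singles = [p for p in permutations(range(1, n + 1)) if avoids(p, pats)]
    total = 0
    for p in singles:
        pos = [0] * n                       # pos[v-1] = 0-based index of value v in p
        for i, v in enumerate(p):
            pos[v - 1] = i
        for q in singles:
            proj = tuple(q[j] for j in pos)  # one-line form of sigma' o sigma^{-1}
            if avoids(proj, pats):
                total += 1
    return total

terms = [count(n, [(2, 3, 1), (3, 1, 2)]) for n in range(1, 6)]
print(terms)
```

Starting from $a_1 = 1$ and $a_2 = 4$, the computed terms satisfy $a_{n+1} = 2a_n + 2a_{n-1}$.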
\begin{theorem}\label{231,321}
Let $a_n = |S^2_n(231,321)|$. Then $a_{n+1} = 4 \cdot 3^{n-1}$ for $n \geq 1$ (with $a_1 = 1$), which corresponds to the OEIS sequence \href{http://oeis.org/A003946}{A003946}.
\end{theorem}
\begin{proof}
Let $\boldsymbol{\sigma} = (\sigma, \sigma') \in S_n^2(231,321)$ and let $\boldsymbol{\sigma}$ be of the form $(\sigma_L 1 \sigma_R, \sigma_L' 1 \sigma_R')$. Note that $\sigma_L$ and $\sigma_L'$ either contain one element or are empty.
We insert a minimal element $0$ into the permutation and standardize the new permutation. The element $0$ must be inserted adjacent to $1$ or at the front of both $\sigma$ and $\sigma'$. We then have the following cases:
\begin{enumerate}
\item $(\sigma, \sigma') = (1 \sigma_R, 1 \sigma_R')$.
We can see that $(0 \ 1 \ \sigma_R, 0 \ 1 \ \sigma_R')$, $(0 \ 1 \ \sigma_R, 1 \ 0 \ \sigma_R')$, $(1 \ 0 \ \sigma_R, 0 \ 1 \ \sigma_R')$, and $(1 \ 0 \ \sigma_R, 1 \ 0 \ \sigma_R')$ all avoid $231$ and $321$.
So a $3$-permutation $\boldsymbol{\sigma}$ in this case constructs $4$ distinct $3$-permutations in $S_{n+1}^2 (231,321)$.
\item $(\sigma, \sigma') = (1 \sigma_R, \ell 1 \sigma_R')$ for some integer $\ell$.
Similar to the previous case, we see that $(0 \ 1 \ \sigma_R, 0 \ell 1 \sigma_R')$, $(0 \ 1 \ \sigma_R, \ell 0 \ 1 \ \sigma_R')$, $(1 \ 0 \ \sigma_R, 0 \ell 1 \sigma_R')$, and $(1 \ 0 \ \sigma_R, \ell 0 \ 1 \ \sigma_R')$ all avoid $231$ and $321$.
A $3$-permutation $\boldsymbol{\sigma}$ in this case also constructs $4$ distinct $3$-permutations in $S_{n+1}^2 (231,321)$.
\item $(\sigma, \sigma') = (\ell 1 \sigma_R, 1 \sigma_R')$ for some integer $\ell$.
Appending a minimal element $0$ onto the front of $\sigma$ and $\sigma'$ still avoids $231$ and $321$. In particular, $(0 \ell 1 \sigma_R, 0 \ 1 \ \sigma_R')$, as well as $(\ell 0 \ 1 \ \sigma_R, 1 \ 0 \ \sigma_R')$, avoids these patterns.
Now we show that inserting $0$ anywhere else cannot produce a $3$-permutation avoiding these patterns. In particular, note that $(0 \ell 1 \sigma_R, 1 \ 0 \ \sigma_R')$ and $(\ell 0 \ 1 \ \sigma_R, 0 \ 1 \ \sigma_R')$ cannot avoid $231$ and $321$. For both $3$-permutations, the projection is of the form $1 \pi_L 0 \pi_R$ for subpermutations $\pi_L$ and $\pi_R$ (where $\pi_L$ is nonempty). This contains an instance of $231$.
Hence each $3$-permutation $\boldsymbol{\sigma}$ in this case constructs $2$ different elements in $S_{n+1}^2 (231,321)$.
\item $(\sigma, \sigma') = (\ell_L 1 \sigma_R, \ell_R' 1 \sigma_R')$ for integers $\ell_L$, $\ell_R'$.
As in the previous cases, note that $(0 \ell_L 1 \sigma_R, 0 \ell_R' 1 \sigma_R')$, as well as $(\ell_L 0 \ 1 \ \sigma_R, \ell_R' 0 \ 1 \ \sigma_R')$, avoids $231$ and $321$.
Now we show that inserting $0$ anywhere else in $\boldsymbol{\sigma}$ cannot produce a $3$-permutation avoiding $231$ and $321$. In particular, we show that $(0 \ell_L 1 \sigma_R, \ell_R' 0 \ 1 \ \sigma_R')$ and $(\ell_L 0 \ 1 \ \sigma_R, 0 \ell_R' 1 \sigma_R')$ cannot avoid $231$ and $321$.
For both $3$-permutations, the projection is $\ell_R' 1 \pi_R$ for some subpermutation $\pi_R$, which contains an instance of $321$ since $\pi_R$ must contain the element $0$. Hence each $3$-permutation $\boldsymbol{\sigma}$ in this case constructs $2$ distinct elements in $S_{n+1}^2 (231,321)$.
\end{enumerate}
We claim that in $S_n^2(231,321)$, exactly half of the elements $\boldsymbol{\sigma} = (\sigma, \sigma')$ satisfy $\sigma(1) = 1$ after standardization. The base case can be seen in $S_2^2(231,321)$. For the inductive step, assume the claim holds for $S_n^2(231,321)$; we show it for $S_{n+1}^2(231,321)$. In each case above, exactly half of the $3$-permutations constructed have the property $\sigma(1)= 1$ and the other half satisfy $\sigma(1) \neq 1$ after standardization, so by induction, exactly half of the elements in $S_{n+1}^2(231,321)$ satisfy $\sigma(1) = 1$.
Then when $\sigma(1) = 1$, we are in Case 1 or Case 2, which contribute $4$ elements in $S_{n+1}^2(231,321)$. When $\sigma(1) \neq 1$, we are in Case 3 or Case 4, which contribute $2$ elements in $S_{n+1}^2(231,321)$.
Thus we conclude that $$a_{n+1} = \frac{a_n}{2}\cdot 4 + \frac{a_n}{2} \cdot 2 = 3a_n.$$
We can see that $a_2 = 4$, so we conclude that \begin{align*}
a_{n+1} = 4 \cdot 3^{n-1}. & \qedhere
\end{align*}
\end{proof}
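As a sanity check, the closed form can be reproduced by brute force for small $n$. The sketch below is illustrative rather than part of the proof; it assumes the convention that a $3$-permutation $(\sigma, \sigma')$ avoids a pattern exactly when $\sigma$, $\sigma'$, and the projection $\sigma' \circ \sigma^{-1}$ all avoid it, and the helper names are ours.

```python
from itertools import combinations, permutations

def std(seq):
    """Rank-standardize seq: the smallest entry becomes 1, the next 2, etc."""
    order = sorted(seq)
    return tuple(order.index(x) + 1 for x in seq)

def avoids(seq, pats):
    """True if no subsequence of seq is order-isomorphic to any pattern in pats."""
    return all(std(sub) != pat for pat in pats
               for sub in combinations(seq, len(pat)))

def count(n, pats):
    """|S_n^2(pats)| under the stated projection convention."""
    singles = [p for p in permutations(range(1, n + 1)) if avoids(p, pats)]
    total = 0
    for p in singles:
        pos = [0] * n                       # pos[v-1] = 0-based index of value v in p
        for i, v in enumerate(p):
            pos[v - 1] = i
        for q in singles:
            proj = tuple(q[j] for j in pos)  # one-line form of sigma' o sigma^{-1}
            if avoids(proj, pats):
                total += 1
    return total

terms = [count(n, [(2, 3, 1), (3, 2, 1)]) for n in range(1, 6)]
print(terms)
```

The computed terms exhibit the growth $a_{n+1} = 3a_n$ for $n \geq 2$ established in the proof.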
This allows us to prove all the conjectures that Bonichon and Morel \cite{bonichon2022baxter} have made regarding $3$-permutations avoiding two patterns of size $3$. However, one class of $3$-permutations has yet to be classified, which we now enumerate. We begin with a well-known lemma.
\begin{lemma}\label{inverselemma}
Let $\sigma$ be a permutation and $\pi$ be an involution. Then $\sigma$ avoids $\pi$ if and only if $\sigma^{-1}$ avoids $\pi$.
\end{lemma}
Because $132$ and $213$ are both involutions, $\sigma$ avoids $132$ if and only if $\sigma^{-1}$ avoids $132$. The same reasoning holds for the pattern $213$.
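Since $132$ and $213$ are their own inverses, Lemma \ref{inverselemma} can also be checked exhaustively for small sizes. The sketch below is illustrative only, and the helper names are our own.

```python
from itertools import combinations, permutations

def std(seq):
    """Rank-standardize seq: the smallest entry becomes 1, the next 2, etc."""
    order = sorted(seq)
    return tuple(order.index(x) + 1 for x in seq)

def contains(seq, pat):
    """True if some subsequence of seq is order-isomorphic to pat."""
    return any(std(sub) == pat for sub in combinations(seq, len(pat)))

def inverse(p):
    """One-line notation of the inverse permutation of p."""
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v - 1] = i + 1
    return tuple(inv)

# 132 and 213 are involutions, so avoidance transfers to the inverse.
checked = 0
for n in range(1, 7):
    for p in permutations(range(1, n + 1)):
        for pat in [(1, 3, 2), (2, 1, 3)]:
            assert contains(p, pat) == contains(inverse(p), pat)
            checked += 1
print(checked)
```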
\begin{theorem}\label{132,213}
Let $a_n = |S_n^2(132,213)|$. Then $a_n$ follows the recurrence relation $$a_{n+1} = a_n + 3 \cdot 2^{n-1} +2(n-1)$$ with the initial term $a_1 = 1$.
\end{theorem}
\begin{proof}
Let $\boldsymbol{\sigma} = (\sigma, \sigma') \in S_n^2(132,213)$ and let $\boldsymbol{\sigma}$ be of the form $(\sigma_L n \sigma_R, \sigma_L' n \sigma_R')$.
Note that $\sigma_L n$ and $\sigma_L' n$ are increasing; otherwise there would be an occurrence of $213$. Moreover, they must be consecutively increasing; if not, there would be an occurrence of $132$.
Adding a maximal element $n+1$ right-adjacent to $n$ in both $\sigma$ and $\sigma'$ always produces a $3$-permutation in $S_{n+1}^2(132,213)$. To see this, suppose that $|\sigma_L| > |\sigma_L'|$. Then the projection $\sigma' \circ \sigma^{-1}$ would look like
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (2,0) coordinate (x2)
--++ (0.6,0) coordinate (x3)
--++ (0.6,0) coordinate (x4)
--++ (0.8,0) coordinate (x5)
--++ (2,0) coordinate (x6)
--++ (0.6,0) coordinate (x7)
--++ (0.6,0) coordinate (x8)
--++ (0.8,0) coordinate (x9)
--++ (2.5,0) coordinate (xx);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L1-|x1) -- (L1-|x6) node[above,pos=0.5] {$\sigma_L$};
\node at (L1-|x7) {$n$};
\draw[|-|] (L1-|x8) -- (L1-|xx) node[above,pos=0.5] {$\sigma_R$};
\draw[|-|] (L2-|x1) -- (L2-|x2) node[above,pos=0.5] {$\sigma_L'$};
\node at (L2-|x3) {$n$};
\draw[|-|] (L2-|x4) -- (L2-|xx) node[above,pos=0.5] {$\sigma_R'$};
\draw[|-|,dashed] (L3-|x1) -- (L3-|x4) node[above,pos=0.5] {$\pi_L$};
\draw[|-|,dashed] (L3-|xx) -- (L3-|x8) node[above,pos=0.5] {$\pi_R$};
\draw[|-|,dashed] (L3-|x4) -- (L3-|x8) node[above,pos=0.5] {$\pi_M$};
\end{tikzpicture}
\end{center}
This has the form $(\pi_R \circ \sigma_R^{-1}) \pi_L \pi_M$, where $\pi_L$ is consecutively increasing and ends with $n$. This must avoid $132$ and $213$.
Now consider $(\sigma_1, \sigma_2) = (\sigma_L n (n+1) \sigma_R, \sigma_L' n (n+1) \sigma_R')$. The projection would look like
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (2,0) coordinate (x2)
--++ (0.6,0) coordinate (x3)
--++ (1,0) coordinate (x4)
--++ (0.8,0) coordinate (x5)
--++ (2,0) coordinate (x6)
--++ (0.6,0) coordinate (x7)
--++ (1,0) coordinate (x8)
--++ (0.8,0) coordinate (x9)
--++ (2.5,0) coordinate (xx);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L1-|x1) -- (L1-|x6) node[above,pos=0.5] {$\sigma_L$};
\node at (L1-|x7) {$n$};
\node at (L1-|x8) {$n+1$};
\draw[|-|] (L1-|x9) -- (L1-|xx) node[above,pos=0.5] {$\sigma_R$};
\draw[|-|] (L2-|x1) -- (L2-|x2) node[above,pos=0.5] {$\sigma_L'$};
\node at (L2-|x3) {$n$};
\node at (L2-|x4) {$n+1$};
\draw[|-|] (L2-|x5) -- (L2-|xx) node[above,pos=0.5] {$\sigma_R'$};
\draw[|-|,dashed] (L3-|x1) -- (L3-|x4) node[above,pos=0.5] {$\pi_L$};
\draw[|-|,dashed] (L3-|xx) -- (L3-|x9) node[above,pos=0.5] {$\pi_R$};
\draw[|-|,dashed] (L3-|x4) -- (L3-|x9) node[above,pos=0.5] {$\pi_M$};
\end{tikzpicture}
\end{center}
This has the form $(\pi_R \circ \sigma_R^{-1}) \pi_L (n+1) \pi_M$, which still avoids $132$ and $213$. The case where $|\sigma_L| = |\sigma_L'|$ follows as well. To handle the case $|\sigma_L| < |\sigma_L'|$, we utilize Lemma \ref{inverselemma}. We have shown that if $\sigma$, $\sigma'$, and $\sigma' \circ \sigma^{-1}$ avoid $132$ and $213$ and $|\sigma_L| > |\sigma_L'|$, then $\sigma_2 \circ \sigma_1^{-1}$ also avoids $132$ and $213$. So it follows that if $\sigma'$, $\sigma$, and $\sigma \circ \sigma'^{-1}$ avoid $132$ and $213$ and $|\sigma_L'| > |\sigma_L|$, then $\sigma_1 \circ \sigma_2^{-1}$ also avoids $132$ and $213$. But Lemma \ref{inverselemma} shows that $\sigma_1 \circ \sigma_2^{-1}$ avoiding $132$ and $213$ implies that $\sigma_2 \circ \sigma_1^{-1}$ also avoids $132$ and $213$.
Hence inserting a maximal element $n+1$ right-adjacent to $n$ in both $\sigma$ and $\sigma'$ contributes $a_{n}$ elements to $S^2_{n+1}(132,213)$.
In addition, we have the following cases:
\begin{enumerate}
\item $\sigma = \sigma'$.
Then $((n+1) \sigma, (n+1) \sigma')$ also avoids $132$ and $213$.
Now we show that inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ does not yield an element of $S_{n+1}^2(132,213)$. Note that $n+1$ must be inserted either at the beginning or right-adjacent to $n$ in $\sigma$ and $\sigma'$. If we insert $n+1$ to the left of $n$ (but not at the beginning), then there is an occurrence of $132$. Similarly, if we insert $n+1$ to the right of $n$ (but not adjacent to $n$), then there is an occurrence of $213$.
Then we show that $((n+1)\sigma, \sigma_L' n (n+1) \sigma_R')$ and $(\sigma_L n (n+1) \sigma_R, (n+1) \sigma')$ cannot avoid $132$ or $213$. To see that the first $3$-permutation cannot avoid $132$ or $213$, its projection is of the form $1\pi(n+1)\ell$, where $\ell$ is the first element in $\sigma_L'$ and $\pi$ is a subpermutation. This contains an occurrence of $132$. A similar argument shows that the projection of the latter $3$-permutation also contains $213$. The exception is when $\sigma = \sigma' = \Id_n$, in which case $((n+1)\Id_n, \Id_n (n+1))$ and $(\Id_n (n+1), (n+1) \Id_n)$ both avoid $132$ and $213$.
Since Simion and Schmidt \cite{simion1985restricted} showed there are $2^{n-1}$ possible permutations that avoid $132$ and $213$ with size $n$, this contributes an additional $2^{n-1} + 2$ elements in $S_{n+1}^2(132,213)$.
\item $\sigma = \Id_n$ and $\sigma' \neq \Id_n$.
We note that $(\sigma (n+1), (n+1) \sigma')$ avoids $132$ and $213$. In the special case where $\sigma_R'$ is consecutively increasing, $((n+1) \sigma, \sigma_L' n (n+1) \sigma_R')$ is also an element in $S_{n+1}^2(132,213)$.
Now we show that inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ cannot yield an element of $S_{n+1}^2(132,213)$. We first show that $(\sigma (n+1), \sigma' (n+1))$ and $((n+1)\sigma, (n+1) \sigma')$ cannot avoid $132$ and $213$. In both cases the projection evaluates to $\sigma' (n+1)$, which contains an occurrence of $213$ because $\sigma'$ is not the identity and hence must contain an occurrence of $21$.
Now we show that $((n+1) \sigma, \sigma' (n+1))$ cannot avoid $132$ and $213$. Let us write $\sigma' = \ell \pi_2$. Note that $\ell>1$: if $\sigma'$ began with $1$, avoidance of $132$ would force $\sigma'$ to be the identity.
Taking the projection gives us the form $\pi_2 (n+1) \ell$, which contains an occurrence of $132$.
Now suppose $\sigma_R'$ is not consecutively increasing. We wish to show that $((n+1) \sigma, \sigma_L' n (n+1) \sigma_R')$ cannot avoid $132$ and $213$. Since $\sigma'$ avoids $132$, every element in $\sigma_R'$ is smaller than every element in $\sigma_L'$, so an increasing $\sigma_R'$ would automatically be consecutively increasing; hence $\sigma_R'$ contains an instance of $21$, and taking the projection gives an occurrence of $213$.
Simion and Schmidt \cite{simion1985restricted} have shown that there are $2^{n-1}$ different $\sigma'$ that avoid $132$ and $213$, so $(\sigma (n+1), (n+1) \sigma')$ contributes $2^{n-1}-1$ different elements to $S_{n+1}^2(132,213)$. Moreover, the special case $((n+1) \sigma, \sigma_L' n (n+1) \sigma_R')$ contributes $n-1$ elements to $S_{n+1}^2(132,213)$.
\item $\sigma \neq \Id_n$ and $\sigma' = \Id_n$.
This case also contributes $2^{n-1}+n-2$ elements in $S_{n+1}^2(132,213)$. This is a consequence of Lemma \ref{inverselemma} and the reasoning discussed above.
\end{enumerate}
Now we show that nothing else can contribute to $S_{n+1}^2(132,213)$. Let $\boldsymbol{\sigma} = (\sigma_L n \sigma_R, \sigma_L' n \sigma_R')$ and assume that $\sigma_R$ and $\sigma_R'$ are nonempty and that $\sigma \neq \sigma'$. This implies that $\sigma$ and $\sigma'$ cannot be the identity permutation.
Inserting $n+1$ at the beginning of $\sigma$ and $\sigma'$ gives the projection $(\sigma' \circ \sigma^{-1}) (n+1)$. Because $\sigma \neq \sigma'$, the projection $\sigma' \circ \sigma^{-1}$ cannot be the identity and hence contains an occurrence of $21$. So the projection contains an occurrence of $213$.
We show that $((n+1) \sigma, \sigma_L' n (n+1) \sigma_R')$ cannot avoid $132$ and $213$ either. To see this, let $|\sigma_L| \geq |\sigma_L'|$. Then we evaluate the projection.
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (1.1,0) coordinate (x2)
--++ (2,0) coordinate (x3)
--++ (0.6,0) coordinate (x4)
--++ (0.8,0) coordinate (x5)
--++ (0.8,0) coordinate (x6)
--++ (0.6,0) coordinate (x7)
--++ (0.6,0) coordinate (x8)
--++ (0.6,0) coordinate (x9)
--++ (2.5,0) coordinate (xx);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L1-|x2) -- (L1-|x7) node[above,pos=0.5] {$\sigma_L$};
\node at (L1-|x8) {$n$};
\node at (L1-|x1) { \ \ \ \ \ $n+1$};
\draw[|-|] (L1-|x9) -- (L1-|xx) node[above,pos=0.5] {$\sigma_R$};
\draw[|-|] (L2-|x1) -- (L2-|x3) node[above,pos=0.5] {$\sigma_L'$};
\node at (L2-|x4) {$n$};
\node at (L2-|x5) {$n+1$};
\draw[|-|] (L2-|x6) -- (L2-|xx) node[above,pos=0.5] {$\sigma_R'$};
\end{tikzpicture}
\end{center}
This is of the form $r \pi_L (n+1) \pi_R \ell$, where $\ell$ is an element in $\sigma_L'$ (or $n$ if $\sigma_L'$ is empty), $r$ is an element in $\sigma_R'$, and $\pi_L$ and $\pi_R$ are subpermutations. Because elements in $\sigma_L'$ are greater than elements in $\sigma_R'$, the projection contains an occurrence of $132$. Lemma \ref{inverselemma} shows that when $|\sigma_L| \leq |\sigma_L'|$, the projection cannot avoid $132$ either.
Finally we show that $( \sigma_L n (n+1) \sigma_R, (n+1) \sigma')$ cannot avoid $132$ and $213$. Let $|\sigma_L| \geq |\sigma_L'|$. Then we evaluate the projection.
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (1.1,0) coordinate (x2)
--++ (2,0) coordinate (x3)
--++ (0.6,0) coordinate (x4)
--++ (0.6,0) coordinate (x5)
--++ (2,0) coordinate (x6)
--++ (0.6,0) coordinate (x7)
--++ (1,0) coordinate (x8)
--++ (0.8,0) coordinate (x9)
--++ (2.5,0) coordinate (xx);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L1-|x1) -- (L1-|x6) node[above,pos=0.5] {$\sigma_L$};
\node at (L1-|x7) {$n$};
\node at (L1-|x8) {$n+1$};
\draw[|-|] (L1-|x9) -- (L1-|xx) node[above,pos=0.5] {$\sigma_R$};
\draw[|-|] (L2-|x2) -- (L2-|x3) node[above,pos=0.5] {$\sigma_L'$};
\node at (L2-|x4) {$n$};
\node at (L2-|x1) {\ \ \ \ \ $n+1$};
\draw[|-|] (L2-|x5) -- (L2-|xx) node[above,pos=0.5] {$\sigma_R'$};
\end{tikzpicture}
\end{center}
This is of the form $(\sigma_R' \circ \sigma_R^{-1}) (n+1) \pi_L n \pi_R$, where $\pi_L$ and $\pi_R$ are subpermutations. Because $\sigma_R' \circ \sigma_R^{-1}$ must contain the minimal element $1$, the projection contains an occurrence of $132$, and Lemma \ref{inverselemma} shows that the result holds when $|\sigma_L| \leq |\sigma_L'|$ as well.
Because nothing else can contribute elements to $S_{n+1}^2(132,213)$, we conclude that \begin{align*}
a_{n+1} = a_n + 3 \cdot 2^{n-1} + 2(n-1). & \qedhere
\end{align*}
\end{proof}
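The recurrence can again be cross-checked by brute force for small $n$. The sketch below is illustrative rather than part of the proof: it assumes the convention that a $3$-permutation $(\sigma, \sigma')$ avoids a pattern exactly when $\sigma$, $\sigma'$, and the projection $\sigma' \circ \sigma^{-1}$ all avoid it, and the helper names are our own.

```python
from itertools import combinations, permutations

def std(seq):
    """Rank-standardize seq: the smallest entry becomes 1, the next 2, etc."""
    order = sorted(seq)
    return tuple(order.index(x) + 1 for x in seq)

def avoids(seq, pats):
    """True if no subsequence of seq is order-isomorphic to any pattern in pats."""
    return all(std(sub) != pat for pat in pats
               for sub in combinations(seq, len(pat)))

def count(n, pats):
    """|S_n^2(pats)| under the stated projection convention."""
    singles = [p for p in permutations(range(1, n + 1)) if avoids(p, pats)]
    total = 0
    for p in singles:
        pos = [0] * n                       # pos[v-1] = 0-based index of value v in p
        for i, v in enumerate(p):
            pos[v - 1] = i
        for q in singles:
            proj = tuple(q[j] for j in pos)  # one-line form of sigma' o sigma^{-1}
            if avoids(proj, pats):
                total += 1
    return total

terms = [count(n, [(1, 3, 2), (2, 1, 3)]) for n in range(1, 6)]
print(terms)
```

The computed terms satisfy $a_{n+1} = a_n + 3 \cdot 2^{n-1} + 2(n-1)$ starting from $a_1 = 1$.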
These theorems allow us to enumerate all $3$-permutations avoiding two patterns of size $3$ that correspond to existing OEIS sequences. Moreover, the sequence in Theorem \ref{132,213} does not yet appear in the OEIS database \cite{oeis}; with it, the classification and enumeration of all $3$-permutations avoiding two patterns of size $3$ is complete.
\section{Enumeration of Pattern Avoidance Classes of Size 3}
Having enumerated all $3$-permutations avoiding two patterns, we now turn our attention to enumerating $3$-permutations avoiding three patterns, as Simion and Schmidt \cite{simion1985restricted} have done with classic permutations. In Table \ref{triple avoidance}, we extend Bonichon and Morel's \cite{bonichon2022baxter} conjectures to $3$-permutations avoiding three patterns of size $3$.
\begin{table}[h]
\centering
\begin{tabular}{|c | c | c | c | c|}
\hline
Patterns & \#TWE & Sequence & OEIS Sequence & Comment \\ [0.5ex]
\hline\hline
$123,132,213$ & 3 & $1,4,2,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$123,132,231$ & 4 & $1,4,3,0,0,\dots$ & & Terminates after $n=3$ \\
\hline
$123,231,312$ & 1 & $1,4,0,0,0, \dots$ & & Terminates after $n=2$ \\
\hline
$123,231,321$ & 2 & $1,4,3,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$132,213,312$ & 2 & $1,4,6,8,10, \dots$ & \href{http://oeis.org/A005843}{A005843} & Theorem \ref{132,213,312} \\
\hline
$132,213,321$ & 1 & $1,4,9,16,25, \dots$ & \href{http://oeis.org/A000290}{A000290} & Theorem \ref{132,213,321} \\
\hline
$132,231,312$ & 2 & $1,4,7,10,13, \dots$ & \href{http://oeis.org/A016777}{A016777} & Theorem \ref{132,231,312} \\
\hline
$213, 231, 321$ & 4 & $1,4,6,8,10, \dots$ & \href{http://oeis.org/A005843}{A005843} & Theorem \ref{213,231,321} \\
\hline
$231,312,321$ & 1 & $1,4,7,19,40, \dots$ & \href{http://oeis.org/A006130}{A006130} & Theorem \ref{231,312,321} \\
\hline
\end{tabular}
\caption{Sequences of $3$-permutations avoiding three permutations of size $3$. The second column indicates the number of trivially Wilf-equivalent patterns.}
\label{triple avoidance}
\end{table}
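The rows of Table \ref{triple avoidance} can be spot-checked by brute force. The sketch below is illustrative only; it assumes the convention that a $3$-permutation $(\sigma, \sigma')$ avoids a pattern when $\sigma$, $\sigma'$, and the projection $\sigma' \circ \sigma^{-1}$ all avoid it, and the helper names are ours. It computes the first four terms for the terminating row $123,231,312$ and for the row $132,231,312$.

```python
from itertools import combinations, permutations

def std(seq):
    """Rank-standardize seq: the smallest entry becomes 1, the next 2, etc."""
    order = sorted(seq)
    return tuple(order.index(x) + 1 for x in seq)

def avoids(seq, pats):
    """True if no subsequence of seq is order-isomorphic to any pattern in pats."""
    return all(std(sub) != pat for pat in pats
               for sub in combinations(seq, len(pat)))

def count(n, pats):
    """|S_n^2(pats)| under the stated projection convention."""
    singles = [p for p in permutations(range(1, n + 1)) if avoids(p, pats)]
    total = 0
    for p in singles:
        pos = [0] * n                       # pos[v-1] = 0-based index of value v in p
        for i, v in enumerate(p):
            pos[v - 1] = i
        for q in singles:
            proj = tuple(q[j] for j in pos)  # one-line form of sigma' o sigma^{-1}
            if avoids(proj, pats):
                total += 1
    return total

terms_a = [count(n, [(1, 2, 3), (2, 3, 1), (3, 1, 2)]) for n in range(1, 5)]
terms_b = [count(n, [(1, 3, 2), (2, 3, 1), (3, 1, 2)]) for n in range(1, 5)]
print(terms_a, terms_b)
```

The first list matches the terminating sequence $1,4,0,0,\dots$ of the table, and the second matches $1,4,7,10,\dots$ (\href{http://oeis.org/A016777}{A016777}).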
\begin{theorem}\label{132,213,312}
Let $a_n = |S_n^2(132,213,312)|$. Then $a_{n+1} = 2(n+1)$ for $n \geq 1$, with initial term $a_1=1$.
\end{theorem}
\begin{proof}
Let $\boldsymbol{\sigma} = (\sigma, \sigma') \in S_n^2(132,213,312)$. Let $\boldsymbol{\sigma}$ be of the form $(\sigma_L n \sigma_R, \sigma_L' n \sigma_R').$
Note that $\sigma_R$ and $\sigma_R'$ have to be either empty or consecutively decreasing, and similarly, $\sigma_L$ and $\sigma_L'$ have to be either empty or consecutively increasing. Moreover, every element in $\sigma_L$ and $\sigma_L'$ must be larger than every element in $\sigma_R$ and $\sigma_R'$, respectively. If not, there would be an occurrence of $132$.
We have the following cases:
\begin{enumerate}
\item $\sigma_L$, $\sigma_R$ are nonempty.
If $\sigma_L'$ and $\sigma_R'$ are nonempty, consider $\boldsymbol{\sigma} = (\sigma_L n \sigma_R, \sigma_L' n \sigma_R')$ in $S_{n}^2 (132,213,312)$. Note that inserting $n+1$ right-adjacent to $n$ in both $\sigma$ and $\sigma'$ will avoid $132$, $213$, and $312$. In particular, $(\sigma_L n (n+1) \sigma_R, \sigma_L' n (n+1) \sigma_R')$ avoids $132$, $213$, and $312$.
Now we show that inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ does not yield an element of $S_{n+1}^2 (132,213,312)$. Consider $\sigma_L n \sigma_R$. We cannot insert $n+1$ in the beginning of this permutation, or else there would be an instance of $312$. Further, we cannot insert $n+1$ anywhere to the left of $n$, or else there would be an instance of $132$. There would also be an occurrence of $213$ if $n+1$ is inserted anywhere to the right of $n$ that is not adjacent to $n$.
Hence $n+1$ is forced to be right-adjacent to $n$ in $\sigma$. The same conclusion follows for $\sigma'$.
If $\sigma_L'$ is empty, then $\sigma' = \rev(\Id_n)$. The projection $\sigma' \circ \sigma^{-1}$ contains an occurrence of $132$, and hence this case is impossible. Similarly, if $\sigma_R'$ is empty, the projection $\sigma' \circ \sigma^{-1}$ contains an occurrence of $312$, and this case is also impossible.
Therefore every element in this case contributes $1$ element in $S_{n+1}^2 (132,213,312)$.
\item $\sigma_L$ is empty.
If both $\sigma_L'$ and $\sigma_R'$ are nonempty, then $\boldsymbol{\sigma} = (\rev(\Id_n), \sigma_L' n \sigma_R')$. Taking the projection gives an instance of $132$ because every element in $\sigma_L'$ is larger than every element in $\sigma_R'$. And hence this is not a valid element in $S_{n}^2 (132,213,312)$, so this case is impossible.
If $\sigma_L'$ is empty, then we conclude that $(\sigma, \sigma') = (\rev(\Id_n), \rev(\Id_n))$. Following logic similar to the previous case, $n+1$ must be inserted adjacent to $n$ in both $\sigma$ and $\sigma'$ to avoid $312$ and $213$. Note that $((n+1)\rev(\Id_n), (n+1)\rev(\Id_n))$ avoids $132$, $213$, and $312$. However, the projections of the $3$-permutations $((n+1)\rev(\Id_n), n (n+1) \rev(\Id_{n-1}))$, $(n(n+1)\rev(\Id_{n-1}), (n+1)\rev(\Id_n))$, and $(n(n+1)\rev(\Id_{n-1}), n(n+1)\rev(\Id_{n-1}))$ cannot avoid $132$, $213$, and $312$. Hence $(\rev(\Id_n), \rev(\Id_n))$ contributes one additional element to $S_{n+1}^2 (132,213,312)$, beyond the one obtained by inserting $n+1$ right-adjacent to $n$ in both $\sigma$ and $\sigma'$ as discussed in the previous case.
If $\sigma_R'$ is empty, then $(\sigma, \sigma') = (\rev(\Id_n), \Id_n)$. The element $n+1$ must be inserted adjacent to $n$ in $\sigma$, while $n+1$ must be inserted at the end of $\sigma'$. We can see that $((n+1)\rev(\Id_n), \Id_n (n+1))$ is an element of $S_{n+1}^2 (132,213,312)$ and furthermore, $(n (n+1) \rev(\Id_{n-1}), \Id_n (n+1))$ is not an element, because the projection $\sigma' \circ \sigma^{-1}$ contains an instance of $312$.
Hence each element in this case contributes $1$ element towards $S_{n+1}^2 (132,213,312)$, with the exception of $(\rev(\Id_n), \rev(\Id_n))$, which contributes $2$ elements towards $S_{n+1}^2 (132,213,312)$.
\item $\sigma_R$ is empty.
If $\sigma_L'$ is nonempty, then $\boldsymbol{\sigma} = (\Id_n, \sigma_L' n \sigma_R')$. Then $n+1$ is forced to be right-adjacent to $n$ for both $\sigma$ and $\sigma'$, which contributes $1$ element to $S_{n+1}^2 (132,213,312)$.
If $\sigma_L'$ is empty, then $(\sigma, \sigma') = (\Id_n, \rev(\Id_n))$. Note that $(\Id_n(n+1), (n+1)\rev(\Id_n))$ avoids $132$, $213$, and $312$, so $(\Id_n, \rev(\Id_n))$ contributes one additional element to $S_{n+1}^2 (132,213,312)$ in addition to inserting $n+1$ right-adjacent to $n$ in both $\sigma$ and $\sigma'$.
Hence each element in this case contributes $1$ element towards $S_{n+1}^2 (132,213,312)$, with the exception of $(\Id_n, \rev(\Id_n))$, which contributes $2$ elements towards $S_{n+1}^2 (132,213,312)$.
\end{enumerate}
Inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ cannot provide an element in $S_{n+1}^2 (132,213,312)$, and hence $a_{n+1} = a_n +2$. We have the base case $a_2 = 4$, thus \begin{align*}
a_{n+1} = 2(n+1). & \qedhere
\end{align*}
\end{proof}
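This count can be sanity-checked by exhaustive search for small $n$. The sketch below is an illustrative aside (not part of the proof): it encodes a $3$-permutation as a pair $(\sigma, \tau)$ and, following the convention used in the proofs above, tests pattern avoidance on the three plane projections $\sigma$, $\tau$, and $\tau \circ \sigma^{-1}$.

```python
from itertools import combinations, permutations

def contains(seq, pat):
    # seq contains pat if some subsequence of seq is order-isomorphic to pat
    k = len(pat)
    return any(all((s[a] < s[b]) == (pat[a] < pat[b])
                   for a in range(k) for b in range(a + 1, k))
               for s in combinations(seq, k))

def count_3perms_avoiding(n, patterns):
    # Count pairs (sigma, tau) such that sigma, tau, and the projection
    # tau o sigma^{-1} all avoid every pattern in `patterns`.
    total = 0
    for sigma in permutations(range(1, n + 1)):
        inv = sorted(range(n), key=lambda p: sigma[p])  # inv[k-1] = sigma^{-1}(k) - 1
        for tau in permutations(range(1, n + 1)):
            proj = tuple(tau[p] for p in inv)           # one-line form of tau o sigma^{-1}
            if all(not contains(w, pat)
                   for w in (sigma, tau, proj) for pat in patterns):
                total += 1
    return total
```

For the patterns $132$, $213$, $312$ this reproduces $a_n = 2n$ for small $n$.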
\begin{theorem}\label{132,213,321}
Let $a_n = |S_n^2(132,213,321)|$. Then $a_{n+1}$ follows the formula $a_{n+1} = (n+1)^2$.
\end{theorem}
\begin{proof}
Let $\boldsymbol{\sigma} = (\sigma, \sigma') \in S_n^2(132,213,321)$. Write $(\sigma, \sigma')$ as $(\sigma_L n \sigma_R, \sigma_L' n \sigma_R')$.
Reasoning as in the proof of Theorem \ref{132,213,312}, note that $\sigma_L$, $\sigma_L'$, $\sigma_R$, and $\sigma_R'$ are consecutively increasing. Moreover, every element in $\sigma_L$ and $\sigma_L'$ is larger than every element in $\sigma_R$ and $\sigma_R'$, respectively.
Similar to the reasoning in Theorem \ref{132,213,312}, $(\sigma_L n (n+1) \sigma_R, \sigma_L' n (n+1) \sigma_R')$ is in $S_{n+1}^2(132,213,321)$. This contributes $a_n$ different $3$-permutations to $S_{n+1}^2(132,213,321)$. We also have the following cases:
\begin{enumerate}
\item $\sigma_R$ is empty and $\sigma_R'$ is nonempty.
Note that this implies that $\sigma = \Id_n$ and $\sigma' \neq \Id_n$. Then $((n+1) \Id_n, \sigma_L' n (n+1) \sigma_R')$ avoids $132$, $213$, and $321$.
Inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ cannot avoid $132$, $213$, and $321$. In $\sigma$, we must insert $n+1$ at the beginning of the permutation (the insertion at the end of $\sigma$ is the right-adjacent one already counted above), or else $\sigma$ contains an instance of $132$. Similarly, we must insert $n+1$ right-adjacent to $n$ in $\sigma'$. If $n+1$ is left of $n$, then $\sigma'$ contains an instance of $321$. If $n+1$ is right of $n$ but not adjacent, then $\sigma'$ contains an instance of $213$.
And hence $((n+1) \Id_n, \sigma_L' n (n+1) \sigma_R')$ is the only new $3$-permutation we can construct in $S_{n+1}^2(132,213,321)$. Thus this case contributes $n-1$ elements to $S_{n+1}^2(132,213,321)$.
\item $\sigma_R$ is nonempty and $\sigma_R'$ is empty.
This implies that $\sigma' = \Id_n$ and $\sigma \neq \Id_n$. Note that $(\sigma_L n (n+1) \sigma_R, (n+1) \Id_n)$ belongs to $S_{n+1}^2(132,213,321)$. Using a similar argument as in Case 1, inserting $n+1$ in $\boldsymbol{\sigma}$ anywhere else does not avoid $132$, $213$, and $321$. Hence this case contributes $n-1$ different $3$-permutations to $S_{n+1}^2(132,213,321)$.
\item Both $\sigma_R$, $\sigma_R'$ are empty.
This implies that $\sigma = \sigma' = \Id_n$. We see that $((n+1)\Id_n,\Id_n (n+1))$, $((n+1)\Id_n, (n+1)\Id_n)$, and $(\Id_n (n+1), (n+1) \Id_n)$ all avoid $132$, $213$, and $321$. Using the same reasoning as in Case 1 shows that inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ cannot avoid these patterns, and hence this case contributes $3$ elements to $S_{n+1}^2(132,213,321)$.
\end{enumerate}
Lastly, when neither $\sigma$ nor $\sigma'$ is the identity permutation, $\sigma_R$ and $\sigma_R'$ are both nonempty, and the same argument as in Case 1 shows that inserting $n+1$ anywhere not right-adjacent to $n$ in $\sigma$ and $\sigma'$ cannot avoid $132$, $213$, and $321$. Hence no other insertions of $n+1$ in $\boldsymbol{\sigma}$ produce an element in $S_{n+1}^2(132,213,321)$, and so $$a_{n+1} = a_n + 2n + 1.$$
The base case is $a_1 = 1$, so we conclude that \begin{align*}
a_{n+1} = (n+1)^2. & \qedhere
\end{align*}
\end{proof}
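As before, the closed form $a_n = n^2$ can be confirmed by brute force for small $n$; the following standalone sketch (illustrative only) uses the same projection-based notion of avoidance.

```python
from itertools import combinations, permutations

PATTERNS = [(1, 3, 2), (2, 1, 3), (3, 2, 1)]  # the patterns 132, 213, 321

def avoids(seq, pat):
    # True if no subsequence of seq is order-isomorphic to pat
    k = len(pat)
    return not any(all((s[a] < s[b]) == (pat[a] < pat[b])
                       for a in range(k) for b in range(a + 1, k))
                   for s in combinations(seq, k))

def a(n):
    # |S_n^2(132,213,321)|: sigma, tau, and tau o sigma^{-1} must all avoid
    total = 0
    for sigma in permutations(range(1, n + 1)):
        inv = sorted(range(n), key=lambda p: sigma[p])
        for tau in permutations(range(1, n + 1)):
            proj = tuple(tau[p] for p in inv)
            if all(avoids(w, pat) for w in (sigma, tau, proj)
                   for pat in PATTERNS):
                total += 1
    return total
```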
\begin{theorem}\label{132,231,312}
Let $a_n = |S_n^2(132,231,312)|$. Then $a_n$ follows the recurrence relation $a_{n+1} = a_n+3$ with initial term $a_1 = 1$.
\end{theorem}
\begin{proof}
Let $\boldsymbol{\sigma} = (\sigma, \sigma') \in S_n^2(132,231,312)$. Write $(\sigma, \sigma')$ as $(\sigma_L n \sigma_R, \sigma_L' n \sigma_R')$. Note that $n \sigma_R$ and $n \sigma_R'$ must be consecutively decreasing to avoid $312$ and $132$.
We insert a maximal element $n+1$ into $\sigma$ and $\sigma'$ to count how many elements in $S_{n+1}^2(132,231,312)$ there are. Note that $(\sigma (n+1), \sigma' (n+1))$ avoids $132$, $231$, and $312$. So this contributes $a_n$ different $3$-permutations to $S_{n+1}^2(132,231,312)$. We have the following additional cases:
\begin{enumerate}
\item $\sigma = \rev(\Id_n)$ and $\sigma' \neq \rev(\Id_n)$.
This forces $\sigma'$ to be the identity: since $\sigma^{-1} = \rev(\Id_n)$, the projection $\sigma' \circ \sigma^{-1}$ is the reverse of $\sigma'$, so both $\sigma'$ and its reverse must avoid $132$, $231$, and $312$; hence $\sigma'$ is monotone, and as $\sigma' \neq \rev(\Id_n)$ we get $\sigma' = \Id_n$. Then $((n+1)\rev(\Id_n), \sigma' (n+1))$ avoids $132$, $231$, and $312$. Now inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ cannot avoid these patterns. Namely, if $n+1$ is inserted anywhere not in the beginning or end of $\sigma$, there is an occurrence of $231$. Moreover, inserting $n+1$ into the beginning of $\sigma'$ creates an occurrence of $312$, and if $n+1$ is inserted anywhere not in the beginning or end of $\sigma'$, there is an occurrence of either $231$ or $132$. Hence $((n+1)\rev(\Id_n), \sigma' (n+1))$ is the only element we can construct in $S_{n+1}^2(132,231,312)$ in this case. And this case contributes one element towards $S_{n+1}^2(132,231,312)$.
\item $\sigma' = \rev(\Id_n)$ and $\sigma \neq \rev(\Id_n)$.
Then similar to Case 1, $(\sigma (n+1), (n+1) \rev(\Id_n))$ avoids $132$, $231$, and $312$, and inserting $n+1$ anywhere else into this $3$-permutation cannot construct a $3$-permutation in $S_{n+1}^2(132,231,312)$. Hence this case contributes one element towards $S_{n+1}^2(132,231,312)$.
\item $\sigma = \sigma' =\rev(\Id_n)$.
Note that $((n+1) \rev(\Id_n), (n+1) \rev(\Id_n))$ belongs to $S_{n+1}^2(132,231,312)$. Now we show that no other insertion of $n+1$ into this $3$-permutation avoids the patterns $132$, $231$, and $312$ (insertions in the interior of either coordinate create an occurrence of $231$ in that coordinate). The projection of $( \rev(\Id_n) (n+1), (n+1) \rev(\Id_n))$ contains an occurrence of $231$ and the projection of $((n+1) \rev(\Id_n), \rev(\Id_n) (n+1))$ contains an occurrence of $312$, and hence this case contributes one element towards $S_{n+1}^2(132,231,312)$.
\end{enumerate}
Finally, suppose $\sigma, \sigma' \neq \rev(\Id_n)$, so that $\sigma_L$ and $\sigma_L'$ are nonempty. We claim that inserting $n+1$ anywhere other than at the end of $\sigma$ and of $\sigma'$ cannot avoid $132$, $231$, and $312$.
Indeed, inserting $n+1$ left-adjacent to $n$ creates an occurrence of $132$, and inserting $n+1$ anywhere to the left of this creates an occurrence of $312$. Further, inserting $n+1$ anywhere to the right of $n$ (but not at the end of the permutation) creates an occurrence of $231$. Hence we must insert $n+1$ at the end of each permutation, and no other insertions of $n+1$ in $\boldsymbol{\sigma}$ avoid $132$, $231$, and $312$.
Thus $$a_{n+1} = a_n+3.$$
Because our base case is $a_1 = 1$, this is equivalent to $a_{n+1} = 3n+1$.
\end{proof}
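The closed form $a_n = 3n-2$ resulting from this recurrence can likewise be checked exhaustively for small $n$ (an illustrative aside, not part of the argument).

```python
from itertools import combinations, permutations

PATTERNS = [(1, 3, 2), (2, 3, 1), (3, 1, 2)]  # the patterns 132, 231, 312

def avoids(seq, pat):
    # True if no subsequence of seq is order-isomorphic to pat
    k = len(pat)
    return not any(all((s[a] < s[b]) == (pat[a] < pat[b])
                       for a in range(k) for b in range(a + 1, k))
                   for s in combinations(seq, k))

def a(n):
    # |S_n^2(132,231,312)|: all three plane projections must avoid
    total = 0
    for sigma in permutations(range(1, n + 1)):
        inv = sorted(range(n), key=lambda p: sigma[p])
        for tau in permutations(range(1, n + 1)):
            proj = tuple(tau[p] for p in inv)
            if all(avoids(w, pat) for w in (sigma, tau, proj)
                   for pat in PATTERNS):
                total += 1
    return total
```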
\begin{theorem}\label{213,231,321}
Let $a_n = |S_n^2(213, 231, 321)|$. Then $a_{n+1}$ follows the formula $a_{n+1} = 2(n+1)$ for $n>0$ (with initial term $a_1 = 1$).
\end{theorem}
\begin{proof}
Let $\boldsymbol{\sigma} = (\sigma, \sigma') \in S_n^2(213,231,321)$. Writing $(\sigma, \sigma')$ as $(\sigma_L n \sigma_R, \sigma_L' n \sigma_R')$, note that $\sigma_L$, $\sigma_L'$, $\sigma_R$, and $\sigma_R'$ are all consecutively increasing or empty.
We insert a maximal element $n+1$ into $\sigma$ and $\sigma'$ in an attempt to construct an element in $S_{n+1}^2(213,231,321)$. If $\sigma_R$ or $\sigma_R'$ is nonempty, we cannot construct an element of $S_{n+1}^2(213,231,321)$ via insertion, because inserting $n+1$ to the left of $n$ contains $321$, inserting $n+1$ right-adjacent to $n$ contains $231$, and inserting $n+1$ anywhere else contains $213$. Then it is enough to consider $(\sigma, \sigma') = (\Id_n, \Id_n)$. We have two cases:
\begin{enumerate}
\item We insert $n+1$ to the end of $\sigma$.
Then we can insert $n+1$ anywhere in $\sigma'$ and the resulting $3$-permutation is an element of $S_{n+1}^2(213,231,321)$. So this case contributes $n+1$ different elements to $S_{n+1}^2(213,231,321)$.
\item We do not insert $n+1$ to the end of $\sigma$.
Note that inserting $n+1$ into the same position in $\sigma$ and $\sigma'$ avoids $213$, $231$, and $321$. Further, $(\Id_{n-1} (n+1)n, \Id_{n} (n+1))$ also avoids these patterns.
Inserting $n+1$ anywhere else contains one of these patterns because the resulting projection contains either $321$ or $231$, and hence this case contributes $n+1$ different $3$-permutations to $S_{n+1}^2(213,231,321)$.
\end{enumerate}
And hence \begin{align*}
a_{n+1} = 2(n+1). & \qedhere
\end{align*}
\end{proof}
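Again, a brute-force check of $a_n = 2n$ (for $n > 1$) against an exhaustive count is immediate for small $n$ (illustrative only).

```python
from itertools import combinations, permutations

PATTERNS = [(2, 1, 3), (2, 3, 1), (3, 2, 1)]  # the patterns 213, 231, 321

def avoids(seq, pat):
    # True if no subsequence of seq is order-isomorphic to pat
    k = len(pat)
    return not any(all((s[a] < s[b]) == (pat[a] < pat[b])
                       for a in range(k) for b in range(a + 1, k))
                   for s in combinations(seq, k))

def a(n):
    # |S_n^2(213,231,321)|: all three plane projections must avoid
    total = 0
    for sigma in permutations(range(1, n + 1)):
        inv = sorted(range(n), key=lambda p: sigma[p])
        for tau in permutations(range(1, n + 1)):
            proj = tuple(tau[p] for p in inv)
            if all(avoids(w, pat) for w in (sigma, tau, proj)
                   for pat in PATTERNS):
                total += 1
    return total
```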
\begin{theorem}\label{231,312,321}
Let $a_n = |S_n^2(231,312,321)|$. Then $a_n$ follows the recurrence relation $$a_{n+1} = a_n + 3a_{n-1}$$ with initial terms $a_1 = 1$ and $a_2 = 4$.
\end{theorem}
\begin{proof}
Let $\boldsymbol{\sigma} = (\sigma, \sigma') \in S_n^2(231,312,321)$. Write $(\sigma, \sigma')$ as $(\sigma_L n \sigma_R, \sigma_L' n \sigma_R')$.
Note that $(\sigma (n+1), \sigma' (n+1))$ is an element of $S_{n+1}^2(231,312,321)$. This contributes $a_n$ different $3$-permutations towards $S_{n+1}^2(231,312,321)$. We consider the following additional case: when $\sigma_R$ and $\sigma_R'$ are empty.
Then $(\sigma, \sigma') = (\sigma_L n, \sigma_L' n)$, and thus $(\sigma_L (n+1)n, \sigma_L' n(n+1))$, $(\sigma_L (n+1)n, \sigma_L' (n+1)n)$, and $(\sigma_L n(n+1), \sigma_L' (n+1)n)$ are all elements in $S_{n+1}^2(231,312,321)$. Inserting $n+1$ anywhere else cannot avoid these patterns, because inserting $n+1$ anywhere non-adjacent to $n$ contains $312$. And hence this case contributes $3a_{n-1}$ distinct $3$-permutations to $S_{n+1}^2(231,312,321)$.
Now suppose either $\sigma_R$ or $\sigma_R'$ is nonempty; we show that inserting $n+1$ anywhere but the end of the $3$-permutation cannot avoid $231$, $312$, and $321$. Let $\sigma_R$ be nonempty. Then we must insert $n+1$ at the end of $\sigma$; otherwise, inserting $n+1$ to the right of $n$ contains $231$, inserting left-adjacent to $n$ contains $321$, and inserting to the left of $n$ contains $312$. We then examine the $3$-permutation $(\sigma_L n \sigma_R (n+1), \sigma_L' (n+1) n)$, pictured below:
\begin{center}
\begin{tikzpicture}[coo/.style={coordinate}]
\path (0,0) coordinate (x1) --++ (2,0) coordinate (x2)
--++ (0.6,0) coordinate (x3)
--++ (0.6,0) coordinate (x4)
--++ (0.6,0) coordinate (x5)
--++ (0.8,0) coordinate (x6)
--++ (0.8,0) coordinate (x7);
\foreach \i in {1,2,3}
\coordinate[] (L\i) at (0,-\i);
\draw[|-|] (L1-|x1) -- (L1-|x2) node[above,pos=0.5] {$\sigma_L$};
\node at (L1-|x3) {$n$};
\node at (L1-|x7) {$n+1$};
\draw[|-|] (L1-|x4) -- (L1-|x6) node[above,pos=0.5] {$\sigma_R$};
\draw[|-|] (L2-|x1) -- (L2-|x5) node[above,pos=0.5] {$\sigma_L'$};
\node at (L2-|x7) {$n$};
\node at (L2-|x6) {$n+1$};
\end{tikzpicture}
\end{center}
Since $\sigma_R$ is nonempty, the projection of this $3$-permutation contains an instance of $312$. The case where $\sigma_R'$ is nonempty is similar. Inserting $n+1$ anywhere else in $\boldsymbol{\sigma}$ cannot produce an element in $S_{n+1}^2(231,312,321)$, and hence \begin{align*}
a_{n+1} = a_n + 3a_{n-1}. & \qedhere
\end{align*}
\end{proof}
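The recurrence $a_{n+1} = a_n + 3a_{n-1}$ can be checked against an exhaustive count for small $n$ (an illustrative aside, not part of the proof).

```python
from itertools import combinations, permutations

PATTERNS = [(2, 3, 1), (3, 1, 2), (3, 2, 1)]  # the patterns 231, 312, 321

def avoids(seq, pat):
    # True if no subsequence of seq is order-isomorphic to pat
    k = len(pat)
    return not any(all((s[a] < s[b]) == (pat[a] < pat[b])
                       for a in range(k) for b in range(a + 1, k))
                   for s in combinations(seq, k))

def a(n):
    # |S_n^2(231,312,321)|: all three plane projections must avoid
    total = 0
    for sigma in permutations(range(1, n + 1)):
        inv = sorted(range(n), key=lambda p: sigma[p])
        for tau in permutations(range(1, n + 1)):
            proj = tuple(tau[p] for p in inv)
            if all(avoids(w, pat) for w in (sigma, tau, proj)
                   for pat in PATTERNS):
                total += 1
    return total
```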
\section{Final Remarks and Open Problems}
In this paper, we completely enumerated $3$-permutations avoiding two patterns of size $3$ and three patterns of size $3$. The theorems in this paper prove all the conjectures by Bonichon and Morel \cite{bonichon2022baxter} regarding $3$-permutations avoiding two patterns of size $3$ and extend their conjectures to classify $3$-permutations avoiding all classes of three patterns of size $3$. We conclude with the following open problems.
\begin{problem}
Enumerate $3$-permutations avoiding one pattern of size $3$ or one pattern of size $4$.
\end{problem}
Although this paper has shown connections between $3$-permutations avoiding two patterns of size $3$ and their recurrence relations, there are no existing OEIS sequences \cite{oeis} that correspond to the number of $3$-permutations avoiding one pattern of size $3$, or of $3$-permutations avoiding one pattern of size $4$. In a similar vein, the enumeration of $d$-permutations of dimension greater than $3$ remains an open problem.
Bonichon and Morel \cite{bonichon2022baxter} also introduced other tables enumerating $3$-permutations avoiding other combinations of patterns, such as patterns of dimension $3$, or exactly one pattern of size 2 and dimension 3 together with exactly one pattern of size 3 and dimension 2. We present a few of these as future directions.
\begin{conjecture}[Bonichon and Morel \cite{bonichon2022baxter}]
The $3$-permutations avoiding the $3$-patterns $(12,12)$ and $(231,312)$ are enumerated by the OEIS sequence \href{http://oeis.org/A295928}{A295928}.
\end{conjecture}
In addition, Table \ref{future problems} by Bonichon and Morel \cite{bonichon2022baxter} presents $3$-permutations avoiding a permutation of size 2 and dimension 3 as well as a pattern of size 3 and dimension 2. Many of these sequences also correspond to existing sequences on the OEIS database \cite{oeis}, but little research has been done to enumerate such sequences of $3$-permutations. It would be interesting to prove these recurrences that result from $3$-permutations avoiding patterns with different dimensions.
\begin{table}[htp]
\centering
\begin{tabular}{|c | c | c | c | c|}
\hline
Patterns & \#TWE & Sequence & OEIS Sequence & Comment \\ [0.5ex]
\hline\hline
$123, (12,12)$ & 1 & $1,3,14,70,288,822,1260, \dots$ & & Not in OEIS \\
\hline
$123, (12,21)$ & 3 & $1,3,6,6,0,0,0, \dots$ & & Terminates after $n=4$ \\
\hline
$132, (12,12)$ & 2 & $1,3,11,41,153,573,2157, \dots$ & \href{http://oeis.org/A281593}{A281593?} & \\
\hline
$132, (12,21)$ & 6 & $1,3,11,43,173,707,2917, \dots$ & \href{http://oeis.org/A026671}{A026671?} & \\
\hline
$231, (12,12)$ & 2 & $1,3,9,26,72,192,496, \dots$ & \href{http://oeis.org/A072863}{A072863?} & \\
\hline
$231, (12,21)$ & 4 & $1,3,11,44,186,818,3706, \dots$ & & Not in OEIS \\
\hline
$231, (21,12)$ & 2 & $1,3,12,55,273,1428,7752, \dots$ & \href{http://oeis.org/A001764}{A001764?} & \\
\hline
$321, (12,12)$ & 1 & $1,3,2,0,0,0,0, \dots$ & & Terminates after $n=3$ \\
\hline
$321, (12,21)$ & 3 & $1,3,11,47,221,1113,5903, \dots$ & \href{http://oeis.org/A217216}{A217216?} & \\
\hline
\end{tabular}
\caption{Sequences of $3$-permutations avoiding one pattern of size 3 and dimension 2 and one pattern of size 2 and dimension 3. The ``?'' after an OEIS sequence means that the sequence matches on the first few terms and that Bonichon and Morel \cite{bonichon2022baxter} conjectured the two to be the same. The second column indicates the number of trivially Wilf-equivalent patterns.}
\label{future problems}
\end{table}
We notice that the sequence \href{http://oeis.org/A001787}{A001787} in Theorem \ref{132,231} counts the number of $132$-avoiding permutations of length $n+2$ with exactly one occurrence of a $123$-pattern and the number of Dyck $(n+2)$-paths with exactly one valley at height $1$ and no higher valley \cite{oeis}. In this spirit, we propose the following problem:
\begin{problem}
Find combinatorial bijections to explain the relationships between the $3$-permutation avoidance classes found in this paper and their recurrence relations.
\end{problem}
In general, the problem of enumerating $d$-permutations avoiding sets of small patterns is wide open. Since several of these enumeration sequences correspond to sequences in the OEIS database \cite{oeis}, these $3$-permutation avoidance classes certainly have interesting combinatorial properties, and several bijections explaining these sequences remain to be found.
\section*{Acknowledgements}
This research was conducted at the 2022 University of Minnesota Duluth REU and is supported by Jane Street Capital, the NSA (grant number H98230-22-1-0015), the NSF (grant number DMS-2052036), and the Harvard College Research Program. The author is indebted to Joe Gallian for his dedication and organizing the University of Minnesota Duluth REU. Lastly, a special thanks to Joe Gallian, Amanda Burcroff, Maya Sankar, and Andrew Kwon for their invaluable feedback and advice on this paper.
\newpage
% https://arxiv.org/abs/1310.7984
% Polarization of Koszul cycles with applications to powers of edge ideals of whisker graphs
% Abstract: In this paper, we introduce the polarization of Koszul cycles and use it to study the depth function of powers of edge ideals of whisker graphs.
\section*{Introduction}
Polarization is a technique to deform an arbitrary monomial ideal $I$ in a polynomial ring $S$ into a squarefree monomial ideal $I^\wp$ in a larger polynomial ring $S^\wp$ such that $S/I$ is a quotient of $S^\wp/ I^\wp$ modulo a regular sequence of linear forms. The polarized ideal $I^\wp$ has the nice property that it has the same graded Betti numbers as $I$. Therefore, many questions regarding monomial ideals can be reduced to the study of squarefree monomial ideals. The fact that $I$ and $I^\wp$ have the same graded Betti numbers implies that the corresponding Koszul homology modules of the ideal and its polarization have the same vector-space dimension. Therefore, it is natural to ask whether cycles whose homology classes form a basis of the Koszul homology of $I$ can be naturally lifted to cycles representing a basis for the Koszul homology of $I^\wp$. In Theorem~\ref{main}, it is shown that this is indeed the case.
In his book \cite[Proposition 6.3.2]{V}, Villarreal uses polarization to give a simple proof of the fact that the edge ideal of a whisker graph is Cohen-Macaulay. Let $G$ be a finite simple graph on the vertex set $V(G)= \{x_1, \ldots, x_n\}$ with edge set $E(G)$. One defines the whisker graph $G^*$ of $G$ to be the graph with vertex set $\{x_1, \ldots, x_n, y_1, \ldots, y_n\}$ and edge set $E(G) \cup \{ \{x_i, y_i\} : \; i=1, \ldots, n\}$. By using the results of Section~\ref{polarization}, one easily sees that the homology classes of the cycles
\begin{eqnarray}\label{basis1}
x_{i_1} \ldots x_{i_k} e_{j_1} \wedge \ldots \wedge e_{j_{n-k}}\wedge f_{i_1} \wedge \ldots \wedge f_{i_k}
\end{eqnarray}
with ${\mathcal S} = \{i_1, \ldots, i_k\}$ a maximal independent set of $G$ and $\{j_1, \ldots, j_{n-k}\}=V(G) \setminus {\mathcal S}$, form a basis of the Koszul homology $H_n(x_1, \ldots, x_n, y_1, \ldots, y_n ; S^*/I(G^*))$. Here $e_1, \ldots, e_n, f_1, \ldots, f_n$ is an $S^*$-basis of the free module $K_1(x_1, \ldots, x_n, y_1, \ldots, y_n; S^*/I(G^*))$ with $\partial (e_i) = x_i$ and $\partial (f_j)= y_j$. A basis cycle as described in (\ref{basis1}) is used in Section~\ref{whisker graphs} in the study of the powers of edge ideals of whisker graphs.
The homological and algebraic behavior of powers of an ideal has been the subject of many research papers in recent years. In particular, the nature of the depth function $f(k)= \depth(S/I^k)$ of a graded ideal $I$ in a polynomial ring $S$ is still quite mysterious. While it is known by a classical result of Brodmann \cite{Br1} that $f(k)$ is constant for $k \gg 0$, the behavior of $f(k)$ is not so well understood for initial values of $k$. In \cite{HH1}, it is shown that any non-decreasing bounded integer function $f(k)$ is the depth function of a suitable monomial ideal, and it is conjectured that $f(k)$ can be any convergent nonnegative integer valued function. In support of this conjecture, it was shown in \cite{BHH} that $f(k)$ may have arbitrarily many local maxima. On the other hand, it seems that the depth function of an edge ideal behaves more tamely. In particular, it is expected that the depth function of an edge ideal is a non-increasing function. This is indicated by the fact that edge ideals satisfy the persistence property for the associated prime ideals of their powers, as shown in \cite{CMS}. Interesting lower bounds for the depth function of an edge ideal have been obtained by Morey \cite{M}. On the other hand, even for simple graphs like a line graph or a cycle, the precise depth function is unknown.
In this paper we give an upper bound for the depth function of any connected whisker graph. In fact, we show in Theorem~\ref{whisker} that for any connected graph $G$ on the vertex set $[n]$, we have $\depth (S^*/I(G^*)^k) \leq n-k+1$ for $k= 1, \ldots , n$. It can be shown by examples that this upper bound is no longer valid if we drop the assumption that $G$ is connected. For connected graphs this upper bound is obtained by constructing suitable non-vanishing homology classes for the Koszul homology of the powers of $I(G^*)$. The cycles representing these non-vanishing homology classes are obtained as products of certain 1-cycles and an $(n-1)$-cycle which is defined via an independent set of $G$. To show that the homology class of this product of cycles is non-vanishing, we use a combinatorial fact proved in Proposition~\ref{gamma}, which says that any connected graph admits a friendly independent set in the sense described in that proposition. By using results from \cite{CMS} and \cite{EH2}, we show in Corollary~\ref{limit} that $\depth(S^*/I(G^*)^k) =1$ for $k\geq n$ if $G$ is bipartite and $\depth(S^*/I(G^*)^k) =0$ for $k\geq n$ if $G$ is non-bipartite.
The upper bound for the depth of the powers of a whisker graph given by our Theorem~\ref{whisker} is not always attained. The simplest examples of such a case are the whisker graphs of a $3$-cycle or a $4$-cycle. On the other hand, Villarreal \cite[Proposition 6.3.7]{V} has shown that $\depth(S^*/I(G^*)^2) \geq n-1$ if $G$ is a tree (or even a forest) on the vertex set $[n]$. In Theorem~\ref{tree}, we extend the result of Villarreal and show that for any forest $G$ one has $\depth(S^*/I(G^*)^k) \geq n-k+1$ for $k=1, \ldots , n$. Together with Theorem~\ref{whisker} we conclude that for any tree $G$ we have $\depth(S^*/I(G^*)^k) = n-k+1$ for $k=1, \ldots, n$.
\section{Polarization of Koszul cycles} \label{polarization}
Let $K$ be a field and $I\subset S=K[x_1,\ldots,x_n]$ a monomial ideal in the polynomial ring $S$. We denote as usual by $G(I)$ the unique minimal set of monomial generators of $I$. If $u=x_1^{a_1}\cdots x_n ^{a_n}$ is a monomial, we call ${\bold a}=(a_1,\ldots,a_n)$ the multi-degree of $u$ and set $\deg_{x_i}u=a_i$ for all $i$.
Let $c_i=\max\{\deg_{x_i} u \;:\; u\in G(I)\}$ for $i=1,\ldots,n$, and let $S^{\wp}$ be the polynomial ring over $K$ in the variables $x_{ij}$, $i=1,\ldots,n$, $j=1,\ldots,c_i$. The {\em polarization} of $I$ is the squarefree monomial ideal $I^\wp\subset S^\wp$ generated by the monomials $u^\wp$ with $u\in G(I)$ where for $u=x_1^{a_1}\cdots x_n^{a_n}$ one sets
\[
u^\wp=\prod_{i=1,\ldots,n}\prod_{j=1,\ldots,a_i} x_{ij}.
\]
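As an illustrative aside (not part of the text), the polarization of a monomial ideal is easy to compute from exponent vectors; here the variable $x_{ij}$ is encoded as the pair $(i,j)$:

```python
def polarize_monomial(expv):
    # u = x_1^{a_1} ... x_n^{a_n} given by expv = (a_1, ..., a_n);
    # u^p is the squarefree product of the variables x_{ij}, 1 <= j <= a_i,
    # encoded here as pairs (i, j).
    return tuple((i, j) for i, a in enumerate(expv, start=1)
                 for j in range(1, a + 1))

def polarize_ideal(gens):
    # G(I^p) = { u^p : u in G(I) }, with each generator given by its
    # exponent vector
    return [polarize_monomial(u) for u in gens]
```

For $I=(x_1^2x_2,\, x_1x_2^2)$ this yields the generators $x_{11}x_{12}x_{21}$ and $x_{11}x_{21}x_{22}$ of $I^\wp$, the example treated below.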
We extend this polarization operation to elements in the Koszul complex. Let $K({\bold x};I)$ be the Koszul complex of the sequence ${\bold x}=x_1,\ldots,x_n$ with values in $I$. Recall that $K_i({\bold x})=\bigwedge^iF$ where $F=\Dirsum_{j=1}^nSe_j$ and where $\partial e_j=x_j$ for $j=1,\ldots,n$, and that $K({\bold x};I)=K({\bold x})\tensor I$. Thus an element of $K_i({\bold x};I)$ is of the form
\[
\sum_{J}f_Je_J,
\]
where the sum is taken over all ordered sets $J=\{j_1<j_2<\cdots <j_i\}$ of cardinality $i$, where $f_J\in I$ and where $e_J=e_{j_1}\wedge e_{j_2}\wedge \cdots \wedge e_{j_i}$.
Next we consider the Koszul complex $K({\bold x}^\wp; I^\wp)$. Here ${\bold x}^\wp$ is the sequence
\[
x_{11},x_{12},\ldots,x_{1c_1},x_{21},\ldots, x_{2c_2},\ldots, x_{n1},\ldots, x_{nc_n},
\]
and $K_i({\bold x}^\wp)=\bigwedge^i G$ where $G=\Dirsum_{i=1,\ldots,n}\Dirsum_{j=1,\ldots,c_i}S^{\wp}e_{ij}$.
We call an element $u_Je_J$ a {\em monomial} of $K({\bold x};I)$ if $u_J$ is a monomial. We set
\[
\deg_{x_i}(u_J e_J)= \deg_{x_i}u_J + \delta_i ,
\]
where
\[
\delta_i= \left\{ \begin{array}{ll}
1, & \;\textnormal{if $i\in J$}, \\ 0, & \;\text{otherwise.}
\end{array} \right.
\]
and call
\[
\deg(u_J e_J) = (\deg_{x_1}(u_J e_J), \ldots, \deg_{x_n} (u_J e_J) )
\]
the multi-degree of $u_Je_J$.
For any monomial $u_J e_J$ of multi-degree $\leq {\bold c}$ (componentwise) where ${\bold c}= (c_1, \ldots, c_n)$, we define the polarization of $u_Je_J$ to be the monomial
\[
(u_Je_J)^\wp=u_J^\wp\, e_{j_1,\,a_{j_1}+1}\wedge e_{j_2,\,a_{j_2}+1}\wedge \cdots \wedge e_{j_i,\,a_{j_i}+1},
\]
in $K({\bold x}^{\wp};I^\wp)$, where $J=\{j_1<j_2<\cdots <j_i\}$ and $a_j = \deg_{x_j} u_J$ for $j=1,\ldots,n$.
We extend this polarization operator to an arbitrary multi-homogeneous element $f=\sum_J\lambda_Ju_Je_J$, $\lambda_J\in K$, of multi-degree $\leq {\bold c}$, by setting
\[
f^\wp= \sum_J\lambda_J(u_Je_J)^\wp.
\]
It follows from \cite[Theorem 3.1]{BHbook} that any non-vanishing homology class of $H_i({\bold x};I)$ can be represented by a multi-homogeneous cycle $z=\sum_J\lambda_Ju_Je_J$ in $K_i({\bold x};I)$ with $\deg z \leq {\bold c}$. Thus the polarization of such cycles is defined.
\medskip
The following example demonstrates the polarization of cycles: let $I=(x_1^2 x_2, x_1 x_2^2)$. Then $z= x_1x_2^2 e_1 - x_1^2x_2 e_2$ is a cycle in $K_1(x_1, x_2;I)$, and $z^\wp = x_{11}x_{21} x_{22} e_{12} - x_{11}x_{12}x_{21}e_{22}$.
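This computation can be mechanized; the sketch below (an illustrative encoding, not part of the text) polarizes a single Koszul monomial $u_Je_J$, encoding both $x_{ij}$ and $e_{ij}$ as pairs $(i,j)$:

```python
def polarize_koszul_term(expv, J):
    # u_J e_J with u_J = x^expv and J the sorted wedge index set.
    # Returns (u_J^p, polarized wedge factors); note that the basis
    # element e_j becomes e_{j, a_j + 1} where a_j = deg_{x_j} u_J.
    u_pol = tuple((i, k) for i, a in enumerate(expv, start=1)
                  for k in range(1, a + 1))
    e_pol = tuple((j, expv[j - 1] + 1) for j in J)
    return u_pol, e_pol
```

Applied to the two terms of $z$ above, it returns $x_{11}x_{21}x_{22}\,e_{12}$ and $x_{11}x_{12}x_{21}\,e_{22}$, in agreement with $z^\wp$.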
\medskip
With the notation introduced, we have
\begin{Theorem} \label{main}
Let $I \subset S=K[x_1, \ldots, x_n]$ be a monomial ideal and let ${\bold c}=(c_1, \ldots, c_n)$ be the integer vector with $c_i=\max\{\deg_{x_i} u \;:\; u\in G(I)\}$ for $i=1, \ldots, n$. Let $z_1, \ldots, z_r$ be multi-homogeneous cycles of multi-degree $\leq {\bold c}$ whose homology classes form a $K$-basis of $H_i({\bold x};I)$. Then the homology classes of the cycles $z_1^\wp, \ldots, z_r^\wp$ form a $K$-basis of $H_i({\bold x}^\wp; I^\wp)$.
\end{Theorem}
The theorem will be a consequence of the following
\begin{Lemma}
\label{comparison}
Let $M$ be a finitely generated graded $S$-module, and assume that $x_1$ is a non-zerodivisor on $M$. Then there is a natural isomorphism
\[
\varphi: H_{i}(x_1,\ldots,x_n;M)\to H_i(x_2,\ldots,x_n;\bar{M}),
\]
where $\bar{M}$ is the $\bar{S} = S/x_1S$-module $M/x_1M$. This isomorphism is given as follows: let $z \in Z_i(x_1,\ldots,x_n;M)$ and write $z=e_1\wedge z_0+z_1$ with $z_1\in K_i(x_2,\ldots,x_n;M)$. Then the homology class $[z]\in H_{i}(x_1,\ldots,x_n;M)$ is mapped to $[\bar{z}_1]\in H_{i}(x_2,\ldots,x_n;\bar{M})$, where $\bar{z}_1$ is obtained from $z_1$ by taking the residue classes of the coefficients of $z_1$ modulo $x_1$.
\end{Lemma}
\begin{proof}
Observe that $\bar{z}_1$ is indeed a cycle in $K(x_2, \ldots, x_n; \bar{M})$, because $0= \partial z = x_1 z_0 - e_1 \wedge \partial z_0 + \partial z_1$. Comparing components, it follows that $e_1 \wedge \partial z_0 =0$ and $x_1 z_0 + \partial z_1 = 0$, and hence $\partial \bar{z}_1 =0$. Next we show that $\varphi$ is well defined. Let $z$ be as in the statement and let $w=z+\partial b$ where $b \in K_{i+1}(x_1, \ldots, x_n;M)$. Let $b= e_1 \wedge b_0 + b_1$ with $b_1 \in K_{i+1}(x_2, \ldots, x_n; M)$. Then $w= e_1\wedge w_0 + w_1 $ where $w_1 = z_1 + x_1 b_0 + \partial b_1$. We have to show that $[\bar{w}_1] = [\bar{z}_1]$. But this is obvious, because $\bar{w}_1 = \bar{z}_1 + \partial \bar{b}_1$, so that $\bar{w}_1$ and $\bar{z}_1$ differ only by a boundary in $K_{i}(x_2, \ldots, x_n;\bar{M})$.
Since $H_i (x_1, \ldots, x_n ; M) \iso \Tor_i^S (K,M)$ and $H_i (x_2, \dots, x_n; \bar{M}) \iso \Tor_i^{\bar{S}} (K, \bar{M})$, we conclude that $\dim_K H_i (x_1, \ldots, x_n ; M) = \dim_K H_i (x_2, \dots, x_n; \bar{M})$. Indeed, since $x_1$ is a non-zerodivisor on $M$, the graded minimal free resolution of $\bar{M}$ is obtained from the graded minimal free resolution of $M$ by reduction modulo $x_1$. This implies that $\dim_K \Tor_i^S (K,M) = \dim_K \Tor_i^{\bar{S}} (K, \bar{M})$. Hence, in order to prove that $\varphi$ is an isomorphism, it suffices to show that $\varphi$ is surjective.
Let $[v] \in H_i (x_2, \ldots, x_n ; \bar{M})$. There exists $z_1 \in K_i (x_2, \ldots, x_n; M)$ with $\bar{z_1} = v$. It follows that $\partial z_1 = -x_1 z_0$ for some $z_0 \in K_{i-1} (x_2, \ldots, x_n;M)$. Since $0= \partial^2 z_1 = -x_1 \partial z_0$, we see that $\partial z_0 =0$. Now we set $z= e_1 \wedge z_0 + z_1$. Then $z$ is a cycle and $\varphi [z] = [v] $.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{main}]
Fix an integer $1 \leq i \leq n$. For each $u \in G(I)$ we define
\[
u'= \left\{ \begin{array}{ll}
(u/x_i)y , & \;\textnormal{if $x_i^2 | u$}, \\ u, & \;\text{otherwise.}
\end{array} \right.
\]
The element $u'$ is called the {\em $1$-step polarization} of $u$ with respect to the variable $x_i$, and the ideal
$I' = (\{u'\;|\; u \in G(I)\})$ is called a 1-step polarization of $I$. Obviously, the (complete) polarization of $I$ can be obtained by a sequence of 1-step polarizations.
Let $I'$ be the 1-step polarization of $I$ with respect to $x_i$. Without loss of generality, we may assume that $i=1$. We consider the Koszul complex $K(y,x_1, \ldots, x_n; I')= (\bigwedge H) \tensor I'$ where $H$ is the free $S[y]$-module with basis $f, e_1, \ldots, e_n$ and where $\partial f = y$ and $\partial e_j = x_j$ for $j=1, \ldots, n$. Let $z= \sum_{J} \lambda_J u_J e_J $ be a multi-homogeneous cycle of $K_i(x_1, \ldots, x_n;I)$ with $\deg z \leq {\bold c}$ whose homology class is non-zero.
We set $z' = \sum_{J} \lambda_J (u_J e_J)'$, where
\[
(u_J e_J)'= \left\{ \begin{array}{ll}
u_J e_J , & \;\textnormal{if $x_1 \nmid u_J$}, \\ u_{J}'e_J, & \;\text{if $x_1|u_J$ and $1 \notin J$},
\\ u_{J}e_{J}', & \;\text{if $x_1|u_J$ and $1 \in J$}.
\end{array} \right.
\]
Here $e_{J}'$ is obtained from $e_J$ by replacing the factor $e_1$ in $e_J$ by $f$.
\medskip
As an example we consider again the cycle $z= x_1x_2^2 e_1 - x_1^2x_2 e_2$ in $K_1(x_1,x_2;I)$ where $I=(x_1^2 x_2, x_1x_2^2)$. Then $z' = x_1 x_2^2 f - x_1 y x_2 e_2$.
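The three-way case distinction defining $(u_Je_J)'$ can be spelled out directly. The sketch below is an illustrative encoding (not part of the text): a term is represented by its exponent vector in the $x_i$, the exponent of the new variable $y$, the wedge index set $J$, and a flag recording whether $e_1$ has been replaced by $f$.

```python
def one_step_polarize_term(expv, J):
    # 1-step polarization with respect to x_1 of the Koszul monomial u_J e_J.
    # Returns (exponents of x, exponent of y, wedge set, uses_f):
    #   x_1 does not divide u_J   -> term unchanged;
    #   x_1 | u_J and 1 not in J  -> replace u_J by y * u_J / x_1;
    #   x_1 | u_J and 1 in J      -> replace the wedge factor e_1 by f.
    a = list(expv)
    if a[0] == 0:                      # x_1 does not divide u_J
        return tuple(a), 0, tuple(J), False
    if 1 not in J:                     # x_1 | u_J and 1 not in J
        a[0] -= 1
        return tuple(a), 1, tuple(J), False
    return tuple(a), 0, tuple(j for j in J if j != 1), True
```

Applied to the two terms of the cycle $z$ above, it returns $x_1x_2^2\,f$ and $y\,x_1x_2\,e_2$, recovering $z'$.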
\medskip
We claim that $z'$ is a cycle in $K_i(y, x_1, \ldots, x_n; I')$, and that the map
\[
H_i (y, x_1, \ldots, x_n;I') \rightarrow H_i( x_1, \ldots, x_n; I), \quad [z'] \mapsto [z]
\]
is an isomorphism. From this claim the theorem follows by induction on the number of 1-step polarizations that are required to obtain the polarization $I^{\wp}$ of $I$.
\medskip
Proof of the claim: We first show that $z'$ is a cycle, and begin with the case when $\deg_{x_1} z \leq 1$.
By the definition of $(u_J e_J)'$, we have $(u_J e_J)' = u_J e_J$ for all $J$. This shows that $z=z'$, and hence $z'$ is a cycle.
Now we discuss the case when $\deg_{x_1}z > 1$. Let $z=e_1\wedge z_0+z_1$ and $z'= f \wedge z'_0 +z'_1$ with $z_1\in K_i(x_2,\ldots,x_n;I)$ and $z'_1\in K_i(x_1,\ldots,x_n;I')$. Moreover, $z_0 = \sum_{1 \in J} \lambda_J u_J e_{J \setminus \{1\}}$ and $z_1 = \sum_{1 \notin J} \lambda_J u_J e_J$. From the definition of $z'$ we see that $z'_0 = z_0$ and $z'_1 = \sum_{1\notin J} \lambda _J u'_J e_J$, where $u'_J = y u_J / x_1 $. This implies that $z'_1 = (y/x_1 ) z_1$. Note that $\partial z_0 = 0$, since $-e_1 \wedge \partial z_0$ is the $e_1$-component of $\partial z = 0$. Applying $\partial$ to $z'$, we get $\partial (z') = y z_0 + \partial (z'_1) = y z_0 + (y/x_1) \partial (z_1)$. This shows that $x_1\partial (z') = y \partial (z) = 0$. Hence $\partial (z') = 0$.
We first observe that $y-x_1$ is a non-zerodivisor on $S[y]/I'$ and that $I' / (y-x_1)I' = I$. Therefore, by Lemma~\ref{comparison}, there exists an isomorphism $\varphi : H_i(y, x_1, \ldots, x_n; I') = H_i(y-x_1, x_1, \ldots, x_n; I') \rightarrow H_i (x_1, \ldots, x_n; I)$. Thus it remains to be shown that $\varphi([z']) = [z]$. As in Lemma~\ref{comparison}, we write $z'= (f-e_1) \wedge w_0 +w_1$.
By definition,
\begin{eqnarray*}
z'&=& \sum_{x_1 \nmid u_J} u_Je_J + \sum_{x_1 | u_J, 1 \notin J} u'_Je_J + \sum_{x_1|u_J, 1 \in J} u_J f \wedge e_{J \setminus \{1\}} \\
&=& \sum_{x_1 \nmid u_J} u_Je_J + \sum_{x_1 | u_J, 1 \notin J} u'_Je_J + \sum_{x_1|u_J, 1 \notin J} u_J (f-e_1) \wedge e_{J \setminus \{1\}} + \sum_{x_1|u_J, 1 \in J} u_J e_{J}.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
w_1&=& \sum_{x_1 \nmid u_J} u_Je_J + \sum_{x_1 | u_J, 1 \notin J} u'_Je_J + \sum_{x_1|u_J, 1 \in J} u_J e_{J}.
\end{eqnarray*}
From this it follows that $\bar{w}_1 = z$, which by the definition of $\varphi$ implies that $\varphi ([z']) = [z]$, as desired.
\end{proof}
\begin{Corollary}\label{polarize}
Let $I\subset S$ be a monomial ideal as in Theorem~\ref{main}. Let $z_1, \ldots, z_r$ be multi-homogeneous cycles with multi-degree $\leq {\bold c}$, whose homology classes form a $K$-basis of $H_i({\bold x};S/I)$ for $i \geq 1$. Then the homology classes of the cycles $z_1^\wp, \ldots, z_r^\wp$ form a $K$-basis of $H_i({\bold x}^\wp; S^\wp/I^\wp)$.
\end{Corollary}
\begin{proof}
We notice that for $i \geq 1$ there is an isomorphism $\varphi: H_i({\bold x}^\wp;S^\wp/I^\wp) \rightarrow H_{i+1}({\bold x}^\wp;I^\wp)$ with $\varphi ([z]) = [\partial(w)]$ and $w \in K({\bold x}^\wp; S^\wp)$ such that $z=w+I^\wp K({\bold x}^\wp;S^\wp)$. Since $\partial(f^\wp) = (\partial(f))^\wp$ for any multi-homogeneous element $f \in K({\bold x}; S)$ with $\deg f \leq {\bold c}$, the desired conclusion follows.
\end{proof}
As an example for the polarization of Koszul cycles, we consider whisker graphs. Let $G$ be a finite simple graph on the vertex set $[n]=\{1, \ldots, n\}$. The {\em whisker graph} $G^*$ of $G$ is the graph with the vertex set $V(G^*)=\{1, \ldots, n\} \cup \{1', \ldots, n'\}$ and the edge set $E(G^*)=E(G) \cup \{ \{1, 1'\}, \{2, 2'\}, \ldots, \{n, n'\} \}$.
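The edge set of $G^*$ can be generated mechanically from the definition. In the Python sketch below (our own encoding, not from the paper), the whisker vertex $i'$ is represented by the negative integer $-i$.

```python
def whisker_graph(n, edges):
    """Edge set of the whisker graph G* of a graph G on {1,...,n}:
    the edges of G together with one whisker edge {i, i'} per vertex,
    the whisker vertex i' being encoded here as -i."""
    E = {frozenset(e) for e in edges}
    E |= {frozenset({i, -i}) for i in range(1, n + 1)}
    return E

# The example graph from the text: edges {1,2}, {2,3}, {3,4}, {4,2}.
E_star = whisker_graph(4, [(1, 2), (2, 3), (3, 4), (4, 2)])
```

For a graph with $n$ vertices and $m$ edges, $G^*$ has $2n$ vertices and $m+n$ edges, as in the example above.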
\medskip
Figure~\ref{one} displays the whisker graph of the graph $G$ with edges $\{1,2\}, \{2,3\}, \{3,4\}$ and $\{4,2\}$.
\begin{figure}[hbt] \begin{center}
\psset{unit=0.6cm}\begin{pspicture}(0.5,1)(4.5,5) \pspolygon(2,2)(3,3.71)(4,2)\psline(3,3.71)(3,5.2)\psline(0.6,1.1)(2,2)\psline(4,2)(4,3.49) \psline(2,2)(2,3.49) \psline(0.6,1.1)(0.6,2.59) \rput(0.6,2.59){$\bullet$} \rput(2,3.49){$\bullet$}\rput(4,3.49){$\bullet$}\rput(2,2){$\bullet$}\rput(3,3.71){$\bullet$}\rput(4,2){$\bullet$}\rput(3,5.2){$\bullet$}\rput(0.6,1.1){$\bullet$}\rput(0.6,3.1){$1'$}\rput(2,4){$2'$}\rput(4.25,4){$3'$}\rput(2,1.5){$2$}\rput(3.25,4){$4$}
\rput(4,1.5){$3$}\rput(3.1,5.7){$4'$}\rput(0.6,0.6){$1$}
\end{pspicture}
\end{center}
\caption{The whisker graph $G^*$ of the graph $G$.}
\label{one}\end{figure}
Let $K$ be a field. The edge ideal $I(G)$ of $G$ is the monomial ideal in $S=K[x_1, \ldots, x_n]$ generated by the monomials $x_ix_j$ with $\{i,j\} \in E(G)$. We consider the edge ideal $I(G^*)$ of the whisker graph $G^*$ of $G$ as the monomial ideal in $S^*=K[x_1, \ldots, x_n, y_1, \ldots, y_n]$ with $I(G^*) = I(G) + (\{ x_k y_k \mid k \in [n] \})$.
Next, we let $J(G) = ( I(G) , x_1^2 , \ldots, x_n^2 )$. Then, obviously, $I(G^*) = J(G)^{\wp}$, where for simplicity we set $x_i = x_{i1} , y_i = x_{i2}$, for $i= 1, \ldots, n$. For the polarized Koszul complex of $K(x_1, \ldots, x_n; J(G))$ we use the notation $e_i = e_{i1}$ and $f_i = e_{i2}$. Given a cycle $\sum_J \lambda_J u_J e_J \in K(x_1, \ldots, x_n; J(G))$ representing a non-zero homology class, the polarized cycle in $K(x_1, \ldots, x_n, y_1, \ldots, y_n ; I(G^*))$ is given as $\sum_J \lambda_J u_J e_{J'}$, where $e_{J'} $ is obtained from $e_J$ by replacing $e_{j}$ for $j \in J$ by $f_j$ if $x_j \mid u_J$.
Note that $H_n(x_1, \ldots, x_n; J(G))$ is minimally generated by the homology classes $[u e_1\wedge \ldots \wedge e_n]$ with $u = x_{i_1}\ldots x_{i_k}$, where $\{i_1, \ldots, i_k\}$ is a maximal independent set of $G$. Recall that a subset ${\mathcal S} \subset V(G)$ is an {\em independent set} of $G$ if $\{i,j\} \notin E(G)$ for all $i,j \in {\mathcal S}$. The set ${\mathcal S}$ is called a maximal independent set if ${\mathcal S} \cup \{k\}$ is not independent for all $k \in V(G) \setminus {\mathcal S}$.
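For small graphs, the maximal independent sets that index these homology classes can be enumerated by brute force. The following Python sketch (illustrative only, with running time exponential in the number of vertices) lists every independent subset and keeps the inclusion-maximal ones.

```python
from itertools import combinations

def maximal_independent_sets(vertices, edges):
    """All maximal independent sets of a small graph, by brute force:
    list every independent subset, then keep the inclusion-maximal ones."""
    E = {frozenset(e) for e in edges}
    def independent(S):
        return all(frozenset(p) not in E for p in combinations(S, 2))
    ind = [set(S) for r in range(len(vertices) + 1)
           for S in combinations(sorted(vertices), r) if independent(S)]
    return [S for S in ind if not any(S < T for T in ind)]

# For the graph with edges {1,2}, {2,3}, {3,4}, {4,2} considered above:
mis = maximal_independent_sets([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (2, 4)])
```

For the displayed graph this returns the three maximal independent sets $\{2\}$, $\{1,3\}$ and $\{1,4\}$.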
It follows from Corollary \ref{polarize}, that the elements
\begin{eqnarray}\label{basis}
x_{i_1} \ldots x_{i_k} e_{j_1} \wedge \ldots \wedge e_{j_{n-k}}\wedge f_{i_1} \wedge \ldots \wedge f_{i_k}
\end{eqnarray}
form a basis of $H_n(x_1, \ldots, x_n, y_1, \ldots, y_n ; S^*/I(G^*))$, where ${\mathcal S} = \{i_1, \ldots, i_k\}$ is a maximal independent set of $G$ and $\{j_1, \ldots, j_{n-k}\}=V(G) \setminus {\mathcal S}$.
\section{Powers of whisker graphs}\label{whisker graphs}
In this section we study the powers of whisker graphs. For the formulation of the main result we introduce the following concept. Let $G$ be a finite simple graph on $[n]$, and let ${\mathcal S}$ be a maximal independent subset of $V(G)$. We define the graph $\Gamma_{{\mathcal S}}(G)$ with vertex set $V(\Gamma_{{\mathcal S}}(G)) = {\mathcal S}$ and $\{i,j\} \in E(\Gamma_{{\mathcal S}}(G))$ if and only if there exists $k \in V(G) \setminus {\mathcal S}$ such that $\{i,k\}, \{j,k\}
\in E(G)$.
\begin{Proposition}\label{gamma}
Let $G$ be a finite simple connected graph. Then there exists an independent set ${\mathcal S}$ such that $\Gamma_{{\mathcal S}}(G)$ is connected.
\end{Proposition}
\begin{proof}
Let $\Delta(G)$ be the clique complex of $G$ with cliques $F_1, \ldots, F_r$. We construct the independent set ${\mathcal S}$ of $G$ as follows. Let $v_1 \in V(F_1)$. We may assume that $v_1 \in V(F_i)$ for $i = 1, \ldots, t$ and $v_1 \notin V(F_i)$ for $i>t$. If $t=r$, then we are done. Assume that $t <r$. Since $G$ is connected, there exists $F_i$ with $i>t$, say $F_{t+1}$, such that $V(F_{t+1}) \cap V(F_j) \neq \emptyset $ for some $j \leq t$. Since $F_{t+1}$ is a maximal clique, $V(F_{t+1}) \not\subset \bigcup_{i=1}^t V(F_i)$, because otherwise $v_1 \in V(F_{t+1})$, a contradiction. Hence, we may choose $v_2 \in V(F_{t+1} )\setminus \bigcup_{i=1}^t V(F_i) $. We may assume that $v_2 \in V(F_i)$ for $i=t+1, \ldots, s$ and that $v_2$ does not belong to any other clique. If $s=r$, then $\Gamma_{{\mathcal S}}(G) $ is a line graph with vertex set $ \{v_1, v_2\}$. Indeed, $\{v_1, v_2\} \notin E(G)$, because the set of neighbors of $v_1$ is contained in $\bigcup_{i=1}^t V(F_i)$ and $v_2 \notin \bigcup_{i=1}^t V(F_i) $. On the other hand, if $k \in V(F_{t+1}) \cap V(F_j)$, then $\{v_1,k\}, \{v_2,k\} \in E(G)$. Therefore, $\{v_1, v_2\} \in E(\Gamma_{{\mathcal S}}(G))$.
Consider all $F_j$ with $j >s$ such that $V(F_j) \subset \bigcup_{i=1}^s V(F_i)$. We may assume that this is the case precisely for $F_{s+1}, \ldots, F_k$. If $k=r$, then $\{v_1, v_2\}$ is an independent set for $G$, and we are done. If $k<r$, then, since $G$ is connected, there exists a clique $F_i$, say $F_{k+1}$, such that $V(F_{k+1}) \cap V(F_j) \neq \emptyset $ for some $j\leq k$ and $V(F_{k+1}) \not\subset \bigcup_{i=1}^s V(F_i) (= \bigcup_{i=1}^k V(F_i) ) $. We choose $v_3 \in V(F_{k+1}) \setminus \bigcup_{i=1}^s V(F_i)$. If $j\leq t$, then $\{v_1, v_3\} $ will be an edge of $\Gamma_{{\mathcal S}}(G)$, and if $t+1\leq j\leq s$, then $\{v_2,v_3\}$ will be an edge of $\Gamma_{{\mathcal S}}(G)$. Proceeding in this way, we obtain the desired independent set ${\mathcal S}$ of $G$ such that $\Gamma_{{\mathcal S}}(G)$ is connected.
\end{proof}
We call an independent set ${\mathcal S}$ of $G$ {\em friendly} if it satisfies the condition that $\Gamma_{{\mathcal S}}(G)$ is connected. For example, consider the line graph $L$ on vertex set $[4]$ with edges $\{\{1,2\}, \{2,3\}, \{3,4\}\}$. Then ${\mathcal S}= \{1,3\}$ is a friendly independent set of $L$, while $\{1,4\}$ is not.
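Whether a given independent set is friendly is again easy to test mechanically. The sketch below (function names are our own) builds $\Gamma_{{\mathcal S}}(G)$ directly from the definition and checks its connectedness by a graph search; it reproduces the two cases for the line graph $L$.

```python
def gamma_edges(S, edges):
    """Edges of Gamma_S(G): i, j in S are joined iff they have a common
    neighbour k outside S."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return {frozenset({i, j}) for i in S for j in S
            if i < j and any(k not in S for k in adj[i] & adj[j])}

def is_friendly(S, edges):
    """S is friendly iff Gamma_S(G) is connected (depth-first search)."""
    E = gamma_edges(S, edges)
    nodes = sorted(S)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        v = stack.pop()
        for e in E:
            if v in e:
                (w,) = e - {v}
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return seen == set(S)

L = [(1, 2), (2, 3), (3, 4)]     # the line graph on [4]
```

Here `is_friendly({1, 3}, L)` holds, since vertex $2$ is a common neighbour of $1$ and $3$, whereas `is_friendly({1, 4}, L)` fails, since $1$ and $4$ have no common neighbour.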
\begin{Theorem}\label{whisker}
Let $G$ be a finite simple connected graph on vertex set $[n]$, and let $G^*$ be the whisker graph of $G$. Furthermore, let $I(G^*)\subset S^*=K[x_1, \ldots, x_n, y_1, \ldots, y_n]$ be the edge ideal of $G^*$. Then
\[
\depth (S^*/I(G^*)^k) \leq n-k+1 , \text{\; for \;} k= 1, \ldots, n.
\]
\end{Theorem}
\begin{proof}
Let $M$ be an $S^*$-module and consider the Koszul complex
\[
K(M) = K(x_1, \ldots, x_n, y_1, \ldots, y_n;M)
\]
with $K_1(M) = \Dirsum_{i=1}^n M e_i \dirsum \Dirsum_{j=1}^n M f_j$ and $\partial e_i = x_i$ and $\partial f_j = y_j$, for all $i,j$.
We first show that
\[
H_{2n-2} (I(G^*)^n) \neq 0.
\]
This will imply that $\depth (S^*/ I(G^*)^n) \leq 1$. To see that the above Koszul homology does not vanish, we proceed as follows.
By Proposition~\ref{gamma} we may choose a friendly independent set ${\mathcal S}$ of $G$ with $|{\mathcal S}| =s$. Since $\Gamma_{{\mathcal S}}(G)$ is connected, there exists a spanning tree $T$ of $\Gamma_{{\mathcal S}}(G)$ with $s-1$ edges $\alpha_1, \ldots, \alpha_{s-1}$. We may assume that $\alpha_1, \ldots, \alpha_{s-1}$ is a leaf order for $T$. In other words, the following conditions are satisfied: (i) $\alpha_1$ has a free vertex in $T$, (ii) for each $j>1$, $ \alpha_j \cap \alpha_i \neq \emptyset$ for some $i<j$, and $\alpha_j$ has a free vertex in the tree $T_j$ with edges $\alpha_1, \ldots, \alpha_{j}$. Now we label the vertices of $T$ inductively as follows: $1$ is the free vertex of $\alpha_1$ in $T_1$, and the other vertex of $T_1$ is given the label $2$. Suppose the labeling of $T_{j-1}$ is defined. Then we give the new vertex of $T_j$ the label $j+1$. Then $\alpha_1=\{1,2\}$ and, for each $j>1$, $\alpha_j=\{i_j,j+1\}$, where $\{i_j\}= \alpha_j \cap \alpha_i$.
Figure~\ref{two} gives an example of such a labeling.
\begin{figure}[hbt]\begin{center}
\psset{unit=0.6cm}\begin{pspicture}(0.5,1.5)(4.5,3)\psline(2,2)(4,2)\psline(0,1)(2,2) \psline(0,3)(2,2)\psline(4,2)(6,1)\psline(4,2)(6,3)\rput(2,2){$\bullet$}\rput(4,2){$\bullet$}\rput(0,1){$\bullet$}
\rput(0,3){$\bullet$}\rput(6,1){$\bullet$}\rput(6,3){$\bullet$}\rput(2,1.5){$2$}\rput(4,1.5){$4$}\rput(-0.4,0.9){$1$}
\rput(-0.4,3.1){$3$}\rput(6.4,0.9){$5$}\rput(6.4,3.1){$6$}\rput(3,1.6){$\alpha_3$}\rput(1.3,1.1){$\alpha_1$}
\rput(1.3,2.9){$\alpha_2$}\rput(4.9,2.9){$\alpha_4$}\rput(5,1.1){$\alpha_5$}
\end{pspicture}
\end{center}
\caption{A labeled spanning tree $T$ with leaf order $\alpha_1, \ldots, \alpha_5$.}
\label{two}\end{figure}
According to our labeling of $T$, we have ${\mathcal S}= \{1, \ldots, s\}$. By definition of $\Gamma_{{\mathcal S}}(G)$, there exists for each edge $\alpha_j=\{i_j,j+1\} \in E(T)$, a vertex $v_j \in \{s+1, \ldots, n\}$ such that $\{i_j,v_{j}\}, \{v_j,j+1\} \in E(G)$. Then $z_j= x_{i_j}x_{v_j} e_{j+1} - x_{j+1} x_{v_j} e_{i_j}$ is a cycle belonging to $Z_1(I(G^*))$.
Furthermore, for each $k \in \{s+1, \ldots, n\}$, we choose $j_k \in {\mathcal S}$ such that $\{k, j_k\} \in E(G)$. Then $z_k = x_k x_{j_k} f_k - x_k y_k e_{j_k} $ is a cycle belonging to $Z_1(I(G^*))$. This gives $n-s$ such cycles.
Let
\[
c= \prod _{i =1}^s x_i e_{s+1} \wedge \ldots \wedge e_n \wedge f_1 \wedge \ldots \wedge f_s.
\]
Note that by (\ref{basis}), $c$ is a cycle in $Z_n(S^*/I(G^*))$ whose homology class $[c]$ in $H_n(S^*/I(G^*))$ is non-zero. In particular, $[\partial (c)]$ is a non-zero homology class in $H_{n-1}(I(G^*))$.
Let
\[
a=c \wedge z_1\wedge \ldots \wedge z_{s-1} \wedge z_{s+1} \wedge \ldots \wedge z_n.
\]
Observe that $a \in K_{2n-1}(I(G^*)^{n-1})$. We set $z= \partial (a)$. Then $z \in Z_{2n-2} (I(G^*)^n)$. Indeed, $z = \partial (c) \wedge z_1\wedge \ldots \wedge z_{s-1} \wedge z_{s+1} \wedge \ldots \wedge z_n$, and it has coefficients in $I(G^*)^n$ because $\partial (c)$ and each $z_i$ has coefficients in $I(G^*)$.
We claim that $[z]$ is a non-zero homology class in $H_{2n-2} (I(G^*)^n)$. To prove the claim, we show that $z$ is not a boundary, that is, there does not exist any $b \in K_{2n-1}(I(G^*)^n)$ such that $z= \partial b$. On the contrary, assume that such a $b$ exists. Then $\partial (b) = \partial (a) =z$ implies $\partial ( a-b) = 0$, which gives $a-b \in Z_{2n-1}(I(G^*)^{n-1} )$. Then there exists $b' \in K_{2n}(S^*)$ such that $\partial (b')= a-b$, where $b' = v e_1 \wedge \ldots \wedge e_n \wedge f_1 \wedge \ldots \wedge f_n$ and $v$ is a monomial in $S^*$, because all cycles under consideration are ${\NZQ Z}^{2n}$-graded.
Note that
\[
w= ( \prod _{i=1}^s x_i ) (\prod_{k=s+1}^n x_k x_{j_k} ) (\prod_{j=1}^{s-1} x_{i_j} x_{v_j}) e_2 \wedge \cdots \wedge e_{n} \wedge f_1 \wedge \cdots \wedge f_n
\]
with $i_j, j_k \in {\mathcal S}$, $k, v_j \in \{s+1, \ldots, n\}$ is a non-zero term of $a$ and it is not cancelled by any other term of $a$ because the product $ e_2 \wedge \cdots \wedge e_n \wedge f_1 \wedge \cdots \wedge f_n$ appears only once in the expansion of $a$. To see this, consider
\begin{eqnarray}\label{c}
c \wedge z_{s+1} \wedge \cdots \wedge z_n = ( \prod _{i=1}^s x_i ) (\prod_{k=s+1}^n x_k x_{j_k} ) e_{s+1} \wedge \cdots \wedge e_{n} \wedge f_1 \wedge \cdots \wedge f_n.
\end{eqnarray}
Therefore, it follows that the term $w$ appears only once in the expansion of $a$ if $e_2 \wedge \cdots \wedge e_{s}$ appears only once in the expansion of $z_1 \wedge \cdots \wedge z_{s-1}$. To see this, we write $z_j=g_j - h_j$, where $g_j=x_{i_j}x_{v_j} e_{j+1} $ and $h_j=x_{j+1} x_{v_j} e_{i_j}$ for $j= 1, \ldots, s-1$. Note that for $1 \leq t \leq s-1$ the wedge product $z_1\wedge \cdots \wedge z_{t}$ is a linear combination of terms $g_{i_1}\wedge \cdots \wedge g_{i_k} \wedge h_{j_1}\wedge \cdots \wedge h_{j_l}$ with $\{i_1, \ldots, i_k\}\cup\{ j_1, \ldots, j_l\}= \{1, \ldots, t\}$. We prove by induction on $t$ that among these terms $g_1 \wedge \cdots \wedge g_t$ is the only term that does not contain $e_1$. For $t=1$, the assertion is trivial. Now let $t>1$ and assume that the only term that does not contain $e_1$ is $g_1 \wedge \cdots \wedge g_{t-1}$. Then the only terms of $z_1 \wedge \cdots \wedge z_t$ which do not contain $e_1$ are either
$g_1 \wedge \cdots \wedge g_{t}$ or $g_1 \wedge \cdots \wedge g_{t-1} \wedge h_t$. However, by the definition of the cycles $z_j$ given in terms of the tree $T$, we have $i_t \in \{1, \ldots, t\}$. If $i_t = 1$, then $h_t$ contains $e_1$, and if $i_t \in \{2, \ldots, t\}$, then $e_{i_t}$ also appears as a factor of $g_{i_t-1}$. Therefore, $g_1 \wedge \cdots \wedge g_{t-1} \wedge h_t$ either contains $e_1$ or vanishes.
\medskip
Next, we show that $a \in K_{2n-1}(I(G^*)^{n-1}) \setminus K_{2n-1} (I(G^*)^n)$. For this it suffices to show that $w \in K_{2n-1}(I(G^*)^{n-1}) \setminus K_{2n-1} (I(G^*)^n)$, because $w$ is a non-zero term of $a$ which does not cancel against any other term of $a$, as we have just seen. In fact, in the coefficient $ ( \prod _{i=1}^s x_i ) (\prod_{k=s+1}^n x_k x_{j_k} ) (\prod_{j=1}^{s-1} x_{i_j} x_{v_j}) $ of $w$ there are $n-1$ factors with indices in $\{s+1, \ldots, n\}$ and $n+s-1$ factors with indices in ${\mathcal S}=\{1, \ldots, s\}$. Since ${\mathcal S}$ is a maximal independent set, this implies that $w$ contains a product of exactly $n-1$ generators of $I(G^*)$.
Since all coefficients of $b=a-\partial(b')$ are in $I(G^*)^n$ and the term $w$ which appears in the expansion of $a$ does not have coefficient in $I(G^*)^{n}$, $w$ must be cancelled by some term of $\partial (b')$. This gives
\[
v x_1 e_2 \wedge \cdots \wedge e_{n} \wedge f_1 \wedge \cdots \wedge f_n = ( \prod _{i=1}^s x_i ) (\prod_{k=s+1}^n x_k x_{j_k} ) (\prod_{j=1}^{s-1} x_{i_j} x_{v_j}) e_2 \wedge \cdots \wedge e_{n} \wedge f_1 \wedge \cdots \wedge f_n,
\]
which implies
\[
v= ( \prod _{i=2}^s x_i ) (\prod_{k=s+1}^n x_k x_{j_k} ) (\prod_{j=1}^{s-1} x_{i_j} x_{v_j}) \in I(G^*)^{n-1}.
\]
The coefficient of the term $v y_n e_1 \wedge \ldots \wedge e_n \wedge f_1 \wedge \cdots \wedge f_{n-1} $ which appears in the expansion of $\partial(b')$ does not belong to $I(G^*)^{n}$, because $x_n$ is the only neighbor of $y_n$. Also, the term $v y_n e_1 \wedge \ldots \wedge e_n \wedge f_1 \wedge \cdots \wedge f_{n-1} $ is not cancelled by any of the terms of $a$, because from (\ref{c}) we can see that all terms of $a$ contain the wedge product $f_1\wedge \cdots \wedge f_n$ as a factor. Hence, our assumption that $z$ is a boundary leads to a contradiction.
For simplicity of notation, we set $z'_i=z_i$ for $i = 1, \ldots, s-1$ and $z'_i=z_{i+1}$ for $i=s, \ldots, n-1$. Note that $\partial(c)\wedge z'_1 \wedge \cdots \wedge z'_{k-1} \in Z_{n+k-2}(I(G^*)^k)$. We claim that this cycle is not a boundary in $K(I(G^*)^k)$. This then implies that $\depth (S^*/I(G^*)^k) \leq n-k+1$, since $H_{n+k-1}(S^*/I(G^*)^k) \iso H_{n+k-2} (I(G^*)^k) \neq 0$.
In order to prove the claim, assume that there exists $b \in K_{n+k-1}(I(G^*)^k)$ such that $\partial (b) = \partial (c) \wedge z'_1 \wedge \cdots \wedge z'_{k-1}$. Let $b'=b \wedge z'_k \wedge \cdots \wedge z'_{n-1} $. Then $b' \in K_{2n-1}(I(G^*)^n)$ and $\partial(b') = \partial (b) \wedge z'_k \wedge \cdots \wedge z'_{n-1} = z$, a contradiction.
\end{proof}
The hypothesis of Theorem~\ref{whisker} that $G$ is connected is needed. For example, if we take the disconnected graph $G$ on vertex set $[4]$ with edges $\{\{1,2\}, \{3,4\}\}$, then $\depth(S^*/I(G^*)^4) = 2$, whereas the bound of the theorem would give $n-k+1=1$.
\begin{Remark}{\em
Let $I$ be an arbitrary monomial ideal in $K[x_1, \ldots, x_n, y_1, \ldots, y_n]$. In \cite[Theorem 3.3]{HQ}, it is shown that $\depth (S/I^k) \leq 2n-k+1$ for $k=1, \ldots, r$, where $r< 2n$ is a number depending on $I$. Comparing this result with our Theorem~\ref{whisker}, where $I$ is the edge ideal of a whisker graph, our bound is about half of the bound which is valid for general monomial ideals.}
\end{Remark}
\begin{Corollary}\label{limit}
Let $G$ be a finite simple connected graph on vertex set $[n]$, $G^*$ be the whisker graph of $G$, and $I(G^*)\subset S^*$ be the edge ideal of $G^*$. If $G$ is bipartite, then $\depth (S^*/I(G^*)^k) =1$ for all $k \geq n$, and if $G$ is non-bipartite, then $\depth (S^*/I(G^*)^k) =0$ for all $k\geq n$.
\end{Corollary}
\begin{proof}
Suppose first that $G$ is bipartite. Then $G^*$ is bipartite as well. It follows from \cite[Theorem 5.9]{SVV} that the $\depth (S^*/I(G^*)^k )\geq 1$ for all $k$. Thus our Theorem~\ref{whisker} implies that $\depth(S^*/I(G^*)^n)=1$. On the other hand, since the Rees ring $R(I(G^*))$ of $I(G^*)$ is Cohen-Macaulay (see for example \cite[Corollary 5.20]{EH}), the result of Eisenbud and Huneke \cite[Proposition 3.3]{EH2} yields the desired conclusion.
Now let $G$ be a non-bipartite graph. It follows from \cite[Corollary 4.3]{CMS}, applied to our case, that $\Ass (S^*/I(G^*)^k) = \Ass(S^*/I(G^*)^{n})$ for all $k \geq n$. On the other hand, since $G$ is non-bipartite, we know by \cite[Corollary 3.4]{CMS} that $\depth (S^*/I(G^*)^k) =0$ for $k \gg 0$. This implies that $\depth (S^*/I(G^*)^k) =0$ for all $k\geq n$.
\end{proof}
In general the upper bounds for the depth of the powers of the edge ideal of a whisker graph given in Theorem~\ref{whisker} are not attained. For example, if $G$ is a 3-cycle then $\depth(S^*/I(G^*))=3 $ and $\depth(S^*/I(G^*)^k)=0$ for all $k \geq 2$. Even if $G$ is bipartite, this bound is not attained. For example, if $G$ is a 4-cycle, then $\depth(S^*/I(G^*))=4$, $\depth(S^*/I(G^*)^2)=3$ and $\depth(S^*/I(G^*)^k) =1$ for $k \geq 3$.
On the other hand, Villarreal showed \cite[Proposition 6.3.7]{V}, that if $G$ is a forest then $\depth(S^*/I(G^*)^2) \geq n-1$. Together with our Theorem~\ref{whisker} it follows that $\depth(S^*/I(G^*)^2) = n-1$. By using the arguments applied in Villarreal's proof, we now show more generally
\begin{Theorem} \label{tree}
\label{whiskertree}
Let $G$ be a forest on $n$ vertices and let
\[
I=I(G^*)\subset S^*=K[x_1,\dots,x_n,y_1,\ldots,y_n]
\]
be the edge ideal of $G^*$. Then
\[
\depth(S^*/I^k) \geq n-k+1 \quad \text{for $k=1,\ldots,n$}.
\]
\end{Theorem}
\begin{proof}
We show this by induction on $k + n$. If $k=1$, then $\depth(S^*/I^k)=n$, since $I$ is a Cohen--Macaulay ideal of height $n$; for $n=1$ the assertion is trivial.
Let $x_n$ be a free vertex of the forest $G$ with $\{x_{n-1}, x_n\} \in E(G)$. Following the notation used in the proof of \cite[Proposition 6.3.7]{V}, we denote by $J$ the ideal which is obtained from $I$ by substituting $x_n=0$, and by $L$ the ideal which is obtained from $J$ by substituting $x_{n-1} = 0$. Furthermore, we set $K=(J^k, x_{n-1}x_n, x_ny_n)$. Since $(J^k, x_{n-1}x_n):x_n = (L^k, x_{n-1})$, we obtain the exact sequence
\begin{eqnarray*}\label{depth1}
0 \rightarrow S^*/ (L^k, x_{n-1}) \rightarrow S^* / ( {J^k, x_{n-1}x_n} )\rightarrow S^* / (J^k, x_n) \rightarrow 0 .
\end{eqnarray*}
Since $J$ is the edge ideal of a whisker forest on $n-1$ vertices and $L$ is the edge ideal of a whisker forest on $n-2$ vertices, our induction hypothesis implies that
\[
\depth (S^*/ (L^k, x_{n-1})) \geq n-k+2 \text{ and } \depth (S^* / (J^k, x_n)) \geq n-k+1.
\]
This implies that
\begin{eqnarray}\label{depth2}
\depth (S^* / (J^k, x_{n-1}x_n )) \geq n-k+1.
\end{eqnarray}
Since $(J^k, x_{n-1}x_n): x_ny_n = (L^k, x_{n-1})$, we obtain the exact sequence
\begin{eqnarray}\label{depth3}
0 \rightarrow S^*/ (L^k, x_{n-1}) \rightarrow S^* / ( {J^k, x_{n-1}x_n} )\rightarrow S^* / K \rightarrow 0 .
\end{eqnarray}
From (\ref{depth2}) and (\ref{depth3}), we obtain
\begin{eqnarray}\label{depth4}
\depth (S^* / K) \geq n-k+1.
\end{eqnarray}
Note that $(I^k, x_n y_n) = (J, x_{n-1}x_n)^k + (x_n y_n)$. Therefore, $(I^k , x_n y_n) : x_{n-1} x_n = (J, x_{n-1} x_n)^k : x_{n-1}x_n + (y_n)$. Since $(J, x_{n-1} x_n)$ is the edge ideal of a graph for which $\{n-1, n\}$ is an edge with free vertex $n$, it follows by a result of Morey \cite[Lemma 2.10]{M} that $(J, x_{n-1} x_n)^k : x_{n-1}x_n = (J, x_{n-1} x_n)^{k-1}$. Therefore, altogether we have $(I^k, x_ny_n): x_{n-1}x_n = (J, x_{n-1}x_n)^{k-1} + (y_n)$. Thus, we obtain the exact sequence
\begin{eqnarray}\label{depth5}
0 \rightarrow S^*/ ((J, x_{n-1}x_n)^{k-1} + (y_n) ) \rightarrow S^* / ( {I^k, x_n y_n} )\rightarrow S^* / K \rightarrow 0 .
\end{eqnarray}
We claim that for $k=2, \ldots, n$ we have $\depth (S^*/ (J, x_{n-1}x_n)^{k-1} ) \geq n-k+2$. Therefore, (\ref{depth4}) and (\ref{depth5}) imply
\begin{eqnarray}\label{depth6}
\depth (S^* / ({I^k, x_n y_n}) ) \geq n-k+1.
\end{eqnarray}
By using (\ref{depth6}) and our induction hypothesis, the exact sequence
\begin{eqnarray*}
0 \rightarrow S^*/ I^{k-1} \rightarrow S^* / I^k \rightarrow S^* / (I^k, x_n y_n) \rightarrow 0 .
\end{eqnarray*}
yields that $\depth (S^*/ I^k) \geq n-k+1$ and proves our theorem.
It remains to prove the claim. For that we use induction on $k$. For $k=2$, this inequality is shown in the proof of \cite[Proposition 6.3.7]{V}. Suppose that $k>2$. Since $(J, x_{n-1}x_n)$ is the edge ideal of a tree with free vertex $n$, we may apply \cite[Lemma 2.10]{M}, and obtain the exact sequence
\begin{eqnarray*}\label{depth7}
0 \rightarrow S^*/ (J, x_{n-1}x_n)^{k-2} \rightarrow S^* / ( J, x_{n-1} x_n )^{k-1} \rightarrow S^* / (J^{k-1}, x_{n-1}x_n) \rightarrow 0 .
\end{eqnarray*}
By our induction hypothesis, $\depth ( S^*/ (J, x_{n-1}x_n)^{k-2} ) \geq n-k+3$, and (\ref{depth2}) applied for $k-1$ yields $\depth ( S^*/ (J^{k-1}, x_{n-1}x_n) ) \geq n-k+2$. Therefore, it follows that $\depth (S^* / ( J, x_{n-1} x_n )^{k-1} ) \geq n-k+2$, as desired.
\end{proof}
By combining Theorem~\ref{whisker} and Theorem~\ref{tree}, we obtain
\begin{Corollary}
Let $G$ be a tree. Then
\[
\depth(S^*/I(G^*)^k) = n-k+1 \quad \text{for $k=1,\ldots,n$}.
\]
\end{Corollary}
More generally, we expect that if $G$ is a forest with $m$ connected components, then
\[
\depth(S^*/I(G^*)^k) = \left\{ \begin{array}{ll}
n-k+1, & \text{if $k\leq n-m+1$}, \\
m, &\text{if $k \geq n-m+1$}. \end{array} \right.
\]
\title{Projection error-based guaranteed $L^2$ error bounds for finite element approximations of Laplace eigenfunctions}
\begin{abstract}
For conforming finite element approximations of the Laplacian eigenfunctions, a fully computable guaranteed error bound in the $L^2$ norm sense is proposed. The bound is based on the {\em a priori} error estimate for the Galerkin projection of the conforming finite element method, and has an optimal speed of convergence for the eigenfunctions with the worst regularity. The resulting error estimate bounds the distance of spaces of exact and approximate eigenfunctions and, hence, is robust even in the case of multiple and tightly clustered eigenvalues. The accuracy of the proposed bound is illustrated by numerical examples. The demonstration code is available at \url{https://ganjin.online/xfliu/EigenfunctionEstimation4FEM}.
\end{abstract}
\section{Introduction}
Deriving guaranteed error bounds for approximate eigenfunctions of the Laplace operator is a challenging task due to the possible ill-posedness of eigenfunctions. In the case of multiple and tightly clustered eigenvalues, the corresponding exact eigenfunctions are sensitive even to small perturbations of the problem and may change abruptly. Any accurate error bound has to take this sensitivity into account. Therefore, the recent results \cite{CanDusMadStaVoh2017,CanDusMadStaVoh2018,CanDusMadStaVoh2020,LiuVej2022} consider an arbitrary cluster of eigenvalues and the space generated by the corresponding eigenfunctions. The resulting error bounds then estimate a distance between the eigenfunction spaces associated to exact and approximate eigenvalues. The particular distances between spaces are naturally based either on the energy or the $L^2$ norm.
Interestingly, the $L^2$ norm bounds provided by Algorithm I of \cite{LiuVej2022}, which is based on the Rayleigh quotients of the approximate eigenfunctions, are considerably less accurate than the corresponding bounds in the energy norm. For the linear FEM solutions to the Laplacian eigenvalue problem, the energy norm bounds of Algorithm I exhibit the optimal rate of convergence, whereas the $L^2$ error bounds often converge sub-optimally. It is worth pointing out that the residual error-based Algorithm II of \cite{LiuVej2022} provides an optimal $L^2$ norm bound through the reconstructed flux and the Prager--Synge method; see the computation results for the L-shaped domain in \S \ref{sec:l-shape}.
\medskip
In this paper, we bound the $L^2$ error by utilizing the {\em a priori} error estimation proposed in \cite{LiuOis2013} for boundary value problems; the resulting bound achieves the optimal rate of convergence and yields more accurate numerical results.
For the eigenfunction space $E$ associated with a cluster $\mathcal{C} = \{\lambda_n, \ldots, \lambda_N\}$ of eigenvalues, we obtain the following bounds in Lemma \ref{thm:u_and_pi_k_u_new_ver} and Theorem \ref{th:main_new_}:
\begin{equation*}
\delta_b(E,E_h)
\le (1+\beta)
\max_{u \in E, \|\nabla u\|=1} \|u - P_h u\|
\le (1+\beta) {\lambda_N} C_h^2~.
\end{equation*}
Here, $\delta_b(E,E_h)$ is the $L^2$ norm directed distance to measure the distance between $E$ and its approximate eigenspace $E_h$;
$\beta$ is a quantity related to the cluster width and the gap between the
cluster $\mathcal{C}$ and the rest
of the spectrum;
$C_h$ is a quantity with
an explicitly known or computable value that comes from
the {\em a priori} error estimation for the projection operator $P_h$. The proposed estimate of the quantity $\beta$ can be regarded as an improvement of \cite{CarGed2011}; see the comparison in Remark \ref{remark:comparison_carsten_gedick}. Also, the obtained explicit bounds are consistent with the standard qualitative analysis for eigenfunction approximations; see the discussion in Remark \ref{remark:convergence-rate}.
\medskip
To evaluate the bounds on eigenfunctions, suitable lower and upper bounds on eigenvalues are required. In this paper, we assume that sufficiently accurate two-sided bounds on eigenvalues are available, although we admit that computing guaranteed eigenvalue bounds, especially from below, is not a simple task.
We use the recent method \cite{Liu2015} based on the explicitly known interpolation constant for the Crouzeix--Raviart finite element method; see also \cite{LiuOis2013,CarGal2014,CarGed2014}. This method provides lower bounds on eigenvalues, and we further use the Lehmann--Goerisch method \cite{Lehmann1949,Lehmann1950,GoeHau1985} to compute their high-precision improvements.
Let us note that there is a vast literature on error estimates for symmetric elliptic eigenvalue problems. Classical works \cite{Chatelin1983,BabOsb:1991,Boffi:2010} provide the fundamental theories.
Many existing {\em a posteriori} error bounds on eigenvalues contain unknown constants or are valid asymptotically; see, e.g., \cite{DurGasPad1999,ArmDur2004,Yang2010,MehMie2011,DarDurPad2012,GiaHal2012,JiaCheXie2013,HuHuaLin2014}.
In the last years, several results providing fully computable and guaranteed a posteriori error estimates for eigenvalues appeared; see \cite{CarGal2014,CarGed2014,Liu2015,LiuOis2013,SebVej2014,Vejchodsky2018b,Vejchodsky2018,carstensen2021direct}.
These estimates contain no unknown constants and bound eigenvalues on all meshes, not only asymptotically.
In particular, the general framework proposed in \cite{Liu2015} was applied to the Stokes eigenvalue problem \cite{Xie2LIU-2018}, Steklov eigenvalue problem \cite{you-xie-liu-2019}, and biharmonic operators related to the quadratic interpolation error constants \cite{liu-you:2018,LiaoYuLiu2019}.
The series of papers \cite{CanDusMadStaVoh2017,CanDusMadStaVoh2018,CanDusMadStaVoh2020} provides guaranteed, robust, and optimally convergent {\em a posteriori} bounds for eigenvalues and even for corresponding eigenfunctions for both conforming and nonconforming approximations. The last paper in the series solves the difficult case of multiple and tightly clustered eigenvalues.
The recent work \cite{LiuVej2022} proposes two algorithms to handle multiple and tightly clustered eigenvalues as well and provides alternative guaranteed and fully computable error bounds for eigenfunctions.
Particularly, the residual error-based Algorithm II of \cite{LiuVej2022} provides high-precision bounds by successfully extending the Davis--Kahan theorem to weakly formulated eigenvalue problems.
\medskip
The rest of the paper is organized as follows.
Section~\ref{se:eigenproblem} briefly recalls the Laplace eigenvalue problem, its discretization by the finite element method, and division of the spectrum into clusters.
Section~\ref{se:optimalL2} derives a projection error-based bound for finite element eigenfunctions in the $L^2$ sense.
Section~\ref{se:numex} presents the results of two numerical examples and Section~\ref{se:conclusions} draws the conclusions.
An online demonstration is available at:
\begin{center}
\url{https://ganjin.online/xfliu/EigenfunctionEstimation4FEM}
\end{center}
\section{Laplace eigenvalue problem}
\label{se:eigenproblem}
Let us consider the Laplace eigenvalue problem: find eigenvalues $\lambda_i \in \mathbb{R}$ and corresponding eigenfunctions $u_i \neq 0$ such that
\begin{equation}
\label{eq:modpro}
-\Delta u_i = \lambda_i u_i \quad\text{in }\Omega,
\qquad
u_i = 0 \quad\text{on }\partial\Omega,
\end{equation}
where $\Omega \subset \mathbb{R}^d$ is a bounded Lipschitz domain.
The weak formulation of this eigenvalue problem reads: find $\lambda_i \in \mathbb{R}$ and $u_i \in H^1_0(\Omega) \setminus \{0\}$ such that
\begin{equation}
\label{eq:weakf}
(\nabla u_i, \nabla v) = \lambda_i (u_i,v) \quad \forall v \in H^1_0(\Omega),
\end{equation}
where $H^1_0(\Omega)$ is the usual Sobolev space of square integrable functions with square integrable gradients and zero traces on the boundary $\partial\Omega$, and $(\cdot,\cdot)$ stands for the $L^2(\Omega)$ inner product.
The Laplace eigenvalue problem is well studied in \cite{BabOsb:1991,Boffi:2010}.
There exists a countable sequence of eigenvalues
$$
0 < \lambda_1 \leq \lambda_2 \leq \cdots,
$$
where we repeat each eigenvalue according to its multiplicity.
The corresponding eigenfunctions $u_i \in H^1_0(\Omega)$ are assumed to be normalized such that
$$
(u_i,u_j) = \delta_{ij}, \quad i,j = 1,2, \dots.
$$
We discretize problem \eqref{eq:weakf} by the standard conforming finite element method. For simplicity, we assume $\Omega$ to be a polytope.
We consider the usual conforming simplicial mesh
$\mathcal{T}_h$ in $\Omega$ and define the finite element space $V_h$ of piece-wise polynomial and continuous functions over the mesh
$\mathcal{T}_h$ satisfying the Dirichlet boundary conditions as
$$
V_h =\{v_h \in H^1_0(\Omega) : {v_h} |_K \in \mathbb{P}_p(K) \text{ for all } K \in \mathcal{T}_h \},
$$
where $\mathbb{P}_p(K)$ stands for the space of polynomials of degree at most $p$ defined in $K$.
The finite element eigenvalue problem reads:
find $\lambda_{h,i}\in\mathbb{R}$ and $u_{h,i} \in V_h\setminus\{0\}$ such that
\begin{equation}
\label{eq:eig_pro_with_fem}
(\nabla u_{h,i}, \nabla v_h) = \lambda_{h,i} (u_{h,i}, v_h)\quad \forall v_h \in V_h,
\end{equation}
where $i=1,2,\dots,\operatorname{dim} (V_h)$.
Discrete eigenfunctions are assumed to be normalized such that $(u_{h,i},u_{h,j})=\delta_{ij}$ and $(\nabla u_{h,i},\nabla u_{h,j})=\lambda_{h,i} \delta_{ij}$.
As mentioned in the introduction, we formulate the $L^2$ error bound on eigenfunctions for clusters of eigenvalues. For the purposes of the theory, the splitting of the spectrum into clusters can be arbitrary.
Let $n_k$ and $N_k$ stand for indices of the first and the last eigenvalue in the $k$th cluster; see Figure~\ref{fi:clusters}.
Note that eigenvalues in a cluster need not be equal to each other.
We consider the $k$th cluster to be of interest, and set $n=n_k$ and $N=N_k$
to simplify the notation.
Let $E_k$ be the space of exact eigenfunctions associated with the
$k$th cluster of eigenvalues:
$$
E_k = \operatorname{span}\{ u_{n}, u_{n+1}, \dots, u_{N} \}
~.
$$
Similarly, finite element approximations $u_{h,i} \in H^1_0(\Omega)$
of exact eigenfunctions $u_i$, {for $i=n,n+1,\dots,N$}, form the corresponding approximate space:
$$
E_{h,k} = \operatorname{span}\{ u_{h,n}, u_{h,n+1}, \dots, u_{h, N} \}~.
$$
\begin{figure}[t]
\begin{tikzpicture}[scale=1]
\newcommand{0.1}{0.1}
\newcommand{\tick}[1]{\draw [semithick] (#1,-0.1)--(#1,0.1);}
\draw [semithick] (0,0)--(6.2,0);
\draw [thick,dotted] (6.2+0.2,0)--(6.2+1.3,0);
\draw [thick] (6.2+1.5,0)--(6.2+7,0);
\tick{0.5}\node [below] at (0.5,-0.1) {$0$};
\tick{1.7}\node [above] at (1.7,0.1) {$\lambda_{n_1}$};
\tick{1.9}
\tick{2.5}\node [above] at (2.3,0.1) {$\cdots$};
\tick{2}
\tick{2.7}
\node [above] at (2.9,0.1) {$\lambda_{N_1}$};
\node [below] at (2.25,-0.1) {$1$st cluster};
\tick{4}\node [above] at (4,0.1) {$\lambda_{n_2}$};
\tick{4.25}
\tick{4.6}\node [above] at (4.7,0.1) {$\cdots$};
\tick{5.1}
\tick{5.2}
\node [above] at (5.3,0.1) {$\lambda_{N_2}$};
\node [below] at (4.75,-0.1) {$2$nd cluster};
\tick{9.5}\node [above] at (9,0.1) {$(\lambda_n:=)\lambda_{n_k}$};
\tick{9.65}
\tick{10}
\tick{10.1}
\tick{10.35}
\tick{10.5}
\tick{10.7}
\tick{10.8}
\tick{11}\node [above] at (11.75,0.1) {$\lambda_{N_k}(=:\lambda_{N})$};
\node [below] at (10.5,-0.1) {$k$th cluster};
\end{tikzpicture}
\caption{Clusters of eigenvalues on the real axis.}
\label{fi:clusters}
\end{figure}
Denoting by $\|\cdot\|$ the $L^2(\Omega)$ norm, we define the directed distances of spaces measured in the energy and $L^2$ norms as follows:
\begin{equation}
\label{eq:Delta}
\delta_a(E_k, E_{h,k}) = \max_{\substack{v \in E_k\\ \|\nabla v \|=1}} \min_{ v_h \in E_{h,k}} \|\nabla v - \nabla v_h \|
,\quad
\delta_b(E_k, E_{h,k}) = \max_{\substack{v \in E_k\\ \| v \|=1}} \min_{ v_h \in E_{h,k}} \| v- v_h \|.
\end{equation}
For the reader's convenience and for later reference, we recall the recent error bounds from \cite{LiuVej2022}.
Take $\rho$ such that
$\lambda_n < \rho \leq \lambda_{N+1}$, then
{
\begin{align}
\label{eq:Deltaest}
\delta_a^2(E_k,E_{h,k})
&\leq {\frac{\rho (\hat\lambda^{(k)}_N - \lambda_n) + \lambda_n \hat\lambda^{(k)}_N \vartheta^{(k)}}{\hat\lambda^{(k)}_N(\rho - \lambda_n)}}
\quad\text{and}
\\
\label{eq:deltaest}
\delta_b^2(E_k,E_{h,k})
&\leq {\frac{\hat\lambda^{(k)}_N - \lambda_n + \theta^{(k)}}{\rho - \lambda_n}},
\end{align}
where
\begin{align*}
\hat\lambda^{(k)}_N &= \max_{v_h \in E_{h,k}} \frac{\norm{\nabla v_h}^2}{\norm{v_h}^2},
\quad
\vartheta^{(k)} = \sum_{\ell=1}^{k-1} \left( \frac{\rho}{ \lambda_{n_\ell} } - 1 \right) \left[ \hat{\zeta}(E_{h,\ell},E_{h,k}) + \delta_a(E_\ell,E_{h,\ell}) \right]^2,
\\
\theta^{(k)} &= \sum_{\ell=1}^{k-1} \left(\rho - \lambda_{n_\ell}\right) \left[ \hat{\varepsilon}(E_{h,\ell},E_{h,k}) + \delta_b(E_\ell,E_{h,\ell}) \right]^2.
\end{align*}
Note that the quantities
\begin{equation*}
\hat{\zeta}(E_{h,\ell},E_{h,k}) = \max_{\substack{v\in E_{h,\ell}\\ \|\nabla v\|=1}} \max_{\substack{w\in E_{h,k}\\ \|\nabla w\|=1}} (\nabla v, \nabla w),
\quad
\hat{\varepsilon}(E_{h,\ell},E_{h,k}) = \max_{\substack{v\in E_{h,\ell}\\ \| v\|=1}} \max_{\substack{w\in E_{h,k}\\ \|w\|=1}} ( v , w )
\end{equation*}
}%
measure the non-orthogonality between spaces of approximate eigenfunctions for the previous clusters and can be easily computed by using \cite[Lemma~2]{LiuVej2022}.
Further note that in \cite{LiuVej2022}, the approximate eigenfunctions $\{u_{h,i}\}$ are considered arbitrary and their orthogonality is not required.
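Once the quantities $\rho$, $\lambda_n$, $\hat\lambda^{(k)}_N$, $\vartheta^{(k)}$, and $\theta^{(k)}$ are available, evaluating the recalled bounds \eqref{eq:Deltaest} and \eqref{eq:deltaest} is elementary arithmetic. The following sketch (not part of the original computations; function names and input values are purely hypothetical) illustrates the evaluation:

```python
# Illustrative sketch: evaluate the right-hand sides of the recalled
# bounds on delta_a^2 and delta_b^2. All inputs are assumed to be
# precomputed guaranteed quantities.

def delta_a_sq_bound(rho, lam_n, lam_hat_N, vartheta):
    # delta_a^2 <= [rho*(lam_hat_N - lam_n) + lam_n*lam_hat_N*vartheta]
    #              / [lam_hat_N*(rho - lam_n)]
    return (rho * (lam_hat_N - lam_n) + lam_n * lam_hat_N * vartheta) / (
        lam_hat_N * (rho - lam_n))

def delta_b_sq_bound(rho, lam_n, lam_hat_N, theta):
    # delta_b^2 <= (lam_hat_N - lam_n + theta) / (rho - lam_n)
    return (lam_hat_N - lam_n + theta) / (rho - lam_n)
```

Both bounds require $\rho > \lambda_n$; for the first cluster the sums $\vartheta^{(1)}$ and $\theta^{(1)}$ are empty and equal to zero.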
\section{Projection error-based estimate in the $L^2$ norm}
\label{se:optimalL2}
The result of \cite[Theorem 8.1]{Boffi:2010} and the explicitly known value of the constant in the \emph{a priori} error estimate for the energy projection \cite{LiuOis2013} enable us to mimic this approach for the eigenvalue problem and to derive an optimal-order convergent, guaranteed, and fully computable upper bound on the directed distance between the exact and approximate spaces of eigenfunctions measured in the $L^2(\Omega)$ norm.
First, we mention that $u_{h,i}$ is, in general, not available in practical computations, because it is the result of a generalized matrix eigenvalue solver, typically polluted by rounding errors and by truncation errors of iterative algorithms.
In principle, we could apply interval arithmetic to obtain a rigorous representation of $u_{h,i}$, but such an argument would make the paper lengthy and harder to read. Therefore, we concentrate here on a theoretical analysis of the discretization error $(u_{h,i}-u_i)$, where $u_{h,i}$ is the exact solution of the discrete problem \eqref{eq:eig_pro_with_fem}.
For the reader's convenience, we recall several results about the \emph{a priori} error estimates for finite element solutions of the Poisson equation. These \emph{a priori} error estimates will play an important role in subsequent error bounds for eigenfunctions.
Given $f\in L^2(\Omega)$, let $u\in H_0^1(\Omega)$ be the weak solution of the Poisson problem satisfying
$$
(\nabla u, \nabla v) = (f,v) \quad \forall v \in H^1_0(\Omega).
$$
The corresponding Galerkin approximation $u_h \in V_h(\subset H^1_0(\Omega))$ is determined by the identity
$$
(\nabla u_h, \nabla v_h) = (f,v_h) \quad \forall v_h \in V_h.
$$
The energy projector $P_h : H_0^1(\Omega) \rightarrow V_h$ is defined by
$(\nabla (u - P_h u), \nabla v_h) = 0$ for all $v_h \in V_h$. Clearly, $u_h = P_h u$.
In \cite{LiuOis2013}, Liu proposed the following constructive \emph{a priori} error estimate with a computable constant $C_h$:
\begin{equation}
\label{eq:Ch}
\|\nabla(u-P_h u) \| \le C_h \|f\|, \quad \| u - P_h u \| \le C_h \| \nabla(u-P_h u) \| \le C_h^2 \|f\|
\end{equation}
and the following lower eigenvalue bounds:
\begin{equation}
\lambda_{k} \ge \frac{\lambda_{h,k}}{1+C_h^2 \lambda_{h,k}} \quad (k=1, 2,\dots, \operatorname{dim}(V_h)).
\end{equation}
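The lower bound above is a single arithmetic expression in $\lambda_{h,k}$ and $C_h$; a minimal sketch (function name and inputs are our hypothetical choices) reads:

```python
# Illustrative sketch of the lower eigenvalue bound
#   lambda_k >= lambda_{h,k} / (1 + C_h^2 * lambda_{h,k}).
# lam_h and C_h would come from an actual FEM computation.

def liu_lower_bound(lam_h, C_h):
    return lam_h / (1.0 + C_h**2 * lam_h)
```

Note that the bound always lies below the discrete eigenvalue and improves as $C_h \to 0$.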
In case of non-convex domains, the value of $C_h$ can be computed by solving a dual saddle-point problem based on the hypercircle method; see \cite[Sections 3.2--3.3]{LiuOis2013}.
In case of convex domains, the value of $C_h$ can be easily computed by considering the Lagrange interpolation error constant; see \cite[Theorem~3.1]{LiuOis2013}.
The specific value of $C_h$ is provided below in Section~\ref{se:numex} for the considered examples.
Throughout this section, we consider an arbitrary cluster of eigenvalues $\lambda_n, \lambda_{n+1}, \dots, \lambda_N$.
We denote by
$\mathcal{C} = \{n, n + 1, \dots, N\}$
the set of indices of eigenvalues in this cluster
and by $|\mathcal{C}| = N - n + 1$ their number.
Spaces of exact and finite element eigenfunctions corresponding to this cluster are
$E = \operatorname{span}\{ u_i : i \in \mathcal{C} \}$ and
$E_h = \operatorname{span}\{ u_{h,i} : i \in \mathcal{C} \}$, respectively.
It is also assumed that
\begin{equation} \label{eq:no_overlap_of_eigs}
\lambda_{h,n-1} < \lambda_n, \quad \lambda_N < \lambda_{h,N+1}.
\end{equation}
Such an assumption makes it possible to define the following quantities:
\begin{equation*}
\tau = \max_{j \in \mathcal{C}}\max_{i {\in}\mathcal{I} \setminus \mathcal{C}}\frac{\lambda_j}{|\lambda_{h,i}-\lambda_j|}
,\quad
\tau_h = \max_{j \in \mathcal{C}}\max_{i {\in}\mathcal{I} \setminus \mathcal{C}}\frac{\lambda_{h,i}}{|\lambda_{h,i}-\lambda_j|},
\end{equation*}
where $\mathcal{I} = \{1,2,\dots,\operatorname{dim} (V_h)\}$ stands for the set of all indices.
These quantities extend the one in \cite[pages 53, 57]{Boffi:2010} and have their origin in \cite{raviart1983introduction}.
The application of the quantity $\tau$ can be found in \cite[Prop.~3.1]{CarGed2011}. The result in Lemma~\ref{thm:u_and_pi_k_u} can be regarded as an improvement of the one in \cite{CarGed2011}.
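Given the exact cluster eigenvalues and all discrete eigenvalues, $\tau$ and $\tau_h$ reduce to maxima over finitely many ratios. A minimal sketch follows (names and data are our hypothetical choices; in practice the exact $\lambda_j$ would be replaced by guaranteed two-sided bounds):

```python
# Illustrative sketch: tau and tau_h for a cluster C.
# cluster_exact:   exact eigenvalues lambda_j for j in C,
# discrete_all:    all discrete eigenvalues lambda_{h,i} (0-based list),
# cluster_indices: 0-based indices of the cluster within discrete_all.

def tau_quantities(cluster_exact, discrete_all, cluster_indices):
    outside = [lam for i, lam in enumerate(discrete_all)
               if i not in cluster_indices]
    tau = max(lj / abs(lhi - lj) for lj in cluster_exact for lhi in outside)
    tau_h = max(lhi / abs(lhi - lj) for lj in cluster_exact for lhi in outside)
    return tau, tau_h
```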
To derive
the projection error-based
upper bound
on the directed distance of the exact and approximate spaces of eigenfunctions measured in the $L^2(\Omega)$-norm
by applying estimates \eqref{eq:Ch}, we need to
bound the error of the $L^2(\Omega)$ orthogonal projection $\Pi^\mathcal{C}_h: H^1_0(\Omega) \rightarrow E_h$
by the error of the energy projection $P_h: H^1_0(\Omega) \rightarrow V_h$. To achieve this goal, we first introduce several quantities and two auxiliary lemmas.
\medskip
Let us introduce the unit ball $E^B := \{ u \in E : \|u\|=1 \}$.
For the given cluster of eigenfunctions, we introduce $\beta$
as the optimal (minimal) quantity that makes the
inequality
\begin{equation}
\label{def:beta}
\|(I-\Pi^\mathcal{C}_h) P_h u\| \le
\beta
\max_{\substack{v \in E^B}} \|v - P_h v\| \quad
\mbox{hold for any } u\in E^B
~,
\end{equation}
and aim to obtain an upper bound of $\beta$.
In case $\|u - P_h u\|=0$ for all $u\in E^B$, it is natural to define $\beta=0$.
Given $u =\sum_{j\in \mathcal{C}} c_j u_j \in E^B$,
let $\kappa:E^B \to E^B$ be the mapping such that
\begin{equation}
\label{eq:def-kappa-h}
\kappa u =
\overline{\lambda}^{-1}
\sum_{j\in \mathcal{C}} {c_j \lambda_j} u_j,\quad
\text{where}\quad
{\overline{\lambda}^2} =\sum_{j\in \mathcal{C}}c_j^2 \lambda_j^2 .
\end{equation}
It is easy to see that $\kappa:E^B \to E^B$ is bijective.
We set $\overline u = \kappa u$, define the relative width $\xi := (\lambda_N - \lambda_n)/\lambda_n$ of the eigenvalue cluster of interest, and note that the following estimate holds:
$$
\|u-\overline{u}\|^2 = \sum_{j\in \mathcal{C} } c_j^2 (1-\lambda_j/\overline{\lambda})^2 \le \xi^2.
$$
\begin{lemma}\label{thm:u_and_pi_k_u}
Given an arbitrary cluster of eigenvalues,
the quantity $\beta$ satisfies
\begin{equation}
\label{eq:est_of_beta_k}
\beta \leq \tau |\mathcal{C}|^{1/2}.
\end{equation}
Further, if
\begin{equation}
\label{eq:condition_for_lemma}
\tau_h\xi < 1 - |\mathcal{C}|^{-1/2}
\end{equation}
then
\begin{equation}
\label{eq:est_of_beta_k_sharper}
\beta \leq \frac{\tau}{1-\tau_h\xi} ~.
\end{equation}
\end{lemma}
\begin{remark}
Note that if condition \eqref{eq:condition_for_lemma} is satisfied, then the estimate \eqref{eq:est_of_beta_k_sharper} is always sharper than the bound \eqref{eq:est_of_beta_k}.
Further, a smaller relative width of the cluster $\xi$ leads to a sharper upper bound \eqref{eq:est_of_beta_k_sharper}.
In the extreme case of a multiple eigenvalue such that $\lambda_n=\lambda_N$, we have $\xi = 0$ and hence $\beta \le \tau$.
\end{remark}
\begin{proof}
First, for an eigenfunction $u_j \in E^B$, let us apply the standard argument (see, e.g., \cite{Boffi:2010}) to show that
$\|(I - \Pi^\mathcal{C}_h) P_h u_j\| \le \tau \| (I - P_h) u_j\|$.
Note that
$$
(I - \Pi^\mathcal{C}_h) P_h u_j = \sum_{i \in \mathcal{I} \setminus \mathcal{C}} (P_h u_j, u_{h,i}) u_{h,i} \in V_h,
$$
which leads to
\begin{equation}
\label{eq:IPik}
\|(I - \Pi^\mathcal{C}_h ) P_h u_j\|^2 = \sum_{i \in \mathcal{I} \setminus \mathcal{C}} (P_h u_j, u_{h,i})^2\:.
\end{equation}
In the equality
\begin{equation}
\label{eq:equality_in_lemma}
\lambda_{h,i} (P_h u_j , u_{h,i}) = (\nabla P_h u_j, \nabla u_{h,i})
= (\nabla u_j, \nabla u_{h,i}) = \lambda_j (u_j,u_{h,i}),
\end{equation}
we subtract $\lambda_j (P_h u_j,u_{h,i})$ from both sides and obtain
$$
(P_h u_j, u_{h,i}) = \frac{\lambda_j}{\lambda_{h,i} -\lambda_j } (u_j - P_h u_j,u_{h,i}).
$$
Summation over $i \in \mathcal{I}\setminus\mathcal{C}$ gives
$$
\sum_{i \in \mathcal{I} \setminus \mathcal{C}} (P_h u_j, u_{h,i})^2 \le \tau^2 \sum_{i \in \mathcal{I} \setminus \mathcal{C}} (u_j - P_h u_j,u_{h,i})^2 \le \tau^2 \|u_j - P_h u_j\|^2,
$$
where the last inequality follows from the identity $\sum_{i \in \mathcal{I}} (u_j - P_h u_j,u_{h,i})^2 = \| \Pi_h(u_j - P_h u_j) \|^2$ and the bound $\|\Pi_h v\| \le \|v\|$, with $\Pi_h : H^1_0(\Omega) \rightarrow V_h$ denoting the $L^2(\Omega)$ orthogonal projector.
Using this in \eqref{eq:IPik}, we finally derive
\begin{equation}
\label{eq:est_for_single_uj}
\|(I - \Pi^\mathcal{C}_h) P_h u_j\| \le \tau \| (I - P_h) u_j\|.
\end{equation}
Next,
we consider any $u \in E^B$ and express it in the form $u =\sum_{j\in \mathcal{C}} c_j u_j$ with $\sum_{j\in\mathcal{C}} c_j^2 = 1$.
Denoting the linear operator $(I - \Pi^\mathcal{C}_h) P_h $ by $L$,
the estimate \eqref{eq:est_for_single_uj} leads to
$$
\|L u \|^2 = \left\| \sum_{j \in \mathcal{C}} c_j L u_j \right\|^2 \le \sum_{j \in \mathcal{C}} \|Lu_j\|^2 \le \tau^2 \sum_{j \in \mathcal{C}} \| (I - P_h) u_j\|^2.
$$
Thus, we can estimate $\|(I - \Pi^\mathcal{C}_h) P_h u\|$ as
\begin{equation}
\label{eq:lemma_estimate_for_general_u_ver0}
\|(I - \Pi^\mathcal{C}_h) P_h u \| \le
\tau |\mathcal{C}|^{1/2} \max_{u \in E^B} \|u - P_h u\|
\end{equation}
and statement \eqref{eq:est_of_beta_k} easily follows.
Finally, we consider the case when the condition \eqref{eq:condition_for_lemma} holds true.
Given $u\in E^B$, we take $\overline{u}=\kappa u$ and $\overline{\lambda}$ as defined in \eqref{eq:def-kappa-h}.
Using the equality \eqref{eq:equality_in_lemma}, which extends to $u$ by linearity, we obtain the identity
$$
\lambda_{h,i} (P_h u , u_{h,i})
= \overline{\lambda} (\overline{u},u_{h,i}).
$$
Subtracting $\overline{\lambda} (P_h \overline{u},u_{h,i})$ from both sides, we derive
$$
\left( \lambda_{h,i} P_h (u - \overline{u})
+ (\lambda_{h,i}
-\overline{\lambda}) P_h \overline{u} , u_{h,i} \right) = \overline{\lambda} (\overline{u} -P_h \overline{u} ,u_{h,i}).
$$
Thus,
$$
(P_h \overline{u}, u_{h,i}) = \frac{\overline{\lambda}}{\lambda_{h,i} -\overline{\lambda} } ( \overline{u} - P_h \overline{u},u_{h,i})
-
\frac{\lambda_{h,i}}{\lambda_{h,i} -\overline{\lambda} }
(P_h (u - \overline{u}), u_{h,i}).
$$
Since function $g(t)=t/ |\lambda_{h,i}-t|$ satisfies
$g(t) \le \max(g(\lambda_n), g(\lambda_N))$ for all $t \in [\lambda_n, \lambda_N]$
and function $g_h(t)=\lambda_{h,i}/|\lambda_{h,i}-t|$ is bounded in the same way, we have
$$
|(P_h \overline{u}, u_{h,i})| \leq \tau |\left(( I - P_h) \overline{u},u_{h,i}\right)| + \tau_h
|(P_h (u - \overline{u}), u_{h,i})|.
$$
Now, considering these inequalities for $i\in\mathcal{I}\setminus\mathcal{C}$, using the geometric inequality\footnote{
If vectors $a=(a_1, \dots, a_n)$, $b=(b_1, \dots, b_n)$, $c=(c_1, \dots, c_n)$ with $a_i,b_i,c_i>0$ satisfy
$a_i\leq b_i+c_i$, then their Euclidean norms satisfy
$\|a\| \le \|b\|+\|c\|$.
}
and the general fact that
$\sum_{i\in \mathcal{I}\setminus\mathcal{C}} (\varphi,u_{h,i})^2
= \| (I - \Pi^\mathcal{C}_h) \Pi_h \varphi \|^2$ for any $\varphi \in H^1_0(\Omega)$ with $\Pi_h: H^1_0(\Omega) \rightarrow V_h$ being the $L^2(\Omega)$ orthogonal projector,
we derive the bound
\begin{equation}
\label{eq:sub_inequality_1}
\|(I-\Pi^\mathcal{C}_h)P_h {\overline{u}}\| \le
\tau \|\overline{u}-P_h \overline{u}\| +
\tau_h
\|(I-\Pi^\mathcal{C}_h)P_h (u - \overline{u})\|.
\end{equation}
Since $\|u-\overline{u}\| \le \xi $, the definition of $\beta$ gives
\begin{equation}
\label{eq:sub_inequality_2}
\|(I-\Pi^\mathcal{C}_h) P_h (u-\overline{u})\|
\le
\xi \beta \max_{u \in E^B} \|u - P_h u\|.
\end{equation}
Inequalities \eqref{eq:sub_inequality_1} and \eqref{eq:sub_inequality_2} lead to the relation
\begin{equation}
\label{eq:lemma_estimate_for_general_u_2}
\|(I - \Pi^\mathcal{C}_h) P_h \overline{u} \| \le
\left( \tau + \tau_h \xi \beta \right)
\max_{u \in E^B} \|u - P_h u\|.
\end{equation}
Since $\overline u = \kappa u$ and $\kappa: E^B \to E^B$ is a bijection, we have
$$
\max_{u\in E^B} \|(I - \Pi^\mathcal{C}_h) P_h \kappa u \|
= \max_{u\in E^B} \|(I - \Pi^\mathcal{C}_h) P_h u \|.
$$
Consequently,
from the bound \eqref{eq:lemma_estimate_for_general_u_2} and the definition of $\beta$, we obtain
$$
\beta \le \tau + \tau_h \xi \beta.
$$
Since condition \eqref{eq:condition_for_lemma} implies $1-\tau_h\xi>0$, the estimate \eqref{eq:est_of_beta_k_sharper} follows.
\end{proof}
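The two upper bounds on $\beta$ from Lemma~\ref{thm:u_and_pi_k_u} are straightforward to compare numerically. A minimal sketch with hypothetical inputs (a `None` result signals that condition \eqref{eq:condition_for_lemma} fails):

```python
import math

# Illustrative sketch: the two upper bounds on beta from the lemma.
# m = |C| is the cluster size, xi the relative cluster width.

def beta_bounds(tau, tau_h, m, xi):
    basic = tau * math.sqrt(m)                  # beta <= tau*|C|^(1/2)
    if tau_h * xi < 1.0 - 1.0 / math.sqrt(m):   # condition of the lemma
        return basic, tau / (1.0 - tau_h * xi)  # sharper bound
    return basic, None
```

Whenever the condition holds, the second value is smaller than the first, in agreement with the remark following the lemma.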
In the next lemma, we show the relation between $\delta_b(E,E_h)$ and the projection error using the quantity $\beta$.
\begin{lemma}\label{thm:u_and_pi_k_u_new_ver}
For the given cluster of eigenvalues,
the following estimate holds:
\begin{equation}
\label{eq:thm_u_and_pi_k_u_new_}
\delta_b(E,E_h) = \max_{u\in E^B} \|u-\Pi^\mathcal{C}_h u\| \le
(1+\beta)
\max_{u \in E^B} \|u - P_h u\|.
\end{equation}
\end{lemma}
\begin{proof}
For any $u\in E^B$, since $\Pi^\mathcal{C}_h u$ is the best approximation of $u$ in $E_h$ with respect to the $L^2$ norm, we have
\begin{equation}
\label{eq:lem_triangle_inequality_new_}
\|u - \Pi^\mathcal{C}_h u \|
\le \|u - \Pi^\mathcal{C}_h P_h u \|
\le \|u - P_h u \| + \|P_h u-\Pi^\mathcal{C}_h P_h u \| .
\end{equation}
Using the definition of the quantity $\beta$, we easily draw the conclusion.
\end{proof}
\begin{remark}
\label{remark:comparison_carsten_gedick}
In Proposition 3.2 of \cite{CarGed2011}, with $m:=|\mathcal{C}|=N-n+1$, the following result is obtained: for an eigenfunction $u_{h,j} \in E_h$ associated with $\lambda_{h,j}$,
$$
\min_{u\in E} \|u_{h,j}-u\| \le
\sqrt{2} m(2m+1)(1+\tau)
\max_{u_j\in E^B} \|u_j-P_h u_j\|~.
$$
If such a result is applied to $\delta_b(E,E_h)$, one can obtain the following estimate.
\begin{equation}
\label{eq:overestimation}
\delta_b(E,E_h) \le \sqrt{2} m(2m+1)(1+\tau) \max_{u \in E^B} \|u - P_h u\|~.
\end{equation}
This bound is larger than the result in Lemma~\ref{thm:u_and_pi_k_u_new_ver}.
In particular, if
the eigenvalue cluster is tight, i.e., $\xi\approx 0$, we have $\beta\approx \tau$ and the bound
\eqref{eq:overestimation}
overestimates the bound of Lemma~\ref{thm:u_and_pi_k_u_new_ver} by the factor $\sqrt{2} m(2m+1)$.
\end{remark}
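The overestimation factor mentioned in the remark above grows quadratically with the cluster size; a one-line sketch (the function name is ours):

```python
import math

# Illustrative sketch: factor by which bound (eq:overestimation) exceeds
# the bound of the lemma above for a tight cluster (beta ~ tau).

def overestimation_factor(m):
    return math.sqrt(2.0) * m * (2 * m + 1)
```

Already for a double eigenvalue ($m=2$) the factor is $10\sqrt{2}\approx 14$.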
With $\beta$ bounded by Lemma~\ref{thm:u_and_pi_k_u}, the following theorem presents
the main result.
\begin{theorem}\label{th:main_new_}
Let $\beta$ be the quantity defined in \eqref{def:beta}.
For an arbitrary cluster of eigenvalues,
the following estimate holds:
\begin{equation}
\label{eq:l2_norm_optimal}
\delta_b(E,E_h) \le (1+\beta) {\lambda_N} C_h^2~.
\end{equation}
\end{theorem}
\begin{proof}
Given $u\in E^B$, $u = \sum_{j\in \mathcal{C}}c_j u_j$, let $\overline{u}=\kappa u$ and $\overline{\lambda}^2 = \sum_{j\in \mathcal{C}}c_j^2 \lambda_j^2$.
Then the identity
$$
(\nabla u, \nabla v)= (\overline{\lambda}\overline{u}, v) \quad \forall v\in H^1_0(\Omega),
$$
holds and the {\em a priori} error estimate \eqref{eq:Ch} with $f=\overline{\lambda}\overline{u}$ yields
\begin{equation}
\label{eq:local_est_1_new}
\|u-P_h u\| \le C_h^2 \| \overline{\lambda}\overline{u}\| \le C_h^2\lambda_N .
\end{equation}
Definition of $\delta_b(E,E_h)$ and bounds \eqref{eq:thm_u_and_pi_k_u_new_} and \eqref{eq:local_est_1_new} give
\begin{equation*}
\label{eq:est2_new}
\delta_b(E,E_h) = \max_{u\in E^B} \|u-\Pi^\mathcal{C}_h u\|
\leq (1+\beta) \max_{u \in E^B} \| u - P_h u \| \le (1+\beta) C_h^2 \lambda_N.
\end{equation*}
\end{proof}
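In practice, evaluating the bound of Theorem~\ref{th:main_new_} only requires $\beta$ (or its bound from Lemma~\ref{thm:u_and_pi_k_u}), $\lambda_N$ (or an upper bound on it), and $C_h$. A minimal sketch with hypothetical inputs:

```python
# Illustrative sketch: the bound delta_b <= (1 + beta) * lambda_N * C_h^2.
# Since C_h = O(h) on convex domains, halving C_h reduces the bound by a
# factor of four, i.e., the O(h^2) rate.

def delta_b_bound(beta, lam_N, C_h):
    return (1.0 + beta) * lam_N * C_h**2
```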
\begin{remark}
\label{remark:convergence-rate}
The result \cite{LiuOis2013} shows how to compute the quantity $C_h$. For convex domains, the solution of the Poisson problem has the regularity $u\in H^2(\Omega)$ and, consequently, we have
$C_h=O(h)$ via the Lagrange interpolation error estimation.
For non-convex domains, the solution $u$ belongs to
$H^{1+\alpha}(\Omega)$, where $\alpha \in (0,1]$ depends on the angles of re-entrant non-convex corners.
In this case, the value of $C_h$ is evaluated by the hypercircle method using the Raviart--Thomas FEM, and it is expected that
$C_h=O(h^{\alpha})$.
As is pointed out in \cite[Theorem 9.13]{Boffi:2010}, the FEM solutions approximate the eigenfunctions independently.
That is, even for non-convex domains, if an
eigenfunction has the $H^2$-regularity, then its FEM approximation has the $O(h^2)$ convergence rate in the $L^2$ norm.
Since the projection error in estimate \eqref{eq:thm_u_and_pi_k_u_new_} is restricted to functions in $E$ for the specified eigenvalue cluster, the estimate of Lemma~\ref{thm:u_and_pi_k_u_new_ver} is consistent with the theoretical analysis of \cite{Boffi:2010}.
The proposed estimate \eqref{eq:l2_norm_optimal} using $C_h$ has the drawback that, in the case of non-convex domains, the bound retains the degraded convergence rate even for an eigenfunction with better regularity, because the {\em a priori} error estimate considers the worst case of the projection error.
If the regularity of the eigenfunctions in $E$ is known, then the estimate in Theorem~\ref{th:main_new_} can be improved accordingly, since it depends only on the projection error for eigenfunctions in the specified cluster.
For example, in the case of the L-shaped domain of \S~\ref{sec:l-shape},
the eigenfunction $u=\sin(\pi x)\sin (\pi y)$ associated with $\lambda_3=2\pi^2$ has the
$H^2$-regularity; thus one can take $C_h<0.493h$, where $h$ is the largest leg length of the right triangles in the triangulation.
\end{remark}
\begin{remark}
Theorem 3 in \cite{LiuVej2022} provides the following estimate:
\begin{equation}
\label{eq:energy_by_L2}
\delta_a^2(E,E_h) \leq {2 - 2 \lambda_n \left( \frac{1 - \delta_b^2(E,E_h) }{\lambda_N \lambda_{h,N}} \right)^{1/2}} ~,
\end{equation}
which bounds the energy error $\delta_a(E,E_h)$ by the $L^2$ error $\delta_b(E,E_h)$.
However, bound \eqref{eq:energy_by_L2} is not optimal for clusters of positive width, i.e., $\lambda_n<\lambda_{N}$. On the other hand, for clusters consisting of a single simple or multiple eigenvalue, we have $\lambda_n = \lambda_N$ and the bound \eqref{eq:energy_by_L2} has the optimal speed of convergence. Indeed, in this case, it can be easily shown that the right-hand side of \eqref{eq:energy_by_L2} is dominated by $|\lambda_{h,N} - \lambda_N|$ and the other terms, including $\delta_b^2(E,E_h)$, are of higher order.
{Consequently, bound \eqref{eq:energy_by_L2} combined with \eqref{eq:l2_norm_optimal} provides a guaranteed and fully computable error bound in the energy norm with the optimal speed of convergence for a cluster consisting of only one simple or multiple eigenvalue.}
\end{remark}
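Combining the two estimates is again plain arithmetic; a minimal sketch evaluating the right-hand side of \eqref{eq:energy_by_L2} from a computed $L^2$ bound (names and inputs are hypothetical):

```python
import math

# Illustrative sketch: bound delta_a^2 via eq. (energy_by_L2):
#   delta_a^2 <= 2 - 2*lam_n*sqrt((1 - delta_b^2) / (lam_N * lam_hN)).

def delta_a_sq_from_l2(lam_n, lam_N, lam_hN, delta_b):
    return 2.0 - 2.0 * lam_n * math.sqrt((1.0 - delta_b**2) / (lam_N * lam_hN))
```

For a cluster of zero width with $\lambda_{h,N} \to \lambda_N$ and $\delta_b \to 0$, the right-hand side tends to zero, reflecting the optimal convergence discussed above.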
\section{Numerical examples}
\label{se:numex}
This section provides numerical examples to illustrate the accuracy of the proposed bounds on the directed
distances between the spaces of exact and approximate eigenfunctions. The first example is the Laplace eigenvalue problem \eqref{eq:modpro} in the unit square domain, for which
the exact eigenvalues and eigenfunctions are well known.
The second example is the same problem considered
in a non-convex L-shaped domain, where eigenfunctions may have singularities at the re-entrant corner.
Both examples are computed in
floating point arithmetic, and the influence of rounding errors is not taken into account
for simplicity.
However, if needed, mathematically rigorous estimates could be obtained by employing interval arithmetic \cite{moore2009introduction}.
\subsection{The unit square domain}
\label{sec:unit_square}
Consider the Laplace eigenvalue problem \eqref{eq:modpro} with homogeneous Dirichlet boundary conditions in the unit square $\Omega=(0,1)^2$.
\begin{table}[ht]
\caption{\label{ta:square_domain_clusters}The four leading clusters for the unit square.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\rule[-2mm]{0cm}{6mm}{}
Cluster & 1 & 2 & 3 & 4 \\
\hline
\rule[-2mm]{0cm}{6mm}{}
Eigenvalues & $\lambda_1 = 2\pi^2$ &
$\lambda_2 = \lambda_3 = 5\pi^2$ &
$\lambda_4 = 8\pi^2$ &
$\lambda_5 = \lambda_6 = 10\pi^2$ \\
\hline
\end{tabular}
\end{center}
\end{table}
The exact eigenpairs are known analytically to be
$$
\lambda_{ij} = (i^2+j^2)\pi^2,\quad u_{ij}=\sin(i\pi x) \sin(j\pi y), \quad i,j=1,2,3, \dots.
$$
These eigenvalues are either simple or multiple
and we cluster them according to the multiplicity.
The first four clusters are listed in Table~\ref{ta:square_domain_clusters}.
Since the exact eigenvalues are known, we use them to evaluate error bounds.
{To compute bounds \eqref{eq:Deltaest} and \eqref{eq:deltaest} for
the cluster $\{\lambda_{n}, \lambda_{n+1}, \cdots, \lambda_{N}\}$,
we choose $\rho = \lambda_{N+1}$.
}
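The clustering in Table~\ref{ta:square_domain_clusters} can be reproduced mechanically from the formula $\lambda_{ij}=(i^2+j^2)\pi^2$. A minimal sketch working in units of $\pi^2$ (the function names are ours):

```python
# Illustrative sketch: exact eigenvalues of the unit square in units of
# pi^2, sorted with multiplicity, then grouped into clusters of equal
# values (matching Table ta:square_domain_clusters).

def square_eigs_over_pi2(max_ij):
    return sorted(i * i + j * j
                  for i in range(1, max_ij + 1)
                  for j in range(1, max_ij + 1))

def group_clusters(values):
    groups = []
    for v in values:
        if groups and groups[-1][-1] == v:
            groups[-1].append(v)
        else:
            groups.append([v])
    return groups
```

The first four groups are $[2]$, $[5,5]$, $[8]$, $[10,10]$ (times $\pi^2$), i.e., exactly the clusters of Table~\ref{ta:square_domain_clusters}.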
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.25]{square_uniform.eps}
\end{center}
\caption{\label{fig:uniform_mesh_square} The uniform mesh with $h=1/4$ for the unit square.%
}
\label{fi:squaremesh}
\end{figure}
{Problem \eqref{eq:modpro}} is discretized by the conforming finite element method using piecewise linear functions. The finite element mesh $\mathcal{T}_h$ is chosen as the uniform triangulation consisting of
isosceles right triangles;
{see Figure~\ref{fi:squaremesh} for an illustration.
}
The projection error constant can be easily obtained through the interpolation error constant as
$C_h\le 0.493\,h$.
{
For each cluster, we compute bounds on $\delta_b(E_k,E_{h,k})$ and $\delta_a(E_k,E_{h,k})$ by
the estimate \eqref{eq:l2_norm_optimal} from Theorem~\ref{th:main_new_} and its combination with the relation \eqref{eq:energy_by_L2}, respectively.
We then compare these results with the bounds \eqref{eq:Deltaest} and \eqref{eq:deltaest} computed by Algorithm I of \cite{LiuVej2022}.
}
The convergence behavior of the computed bounds for the four leading clusters is shown in Figures~\ref{fig:unit-square-l2} and~\ref{fig:unit-square-h1}.
The results confirm the expected optimal convergence rate $O(h^2)$ of the estimate \eqref{eq:l2_norm_optimal}
for $\delta_b(E_k,E_{h,k})$ and the sub-optimal rate $O(h)$ of Algorithm I of \cite{LiuVej2022}.
The estimate of Algorithm II of \cite{LiuVej2022} can provide impressively sharp bounds and the optimal convergence rate for the error of approximate eigenfunctions in both the $L^2$ and $H^1$ norms. Since such an approach needs more effort to post-process the approximate eigenfunction, reconstruct the flux, and estimate the residual error of the eigenfunction approximation, the comparison with Algorithm II of \cite{LiuVej2022} is omitted here.
\begin{figure}[htp]
\centering
\includegraphics[width=\textwidth]{Unitsquare_L2.eps}
\caption{\label{fig:unit-square-l2}Bounds on the $L^2(\Omega)$ distances of spaces of eigenfunctions
{$\delta_b(E_k,E_{h,k})$} for the square domain and four leading clusters of eigenvalues
{$k=1,2,3,4$.}
}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\textwidth]{Unitsquare_H1.eps}
\caption{\label{fig:unit-square-h1}
Bounds on the energy distances of spaces of eigenfunctions {$\delta_a(E_k,E_{h,k})$} for the square domain and the four leading clusters {$k=1,2,3,4$}.
}
\end{figure}
\subsection{The L-shaped domain}
\label{sec:l-shape}
We consider the Laplace eigenvalue problem \eqref{eq:modpro} in the L-shaped domain $\Omega = (-1,1)^2\setminus(-1,0]^2$,
the standard example with eigenfunction singularities,
which also demonstrates the versatility of the proposed method. We solve this problem using the classical linear conforming finite element space over a uniform mesh.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{L_shape_mesh.eps}
\caption{L-shaped domain and the initial mesh
}
\label{fig:l_shaped_domain}
\end{figure}
Since the exact eigenvalues are not known,
the eigenvalue bounds
are evaluated using the
two-sided bounds on eigenvalues computed in \cite{liu2014high};
we list them in Table~\ref{tab:l_shaped_eig_lower_bound}.
The first four eigenvalues are simple and form trivial clusters.
The values of the projection error constants are obtained by applying the hypercircle method proposed in \cite{LiuOis2013}; see Table \ref{table:lshape-projection-error-constant}.
\begin{table}[ht]
\centering
\caption{Lower bounds on the leading eigenvalues for the L-shaped domain.}
\begin{tabular}{|c|c|c|c|c|}
\hline
\rule[-2mm]{0mm}{6mm}{}
$\lambda_1$ & $\lambda_2$ &$\lambda_3$ & $\lambda_4$ & $\lambda_5$ \\
\hline
\rule[-2mm]{0mm}{6mm}{}
$9.6397_{1}^{3}$ & $15.1972_{5}^{6}$ & $19.7392_0^1$ & $29.5214_7^9$ & $31.9126_2^4$ \\
\hline
\end{tabular}
\label{tab:l_shaped_eig_lower_bound}
\end{table}
\begin{table}[h]
\centering
\caption{\label{table:lshape-projection-error-constant}Projection error constants}
\begin{tabular}{|c|c|c|c|c|}
\hline
\rule[-2mm]{0mm}{6mm}{}
$h$ & $1/32$ &$1/64$ & $1/128$ & $1/256$ \\
\hline
\rule[-2mm]{0mm}{6mm}{}
$C_h$ & $0.0359$ & $0.0218$ & $0.0134$ & $0.00832$ \\
\hline
\end{tabular}
\end{table}
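The observed convergence order of $C_h$ between consecutive rows of Table~\ref{table:lshape-projection-error-constant} can be estimated as $\log_2(C_h/C_{h/2})$; the short script below is a sanity check of ours, not part of the paper's computation.

```python
import math

# Projection error constants C_h for the L-shaped domain (from the table above).
h_vals = [1 / 32, 1 / 64, 1 / 128, 1 / 256]
C_vals = [0.0359, 0.0218, 0.0134, 0.00832]

# Observed order between consecutive uniform refinements: log2(C_h / C_{h/2}).
orders = [math.log2(C_vals[i] / C_vals[i + 1]) for i in range(len(C_vals) - 1)]
print([round(o, 3) for o in orders])
```

The orders come out near $0.7$ and decrease under refinement, in contrast to the first-order constant $C_h\le h/0.493$ available on the square; this is consistent with the reduced regularity caused by the reentrant corner.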
The initial finite element mesh is displayed in Figure~\ref{fig:l_shaped_domain}.
First, we apply the bounds \eqref{eq:Deltaest} and \eqref{eq:deltaest} to the four leading eigenvalue clusters. Since the exact error $\delta_b$ cannot be evaluated directly, we apply the residual-based estimation, i.e., Algorithm II of \cite{LiuVej2022}, to obtain a sharp bound on $\delta_b$. Numerical evaluation of this bound indicates that $\delta_b$ converges at the rate $O(h^{3/2})$ for the first cluster and $O(h^{2})$ for the remaining three clusters.
Figure~\ref{fig:lshape-l2} shows the bounds on the $L^2(\Omega)$ distance $\delta_b$.
Figure~\ref{fig:lshape-h1} compares the bounds on the energy distance $\delta_a$.
The results confirm that the newly proposed estimate of $\delta_b$ based on the projection error estimate, namely the estimate \eqref{eq:l2_norm_optimal} in Theorem~\ref{th:main_new_}, provides improved convergence rates in comparison with the bound \eqref{eq:deltaest}.
\begin{figure}[htp]
\centering
\includegraphics[width=\textwidth]{LShape_L2_with_REE.eps}
\caption{\label{fig:lshape-l2}Bounds on the $L^2(\Omega)$ distances of spaces of eigenfunctions
$\delta_b(E_k,E_{h,k})$ for the L-shaped domain and the four leading clusters $k=1,2,3,4$.
}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\textwidth]{LShape_H1_N.eps}
\caption{\label{fig:lshape-h1}Bounds on the energy distances of spaces of eigenfunctions $\delta_a(E_k,E_{h,k})$ for the L-shaped domain and the four leading clusters $k=1,2,3,4$.
}
\end{figure}
\section{Conclusions}
\label{se:conclusions}
For finite element eigenfunctions, we derived
a projection error-based
bound on the $L^2$
distance $\delta_b$ by employing the explicitly
known value of the constant $C_h$ in the \emph{a priori} error estimate for the energy projection.
The obtained optimal estimate of $\delta_b$ can be further utilized to improve the bound for the energy distance $\delta_a$.
The derived bound is fully computable and guaranteed.
\bibliographystyle{amsplain}
| {
    "timestamp": "2022-11-08T02:16:37",
    "yymm": "2211",
    "arxiv_id": "2211.03218",
    "language": "en",
    "url": "https://arxiv.org/abs/2211.03218",
    "abstract": "For conforming finite element approximations of the Laplacian eigenfunctions, a fully computable guaranteed error bound in the $L^2$ norm sense is proposed. The bound is based on the a priori error estimate for the Galerkin projection of the conforming finite element method, and has an optimal speed of convergence for the eigenfunctions with the worst regularity. The resulting error estimate bounds the distance of spaces of exact and approximate eigenfunctions and, hence, is robust even in the case of multiple and tightly clustered eigenvalues. The accuracy of the proposed bound is illustrated by numerical examples. The demonstration code is available at https://ganjin.online/xfliu/EigenfunctionEstimation4FEM .",
    "subjects": "Numerical Analysis (math.NA)",
    "title": "Projection error-based guaranteed L2 error bounds for finite element approximations of Laplace eigenfunctions"
} |
https://arxiv.org/abs/1712.04993 | On the Alexander polynomial and the signature invariant of two-bridge knots | Fox conjectured the Alexander polynomial of an alternating knot is trapezoidal, i.e. the coefficients first increase, then stabilize and finally decrease in a symmetric way. Recently, Hirasawa and Murasugi further conjectured a relation between the number of the stable coefficients in the Alexander polynomial and the signature invariant. In this paper we prove the Hirasawa-Murasugi conjecture for two-bridge knots. | \section{Introduction}
A knot is said to be alternating if it admits a diagram in which the crossings alternate between over- and underpasses. In 1962, Fox posed the following conjecture concerning a curious behavior of the Alexander polynomial of an alternating knot.
\begin{conj}[\cite{Fox62}]\label{Foxconj}
Let $K$ be an alternating knot with the Alexander polynomial $\Delta_K(t)=\Sigma_{j=0}^{2n}(-1)^ja_jt^{2n-j}$, $a_j>0$. Then
$$a_0<a_1<\cdots<a_{n-m-1}<a_{n-m}=\cdots=a_{n+m}>a_{n+m+1}>\cdots>a_{2n}.$$
\end{conj}
Polynomials satisfying the above condition are called trapezoidal, so this conjecture is known as Fox's trapezoidal conjecture. This conjecture remains open. It is, however, supported by its verification on several classes of alternating knots. The case of two-bridge knots was confirmed by Hartley \cite{Har79}. More generally, Murasugi proved it for alternating algebraic knots \cite{Mur85}. The case of genus two alternating knots has also been verified by Ozsv\'ath and Szab\'o using Heegaard Floer homology \cite{OS03}, and by Jong via a combinatorial method \cite{Jon09}. Recently, Hirasawa and Murasugi showed that the conjecture holds for alternating stable knots \cite{HM13}. Moreover, in this case they observed that the signature of such knots is zero, and $m=0$ in Conjecture \ref{Foxconj}. This progress led them to pose the following strengthened conjecture.
\begin{conj}[\cite{HM13}]\label{HMconjecture}
Let $K$ be an alternating knot, whose signature $|\sigma(K)|=2k$ and the Alexander polynomial $\Delta_K(t)=\Sigma_{j=0}^{2n}(-1)^ja_jt^{2n-j}$, $a_j>0$. Then
$$a_0<a_1<\cdots<a_{n-m-1}<a_{n-m}=\cdots=a_{n+m}>a_{n+m+1}>\cdots>a_{2n},$$
moreover, $m\leq k$.
\end{conj}
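To make the trapezoidal condition concrete, it can be checked mechanically for a given coefficient list; the helper below is our hypothetical illustration (the function name is not from the literature). For the trefoil, $\Delta_K(t)\doteq t^2-t+1$ gives the coefficient list $(1,1,1)$ and $|\sigma|=2$, so $k=1$ and the plateau half-width is $m=1\le k$.

```python
def trapezoidal_m(a):
    """Return the plateau half-width m if the symmetric positive list
    a_0, ..., a_{2n} satisfies a_0 < ... < a_{n-m-1} < a_{n-m} = ... = a_{n+m}
    > ... > a_{2n}; return None otherwise."""
    if len(a) % 2 == 0 or a != a[::-1]:
        return None
    n = len(a) // 2
    m = 0
    while m < n and a[n - m - 1] == a[n]:   # grow the central plateau
        m += 1
    if any(a[i] >= a[i + 1] for i in range(n - m)):  # strict increase before it
        return None
    return m

print(trapezoidal_m([1, 1, 1]))        # trefoil coefficients
print(trapezoidal_m([1, 3, 5, 3, 1]))  # plateau reduced to the middle term
```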
We provide some evidence supporting this conjecture in this paper. We first observe that the case of genus two knots can be confirmed easily by using a result of Ozsv\'ath and Szab\'o in \cite{OS03}, or by Jong's inequalities in \cite{Jon10}, as was pointed out to the author by Kunio Murasugi.
\begin{thm}\label{genus two case}
If $K$ is an alternating knot of genus two, then it satisfies the statement of Conjecture \ref{HMconjecture}.
\end{thm}
\begin{proof}
Note that since the trapezoidal conjecture is true for genus two alternating knots, only $m\leq k$ remains to be verified. If $|\sigma(K)|=4=2g(K)$, then Conjecture \ref{HMconjecture} is obviously true since the degree of the symmetric Alexander polynomial is less than or equal to $g(K)$. If $|\sigma(K)|=2$, Corollary 1.6 of \cite{OS03} or Theorem 1.6 of \cite{Jon10} implies $a_1\geq 2a_0+1$, hence the conjecture. If $\sigma(K)=0$, Corollary 1.6 of \cite{OS03} or Theorem 1.6 of \cite{Jon10} implies $a_1\geq 2a_2$, and $\Delta_K(1)=1$ implies $a_0=1+2a_1-2a_2$, therefore $a_0>a_1>a_2$.
\end{proof}
Our main result confirms the Hirasawa-Murasugi conjecture for two-bridge knots.
\begin{thm}\label{main}
Conjecture \ref{HMconjecture} is true for two-bridge knots.
\end{thm}
The proof of this theorem is given in Section 3. For the strategy of the proof, we extend Hartley's induction argument in \cite{Har79}. Hartley's induction utilizes extended diagrams of two-bridge knots to compute their Alexander polynomials, and for our purpose we further implement Shinohara's algorithm in the induction to compute the signature invariant \cite{Shi76}.\\
\noindent\textbf{Organization.} In Section 2 we recall the preliminaries, which include computing the Alexander polynomial using extended diagrams and Shinohara's result on the signature of two-bridge knots. Section 3 is devoted to the induction argument: after some further technical preparation for the induction in Subsections 3.1 and 3.2, the key parts of the proof are carried out in three parallel steps in Subsections 3.3--3.5.\\
\noindent\textbf{Acknowledgment:} I thank Stephan Burton, Matt Hedden, Effie Kalfagianni and Christine Ruey Shan Lee for their interest, and Matt Hedden again for his help on improving the exposition of this work. The author is grateful to Kunio Murasugi for pointing out a mistake in an earlier version of this paper, and suggesting a correction for the proof of Theorem \ref{genus two case}.
\section{Preliminaries}
For the reader's convenience, we recall some preliminaries regarding two-bridge knots (and links) in this section. As will become clear, all the two-bridge links we consider come with a preferred orientation, which allows us to talk about the signature of a two-bridge link without ambiguity. This section has three parts, consisting of the Schubert normal form, extended diagrams, and Shinohara's method for computing the signature invariant. In particular, we shall see that both the Alexander polynomial and the signature of a two-bridge link can be read off from its extended diagram.
\textbf{Convention.\ }For ease of terminology, by the term two-bridge knot we often include the case of links; this shall not cause any confusion. With this convention, Theorem \ref{main} can also be understood as: any two-bridge link with the preferred orientation specified below satisfies Conjecture \ref{HMconjecture} (see Theorem \ref{reformulation of the main theorem} for a precise reformulation).
\subsection{Two-bridge knots and their Schubert normal forms}
A two-bridge knot is one that admits a bridge presentation with two overarcs and two underarcs. Every two-bridge knot can be presented in its \emph{Schubert normal form}. More concretely, given a pair of coprime numbers $(p,q)$ such that $q$ is odd and $2p>q>0$, we may construct a two-bridge knot via the following procedure. First we draw two overarcs, placed horizontally on the same level, on each of which we mark $p+1$ points equidistantly, numbered from $0$ to $p$ with $0$ at the end near the center (see Fig.\ \ref{overarcs}). Then an underarc begins by spiralling out clockwise from one of the $0$'s, passing under the two overarcs alternately through the mark points $q$, $2q$, ... When it reaches the outside, it makes a turn with a radius within $q/2$, and then spirals counterclockwise, again passing under the overarcs alternately through mark points with a $q$-unit difference. This process is repeated until the underarc reaches the tail of some overarc (Fig.\ \ref{(4,3) in Schubert normal form}). The other underarc is drawn symmetrically. The diagram so obtained is called the Schubert normal form of the two-bridge knot of type $(p,q)$. Throughout, we orient the knots (or links) thus obtained by requiring the orientation of the overarcs to be center pointing.
\begin{figure}[htb]
\begin{minipage}[t]{0.5\linewidth}
\centering{
\fontsize{0.5cm}{2em}
\resizebox{65mm}{!}{\input{coordinates.pdf_tex}}
\caption{$p=4$}
\label{overarcs}
}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\centering{
\fontsize{0.5cm}{2em}
\resizebox{65mm}{!}{\input{sample1.pdf_tex}}
\caption{$(4,3)$ with one underarc}
\label{(4,3) in Schubert normal form}
}
\end{minipage}%
\end{figure}
\subsection{Extended diagrams and the Alexander polynomial}
Closely related to the Schubert normal form of a two-bridge knot is its \emph{extended diagram}, introduced by Hartley \cite{Har79}. The extended diagram is obtained by unwinding the Schubert normal form: instead of drawing two overarcs horizontally, we draw a number of parallel overarcs, placed vertically, each marked off by numbers from $0$ to $p$ from the bottom to the top (see Fig.\ \ref{verticals}). Then, starting from $0$ of one overarc, we let the underarc proceed from left to right if it were going clockwise in the Schubert normal form, and from right to left if it were spiralling counterclockwise (Fig.\ \ref{extended diagram for (4,3)}).
The main advantage of the extended diagram presentation is that one can read off the (reduced) Alexander polynomial of the corresponding knot directly. To be precise, denote the overarcs hit by the underarc by $W_i$, with $i$ running from $0$ to some number $l$ from left to right, and let $\alpha_i$ be the number of segments joining $W_i$ and $W_{i+1}$. By applying Fox calculus to the knot group presentation coming from the Schubert normal form, Hartley proved
\begin{thm}[\cite{Har79}]
$\Delta(t)=\Sigma_{i=0}^{l-1}(-1)^i\alpha_i t^i$.
\end{thm}
For example, the two-bridge link of type $(4,3)$ shown in Fig.\ \ref{extended diagram for (4,3)} has $\Delta(t)=2-2t$.
More technical results regarding extended diagrams will be needed for the proof of our main theorem; we defer them to Section 3 for ease of reading.
\begin{figure}[htb]
\begin{minipage}[t]{0.5\linewidth}
\centering{
\fontsize{0.5cm}{2em}
\resizebox{63mm}{!}{\input{verticals.pdf_tex}}
\caption{overarcs in the extended diagram}
\label{verticals}
}
\end{minipage}%
\begin{minipage}[!htb]{0.5\linewidth}
\centering{
\fontsize{0.5cm}{2em}
\resizebox{40mm}{!}{\input{sample2.pdf_tex}}
\caption{Extended diagram for $(4,3)$: $\alpha_0=2$, $\alpha_1=2$, so the $(4,3)$-link has $\Delta(t)=2-2t$.}
\label{extended diagram for (4,3)}
}
\end{minipage}
\end{figure}
\subsection{The signature of two-bridge knots}
Shinohara gave a convenient way of computing the signature invariant of a two-bridge knot from its Schubert normal form \cite{Shi76}. Keep in mind that the knots (especially links) that we consider are oriented. We have
\begin{thm}[\cite{Shi76}]\label{Shinohara}
For a two-bridge knot $K$ of type $(p,q)$, its signature $\sigma(K)$ equals the algebraic sum of the signed crossings of one underarc with the overarcs in its Schubert normal form.
\end{thm}
In view of the relation between the Schubert normal form and the extended diagram, we may say that $\sigma$ equals the algebraic sum of the signed crossings of an underarc with the overarcs in the extended diagram, with the overarcs oriented as downward pointing. For example, the two-bridge knot of type $(4,3)$ has signature $1$ (Fig.\ \ref{(4,3) in Schubert normal form} and Fig.\ \ref{extended diagram for (4,3)}).
\begin{rmk} Denoting by $\sigma(p,q)$ the signature of the two-bridge knot of type $(p,q)$, one can deduce the following well-known formula from Theorem \ref{Shinohara}:
$$\sigma(p,q)=\sum_{i=1}^{p-1}(-1)^{[\frac{iq}{p}]}.$$
\end{rmk}
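The formula in the remark is straightforward to implement; the sketch below (the function name is ours) reproduces the signature $1$ of the type $(4,3)$ link computed above from its diagram, as well as $|\sigma|=2$ for the trefoil, which is the two-bridge knot of type $(3,1)$.

```python
def signature(p, q):
    """Signature of the two-bridge knot/link of type (p, q):
    sigma(p, q) = sum_{i=1}^{p-1} (-1)^{floor(i*q/p)}."""
    return sum((-1) ** (i * q // p) for i in range(1, p))

print(signature(4, 3))  # the (4,3) link from the figures above
print(signature(3, 1))  # trefoil, |sigma| = 2
```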
\section{Proof of the main theorem}
In this section we give a proof of the main theorem. Overall, the strategy is to carry out an induction on the pairs $(p,q)$, starting from $(1,1)$, using three types of moves $T_i$, $i=1,2,3$, that will be defined later. This is the approach Hartley took to prove the trapezoidal conjecture for two-bridge knots. To facilitate our proof, the first subsection recalls technical results regarding extended diagrams from \cite{Har79}. We then examine the effect of each move $T_i$. Among these, the effect of the $T_1$ move is the most subtle and requires a fair amount of technical care, hence the corresponding discussion is postponed to the last subsection.
\subsection{More technical preparations on extended diagrams}
To carry out the induction, rather than restricting to pairs $(p,q)$ such that $2p>q>0$, Hartley introduced a bigger set consisting of the so-called admissible pairs.
\begin{defn}
A pair of positive integers $(p,q)$ is said to be admissible if $\gcd(p,q)=1$ and $q$ is odd.
\end{defn}
Note that given an admissible pair $(p,q)$, we can similarly associate to it an extended diagram. More concretely, first introduce grid lines consisting of infinitely many parallel vertical lines $W_i$, placed equidistantly, with the subindex ranging from $-\infty$ to $\infty$, from left to right. On each grid line, mark $p+q$ points from $-(q-1)/2$ to $p+(q-1)/2$, with higher points having higher indices (see Fig.\ \ref{gridlines}). The segments between $0$ and $p$ serve as the overarcs. Secondly, denote the point labeled by $j$ on $W_i$ by $x_{ij}$, and for all $i$, join the following pairs of points by pairwise disjoint simple arcs lying within the region bounded by $W_i$ and $W_{i+1}$ (see Fig.\ \ref{arcs between two grid lines}):
\begin{itemize}
\item $x_{ij}$ and $x_{i+1,j+q}$, where $-(q-1)/2\leq j \leq p-(q+1)/2$
\item $x_{i+1,j}$ and $x_{i+1,-j}$ (called bottom loops), $x_{i,p-j}$ and $x_{i,p+j}$ (called top loops), where $1\leq j\leq (q-1)/2$
\end{itemize}
After this, the above simple arcs piece together to give infinitely many underarcs (see Fig.\ \ref{infinitly many underarcs}). Arbitrarily pick a single underarc, which we call \textit{the principal underarc}. Reindex the overarcs if necessary, so that the leftmost overarc hit by the principal underarc is $W_0$. If the rightmost overarc hit by the principal underarc is $W_l$, we call $l$ the \textit{length} of $(p,q)$ (see Fig.\ \ref{infinitly many underarcs}).\\
There are two important sequences associated to the extended diagram of $(p,q)$. The first one is the \textit{arc sequence} $\alpha_i$, where $\alpha_i$ is the number of arcs connecting $W_i$ and $W_{i+1}$; these coincide with the coefficients of the Alexander polynomial. The second one is the so-called \textit{bottom sequence} $b_i$, which equals twice the number of bottom loops of the principal underarc at $W_i$, plus one if the principal underarc starts at $W_i$ (sometimes we may consider $b_k$ with $k>l$, in which case $b_k$ should be understood as $0$). For example, in Fig.\ \ref{infinitly many underarcs}, where the extended diagram of $(4,3)$ is shown, we see $l=2$, $b_0=2$, $b_1=1$ and $b_2=0$.
\begin{figure}[htb]
\begin{minipage}[t]{0.5\linewidth}
\centering{
\fontsize{1.5cm}{2em}
\resizebox{45mm}{!}{\input{extendverticals.pdf_tex}}
\caption{grid lines for $(4,3)$}
\label{gridlines}
}
\end{minipage}%
\begin{minipage}[!htb]{0.5\linewidth}
\centering{
\fontsize{0.5cm}{2em}
\resizebox{40mm}{!}{\input{sample3.pdf_tex}}
\caption{arcs between two grid lines}
\label{arcs between two grid lines}
}
\end{minipage}
\end{figure}
\begin{figure}[htb]
\centering{
\fontsize{1cm}{2em}
\resizebox{85mm}{!}{\input{sample4.pdf_tex}}
\caption{Revisiting the extended diagram for $(4,3)$: the thickened (and green) underarc is the principal underarc; the length of $(4,3)$ is $2$.}
\label{infinitly many underarcs}
}
\end{figure}
Key technical results regarding $\alpha_i$ and $b_i$ are summarized below.
\begin{prop}[\cite{Har79}]
Let $(p,q)$, $\alpha_i$ and $b_i$ be as above. Then $$\alpha_i-\alpha_{i-1}=b_i-b_{l-i},\ 1\leq i \leq l.$$ Moreover, $\{b_i\}$ satisfies the following three so-called IH properties:
\begin{itemize}
\item[(IH1)] There is an integer $h$ satisfying $1\leq h \leq l$ and an integer $r\leq h$ such that $b_i=0$ when $i>h$, and $0\leq S_0< S_1<\cdots<S_r=S_{r+1}=\cdots=S_h$, where $S_{2j}=b_{h-j}$ and $S_{2j+1}=b_j$.
\item[(IH2)] If $h^*\geq h$ and $2j\leq h^*$, then $b_j\geq b_{h^*-j}$.
\item[(IH3)] If $0\leq i<j$ and $b_i=b_j$, then $b_i=b_k=b_j$ for all $k$ such that $i\leq k \leq j$.
\end{itemize}
\end{prop}
Now let us introduce the $T_i$ moves that we promised at the beginning of this section. These are operations on admissible pairs defined as:
\begin{align*}
&T_1:(p,q)\longmapsto (p+q,q)\\
&T_2:(p,q)\longmapsto (p,2p+q)\\
&T_3:(p,q)\longmapsto (p,2p-q),
\end{align*}
where $T_3$ is only defined when $p>q$, hence $T_3$ cannot be applied after $T_2$ or $T_3$, but only after $T_1$. One nice feature of the $T_i$ moves is the following.
\begin{prop}[\cite{Har79}]\label{inductible lemma}
Any admissible pair $(p,q)$ can be obtained from $(1,1)$ via applying a sequence of $T_i$, $i=1,2,3$.
\end{prop}
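Prop.\ \ref{inductible lemma} can be illustrated computationally by running the moves backwards: undoing $T_1$ when $p>q$, $T_2$ when $q>2p$, and $T_3$ when $p<q<2p$ strictly decreases $p+q$, so the greedy reduction below (our sketch, not Hartley's original argument) always terminates at $(1,1)$.

```python
from math import gcd

def moves_from_11(p, q):
    """Return a sequence of moves in {"T1", "T2", "T3"} that builds the
    admissible pair (p, q) from (1, 1), found by greedily undoing moves."""
    assert p >= 1 and q >= 1 and gcd(p, q) == 1 and q % 2 == 1
    moves = []
    while (p, q) != (1, 1):
        if p > q:            # undo T1: (p, q) came from (p - q, q)
            p -= q
            moves.append("T1")
        elif q > 2 * p:      # undo T2: (p, q) came from (p, q - 2p)
            q -= 2 * p
            moves.append("T2")
        else:                # p < q < 2p: undo T3, (p, q) came from (p, 2p - q)
            q = 2 * p - q
            moves.append("T3")
    return moves[::-1]

print(moves_from_11(4, 3))
print(moves_from_11(25, 7))
```

Note that in the returned sequences a $T_3$ is always preceded by a $T_1$, matching the restriction above.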
In fact, the IH properties are proved inductively using these $T_i$ moves, and they imply the trapezoidal conjecture for two-bridge knots. For our purpose, the following facts will be important.
\begin{prop}[\cite{Har79}]\label{change of bottom sequence}
Given an admissible pair $(p,q)$, let $l$, $b_i$ and $\alpha_i$ denote the length, bottom sequence and the arc sequence respectively. Then the length and bottom sequence of $T_i(p,q)$, $i=1,2,3$ are summarized in the following table
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
& length $l'$ & bottom sequence $b_i'$ \\
\hline
$T_1(p,q)$ & $l+1$ & $b_i$ \\
\hline
$T_2(p,q)$ & $l$ & $2\alpha_i+b_{l-i}$ \\
\hline
$T_3(p,q)$ & $l$ & $2\alpha_i-b_i$ \\
\hline
\end{tabular}
\end{center}
\end{prop}
\subsection{Reformulation of the main theorem}
Notice that an admissible pair $(p,q)$ may give rise to a two-component link. Therefore, during the induction, both the degree of the Alexander polynomial and the signature may change their parity, so we adjust the statement of the Hirasawa-Murasugi conjecture to take care of this issue.
\begin{thm}\label{reformulation of the main theorem}
Let $(p,q)$ be an admissible pair, $\sigma$ be its signature, and $\Delta_K(t)=a_0-a_1t+\cdots+(-1)^{l-1}a_{l-1}t^{l-1}$ be its Alexander polynomial, where $a_i>0$, $i=0,...,l-1$. Then
\begin{equation}\label{Fox inequality}
a_0<a_1<\cdots<a_{i_0-1}=a_{i_0}=\cdots=a_{l-i_0}>a_{l-i_0+1}>\cdots>a_{l-1}.
\end{equation}
Moreover,
\begin{equation}\label{HM inequality}
\lfloor \frac{|\sigma|+1}{2}\rfloor \geq \lfloor \frac{l-2(i_0-1)}{2}\rfloor.
\end{equation}
\end{thm}
Here $\lfloor \cdot \rfloor$ denotes the floor function. It is obvious that the above theorem implies Theorem \ref{main}.
\begin{proof}
Note that Inequality (\ref{Fox inequality}) was already proved by Hartley, so in the rest of this section we focus on proving Inequality (\ref{HM inequality}). To do so, we begin by noticing that it is obviously true for the pair $(1,1)$. In view of Prop.\ \ref{inductible lemma}, we just need to show that if a pair $(p,q)$ satisfies Theorem \ref{reformulation of the main theorem}, then so does $T_i(p,q)$ for $i=1,2,3$. This is done in Subsections 3.3--3.5.
\end{proof}
\begin{rmk}
From now on, we call $\lfloor \frac{l-2(i_0-1)}{2}\rfloor$ the radius of the stable terms of the Alexander polynomial. Note that it may happen that all coefficients are distinct; in that case $i_0-1=l-i_0$, which implies that the radius is zero.
\end{rmk}
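For the type $(4,3)$ link from Section 2, both sides of Inequality \eqref{HM inequality} can be evaluated directly: $\Delta(t)=2-2t$ gives $l=2$ and $i_0=1$, while $\sigma=1$. The sketch below (variable names are ours) confirms the inequality in this case.

```python
# Data for the (4,3) two-bridge link, read off in Section 2.
alpha = [2, 2]   # coefficients of Delta(t) = 2 - 2t
sigma = 1        # signature from Shinohara's formula
l = len(alpha)

# Smallest i_0 with alpha_{i_0-1} = ... = alpha_{l-i_0} (the stable plateau).
i0 = 1
while len(set(alpha[i0 - 1 : l - i0 + 1])) > 1:
    i0 += 1

lhs = (abs(sigma) + 1) // 2      # floor((|sigma| + 1) / 2)
rhs = (l - 2 * (i0 - 1)) // 2    # radius of the stable terms
print(lhs, rhs, lhs >= rhs)
```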
\subsection{The effect of $T_2$ move}
This subsection is devoted to proving the following statement.
\begin{prop}
If Theorem \ref{reformulation of the main theorem} is true for an admissible pair $(p,q)$, then it is true for $T_2(p,q)$.
\end{prop}
\begin{proof}
Let $\alpha_i$, $b_i$, $l$ denote the number of connecting arcs, bottom sequence, and length for $(p,q)$, and let $\alpha_i'$, $b_i'$, $l'$ be the corresponding quantities for $T_2(p,q)=(p,2p+q)$. By Prop.\ \ref{change of bottom sequence} $b_i'=2\alpha_i+b_{l-i}$ and $l'=l$, hence we have
$$\begin{aligned}
\alpha_i'-\alpha_{i-1}' &= b_i'-b_{l'-i}'\\
&=(2\alpha_i+b_{l-i})-(2\alpha_{l-i}+b_{i})\\
&=2(\alpha_i-\alpha_{l-i})-(b_i-b_{l-i})\\
&=2(\alpha_i-\alpha_{i-1})-(b_i-b_{l-i})\\
&=2(\alpha_i-\alpha_{i-1})-(\alpha_i-\alpha_{i-1})\\
&=\alpha_i-\alpha_{i-1},
\end{aligned}$$
where in the fourth equality we used $\alpha_{l-i}=\alpha_{i-1}$, due to the symmetry of the Alexander polynomial. So the radius $m=\lfloor \frac{l-2(i_0-1)}{2}\rfloor$ does not change after the $T_2$ move.
The signature invariant is also unchanged after the $T_2$ move. To see this, note
$$\sigma(p,q)=\sum_{i=1}^{p-1}(-1)^{\lfloor \frac{iq}{p}\rfloor }=\sum_{i=1}^{p-1}(-1)^{\lfloor \frac{iq}{p}\rfloor+2i}=\sum_{i=1}^{p-1}(-1)^{\lfloor \frac{i(q+2p)}{p} \rfloor}=\sigma(p,2p+q).$$
Therefore, Theorem \ref{reformulation of the main theorem} is true for $T_2(p,q)$ provided it is true for $(p,q)$.
\end{proof}
\subsection{The effect of $T_3$ move}
In this subsection we examine the effect of $T_3$.
\begin{prop}
If Theorem \ref{reformulation of the main theorem} is true for an admissible pair $(p,q)$, then it is true for $T_3(p,q)$.
\end{prop}
\begin{proof}
Let $\alpha_i$, $b_i$, $l$ denote the number of connecting arcs, bottom sequence, and length for $(p,q)$, and let $\alpha_i'$, $b_i'$, $l'$ be the corresponding quantities for $T_3(p,q)=(p,2p-q)$. In this case, we have $b_i'=2\alpha_i-b_i$ and $l'=l$ by Prop.\ \ref{change of bottom sequence}. Therefore,
$$\begin{aligned}
\alpha_i'-\alpha_{i-1}' &= b_i'-b_{l'-i}'\\
&=(2\alpha_i-b_{i})-(2\alpha_{l-i}-b_{l-i})\\
&=2(\alpha_i-\alpha_{l-i})-(b_i-b_{l-i})\\
&=2(\alpha_i-\alpha_{i-1})-(b_i-b_{l-i})\\
&=2(\alpha_i-\alpha_{i-1})-(\alpha_i-\alpha_{i-1})\\
&=\alpha_i-\alpha_{i-1}.
\end{aligned}$$
So the radius $m=\lfloor \frac{l-2(i_0-1)}{2}\rfloor$ does not change after the $T_3$ move.
For the signature, we have
$$\sigma(p,2p-q)=\Sigma_{i=1}^{p-1}(-1)^{\lfloor\frac{i(2p-q)}{p}\rfloor}=\Sigma_{i=1}^{p-1}(-1)^{\lfloor\frac{i(-q)}{p}\rfloor}=-\Sigma_{i=1}^{p-1}(-1)^{\lfloor\frac{iq}{p}\rfloor}=-\sigma(p,q),$$
where the last equality uses $\lfloor -x\rfloor=-\lfloor x\rfloor-1$ for non-integer $x$ (note that $p\nmid iq$ since $\gcd(p,q)=1$).
Therefore $|\sigma|$ does not change after the $T_3$ move either. Hence Theorem \ref{reformulation of the main theorem} is true for $T_3(p,q)$ provided it is true for $(p,q)$.
\end{proof}
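Both signature identities used in the last two proofs, $\sigma(p,2p+q)=\sigma(p,q)$ and $\sigma(p,2p-q)=-\sigma(p,q)$, are easy to spot-check numerically against the closed formula from Section 2; the script below is our illustration.

```python
from math import gcd

def signature(p, q):
    # sigma(p, q) = sum_{i=1}^{p-1} (-1)^{floor(i*q/p)}
    return sum((-1) ** (i * q // p) for i in range(1, p))

pairs = [(p, q) for p in range(2, 30) for q in range(1, 2 * p, 2)
         if gcd(p, q) == 1]
assert all(signature(p, 2 * p + q) == signature(p, q) for p, q in pairs)
assert all(signature(p, 2 * p - q) == -signature(p, q)
           for p, q in pairs if p > q)     # T3 requires p > q
print("verified for", len(pairs), "pairs")
```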
\subsection{The effect of $T_1$ move}
In this subsection we discuss the effect of $T_1$. Note that, on the level of knots, $T_2$ preserves the knot and $T_3$ changes the knot to its mirror image, which is the reason those two cases are relatively easy compared with the case of $T_1$. The goal of this subsection is to prove
\begin{prop}\label{T_1 is good}
If Theorem \ref{reformulation of the main theorem} is true for an admissible pair $(p,q)$, then it is true for $T_1(p,q)$.
\end{prop}
The proof of this proposition will come at the end of this subsection, after investigating the effect of $T_1$ on the signature and the Alexander polynomial.
First of all, we present the effect of $T_1$ on the signature.
\begin{lem}\label{effect of T_1 on signature}
$\sigma(T_1(p,q))-\sigma(p,q)=1$.
\end{lem}
\begin{figure}
\begin{minipage}[t]{0.5\linewidth}
\centering{
\fontsize{1cm}{2em}
\resizebox{40mm}{!}{\input{beforeT1.pdf_tex}}
\caption{Before $T_1$}
\label{BeforeT1}
}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\centering{
\fontsize{1cm}{2em}
\resizebox{50mm}{!}{\input{afterT1.pdf_tex}}
\caption{After $T_1$}
\label{AfterT1}
}
\end{minipage}
\end{figure}
\begin{figure}
\centering{
\fontsize{1cm}{2em}
\resizebox{85mm}{!}{\input{signatureT1.pdf_tex}}
\caption{The effect of $T_1$ on signature}
\label{signatureT1}
}
\end{figure}
\begin{proof}
The effect of $T_1$ on the extended diagram is shown in Fig.\ \ref{BeforeT1} and Fig.\ \ref{AfterT1}. We describe the effect as sliding the bottom end of the parallel overarcs one unit to the right.
First, compare the new overarcs and the old ones pair by pair, matching those that share the same top, from right to left. We see that the crossings between an old overarc and the principal underarc have counterparts among the crossings between the new overarc and the principal underarc (the crossings which are circled in Fig.\ \ref{signatureT1}). Secondly, the presence of each bottom loop in the principal underarc creates two new crossings with the new overarcs (the crossings which are boxed in Fig.\ \ref{signatureT1}), yet these two crossings cancel each other algebraically. Finally, sliding the overarc on which the principal underarc starts creates a new crossing (the crossing marked by a triangle in Fig.\ \ref{signatureT1}), and this crossing has positive sign. The conclusion then follows from comparing the sums of the signed crossings in view of Theorem \ref{Shinohara}.
\end{proof}
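Lemma~\ref{effect of T_1 on signature} can likewise be checked against the closed formula $\sigma(p,q)=\Sigma_{i=1}^{p-1}(-1)^{\lfloor iq/p\rfloor}$ from Section 2; the loop below over admissible pairs is our illustration.

```python
from math import gcd

def signature(p, q):
    # sigma(p, q) = sum_{i=1}^{p-1} (-1)^{floor(i*q/p)}
    return sum((-1) ** (i * q // p) for i in range(1, p))

admissible = [(p, q) for p in range(1, 40) for q in range(1, 40, 2)
              if gcd(p, q) == 1]
assert all(signature(p + q, q) - signature(p, q) == 1
           for p, q in admissible)   # sigma(T1(p, q)) = sigma(p, q) + 1
print("checked", len(admissible), "admissible pairs")
```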
Next we examine how the radius of the stable terms behaves under the $T_1$ operation. We separate the discussion into two cases. First we have
\begin{prop}\label{radius change if there were no stable term}
Given an admissible pair $(p,q)$, if there are no stable terms in the corresponding Alexander polynomial, then there are exactly two stable terms in the Alexander polynomial corresponding to $T_1(p,q)$.
\end{prop}
\begin{proof}
In this case, in view of the symmetry of the coefficients, the degree of the Alexander polynomial must be even, and hence $l=l(p,q)$ is odd. Let $l=2k+1$ and let the coefficients of the Alexander polynomial for $(p,q)$ be $\alpha_0$,...,$\alpha_{k-1}$, $\alpha_{k}$,$\alpha_{k+1}$,...,$\alpha_{2k}$. After the $T_1$ move, denote the coefficients by $\alpha_0'$,...,$\alpha_{k}'$,$\alpha_{k+1}'$,...,$\alpha_{2k+1}'$. We have
\begin{displaymath}
\alpha_{k+1}'-\alpha_{k}'=b_{k+1}'-b_{(2k+2)-(k+1)}'=b_{k+1}-b_{k+1}=0,
\end{displaymath}
and
\begin{displaymath}
\alpha_{k}'-\alpha_{k-1}'=b_{k}'-b_{(2k+2)-(k)}'=b_{k}-b_{k+2}\geq b_{k}-b_{k+1}=\alpha_{k}-\alpha_{k-1}>0
\end{displaymath}
where we used $b_{k+1}\geq b_{k+2}$ due to the second IH property. So the only stable terms are $\alpha_{k+1}'$ and $\alpha_{k}'$.
\end{proof}
Secondly, when there are stable terms in the Alexander polynomial before applying $T_1$, we have the following proposition.
\begin{prop}\label{radius change when there are stable terms}
Let $\alpha_i$ and $\alpha_i'$ be the coefficients of the Alexander polynomial corresponding to $(p,q)$ and $T_1(p,q)$ respectively. Denote by $l$ the length of $(p,q)$. Let $i_0$, $i_0'$ be integers such that
$$\alpha_0<\alpha_1<\cdots<\alpha_{i_0-1}=\alpha_{i_0}=\cdots=\alpha_{l-i_0}>\alpha_{l-i_0+1}>\cdots>\alpha_{l-1},$$
and
$$\alpha_0'<\alpha_1'<\cdots<\alpha_{i_0'-1}'=\alpha_{i_0'}'=\cdots=\alpha_{l+1-i_0'}'>\alpha_{l-i_0'+2}'>\cdots>\alpha_{l}'.$$
If $i_0-1<l-i_0$, then one of the following statement is true:
\begin{enumerate}
\item $i_0'=i_0+1$\\
\item $i_0'=i_0$ and $b_{i_0}=b_{i_0+1}=\cdots =b_l=0$.
\end{enumerate}
\end{prop}
\begin{proof}
Note that $\alpha_{i_0}-\alpha_{i_0-1}=b_{i_0}-b_{l-i_0}=0$ and hence $b_{i_0}=b_{i_0+1}=\cdots=b_{l-i_0}$ by the third IH property. If $l-i_0>i_0$, then $\alpha_{i_0+1}'-\alpha_{i_0}'=b'_{i_0+1}-b'_{l'-(i_0+1)}=b_{i_0+1}-b_{l-i_0}=0$. Therefore, $i_0'\leq i_0+1$. If $l-i_0=i_0$, then $i_0'\leq i_0+1$ by considering the degree and symmetry of the Alexander polynomial. It remains to show that $i_0'\geq i_0$.
If $i_0'< i_0+1$, we have $\alpha_{i_0}'-\alpha_{i_0-1}'=b_{i_0}-b_{l-i_0+1}=0$, then by third IH property, $b_{i_0}=b_{i_0+1}=\cdots=b_{l-i_0+1}$. We continue the discussion in two cases.
\textbf{(Case 1) }If there is an $\alpha_{i_0-2}$ term, i.e.\ $i_0\geq 2$, then since $b_{i_0-1}-b_{l-i_0+1}=\alpha_{i_0-1}-\alpha_{i_0-2}>0$, we learn that $b_{i_0-1}>b_{i_0}=b_{i_0+1}=\cdots=b_{l-i_0+1}$. Then $\alpha_{i_0-1}'-\alpha_{i_0-2}'=b_{i_0-1}-b_{l-i_0+2}\geq b_{i_0-1}-b_{l-i_0+1}>0$. Here $b_{l-i_0+1}\geq b_{l-i_0+2}$ follows from the second IH property. So in this case $i_0'=i_0$. Now let $h$ be the integer in the first IH property. If $h<l-i_0+1$, then by the first IH property, $b_{l-i_0+1}=\cdots=b_l=0$ and therefore $b_{i_0}=b_{i_0+1}=\cdots=b_l=0$. We claim that $h$ cannot be greater than or equal to $l-i_0+1$. If not, i.e.\ $h\geq l-i_0+1$, then for some $j$ we have $S_j=b_{l-i_0+1}$, and $S_{j+1}$ must be $b_k$ for some $k\leq i_0-1$; otherwise, we would have $b_{i_0-1}\leq b_{l-i_0+1}$ in view of the first IH property, contradicting our earlier observation that $b_{i_0-1}>b_{l-i_0+1}$. This understood, we further observe that since $S_{j+2}=b_{l-i_0}=b_{l-i_0+1}=S_j$ (the existence of $S_{j+2}$ follows from the assumption $i_0-1<l-i_0$), we have $S_j=S_{j+1}$ by the first IH property. Therefore $b_k=b_{l-i_0+1}$, and since $k\leq i_0-1\leq l-i_0+1$ we have $b_{i_0-1}=b_{l-i_0+1}$ by the third IH property. However, this contradicts $b_{i_0-1}>b_{l-i_0+1}$, so $h$ cannot be greater than or equal to $l-i_0+1$. In summary, we have $i_0'=i_0$ and $b_{i_0}=b_{i_0+1}=\cdots=b_{l}=0$.
\textbf{(Case 2) }If the term $\alpha_{i_0-2}$ does not exist, i.e.\ $i_0=1$, then $i_0'=1$ and $\alpha_0=\alpha_1=\cdots=\alpha_{l-1}$. Since $0=\alpha_1-\alpha_0=b_1-b_{l-1}$, the third IH property gives $b_1=b_2=\cdots=b_{l-1}$. Moreover, $\alpha_1'-\alpha_0'=b_1-b_l=0$ implies $b_1=\cdots=b_l$. If $b_l$ were nonzero, then the first IH property would imply $b_l\leq b_0\leq b_{l-1}=b_l$, and hence all the $b_i$'s would be equal by the third IH property; but this is impossible, since by definition exactly one of the $b_i$'s is odd and all the others are even. Therefore, we must have $b_1=\cdots=b_l=0$, with $b_0$ the only nonzero term.
In summary, after applying $T_1$, the starting index of the stable terms either stays the same or increases by 1; moreover, when it stays the same, more than half of the terms of the $b_i$ sequence are zero.
\end{proof}
To prove Prop.\ \ref{T_1 is good}, we need further control of the signature in the case when the starting index of the stable terms stays the same. This is what the following proposition addresses.
\begin{prop}\label{signature is positive when half of b_i vanish}
Let $(p,q)$ be an admissible pair with $\{b_i\}$ such that
\begin{displaymath}
b_0, b_1,..., b_{i_0-1}>0,
\end{displaymath}
\begin{displaymath}
b_{i_0}=\cdots =b_l=0
\end{displaymath}
where $l=l(p,q)$ and $i_0\leq \lfloor \frac{l}{2}\rfloor$. Then $\sigma(p,q)\geq 0$.
\end{prop}
To prove the proposition, we need four lemmas.
\begin{lem}\label{signature is less than the bredth of the alexander poly}
For any admissible pair $(p,q)$, let $\sigma=\sigma(p,q)$ and $l=l(p,q)$. Then $|\sigma|\leq l-1$.
\end{lem}
\begin{proof}
Note that this statement is true for $(1,1)$. The operators $T_2$ and $T_3$ change neither $|\sigma|$ nor $l$, while $T_1$ increases $l$ by one and increases $|\sigma|$ by at most one. So, by induction, the statement holds for all admissible pairs in view of Prop.\ \ref{inductible lemma}.
\end{proof}
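The induction just carried out can be sketched mechanically. The code below tracks, along an arbitrary word in $T_1$, $T_2$, $T_3$, the breadth $l$ and a worst-case bound on $|\sigma|$, using only the increments stated in the proof ($T_1$ raises $l$ by one and $|\sigma|$ by at most one; $T_2$, $T_3$ change neither). The base-case normalization $l=1$, $\sigma=0$ for $(1,1)$ is our assumption, chosen to be consistent with the lemma; it is not taken from the text.

```python
import itertools

def invariant_bounds(ops):
    """Track (l, worst-case |sigma|) along a word in T1, T2, T3.

    Base case (1,1): we assume l = 1 and sigma = 0 (a hypothetical
    normalization; only the increments below come from the proof)."""
    l, sigma_max = 1, 0
    for op in ops:
        if op == "T1":      # T1 increases l by one, |sigma| by at most one
            l += 1
            sigma_max += 1
        # T2 and T3 change neither l nor |sigma|
    return l, sigma_max

# The inductive invariant |sigma| <= l - 1 is preserved by every word:
for ops in itertools.product(["T1", "T2", "T3"], repeat=6):
    l, sigma_max = invariant_bounds(ops)
    assert sigma_max <= l - 1
```

Since $l-1$ and the worst-case bound both start at $0$ and are incremented together only by $T_1$, the invariant is immediate; the exhaustive loop merely makes that visible.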
\begin{lem}\label{$T_2(p,q)$ has no zero terms in its bottom sequence}
$T_2(p,q)$ has no zero terms in its bottom sequence.
\end{lem}
\begin{proof}
Let $\{b_i\}$ stand for the bottom sequence of $(p,q)$ and $\{b_i'\}$ for that of $T_2(p,q)$, and let $l=l(p,q)=l(T_2(p,q))=l'$. Then $b_i'=2\alpha_i+b_{l-i}\geq 2\alpha_i>0$ when $i\leq l-1$, and $b_l'=2\alpha_l+b_0=b_0>0$.
\end{proof}
\begin{lem}\label{effect of T3T1}
Let $\{b_i'\}_{0 \leq i\leq l+1}$ be the bottom sequence for $T_3\circ T_1(p,q)$. Then the only zero term is $b_{l+1}'$.
\end{lem}
\begin{figure}
\centering{
\fontsize{0.5cm}{2em}
\resizebox{10cm}{!}{\input{T3T1.pdf_tex}}
\caption{}
\label{T3T1}
}
\end{figure}
\begin{proof}
Let $l=l(p,q)$, and for $T_1(p,q)$, we denote by
\begin{displaymath}
b_0, b_1, ..., b_l, 0
\end{displaymath}
\begin{displaymath}
\alpha_0, \alpha_1,...,\alpha_{l}
\end{displaymath}
the bottom sequence and the connecting arc sequence, respectively.
So $b'_{l+1}=2\alpha_{l+1}-b_{l+1}=0-0=0$. We claim that $b'_l=2\alpha_l-b_l>0$; hence, by the first IH property, $b_{l+1}'$ is the only zero term. To see that $b'_l>0$, note that $T_1(p,q)=(p+q,q)=(p',q')$, so $p'>q'>0$. First, if $b_l$ is zero or odd, then we are done, since $2\alpha_l$ is never zero or odd. Second, if $b_l$ is positive and even, see Fig.\ \ref{T3T1}: the lower arc joining a bottom loop of $(p',q')$ at $W_l$ must hit $W_{l+1}$ since $p'>q'$ (see Fig.\ \ref{T3T1}(b)); the upper arc joining the bottom loop cannot turn over $W_l$ (see Fig.\ \ref{T3T1}(c)), for that would imply $p'\leq \lfloor q'/2\rfloor+\lfloor q'/2\rfloor\leq q'$. Therefore, three possibilities are left (see the second row of Fig.\ \ref{T3T1}), and in all three cases the existence of one bottom circle gives rise to at least two connecting arcs between $W_l$ and $W_{l+1}$; hence $\alpha_l\geq b_l$, which implies $b'_l>0$.
\end{proof}
\begin{lem}
Let $(p,q)$ be an admissible pair, obtained from $(1,1)$, whose bottom sequence $\{b_i\}$ satisfies
\begin{displaymath}
b_0, b_1,..., b_{i_0-1}>0,
\end{displaymath}
\begin{displaymath}
b_{i_0}=\cdots =b_l=0
\end{displaymath}
where $l=l(p,q)$ and $i_0\leq \lfloor \frac{l}{2} \rfloor$. Then the sequence of operators must end with at least $l-i_0$ successive $T_1$'s, i.e.\ $(p,q)=T_1^{l-i_0}(p',q')$ for some admissible pair $(p',q')$.
\end{lem}
\begin{proof}
Note that there are $l-i_0+1$ zero terms in the bottom sequence of $(p,q)$. The sequence of operators cannot end with $T_2$, in view of Lemma \ref{$T_2(p,q)$ has no zero terms in its bottom sequence}. If it ends with $T_3$, then, since by definition $T_3$ can be applied neither successively, nor after $T_2$, nor directly to $(1,1)$, there must be a $T_1$ before it, and hence $l\geq 2$. In view of Lemma \ref{effect of T3T1}, the resulting pair has only one zero term in its bottom sequence, which does not satisfy the assumption that more than half of the $b_i$'s are zero. So $(p,q)=T_1^k(p',q')$ for some $k$. If there is a $T_2$ before $T_1^k$, then $k=l-i_0+1$. If there is a $T_3$ before $T_1^k$, or $(p',q')=(1,1)$, then there is already a zero in the bottom sequence, and hence $k=l-i_0$.
\end{proof}
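The proof of Lemma \ref{effect of T3T1} records that $T_1(p,q)=(p+q,q)$; iterating gives $T_1^k(p,q)=(p+kq,q)$, the form $T_1^{l-i_0}(p',q')$ appearing in the lemma above. A minimal sketch (the helper names are ours):

```python
def T1(p, q):
    # T_1(p, q) = (p + q, q), as recorded in the proof of Lemma "effect of T3T1"
    return (p + q, q)

def T1_power(k, p, q):
    """Apply T_1 k times; each application adds q to p, so
    T_1^k(p, q) = (p + k*q, q)."""
    for _ in range(k):
        p, q = T1(p, q)
    return (p, q)

assert T1_power(3, 5, 2) == (11, 2)   # (5 + 3*2, 2)
assert T1_power(0, 5, 2) == (5, 2)    # empty word leaves the pair unchanged
```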
Now we are ready to give a proof of Prop.\ \ref{signature is positive when half of b_i vanish}.
\begin{proof}[Proof of Prop.\ \ref{signature is positive when half of b_i vanish}]
By the lemma above, $(p,q)=T_1^{l-i_0}(p',q')$. Let $l=l(p,q)$ and $l'=l(p',q')$. Then $l'=l-(l-i_0)=i_0$, and $|\sigma(p',q')|<i_0$ in view of Lemma \ref{signature is less than the bredth of the alexander poly}. So by Prop.\ \ref{change of bottom sequence} we have $\sigma(p,q)=\sigma(p',q')+(l-i_0)\geq l-2i_0\geq 0$.
\end{proof}
Finally, with all these preparations, we are ready to prove Prop.\ \ref{T_1 is good}, thus concluding the proof of the main theorem.
\begin{proof}[Proof of Prop.\ \ref{T_1 is good}]
Recall that we want to prove that Theorem \ref{reformulation of the main theorem} is true for $T_1(p,q)$ provided it is true for $(p,q)$. Let $l=l(p,q)$ and $l'=l(T_1(p,q))=l+1$. We separate the discussion into two cases.
\textbf{(Case 1) }If there are no stable terms in the Alexander polynomial corresponding to $(p,q)$, then by Prop.\ \ref{radius change if there were no stable term} there are exactly two stable terms in the Alexander polynomial corresponding to $T_1(p,q)$, and hence the radius of stable terms is $1$. Recall that in the proof of Prop.\ \ref{radius change if there were no stable term} we observed that $l$ is odd in this case, which implies that the length $l'$ for $T_1(p,q)$ is even. This in turn implies that $T_1(p,q)$ corresponds to a two-component link, so $|\sigma(T_1(p,q))|\geq 1$, since it must be odd. Hence $\lfloor \frac{|\sigma(T_1(p,q))|+1}{2} \rfloor \geq 1$.
\textbf{(Case 2) }Suppose there are stable terms in the Alexander polynomial corresponding to $(p,q)$. Let $i_0$, $i_0'$ be as in Prop.\ \ref{radius change when there are stable terms}. When $l=2n+1$, write $|\sigma|=2k$. By assumption, $k\geq \lfloor\frac{l-2(i_0-1)}{2}\rfloor=n-i_0+1$. If $i_0'=i_0+1$, then $\lfloor \frac{l'-2(i_0'-1)}{2}\rfloor=n-i_0+1\leq k$, and $\lfloor \frac{|\sigma'|+1}{2}\rfloor \geq \lfloor \frac{(2k-1)+1}{2}\rfloor=k$. If $i_0'=i_0$, then $\lfloor \frac{l'-2(i_0'-1)}{2}\rfloor=n-i_0+2\leq k+1$, and by Prop.\ \ref{radius change when there are stable terms}, Prop.\ \ref{signature is positive when half of b_i vanish} and Lemma \ref{effect of T_1 on signature}, $\lfloor \frac{|\sigma'|+1}{2}\rfloor \geq k+1$. When $l=2n$, the argument is similar and hence omitted.
\end{proof}
\bibliographystyle{abbrv}
% arXiv:1712.04993: On the Alexander polynomial and the signature invariant of two-bridge knots (math.GT)
% arXiv:1705.04271
% Title: Lifting in Besov Spaces
\begin{abstract}
Let $\Omega$ be a smooth bounded domain in ${\mathbb R}^n$ and let $u$ be a measurable function on $\Omega$ such that $|u(x)|=1$ almost everywhere in $\Omega$. Assume that $u$ belongs to the Besov space $B^s_{p,q}(\Omega)$. We investigate whether there exists a real-valued function $\varphi \in B^s_{p,q}$ such that $u=e^{\imath\varphi}$. This extends the corresponding study in Sobolev spaces due to Bourgain, Brezis and the first author. The analysis of this lifting problem leads us to prove some interesting new properties of Besov spaces, in particular a non-restriction property when $q>p$.
\end{abstract}
\section{Introduction}
${}$
Let $\Omega\subset {\mathbb R}^n$ be a bounded simply connected domain and $u:\Omega\rightarrow {\mathbb S}^1$ a continuous (resp. $C^k$, $k\geq 1$) function. It is a well-known fact that there exists a continuous (resp. $C^k$) real-valued function $\varphi$ such that $u=e^{\imath\varphi}$. In other words, $u$ has a continuous (resp. $C^k$) lifting. \par
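The existence of a continuous lifting can be illustrated numerically by phase unwrapping: along a densely sampled path, the principal argument is corrected by multiples of $2\pi$ so that the phase varies continuously. A minimal sketch (the sample map below is ours, chosen for illustration only):

```python
import numpy as np

# A smooth S^1-valued map on a (simply connected) interval, sampled densely.
t = np.linspace(0.0, 1.0, 2000)
u = np.exp(1j * (6 * np.pi * t**2))        # hypothetical example map

# Recover a continuous phase by unwrapping the principal argument:
phi = np.unwrap(np.angle(u))

assert np.allclose(np.exp(1j * phi), u)     # u = e^{i phi}
assert np.max(np.abs(np.diff(phi))) < 0.1   # phi has no jumps: a continuous lifting
```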
\noindent The analogous problem when $u$ belongs to the fractional Sobolev space $W^{s,p}$, $s>0$, $1\le p<\infty$, received a complete answer in \cite{lss}. Let us briefly recall the results:
\begin{enumerate}
\item when $n=1$, $u$ has a lifting in $W^{s,p}$ for all $s>0$ and all $p\in [1,\infty)$,
\item when $n\geq 2$ and $0<s<1$, $u$ has a lifting in $W^{s,p}$ if and only if $sp<1$ or $sp\geq n$,
\item when $n\ge 2$ and $s\geq 1$, $u$ has a lifting in $W^{s,p}$ if and only if $sp\geq 2$.
\end{enumerate}
Further developments in the Sobolev context can be found in \cite{bethuelchiron,nguyenphase,mironescuphase,mironescucras2}.
In the present paper, we address the corresponding question in the framework of Besov spaces. More specifically, given $s, p, q$ in suitable ranges defined later, we ask whether a map $u\in B^s_{p,q}(\Omega ; {\mathbb S}^1)$ can be lifted as $u=e^{\imath\varphi}$, with $\varphi\in B^s_{p,q}(\Omega ; \mathbb R)$. We say that $B^s_{p,q}$ has the lifting property if and only if the answer is positive.
When dealing with circle-valued functions and their phases, it is natural to consider only maps in $L^1_{loc}$. This is why we assume that $s>0$,\footnote{ However, we will discuss an appropriate version of the lifting problem when $s\le 0$; see Remark \ref{aa1} and Case \ref{T} below. } and we take
the exponents $p$ and $q$ in the classical range $p\in [1,\infty)$, $q\in [1,\infty]$.\footnote{ We discard the uninteresting case where $p=\infty$. In that case, maps in $B^s_{\infty,q}$ are continuous. Arguing as in Case \ref{tri} below, we obtain the existence of a $B^s_{\infty,q}$ phase for every $u\in B^s_{\infty,q}(\Omega ; {\mathbb S}^1)$.}
Since Besov spaces are microscopic modifications of Sobolev (or Slobodeskii) spaces, one expects a global picture similar to the one described before for Sobolev spaces. The analysis in Besov spaces is indeed partly similar to the one in Sobolev spaces, as far as the results and the techniques are concerned. However, several difficulties occur and some cases still remain open. Thus, the analysis of the lifting problem leads us to prove several new properties for Besov spaces (in connection with restriction or absence of restriction properties, sums of integer valued functions which are constant, products of functions in Besov spaces, disintegration properties for the Jacobian), which are interesting in their own right. We also provide detailed arguments for classical properties (some embeddings, Poincar\'e inequalities) which could not be precisely located in the literature.
\medskip
Let us now describe more precisely our results and methods. When $sp>n$, functions in $B^s_{p,q}$ are continuous, which readily implies that $B^s_{p,q}$ has the lifting property (Case \ref{tri}).
\medskip
In the case where $sp<1$, we rely on a characterization of $B^s_{p,q}$ in terms of the Haar basis \cite[Th\'eor\`eme 5]{bourdaud}, to prove that $B^s_{p,q}$ has the lifting property (Case \ref{A}).
\medskip
Assume now that $0<s\leq 1$, $sp=n$ and $q<\infty$. Let $u\in B^s_{p,q}(\Omega ; {\mathbb S}^1)$ and let $F(x,\varepsilon):=u\ast\rho_{\varepsilon}(x)$, where $\rho$ is a standard mollifier. Since $B^s_{p,q}\hookrightarrow \text{\rm VMO}$, for all $\varepsilon$ sufficiently small and all $x\in \Omega$ we have $1/2<\left\vert F(x,\varepsilon)\right\vert\le 1$. Writing $F(x,\varepsilon)/\left\vert F(x,\varepsilon)\right\vert=e^{\imath\psi_\varepsilon}$, where $\psi_\varepsilon$ is $C^{\infty}$, and relying on a slight modification of the trace theory for weighted Sobolev spaces developed in \cite{tracesoldnew}, we conclude, letting $\varepsilon$ tend to $0$, that $u=e^{\imath\psi_0}$, where $\psi_0=\lim_{\varepsilon\to 0}\psi_\varepsilon\in B^s_{p,q}$, and therefore $B^s_{p,q}$ still has the lifting property (Case \ref{X}).
\medskip
Consider now the case where $s>1$ and $sp\geq 2$. Arguing as in \cite[Section 3]{lss}, it is easily seen that the lifting property for $B^s_{p,q}$ will follow from the following property: given $u\in B^s_{p,q}(\Omega; {\mathbb S}^1)$, if $F:=u\wedge\nabla u\in L^p(\Omega ; {\mathbb R}^n)$, then $(*)$ $\operatorname{curl} F=0$. The proof of $(*)$ is much more involved than the corresponding one for $W^{s,p}$ spaces \cite[Section 3]{lss}. It relies on a disintegration argument for the Jacobians, more generally applicable in $W^{1/p,p}$. This argument, in turn, relies on the fact that $\operatorname{curl} F=0$ when $u\in \text{\rm VMO}$ and $n=2$, and a slicing argument. In particular, we need a {\it restriction property for Besov spaces}, namely the fact that, for $s>0$, $1\le p<\infty$ and $1\le q\le p$, for all $f\in B^s_{p,q}$, the partial maps of $f$ still belong to $B^s_{p,p}$ (see Lemma \ref{oa1} below). Thus, we obtain that, when $s>1$ and $1\leq p<\infty$, $B^s_{p,q}$ does have the lifting property when [$1\le q<\infty$, $n=2$, and $sp=2$], or [$1\le q\le p$, $n\ge 3$, and $sp= 2$], or [$1\le q\le \infty$, $n\ge 2$, and $sp> 2$].
One can improve the conclusion of Lemma \ref{oa1} as follows. For $s>0$, $1\le p<\infty$ and $1\le q\le p$, for all $f\in B^s_{p,q}$, the partial maps of $f$ belong to $B^s_{p,q}$ (Proposition \ref{qh1}). We emphasize the fact that this type of property holds only under the crucial assumption $q\le p$.
More precisely, if $q>p$ and $s>0$, then we exhibit a compactly supported function $f\in B^s_{p,q}({\mathbb R}^2)$ such that, for almost every $x\in (0,1)$, $f(x,\cdot)\notin B^s_{p,\infty}({\mathbb R})$ (Proposition \ref{l7.26}). This phenomenon, which has not been noticed before, shows a picture strikingly different from the one for $W^{s,p}$, and even more generally for Triebel-Lizorkin spaces \cite[Section 2.5.13]{triebel2}.
\medskip
Let us return to the case when $0<s<1$, $1\le p< \infty$ and $n\geq 2$. Assume now that [$1\le q<\infty$ and $1\leq sp <n$], or [$q=\infty$ and $1< sp <n$]. In this case, we show that $B^s_{p,q}$ does not have the lifting property. The argument uses embedding theorems and the following fact, for which we provide a proof: let $s_i>0$, $1\le p_i<\infty$, and [$s_ip_i=1$ and $1\le q_i<\infty$] or [$s_ip_i>1$ and $1\le q_i\le\infty$], for $i=1,2$. If $f_i\in B^{s_i}_{p_i,q_i}$, $i=1,2$, and $f_1+f_2$ takes only integer values, then $f_1+f_2$ is constant.
\medskip
Assume finally that $0<s<\infty$, $1\le p<\infty$, $n\ge 2$ and [$1\le q< \infty$ and $1\le sp<2$] or [$q=\infty$ and $1\le sp\le 2$]. In this case, $B^s_{p,q}$ does not have the lifting property either. We provide a counterexample of topological nature, inspired by \cite[Section 4]{lss}: namely, the function $\displaystyle u(x)=\frac{(x_1,x_2)}{\left(x_1^2+x_2^2\right)^{1/2}}$ belongs to $B^s_{p,q}$ but has no lifting in $B^s_{p,q}$.
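The topological obstruction carried by this $u$ can be detected numerically: restricted to a loop around the origin, any continuous phase is forced to increase by $2\pi$ over one turn, so no single-valued continuous phase exists near $0$. A sketch (discretization choices are ours):

```python
import numpy as np

# Sample u(x) = x/|x| along a loop around the origin, where the
# topological obstruction lives.
theta = np.linspace(0.0, 2.0 * np.pi, 4001)
u = np.exp(1j * theta)                     # u restricted to the unit circle

# Any continuous phase along the loop increases by 2*pi over one turn:
phi = np.unwrap(np.angle(u))
winding = (phi[-1] - phi[0]) / (2.0 * np.pi)
assert np.isclose(winding, 1.0)
# Since the phase does not return to its initial value on a closed loop,
# u admits no single-valued continuous phase near the origin.
```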
\medskip
Contrary to the case of Sobolev spaces, some cases remain open. A first case occurs when $s>1$, $1\le p<\infty$, $p<q<\infty$, $n\ge 3$, and $sp=2$. In this situation, since the restriction property for $B^s_{p,q}$ does not hold, the argument sketched before does not work any longer and we do not know if $B^s_{p,q}$ has the lifting property.
The case where $s=1$, $1\le p<\infty$, $n\ge 3$, and [$1\le q<\infty$ and $2\le p<n$] or [$q=\infty$ and $2< p\le n$] is also open (except when $s=1$ and $p=q=2$, since in this case, $B^1_{2,2}=W^{1,2}$ has the lifting property). This is related to the fact that it is not known whether the map $\varphi\mapsto e^{\imath\varphi}$ maps $B^1_{p,q}$ into itself.
When $1\le p<\infty$, $s=1/p$ and $q=\infty$, we do not know if $B^{1/p}_{p,\infty}$ has the lifting property. In particular, it is unclear whether the Haar system provides a basis of $B^{1/p}_{p,\infty}$. The case where $q=\infty$, $n\le p<\infty$, $n\ge 3$ and $s=n/p$ is also open. Indeed, $B^s_{p,q}$ is not embedded into $\text{\rm VMO}$ in this case, and the argument briefly described above is not applicable any more.
\medskip
Let us summarize the main results of this paper concerning the lifting problem.
We start with positive cases.
\begin{theorem} \label{positive}
Let $s>0$, $1\le p<\infty$, $1\le q\le \infty$. The lifting problem has a positive answer in the following cases:
\begin{enumerate}
\item $s>0$, $1\le q\le\infty$, and $sp>n$,
\item $0<s<1$, $1\le q\le\infty$, and $sp<1$,
\item $0<s\le 1$, $1\le q<\infty$, and $sp=n$,
\item
\begin{enumerate}
\item $s>1$, $1\le q<\infty$, $n=2$, and $sp=2$,
\item $s>1$, $1\le q\le p$, $n\ge 3$, and $sp= 2$,
\item $s>1$, $1\le q\le \infty$, $n\ge 2$,
and $sp> 2$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\noindent The negative cases are as follows:
\begin{theorem} \label{negative}
Let $s>0$, $1\le p<\infty$, $1\le q\le \infty$. The lifting problem has a negative answer in the following cases:
\begin{enumerate}
\item
\begin{enumerate}
\item $0<s<1$, $1\le q<\infty$, $n\ge 2$, and $1\leq sp <n$,
\item $0<s<1$, $q=\infty$, $n\ge 2$, and $1< sp <n$,
\end{enumerate}
\item
\begin{enumerate}
\item $0<s<\infty$, $1\le q<\infty$, $n\ge 2$, and $1\le sp<2$,
\item $0<s<\infty$, $1\le p<\infty$, $q=\infty$, $n\ge 2$, and $1<sp\le 2$.
\end{enumerate}
\end{enumerate}
\end{theorem}
The paper is organized as follows. In Section \ref{fun}, we briefly recall the standard definition of Besov spaces and some classical characterizations of these spaces (by Littlewood-Paley theory and wavelets). In Section \ref{pos} we establish Theorem \ref{positive}, namely the cases where $B^s_{p,q}$ does have the lifting property, while Section \ref{neg} is devoted to negative cases (Theorem \ref{negative}). In Section \ref{ope}, we discuss the remaining cases, which are widely open. The final section gathers statements and proofs of various results on Besov spaces needed in the proofs of Theorems \ref{positive} and \ref{negative}.
\subsubsection*{Acknowledgments}
P. Mironescu thanks N. Badr, G. Bourdaud, P. Bousquet, A.C. Ponce and W. Sickel for useful discussions. He warmly thanks J. Kristensen for calling his attention to the reference \cite{uspenskii}.
All the authors are supported by the ANR project \enquote{Harmonic Analysis at its Boundaries}, ANR-12-BS01-0013-03. P. Mironescu was also supported by the LABEX MILYON (ANR- 10-LABX-0070) of
Universit\'e de Lyon, within the program \enquote{Investissements d'Avenir}
(ANR-11-IDEX-0007) operated by the French National Research Agency
(ANR).
\section*{Notation, framework}
\begin{enumerate}
\item
Most of our results are stated in a smooth bounded domain $\Omega\subset{\mathbb R}^n$.
\item
In a few cases, proofs are simpler if we consider ${\mathbb Z}^n$-periodic maps $u: (0,1)^n\to{\mathbb S}^1$. In this case, we denote the corresponding function spaces $B^s_{p,q}({\mathbb T}^n ; {\mathbb S}^1)$, and the question is whether a map $u\in B^s_{p,q}({\mathbb T}^n ; {\mathbb S}^1)$ has a lifting $\varphi\in B^s_{p,q}((0,1)^n ; {\mathbb R})$. [Of course, $\varphi$ need not be, in general, ${\mathbb Z}^n$-periodic.] If such a $\varphi$ exists for every $u\in B^s_{p,q}({\mathbb T}^n ; {\mathbb S}^1)$, then $B^s_{p,q}({\mathbb T}^n ; {\mathbb S}^1)$ has the lifting property.
However, in these results it is not crucial to work in ${\mathbb T}^n$: an inspection of the proofs shows that, with some extra work, we could take any smooth bounded domain.
\item
In the same vein, it is sometimes easier to work in $\Omega =(0,1)^n$ (with no periodicity assumption).
\item
Partial derivatives are denoted $\partial_j$, $\partial_j\partial_k$, and so on, or $\partial^\alpha$.
\item
$\wedge$ denotes vector product of complex numbers: $a\wedge b:=a_1b_2-a_2b_1$. Similarly, $u\wedge \nabla v:=u_1\nabla v_2-u_2\nabla v_1$.
\item
If $u:\Omega\to{\mathbb C}$ and if $\varpi$ is a $k$-form on $\Omega$ (with $k\in\llbracket 0, n-1\rrbracket$), then $\varpi\wedge (u\wedge\nabla u) $ denotes the $(k+1)$-form $\varpi\wedge(u_1d u_2-u_2du_1)$.
\item
We let ${\mathbb R}^n_+$ denote the open set ${\mathbb R}^{n-1}\times (0,\infty)$.
\end{enumerate}
\tableofcontents
\section{Crash course on Besov spaces}
\label{fun}
${}$
We briefly recall here the basic properties of the Besov spaces in ${\mathbb R}^n$, with special focus on the properties which will be instrumental for our purposes. For a complete treatment of these spaces, see \cite{triebel2,fjw,triebel3,runstsickel}. \par
\subsection{Preliminaries}
${}$
In the sequel, ${\mathcal S}({\mathbb R}^n)$ is the usual Schwartz space of rapidly decreasing $C^{\infty}$ functions. Let ${\mathcal Z}({\mathbb R}^n)$ denote the subspace of
${\mathcal S}({\mathbb R}^n)$ consisting of functions $\varphi\in {\mathcal S}({\mathbb R}^n)$ such that $\partial^{\alpha}\varphi(0) = 0$ for every multi-index $\alpha\in {\mathbb N}^n$. Let ${\mathcal Z}^{\prime}({\mathbb R}^n)$ stand for the topological dual of ${\mathcal Z}({\mathbb R}^n)$. It is well-known \cite[Section 5.1.2]{triebel2} that ${\mathcal Z}^{\prime}({\mathbb R}^n)$ can be identified with the quotient space ${\mathcal S}^{\prime}({\mathbb R}^n)/{\mathcal P}({\mathbb R}^n)$, where ${\mathcal P}({\mathbb R}^n)$ denotes the space of all polynomials in ${\mathbb R}^n$.
We denote by ${\mathcal F}$ the Fourier transform.
For every sequence $(f_j)_{j\geq 0}$ of measurable functions on ${\mathbb R}^n$, we set
\begin{equation*}
\left\Vert (f_j)\right\Vert_{l^q(L^p)}:=\left(\sum_{j\geq 0} \left(\int_{{\mathbb R}^n} \left\vert f_j(x)\right\vert^pdx\right)^{q/p}\right)^{1/q},
\end{equation*}
with the usual modification when $p=\infty$ and/or $q=\infty$. If $(f_j)$ is labelled by ${\mathbb Z}$, then $\left\Vert (f_j)\right\Vert_{l^q(L^p)}$ is defined analogously with $\sum_{j\geq 0}$ replaced by $\sum_{j\in {\mathbb Z}}$.
Finally, we fix some notation for finite order differences. Let $\Omega\subset {\mathbb R}^n$ be a domain and let $f:\Omega\to{\mathbb R}$.
For all integers $M\geq 0$ and all $x, h\in {\mathbb R}^n$, set
\begin{equation}
\label{ia1}
\Delta^M_hf(x)=\begin{cases}\displaystyle\sum_{l=0}^M {M \choose l} (-1)^{M-l} f(x+lh),&\text{if } x,\, x+h,\ldots,\, x+Mh\in \Omega\\
0,&\text{otherwise}
\end{cases}.
\end{equation}
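As a sanity check on the definition above, $\Delta^M_h$ annihilates polynomials of degree $<M$ and returns $M!\,h^M$ on $x\mapsto x^M$. A short sketch in one dimension, on the whole real line so that the domain restriction is vacuous (the helper name is ours):

```python
import math

def delta(M, h, f, x):
    """M-th order forward difference (Delta_h^M f)(x), on the whole
    real line, so no domain restriction applies."""
    return sum((-1) ** (M - l) * math.comb(M, l) * f(x + l * h)
               for l in range(M + 1))

# Delta_h^M annihilates polynomials of degree < M ...
for M in range(1, 5):
    for deg in range(M):
        assert abs(delta(M, 0.3, lambda x, d=deg: x ** d, 1.7)) < 1e-9

# ... and on x^M it returns M! * h^M:
M, h = 3, 0.5
assert abs(delta(M, h, lambda x: x ** M, 2.0) - math.factorial(M) * h ** M) < 1e-9
```

This is the mechanism behind the requirement $M>s$ in the characterization by differences below: higher-order differences are blind to polynomial behavior of degree $<M$ and measure only the genuine oscillation of $f$.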
\subsection{Definitions of Besov spaces}
\label{mm7}
${}$
We first focus on inhomogeneous Besov spaces. Fix a sequence of functions $(\varphi_j)_{j\geq 0}\in {\mathcal S}({\mathbb R}^n)$ such that:
\begin{itemize}
\item[$1.$] $\displaystyle \mbox{supp }\varphi_0\subset B(0,2)$ and $\displaystyle \mbox{supp }\varphi_j\subset B(0,2^{j+1})\setminus B(0,2^{j-1}) \mbox{ for all }j\geq 1$.
\item[$2.$]
For every multi-index $\alpha\in {\mathbb N}^n$, there exists $c_{\alpha}>0$ such that $\displaystyle
\left\vert D^{\alpha}\varphi_j(x)\right\vert\leq c_{\alpha}2^{-j\left\vert \alpha\right\vert}$ for all $x\in {\mathbb R}^n$ and all $j\geq 0$.
\item[$3.$]
For all $x\in {\mathbb R}^n$, it holds $
\sum_{j\geq 0} \varphi_j(x)=1$.
\end{itemize}
\begin{definition} [Definition of inhomogeneous Besov spaces]
Let $s\in {\mathbb R}$, $1\leq p<\infty$ and $1\leq q\leq \infty$. Define $B^s_{p,q}({\mathbb R}^n)$ as the space of tempered distributions $f\in {\mathcal S}^{\prime}({\mathbb R}^n)$ such that
\begin{equation*}
\left\Vert f\right\Vert_{B^s_{p,q}({\mathbb R}^n)}:=\left\Vert \left(2^{sj} {\mathcal F}^{-1}\left(\varphi_j{\mathcal F}f(\cdot)\right)\right)\right\Vert_{l^q(L^p)}<\infty.
\end{equation*}
\end{definition}
Recall \cite[Section 2.3.2, Proposition 1, p. 46]{triebel2} that $B^s_{p,q}({\mathbb R}^n)$ is a Banach space which does not depend on the choice of the sequence $(\varphi_j)_{j\geq 0}$, in the sense that two different choices for the sequence $(\varphi_j)_{j\geq 0}$ give rise to equivalent norms. Once the $\varphi_j$'s are fixed, set $f_j:={\mathcal F}^{-1}\left(\varphi_j{\mathcal F}f\right)$; we refer to the equality $f=\sum_j f_j$ in ${\mathcal S}'$ as the Littlewood-Paley decomposition of $f$.
Let us now turn to the definition of homogeneous Besov spaces. Let $(\varphi_j)_{j\in {\mathbb Z}}$ be a sequence of functions satisfying:
\begin{itemize}
\item[$1.$]
$\displaystyle
\mbox{supp }\varphi_j\subset B(0,2^{j+1})\setminus B(0,2^{j-1}) \mbox{ for all }j\in {\mathbb Z}$.
\item[$2.$]
For every multi-index $\alpha\in {\mathbb N}^n$, there exists $c_{\alpha}>0$ such that $\displaystyle
\left\vert D^{\alpha}\varphi_j(x)\right\vert\leq c_{\alpha}2^{-j\left\vert \alpha\right\vert}$ for all $x\in {\mathbb R}^n$ and all $j\in {\mathbb Z}$.
\item[$3.$]
For all $x\in {\mathbb R}^n\setminus \left\{0\right\}$, it holds
$
\sum_{j\in {\mathbb Z}} \varphi_j(x)=1$.
\end{itemize}
\begin{definition} [Definition of homogeneous Besov spaces]
Let $s\in {\mathbb R}$, $1\leq p<\infty$ and $1\leq q\leq \infty$. Define $\dot{B}^s_{p,q}({\mathbb R}^n)$ as the space of $f\in {\mathcal Z}^{\prime}({\mathbb R}^n)$ such that
\begin{equation*}
\left\vert f\right\vert_{B^s_{p,q}({\mathbb R}^n)}:=\left\Vert \left(2^{sj} {\mathcal F}^{-1}\left(\varphi_j{\mathcal F}f(\cdot)\right)\right)\right\Vert_{l^q(L^p)}<\infty.
\end{equation*}
\end{definition}
Note that this definition makes sense since, for every polynomial $P$ and every $f\in {\mathcal S}^{\prime}({\mathbb R}^n)$, we have $\displaystyle
\left\vert f\right\vert_{B^s_{p,q}({\mathbb R}^n)}=\left\vert f+P\right\vert_{B^s_{p,q}({\mathbb R}^n)}$.
Again, $\dot{B}^s_{p,q}({\mathbb R}^n)$ is a Banach space which does not depend on the choice of the sequence $(\varphi_j)_{j\in {\mathbb Z}}$ \cite[Section 5.1.5, Theorem, p. 240]{triebel2}.\par
\noindent For all $s>0$ and all $1\le p<\infty$, $1\leq q\leq \infty$, we have \cite[Section 2.3.3, Theorem]{triebel3}, \cite[Section 2.6.2, Proposition 3]{runstsickel}
\begin{equation} \label{homoglp}
B^s_{p,q}({\mathbb R}^n)=L^p({\mathbb R}^n)\cap \dot{B}^s_{p,q}({\mathbb R}^n)\mbox{ and }\left\Vert f\right\Vert_{B^s_{p,q}({\mathbb R}^n)}\sim \left\Vert f\right\Vert_{L^p({\mathbb R}^n)}+\left\vert f\right\vert_{B^s_{p,q}({\mathbb R}^n)}.
\end{equation}
Besov spaces on domains of ${\mathbb R}^n$ are defined as follows.
\begin{definition} [Besov spaces on domains]
Let $\Omega\subset {\mathbb R}^n$ be an open set. Then
\begin{itemize}
\item[$1.$]
$\displaystyle
B^s_{p,q}(\Omega):=\left\{f\in {\mathcal D}^{\prime}(\Omega);\mbox{ there exists }g\in B^s_{p,q}({\mathbb R}^n)\mbox{ such that } f=g\vert_{\Omega}\right\}$,\\
equipped with the norm
\begin{equation*}
\left\Vert f\right\Vert_{B^s_{p,q}(\Omega)}:=\inf \left\{\left\Vert g\right\Vert_{B^s_{p,q}({\mathbb R}^n)};\, g\vert_{\Omega}=f\right\}.
\end{equation*}
\item[$2.$]
$\displaystyle
\dot{B}^s_{p,q}(\Omega):=\left\{f\in {\mathcal D}^{\prime}(\Omega);\mbox{ there exists }g\in \dot{B}^s_{p,q}({\mathbb R}^n)\mbox{ such that } f=g\vert_{\Omega}\right\}$,\\
equipped with the semi-norm
\begin{equation*}
\left\Vert f\right\Vert_{\dot{B}^s_{p,q}(\Omega)}:=\inf \left\{\left\Vert g\right\Vert_{\dot{B}^s_{p,q}({\mathbb R}^n)};\, g\vert_{\Omega}=f\right\}.
\end{equation*}
\end{itemize}
\end{definition}
Local Besov spaces are defined in the usual way: $f\in B^s_{p,q}$ near a point $x$ if for some cutoff $\varphi$ which equals $1$ near $x$ we have $\varphi f\in B^s_{p,q}$.
If $f$ belongs to $B^s_{p,q}$ near each point, then we write $f\in (B^s_{p,q})_{loc}$.
The following is straightforward.
\begin{lemma}
\label{ka3}
Let $f:\Omega\to{\mathbb R}$. If, for each $x\in\overline\Omega$, $f\in B^s_{p,q}(B(x,r)\cap \Omega)$ for some $r=r(x)>0$, then $f\in B^s_{p,q}(\Omega)$.
\end{lemma}
\subsection{Besov spaces on ${\mathbb T}^n$}
\label{mm6}
${}$
Let $\varphi_0\in {\mathcal D}({\mathbb R}^n)$ be such that
\begin{equation*}
\varphi_0(x)=1\mbox{ for all } \left\vert x\right\vert<1\mbox{ and }\varphi_0(x)=0\mbox{ for all }\left\vert x\right\vert\geq \frac 32.
\end{equation*}
For all $k\geq 1$ and all $x\in {\mathbb R}^n$, define
\begin{equation*}
\varphi_k(x):=\varphi_0(2^{-k}x)-\varphi_0(2^{-k+1}x).
\end{equation*}
\begin{definition} \label{periodicbesov}
Let $s\in {\mathbb R}$, $1\leq p<\infty$ and $1\le q\leq \infty$. Define $B^s_{p,q}({\mathbb T}^n)$ as the space of distributions
$f\in {\mathcal D}^{\prime}({\mathbb T}^n)$ whose Fourier coefficients $(a_m)_{m\in{\mathbb Z}^n}$ satisfy
\begin{equation*}
\left\Vert f\right\Vert_{B^s_{p,q}({\mathbb T}^n)}:=\left(\sum_{j=0}^{\infty} 2^{jsq} \left\Vert x\mapsto \sum_{m\in {\mathbb Z}^n} a_m\varphi_j(2\pi m)e^{2\imath\pi m\cdot x}\right\Vert_{L^p({\mathbb T}^n)}^q\right)^{1/q}<\infty
\end{equation*}
(with the usual modification when $q=\infty$). Again, the choice of the system $(\varphi_j)_{j\ge 0}$ is irrelevant, and the equality $f=\sum f_j$, with $f_j:=\sum_{m} a_m\varphi_j(2\pi m)e^{2\imath\pi m\cdot x}$, is the Littlewood-Paley decomposition of $f$.
\end{definition}
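To make Definition \ref{periodicbesov} concrete, here is a discrete sketch of the $B^s_{p,q}({\mathbb T}^1)$ norm of a sampled function, with sharp dyadic frequency blocks $K_0=\{0\}$, $K_j=\{m;\, 2^{j-1}\le|m|<2^j\}$ in place of the smooth cutoffs $\varphi_j$. The two choices give equivalent norms only up to constants, and all numerical choices below are ours:

```python
import numpy as np

def besov_norm_torus(f_samples, s, p, q, jmax=8):
    """Discrete sketch of the B^s_{p,q}(T^1) norm: split the Fourier
    coefficients into sharp dyadic blocks (a simplification of the
    smooth Littlewood-Paley cutoffs; equivalent only up to constants)."""
    N = len(f_samples)
    a = np.fft.fft(f_samples) / N              # Fourier coefficients a_m
    m = np.fft.fftfreq(N, d=1.0 / N)           # integer frequencies
    total = 0.0
    for j in range(jmax + 1):
        if j == 0:
            mask = np.abs(m) < 1               # K_0 = {0}
        else:                                  # K_j = {2^{j-1} <= |m| < 2^j}
            mask = (2 ** (j - 1) <= np.abs(m)) & (np.abs(m) < 2 ** j)
        f_j = np.fft.ifft(a * mask) * N        # block f_j on the grid
        lp = np.mean(np.abs(f_j) ** p) ** (1.0 / p)   # discrete L^p norm
        total += (2.0 ** (j * s) * lp) ** q
    return total ** (1.0 / q)
```

For a fixed function with nonzero high-frequency content, the quantity grows with $s$, reflecting the weight $2^{jsq}$ on the dyadic pieces.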
Alternatively, we have $f\in B^s_{p,q}({\mathbb T}^n)$ if and only if $f$ can be identified with a ${\mathbb Z}^n$-periodic distribution in ${\mathbb R}^n$, still denoted $f$, which belongs to $(B^s_{p,q})_{loc}({\mathbb R}^n)$ \cite[Section 3.5.4, pp. 167-169]{schmeisser}.
\subsection{Characterization by differences}
\label{mm5}
${}$
Among the various characterizations of Besov spaces, we recall here the ones involving differences \cite[Section 5.2.3]{triebel2}, \cite[Theorem, p. 41]{runstsickel}, \cite[Section 1.11.9, Theorem 1.118, p. 74]{triebel06}.
\begin{proposition}
\label{p2.4}
Let $s>0$, $1\leq p<\infty$ and $1\leq q\leq \infty$. Let $M>s$ be an integer. Then, with the usual modification when $q=\infty$:
\begin{itemize}
\item[$1.$] In the space $\dot B^s_{p,q}({\mathbb R}^n)$ we have the equivalence of semi-norms
\begin{equation} \label{equivnormhomogrn}
\begin{aligned}
\left\vert f\right\vert_{B^s_{p,q}({\mathbb R}^n)}\sim &\left(\int_{{\mathbb R}^n} \left\vert h\right\vert^{-sq}
\left\|\Delta_h^Mf\right\|_{L^p ({\mathbb R}^n)}^q\,
\frac{dh}{\left\vert h\right\vert^n}\right)^{1/q}\\
\sim & \sum_{j=1}^n\left(\int_{{\mathbb R}} \left\vert h\right\vert^{-sq}
\left\|\Delta_{h e_j}^Mf\right\|_{L^p ({\mathbb R}^n)}^q\,
\frac{dh}{\left\vert h\right\vert}\right)^{1/q}.
\end{aligned}
\end{equation}
\item[$2.$]
The full $B^s_{p,q}$ norm satisfies, for all $\delta>0$,
\begin{equation*}
\left\Vert f\right\Vert_{B^s_{p,q}({\mathbb R}^n)}\sim \left\Vert f\right\Vert_{L^p({\mathbb R}^n)}+\left(\int_{\left\vert h\right\vert\le \delta} \left\vert h\right\vert^{-sq}
\left\|\Delta_h^Mf\right\|_{L^p ({\mathbb R}^n)}^q\,
\frac{dh}{\left\vert h\right\vert^n}\right)^{1/q}.
\end{equation*}
\end{itemize}
\end{proposition}
\subsection{Characterization by harmonic extensions}
\label{chha}
${}$
In Section \ref{pos}, it will be convenient to work with extensions of maps in $B^{s}_{p,q}$. The connection between regularity of maps and the properties of their suitable extensions is a classical topic in the theory of function spaces. Here is a typical result in this direction.
It characterizes $B^{s}_{p,q}$ by means of the harmonic extension \cite{triebelheat}, \cite[Section 2.12.2, Theorem, p. 184]{triebel2}. More specifically, if $f$ is measurable in ${\mathbb R}^n$ and $s\in (0,1)$, then we have
\begin{equation} \label{besovnorm}
\left\Vert f\right\Vert_{B^s_{p,q}({\mathbb R}^n)}\sim \left\Vert f\right\Vert_{L^p({\mathbb R}^n)}+ \left(\int_0^{\infty} t^{(1-s)q}\left\Vert \frac{\partial P_tf}{\partial t}(\cdot)\right\Vert_{L^p({\mathbb R}^n)}^q \frac{dt}t\right)^{1/q},
\end{equation}
where $P_t$ stands for the Poisson semigroup generated by $-\Delta$, so that $(x,t)\mapsto P_tf(x)$, $t>0$, $x\in{\mathbb R}^n$, is the harmonic extension of $f$ to the upper-half space.
Since when $p>1$ we have
\begin{equation*}\left\Vert \frac{\partial P_tf}{\partial t}\right\Vert_{L^p({\mathbb R}^n)} =\left\Vert (-\Delta_x)^{1/2}P_tf\right\Vert_{L^p({\mathbb R}^n)}\sim \left\Vert \nabla_x P_tf\right\Vert_{L^p({\mathbb R}^n)},
\end{equation*}
one also has, for $1<p<\infty$ and $1\le q\le \infty$,
\begin{equation} \label{besovnormbis}
\left\Vert f\right\Vert_{B^s_{p,q}({\mathbb R}^n)}\sim \left\Vert f\right\Vert_{L^p({\mathbb R}^n)}+ \left(\int_0^{\infty} t^{(1-s)q}\left\Vert \nabla P_tf(\cdot)\right\Vert_{L^p({\mathbb R}^n)}^q \frac{dt}t\right)^{1/q}
\end{equation}
(with the usual modification when $q=\infty$).
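Let us briefly justify the first equality in the display preceding \eqref{besovnormbis}. Since the Poisson semigroup is given by $P_t=e^{-t(-\Delta)^{1/2}}$, we have
\begin{equation*}
\frac{\partial P_tf}{\partial t}=-(-\Delta_x)^{1/2}P_tf,
\end{equation*}
while the equivalence $\left\Vert (-\Delta_x)^{1/2}g\right\Vert_{L^p({\mathbb R}^n)}\sim \left\Vert \nabla_x g\right\Vert_{L^p({\mathbb R}^n)}$, valid for $1<p<\infty$, expresses the $L^p$-boundedness of the Riesz transforms $R_j:=\partial_j(-\Delta)^{-1/2}$.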
The results in the literature are not suited to our context. We will need some variants of \eqref{besovnormbis}, which will be stated and proved in Section
\ref{characext} below.
\subsection{Lizorkin type characterizations}
\label{mm1}
${}$
Such characterizations involve restrictions of the Fourier transform to cubes or corridors; see e.g. \cite[Section 2.5.4, pp. 85-86]{triebel2} or \cite[Section 3.5.3, pp. 166-167]{schmeisser}. The following special case \cite[Section 3.5.3, Theorem, p. 167]{schmeisser} will be useful later.
\begin{proposition}
\label{mm2}
Let $s\in{\mathbb R}$, $1<p<\infty$ and $1\le q\le\infty$. Set $K_0:=\{0\}\subset{\mathbb Z}^n$ and, for $j\ge 1$, let $K_j:=\{ m\in{\mathbb Z}^n;\, 2^{j-1}\le |m|<2^j\}$.\footnote{ Here, $|m|:=\max_{l=1}^n|m_l|$.} Let $f\in{\cal D}'({\mathbb T}^n)$ have the Fourier series expansion $f=\sum_{m\in{\mathbb Z}^n} a_me^{2\imath\pi m\cdot x}$. We set $f_j:=\sum_{m\in K_j} a_me^{2\imath\pi m\cdot x}$. Then we have the norm equivalence
\begin{equation*}
\|f\|_{B^s_{p,q}({\mathbb T}^n)}\sim \left(\sum_{j=0}^{\infty} 2^{jsq} \left\Vert f_j\right\Vert_{L^p({\mathbb T}^n)}^q\right)^{1/q}
\end{equation*}
(with the usual modification when $q=\infty$).
\end{proposition}
\subsection{Characterization by the Haar system}
\label{at7}
${}$
Besov spaces can also be described via the size of their wavelet coefficients. To illustrate this, we start with low smoothness Besov spaces, which can be described using the Haar basis. (The next section is devoted to smoother spaces and bases.) For the results of this section, see e.g. \cite[Corollary 5.3]{devorepopov}, \cite[Section 7]{bourdaud}, \cite[Theorem 1.58]{triebel06}, \cite[Theorem 2.21]{triebel10}. \par
\noindent Let
\begin{equation}
\label{qa8}
\psi_M(x):=
\begin{cases}
1, & \text{if }0\leq x<1/2\\
-1, & \text{if }1/2\leq x\leq 1\\
0, & \text{if }x\notin [0,1]
\end{cases}, \text{ and }\psi_F(x):=\left\vert \psi_M(x)\right\vert.
\end{equation}
When $j\in{\mathbb N}$, we let
\begin{equation}
\label{qa1}
G^j:=\begin{cases}
\left\{F,M\right\}^{n},&\text{if }j=0\\
\left\{F,M\right\}^{n}\setminus\{ (F, F, \ldots, F)\},&\text{if }j>0
\end{cases}.
\end{equation}
For all $m\in {\mathbb Z}^n$, all $x\in {\mathbb R}^n$ and all $G\in \left\{F,M\right\}^{n}$, define
\begin{equation}
\label{qa2}
\Psi_m^G(x):=\prod_{r=1}^n \psi_{G_r}(x_r-m_r).
\end{equation}
Finally, for all $m\in {\mathbb Z}^n$, all $j\in {\mathbb N}$, all $G\in
G^j$
and all $x\in {\mathbb R}^n$, let
\begin{equation}
\label{qa3}
\Psi_m^{j,G}(x):=
\begin{cases}
\displaystyle \Psi_m^G(x), & \text{if }j=0\\
\displaystyle 2^{nj/2}\Psi^G_m(2^jx), & \text{if }j\geq 1
\end{cases}.
\end{equation}
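Note that the normalization in \eqref{qa3} makes each $\Psi^{j,G}_m$ of unit norm in $L^2({\mathbb R}^n)$: by the change of variables $y:=2^jx$,
\begin{equation*}
\int_{{\mathbb R}^n}\left\vert 2^{nj/2}\Psi^G_m(2^jx)\right\vert^2\, dx=\int_{{\mathbb R}^n}\left\vert \Psi^G_m(y)\right\vert^2\, dy=\prod_{r=1}^n\int_{{\mathbb R}}\left\vert \psi_{G_r}(y_r-m_r)\right\vert^2\, dy_r=1,
\end{equation*}
since $\left\vert \psi_F\right\vert$ and $\left\vert \psi_M\right\vert$ both coincide with the characteristic function of $[0,1]$.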
Recall that the family $(\Psi_m^{j,G})$, called the Haar system, is an orthonormal basis of $L^2({\mathbb R}^n)$ \cite[Proposition 1.53]{triebel06}. Moreover, we have the following result \cite[Theorem 2.21]{triebel10}.
\begin{proposition}
\label{at11}
Let $s>0$, $1\leq p<\infty$, and $1\le q\le \infty$ be such that $sp<1$. Let $f\in {\mathcal S}^{\prime}({\mathbb R}^n)$. Then $f\in B^s_{p,q}({\mathbb R}^n)$ if and only if there exists a sequence $\left(\mu^{j, G}_{m}\right)_{j\geq 0,\ G\in G^j,\ m\in {\mathbb Z}^n}$ such that
\begin{equation}
\label{qa4}
\sum_{j=0}^{\infty}\ \sum_{G\in G^j} \left(\sum_{m\in {\mathbb Z}^n} \left\vert \mu^{j, G}_{m}\right\vert^p\right)^{q/p}<\infty
\end{equation}
(obvious modification when $q=\infty$) and
\begin{equation} \label{decompof}
f=\sum_{j=0}^{\infty}\ \sum_{G\in G^j} \sum_{m\in {\mathbb Z}^n} \mu^{j, G}_{m} 2^{-j\left(s-n/p\right)} 2^{-nj/2} \Psi^{j,G}_{m}.
\end{equation}
Here, the series in \eqref{decompof} converges unconditionally in $B^s_{p,q}({\mathbb R}^n)$ when $q<\infty$. Moreover,
\begin{equation}
\label{qa5}
\left\Vert f\right\Vert_{B^s_{p,q}({\mathbb R}^n)}\sim \left(\sum_{j=0}^{\infty}\ \sum_{G\in G^j} \left(\sum_{m\in {\mathbb Z}^n} \left\vert \mu^{j, G}_{m}\right\vert^p\right)^{q/p}\right)^{1/q}
\end{equation}
(obvious modification when $q=\infty$).
\end{proposition}
Equivalently, Proposition \ref{at11} can be reformulated as follows. Consider the partition of ${\mathbb R}^n$ into standard dyadic cubes $Q$ of side $2^{-j}$. \footnote{ Thus the $Q$'s are of the form $Q=2^{-j}\prod_{k=1}^n[m_k,m_k+1)$, with $m_k\in{\mathbb Z}$.} For all $x\in{\mathbb R}^n$, denote by $Q_j(x)$ the unique dyadic cube of side $2^{-j}$ containing $x$. If $f\in L^1_{loc}({\mathbb R}^n)$, define $E_j(f)(x):=\fint_{Q_j(x)}f$ for all $j\geq 0$. We also set $E_{-1}(f):=0$.
We have the following results (see \cite[Theorem 5 with $m=0$]{bourdaud} in ${\mathbb R}^n$; see also \cite[Appendix A]{lss} in the framework of Sobolev spaces on ${\mathbb T}^n$).
\begin{proposition}\label{caracBesov} Let $s>0$, $1\leq p<\infty$, and $1\le q\le \infty$ be such that $sp<1$. Let $f\in L^1_{loc}({\mathbb R}^n)$.
Then
\begin{equation*}\label{caracBesovBourdaud}
\left\Vert f\right\Vert^q_{B^s_{p,q}({\mathbb R}^n)}\sim \sum_{j \ge 0} 2^{sjq}\|E_j(f)-E_{j-1}(f)\|_{L^p}^q
\end{equation*}
(obvious modification when $q=\infty$).
\end{proposition}
Similar results hold when ${\mathbb R}^n$ is replaced by $(0,1)^n$ or ${\mathbb T}^n$; it suffices to consider only dyadic cubes contained in $[0,1)^n$.
\begin{corollary}
\label{mq2}
Let $s>0$, $1\leq p<\infty$, and $1\le q\le \infty$ be such that $sp<1$. Let $f\in L^1_{loc}({\mathbb R}^n)$.
Then
\begin{equation*}\label{mq3}
\left\Vert f\right\Vert^q_{B^s_{p,q}({\mathbb R}^n)}\sim \sum_{j \ge 0} 2^{sjq}\|f-E_{j}(f)\|_{L^p}^q
\end{equation*}
(obvious modification when $q=\infty$).
Similar results hold when ${\mathbb R}^n$ is replaced by $(0,1)^n$ or ${\mathbb T}^n$.
\end{corollary}
\begin{corollary}
\label{mp1}
Let $s>0$, $1\leq p<\infty$, and $1\le q\le \infty$ be such that $sp<1$. Let $(\varphi_j)_{j\ge 0}$ be a sequence of functions on $(0,1)^n$ such that, for each $j$, $\varphi_j$ is constant on each dyadic cube $Q$ of side $2^{-j}$.
Assume that
$
\sum_{j \ge 1} 2^{sjq}\|\varphi_j-\varphi_{j-1}\|_{L^p}^q < \infty$.
Then $(\varphi_j)$ converges in $L^p$ to some $\varphi \in B^s_{p,q}$, and we have
\begin{equation*}
\left\Vert \varphi \right\Vert_{B^s_{p,q}((0,1)^n)} \lesssim \left(\sum_{j \ge 0} 2^{sjq}\|\varphi_j-\varphi_{j-1}\|_{L^p}^q\right)^{1/q}
\end{equation*}
(with the convention $\varphi_{-1}:=0$ and with the usual modification when $q=\infty$).
\end{corollary}
In the framework of Sobolev spaces, Corollaries \ref{mq2} and \ref{mp1} are easy consequences of Proposition \ref{caracBesov}; see \cite[Appendix A, Theorem A.1]{lss} and \cite[Appendix A, Corollary A.1]{lss}. The arguments in \cite{lss} apply with no changes to Besov spaces. Details are left to the reader.
\subsection{Characterization via smooth wavelets}
\label{qa6}
Proposition \ref{at11} has a counterpart when $sp\ge 1$; this requires a smoother \enquote{mother wavelet} $\psi_M$ and \enquote{father wavelet} $\psi_F$. Given two real functions $\psi_F$ and $\psi_M$, define $\Psi_m^{j, G}$ as in \eqref{qa1}--\eqref{qa3}. Then, for every integer $k>0$, we may find $\psi_F\in C^k_c({\mathbb R})$ and $\psi_M\in C^k_c({\mathbb R})$ such that the following result holds \cite[Chapter 6]{meyer92}, \cite[Section 1.7.3]{triebel06}.
\begin{proposition}
\label{qb1}
Let $s>0$, $1\leq p<\infty$, and $1\le q\le \infty$ be such that $s<k$. Let $f\in {\mathcal S}^{\prime}({\mathbb R}^n)$. Then $f\in B^s_{p,q}({\mathbb R}^n)$ if and only if there exists a sequence $\left(\mu^{j, G}_{m}\right)_{j\geq 0,\ G\in G^j,\ m\in {\mathbb Z}^n}$ such that
\begin{equation}
\label{qb2}
\sum_{j=0}^{\infty}\ \sum_{G\in G^j} \left(\sum_{m\in {\mathbb Z}^n} \left\vert \mu^{j, G}_{m}\right\vert^p\right)^{q/p}<\infty
\end{equation}
(obvious modification when $q=\infty$) and
\begin{equation} \label{qb3}
f=\sum_{j=0}^{\infty}\ \sum_{G\in G^j} \sum_{m\in {\mathbb Z}^n} \mu^{j, G}_{m} 2^{-j\left(s-n/p\right)} 2^{-nj/2} \Psi^{j,G}_{m}.
\end{equation}
Here, the series in \eqref{qb3} converges unconditionally in $B^s_{p,q}({\mathbb R}^n)$ when $q<\infty$. Moreover,
\begin{equation}
\label{qb4}
\left\Vert f\right\Vert_{B^s_{p,q}({\mathbb R}^n)}\sim \left(\sum_{j=0}^{\infty}\ \sum_{G\in G^j} \left(\sum_{m\in {\mathbb Z}^n} \left\vert \mu^{j, G}_{m}\right\vert^p\right)^{q/p}\right)^{1/q}
\end{equation}
(obvious modification when $q=\infty$).
\end{proposition}
For further use, let us note that, if $f\in B^s_{p,q}({\mathbb R}^n)$ for some $s>0$, $1\le p<\infty$ and $1\le q\le\infty$, then we have
\begin{equation}
\label{qb40}
\mu^{j, G}_{m}=\mu^{j, G}_{m}(f)=2^{j(s-n/p+n/2)}\, \int_{{\mathbb R}^n} f(x)\, \Psi^{j,G}_{m}(x)\, dx.
\end{equation}
This immediately leads to the following consequence of Proposition \ref{qb1}, the proof of which is left to the reader.
\begin{corollary}
\label{qb400}
Let $s>0$, $1\le p<\infty$ and $1\le q\le\infty$ be such that $s<k$. Assume that $f\in L^p({\mathbb R}^n)$ is such that the coefficients $\mu^{j, G}_{m}$ given by \eqref{qb40} satisfy
\begin{equation}
\label{qb50}
\sum_{j=0}^{\infty}\ \sum_{G\in G^j} \left(\sum_{m\in {\mathbb Z}^n} \left\vert \mu^{j, G}_{m}\right\vert^p\right)^{q/p}=\infty
\end{equation}
(obvious modification when $q=\infty$). Then $f\not\in B^s_{p,q}({\mathbb R}^n)$.
\end{corollary}
\subsection{Nikolski\u\i{} type decompositions}
\label{mm8}
${}$
In practice, we often do not know the Littlewood-Paley decomposition of some given $f$, but only a Nikolski\u\i{} representation (or decomposition) of $f$. More specifically, set $\mathcal{C}_j:=B(0,2^{j+2})$, with $j\in{\mathbb N}$. Let $f^j\in\mathcal{S}'$ satisfy
\begin{equation}\label{e230501}
\operatorname{supp}{\mathcal F}f^j\subset\mathcal{C}_j,\ \forall\, j\in{\mathbb N},\ \text{and } f=\sum_jf^j\text{ in }{\mathcal S}';
\end{equation}
the decomposition $f=\sum_j f^j$ is a Nikolski\u\i{} decomposition of $f$. Note that the Littlewood-Paley decomposition is a special Nikolski\u\i{} decomposition.
We have the following result.
\begin{proposition}
\label{mm9}
Let $s>0$, $1\le p<\infty$, $1\le q\le\infty$. Assume that \eqref{e230501} holds. Then we have
\begin{equation}
\label{28024}
\Big\|\sum_{j}f^j\Big\|_{B^{s}_{p,q}}\lesssim\left(\sum_{j}2^{ sqj }\|f^j\|^q_{L^p}\right)^{1/q},
\end{equation}
with the usual modification when $q=\infty$.
\end{proposition}
\noindent
The above was proved in \cite[Lemma 1]{gnp} (see also \cite{yamazaki}) in the framework of Triebel-Lizorkin spaces $F^s_{p,q}$; the proof applies with no change to Besov spaces and will be omitted here. For related results in the framework of Besov spaces, see \cite[Section 2.5.2, pp. 79-80]{triebel2} and \cite[Section 2.3.2, Theorem, p. 105]{schmeisser}.
\section{Positive cases}
\label{pos}
${}$
We start with the trivial case.
\begin{case}
\label{tri}
{\it Range.} $s>0$, $1\le p<\infty$, $1\le q\le\infty$, and $sp>n$.
\smallskip
\noindent
{\it Conclusion.}
$B^s_{p,q}(\Omega ; {\mathbb S}^1)$ does have the lifting property.
\end{case}
\begin{proof}
Since $B^s_{p,q}(\Omega)\hookrightarrow C^0(\overline\Omega)$ (Lemma \ref{ka1}), we may write $u=e^{\imath\varphi}$, with $\varphi$ continuous. Locally, we have $\varphi=-\imath \ln u$, for some smooth determination $\ln$ of the complex logarithm. Then $\varphi$ belongs to $B^s_{p,q}$ locally in $\overline\Omega$ (Lemma \ref{ka2}), and thus globally (Lemma \ref{ka3}).
\end{proof}
\begin{case}
\label{A}
{\it Range.} $0<s<1$, $1\le p<\infty$, $1\le q\le\infty$, and $sp<1$.
\smallskip
\noindent
{\it Conclusion.}
$B^s_{p,q}(\Omega ; {\mathbb S}^1)$ does have the lifting property.
\end{case}
\begin{proof} The argument being essentially the one in \cite[Section 1]{lss}, we will be sketchy. Assume for simplicity that $\Omega=(0,1)^n$.
Let $u \in B^s_{p,q}(\Omega ; {\mathbb S}^1)$. For all $j\in{\mathbb N}$, consider the function $U_j$ defined by
\begin{equation*}
U_j(x):=
\begin{cases}
\displaystyle E_j(u)(x)/|E_j(u)(x)|,&\mbox{if }E_j(u)(x) \neq 0\\
1,&\mbox{if }E_j(u)(x) = 0
\end{cases}.
\end{equation*}
Since $E_j(u)\to u$ a.e., we find that $U_j \rightarrow u$ a.e. on $\Omega$. By induction on $j$, we construct, for each $j\in{\mathbb N}$, a phase $\varphi_j$ of $U_j$, constant on each dyadic cube of side $2^{-j}$ and satisfying the inequality
\begin{equation}\label{mq1}
|\varphi_j -\varphi_{j-1}| \leq \pi |U_j-U_{j-1}| \quad\mbox{on }\Omega,\, \forall\, j\ge 1.\footnotemark
\end{equation}
\footnotetext{ Thus $\varphi_j$ is the phase of $U_j$ closest to $\varphi_{j-1}$.}
As in \cite{lss}, \eqref{mq1} implies
\begin{equation*}
|\varphi_j -\varphi_{j-1}| \lesssim |u -E_j(u)|+ |u -E_{j-1}(u)|,
\end{equation*}
and thus, e.g. when $q<\infty$, we have
\begin{equation*}
\sum_{j \ge 1} 2^{sjq}\|\varphi_j-\varphi_{j-1}\|_{L^p}^q \lesssim \sum_{j \ge 0} 2^{sjq}\|u-E_{j}(u)\|_{L^p}^q.
\end{equation*}
Applying Corollaries \ref{mq2} and \ref{mp1},
we obtain that $\varphi_j \rightarrow \varphi$ in $L^p$ for some $\varphi \in B^s_{p,q}(\Omega ; {\mathbb R})$. Since $\varphi_j$ is a phase of $U_j$ and $U_j\to u$ a.e., we find that $\varphi$ is a phase of $u$. In addition, we have the control
$\|\varphi\|_{B^s_{p,q}} \lesssim \|u\|_{B^s_{p,q}}$.
\end{proof}
\begin{case}
\label{X}
{\it Range.} $0<s<1$, $1\le p<\infty$, $1\le q<\infty$, and $sp=n$.
\smallskip
\noindent
{\it Conclusion.}
$B^s_{p,q}(\Omega ; {\mathbb S}^1)$ does have the lifting property.
\end{case}
\begin{proof} Here, it will be convenient to work with $\Omega={\mathbb T}^n$.
Let $|\ |$ denote the sup norm in ${\mathbb R}^n$. Let $\rho\in C^\infty$ be a mollifier supported in $\{ |x|\le 1\}$ and set $F(x,\varepsilon):=u\ast\rho_\varepsilon(x)$, $x\in{\mathbb T}^n$, $\varepsilon>0$. Since $sp=n$, we have $u\in \text{\rm VMO}({\mathbb T}^n)$, by Lemma \ref{B-VMO}. Let us recall that, if $u\in\text{\rm VMO}({\mathbb T}^n ; {\mathbb S}^1)$ then, for some $\delta>0$ (depending on $u$) we have \cite[Remark 3, p. 207]{brezisnirenberg1}
\begin{equation} \label{boundsv}
\frac 12<\left\vert F(x,\varepsilon)\right\vert\le 1\mbox{ for all } x\in {\mathbb T}^n\mbox{ and all }\varepsilon\in (0,\delta).\footnotemark
\end{equation}
\footnotetext{ For an explicit calculation leading to \eqref{boundsv}, see e.g. \cite[p. 415]{surveypetru}.}
Define
\begin{equation*}
w(x,\varepsilon):=\frac{F(x,\varepsilon)}{\left\vert F(x,\varepsilon)\right\vert}\mbox{ for all }x\in {\mathbb T}^n\mbox{ and all }\varepsilon\in (0,\delta).
\end{equation*}
Pick a function $\psi\in C^{\infty}({\mathbb T}^n\times (0,\delta) ; {\mathbb R})$ such that $w=e^{\imath \psi}$.
We note that $\nabla\psi=-\imath \overline{w}\nabla w$ and that, for all $j\in\llbracket 1, n\rrbracket$, $\partial_j |F|= |F|^{-1}(F\partial_j\overline{F}+\overline{F}\partial_jF)/2$. Therefore,
\eqref{boundsv} yields
\begin{equation} \label{nablaw}
\left\vert \nabla\psi\right\vert=\left\vert \nabla w\right\vert\lesssim \left\vert \nabla F\right\vert.
\end{equation}
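We sketch the elementary computation behind \eqref{nablaw}. Writing $w=F/\left\vert F\right\vert$ and using \eqref{boundsv}, we have
\begin{equation*}
\nabla w=\frac{\nabla F}{\left\vert F\right\vert}-\frac{F\, \nabla \left\vert F\right\vert}{\left\vert F\right\vert^2},\quad \left\vert \nabla \left\vert F\right\vert\right\vert\le \left\vert \nabla F\right\vert,\quad\text{whence }\left\vert \nabla w\right\vert\le \frac{2\left\vert \nabla F\right\vert}{\left\vert F\right\vert}\le 4\left\vert \nabla F\right\vert,
\end{equation*}
while $\left\vert \nabla \psi\right\vert=\left\vert \overline w\, \nabla w\right\vert=\left\vert \nabla w\right\vert$, since $\left\vert w\right\vert=1$.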
In view of \eqref{nablaw} and estimate \eqref{cg1} in Lemma \ref{ab1}, we find that
\begin{equation}
\label{ka5}
|u|_{B^{s}_{p,q}({\mathbb T}^n)}^q \gtrsim \displaystyle \int_0^\delta\varepsilon^{q-sq}\|(\nabla F)(\cdot,\varepsilon)\|_{L^p}^q\, \frac{d\varepsilon}{\varepsilon} \gtrsim \int_0^\delta\varepsilon^{q-sq}\|(\nabla \psi)(\cdot,\varepsilon)\|_{L^p}^q\, \frac{d\varepsilon}{\varepsilon}.
\end{equation}
Combining \eqref{ka5} with the conclusion of Lemma \ref{ab1}, we obtain that the phase $\psi$ has, on ${\mathbb T}^n$, a trace $\varphi\in B^s_{p,q}$, in the sense that the limit $\varphi:=\lim_{\varepsilon\to 0}\psi(\cdot,\varepsilon)$ exists in $B^s_{p,q}$. In particular (using Lemma \ref{kc2}), we have that $\psi(\cdot, \varepsilon_j)\to\varphi$ a.e. along some sequence $\varepsilon_j\to 0$; this leads to $w(\cdot, \varepsilon_j)=e^{\imath\psi(\cdot,\varepsilon_j)}\to e^{\imath\varphi}$ a.e. Since, on the other hand, we have $\lim_{\varepsilon\to 0}w(\cdot,\varepsilon)=u$ a.e., we find that $\varphi$ is a $B^s_{p,q}$ phase of $u$.
\end{proof}
The next case is somewhat similar to Case \ref{X}, so that our argument is less detailed.
\begin{case}
\label{kc3}
{\it Range.} $s=1$, $p=n$, $1\le q<\infty$.
\smallskip
\noindent
{\it Conclusion.}
$B^1_{n,q}(\Omega ; {\mathbb S}^1)$ does have the lifting property.
\end{case}
\begin{proof}
We consider $\delta$, $w$ and $\psi$ as in Case \ref{X}. The analog of \eqref{nablaw} is the estimate
\begin{equation}
\label{kc4}
|\partial_j\partial_k\psi|+|\nabla\psi|^2\lesssim |\partial_j\partial_kF|+|\nabla F|^2,
\end{equation}
which is a straightforward consequence of the identities
\begin{equation*}
\nabla\psi=-\imath \overline w\nabla w\text{ and }\partial_j\partial_k\psi=-\imath \overline w\partial_j\partial_kw+\imath \overline w^2\partial_j w\partial_kw.
\end{equation*}
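The second of these identities is obtained by differentiating the first one and using that $\overline w=1/w$, so that
\begin{equation*}
\partial_j\overline w=-\frac{\partial_jw}{w^2}=-\overline w^2\, \partial_jw,\quad\text{and thus }\partial_j\partial_k\psi=-\imath\, \partial_j\overline w\, \partial_kw-\imath\, \overline w\, \partial_j\partial_kw.
\end{equation*}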
Combining \eqref{kc4} with the second part of Lemma \ref{kb2}, we obtain
\begin{equation}
\label{kg1a}
|u|_{B^1_{n,q}}^q\gtrsim \int_0^\delta\varepsilon^q\left(\sum_{j, k=1}^n\left\|\partial_j\partial_k\psi(\cdot,\varepsilon)\right\|_{L^n}^q+\left\|\partial_\varepsilon\partial_\varepsilon\psi(\cdot,\varepsilon)\right\|_{L^n}^q+\|\nabla\psi(\cdot,\varepsilon)\|_{L^{2n}}^{2q}\right)\, \frac{d\varepsilon}{\varepsilon}.
\end{equation}
By \eqref{kg1a} and the first part of Lemma \ref{kb2}, we find that $\psi$ has a trace $\varphi:=\operatorname{tr} \psi \in B^1_{n,q}({\mathbb T}^n)$. Clearly, $\varphi$ is a $B^1_{n,q}$ phase of $u$.
\end{proof}
\begin{case}
\label{Y}
{\it Range.} $s>1$, $1\le p<\infty$, $1\le q<\infty$, $n=2$, and $sp=2$.
Or: $s>1$, $1\le p<\infty$, $1\le q\le p$, $n\ge 3$, and $sp= 2$.
Or: $s>1$, $1\le p<\infty$, $1\le q\le \infty$, $n\ge 2$, and $sp> 2$.
\smallskip
\noindent
{\it Conclusion.}
$B^s_{p,q}(\Omega ; {\mathbb S}^1)$ does have the lifting property.
\end{case}
Note that, in the critical case where $sp=2$, our result is weaker in dimension $n\ge 3$ (when we ask $1\le q\le p$) than in dimension $2$ (when we merely ask $1\le q<\infty$).
\begin{proof}
The general strategy is the same as in \cite[Section 3, Proof of Theorem 3]{lss},\footnote{ See also \cite{carbou}.} but the key argument (validity of \eqref{at1} below) is much more involved in our case.
It will be convenient to work in $\Omega={\mathbb T}^n$. Let $u\in B^s_{p, q}({\mathbb T}^n ; {\mathbb S}^1)$. Assume first that we may write $u=e^{\imath\varphi}$, with $\varphi\in B^s_{p, q}({\mathbb T}^n ; {\mathbb R})$. Then $u, \varphi\in W^{1,p}$ (Lemma \ref{kc2}). We are thus in a position to apply the chain rule and infer that $\nabla u=\imath u\nabla \varphi$, and therefore
\begin{equation}
\label{at2}
\nabla\varphi=\frac 1{\imath u}\nabla u=F,\ \text{with }F:=u\wedge\nabla u\in L^p({\mathbb T}^n ; {\mathbb R}^n).
\end{equation}
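The last equality in \eqref{at2}, and in particular the fact that $F$ is ${\mathbb R}^n$-valued, follows from $\left\vert u\right\vert=1$: differentiating $u\, \overline u=1$ gives $\operatorname{Re}(\overline u\, \nabla u)=0$, and therefore
\begin{equation*}
\frac 1{\imath u}\nabla u=-\imath\, \overline u\, \nabla u=\operatorname{Im}(\overline u\, \nabla u)=u_1\nabla u_2-u_2\nabla u_1=u\wedge\nabla u.
\end{equation*}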
The assumptions on $s$, $p$, $q$ imply that $F\in B^{s-1}_{p, q}$ (Lemma \ref{at3}). We may now argue as follows. If $\varphi$ solves \eqref{at2}, then $\nabla\varphi\in B^{s-1}_{p, q}$, and thus $\varphi\in B^s_{p, q}$ (Lemma \ref{at4}). Next, since $u, \varphi\in W^{1,p}\cap L^{\infty}$, we find that
\begin{equation*}
\nabla(u\, e^{-\imath\varphi})=\nabla u\, e^{-\imath\varphi}-\imath u\, e^{-\imath\varphi}\nabla\varphi=\imath u\, e^{-\imath\varphi}(u\wedge\nabla u-\nabla\varphi)=0.
\end{equation*}
Thus $u\, e^{-\imath\varphi}$ is constant, and therefore $\varphi$ is, up to an appropriate additive constant, a $B^s_{p, q}$ phase of $u$.
There is a flaw in the above. Indeed, \eqref{at2} need not have a solution. In ${\mathbb T}^n$, the necessary and sufficient conditions for the solvability of \eqref{at2} are\footnote{ This is easily seen by an inspection of the Fourier coefficients.}
\begin{equation}
\label{at5}
\int_{{\mathbb T}^n}F=\widehat F(0)=0
\end{equation}
and
\begin{equation}
\label{at1}
\operatorname{curl} F=0.
\end{equation}
Clearly, \eqref{at5} holds.\footnote{ Expand $u\wedge\nabla u$ in Fourier series.} We complete Case \ref{Y} by noting that \eqref{at1} holds in the relevant range of $s$, $p$, $q$ and $n$ (Lemma \ref{at6}).
\end{proof}
\begin{remark}
\label{aa1} We briefly discuss the lifting problem when $s\le 0$. For such $s$, distributions in $B^s_{p,q}$ need not be integrable functions, and thus the meaning of the equality $u=e^{\imath\varphi}$ is unclear. We therefore address the following reasonable version of the lifting problem: let $u:\Omega\to {\mathbb S}^1$ be a measurable function such that $u\in B^s_{p, q}(\Omega)$. Is there any $\varphi\in L^1_{loc}\cap B^s_{p,q}(\Omega ; {\mathbb R})$ such that $u=e^{\imath\varphi}$?
Let us note that the answer is trivially positive when $s<0$, $1\le p<\infty$, $1\le q\le\infty$.
Indeed, let $\varphi$ be any bounded measurable lifting of $u$. Then $\varphi\in B^s_{p, q}$, since $L^\infty\hookrightarrow B^s_{p,q}$ when $s<0$ (see Lemma \ref{ia2}).
\end{remark}
\section{Negative cases}
\label{neg}
\begin{case}
\label{B}
{\it Range.} $0<s<1$, $1\le p< \infty$, $1\le q<\infty$, $n\ge 2$, and $1\leq sp <n$.
Or $0<s<1$, $1\le p< \infty$, $q=\infty$, $n\ge 2$, and $1< sp <n$.
\smallskip
\noindent
{\it Conclusion.} $B^s_{p,q}(\Omega ; {\mathbb S}^1)$ does not have the lifting property.
\end{case}
\begin{proof}
We want to show that there exists a function $u\in B^{s}_{p,q}$ such that $u\neq e^{{\imath} \varphi}$ for any $\varphi \in B^{s}_{p,q}$.
For sufficiently small $\varepsilon>0$, set $
s_1:=s/(1-\varepsilon)$ and $p_1:=(1-\varepsilon)p$. By Lemma \ref{Besovemb}, we have $B^{s_1}_{p_1,q_1}\not\hookrightarrow B^s_{p,q}$ (for any $q_1$). We will use this fact later with $q_1:=(1-\varepsilon)q$.
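Let us note that the non-embedding $B^{s_1}_{p_1,q_1}\not\hookrightarrow B^s_{p,q}$ follows from item 4 of Lemma \ref{Besovemb} and the computation
\begin{equation*}
s_1-\frac n{p_1}=\frac 1{1-\varepsilon}\left(s-\frac np\right)<s-\frac np,
\end{equation*}
the final inequality holding since $sp<n$, i.e., $s-n/p<0$.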
Let $\psi\in B^{s_1}_{p_1,q_1}\setminus B^s_{p,q}$ and set $u:=e^{{\imath} \psi}$. Then $u\in B^{s_1}_{p_1,q_1}\cap L^{\infty}$ (Lemma \ref{eipsi}) and thus $u\in B^{s}_{p,q}$
(Lemma \ref{gn}).
We claim that there is no $\varphi\in B^s_{p,q}$ such that $u=e^{{\imath} \varphi}$. Argue by contradiction. Since $u=e^{\imath\varphi}=e^{\imath\psi}$, the function $(\varphi-\psi)/2\pi$ belongs to $(B^s_{p,q}+B^{s_1}_{p_1,q_1})(\Omega ; {\mathbb Z})$. By Lemma \ref{Eunicite}, this implies that $\varphi-\psi$ is constant, and thus $\psi\in B^{s}_{p,q}$, which is a contradiction.
\end{proof}
\begin{case}
\label{xa2}
{\it Range.}
$0<s<\infty$, $1\le p<\infty$, $1\le q< \infty$, $n\ge 2$, and $1\le sp<2$.
Or $0<s<\infty$, $1\le p<\infty$, $ q= \infty$, $n\ge 2$, and $1<sp\le 2$.
\smallskip
\noindent
{\it Conclusion.}
$B^s_{p,q}(\Omega ; {\mathbb S}^1)$ does not have the lifting property.
\end{case}
\begin{proof}
The proof is based on a topological obstruction; we treat first the case $n=2$. Consider the map $\displaystyle
u(x)=\frac{x}{|x|}$, $\forall\, x\in{\mathbb R}^2$.
We first prove that $u \in B^s_{p,q}(\Omega)$ for any smooth bounded domain $\Omega\subset{\mathbb R}^2$. We distinguish two cases: first, $q \le \infty$ and $sp <2$; second, $q=\infty$ and $sp=2$.
In the first case, let $s_1>s$ be such that $s_1$ is not an integer and $1<s_1p<2$, which implies $W^{s_1,p}=B^{s_1}_{p,p} \hookrightarrow B^{s}_{p,q}$. Since
$u \in W^{s_1,p}$ \cite[Section 4]{lss}, we find that $u\in B^{s}_{p,q}$.
The second case is slightly more involved. By the Gagliardo-Nirenberg inequality (Lemma \ref{gn} below), it suffices to prove that $u \in B^2_{1,\infty}(\Omega)$. Using Proposition \ref{p2.4}, a sufficient condition for this to hold is
\begin{equation}
\label{qf1}
\left\Vert \Delta^3_{h}u\right\Vert_{L^1({\mathbb R}^2)}\lesssim \left\vert h\right\vert^2,\ \forall\, h\in {\mathbb R}^2.
\end{equation}
Since $u$ is $0$-homogeneous and equivariant under rotations, this amounts to checking that
\begin{equation}\label{delta3l1}
\|\Delta_{e_1}^3u\|_{L^1(\mathbb R^2)}<\infty.
\end{equation}
Indeed, by the mean-value theorem, for all $\left\vert x\right\vert\ge 1$ we have
\begin{equation}\label{delta3infty}
|\Delta^3_{e_1} u(x)|\lesssim 1/|x|^3,
\end{equation}
while $\Delta^3_{e_1}u$ is bounded in $B(0,1)$ since $u$ is ${\mathbb S}^1$-valued. Using this fact and estimate \eqref{delta3infty}, we obtain \eqref{delta3l1}.
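Let us detail the proof of \eqref{delta3infty}. Since $u$ is smooth away from the origin and $0$-homogeneous, $\partial_1^3u$ is $(-3)$-homogeneous, and thus $\left\vert \partial_1^3u(y)\right\vert\lesssim \left\vert y\right\vert^{-3}$ for $y\ne 0$. When $\left\vert x\right\vert\ge 4$, every $y\in [x,x+3e_1]$ satisfies $\left\vert y\right\vert\ge \left\vert x\right\vert-3\ge \left\vert x\right\vert/4$, and thus the mean-value theorem yields
\begin{equation*}
\left\vert \Delta^3_{e_1}u(x)\right\vert\le \sup_{y\in [x,x+3e_1]}\left\vert \partial_1^3u(y)\right\vert\lesssim \frac 1{\left\vert x\right\vert^3};
\end{equation*}
when $1\le \left\vert x\right\vert<4$, we simply use the bound $\left\vert \Delta^3_{e_1}u\right\vert\le 8$. Since $x\mapsto \left\vert x\right\vert^{-3}$ is integrable at infinity in ${\mathbb R}^2$, \eqref{delta3l1} follows.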
We next claim that $u$ has no $B^s_{p,q}$ lifting in $\Omega$ provided $\Omega\subset{\mathbb R}^2$ is a smooth bounded domain containing the origin. Argue by contradiction, and
assume that $u=e^{\imath\varphi}$ for some $\varphi\in B^s_{p,q}(\Omega)$. Let, as in \cite[p. 50]{lss}, $
\theta\in C^{\infty}({\mathbb R}^2\setminus ([0,\infty)\times \left\{0\right\}))$ be such that $e^{\imath\theta}=u$.
Note that $\theta\in B^s_{p,q}(\omega)$ for every smooth bounded open set $\omega$ such that $\overline\omega\subset{\mathbb R}^2\setminus ([0,\infty)\times \left\{0\right\})$. Since $(\varphi-\theta)/(2\pi)$ is ${\mathbb Z}$-valued, Lemma \ref{Eunicite} yields that $\varphi-\theta$ is constant a.e. in $\Omega\setminus ([0,\infty)\times \left\{0\right\})$. Thus, $\theta\in B^s_{p,q}(\Omega)$. Similarly, $\widetilde\theta\in B^s_{p,q}(\Omega)$, where $\widetilde\theta
\in C^{\infty}({\mathbb R}^2\setminus ((-\infty,0]\times \left\{0\right\}))$ is such that $e^{\imath\widetilde\theta}=u$. We find that $(\theta-\widetilde\theta)/(2\pi)\in B^s_{p,q}(\Omega)$. However, this is a non-constant integer-valued function. This contradicts Lemma \ref{Eunicite} and proves the non-existence of a lifting in $B^s_{p, q}$.
When $n\ge 3$, the above arguments lead to the following. Let $u(x)=\displaystyle\frac{(x_1, x_2)}{|(x_1, x_2)|}$, and let $\Omega\subset {\mathbb R}^n$ be a smooth bounded domain. Then $u\in B^s_{p, q}(\Omega ; {\mathbb S}^1)$ and, if $0\in\Omega$, then $u$ has no $B^s_{p, q}$ lifting.
\end{proof}
\section{Open cases}
\label{ope}
\begin{case}
\label{xa1}
{\it Range.} $s>1$, $1\le p<\infty$, $p<q<\infty$, $n\ge 3$, and $sp=2$.
\smallskip
\noindent
{\it Discussion.}
This case is complementary to Case \ref{Y}. In the above range, we conjecture that the conclusion of Case \ref{Y} still holds, i.e., that the space $B^s_{p,q}(\Omega ; {\mathbb S}^1)$ does have the lifting property. The non-restriction property (Proposition \ref{l7.26}) prevents us from extending the argument used in Case \ref{Y} to Case \ref{xa1}.
\end{case}
\begin{case}
\label{Z}
{\it Range.} $s=1$, $1\le p<\infty$, $1\le q<\infty$, $n\ge 3$, and $2\le p<n$.
Or: $s=1$, $1\le p<\infty$, $ q=\infty$, $n\ge 3$, and $2< p\le n$.
\smallskip
\noindent
{\it Discussion.}
When $p=q=2$, $B^1_{2,2}(\Omega ; {\mathbb S}^1)=H^1(\Omega ; {\mathbb S}^1)$ does have the lifting property \cite[Lemma 1]{bethuelzheng}. The remaining cases are open. The major difficulty arises from the extension of Lemma \ref{at3} to the range considered in Case \ref{Z}.
\end{case}
\begin{case}
\label{T}
{\it Range.} $s=0$, $1\le p<\infty$, $1\le q<\infty$ (and arbitrary $n$).
\smallskip
\noindent
{\it Discussion.} As explained in Remark \ref{aa1}, we consider only measurable functions $u:\Omega\to{\mathbb S}^1$. We let $B^0_{p, q}(\Omega ; {\mathbb S}^1):=\{ u:\Omega\to{\mathbb S}^1;\, u \text{ measurable and }u\in B^0_{p,q}\}$, and for $u$ in this space we are looking for a phase $\varphi\in L^1_{loc}\cap B^0_{p,q}$.
Note that $B^0_{p,\infty}(\Omega ; {\mathbb S}^1)$ does have the lifting property. Indeed, in this case we have $L^\infty\subset B^0_{p,\infty}$ (Lemma \ref{ia2}) and then it suffices to argue as in Remark \ref{aa1}. More generally, $B^0_{p,q}(\Omega ; {\mathbb S}^1)$ has the lifting property when $L^\infty\hookrightarrow B^0_{p,q}$.\footnote{ A special case of this is $p=q=2$, since $B^0_{2,2}=L^2$. Another special case is $1<p\le 2\le q$. Indeed, in that case we have $L^\infty\hookrightarrow L^p=F^0_{p,2}\hookrightarrow B^0_{p,q}$ \cite[Section 2.3.5, p. 51]{triebel2}, \cite[Section 2.3.2, Proposition 2, p. 47]{triebel2}.} The remaining cases are open.
\end{case}
\begin{case}
\label{xa3}
{\it Range.}
$0<s\le 1$, $p=1/s$, $q=\infty$ (and arbitrary $n$).
\smallskip
\noindent
{\it Discussion.}
We do not know whether $B^s_{p,q}(\Omega ; {\mathbb S}^1)$ does have the lifting property.
\end{case}
\begin{case}
\label{xa4}
{\it Range.}
$0<s\le 1$, $1< p<\infty$, $q=\infty$, $n\ge 3$, and $sp=n$.
\smallskip
\noindent
{\it Discussion.}
We do not know whether $B^s_{p,q}(\Omega ; {\mathbb S}^1)$ does have the lifting property. The difficulty common to Cases \ref{xa3} and \ref{xa4} is that in these ranges $B^s_{p,\infty}\not\subset\text{\rm VMO}$, and thus we are unable to rely on the strategy used in Cases \ref{X} and \ref{kc3}.
\end{case}
\section{Analysis in Besov spaces}
${}$
The results we state here are valid when $\Omega$ is a smooth bounded domain in ${\mathbb R}^n$, or $(0,1)^n$ or ${\mathbb T}^n$. However, in the proofs we will consider only one of these sets, the most convenient for the proof.
\subsection{Embeddings}
\label{ape}
${}$
\begin{lemma}\label{Besovemb} Let $0<s_1<s_0<\infty$, $1\le p_0<\infty$, $1\le p_1< \infty$, $1\le q_0\leq \infty$ and $1\le q_1\leq \infty$. Then the following hold.
\begin{enumerate}
\item
If $q_0<q_1$, then $B^{s_0}_{p_0,q_0}\hookrightarrow B^{s_0}_{p_0,q_1}$.
\item If $s_0-n/p_0=s_1-n/p_1$, then $ B^{s_0}_{p_0,q_0} \hookrightarrow B^{s_1}_{p_1,q_0}$.
\item If $s_0-n/p_0>s_1-n/p_1$, then $ B^{s_0}_{p_0,q_0} \hookrightarrow B^{s_1}_{p_1,q_1}$.
\item If $B^{s_0}_{p_0,q_0} \hookrightarrow B^{s_1}_{p_1,q_1}$, then $s_0-n/p_0\geq s_1-n/p_1$.
\end{enumerate}
Consequently, when $q_0\leq q_1$,
\begin{equation} \label{equiv}
B^{s_0}_{p_0,q_0} \hookrightarrow B^{s_1}_{p_1,q_1} \Longleftrightarrow s_0-\frac{n}{p_0}\geq s_1-\frac{n}{p_1}.
\end{equation}
\end{lemma}
\begin{proof} For item 1, see \cite[Section 3.2.4]{triebel2}. For items 2 and 3, see \cite[Section 3.3.1]{triebel2} or \cite[Theorem 1, p. 82]{runstsickel}. Item 4 follows from a scaling argument. And \eqref{equiv} is an immediate consequence of items 1--4.
\end{proof}
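Let us expand the scaling argument for item 4. Fix $f\in C^\infty_c({\mathbb R}^n)$ with $f\not\equiv 0$. For $\sigma>0$ and $\lambda\ge 1$, the characterization by differences (Proposition \ref{p2.4}) yields
\begin{equation*}
\left\Vert f(\lambda\, \cdot)\right\Vert_{B^{\sigma}_{\pi,\kappa}({\mathbb R}^n)}\sim \lambda^{\sigma-n/\pi}\left\Vert f\right\Vert_{B^{\sigma}_{\pi,\kappa}({\mathbb R}^n)},
\end{equation*}
with constants independent of $\lambda$. Testing the embedding $B^{s_0}_{p_0,q_0}\hookrightarrow B^{s_1}_{p_1,q_1}$ on $f(\lambda\, \cdot)$ and letting $\lambda\to\infty$ forces $\lambda^{s_1-n/p_1}\lesssim \lambda^{s_0-n/p_0}$, whence $s_0-n/p_0\ge s_1-n/p_1$.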
For the next result, see e.g. \cite[Section 2.7.1, Remark 2, pp. 130-131]{triebel2}.
\begin{lemma}
\label{ka1}
Let $s>0$, $1\le p<\infty$, $1\le q\le\infty$ be such that $sp>n$. Then $B^s_{p,q}(\Omega)\hookrightarrow C^0(\overline\Omega)$.
\end{lemma}
\begin{lemma}
\label{ia2}
Let $s< 0$, $1\le p<\infty$ and $1\le q\le \infty$. Then $L^\infty\hookrightarrow B^s_{p,q}$.
Similarly, if $1\le p\le\infty$, then $L^\infty\hookrightarrow B^0_{p,\infty}$.
\end{lemma}
\begin{proof}
We present the argument when $\Omega={\mathbb T}^n$.
Let $f\in L^\infty$, with Fourier coefficients $(a_m)_{m\in{\mathbb Z}^n}$. Consider, as in Definition \ref{periodicbesov}, the functions
\begin{equation*}
f_j(x):=\sum_{m\in{\mathbb Z}^n}a_m\varphi_j(2\pi m)\, e^{2\imath \pi m\cdot x},\ \forall\, j\in{\mathbb N}.
\end{equation*}
By the (periodic version of) the multiplier theorem \cite[Section 9.2.2, Theorem, p. 267]{triebel2} we have
\begin{equation}
\label{kb1}
\|f_j\|_{L^p}\lesssim \|f\|_{L^p},\ \forall\, 1\le p\le \infty,\ \forall\, j\in{\mathbb N}.
\end{equation}
We find that $\|f_j\|_{L^p}\lesssim \|f\|_{L^p}\le \|f\|_{L^\infty}$, and thus (by Definition \ref{periodicbesov}, and with the usual modification when $q=\infty$)
\begin{equation*}
\|f\|_{B^s_{p,q}}\lesssim \left(\sum_j 2^{sjq}\right)^{1/q}<\infty.
\end{equation*}
The second part of the lemma follows from a similar argument. The proof is left to the reader.
\end{proof}
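We note in passing that the computation for the second part is short (with the notation of Definition \ref{periodicbesov}): if $f\in L^\infty$, then, by \eqref{kb1}, $\|f_j\|_{L^p}\lesssim\|f\|_{L^\infty}$ for every $j$, and thus
\begin{equation*}
\|f\|_{B^0_{p,\infty}}=\sup_{j\in{\mathbb N}}\|f_j\|_{L^p}\lesssim \|f\|_{L^\infty}<\infty.
\end{equation*}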
An analogous proof leads to the following result. Details are left to the reader.
\begin{lemma}
\label{kc2}
Let $s>0$, $1\le p<\infty$ and $1\le q\le\infty$. Then $B^s_{p,q}\hookrightarrow L^p$.
More generally, if $k\in{\mathbb N}$, $s>k$, $1\le p<\infty$, and $1\le q\le \infty$, then $B^s_{p,q}\hookrightarrow W^{k, p}$.
\end{lemma}
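For the convenience of the reader, here is the computation behind the first embedding when $\Omega={\mathbb T}^n$ and $1<q<\infty$ (the cases $q=1$ and $q=\infty$ require the usual modifications). With the notation of Definition \ref{periodicbesov}, we have $f=\sum_j f_j$, and thus, by H\"older's inequality with exponents $q'$ and $q$,
\begin{equation*}
\|f\|_{L^p}\le \sum_{j}\|f_j\|_{L^p}=\sum_j 2^{-js}\, 2^{js}\|f_j\|_{L^p}\le \left(\sum_j 2^{-jsq'}\right)^{1/q'}\|f\|_{B^s_{p,q}}<\infty,
\end{equation*}
the first factor on the right-hand side being finite since $s>0$.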
\begin{lemma}\label{B-VMO} Let $0<s<\infty$, $1\le p<\infty$ and $1\le q<\infty$ be such that $sp=n$. Then
$\displaystyle B^{s}_{p,q} \hookrightarrow \text{\rm VMO}$.\\
The same conclusion holds if $0<s<\infty$, $1\le p<\infty$, $q=\infty$, and $sp>n$.
\end{lemma}
\begin{proof}
Assume first that $q<\infty$. Let $p_1>\max\left\{n,p,q\right\}$ and set $s_1:=n/{p_1}$. By Lemma \ref{Besovemb} and the fact that $s_1$ is not an integer, we have
\begin{equation*}
B^s_{p,q}\hookrightarrow B^{s_1}_{p_1,q}\hookrightarrow B^{s_1}_{p_1,p_1}=W^{s_1,p_1}.
\end{equation*}
It then suffices to invoke the embedding
\begin{equation*}
W^{s_1,p_1}\hookrightarrow \text{\rm VMO}\mbox{ when }s_1p_1=n
\ \text{\cite[Example 2, p. 210]{brezisnirenberg1}}.\end{equation*}
The case where $q=\infty$ is obtained via the first part of the proof. Indeed, it suffices to choose $0<s_1<\infty$, $1\le p_1<\infty$ and $1\le q_1<\infty$ such that $s_1p_1=n$ and $B^s_{p,q}\hookrightarrow B^{s_1}_{p_1,q_1}$. Such $s_1$, $p_1$ and $q_1$ do exist, by Lemma \ref{Besovemb}.
\end{proof}
For the following special case of the Gagliardo-Nirenberg embeddings, see e.g. \cite[Remark 1, pp. 39--40]{runstsickel}.
\begin{lemma} \label{gn}
Let $0<s<\infty$, $1\le p<\infty$, $1\le q\leq \infty$, and $0<\theta<1$. Then $B^s_{p,q}\cap L^{\infty}\hookrightarrow B^{\theta s}_{p/\theta,q/\theta}$.
\end{lemma}
\subsection{Restrictions}
\label{mo6}
${}$
{\it Captatio benevolenti\ae}. Let $f\in L^1({\mathbb R}^2)$. Then, for a.e. $y\in{\mathbb R}$, the restriction $f(\cdot, y)$ of $f$ to the line ${\mathbb R}\times\{y\}$ belongs to $L^1$. In this section and the next one, we examine some analogues of this property in the framework of Besov spaces.
For this purpose, we first introduce some notation for partial functions.
Let $\alpha\subset \{1,\ldots, n\}$ and set $\overline\alpha:=\{1,\ldots, n\}\setminus \alpha$. If $x=(x_1,\ldots, x_n)\in{\mathbb R}^n$, then we identify $x$ with the couple $(x_\alpha, x_{\overline\alpha})$, where $x_\alpha:=(x_j)_{j\in\alpha}$ and $x_{\overline\alpha}:=(x_j)_{j\in\overline\alpha}$.
Given a function $f=f(x_1,\ldots, x_n)$, we let $f_\alpha=f_\alpha(x_\alpha)$ denote the partial function $x_{\overline\alpha}\mapsto f(x)$.
Another useful notation: given an integer $m$ such that $1\le m\le n$, set
\begin{equation*}
I(n-m,n):=\{\alpha \subset \{1,\ldots, n\};\, \#\alpha=n-m\}.
\end{equation*}
Thus, when $\alpha\in I(n-m,n)$, $f_\alpha(x_\alpha)$ is a function of $m$ variables.
When $q=p$, we have the following result.
\begin{lemma}
\label{oa1}
Let $1\le m<n$. Let $s>0$ and $1\le p<\infty$. Let $f\in B^s_{p,p}({\mathbb R}^n)$.
\begin{enumerate}
\item
Let $\alpha\in I(n-m,n)$. Then, for a.e. $x_\alpha\in{\mathbb R}^{n-m}$, we have $f_\alpha(x_\alpha)\in B^s_{p,p}({\mathbb R}^m)$.
\item
We have
\begin{equation*}
\|f\|_{B^s_{p,p}({\mathbb R}^n)}^p\sim \sum_{\alpha\in I(n-m,n)}\int_{{\mathbb R}^{n-m}}\|f_\alpha(x_\alpha)\|_{B^s_{p,p}({\mathbb R}^m)}^p\, dx_\alpha.
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{proof}
For the case where $m=1$, see
\cite[Section 2.5.13, Theorem, (i), p. 115]{triebel2}. The general case is obtained by a straightforward induction on $m$.
\end{proof}
\begin{lemma}
\label{mo7}
Let $s>0$, $1\le p<\infty$ and $1\le q\le p$. Let $1\le m< n$ be an integer. Assume that $sp\ge m$ and let $f\in B^s_{p, q}({\mathbb T}^n)$. Then, for every $\alpha\in I(n-m,n)$ and for a.e. $x_\alpha\in{\mathbb T}^{n-m}$, the partial map $f_\alpha(x_\alpha)$ belongs to $\text{\rm VMO}({\mathbb T}^m)$.
The same conclusion holds if $s>0$, $1\le p<\infty$, $1\le q\le \infty$, and $sp>m$.
Similar conclusions hold when $\Omega={\mathbb R}^n$ or $\Omega=(0,1)^n$.
\end{lemma}
\begin{proof}
In view of the Sobolev embeddings (Lemma \ref{Besovemb}), we may assume that $sp=m$ and $q=p$. By Lemma \ref{oa1} and Lemma \ref{B-VMO}, for a.e. $x_\alpha$ we have $f_\alpha(x_\alpha)\in B^s_{p,p}({\mathbb T}^m)\hookrightarrow \text{\rm VMO}({\mathbb T}^m)$.
\end{proof}
\begin{lemma}
\label{ad1}
Let $s>0$, $1\le p<\infty$ and $1\le q<\infty$. Let $M>s$ be an integer. Let $f\in B^s_{p,q}$. For $x'\in {\mathbb T}^{n-1}$, consider the partial map $v(x_n)=v_{x'}(x_n):=f(x',x_n)$, with $x_n\in{\mathbb T}$. Then there exists a sequence $(t_l)\subset (0,\infty)$ such that $t_l\to 0$ and for a.e. $x'\in{\mathbb T}^{n-1}$, we have
\begin{equation}
\label{ce1}
\lim_{l\to\infty}\frac{\left\|\Delta_{t_l}^Mv\right\|_{L^p({\mathbb T})}}{t_l^{s}}=0.
\end{equation}
More generally, given a finite number of functions $f_j\in B^{s_j}_{p_j,q_j}$, with $s_j>0$, $1\le p_j<\infty$ and $1\le q_j<\infty$, and given an integer $M>\max_j s_j$, we may choose a common set $A$ of full measure in ${\mathbb T}^{n-1}$ and a sequence $(t_l)$ such that the analog of \eqref{ce1}, i.e.,
\begin{equation}
\label{cf1}
\lim_{l\to\infty}\frac{\left\|\Delta_{t_l}^Mf_j(x',\cdot)\right\|_{L^{p_j}({\mathbb T})}}{ t_l^{s_j}}=0,
\end{equation}
holds simultaneously for all $j$ and all $x'\in A$.
\end{lemma}
\begin{proof}
We treat the case of a single function; the general case is similar.
Set $g_t:=\left\|\Delta^M_{t e_n}f\right\|_{L^p}$. By \eqref{equivnormhomogrn}, we have $\int_0^1t^{-sq-1}g_t^q\, dt<\infty$, which is equivalent to
$
\int_{1/2}^1\sum_{m\ge 0}2^{msq}g_{2^{-m}\sigma}^q\, d\sigma<\infty$ (split $(0,1)$ into the dyadic intervals $(2^{-m-1}, 2^{-m}]$ and substitute $t=2^{-m}\sigma$). Therefore, there exists some $\sigma\in (1/2,1)$ such that
\begin{equation}
\label{ce2}
\sum_{m\ge 0}2^{msq}g_{2^{-m}\sigma}^q <\infty.
\end{equation}
By \eqref{ce2}, we find that
\begin{equation}
\label{ce3}
\lim_{m\to \infty}\frac{g_{2^{-m}\sigma}}{(2^{-m}\sigma)^s}=0.
\end{equation}
Using \eqref{ce3} and the identity $g_t^p=\int_{{\mathbb T}^{n-1}}\left\|\Delta^M_{t}v_{x'}\right\|_{L^p({\mathbb T})}^p\, dx'$, we find that, along a subsequence $(m_l)$, we have
\begin{equation*}
\lim_{l\to \infty}\frac{\left\|\Delta^M_{2^{-m_l}\sigma}v\right\|_{L^p({\mathbb T})}}{(2^{-m_l}\sigma)^s}=0\quad\text{for a.e. }x'\in{\mathbb T}^{n-1}.
\end{equation*}
This implies \eqref{ce1} with $t_l:=2^{-m_l}\sigma$.
\end{proof}
\subsection{(Non) restrictions}
${}$
We now address the question whether, given $f\in B^s_{p, q}({\mathbb R}^2)$, we have $f(x, \cdot)\in B^s_{p, q}({\mathbb R})$ for a.e. $x\in{\mathbb R}$. Questions of this kind can also be asked in higher dimensions. The answer crucially depends on the sign of $q-p$.
We start with a simple result.
\begin{proposition}
\label{qh1}
Let $s>0$ and $1\le q\le p<\infty$. Let $f\in B^s_{p,q}({\mathbb R}^2)$. Then for a.e. $x\in{\mathbb R}$ we have $f(x, \cdot)\in B^s_{p,q}({\mathbb R})$.
\end{proposition}
\begin{proof}
Let $f\in B^s_{p,q}({\mathbb R}^2)$. Using \eqref{equivnormhomogrn} (part 2) and H\"older's inequality, we find that for every finite interval $[a,b]\subset{\mathbb R}$ and $M>s$ we have
\begin{equation*}
\begin{aligned}
\int_a^b |f(x,\cdot)|_{B^s_{p,q}({\mathbb R})}^q\, dx& \sim
\int_a^b\int_{\mathbb R} \frac 1{|h|^{sq+1}}\left(\int_{{\mathbb R}}|\Delta^M_{h e_2}f(x, y)|^p\, dy\right)^{q/p}\, dh dx\\
&\le (b-a)^{(p-q)/p}\, \int_{\mathbb R} \frac 1{|h|^{sq+1}}\left(\int_{[a,b]\times{\mathbb R}}|\Delta^M_{h e_2}f(x, y)|^p\, dxdy\right)^{q/p}\, dh\\
&\lesssim |f|_{B^s_{p,q}({\mathbb R}^2)}^q<\infty\end{aligned}
\end{equation*}
whence the conclusion.
\end{proof}
When $q>p$, a striking phenomenon occurs.
\begin{proposition}
\label{l7.26}
Let $s>0$ and $1\le p<q\le\infty$. Then there exists some compactly supported $f\in B^s_{p,q}({\mathbb R}^2)$ such that for a.e. $x\in (0,1)$ we have $f(x,\cdot)\not\in B^s_{p,\infty}({\mathbb R})$.
In particular, for any $1\le r<\infty$ and a.e. $x\in (0,1)$ we have $f(x,\cdot)\not\in B^s_{p, r}({\mathbb R})$.
\end{proposition}
Before proceeding to the proof, let us note that if $f\in B^s_{p,q}({\mathbb R}^2)$ then $f\in L^p({\mathbb R}^2)$, and thus the partial function $f(x,\cdot)$ is a well-defined element of $L^p({\mathbb R})$ for a.e. $x$.
\begin{proof}
Since $B^s_{p, q}({\mathbb R}^2)\subset B^s_{p,\infty}({\mathbb R}^2)$, $\forall\, q$, we may assume that $q<\infty$.
We rely on the characterization of Besov spaces in terms of smooth wavelets, as in Section \ref{qa6}.
We start by explaining the construction of $f$. Let $\psi_F$ and $\psi_M$ be as in Section \ref{qa6}. With no loss of generality, we may assume that $\operatorname{supp}\psi_M\subset [0,a]$ with $a\in{\mathbb N}$. Consider $(\alpha, \beta)\subset (0,a)$ and $\gamma>0$ such that $\psi_M\ge\gamma$ in $[\alpha, \beta]$.
Set $\delta:=\beta-\alpha>0$ and consider some integer $N$ such that $[0,1]\subset [\alpha-N\, \delta, \beta+N\, \delta]$. We look for an $f$ of the form
\begin{equation}
\label{qb5}
f=\sum_{\ell=-N}^N\sum_{j\ge j_0} g^\ell_j,
\end{equation}
with
\begin{equation}
\begin{aligned}
\label{qb6}
g^\ell_j(x,y)=\mu_j\, 2^{-j(s-2/p)}\sum_{m_1\in I_j} &\psi_M(2^j x-m_1-\ell\, \delta)\\
&\times\psi_M(2^j y-m_1-2^{j+1}\, \ell\, a-\ell\, \delta).
\end{aligned}
\end{equation}
Here, the set $I_j$, satisfying
\begin{equation}
\label{qb7}
I_j\subset \{ 0, 1,\ldots, 2^j\},
\end{equation}
the integer $j_0$ and the coefficients $\mu_j>0$ will be defined later.
We consider the partial sums $f^\ell_J:=\sum_{j=j_0}^J g^\ell_j$. Clearly, we have $f^\ell_J\in C^k$ and, provided $j_0$ is sufficiently large,
\begin{equation*}
\operatorname{supp} f^\ell_J\subset K_\ell:=[-N\, \delta , 5/4]\times [2\ell\, a-1/4, (2\ell +1)\, a+1/4].
\end{equation*}
We next note that the compact sets $K_\ell$ are mutually disjoint. Using Proposition \ref{p2.4}, item 2, we easily find that
\begin{equation}
\label{qb9}
\left\|\sum_{\ell=-N}^N f^\ell_J \right\|_{B^s_{p,q}({\mathbb R}^2)}^q\sim \sum_{\ell=-N}^N \left\| f^\ell_J \right\|_{B^s_{p,q}({\mathbb R}^2)}^q.
\end{equation}
On the other hand, if $\psi_M$ and $\psi_F$ are wavelets such that Proposition \ref{qb1} holds, then so are $\psi_F(\cdot-\lambda)$ and $\psi_M(\cdot-\lambda)$, $\forall\, \lambda\in{\mathbb R}$ \cite[Theorem 1.61 {\it (ii)}, Theorem 1.64]{triebel06}. Combining this fact with \eqref{qb9}, we find that
\begin{equation}
\label{qc1}
\left\|\sum_{\ell=-N}^N f^\ell_J \right\|_{B^s_{p,q}({\mathbb R}^2)}^q\sim \sum_{j=j_0}^J \left( \# I_j\, (\mu_j)^p\right)^{q/p}.
\end{equation}
We now make the size assumption
\begin{equation}
\label{qc2}
\sum_{j=j_0}^\infty \left( \# I_j\, (\mu_j)^p\right)^{q/p}<\infty.
\end{equation}
By \eqref{qc1} and \eqref{qc2}, we see that the formal series in \eqref{qb5} defines a compactly supported $f\in B^s_{p,q}({\mathbb R}^2)$, with $\sum_{\ell=-N}^N f^\ell_J\to f$ in $B^s_{p,q}({\mathbb R}^2)$ (and therefore in $L^p({\mathbb R}^2)$) as $J\to\infty$.
We next investigate the $B^s_{p,\infty}$ norm of the restrictions $f^\ell_J (x,\cdot)$. As in \eqref{qb9}, we have
\begin{equation}
\label{qc3}
\left\|\sum_{\ell=-N}^N f^\ell_J (x,\cdot)\right\|_{B^s_{p,\infty}({\mathbb R})}\sim \sum_{\ell=-N}^N\|f^\ell_J(x,\cdot)\|_{B^s_{p,\infty}({\mathbb R})}.
\end{equation}
Rewriting \eqref{qb6} as
\begin{equation}
\label{qc4}
\begin{aligned}
g^\ell_j(x,y)=\mu_j\, 2^{-j(s-1/p)}\, 2^{j/p}\sum_{m_1\in I_j} & \psi_M(2^j x-m_1-\ell\, \delta)\\
&\times \psi_M(2^j y-m_1-2^{j+1}\, \ell\, a -\ell\, \delta),
\end{aligned}
\end{equation}
we obtain
\begin{equation}
\label{qc5}
\|f^\ell_J(x,\cdot)\|_{B^s_{p,\infty}({\mathbb R})}^p\sim \sup_{j_0\le j\le J} 2^j\, (\mu_j)^p\, \sum_{m_1\in I_j}|\psi_M (2^j\, x- m_1-\ell\, \delta)|^p.
\end{equation}
We now make the size assumption
\begin{equation}
\label{qc6}
\sup_{j\ge j_0} 2^j\, (\mu_j)^p\, \sum_{\ell=-N}^N\sum_{m_1\in I_j}|\psi_M (2^j\, x- m_1-\ell\, \delta)|^p=\infty,\ \forall\, x\in [0,1].
\end{equation}
Then we claim that for a.e. $x\in (0,1)$ we have
\begin{equation}
\label{qc7}
f(x,\cdot)\not\in B^s_{p,\infty}({\mathbb R}).
\end{equation}
Indeed, since $\sum_{\ell=-N}^N f^\ell_J\to f$ in $L^p ({\mathbb R}^2)$, for a.e. $x\in{\mathbb R}$ we have
\begin{equation}
\label{qc8}
\sum_{\ell=-N}^N f^\ell_J(x,\cdot)\to f(x,\cdot)\ \text{in }L^p({\mathbb R}).
\end{equation}
We claim that for every $x\in [0,1]$ such that \eqref{qc8} holds, we have $f(x,\cdot)\not\in B^s_{p,\infty}({\mathbb R})$. Indeed, on the one hand \eqref{qc6} implies that for some $\ell$ we have $\lim_{J\to\infty}\|f^\ell_J (x,\cdot)\|_{B^s_{p,\infty}({\mathbb R})}=\infty$. We assume e.g. that this holds when $\ell=0$. Thus
\begin{equation}
\label{qc80}
\sup_{j\ge j_0} 2^j\, (\mu_j)^p\, \sum_{m_1\in I_j}|\psi_M (2^j\, x- m_1)|^p=\infty.
\end{equation}
On the other hand, assume by contradiction that $f(x,\cdot)\in B^s_{p,\infty}({\mathbb R})$. Then we may write $f(x,\cdot)$ as in \eqref{qb3}, with coefficients as in \eqref{qb40}. In particular, taking into account the explicit formula of $g^\ell_j$ and the fact that $\sum_{\ell=-N}^N f^\ell_J(x,\cdot)\to f(x,\cdot)$ in $L^p({\mathbb R})$, we find that for $k\ge j_0$ and $m_1\in I_k$ we have
\begin{equation}
\label{qc800}
\begin{aligned}
\mu^{k, \{ M\}}_{m_1}(f(x,\cdot))&=\mu^{k, \{ M\}}_{m_1}\left(\sum_{j=j_0}^J g^0_j (x, \cdot)\right)=\mu^{k, \{ M\}}_{m_1}(g^0_k (x,\cdot))\\
&=2^{k/p}\, \mu_k\, \psi_M(2^k\, x-m_1),\ \forall\, J\ge k.
\end{aligned}
\end{equation}
We obtain a contradiction combining \eqref{qc80}, \eqref{qc800} and Corollary \ref{qb400}.
It remains to construct $I_j$ and $\mu_j$ satisfying \eqref{qb7}, \eqref{qc2} and \eqref{qc6}. We will let $I_j=\llbracket s_j, t_j\rrbracket$, with $0\le s_j\le t_j\le 2^j$ integers to be determined later. Set $t:=q/p \in (1,\infty)$ and
\begin{equation*}
\mu_j:=\left(\frac 1{(t_j-s_j+1)\, j^{1/t}\, \ln j}\right)^{1/p}.
\end{equation*}
Clearly, \eqref{qb7} and \eqref{qc2} hold. It remains to define $I_j$ in order to have \eqref{qc6}. Consider the dyadic segment $L_j:=[s_j/2^j, t_j/2^j]$. We claim that
\begin{equation}
\label{qa11}
\sum_{\ell=-N}^N\sum_{m_1\in I_j}|\psi_M (2^j\, x- m_1-\ell\, \delta)|^p\ge \gamma^p,\ \forall\, x\in L_j.
\end{equation}
Indeed, let $m_1\in [s_j, t_j]$ be the integer part of $2^j\, x$. By the definition of $\delta$ and the choice of $N$, there exists some $\ell\in \llbracket -N, N\rrbracket$ such that $\alpha\le 2^j\, x- m_1-\ell\, \delta\le \beta$, whence the conclusion.
By the above, \eqref{qc6} holds provided we have
\begin{equation}
\label{qc60}
\sup_{j\ge j_0}2^j\, (\mu_j)^p\, \ensuremath{\mathonebb{1}}_{L_j}(x)=\infty,\ \forall\, x\in [0,1].
\end{equation}
We next note that
\begin{equation}
\label{qc600}
2^j\, (\mu_j)^{p}\sim \frac 1{|L_j|\, j^{1/t}\, \ln j}=\frac {u_j}{|L_j|},
\end{equation}
where $u_j:=1/(j^{1/t}\, \ln j)$ satisfies
\begin{equation}
\label{qc6000}
\sum_{j\ge j_0}u_j=\infty.
\end{equation}
In view of \eqref{qc600} and \eqref{qc6000}, existence of $I_j$ satisfying \eqref{qc60} is a consequence of Lemma \ref{tempSeq} below. The proof of Proposition \ref{l7.26} is complete.\end{proof}
\begin{lemma}\label{tempSeq}
Consider a sequence $(u_j)$ of positive numbers such that $\sum_{j\ge j_0}u_j=\infty$. Then there exists a sequence $(L_j)$ of dyadic intervals $L_j=[s_j/2^j, t_j/2^j]$, such that:
\begin{enumerate}
\item
$s_j, t_j\in{\mathbb N}$, $0\le s_j< 2^j$.
\item
$|L_j|=o(u_j)$ as $j\to\infty$.
\item
Every $x\in [0,1]$ belongs to infinitely many $L_j$'s.
\end{enumerate}
\end{lemma}
\begin{proof}
Consider a sequence $(v_j)$ of positive numbers such that $\sum_{j\ge j_0}v_j\, u_j=\infty$ and $v_j\to 0$. Let $L_{j_0}$ be the largest dyadic interval of the form $[0, t_{j_0}/2^{j_0}]$ of length $\le v_{j_0}\, u_{j_0}$. This defines $s_{j_0}=0$ and $t_{j_0}$.
Assume that $L_j=[s_j/2^j, t_j/2^j]=[a_j, b_j]$ has been constructed for some $j\ge j_0$; then one of the following two cases occurs. Either $b_j<1$, and then we let $L_{j+1}$ be the largest dyadic interval of the form $[2 t_{j}/2^{j+1}, t_{j+1}/2^{j+1}]$ such that $|L_{j+1}|\le v_{j+1}\, u_{j+1}$. Or $b_j\ge 1$, and then we let $L_{j+1}$ be the largest dyadic interval of the form $[0, t_{j+1}/2^{j+1}]$ such that $|L_{j+1}|\le v_{j+1}\, u_{j+1}$.
Using the assumption $\sum_{j\ge j_0}v_j\, u_j=\infty$ and the fact that $|L_j|\ge v_j\, u_j-2^{-j}$, we easily find that for every $j\ge j_0$ there exists some $k>j$ such that $L_k=[a_k, b_k]$ satisfies $b_k\ge 1$, and thus the intervals $L_j$ cover each point $x\in [0,1]$ infinitely many times. \end{proof}
\begin{remark}
\label{r10}
Following a suggestion of the first author, Brasseur investigated the non-restriction property established in Proposition \ref{l7.26}. In \cite{brasseur} (which is independent of the present work), Brasseur extends Proposition \ref{l7.26} to the full range $0<p<q\le \infty$; the construction is somewhat similar to ours (based on the size of the coefficients $\mu_j$ in the decomposition \eqref{qb6}), but relies on a different decomposition (subatomic instead of wavelets). \cite{brasseur} also contains an interesting positive result: it exhibits function spaces $X$, intermediate between $B^s_{p,q}({\mathbb R})$ and $\displaystyle\bigcup_{\varepsilon>0}B^{s-\varepsilon}_{p, q}({\mathbb R})$, such that, if $f\in B^s_{p,q}({\mathbb R}^2)$, then for a.e. $x\in{\mathbb R}$ we have $f(x,\cdot)\in X$.
\end{remark}
\subsection{Poincar\'e type inequalities}
${}$
The next Poincar\'e type inequality for Besov spaces is certainly well-known, but we were unable to find a reference in the literature.
\begin{lemma}
\label{ad2}
Let $0<s<1$, $1\leq p<\infty$, and $1\le q\le\infty$. Then we have
\begin{equation} \label{PBesov}
\left\Vert f-\fint f\right\Vert_{L^p}\lesssim \left\vert f\right\vert_{B^{s}_{p,q}}\quad \text{for every measurable function } f:\Omega\to{\mathbb R}.
\end{equation}
\end{lemma}
\noindent
Recall (Proposition \ref{p2.4}) that the
semi-norm in \eqref{PBesov} is given by
\begin{equation}
\label{aa4}
|f|_{B^s_{p,q}}=|f|_{B^s_{p,q}({\mathbb R}^n)}:=\left(\int_{{\mathbb R}^n}|h|^{-sq}\|\Delta_h f\|_{L^p}^q\, \frac{dh}{|h|^n}\right)^{1/q}
\end{equation}
when $q<\infty$,
with the obvious modifications when $q=\infty$ or ${\mathbb R}^n$ is replaced by $\Omega$.
\begin{proof}
By \eqref{homoglp}, we have $\|f\|_{B^s_{p,q}}\sim \|f\|_{L^p}+|f|_{B^s_{p,q}}$.
Recall that the embedding $B^{s}_{p,q}\hookrightarrow L^p$ is compact \cite[Theorem 3.8.3, p. 296]{triebel1}. From this we infer that \eqref{PBesov} holds for every function $f\in B^{s}_{p,q}$. Indeed, assume by contradiction that this is not the case. Then there exists a sequence of functions $(f_j)_{j\geq 1}\subset B^s_{p,q}$ such that, for every $j$,
\begin{equation*}
1=\left\Vert f_j-\fint f_j\right\Vert_{L^p}\geq j \left\vert f_j\right\vert_{B^{s}_{p,q}}.
\end{equation*}
Set $g_j:={f_j-\fint f_j}$.
Then, up to a subsequence, we have $g_j\to g$ in $L^p$, where $\|g\|_{L^p}=1$ and $\int g=0$.
We claim that $g$ is constant in $\Omega$ (and thus $g=0$). Indeed, by the Fatou lemma, for every $h\in {\mathbb R}^n$ we have
\begin{equation}
\label{aa3}
\|\Delta_hg\|_{L^p}\le \liminf \|\Delta_hg_j\|_{L^p}=\liminf \|\Delta_hf_j\|_{L^p}.
\end{equation}
By \eqref{aa4}, \eqref{aa3} and the Fatou lemma, we have
\begin{equation*}
|g|_{B^s_{p,q}}\le\liminf |g_j|_{B^s_{p,q}}=\liminf |f_j|_{B^s_{p,q}}=0;
\end{equation*}
thus $g$ is constant, and since $\int g=0$ we obtain $g=0$, as claimed. This contradicts the fact that $\|g\|_{L^p}=1$.
Let us now establish \eqref{PBesov} only assuming that $|f|_{B^s_{p,q}}<\infty$. We start by reducing the case where $q=\infty$ to the case where $q<\infty$. This reduction relies on the straightforward estimate
\begin{equation*}
|f|_{B^\sigma_{p,r}}\lesssim |f|_{B^s_{p,\infty}},\quad \forall\, 0<\sigma<s,\ \forall\, 0<r<\infty.
\end{equation*}
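Indeed, since $\|\Delta_hf\|_{L^p}\le |h|^{s}\,|f|_{B^s_{p,\infty}}$ for every $h$, restricting for definiteness the integral in \eqref{aa4} to $|h|\le 1$, we have
\begin{equation*}
|f|_{B^\sigma_{p,r}}^r\le |f|_{B^s_{p,\infty}}^r\int_{|h|\le 1}|h|^{(s-\sigma)r-n}\, dh\lesssim |f|_{B^s_{p,\infty}}^r,
\end{equation*}
the last integral being finite since $(s-\sigma)r>0$.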
So let us assume that $q<\infty$.
For every integer $k\geq 1$, let $\Phi_k:{\mathbb R}\rightarrow {\mathbb R}$ be given by
\begin{equation*}
\Phi_k(t):=
\begin{cases}
t, & \mbox{ if }\left\vert t\right\vert \leq k\\
-k, & \mbox{ if }t\leq -k\\
k, & \mbox{ if }t\geq k
\end{cases}.
\end{equation*}
Clearly, $\Phi_k$ is $1$-Lipschitz, so that \eqref{aa4} easily yields
\begin{equation} \label{controlphik}
\left\vert \Phi_k(f)\right\vert_{B^s_{p,q}}\leq \left\vert f\right\vert_{B^s_{p,q}}
\end{equation}
and (by dominated convergence, using $q<\infty$ and \eqref{aa4})
\begin{equation} \label{convphikf}
\lim_{k\rightarrow \infty} \left\vert \Phi_k(f)-f\right\vert_{B^{s}_{p,q}}=0.
\end{equation}
Since $\Phi_k(f)\in L^{\infty}(\Omega)\subset L^p(\Omega)$, one has $\Phi_k(f)\in B^s_{p,q}$ for every $k$. Therefore, \eqref{PBesov} and \eqref{controlphik} imply
\begin{equation} \label{phikck}
\left\Vert \Phi_k(f)-c_k\right\Vert_{L^p}\lesssim \left\vert \Phi_k(f)\right\vert_{B^{s}_{p,q}}\le \left\vert f\right\vert_{B^s_{p,q}}
\end{equation}
with $c_k:=\fint \Phi_k(f)$. Thanks to \eqref{convphikf}, we may pick an increasing sequence of integers $(\lambda_k)_{k\geq 1}$ such that, for every $k$,
$\displaystyle
\left\vert \Phi_{\lambda_{k+1}}(f)-\Phi_{\lambda_k}(f)\right\vert_{B^s_{p,q}}\leq 2^{-k}$.
Applying \eqref{PBesov} to $ \Phi_{\lambda_{k+1}}(f)-\Phi_{\lambda_k}(f)$, one therefore has
\begin{equation*}
\left\Vert \left(\Phi_{\lambda_{k+1}}(f)-c_{\lambda_{k+1}}\right)-\left(\Phi_{\lambda_k}(f)-c_{\lambda_k}\right)\right\Vert_{L^p}\lesssim \left\vert \Phi_{\lambda_{k+1}}(f)-\Phi_{\lambda_k}(f)\right\vert_{B^s_{p,q}}\leq 2^{-k},
\end{equation*}
which entails that $\displaystyle
\Phi_{\lambda_k}(f)-c_{\lambda_k}\to g\text{ in }L^p$ as $k\to\infty$.
Up to a subsequence, one can also assume that $
\displaystyle
\Phi_{\lambda_k}(f)(x)-c_{\lambda_k}\to g(x)$ for a.e. $x\in \Omega$.
Take any $x\in \Omega$ such that $\Phi_{\lambda_k}(f)(x)-c_{\lambda_k}\to g(x)$.
Since
$\displaystyle
\Phi_{\lambda_k}(f)(x)\to f(x)$ as $k\to\infty$,
one obtains
\begin{equation} \label{ckc}
\lim_{k\rightarrow \infty} c_{\lambda_k}=c\in {\mathbb R}.
\end{equation}
Finally, \eqref{phikck}, \eqref{ckc} and the Fatou lemma yield
$\displaystyle
\left\Vert f-c\right\Vert_{L^p}\lesssim \left\vert f\right\vert_{B^s_{p,q}}$,
from which \eqref{PBesov} easily follows.
\end{proof}
We next state and prove a generalization of Lemma \ref{ad2}.
\begin{lemma}
\label{ad3}
Let $0<s<1$, $1\le p<\infty$, $1\le q\le\infty$, and $\delta\in (0,1]$. Define
\begin{equation}
\label{ad4}
|f|_{B^s_{p,q,\delta}}:=\left(\int_{|h|\le\delta}|h|^{-sq}\|\Delta_h f\|_{L^p}^q\, \frac{dh}{|h|^n}\right)^{1/q}
\end{equation}
when $q<\infty$,
with the obvious modifications when $q=\infty$ or ${\mathbb R}^n$ is replaced by $\Omega$. Then we have
\begin{equation} \label{ad5}
\left\Vert f-\fint f\right\Vert_{L^p}\lesssim \left\vert f\right\vert_{B^{s}_{p,q, \delta}}\quad \text{for every measurable function } f:\Omega\to{\mathbb R}.
\end{equation}
\end{lemma}
\begin{proof}
Recall that $\|f\|_{B^s_{p,q}}\sim \|f\|_{L^p}+|f|_{B^s_{p,q,\delta}}$ (Proposition \ref{p2.4}). We continue as in the proof of Lemma \ref{ad2}.
\end{proof}
We end with an estimate involving derivatives.
\begin{lemma}
\label{at4}
Let $s>0$, $1< p<\infty$ and $1\le q\le\infty$.
Let $f\in {\cal D}'(\Omega)$ be such that $\nabla f\in B^{s-1}_{p,q}(\Omega)$. Then $f\in B^s_{p, q}(\Omega)$ and
\begin{equation}
\label{at9}
\left\| f-\fint f\right\|_{B^s_{p,q}}\lesssim \|\nabla f\|_{B^{s-1}_{p,q}}.
\end{equation}
\end{lemma}
The above result is well-known, but we were unable to find it in the literature; for the convenience of the reader, we present the short argument when $\Omega={\mathbb T}^n$.
\begin{proof}
We use the notation in Proposition \ref{mm2} and the following result \cite[Lemma 2.1.1, p. 16]{chemin}: we have
\begin{equation}
\label{mm3}
\|f_j\|_{L^p}\sim 2^{-j}\|\nabla f_j\|_{L^p},\quad \forall\, 1\le p\le \infty,\ \forall\, j\ge 1.
\end{equation}
By combining \eqref{mm3} with Proposition \ref{mm2}, we obtain, e.g. when $q<\infty$:
\begin{equation}
\label{mm4}
\begin{aligned}
\left\|f-a_0\right\|_{B^s_{p,q}}^q&=\left\|\sum_{j\ge 1}f_j\right\|_{B^s_{p,q}}^q\sim \sum_{j\ge 1}2^{sjq}\|f_j\|_{L^p}^q
\\
&\lesssim \sum_{j\ge 1}2^{sjq}2^{-jq}\|\nabla f_j\|_{L^p}^q
\sim \|\nabla f\|_{B^{s-1}_{p,q}}^q.
\end{aligned}
\end{equation}
In particular, $f\in L^1$ (Lemma \ref{kc2}), and thus $a_0=\fint f$. Therefore, \eqref{mm4} is equivalent to \eqref{at9}.
\end{proof}
\begin{remark}
\label{mn41}
With more work, Lemma \ref{at4} can be extended to the case where $p=1$. Although this will not be needed here, we sketch below the argument. With the notation in Section \ref{mm6}, consider the Littlewood-Paley decomposition $f=\sum f_j$, with $f_j:=\sum a_m\varphi_j(2\pi m)e^{2\imath\pi m\cdot x}$. Note that
the Littlewood-Paley decomposition of $\nabla f$ is simply given by
\begin{equation}
\label{mn7}
\nabla f=\sum \nabla f_j.
\end{equation}
In the spirit of \cite[Lemma 2.1.1, p. 16]{chemin} (see also \cite[Proof of Lemma 1]{leta}), one may prove that we have the following analog of \eqref{mm3}:
\begin{equation}
\label{mn6}
\|f_j\|_{L^p}\sim 2^{-j}\|\nabla f_j\|_{L^p},\quad \forall\, 1\le p\le \infty,\ \forall\, j\ge 1.
\end{equation}
Using Definition \ref{periodicbesov}, \eqref{mn7} and \eqref{mn6}, we obtain \eqref{mm4}. We conclude as in the proof of Lemma \ref{at4}.
\end{remark}
\subsection{Characterization of $B^s_{p,q}$ via extensions} \label{characext}
${}$
The results we present in this section are classical for functions defined on the whole of ${\mathbb R}^n$ and for the harmonic extension. Such results were obtained by Uspenski\u\i{} in the early sixties \cite{uspenskii}. For further developments, see \cite[Section 2.12.2, Theorem, p. 184]{triebel2}; see also Section \ref{chha}. When the harmonic extension is replaced by other extensions by regularization, the kind of results we present below were known to experts, at least for maps defined on ${\mathbb R}^n$; see \cite[Section 10.1.1, Theorem 1, p. 512]{mazyanew} and also \cite{tracesoldnew} for a systematic treatment of extensions by smoothing.
The local variants (involving extensions by averages in domains) that we present below could be obtained by adapting the arguments we developed in a more general setting in \cite{tracesoldnew}, which are quite involved. Here, however, we present a more elementary approach, inspired by \cite{mazyanew}, which is sufficient for our purposes.
In what follows, we let $|\ |$ denote the $\|\ \|_{\infty}$ norm in ${\mathbb R}^n$.
For simplicity, we state our results when $\Omega={\mathbb T}^n$, but they can be easily adapted to arbitrary $\Omega$.
\begin{lemma}
\label{ab1}
Let $0<s<1$, $1\le p<\infty$, $1\le q\le\infty$, and $\delta\in (0,1]$. Set
$
V_\delta:={\mathbb T}^n\times (0,\delta)$.
\begin{enumerate}
\item
Let $F\in C^\infty(V_{\delta})$. If
\begin{equation}
\label{cg6}
\left(\int_0^{\delta/2}\varepsilon^{q-sq}\|(\nabla F)(\cdot,\varepsilon)\|_{L^p}^q\, \frac{d\varepsilon}\varepsilon\right)^{1/q}<\infty
\end{equation}
(with the obvious modification when $q=\infty$), then $F$ has a trace $f\in B^s_{p,q}({\mathbb T}^n)$, satisfying
\begin{equation}
\label{ab2}
|f|_{B^s_{p,q,\delta}}\lesssim \left(\int_0^{\delta/2}\varepsilon^{q-sq}\|(\nabla F)(\cdot,\varepsilon)\|_{L^p}^q\, \frac{d\varepsilon}{\varepsilon}\right)^{1/q}.
\end{equation}
\item
Conversely, let $f\in B^s_{p,q}({\mathbb T}^n)$. Let $\rho\in C^\infty$ be a mollifier supported in $\{ |x|\le 1\}$ and set $F(x,\varepsilon):=f\ast\rho_\varepsilon(x)$, $x\in{\mathbb T}^n$, $0<\varepsilon<\delta$. Then
\begin{equation}
\label{cg1}
\left(\int_0^\delta\varepsilon^{q-sq}\|(\nabla F)(\cdot,\varepsilon)\|_{L^p}^q\, \frac{d\varepsilon}{\varepsilon}\right)^{1/q}\lesssim |f|_{B^s_{p,q,\delta}}.
\end{equation}
\end{enumerate}
\end{lemma}
A word about the existence of the trace in item 1 above. We will prove below that for every $0<\lambda<\delta/4$ we have
\begin{equation}
\label{cg2}
\left|F_{|{\mathbb T}^n\times\{\lambda\}}\right|_{B^s_{p,q}}\lesssim \left(\int_0^{\delta/2}\varepsilon^{q-sq}\|(\nabla F)(\cdot,\varepsilon)\|_{L^p}^q\, \frac{d\varepsilon}{\varepsilon}\right)^{1/q}.
\end{equation}
By Lemma \ref{ad2} and a standard argument, this leads to the existence, in $B^s_{p,q}$, of the limit $\lim_{\varepsilon\to 0}F(\cdot,\varepsilon)$. This limit is the trace of $F$ on ${\mathbb T}^n$ and clearly satisfies \eqref{ab2}.
\begin{proof}
For simplicity, we treat only the case where $q<\infty$; the case where $q=\infty$ is somewhat simpler and is left to the reader.
We claim that in item 1 we may assume that $F\in C^\infty(\overline{V_\delta})$. Indeed, assume that \eqref{ab2} holds (with $\operatorname{tr} F=F(\cdot, 0)$) for such $F$. By Lemma \ref{ad2}, we have the stronger inequality $\left\|\operatorname{tr} F-\fint\operatorname{tr} F\right\|_{B^s_{p,q}}\lesssim I(F)$, where $I(F)$ is the integral in \eqref{cg6}. Then, by a standard approximation argument, we find that \eqref{ab2} holds for every $F$.
So let $F\in C^\infty(\overline{V_\delta})$, and set $f(x):=F(x,0)$, $\forall\, x\in{\mathbb T}^n$. Denote by $I(F)$ the quantity in \eqref{cg6}. We have to prove that $f$ satisfies
\begin{equation}
\label{ab210}
|f|_{B^s_{p,q}}\lesssim I(F).
\end{equation}
If $|h|\le\delta$, then
\begin{equation}
\label{cg4}
|\Delta_hf(x)|\le \left|f(x+h)-F(x+h/2,|h|/2)\right|+\left|f(x)-F(x+h/2,|h|/2)\right|.
\end{equation}
By symmetry and \eqref{cg4}, the estimate \eqref{ab210} will follow from
\begin{equation}
\label{cg5}
\left(\int_{|h|\le\delta}|h|^{-sq}\|f-F(\cdot+h/2,|h|/2)\|_{L^p}^q\, \frac{dh}{|h|^n}\right)^{1/q}\lesssim I(F).
\end{equation}
In order to prove \eqref{cg5}, we start from
\begin{equation}
\label{cg8}
\begin{aligned}
\left|F(x+h/2,|h|/2)-f(x)\right|&=\left|\int_0^1 (\nabla F)(x+th/2,t|h|/2)\cdot (h/2,|h|/2)\, dt\right|\\
&\le |h|\int_0^1|\nabla F(x+th/2,t|h|/2)|\, dt.
\end{aligned}
\end{equation}
Let $J(F)$ denote the left-hand side of \eqref{cg5}. Using \eqref{cg8} and setting $r:=|h|/2$, we obtain
\begin{equation}
\label{ch1}
\begin{aligned}
[J(F)]^q&\le \int_{|h|\le\delta}|h|^{q-sq}\left(\int_0^1\|\nabla F(\cdot+th/2,t|h|/2)\|_{L^p}\, dt\right)^q\, \frac{dh}{|h|^n}\\
&=\int_{|h|\le\delta}|h|^{q-sq}\left(\int_0^1\|\nabla F(\cdot,t|h|/2)\|_{L^p}\, dt\right)^q\, \frac{dh}{|h|^n}\\
&\sim \int_0^{\delta/2}r^{q-sq-1}\left(\int_0^1\|\nabla F(\cdot,tr)\|_{L^p}\, dt\right)^q\, dr\\
&\sim \int_0^{\delta/2}r^{-sq-1}\left(\int_0^r\|\nabla F(\cdot,\sigma)\|_{L^p}\, d\sigma\right)^q\, dr
\lesssim [I(F)]^q.
\end{aligned}
\end{equation}
The last inequality is a special case of Hardy's inequality \cite[Chapter 5, Lemma 3.14]{steinweiss}, which we recall here when $\delta =\infty$.\footnote{But the argument adapts to a finite $\delta$; see e.g. \cite[Proof of Corollary 7.2]{bousquetmironescu}.} Let $1\le q<\infty$ and $1<\rho<\infty$. If $G\in W^{1,1}_{loc}([0,\infty))$, then
\begin{equation}
\label{e04269}
\int_0^\infty\frac{|G(r)-G(0)|^q}{r^\rho}\,dr\leq \left(\frac{q}{\rho-1}\right)^q\int_{0}^\infty\frac{|G'(r)|^q}{r^{\rho-q}}\,dr.
\end{equation}
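For completeness, let us note that \eqref{e04269} follows from Minkowski's integral inequality: since $|G(r)-G(0)|\le r\int_0^1|G'(tr)|\, dt$, we have
\begin{equation*}
\left(\int_0^\infty \frac{|G(r)-G(0)|^q}{r^\rho}\,dr\right)^{1/q}\le \int_0^1\left(\int_0^\infty r^{q-\rho}|G'(tr)|^q\,dr\right)^{1/q}\, dt=\frac q{\rho-1}\left(\int_0^\infty\frac{|G'(s)|^q}{s^{\rho-q}}\,ds\right)^{1/q}.
\end{equation*}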
We obtain the last inequality in \eqref{ch1} by applying \eqref{e04269} with $G(r):=\int_0^r\|\nabla F(\cdot,\sigma)\|_{L^p}\, d\sigma$ (so that $G'(r)=\|\nabla F(\cdot, r)\|_{L^p}$ and $G(0)=0$) and $\rho:=sq+1$.
The proof of item 1 is complete.
We next turn to item 2. We have
\begin{equation}
\label{kh2}
\nabla F(x,\varepsilon)=\frac 1\varepsilon f\ast \eta_\varepsilon(x),
\end{equation}
where $\nabla$ stands for $(\partial_1,\ldots,\partial_n,\partial_{\varepsilon})$. Here, $\eta=(\eta^1,\ldots,\eta^{n+1})\in C^\infty({\mathbb T}^n ; {\mathbb R}^{n+1})$ is supported in $\{ |x|\le 1\}$ and is given in coordinates by
\begin{equation}
\label{kh3}
\eta^j=\partial_j\rho, \ \forall\, j\in \llbracket 1, n\rrbracket,\ \eta^{n+1}=-\operatorname{div} (x\rho).
\end{equation}
Noting that $\int \eta=0$, we find that
\begin{equation}
\label{ch2}
\begin{aligned}
|\nabla F(x,\varepsilon)|&= \frac 1\varepsilon\left|\int_{|y|\le \varepsilon}(f(x-y)-f(x))\eta_\varepsilon(y)\, dy\right|\\
&\lesssim\frac 1{\varepsilon^{n+1}}\int_{|h|\le\varepsilon}|f(x+h)-f(x)|\, dh.
\end{aligned}
\end{equation}
Integrating \eqref{ch2} and using Minkowski's inequality, we obtain
\begin{equation}
\label{ci1}
\|\nabla F(\cdot,\varepsilon)\|_{L^p}\lesssim \frac 1{\varepsilon^{n+1}}\int_{|h|\le\varepsilon}\|\Delta_hf\|_{L^p}\, dh.
\end{equation}
Let $L(F)$ be the quantity in the left-hand side of \eqref{cg1}. Combining \eqref{ci1} with H\"older's inequality, we find that
\begin{equation}
\label{mj1}
\begin{aligned}
[L(F)]^q&\color{black} \lesssim \int_0^{\delta}\frac 1{\varepsilon^{nq+sq+1}}\left(\int_{|h|\le\varepsilon}\|\Delta_hf\|_{L^p}\, dh\right)^q\, d\varepsilon\\
&\lesssim \int_0^{\delta}\frac 1{\varepsilon^{nq+sq+1}}\varepsilon^{n(q-1)}\int_{|h|\le\varepsilon}\|\Delta_hf\|_{L^p}^q\, dh\, d\varepsilon\\
&\lesssim \int_{|h|\le\delta}|h|^{-sq}\|\Delta_hf\|_{L^p}^q\, \frac{dh}{|h|^n}=|f|_{B^s_{p,q,\delta}}^q,\end{aligned}
\end{equation}
i.e., \eqref{cg1} holds.
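The passage from the second to the third line of \eqref{mj1} is Fubini's theorem followed by an explicit integration in $\varepsilon$ (a sketch; note that $\varepsilon^{-nq-sq-1}\varepsilon^{n(q-1)}=\varepsilon^{-n-sq-1}$):

```latex
\int_0^{\delta}\varepsilon^{-n-sq-1}\int_{|h|\le\varepsilon}\|\Delta_hf\|_{L^p}^q\, dh\, d\varepsilon
  =\int_{|h|\le\delta}\|\Delta_hf\|_{L^p}^q\int_{|h|}^{\delta}\varepsilon^{-n-sq-1}\, d\varepsilon\, dh
  \le\frac 1{n+sq}\int_{|h|\le\delta}|h|^{-sq}\|\Delta_hf\|_{L^p}^q\, \frac{dh}{|h|^n}.
```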
\end{proof}
In the same vein, we have the following result, involving the semi-norm appearing in Proposition \ref{p2.4}, more specifically the quantity
\begin{equation}
\label{kd4}
|f|_{B^1_{p,q,\delta}}:=\left(\int_{|h|\le\delta}|h|^{-q}\|\Delta_h^2 f\|_{L^p}^q\, \frac{dh}{|h|^n}\right)^{1/q}
\end{equation}
when $q<\infty$,
with the obvious modification when $q=\infty$.
We first introduce a notation. Given $F\in C^2(V_\delta)$, we let $D^2_\#F$ denote the collection of the second order derivatives of $F$ which are either completely horizontal (that is of the form $\partial_j\partial_k F$, with $j,k\in\llbracket 1,n\rrbracket$), or completely vertical (that is $\partial_{n+1}\partial_{n+1}F$).
\begin{lemma}
\label{kb2}
Let $1\le p<\infty$ and $1\le q\le\infty$.
Let $F\in C^\infty(V_{\delta})$ and set
\begin{equation*}
M(F):=\left(\int_0^{\delta}\varepsilon^{q}\|(\nabla F)(\cdot,\varepsilon)\|_{L^{2p}}^{2q}\, \frac{d\varepsilon}\varepsilon\right)^{1/q}
\end{equation*}
and
\begin{equation*}
N(F):=\left(\int_0^{\delta}\varepsilon^{q}\left\|(D^2_\# F)(\cdot,\varepsilon)\right\|_{L^p}^q\frac{d\varepsilon}\varepsilon\right)^{1/q}
\end{equation*}
(with the obvious modification when $q=\infty$).
\begin{enumerate}
\item
If $M(F)<\infty$ and $N(F)<\infty$, then $F$ has a trace $f\in B^1_{p,q}({\mathbb T}^n)$, satisfying
\begin{equation}
\label{kf1}
\left\|f-\fint f\right\|_{L^p}\lesssim M(F)^{\frac 12}
\end{equation}
and
\begin{equation}
\label{kb3}
|f|_{B^1_{p,q,\delta}}\lesssim N(F).
\end{equation}
\item
Conversely, let $f\in B^1_{p,q}({\mathbb T}^n ; {\mathbb S}^1)$. Let $\rho\in C^\infty$ be an even mollifier supported in $\{ |x|\le 1\}$ and set $F(x,\varepsilon):=f\ast\rho_\varepsilon(x)$, $x\in{\mathbb T}^n$, $0<\varepsilon<\delta$. Then
\begin{equation}
\label{kb4}
M(F)+N(F)\lesssim |f|_{B^1_{p,q,\delta}}.
\end{equation}
\end{enumerate}
\end{lemma}
The above result is inspired by the proof of \cite[Section 10.1.1, Theorem 1, p. 512]{mazyanew}.
The arguments we present also lead to a (slightly different) proof of Lemma \ref{ab1}.
We start by establishing some preliminary estimates.
We call $H\in{\mathbb R}^n\times{\mathbb R}$ \enquote{pure} if $H$ is either horizontal, or vertical, i.e., either $H\in{\mathbb R}^n\times\{0\}$ or $H\in \{0\}\times{\mathbb R}$. For further use, let us note the following fact, valid for $X\in V_\delta$ and $H\in{\mathbb R}^{n+1}$:
\begin{equation}
\label{ja2}H\text{ pure}\implies|D^2F(X)\cdot(H,H)|\lesssim |D^2_\#F(X)||H|^2.
\end{equation}
\begin{lemma}
\label{jc1}
Let $X, H$ be such that $[X, X+2H]\subset \overline{V_\delta}$. Let $F\in C^2(\overline{V_\delta})$. Then
\begin{equation}
\label{jc2}
|\Delta_H^2F(X)|\le \int_0^2 \tau |D^2F(X+\tau H)\cdot(H,H)|\, d\tau.
\end{equation}
In particular, if $H$ is pure and we write $H=|H|K$, then
\begin{equation}
\label{jc3}
|\Delta_H^2F(X)|\lesssim \int_0^{2|H|} t |D^2_\# F(X+tK)|\, dt.
\end{equation}
\end{lemma}
\begin{proof}
Set
\begin{equation*}
G(s):=F(X+(1-s)H)+F(X+(1+s)H), \ s\in [0,1],
\end{equation*}
so that $G\in C^2$ and in addition we have
\begin{equation}
\label{ja3}
G'(0)=0,\ G''(s)=[D^2F(X+(1-s)H)+D^2F(X+(1+s)H)]\cdot(H,H),
\end{equation}
and
\begin{equation}
\label{ja4}
\int_0^1(1-s)G''(s)\, ds=G(1)-G(0)-G'(0)=\Delta_H^2F(X).
\end{equation}
Estimate \eqref{jc2} is a consequence of \eqref{ja3} and \eqref{ja4} (using the changes of variable $\tau:=1\pm s$). In the special case where $H$ is pure, we rely on \eqref{ja2} and \eqref{jc2} and obtain \eqref{jc3} via the change of variable $t:=\tau|H|$.
\end{proof}
If we combine \eqref{jc3} (applied first with $H=(h,0)$, $h\in{\mathbb R}^n$, next with $H=(0,t)$, $t\in [0,\delta/2]$) with Minkowski's inequality, we obtain the following two consequences\footnote{ In \eqref{jb1}, we let $\Delta_h^2F(\cdot,\varepsilon):=F(\cdot+2h,\varepsilon)-2F(\cdot+h,\varepsilon)+F(\cdot,\varepsilon)$.}
\begin{equation}
\label{jb1}
[h\in{\mathbb R}^n,\ 0\le\varepsilon\le\delta]\implies \|\Delta_h^2F(\cdot,\varepsilon)\|_{L^p}\lesssim |h|^2\|D^2_\#F(\cdot,\varepsilon)\|_{L^p},
\end{equation}
and\footnote{ With the slight abuse of notation $\Delta_{te_{n+1}}^2F(\cdot,\varepsilon):=F(\cdot,\varepsilon+2t)-2F(\cdot,\varepsilon+t)+F(\cdot,\varepsilon)$.}
\begin{equation}
\label{jb2}
\begin{aligned}
[t, \varepsilon\ge 0,\ \varepsilon+2t\le\delta]
\implies \|\Delta_{te_{n+1}}^2F(\cdot,\varepsilon)\|_{L^p}&\lesssim \int_0^{2t} r \|D^2_\#F(\cdot,\varepsilon+r)\|_{L^p}\, dr.
\end{aligned}
\end{equation}
\begin{proof}[Proof of Lemma \ref{kb2}]
We start by proving \eqref{kf1}. By Lemma \ref{ab1} (applied with $s=1/2$ and with $2p$ (respectively $2q$) instead of $p$ (respectively $q$)), $F$ has, on ${\mathbb T}^n$, a trace $\operatorname{tr} F\in B^{1/2}_{2p,2q}$. By Lemma \ref{ab1}, item $1$, and Lemma \ref{ad3}, we have
\begin{equation*}
\left\|\operatorname{tr} F-\fint \operatorname{tr} F\right\|_{L^p}\lesssim\left\|\operatorname{tr} F-\fint \operatorname{tr} F\right\|_{L^{2p}}\lesssim M(F)^{1/2}
\end{equation*}
i.e., \eqref{kf1} holds.
We next establish \eqref{kb3}. Arguing
as at the beginning of the proof of Lemma \ref{ab1}, one concludes that it suffices to prove \eqref{kb3} when $F\in C^\infty(\overline{V_\delta})$.
So let us consider some $F\in C^\infty(\overline{V_\delta})$. We set $f(x)=F(x,0)$, $\forall\, x\in{\mathbb T}^n$.
Then \eqref{kb3} is equivalent to
\begin{equation}
\label{kf2}
|f|_{B^1_{p,q,\delta}}\lesssim N(F).
\end{equation}
We treat only the case where $q<\infty$; the case where $q=\infty$ is slightly simpler and is left to the reader.
The starting point is the following identity, valid when $|h|\le\delta$ and with $t:=|h|$:
\begin{equation}
\label{jd1}
\begin{aligned}
\Delta_h^2f=&\Delta_{te_{n+1}/2}^2F(\cdot+2h,0)-2\Delta_{te_{n+1}/2}^2F(\cdot+h,0)+\Delta_{te_{n+1}/2}^2F(\cdot,0)\\
&+2\Delta_h^2F(\cdot, t/2)-\Delta_h^2F(\cdot, t).
\end{aligned}
\end{equation}
By \eqref{jb1}, \eqref{jb2} and \eqref{jd1}, we find that
\begin{equation}
\label{jd2}
\begin{aligned}
\|\Delta_h^2f\|_{L^p}\lesssim & \int_0^{|h|}r\|D^2_\#F(\cdot, r)\|_{L^p}\, dr+|h|^2\|D^2_\#F(\cdot, |h|/2)\|_{L^p}\\
&+|h|^2\|D^2_\#F(\cdot, |h|)\|_{L^p}.
\end{aligned}
\end{equation}
Finally, \eqref{jd2} combined with Hardy's inequality \eqref{e04269} (applied to the integral $\int_0^\delta$ and with $G'(r):=r\|D^2_\#F(\cdot, r)\|_{L^p}$ and $\rho:=q+1$) yields
\begin{equation}
\label{kf6}
\begin{aligned}
|f|_{B^1_{p,q,\delta}}^q&\lesssim \int_{|h|\le\delta}\frac 1{|h|^q}\left(\int_0^{|h|}r \left\|
D^2_\#F(\cdot, r)\right\|_{L^p}\, dr\right)^q\, \frac{dh}{|h|^n}+[N(F)]^q\\
&\lesssim [N(F)]^q.
\end{aligned}
\end{equation}
This implies \eqref{kf2} and completes the proof of item 1.
We now turn to item 2. We claim that
\begin{equation}
\label{kg1}
|f|_{B^{1/2}_{2p,2q,\delta}}\lesssim |f|_{B^1_{p,q,\delta}}^{1/2}.
\end{equation}
Indeed, it suffices to note the fact that
$ |\Delta_h^2f|^{2p}\lesssim |\Delta_h^2f|^p$ (since $|f|=1$).
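In detail (a sketch, with $|f|_{B^{1/2}_{2p,2q,\delta}}$ defined via second differences, as in \eqref{kd4}): since $|f|=1$, we have $|\Delta_h^2f|\le 4$, and therefore

```latex
\|\Delta_h^2f\|_{L^{2p}}^{2p}
  =\int|\Delta_h^2f|^{2p}
  \lesssim\int|\Delta_h^2f|^{p}
  =\|\Delta_h^2f\|_{L^{p}}^{p},
  \quad\text{i.e.,}\quad
  \|\Delta_h^2f\|_{L^{2p}}^{2}\lesssim\|\Delta_h^2f\|_{L^{p}},
% whence, raising to the power q and integrating in h:
|f|_{B^{1/2}_{2p,2q,\delta}}^{2q}
  =\int_{|h|\le\delta}|h|^{-q}\|\Delta_h^2f\|_{L^{2p}}^{2q}\,\frac{dh}{|h|^n}
  \lesssim\int_{|h|\le\delta}|h|^{-q}\|\Delta_h^2f\|_{L^{p}}^{q}\,\frac{dh}{|h|^n}
  =|f|_{B^1_{p,q,\delta}}^{q}.
```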
By combining \eqref{kg1} with
Lemma \ref{ab1}, we find that
\begin{equation}
\label{kg2}
M(F)=\left(\int_0^{\delta}\varepsilon^{q}\|(\nabla F)(\cdot,\varepsilon)\|_{L^{2p}}^{2q}\, \frac{d\varepsilon}\varepsilon\right)^{1/q}\lesssim |f|_{B^1_{p,q,\delta}}.
\end{equation}
Thus, in order to complete the proof of \eqref{kb4}, it suffices to combine \eqref{kg2} with the following estimate
\begin{equation}
\label{kg3}
N(F)
\lesssim |f|_{B^1_{p,q,\delta}},
\end{equation}
that we now establish. The key argument for proving \eqref{kg3} is the following second order analog of \eqref{ch2}:
\begin{equation}
\label{ki1}
|D^2_\#F(x,\varepsilon)|\lesssim \frac 1{\varepsilon^{n+2}}\int_{|h|\le\varepsilon}|\Delta_h^2f(x-h)|\, dh.
\end{equation}
The proof of \eqref{ki1} appears in \cite[p. 514]{mazyanew}. For the sake of completeness, we reproduce the argument below. First,
differentiating the expression defining $F$,
we have
\begin{equation}
\label{ki2}
\partial_j\partial_kF(x,\varepsilon)=\frac 1{\varepsilon^2}f\ast(\partial_j\partial_k\rho)_\varepsilon,\ \forall\, j,\, k\in\llbracket 1,n\rrbracket.
\end{equation}
Using \eqref{ki2} and the fact that $\partial_j\partial_k\rho$ is even and has zero average, we obtain the identity
\begin{equation*}
\partial_j\partial_kF(x,\varepsilon)=\frac 1{2\varepsilon^{n+2}}\int_{|h|\le\varepsilon}\partial_j\partial_k\rho(h/\varepsilon)\Delta_h^2f(x-h)\, dh,
\end{equation*}
and thus \eqref{ki1} holds for the derivatives $\partial_j\partial_kF$, with $j,\, k\in\llbracket 1,n\rrbracket$.
We next note the identity
\begin{equation}
\label{ki4}
F(x,\varepsilon)=\frac 1{2\varepsilon^n}\int \rho(h/\varepsilon)\Delta_h^2f(x-h)\, dh+f(x),
\end{equation}
which follows from the fact that $\rho$ is even.
By differentiating twice \eqref{ki4} with respect to $\varepsilon$, we obtain that \eqref{ki1} holds when $j=k=n+1$. The proof of \eqref{ki1} is complete.
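For completeness, the differentiation of \eqref{ki4} in $\varepsilon$ can be made explicit (a sketch). Each $\partial_\varepsilon$ applied to $\varepsilon^{-n-j}\psi(h/\varepsilon)$, with $\psi$ smooth and supported in $\{|z|\le 1\}$, produces a function of the same form with an extra factor $\varepsilon^{-1}$:

```latex
\partial_\varepsilon\left[\varepsilon^{-n-j}\psi(h/\varepsilon)\right]
  =\varepsilon^{-n-j-1}\widetilde\psi(h/\varepsilon),
  \quad\text{where }\widetilde\psi(z):=-(n+j)\psi(z)-z\cdot\nabla\psi(z),
% Starting from \psi=\rho, j=0, and applying this twice to \eqref{ki4}
% (the term f(x) disappears), we obtain, with \widetilde{\widetilde\psi}
% smooth and supported in \{|z|\le 1\}:
|\partial_{n+1}\partial_{n+1}F(x,\varepsilon)|
  =\frac 1{2\varepsilon^{n+2}}
    \left|\int_{|h|\le\varepsilon}\widetilde{\widetilde\psi}(h/\varepsilon)\Delta_h^2f(x-h)\, dh\right|
  \lesssim\frac 1{\varepsilon^{n+2}}\int_{|h|\le\varepsilon}|\Delta_h^2f(x-h)|\, dh.
```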
Using \eqref{ki1} and Minkowski's inequality, we obtain
\begin{equation}
\label{mi1}
\|D^2_\#F(\cdot,\varepsilon)\|_{L^p}\lesssim \frac 1{\varepsilon^{n+2}}\int_{|h|\le\varepsilon}\|\Delta_h^2f\|_{L^p}\, dh,
\end{equation}
which is a second order analog of \eqref{ci1}. Once \eqref{mi1} is obtained, we repeat the calculation leading to
\eqref{mj1} and obtain \eqref{kg3}. The details are left to the reader.
The proof of Lemma \ref{kb2} is complete.
\end{proof}
\begin{remark}
\label{av1}
One may put Lemmas \ref{ab1} and \ref{kb2} in the perspective of the theory of weighted Sobolev spaces. Let us start by recalling one of the striking achievements of this theory. As is well known, we have $\operatorname{tr} W^{1,1}({\mathbb R}^n_+)=L^1({\mathbb R}^{n-1})$, and, when $n\ge 2$, the trace operator has no linear continuous right inverse $T:L^1({\mathbb R}^{n-1})\to W^{1,1}({\mathbb R}^n_+)$ \cite{gagliardo}, \cite{peetre}. The expected analogs of these facts for $W^{2,1}({\mathbb R}^n_+)$ are both wrong. More specifically, we have $\operatorname{tr} W^{2,1}({\mathbb R}^n_+)=B^1_{1,1}({\mathbb R}^{n-1})$ (which is a strict subspace of $W^{1,1}({\mathbb R}^{n-1})$), and the trace operator has a linear continuous right inverse from $B^1_{1,1}({\mathbb R}^{n-1})$ into $W^{2,1}({\mathbb R}^n_+)$. These results are special cases of the trace theory for weighted Sobolev spaces developed by Uspenski\u\i {} \cite{uspenskii}. For a modern treatment of this theory, see e.g. \cite{tracesoldnew}.
\end{remark}
\subsection{Product estimates}
${}$
Lemma \ref{at3} below is a variant of \cite[Lemma D.2]{lss}. Here, $\Omega$ is either a smooth bounded domain, or $(0, 1)^n$, or ${\mathbb T}^n$.
\begin{lemma}
\label{at3} Let $s>1$, $1\le p<\infty$ and $1\le q\le \infty$. If $u, v\in B^s_{p,q}\cap L^\infty(\Omega)$, then $u\nabla v\in B^{s-1}_{p,q}$.
\end{lemma}
\begin{proof}
After extension to ${\mathbb R}^n$ and cutoff,
we may assume that $u, v\in B^s_{p,q}\cap L^\infty({\mathbb R}^n)$. It thus suffices to prove that $u, v\in B^s_{p,q}\cap L^\infty({\mathbb R}^n)\implies u\nabla v\in B^{s-1}_{p,q}({\mathbb R}^n)$.
In order to prove the above, we argue as follows. Let $u=\sum u_j$ and $v=\sum v_j$ be the Littlewood-Paley decompositions of $u$ and $v$. Set
\begin{equation*}
f^j:=\sum_{k\le j}u_k\nabla v_j+\sum_{k<j}u_j\nabla v_k.
\end{equation*}
Since $\operatorname{supp}{\mathcal F} (u_k\nabla v_j)\subset B(0, 2^{\max\{ k, j\}+2})$, we find that $u\nabla v=\sum f^j$ is a Nikolski\u\i {} decomposition of $u\nabla v$; see Section \ref{mm8}. Assume e.g. that $q<\infty$. In view of Proposition \ref{mm9}, the conclusion of Lemma \ref{at3} follows if we prove that
\begin{equation}
\label{mn1}
\sum 2^{(s-1)jq}\|f^j\|_{L^p}^q<\infty.
\end{equation}
In order to prove \eqref{mn1}, we rely on the elementary estimates \cite[Lemma 2.1.1, p. 16]{chemin}, \cite[formulas (D.8), (D.9), p. 71]{lss}
\begin{equation}
\label{mn2}
\left\|\sum_{k\le j}u_k\right\|_{L^\infty}\lesssim \|u\|_{L^\infty}, \quad \forall\, j\ge 0,
\end{equation}
\begin{equation}
\label{mn3}
\left\| \sum_{k< j}\nabla v_k\right\|_{L^\infty}\lesssim 2^j\|v\|_{L^\infty}, \quad \forall\, j\ge 0,
\end{equation}
and
\begin{equation}
\label{mn4}
\|\nabla v_j\|_{L^p}\lesssim 2^j\|v_j\|_{L^p},\quad \forall\, j\ge 0.
\end{equation}
By combining \eqref{mn2}-\eqref{mn4}, we obtain
\begin{equation*}
\begin{aligned}
\sum 2^{(s-1)jq}\|f^j\|_{L^p}^q&\lesssim \sum 2^{(s-1)jq}\left(\left\|\sum_{k\le j}u_k\right\|_{L^\infty}^q\|\nabla v_j\|_{L^p}^q+\left\|\sum_{k<j}\nabla v_k\right\|_{L^\infty}^q\| u_j\|_{L^p}^q\right)\\
&\lesssim \|u\|_{L^\infty}^q\sum 2^{sjq}\|v_j\|_{L^p}^q+\|v\|_{L^\infty}^q\sum 2^{sjq}\|u_j\|_{L^p}^q
\\
&\lesssim \|u\|_{L^\infty}^q\|v\|_{B^s_{p,q}}^q+\|v\|_{L^\infty}^q\|u\|_{B^s_{p,q}}^q,
\end{aligned}
\end{equation*}
and thus \eqref{mn1} holds.
\end{proof}
\subsection{Superposition operators}
${}$
In this section, we examine the mapping properties of the operator
\begin{equation*}
T_\Phi,\ \psi\xmapsto{T_\Phi} \Phi\circ\psi.
\end{equation*}
We work in $\Omega$, which is either a smooth bounded domain, or $(0,1)^n$, or ${\mathbb T}^n$.
The next result is classical and straightforward; see e.g. \cite[Section 5.3.6, Theorem 1]{runstsickel}.
\begin{lemma} \label{eipsi}
Let $0<s<1$, $1\le p<\infty$, and $1\le q\le \infty$. Let $\Phi:{\mathbb R}^k\to{\mathbb R}^l$ be a Lipschitz function. Then $T_\Phi$ maps $B^{s}_{p,q}(\Omega ; {\mathbb R}^k)$ into $B^{s}_{p,q}(\Omega ; {\mathbb R}^l)$.
Special case: $\psi\mapsto e^{{\imath} \psi}$ maps $B^s_{p,q}(\Omega ; {\mathbb R})$ into $B^s_{p,q}(\Omega ; {\mathbb S}^1)$.
In addition, when $q<\infty$, $T_\Phi$ is continuous.
\end{lemma}
For the next result, see \cite[Section 5.3.4, Theorem 2, p. 325]{runstsickel}.
\begin{lemma}
\label{ka2}
Let $s>0$, $1\le p<\infty$ and $1\le q\le\infty$. Let $\Phi\in C^\infty({\mathbb R}^k ; {\mathbb R}^l)$. Then $T_\Phi$ maps $(B^{s}_{p,q}\cap L^\infty)(\Omega ; {\mathbb R}^k)$ into $(B^{s}_{p,q}\cap L^\infty)(\Omega ; {\mathbb R}^l)$.
Special case: $\psi\mapsto e^{{\imath} \psi}$ maps $(B^{s}_{p,q}\cap L^\infty)(\Omega ; {\mathbb R})$ into $(B^{s}_{p,q}\cap L^\infty)(\Omega ; {\mathbb S}^1)$.
\end{lemma}
\subsection{Integer valued functions}
${}$
The next result is a cousin of \cite[Appendix B]{lss},\footnote{ The context there is that of Sobolev spaces.} but the argument in \cite{lss} does not seem to apply in our situation. Lemma \ref{Eunicite} can be obtained from the results in \cite{bbmuni}, but we present below a simpler direct argument.
\begin{lemma}\label{Eunicite}Let $s>0$, $1\le p<\infty$ and $1\le q<\infty$ be such that $sp\ge 1$. Then the functions in $B^{s}_{p,q}(\Omega ;\mathbb{Z})$ are constant.
Same result when $s>0$, $1\le p<\infty$, $q=\infty$ and $sp>1$.
The same conclusion holds for functions in $\sum_{j=1}^k B^{s_j}_{p_j,q_j}(\Omega ; \mathbb{Z})$, provided we have for all $j\in\llbracket 1,k\rrbracket$: either $s_jp_j=1$ and $1\le q_j<\infty$, or $s_jp_j>1$ and $1\le q_j\le\infty$.
\end{lemma}
\begin{proof}
The case where $n=1$ is simple. Indeed, by Lemma \ref{B-VMO} we have $B^{s}_{p,q}\hookrightarrow \text{\rm VMO}$ (and similarly $\sum_{j=1}^k B^{s_j}_{p_j,q_j}\hookrightarrow \text{\rm VMO}$). The conclusion follows from the fact that $\text{\rm VMO}((0,1) ;{\mathbb Z})$ functions are constant \cite[Step 5, p. 229]{brezisnirenberg1}.
We next turn to the general case. Let $f=\sum_{j=1}^kf_j$, with $f_j\in B^{s_j}_{p_j,q_j}(\Omega ; {\mathbb Z})$, $\forall\, j\in\llbracket 1, k\rrbracket$. In view of the conclusion, we may assume that $\Omega =(0,1)^n$. By the Sobolev embeddings, we may assume that for all $j$ we have $s_jp_j=1$ (and thus either $1<p_j<\infty$ and $s_j=1/p_j$, or $p_j=1$ and $s_j=1$) and $1\le q_j<\infty$. Let, as in Lemma \ref{ad1}, $A\subset (0,1)^{n-1}$ be a set of full measure such that \eqref{cf1} holds with $M=2$. The proof of the lemma relies on the following key implication:
\begin{equation}
\label{cf2}
[x_1+\cdots+x_k\in{\mathbb Z},\ 1\le p_1,\ldots, p_k<\infty]\implies |x_1+\cdots+x_k|\lesssim |x_1|^{p_1}+\cdots+|x_k|^{p_k}.
\end{equation}
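A proof sketch of \eqref{cf2}, with implicit constant depending on $k$ and on $p_1,\ldots,p_k$:

```latex
% If x_1+\cdots+x_k=0, there is nothing to prove. Otherwise, |x_1+\cdots+x_k|\ge 1,
% so |x_j|\ge 1/k for some j, whence
\sum_{i}|x_i|^{p_i}\ge|x_j|^{p_j}\ge k^{-p_j}\gtrsim 1.
% On the other hand, |x_i|\le 1+|x_i|^{p_i} for every i
% (consider separately the cases |x_i|\le 1 and |x_i|>1), so
|x_1+\cdots+x_k|\le\sum_i|x_i|\le k+\sum_i|x_i|^{p_i}\lesssim\sum_i|x_i|^{p_i}.
```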
This leads to the following consequence: if $g:=g_1+\cdots+g_k$ is integer-valued, then
\begin{equation}
\label{aaa1}
\|\Delta_h^2g\|_{L^1}\lesssim \|\Delta_h^2g_1\|_{L^{p_1}}^{p_1}+\cdots+\|\Delta_h^2g_k\|_{L^{p_k}}^{p_k}.
\end{equation}
By combining \eqref{cf1} with \eqref{aaa1},
we find that
\begin{equation}
\label{cf3}
\lim_{l\to\infty}\frac{\left\|\Delta_{t_le_n}^2f(x',\cdot)\right\|_{L^{1}((0,1))}}{t_l}=0,\quad\forall\, x'\in A,\text{ for some sequence }t_l\to 0.
\end{equation}
By Lemma \ref{cf4} below, we find that $f(x',\cdot)$ is constant, for every $x'\in A$. By a permutation of the coordinates, we find that
the function
\begin{equation}
\label{aa2}
\text{$t\mapsto f(x_1,...,x_{i-1},t, x_{i+1},...,x_n)$ is constant, $\forall\, i\in \llbracket 1, n\rrbracket$, a.e. $\widehat x_i \in (0,1)^{n-1}$;}
\end{equation}
here, $\widehat x_i:=(x_1,...,x_{i-1},x_{i+1},...,x_n) \in (0,1)^{n-1}$.
We next invoke the fact that every measurable function satisfying \eqref{aa2} is constant \cite[Lemma 2]{blmn}.
\end{proof}
\begin{lemma}
\label{cf4}
Let $g\in L^1((0,1) ; {\mathbb Z})$ be such that, for some sequence $t_l\to 0$, we have
\begin{equation}
\label{cf5}
\lim_{l\to\infty}\frac{\left\|\Delta_{t_l}^2g\right\|_{L^{1}((0,1))}}{t_l}=0.
\end{equation}
Then $g$ is constant.
\end{lemma}
\begin{proof}
In order to explain the main idea, let us first assume that $g=\ensuremath{\mathonebb{1}}_B$ for some $B\subset (0,1)$. Let $h\in (0,1)$. If $x\in B$ and $x+2h\not\in B$, then $\Delta_h^2g(x)$ is odd, and thus $|\Delta_h^2g(x)|\ge 1$. The same holds if $x\not\in B$ and $x+2h\in B$. On the other hand, we have $|\Delta_{2h}g(x)|\le 1$, with equality only when either $x\in B$ and $x+2h\not\in B$, or $x\not\in B$ and $x+2h\in B$. By the preceding, we obtain the inequality
\begin{equation}
\label{cf6}|\Delta^2_hg(x)|\ge |\Delta_{2h}g(x)|,\quad\forall\, x,\, \forall\, h.
\end{equation}
Using \eqref{cf5} and \eqref{cf6}, we obtain
\begin{equation}
\label{cf7}
g'=\lim_{l\to\infty}\frac{\Delta_{2t_l}g}{2t_l}=0.\footnotemark
\end{equation}
\footnotetext{
In \eqref{cf7}, the first limit is in ${\cal D}'$, the second one in $L^1$.}
Thus either $g=0$, or $g=1$.
We next turn to the general case. Consider some $k\in{\mathbb Z}$ such that the measure of the set $g^{-1}(\{k\})$ is positive. We may assume that $k=0$, and we will prove that $g=0$. For this purpose, we set $B:=g^{-1}(2{\mathbb Z})$, and we let $\overline g:=\ensuremath{\mathonebb{1}}_B$. Arguing as above, we have
$ |\Delta^2_h g(x)|\ge |\Delta_{2h}\overline g(x)|$, $\forall\, x$, $\forall\, h$,
and thus $\overline g'=0$, so that $\overline g$ is constant. Since $g^{-1}(\{0\})\subset B$ has positive measure, we have $\overline g=1$, i.e., $g$ takes only even values. We next consider the integer-valued map $g/2$, which still vanishes on a set of positive measure. By the above, $g/2$ takes only even values, and so on. We find that $g=0$.
\end{proof}
\subsection{Disintegration of the Jacobians}
\label{au1}
${}$
The purpose of this section is to prove and generalize the following result, used in the analysis of Case \ref{Y}.
\begin{lemma}
\label{at6}
Let $s>1$, $1\le p<\infty$, $1\le q\le p$ and $n\ge 3$, and assume that $sp\ge 2$. Let $u\in B^s_{p,q}(\Omega ; {\mathbb S}^1)$ and set $F:=u\wedge\nabla u$.
Then $\operatorname{curl} F=0$.
Same conclusion if $s>1$, $1\le p<\infty$, $1\le q\le \infty$ and $n\ge 2$, and we have $sp>2$.
Same conclusion if $s>1$, $1\le p<\infty$, $1\le q< \infty$ and $n=2$, and we have $sp=2$.
\end{lemma}
In view of the conclusion, we may assume that $\Omega=(0,1)^n$.
Note that in the above we have $n\ge 2$; for $n=1$ there is nothing to prove.
Since the results we present in this section are of independent interest, we go beyond what is actually needed in Case \ref{Y}.
The conclusion of (the generalization of) Lemma \ref{at6} relies on three ingredients. The first one is that it is possible to define, as a distribution, the product $F: =u\wedge\nabla u$ for $u$ in a low regularity Besov space; this goes back to \cite{lddjr} when $n=2$, and the case where $n\ge 3$ is treated in \cite{bousquetmironescu}. The second one is a Fubini (disintegration) type result for the distribution $\operatorname{curl} F$. Again, this result holds even in Besov spaces with lower regularity than the ones in Lemma \ref{at6}; see Lemma \ref{mo2} below. The final ingredient is the fact that when $u\in\text{\rm VMO}((0,1)^2 ; {\mathbb S}^1)$ we have $\operatorname{curl} F=0$; see Lemma \ref{mo3}. Lemma \ref{at6} is obtained by combining Lemmas \ref{mo2} and \ref{mo3} via a dimensional reduction (slicing) based on Lemma \ref{mo7}; a more general result is presented in Lemma \ref{mo4}.
Now let us proceed. First, following \cite{lddjr} and \cite{bousquetmironescu}, we explain how to define the Jacobian $Ju:=1/2\operatorname{curl} F$ of low regularity unimodular maps $u\in W^{1/p,p}((0,1)^n ; {\mathbb S}^1)$, with $1\le p<\infty$.\footnote{ In \cite{lddjr} and \cite{bousquetmironescu}, maps are from ${\mathbb S}^n$ (instead of $(0,1)^n$) into ${\mathbb S}^1$, but this is not relevant for the validity of the results we present here.} Assume first that $n=2$ and that $u$ is smooth. Then, in the distributions sense, we have
\begin{equation}
\label{oa2}
\begin{aligned}
\langle Ju,\zeta\rangle&=\frac 12\int_{(0,1)^2}\operatorname{curl} F\, \zeta=-\frac 12\int_{(0,1)^2}\nabla\zeta\wedge(u\wedge\nabla u)\\&=\frac 12\int_{(0,1)^2}[(u\wedge\partial_1u)\partial_2\zeta-(u\wedge\partial_2u)\partial_1\zeta]\\
&=\frac 12\int_{(0,1)^2}(u_1\nabla u_2\wedge\nabla\zeta-u_2\nabla u_1\wedge\nabla\zeta),\quad\forall\, \zeta\in C^\infty_c((0,1)^2).
\end{aligned}
\end{equation}
In higher dimensions, it is better to identify $Ju$ with the $2$-form (or rather a $2$-current) $Ju\equiv 1/2\, d(u\wedge du)$.\footnote{ We recover the two-dimensional formula \eqref{oa2} via the usual identification of $2$-forms on $(0,1)^2$ with scalar functions (with the help of the Hodge $\ast$-operator).} With this identification and modulo the action of the Hodge $\ast$-operator, $Ju$ acts
either on $(n-2)$-forms, or on $2$-forms. The former point of view is usually adopted, and is expressed by the formula
\begin{equation}
\label{oa3}
\begin{aligned}
\langle Ju,\zeta\rangle&=\frac {(-1)^{n-1}}2\int_{(0,1)^n}d\zeta\wedge(u\wedge\nabla u)\\
&=\frac {(-1)^{n-1}}2\int_{(0,1)^n}d\zeta\wedge(u_1\, du_2-u_2\, du_1),\quad\forall\, \zeta\in C^\infty_c(\Lambda^{n-2}(0,1)^n).
\footnotemark
\end{aligned}
\end{equation}
\footnotetext{ Here, $C^\infty_c(\Lambda^{n-2}(0,1)^n)$ denotes the space of smooth compactly supported $(n-2)$-forms on $(0,1)^n$.}
The starting point in extending the above formula to lower regularity maps $u$ is provided by the identity \eqref{oa4} below; when $u$ is smooth, \eqref{oa4} is obtained by a simple integration by parts.
More specifically, consider any smooth extension $U:(0,1)^n\times [0,\infty)\to{\mathbb C}$, respectively $\varsigma\in C^\infty_c(\Lambda^{n-2}((0,1)^n\times [0,\infty)))$ of $u$, respectively of $\zeta$.\footnote{ We do not claim that $U$ is ${\mathbb S}^1$-valued. When $u$ is not smooth, existence of ${\mathbb S}^1$-valued extensions is a delicate matter \cite{soreview}.} Then we have the identity \cite[Lemma 5.5]{bousquetmironescu}
\begin{equation}
\label{oa4}
\langle Ju,\zeta\rangle=(-1)^{n-1}\int_{(0,1)^n\times (0,\infty)}d\varsigma\wedge dU_1\wedge dU_2.
\end{equation}
For a low regularity $u$ and for a well-chosen $U$, we take the right-hand side of \eqref{oa4} as the definition of $Ju$. More specifically, let $\Phi\in C^\infty({\mathbb R}^2; {\mathbb R}^2)$ be such that $\Phi(z)=z/|z|$ when $|z|\ge 1/2$, and let $v$ be a standard extension of $u$ by averages, i.e., $v(x,\varepsilon)=u\ast\rho_\varepsilon(x)$, $x\in (0,1)^n$, $\varepsilon>0$, with $\rho$ a standard mollifier. Set $U:=\Phi(v)$. With this choice of $U$, the right-hand side of \eqref{oa4} does not depend on $\varsigma$ (once $\zeta$ is fixed) \cite[Lemma 5.4]{bousquetmironescu} and the map $u\mapsto Ju$ is continuous from $W^{1/p,p}((0,1)^n ; {\mathbb S}^1)$ into the set of $2$- (or $(n-2)$-)currents. When $p=1$, continuity is straightforward. For the continuity when $p>1$, see \cite[Theorem 1.1 item 2]{bousquetmironescu}. In addition, when $u$ is sufficiently smooth (for example when $u\in W^{1,1}((0,1)^n ; {\mathbb S}^1)$), $Ju$ coincides\footnote{ Up to the action of the $\ast$ operator.} with $\operatorname{curl} F$ \cite[Theorem 1.1 item 1]{bousquetmironescu}. Finally, we have the estimate \cite[Theorem 1.1 item 3]{bousquetmironescu}
\begin{equation}
\label{oa6}
|\langle Ju,\zeta\rangle|\lesssim |u|_{W^{1/p,p}}^p\|d\zeta\|_{L^\infty},\quad\forall\, \zeta\in C^\infty_c(\Lambda^{n-2}(0,1)^n).
\end{equation}
We are now in position to explain disintegration along two-planes. We use the notation in Section \ref{mo6}. Let $u\in W^{1/p,p}((0,1)^n ; {\mathbb S}^1)$, with $n\ge 3$. Let $\alpha\in I(n-2, n)$. Then for a.e. $x_\alpha\in (0,1)^{n-2}$, the partial map $u_\alpha(x_\alpha)$ belongs to $W^{1/p,p}((0,1)^2 ; {\mathbb S}^1)$ (Lemma \ref{oa1}), and therefore $Ju_\alpha(x_\alpha)$ makes sense and acts on functions.\footnote{ Or rather on $2$-forms, in order to be consistent with our construction in dimension $\ge 3$.} Let now $\zeta\in C^\infty_c(\Lambda^{n-2}(0,1)^n)$. Then we may write
\begin{equation*}
\zeta=\sum_{\alpha\in I(n-2,n)}\zeta^\alpha\, dx^{\alpha}=\sum_{\alpha\in I(n-2,n)}\left(\zeta^\alpha\right)_\alpha(x_{\overline\alpha})\, dx^{\alpha}.
\end{equation*}
Here, $dx^{\alpha}$ is the canonical $(n-2)$-form induced by the coordinates $x_j$, $j\in\alpha$, and $(\zeta^\alpha)_\alpha(x_{\overline\alpha})=\zeta^\alpha(x_\alpha, x_{\overline\alpha})$ belongs to $C^\infty_c((0,1)^2)$ (for fixed $x_\alpha$).
We next note the following formal calculation.
Fix $\alpha\in I(n-2,n)$, and let $\overline\alpha=\{ j, k\}$, with $j<k$. Then
\begin{equation*}
\begin{aligned}
2(-1)^{n-1}\langle Ju,\zeta^\alpha\, dx^\alpha\rangle&=\int_{(0,1)^n}d(\zeta^\alpha\, dx^\alpha)\wedge(u\wedge \nabla u)\\
&=\int_{(0,1)^n}(\partial_j\zeta^\alpha\, dx_j+\partial_k\zeta^\alpha\, dx_k)\wedge dx^\alpha\wedge u\wedge (\partial_j u\, dx_j+\partial_k u\, dx_k)\\
&=\int_{(0,1)^n}(\partial_j\zeta^\alpha\, u\wedge \partial_k u-\partial_k\zeta^\alpha\, u\wedge \partial_j u)\, dx_j\wedge dx^\alpha\wedge dx_k,
\end{aligned}
\end{equation*}
that is,
\begin{equation}
\label{oc2}
\langle Ju,\zeta\rangle=\frac 12\ \sum_{\alpha\in I(n-2,n)}\varepsilon(\alpha)\int_{(0,1)^{n-2}}\langle Ju_\alpha,\left(\zeta^\alpha\right)_\alpha(x_\alpha)\rangle\, dx_\alpha,
\end{equation}
where $\varepsilon(\alpha)\in \{-1, 1\}$ depends on $\alpha$.
When $u\in W^{1,1}((0,1)^n ; {\mathbb S}^1)$, it is easy to see that \eqref{oc2} is true (by Fubini's theorem). The validity of \eqref{oc2} under weaker regularity assumptions is the content of our next result.
\begin{lemma}
\label{mo2}
Let $1\le p<\infty$ and $n\ge 3$. Let $u\in W^{1/p,p}((0,1)^n ; {\mathbb S}^1)$. Then \eqref{oc2} holds.
\end{lemma}
\begin{proof}
The case $p=1$ being clear, we may assume that $1<p<\infty$. We may also assume that $\zeta=\zeta^\alpha\, dx^\alpha$ for some fixed $\alpha\in I(n-2,n)$. A first ingredient of the proof of \eqref{oc2} is the density of $W^{1,1}((0,1)^n ; {\mathbb S}^1)\cap W^{1/p,p}((0,1)^n ; {\mathbb S}^1)$ in $W^{1/p,p}((0,1)^n ; {\mathbb S}^1)$ \cite[Lemma 23]{bbmihes}, \cite[Lemma A.1]{lddjr}. Next, we note that the left-hand side of \eqref{oc2} is continuous with respect to the $W^{1/p,p}$ convergence of unimodular maps \cite[Theorem 1.1 item 2]{bousquetmironescu}. In addition, as we noted, \eqref{oc2} holds when $u\in W^{1,1}((0,1)^n ; {\mathbb S}^1)$. Therefore, it suffices to prove that the right-hand side of \eqref{oc2} is continuous with respect to $W^{1/p,p}$ convergence of ${\mathbb S}^1$-valued maps. This is proved as follows. Let $u_j, u\in W^{1/p,p}((0,1)^n ; {\mathbb S}^1)$ be such that $u_j\to u$ in $W^{1/p,p}$. By a standard argument, since the right-hand side of \eqref{oc2} is uniformly bounded with respect to $j$ by \eqref{oa6}, it suffices to prove that the right-hand side of \eqref{oc2} corresponding to $u_j$ tends to the one corresponding to $u$, possibly along a subsequence.
In turn, convergence up to a subsequence is proved as follows. Recall the following vector-valued version of the \enquote{converse} to the dominated convergence theorem \cite[Theorem 4.9, p. 94]{brezisfa}. If $X$ is a Banach space, $\omega$ a measure space and $f_j\to f$ in $L^p(\omega, X)$, then (possibly along a subsequence) for a.e. $\varpi\in\omega$ we have $f_j(\varpi)\to f(\varpi)$ in $X$, and in addition there exists some $g\in L^p(\omega)$ such that $\|f_j(\varpi)\|_{X}\le g(\varpi)$ for a.e. $\varpi\in \omega$.
Using the above and Lemma \ref{oa1} item 2 (applied with $s=1/p$), we find that, up to a subsequence, we have
\begin{equation}
\label{oc3}
(u_j)_\alpha(x_\alpha)\to u_\alpha(x_\alpha)\text{ in }W^{1/p,p}((0,1)^2 ; {\mathbb S}^1)\text{ for a.e. }x_\alpha\in (0,1)^{n-2},
\end{equation}
and in addition we have, for some $g\in L^p((0,1)^{n-2})$,
\begin{equation}
\label{oc4}
|(u_j)_\alpha(x_\alpha)|_{W^{1/p,p}((0,1)^2)}\le g(x_\alpha)\text{ for a.e. }x_\alpha\in(0,1)^{n-2}.
\end{equation}
The continuity of the right-hand side of \eqref{oc2} (along some subsequence) is obtained by combining \eqref{oc3} and \eqref{oc4} with \eqref{oa6} (applied with $n=2$).\footnote{In order to be complete, we should also check that the right-hand side of \eqref{oc2} is measurable with respect to $x_\alpha$. This is clear when $u\in W^{1,1}((0,1)^n ; {\mathbb S}^1)$. The general case follows by density and \eqref{oc3}.}
\end{proof}
\begin{lemma}
\label{mo3}
Let $1\le p<\infty$. Let $u\in W^{1/p,p}\cap\text{\rm VMO}((0,1)^2 ; {\mathbb S}^1)$. Then $Ju=0$.
\end{lemma}
\begin{proof}
Assume first that in addition we have $u\in C^\infty$. Then $u=e^{\imath\varphi}$ for some $\varphi\in C^\infty$, and thus $Ju=1/2\operatorname{curl} (u\wedge\nabla u)=1/2\operatorname{curl}\nabla\varphi=0$.
We now turn to the general case. Let $F(x,\varepsilon):=u\ast\rho_\varepsilon(x)$, with $\rho$ a standard mollifier. Since $u\in\text{\rm VMO}((0,1)^2 ; {\mathbb S}^1)$, there exists some $\delta>0$ such that $1/2<|F(x,\varepsilon)|\le 1$ when $0<\varepsilon<\delta$ (see \eqref{boundsv} and the discussion in Case \ref{X}). Let $\Phi\in C^\infty({\mathbb R}^2 ; {\mathbb R}^2)$ be such that $\Phi(z):=z/|z|$ when $|z|\ge 1/2$, and define $F_\varepsilon(x):=F(x,\varepsilon)$ and $u_\varepsilon:=\Phi\circ F_\varepsilon$, $\forall\, 0<\varepsilon<\delta$. Then $F_\varepsilon\to u$ in $W^{1/p,p}$ and (by Lemma \ref{eipsi} when $p>1$, respectively by a straightforward argument when $p=1$) we have $u_\varepsilon=\Phi(F_\varepsilon)\to \Phi(u)=u$ in $W^{1/p,p}((0,1)^2 ; {\mathbb S}^1)$ as $\varepsilon\to 0$. Since (by the beginning of the proof) we have $Ju_\varepsilon=0$, we conclude via the continuity of $J$ in $W^{1/p,p}((0,1)^2 ; {\mathbb S}^1)$ \cite[Theorem 1.1 item 2]{bousquetmironescu}.
\end{proof}
We may now state and prove the following generalization of Lemma \ref{at6}.
\begin{lemma}
\label{mo4}
Let $s>0$, $1\le p<\infty$, $1\le q\le p$, $n\ge 3$, and assume that $sp\ge 2$. Let $u\in B^s_{p,q}(\Omega ; {\mathbb S}^1)$. Then $Ju=0$.
Same conclusion if $s>0$, $1\le p<\infty$, $1\le q\le\infty$, $n\ge 2$, and we have $sp>2$.
Same conclusion if $s>0$, $1\le p<\infty$, $1\le q<\infty$, $n= 2$, and we have $sp=2$.
\end{lemma}
\begin{proof}
We may assume that $\Omega=(0,1)^n$.
By the Sobolev embeddings (Lemma \ref{Besovemb}), it suffices to consider the limiting case where:
\begin{enumerate}
\item
$s>0$, $1\le p<\infty$, $1\le q<\infty$, $n=2$, and $sp=2$.
Or
\item
$s>0$, $1\le p<\infty$, $q= p$, $n\ge 3$, and $sp=2$.
\end{enumerate}
In view of Lemmas \ref{Besovemb} and \ref{B-VMO}, the case where $n=2$ is covered by Lemma \ref{mo3}. Assume that $n\ge 3$. Then the desired conclusion is obtained by combining Lemmas \ref{oa1}, \ref{mo7}, \ref{mo2} and \ref{mo3}.
\end{proof}
\begin{remark}
\label{oc1}
Arguments similar to the ones developed in this section lead to the conclusion that the Jacobians of maps $u\in W^{s,p}((0,1)^n ; {\mathbb S}^k)$, defined when $sp\ge k$ \cite{lddjr}, \cite{bousquetmironescu}, disintegrate over $(k+1)$-planes.
When $s=1$ and $p\ge k$, this assertion is implicit in \cite[Proof of Proposition 2.2, pp. 701-704]{isobe2}.
\end{remark}
| {
"timestamp": "2017-06-20T02:10:47",
"yymm": "1705",
"arxiv_id": "1705.04271",
"language": "en",
"url": "https://arxiv.org/abs/1705.04271",
"abstract": "Let $\\Omega$ be a smooth bounded domain in $\\mathbb R^n$ and u be a measurable function on $\\Omega$ such that $|u(x)|=1$ almost everywhere in $\\Omega$. Assume that u belongs to the $B^s_{p,q}(\\Omega)$ Besov space. We investigate whether there exists a real-valued function $\\varphi \\in B^s_{p,q}$ such that $u=e^{i\\varphi}$. This extends the corresponding study in Sobolev spaces due to Bourgain, Brezis and the first author. The analysis of this lifting problem leads us to prove some interesting new properties of Besov spaces, in particular a non restriction property when $q>p$.",
"subjects": "Classical Analysis and ODEs (math.CA); Analysis of PDEs (math.AP)",
"title": "Lifting in Besov Spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.983596967483837,
"lm_q2_score": 0.8152324848629214,
"lm_q1q2_score": 0.8018601999054825
} |
https://arxiv.org/abs/math/0512131 | Unimodality and convexity of f-vectors of polytopes | We consider unimodality and related properties of f-vectors of polytopes in various dimensions. By a result of Kalai (1988), f-vectors of 5-polytopes are unimodal. In higher dimensions much less can be said; we give an overview on current results and present a potentially interesting construction as well as a conjecture arising from this. | \section{Introduction} \label{sec:introduction}
Let $f = (f_0,\ldots,f_{d-1})$ be the $f$-vector of a $d$-polytope.
It is natural to ask whether the $f$-vector necessarily has one (or more) of the following properties:
\begin{itemize}
\item[{\rm (C)}] convexity: $f_k \geq (f_{k-1}+f_{k+1})/2$ for all $k \in \{ 1,\ldots,d-2 \}$
\item[{\rm (L)}] logarithmic convexity: $f_k^2 \geq f_{k-1} f_{k+1}$ for all $k \in \{ 1,\ldots,d-2 \}$
\item[{\rm (U)}] unimodality: $f_0 \leq \ldots \leq f_k \geq \ldots \geq f_{d-1}$ for some $k \in \{ 0,\ldots,d-1 \}$
\item[{\rm (B)}] B\'ar\'any's property: $f_k \geq \min \{ f_0,f_{d-1} \}$ for all $k \in \{ 1,\ldots,d-2 \}$
\end{itemize}
Clearly each property implies the next one: {\rm (C)}\ $\Rightarrow$ {\rm (L)}\ $\Rightarrow$ {\rm (U)}\ $\Rightarrow$ {\rm (B)}.
Unimodality is known to be false in general for $d \geq 8$ and (rather trivially) true for $d \leq 4$.
For simplicial (and therefore also for simple) polytopes of arbitrary dimension a weaker version of unimodality
was proved by Bj\"orner \cite[Section 8.6]{MR96a:52011}.
Similarly, convexity is trivially true up to $d \leq 3$ and for $d=4$ follows easily from
$f_0 \geq 5$ and $f_2 \geq 2 f_3$ together with Euler's equation and duality.
\subsection*{Toric $g$-vectors} \label{sec:toric-g-vector}
To every $d$-polytope $P$ we can assign a $(\lfloor d/2 \rfloor+1)$-dimensional vector
$g(P) = (g_0^d(P), \ldots, g_{\lfloor d/2 \rfloor}^d(P))$, the \emph{toric $g$-vector} of $P$.
Its entries can be calculated via recursion \cite[Section 3.14]{MR98a:05001},
and interpreted geometrically for simplicial polytopes.
It is well known \cite{MR89f:52016} that $g_i^d(P) \geq 0$ for rational polytopes $P$
and only recently Karu \cite{MR2076929} showed that nonnegativity also holds for nonrational polytopes.
The entries of the toric $g$-vector of a polytope $P$ can be rewritten as a
linear combination of entries of the flag vector of $P$. Some special cases which
we will need are $g_0^d (P) = 1$ and $g_1^d (P) = f_0 - (d+1)$ for $d$-polytopes $P$
(note that $1 = f_\emptyset(P)$). See \cite{MR2132764} for a general description.
\subsection*{Convolutions} \label{sec:convolution}
Let $m_1$ and $m_2$ be linear forms on flag vectors of $d_1$-, resp. $d_2$-polytopes.
Then we obtain a linear form $m = m_1 * m_2$ by defining
\begin{displaymath}
m (P) \; := \; \sum_{F \; d_1\text{-face of } P} m_1 (F) \, m_2 (P/F)
\end{displaymath}
for every $(d_1+d_2+1)$-polytope $P$.
Alternatively, the convolution can be described by defining
\begin{displaymath}
f_S * f_T \; := \; f_{S \cup \{d_1\} \cup (T+d_1+1)}
\end{displaymath}
for $S \subseteq \{ 0,\ldots,d_1 \}$, $T \subseteq \{ 0,\ldots,d_2 \}$ (where $M+x := \{ m+x \mid m \in M \})$
and extending linearly \cite[Section 3]{MR90a:52012} \cite[Section 7]{MR2001c:52009}.
We will use this notation, occasionally writing $f_S^d$ to indicate the dimension $d$
of the polytopes the respective flag vector refers to.
\subsection*{{{\bf cd}}-index} \label{sec:cd-index}
Connected with every polytope (in fact with every Eulerian poset) is its {{\bf cd}}-index,
which is a polynomial in the non-commuting variables $\c$ and $\d$. The coefficients of the
{{\bf cd}}-index can again be viewed as linear combinations of flag vector entries \cite[Section 7]{MR2001c:52009}.
Stanley \cite{MR96b:06006} showed that the coefficients of the {{\bf cd}}-index of a polytope are nonnegative,
which again yields inequalities for the flag vector. Further useful results were obtained
by Ehrenborg \cite{MR2132764}.
From there we will adopt the following notation: write $\langle u \mid \Psi(P) \rangle$
for the coefficient of the {{\bf cd}}-monomial $u$ in the {{\bf cd}}-index of the polytope $P$.
Using linearity we can then define the number $\langle p \mid \Psi(P) \rangle$
for any {{\bf cd}}-polynomial $p$.
\bigskip
In some of the following proofs we omit the longer calculations.
For more details see the appendix.
\section{Dimension 5} \label{sec:dimension-5}
\begin{theorem} \label{thm:unimodal5}
Unimodality {\rm (U)}\ holds for $f$-vectors of polytopes of dimension $d \leq 5$.
\end{theorem}
\begin{proof}
Let $P$ be a $5$-polytope and $f(P) = (f_0,f_1,f_2,f_3,f_4)$ its $f$-vector.
Trivially, $5f_0 \leq 2f_1$ and $5f_4 \leq 2f_3$, therefore $f_0 < f_1$ and $f_3 > f_4$.
Kalai \cite{MR90a:52012} showed that $3f_2 \geq 2f_1 + 2f_3$, hence
\begin{displaymath}
f_2 \; \geq \; \frac{2}{3} \, (f_1+f_3) \; > \; \frac{f_1+f_3}{2}
\end{displaymath}
which implies that ``there cannot be a dip'' at $f_2$.
Therefore $f(P)$ is unimodal.
\end{proof}
\begin{theorem}
Convexity {\rm (C)}\ fails to hold for $d \geq 5$, that is,
the $f$-vectors of $d$-polytopes are not convex in general.
\end{theorem}
\begin{proof}
For dimension $5$ the $f$-vector of the cyclic polytope with $n$ vertices is
\begin{displaymath}
f (\cyclic{5}{n}) \; = \; \left( n, \, \tfrac{n(n-1)}{2}, \, 2(n^2-6n+10), \, \tfrac{5(n-3)(n-4)}{2}, \, (n-3)(n-4) \right)
\end{displaymath}
(cf.~\cite[Chapter~8]{MR96a:52011}), which implies
\begin{displaymath}
f_1 \; = \; \frac{n^2-n}{2} \; < \; \frac{2n^2-11n+20}{2} \; = \; \frac{f_0+f_2}{2}
\end{displaymath}
for $n \geq 8$; see Figure \ref{fig:f-c-5-8}.
\begin{figure}[tb]
\centering
\includegraphics{cyclic5-8.f-vector.eps}
\caption{(non-convex) $f$-vector of $\cyclic{5}{8}$} \label{fig:f-c-5-8}
\end{figure}
\par
For $d \geq 6$, cyclic $d$-polytopes are $2$-neighbourly, therefore
$f_1 = {f_0 \choose 2}$ and $f_2 = {f_0 \choose 3}$ for $f_0 \geq d+1$. We conclude that
\begin{displaymath}
f_0 + f_2 - 2f_1 \; = \; \frac{1}{6} \, f_0 (f_0-2)(f_0-7) > 0
\end{displaymath}
for cyclic $d$-polytopes with $f_0 \geq \max\{d+2,8\}$ vertices.
Thus for $d \geq 7$ already the simplex is a counterexample for {\rm (C)}.
\end{proof}
\section{Dimension 6} \label{sec:dimension-6}
Concerning unimodality for $f$-vectors of $6$-polytopes, we have a couple of
trivial facts, such as $f_0 < f_1$ and $f_4 > f_5$. Unimodality would therefore simply follow
from the statement $(*) \; f_1 \leq f_2$ or equivalently from $f_3 \geq f_4$ by duality.
Bj\"orner showed that the latter is true for simplicial polytopes (cf. \cite{MR96a:52011}, Theorem 8.39),
therefore in particular for cyclic polytopes, which seems to indicate that it is true in general.
However, it does not follow from the currently known inequalities -- we only have the following weaker statement.
\begin{proposition} \label{prop:barany6}
Let $f=(f_0,\ldots,f_5)$ be the $f$-vector of a $6$-polytope. Then
\[ f_2 \; \geq \; \frac{2}{3} \, f_1 + 21 . \]
\end{proposition}
\begin{proof}
We claim that the following inequalities hold for $f$:
\begin{eqnarray}
\label{in6-1} f_1 - 3f_0 & \geq & 0 \\
\label{in6-2} f_0 - f_1 + f_2 - 21 & \geq & 0
\end{eqnarray}
The assertion then follows by multiplying \eqref{in6-2} by $3$ and adding \eqref{in6-1}.
\par
Inequality \eqref{in6-1} is trivial, simply stating that every vertex is in at least 6 edges.
For the proof of \eqref{in6-2} we use \cite[Theorem 3.7]{MR2132764}, which implies that
$\langle \c^2\d\c^2 - 19 \c^6 \mid \Psi(P) \rangle \geq 0$. Expressing the {{\bf cd}}-polynomial
$\c^2\d\c^2 - 19 \c^6$ as linear combination of flag vector entries gives
$f_0-f_1+f_2-21$ and therefore yields inequality \eqref{in6-2}.
See the last section for detailed calculations.
\end{proof}
\begin{corollary}
The $f$-vectors of $6$-polytopes satisfy B\'ar\'any's property {\rm (B)}.
\end{corollary}
\begin{proof}
Let $f=(f_0,\ldots,f_5)$ be the $f$-vector of a $6$-polytope.
Clearly, $f_1 \geq 3 f_0 > f_0$, thus by Proposition~\ref{prop:barany6}
\begin{displaymath}
f_2 \; \geq \; \frac{2}{3} \, f_1 + 21 \; \geq \; 2 f_0 + 21 \; > \; f_0 .
\end{displaymath}
Dually, we have $f_3 > f_5$ and $f_4 > f_5$.
\end{proof}
As the desired inequality $(*)$ for unimodality does not follow from the known linear inequalities,
one can find vectors that satisfy all of them, but not $(*)$. One example is the family
\begin{align*}
f^{(\ell)} = ( & f_0 , f_1 , f_2 , f_3 , f_4 ; \\
& f_{02} , f_{03} , f_{04} , f_{13} , f_{14} , f_{24} ; \\
& f_{024} ) \\
\phantom{f^{(\ell)} }= ( & 22 + \ell , 111 + 3\ell , 110 + 2\ell , 35 + 4\ell , 21 + 6\ell ; \\
& 780 + 15\ell , 1340 + 50\ell , 1080 + 51\ell , 2010 + 90\ell , 2160 + 132\ell , 1260 + 114\ell ; \\
& 6480 + 396\ell )
\end{align*}
for $\ell \geq 0$.
The other components of these (potential) flag vectors can be calculated from the Generalized Dehn--Sommer\-ville equations.
In particular, the number of facets is $f_5 = 7+2\ell$. However, it is not at all clear
that there exist polytopes having these as flag vectors.
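Whether or not these vectors are actual flag vectors of polytopes, one can at least verify mechanically that every member of the family satisfies the two inequalities \eqref{in6-1} and \eqref{in6-2} used in the proof of Proposition~\ref{prop:barany6}, while violating $(*)$. A small sketch (the range of $\ell$ checked is arbitrary):

```python
for ell in range(0, 1000):
    f0, f1, f2 = 22 + ell, 111 + 3 * ell, 110 + 2 * ell
    assert f1 - 3 * f0 >= 0        # inequality (1): f_1 - 3 f_0 >= 0
    assert f0 - f1 + f2 - 21 == 0  # inequality (2) even holds with equality
    assert f1 > f2                 # ... yet (*) f_1 <= f_2 fails
```

Note that \eqref{in6-2} is tight along the whole family.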
\section{Dimension 7} \label{sec:dimension-7}
A similar statement to the one in Proposition~\ref{prop:barany6} holds for $7$-polytopes.
Nevertheless, this is not enough to prove even B\'ar\'any's property {\rm (B)}, since we do not yet
have any lower bound on $f_3$.
\begin{proposition} \label{prop:barany7}
Let $f=(f_0,\ldots,f_6)$ be the $f$-vector of a $7$-polytope. Then
\[ f_2 \; \geq \; \frac{5}{7} \, f_1 + 36 . \]
\end{proposition}
\begin{proof}
As before, we consider two valid inequalities for $f$ which together imply the assertion:
\begin{eqnarray}
\label{in7-1} 2f_1 - 7f_0 & \geq & 0 \\
\label{in7-2} f_0 - f_1 + f_2 - 36 & \geq & 0
\end{eqnarray}
Again, \eqref{in7-1} is trivial.
The nonnegativity of $\langle \c^2\d\c^3-34\c^7 \mid \Psi(P) \rangle$ gives inequality \eqref{in7-2};
see the last section.
\end{proof}
Again, one can find vectors satisfying all known linear inequalities, but violating both
$f_3 \geq f_0$ and $f_3 \geq f_6$; take, for instance, the potential flag vector
\begin{align*}
f \; & = \; ( f_0 , f_1 , f_2 , f_3 , f_4 , f_5 ; \\
& \qquad \quad f_{02} , f_{03} , f_{04} , f_{05} , f_{13} , f_{14} , f_{15} , f_{24} , f_{25} , f_{35} ; \\
& \qquad \quad f_{024} , f_{025} , f_{035} , f_{135} ) \\
& = \; ( 134 , 469 , 371 , 70 , 371 , 469 ; \\
& \qquad \quad 2814 , 6580 , 10360 , 8484 , 9870 , 20720 , 21210 , 13790 , 20720 , 9870 ; \\
& \qquad \quad 62160 , 84840 , 84840 , 127260 ) .
\end{align*}
From Euler's equation, we get $f_6 = 134$; nevertheless, it is again open
whether this really is the flag vector of some $7$-polytope.
As it is an open question whether logarithmic convexity holds for $f$-vectors of $7$-polytopes,
one could try to find counterexamples.
Most promising may be connected sums of cyclic polytopes,
since this construction yields counterexamples for unimodality in dimension 8
(see \cite[pp.~274f]{MR96a:52011}).
\begin{definition}
Let $P$ and $Q$ be polytopes of the same dimension.
If $P$ is simplicial and $Q$ simple, then a \emph{connected sum} $P \# Q$ of $P$ and $Q$ is obtained by
cutting one vertex off $Q$ and stacking the result --- with the newly created facet --- onto $P$
(cf.~\cite[p.~274]{MR96a:52011}).
\end{definition}
The effect of this construction on the $f$-vectors of the polytopes involved
can be described as follows.
\begin{proposition} \label{prop:f-consum}
Let $d \geq 3$, and let $P$ be a simplicial and $Q$ a simple $d$-polytope.
Then the $f$-vector of $P\#Q$ is given by
\begin{displaymath}
f_i(P\#Q) \; = \; \left\{ \begin{array}{l@{\quad\text{ if }}l} f_i(P) + f_i(Q) & 1 \leq i \leq d-2 \\ f_i(P) + f_i(Q) - 1 & i = 0 \text{ or } i = d-1 \end{array} \right.
\end{displaymath}
Additionally, the $f$-vector of the connected sum $P\#\polar{P}$
of a polytope $P$ with its dual is symmetric.
\end{proposition}
\begin{proof}
Cutting one vertex $v$ off $Q$ decreases $f_0(Q)$ by $1$ and creates a new facet $F$,
isomorphic to a $(d-1)$-simplex. Therefore, $f_i(Q)$ increases by ${d \choose i+1}$ if $i>0$
and by $d-1$ if $i=0$. Afterwards all faces of both polytopes are again faces of $P\#Q$,
except the facet $F$ in both polytopes (which completely disappears) and the new faces of $F$
in $Q$ (which are identified with their counterparts in $P$).
\par
The $f$-vector of $P\#\polar{P}$ is obviously symmetric, since
\begin{displaymath}
f_i(P\#\polar{P}) \; = \; \left\{ \begin{array}{l@{\quad\text{ if }}l} f_i(P) + f_{d-1-i}(P) & 1 \leq i \leq d-2 \\ f_i(P) + f_{d-1-i}(P) - 1 & i = 0 \text{ or } i = d-1 \end{array} \right.
\end{displaymath}
\end{proof}
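The bookkeeping in Proposition~\ref{prop:f-consum} is easy to encode. The following sketch uses the $f$-vector of $\cyclic{5}{8}$ from Section~\ref{sec:dimension-5} as test data and checks the symmetry claim for $P\#\polar{P}$:

```python
def connected_sum_f(fP, fQ, d):
    # f-vector of P # Q for a simplicial P and a simple Q (Proposition above):
    # add entrywise, subtracting 1 at indices 0 and d-1
    return tuple(fP[i] + fQ[i] - (1 if i in (0, d - 1) else 0)
                 for i in range(d))

fP = (8, 28, 52, 50, 20)       # f(C(5, 8)), a simplicial 5-polytope
fPdual = tuple(reversed(fP))   # duality reverses the f-vector
f = connected_sum_f(fP, fPdual, 5)
assert f == (27, 78, 104, 78, 27)
assert f == tuple(reversed(f))  # symmetric, as claimed
```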
\begin{proposition} \label{prop:logconv7}
For all $n \geq 8$, the $f$-vector of $P_7^n := \cyclic{7}{n} \# \cyclic{7}{n}^\Delta$
is logarithmically convex and
\[ \frac{f_3(P_7^n)^2}{f_2(P_7^n) f_4(P_7^n)} \; \stackrel{n \rightarrow \infty}{\longrightarrow} \; 1 . \]
\end{proposition}
\begin{proof}
The proof is done by straightforward calculation; see the last section for the main steps.
\end{proof}
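The ``straightforward calculation'' can also be carried out in exact rational arithmetic. This sketch combines the $f$-vector formula for $\cyclic{7}{n}$ given in the appendix with the connected-sum rule of Proposition~\ref{prop:f-consum}:

```python
from fractions import Fraction as Fr

def f_cyclic7(n):
    # f-vector of the cyclic 7-polytope C(7, n) (formula in the appendix)
    return (Fr(n), Fr(n * (n - 1), 2), Fr(n * (n - 1) * (n - 2), 6),
            Fr(5 * (n - 4) * (n * n - 8 * n + 21), 6),
            Fr((n - 4) * (3 * n * n - 31 * n + 84), 2),
            Fr(7 * (n - 4) * (n - 5) * (n - 6), 6),
            Fr((n - 4) * (n - 5) * (n - 6), 3))

def f_P7(n):
    # f-vector of C(7, n) # C(7, n)^*: add the reversed copy, minus 1 at the ends
    fP, fQ = f_cyclic7(n), tuple(reversed(f_cyclic7(n)))
    return tuple(fP[i] + fQ[i] - (1 if i in (0, 6) else 0) for i in range(7))

for n in range(8, 200):
    f = f_P7(n)
    # logarithmic convexity (L): f_k^2 >= f_{k-1} f_{k+1}
    assert all(f[k] ** 2 >= f[k - 1] * f[k + 1] for k in range(1, 6))

ratio = lambda n: f_P7(n)[3] ** 2 / (f_P7(n)[2] * f_P7(n)[4])
assert 1 < ratio(10**4) < ratio(100) < ratio(8)  # tends to 1 from above
```

As a sanity check, $f_{P_7^8}=(15,56,112,140,112,56,15)$, since $\cyclic{7}{8}$ is the $7$-simplex.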
So in a sense, the connected sums of cyclic $7$-polytopes are as close as polytopes can get to
logarithmic non-convexity.
\section{Summary} \label{sec:summary}
The results can be summarized as in Table~\ref{tab:summary}.
\begin{table}[bt]
\centering
\begin{tabular}{|c|ccccc|} \hline
Dimension & $\leq 4$ & $5$ & $6$ & $7$ & $\geq 8$ \\ \hline
{\rm (C)} & \ding{52} & \ding{56} & \ding{56} & \ding{56} & \ding{56} \\
{\rm (L)} & \ding{52} & ? & ? & ? & \ding{56} \\
{\rm (U)} & \ding{52} & \ding{52} & ? & ? & \ding{56} \\
{\rm (B)} & \ding{52} & \ding{52} & \ding{52} & ? & ? \\ \hline
\end{tabular}
\caption{Summary of known properties for polytopes --- a \ding{52}, resp.~\ding{56}\/ indicates
that the given property holds, resp.~does not hold for all polytopes of the given dimension.}
\label{tab:summary}
\end{table}
In the light of Proposition~\ref{prop:logconv7}, the following conjecture seems natural.
\begin{conjecture}
{\rm (L)}\ holds for $d$-polytopes of dimension $d \leq 7$.
\end{conjecture}
\begin{appendix}
\section*{Detailed calculations} \label{sec:deta-calc}
\subsection*{Proof of Theorem \ref{thm:unimodal5} \\ {\normalsize \rm (cf.~\cite[Section 7]{MR90a:52012})}} \label{sec:proof-theor-refthm}
Let $P$ be a $5$-polytope.
\begin{eqnarray*}
(g_0^1 * g_1^2 * g_0^0)(P) & = & f_\emptyset^1 * (f_0^2-3) * f_\emptyset^0 \; = \; (f_{12}^4 - 3f_1^4) * f_\emptyset^0 \; = \; f_{124} - 3f_{14} \\
& = & f_{123} - 3 \, (2f_1 - f_{12} + f_{13}) \; = \; -6f_1 + 3f_{02} - f_{13} \\
(g_0^0 * g_1^2 * g_0^1)(P) & = & f_\emptyset^0 * (f_0^2-3) * f_\emptyset^1 \; = \; (f_{01}^3 - 3f_0^3) * f_\emptyset^1 \; = \; f_{013} - 3f_{03} \\
& = & 2f_{13} - 3f_{03} \\
(g_1^2 * g_1^2)(P) & = & (f_0^2 - 3) * (f_0^2 - 3) \; = \; f_{023} - 3f_{02} - 3f_{23} + 9f_2 \\
& = & f_{013} - 3f_{02} - 3 \, (2f_3 - f_{03} + f_{13}) + 9f_2 \\
& = & -f_{13} - 3f_{02} - 6f_3 + 3f_{03} + 9f_2
\end{eqnarray*}
by the rules of convolution and the Generalized Dehn--Sommerville equations \cite{MR86f:52010b}.
Hence we have
\begin{displaymath}
\begin{array}{rrcrcrcrcrcl}
-6 f_1 & & & & + & 3 f_{02} & & & - & f_{13} & \geq & 0 \\
& & & & & & - & 3 f_{03} & + & 2 f_{13} & \geq & 0 \\
& 9 f_2 & - & 6 f_3 & - & 3 f_{02} & + & 3 f_{03} & - & f_{13} & \geq & 0
\end{array}
\end{displaymath}
Adding up all three inequalities yields $ -6 f_1 + 9 f_2 - 6 f_3 \geq 0 $,
that is the assertion $3f_2 \geq 2f_1 + 2f_3$. \qed
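The cancellation in this sum can be double-checked mechanically by adding the coefficient vectors of the three inequalities. A throwaway sketch, with flag-vector entries as dictionary keys:

```python
def add_forms(*forms):
    # sum linear forms given as {flag-entry: coefficient}, dropping zeros
    keys = set().union(*forms)
    return {k: s for k in keys if (s := sum(f.get(k, 0) for f in forms)) != 0}

e1 = {'f1': -6, 'f02': 3, 'f13': -1}                     # first inequality
e2 = {'f03': -3, 'f13': 2}                               # second inequality
e3 = {'f2': 9, 'f3': -6, 'f02': -3, 'f03': 3, 'f13': -1} # third inequality
# the flag-vector terms cancel, leaving -6 f_1 + 9 f_2 - 6 f_3 >= 0
assert add_forms(e1, e2, e3) == {'f1': -6, 'f2': 9, 'f3': -6}
```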
\subsection*{Proof of Inequality~\eqref{in6-2}} \label{sec:proof:prop:barany6}
We express the {{\bf cd}}-word $\c^2\d\c^2$ in terms of the flag vector of the $6$-polytope $P$
by applying \cite[Proposition~7.1]{MR2001c:52009}:
\begin{displaymath}
\langle \c^2\d\c^2 \mid \Psi(P) \rangle \; = \; \sum_{i=0}^2 (-1)^{4-i} k_i \; = \; k_0 - k_1 + k_2 .
\end{displaymath}
For the sparse flag $k$-vector we have
\begin{displaymath}
k_i \; = \; \sum_{T \subseteq \{i\}} (-2)^{1-|T|} f_T \; = \; -2 f_\emptyset + f_i \; = \; f_i - 2
\end{displaymath}
and therefore $\langle \c^2\d\c^2 \mid \Psi(P) \rangle = f_0-f_1+f_2-2$. The trivial {{\bf cd}}-word
$\c^6$ translates into $f_\emptyset = 1$, hence
\begin{displaymath}
\langle \c^2\d\c^2 - 19\c^6 \mid \Psi(P) \rangle \; = \; f_0-f_1+f_2-21 .
\end{displaymath} \qed
\subsection*{Proof of Inequality~\eqref{in7-2}} \label{sec:proof:prop:barany7}
We calculate for the $7$-polytope $P$ exactly as above (the additional $\c$ at the end
has no influence whatsoever on the calculation):
\begin{displaymath}
\langle \c^2\d\c^3 \mid \Psi(P) \rangle \; = \; f_0-f_1+f_2-2 .
\end{displaymath}
Together with $\c^7$, which again represents $f_\emptyset$, we get
\begin{displaymath}
\langle \c^2\d\c^3 - 34\c^7 \mid \Psi(P) \rangle \; = \; f_0-f_1+f_2-36 .
\end{displaymath} \qed
\subsection*{Proof of Proposition~\ref{prop:logconv7}}
The $f$-vector of the cyclic $7$-polytope on $n$ vertices is given by
\begin{align*}
f(\cyclic{7}{n}) \; = \; \big( n & , \, \tfrac{n(n-1)}{2} , \, \tfrac{n(n-1)(n-2)}{6} , \, \tfrac{5(n-4)(n^2-8n+21)}{6} , \\
& \tfrac{(n-4)(3n^2-31n+84)}{2} , \, \tfrac{7(n-4)(n-5)(n-6)}{6} , \, \tfrac{(n-4)(n-5)(n-6)}{3} \big)
\end{align*}
(cf.~\cite[Chapter~8]{MR96a:52011}).
From this we obtain $f(P_7^n)=(f_0(n),\ldots,f_6(n))$ by Proposition~\ref{prop:f-consum}:
\begin{align*}
f_0(n) \; = \; \tfrac{(n-3)(n^2-12n+41)}{3} \quad , & \quad f_1(n) \; = \; \tfrac{7n^3-102n^2+515n-840}{6} , \\
f_2(n) \; = \; \tfrac{5n^3-66n^2+313n-504}{3} \quad , & \quad f_3(n) \; = \; \tfrac{5(n-4)(n^2-8n+21)}{3}
\end{align*}
By symmetry of $f(P_7^n)$, these entries suffice to verify logarithmic convexity.
We get
\begin{align*}
\frac{f_1(n)^2}{f_0(n)f_2(n)} \; & = \; \frac{(7n^3-102n^2+515n-840)^2}{4(n-3)(n^2-12n+41)(5n^3-66n^2+313n-504)} \; > \; 1 \, , \\
\frac{f_2(n)^2}{f_1(n)f_3(n)} \; & = \; \frac{2(5n^3-66n^2+313n-504)^2}{5(n-4)(n^2-8n+21)(7n^3-102n^2+515n-840)} \; > \; 1 \, , \\
\frac{f_3(n)^2}{f_2(n)f_4(n)} \; & = \; \frac{25(n-4)^2(n^2-8n+21)^2}{(5n^3-66n^2+313n-504)^2} \; > \; 1
\end{align*}
for $n \geq 8$. Since the leading coefficients of the polynomials in the numerator and the denominator
of the last fraction are equal,
\begin{displaymath}
\frac{f_3(P_7^n)^2}{f_2(P_7^n) f_4(P_7^n)} \; \stackrel{n \rightarrow \infty}{\longrightarrow} \; 1 .
\end{displaymath} \qed
\end{appendix}
\vfill \bibliographystyle{siam}
| {
"timestamp": "2005-12-06T18:54:42",
"yymm": "0512",
"arxiv_id": "math/0512131",
"language": "en",
"url": "https://arxiv.org/abs/math/0512131",
"abstract": "We consider unimodality and related properties of f-vectors of polytopes in various dimensions. By a result of Kalai (1988), f-vectors of 5-polytopes are unimodal. In higher dimensions much less can be said; we give an overview on current results and present a potentially interesting construction as well as a conjecture arising from this.",
"subjects": "Combinatorics (math.CO)",
"title": "Unimodality and convexity of f-vectors of polytopes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969665221771,
"lm_q2_score": 0.8152324848629214,
"lm_q1q2_score": 0.8018601991215062
} |
https://arxiv.org/abs/2110.03755 | Fast and stable approximation of analytic functions from equispaced samples via polynomial frames | We consider approximating analytic functions on the interval $[-1,1]$ from their values at a set of $m+1$ equispaced nodes. A result of Platte, Trefethen \& Kuijlaars states that fast and stable approximation from equispaced samples is generally impossible. In particular, any method that converges exponentially fast must also be exponentially ill-conditioned. We prove a positive counterpart to this `impossibility' theorem. Our `possibility' theorem shows that there is a well-conditioned method that provides exponential decay of the error down to a finite, but user-controlled tolerance $\epsilon > 0$, which in practice can be chosen close to machine epsilon. The method is known as \textit{polynomial frame} approximation or \textit{polynomial extensions}. It uses algebraic polynomials of degree $n$ on an extended interval $[-\gamma,\gamma]$, $\gamma > 1$, to construct an approximation on $[-1,1]$ via a SVD-regularized least-squares fit. A key step in the proof of our main theorem is a new result on the maximal behaviour of a polynomial of degree $n$ on $[-1,1]$ that is simultaneously bounded by one at a set of $m+1$ equispaced nodes in $[-1,1]$ and $1/\epsilon$ on the extended interval $[-\gamma,\gamma]$. We show that linear oversampling, i.e., $m = c n \log(1/\epsilon) / \sqrt{\gamma^2-1}$, is sufficient for uniform boundedness of any such polynomial on $[-1,1]$. This result aside, we also prove an extended impossibility theorem, which shows that such a possibility theorem (and consequently the method of polynomial frame approximation) is essentially optimal. | \section{Introduction}
In this paper, we consider the problem of approximating an analytic function $f : [-1,1] \rightarrow \bbC$ from its values at $m+1$ equispaced points in $[-1,1]$. Several years ago, Platte, Trefethen \& Kuijlaars \cite{platte2011impossibility} demonstrated that this problem is intrinsically difficult. They proved an `impossibility' theorem which states that any method that offers exponential rates of convergence in $m$ for all functions analytic in a fixed, but arbitrary region of the complex plane must necessarily be exponentially ill-conditioned. Furthermore, the best rate of convergence achievable by a stable method is necessarily subexponential -- specifically, root-exponential in $m$.
This result generalizes what has long been known for polynomial interpolation at equispaced nodes: namely, Runge's phenomenon. Polynomial interpolation is divergent for functions that are not analytic in a sufficiently large complex region (the Runge region). And while it converges exponentially fast for functions that are analytic in this region, its condition number is also exponentially large. Such ill-conditioning means that, when computed in floating point arithmetic, the error of polynomial interpolation eventually increases, even for entire functions, due to the accumulation of round-off error.
Many methods have been proposed to overcome Runge's phenomenon by stably and accurately approximating analytic functions from equispaced nodes (see, e.g., \cite{adcock2016mapped,boyd2009exponentially,platte2011impossibility} and references therein). Several such methods appear to offer fast convergence in practice, which, for some functions at least, appears faster than root-exponential. Yet full mathematical explanations for this phenomenon have hitherto been lacking.
The purpose of this paper is to fully analyze one such method in view of the impossibility theorem. This method is termed \textit{polynomial frame approximation} or \textit{polynomial extensions} \cite{adcock2020approximating}, and is closely related to so-called \textit{Fourier extensions} \cite{matthysen2017function,lyon2012fast,huybrechs2010fourier,bruno2007accurate,boyd2002fourier,pasquetti1996spectral}. It approximates a function $f$ on $[-1,1]$ using orthogonal polynomials on an \textit{extended} interval $[-\gamma,\gamma]$ for some fixed $\gamma > 1$. The approximation is then computed by solving a regularized least-squares problem, with user-controlled regularization parameter $\epsilon > 0$.
Our main contribution is to show that this method offers a positive counterpart to the impossibility theorem of \cite{platte2011impossibility}. We prove a `possibility' theorem, which asserts that the polynomial frame approximation method is well-conditioned and, for all functions that are analytic in a sufficiently large region (related to the parameter $\gamma$), its error decreases exponentially fast down to roughly $\ordu{m^{3/2} \epsilon}$, where $\epsilon$ is the user-determined tolerance. This tolerance may be taken to be on the order of machine epsilon without impacting the conditioning of the method, thus rendering the approach suitable for practical purposes. But it may also be taken larger (with benefits in terms of stability) if fewer digits of accuracy are required.
While the impossibility theorem dictates that exponential convergence to zero cannot be achieved by a well-conditioned method, we show that exponential decrease of the error down to an arbitrary tolerance, multiplied by the slowly growing factor proportional to $m^{3/2}$, is indeed possible. Additionally, we establish an `extended' impossibility theorem, which relates conditioning and error decay for approximation methods that achieve only a finite accuracy $\epsilon$. This theorem explains how the method we consider is essentially optimal, in a suitable sense.
Our main result hinges on a new bound for the maximal behaviour of polynomials that are bounded simultaneously at a set of $m+1$ equispaced nodes on $[-1,1]$ and on the extended interval $[-\gamma,\gamma]$. The impossibility theorem of \cite{platte2011impossibility} uses a classical result of Coppersmith and Rivlin \cite{coppersmith1992growth}, which states that a polynomial of degree $n$ that is bounded by one at $m \geq n$ equispaced nodes can grow as large as $\alpha^{n^2/m}$ on $[-1,1]$ outside of these nodes, where $\alpha > 1$ is a constant. In particular, $m = c n^2$ equispaced points are both sufficient and necessary to ensure boundedness of such a polynomial on $[-1,1]$. We consider a nontrivial variation of this setting, where the polynomial is also assumed to be no larger than $1/\epsilon$ on $[-\gamma,\gamma]$. Our key result shows that $m = c n \log(1/\epsilon) / \sqrt{\gamma^2-1}$ equispaced points suffice for ensuring boundedness of any such polynomial on $[-1,1]$. While we use this bound to analyze polynomial frames, we expect it to be of independent interest from a pure approximation-theoretic perspective.
More broadly, our work has some connections with optimal recovery. We note that optimal recovery of functions in general, and from equispaced samples in particular, has been actively studied in approximation theory, with the emphasis on optimal approximation methods, without taking stability issues into consideration. See, for instance, \cite{temlyakov1993approximate,michelli1985lecture,temlyakov2018multivariate}.
The outline of the remainder of this paper is as follows. In \S \ref{s:overview} we present an overview of the main components of the paper. We introduce notation, describe the impossibility theorem of \cite{platte2011impossibility} and then present polynomial frame approximation. We next state our main results, then conclude with a discussion of related work. In \S \ref{s:acc-stab-poly-frame} we analyze the accuracy and conditioning of polynomial frame approximation. Then in \S \ref{s:maximal-behaviour-proof} we give the proof of the aforementioned result on the maximal behaviour of polynomials bounded at equispaced nodes and on the extended interval $[-\gamma,\gamma]$. With this in hand, in \S \ref{s:main-thm-proof} we give the proof of the main results: the possibility theorem for polynomial frame approximation and the extended impossibility theorem. We then present several numerical examples in \S \ref{s:numerical}, before offering some concluding remarks in \S \ref{s:conclusions}.
\section{Overview of the paper}\label{s:overview}
\subsection{Notation}
Throughout, $\bbP_n$ denotes the space of polynomials of degree at most $n$.
For $m \geq 1$, we let $\{ x_i \}^{m}_{i=0}$ be a set of $m+1$ equispaced points in $[-1,1]$ including endpoints, i.e.\ $x_i = -1 + 2 i /m$. Given an interval $I$, we let $C(I)$ be the space of continuous functions on $I$ and
\bes{
\nm{g}_{I,\infty} = \sup_{x \in I} |g(x)|,\quad g \in C(I),
}
be the uniform norm over $I$.
We also let
\bes{
\ip{f}{g}_{I,2} = \int_{I} f(x) \overline{g(x)} \D x,\quad f , g \in C(I),
}
be the usual $L^2$-inner product over $I$ and $\nm{\cdot}_{I,2} = \sqrt{\ip{\cdot}{\cdot}_{I,2}}$ be the corresponding $L^2$-norm.
Next, we define several discrete semi-norms and semi-inner products. We define
\bes{
\nm{g}_{m,\infty} = \max_{i = 0,\ldots,m} |g(x_i) |,\quad g \in C([-1,1]),
}
where $\{ x_i \}^{m}_{i=0}$ is the equispaced grid on $[-1,1]$, and
\bes{
\ip{f}{g}_{m,2} = \frac{2}{m+1} \sum^{m}_{i=0} f(x_i) \overline{g(x_i)},\quad f , g \in C([-1,1]).
}
We also let $\nm{\cdot}_{m,2} = \sqrt{\ip{\cdot}{\cdot}_{m,2}}$ be the corresponding discrete semi-norm. Note that $\ip{\cdot}{\cdot}_{m,2}$ is an inner product on $\bbP_n$ for any $m \geq n$, since a polynomial of degree $n$ cannot vanish at $m+1 \geq n+1$ distinct points unless it is the zero polynomial. Observe also that
\be{
\label{discnormbds}
\sqrt{2/(m+1)} \nm{f}_{m,\infty} \leq \nm{f}_{m,2} \leq \sqrt{2} \nm{f}_{m,\infty} \leq \sqrt{2} \nm{f}_{[-1,1],\infty},
}
for any $f \in C([-1,1])$.
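These elementary bounds are easy to sanity-check numerically. A brief sketch, with an arbitrary (illustrative) test function sampled on the equispaced grid:

```python
import math

m = 20
x = [-1 + 2 * i / m for i in range(m + 1)]    # equispaced grid, endpoints included
f = lambda t: math.exp(t) * math.cos(5 * t)   # an arbitrary test function
vals = [f(xi) for xi in x]

n_inf = max(abs(v) for v in vals)                        # ||f||_{m,infty}
n_2 = math.sqrt(2 / (m + 1) * sum(v * v for v in vals))  # ||f||_{m,2}

# sqrt(2/(m+1)) ||f||_{m,infty} <= ||f||_{m,2} <= sqrt(2) ||f||_{m,infty}
assert math.sqrt(2 / (m + 1)) * n_inf <= n_2 <= math.sqrt(2) * n_inf
```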
Finally, given a compact set $E \subset \bbC$, we write $B(E)$ for the set of functions that are continuous on $E$ and analytic in its interior. We also define $\nm{f}_{E,\infty} = \sup_{z \in E} |f(z)|$.
\subsection{The impossibility theorem}
Throughout this paper, we consider families of mappings
\bes{
\cR_m : C([-1,1]) \rightarrow C([-1,1]),
}
where, for each $m \geq 1$ and $f \in C([-1,1])$, $\cR_m(f)$ depends only on the values $\{f(x_i)\}^{m}_{i=0}$ of $f$ on the equispaced grid $\{x_i\}^{m}_{i=0}$. We define the (absolute) condition number of $\cR_m$ (in terms of the continuous and discrete uniform norms) as
\be{
\label{kappa-def}
\kappa(\cR_m) = \sup_{f \in C([-1,1])} \lim_{\delta \rightarrow 0^+} \sup_{\substack{h \in C([-1,1]) \\ 0 < \nm{h}_{m,\infty} \leq \delta }} \frac{\nmu{\cR_{m}(f+h) - \cR_{m}(f) }_{[-1,1],\infty}}{\nm{h}_{m,\infty}}.
}
We are now ready to state the impossibility theorem:
\thm{
[The impossibility theorem, \cite{platte2011impossibility}]
\label{t:impossibility}
Let $E \subset \bbC$ be a compact set containing $[-1,1]$ in its interior and $\{ \cR_m \}^{\infty}_{m=1}$ be an approximation procedure based on equispaced grids of $m+1$ points such that, for some $C,\rho > 1$ and $1/2 < \tau \leq 1$, we have
\bes{
\nmu{f - \cR_m(f) }_{[-1,1],\infty} \leq C \rho^{-m^{\tau}} \nm{f}_{E,\infty},\quad \forall m \in \bbN,\ f \in B(E).
}
Then the condition numbers \R{kappa-def} satisfy
\bes{
\kappa(\cR_m) \geq \sigma^{m^{2 \tau - 1}}
}
for some $\sigma > 1$ and all sufficiently large $m$.
}
When $\tau = 1$, this implies that any approximation procedure that achieves exponential convergence at a geometric rate must also be exponentially ill-conditioned at a geometric rate. Furthermore, the best possible (and achievable \cite{adcock2019optimal}) rate of convergence of a stable method is root-exponential in $m$, i.e.\ the error decays like $\rho^{-\sqrt{m}}$ for some $\rho > 1$.
\subsection{Polynomial frame approximation}
We now describe polynomial frame approximation. As observed, this method was formalized in \cite{adcock2020approximating}, and is related to earlier works on Fourier extensions \cite{adcock2014parameter,adcock2014numerical,matthysen2017function,lyon2012fast,huybrechs2010fourier,bruno2007accurate,boyd2002fourier,pasquetti1996spectral}, and more generally, numerical approximations with frames \cite{adcock2019frames,adcock2020frames}.
Let $\gamma > 1$. Polynomial frame approximation uses orthogonal polynomials on an extended interval $[-\gamma,\gamma]$ to construct an approximation to a function over $[-1,1]$. In this paper, we use orthonormal Legendre polynomials, although we remark in passing that other orthogonal polynomials such as Chebyshev polynomials could also be employed. Note that an orthonormal basis on $[-\gamma,\gamma]$ fails to constitute a basis when restricted to the smaller interval $[-1,1]$. It forms a so-called \textit{frame} \cite{adcock2019frames,christensen2016introduction}, hence the name polynomial `frame' approximation.
Let $P_i(x)$ be the classical Legendre polynomial on $[-1,1]$, normalized so that $P_i(1) = 1$. Since $\int^{1}_{-1} |P_i(x)|^2 \D x = (i+1/2)^{-1}$, we define the Legendre polynomial frame on $[-1,1]$ as $\{ \psi_i \}^{\infty}_{i=0}$, where the $i$th such function is given by
\be{
\label{psii-def}
\psi_i(x) = \sqrt{i+1/2} P_i (x/\gamma ) /\sqrt{\gamma},\quad x \in [-1,1].
}
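For concreteness, the frame functions \R{psii-def} can be assembled from NumPy's Legendre module. The sketch below (with $\gamma = 2$ as an illustrative choice) also verifies their orthonormality on $[-\gamma,\gamma]$ using Gauss--Legendre quadrature mapped to the extended interval:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

gamma = 2.0  # illustrative choice of extension parameter

def psi(i, x):
    """psi_i(x) = sqrt(i + 1/2) * P_i(x / gamma) / sqrt(gamma)."""
    c = np.zeros(i + 1)
    c[i] = 1.0  # coefficient vector selecting the classical Legendre polynomial P_i
    return np.sqrt(i + 0.5) * legval(x / gamma, c) / np.sqrt(gamma)

# Gauss-Legendre rule on [-1, 1], mapped to [-gamma, gamma]
q, wq = leggauss(40)
t, w = gamma * q, gamma * wq

# The psi_i are orthonormal over the extended interval
for i in range(6):
    for j in range(6):
        ip = np.sum(w * psi(i, t) * psi(j, t))
        assert abs(ip - float(i == j)) < 1e-12
```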
Let $m,n \geq 0$ and consider a function $f \in C([-1,1])$. Our aim is to compute a polynomial approximation to $f$ of the form
\bes{
f \approx \hat{f} = \sum^{n}_{i=0} \hat{c}_i \psi_i \in \bbP_n
}
for suitable coefficients $\hat{c}_i$. It is natural to strive to do this via a least-squares fit, i.e.
\be{
\label{LScoeff}
\hat{c} = (\hat{c}_i)^{n}_{i=0} \in \argmin{c \in \bbC^{n+1}} \nm{A c - b}_{2},
}
where
\be{
\label{Adef}
A = \sqrt{2/(m+1)} (\psi_j(x_i))^{m,n}_{i,j=0} \in \bbC^{(m+1) \times (n+1)},\qquad b = \sqrt{2/(m+1)} (f(x_i))^{m}_{i=0} \in \bbC^{m+1}.
}
Note that $\sqrt{2/(m+1)}$ is simply a normalization factor that is included for convenience. Unfortunately, as described in \cite{adcock2020approximating}, this least-squares problem is ill-conditioned for large $n$ (even when $m \gg n$) due to the use of a frame rather than a basis \cite{adcock2019frames}. Therefore, we instead solve a suitably regularized least-squares problem. There are a number of different ways to do this, but, following previous works, we consider an $\epsilon$-truncated Singular Value Decomposition (SVD).
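This ill-conditioning is easy to observe. The following sketch (Python/NumPy; the oversampling ratio $m = 4n$ and the value $\gamma = 2$ are arbitrary illustrative choices) assembles the matrix $A$ of \R{Adef} and shows that its condition number grows rapidly with $n$, even with oversampling:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def frame_matrix(m, n, gamma):
    """A_{ij} = sqrt(2/(m+1)) * psi_j(x_i) on the equispaced grid of m+1 points."""
    x = np.linspace(-1.0, 1.0, m + 1)
    # Column j of legvander is P_j evaluated at x / gamma; rescale to psi_j
    scale = np.sqrt((np.arange(n + 1) + 0.5) / gamma)
    return np.sqrt(2.0 / (m + 1)) * legvander(x / gamma, n) * scale

gamma = 2.0
conds = [np.linalg.cond(frame_matrix(4 * n, n, gamma)) for n in (10, 20, 30)]

# Even with oversampling m = 4n, the conditioning degrades exponentially in n
assert conds[0] < conds[1] < conds[2] and conds[2] > 1e10
```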
Suppose that the least-squares matrix \R{Adef} has SVD $A = U \Sigma V^*$, where $\Sigma = \diag(\sigma_0,\ldots,\sigma_n) \in \bbR^{(m+1) \times (n+1)}$ is the diagonal matrix of singular values. Recall that the minimal $2$-norm solution $\hat{c}$ of \R{LScoeff} is given by
\bes{
\hat{c} = A^{\dag} b = V \Sigma^{\dag} U^* b,
}
where $\dag$ denotes the pseudoinverse. Given $\epsilon > 0$, we define the $\epsilon$-regularized version $\Sigma^{\epsilon}$ of $\Sigma$ by
\bes{
(\Sigma^{\epsilon} )_{ii} = \begin{cases} \sigma_i & \sigma_i > \epsilon \\ 0 & \mbox{otherwise} \end{cases} ,
}
and let $\Sigma^{\epsilon,\dag}$ be its pseudoinverse, i.e.
\bes{
(\Sigma^{\epsilon,\dag} )_{ii} = \begin{cases} 1/\sigma_i & \sigma_i > \epsilon \\ 0 & \mbox{otherwise} \end{cases} .
}
Then we define the $\epsilon$-regularized approximation of \R{LScoeff} as $\hat{c}^{\epsilon} = V \Sigma^{\epsilon,\dag} U^* b$ and the corresponding approximation to $f$ as
\bes{
\hat{f}^{\epsilon} = \sum^{n}_{i=0} \hat{c}^{\epsilon}_i \psi_i.
}
With this in hand, we define the overall approximation procedure as the mapping
\be{
\label{poly-frame-approx-1}
\cP^{\epsilon,\gamma}_{m,n} : C([-1,1]) \rightarrow C([-1,1]),\ f \mapsto \hat{f}^{\epsilon} = \sum^{n}_{i=0} \hat{c}^{\epsilon}_{i} \psi_i,
}
where
\be{
\label{poly-frame-approx-2}
\hat{c}^{\epsilon} = (\hat{c}^{\epsilon}_i)^{n}_{i=0} = V \Sigma^{\epsilon,\dag} U^* b,\qquad b = \sqrt{2/(m+1)} (f(x_i))^{m}_{i=0} .
}
\rem{
[Why not use orthogonal polynomials on the original interval]
\label{r:why-not-unit-interval}
The use of orthogonal polynomials on an extended interval may at first seem bizarre, since, as noted, the infinite collection of such functions no longer forms a basis when restricted to $[-1,1]$, but a frame. And even though the first $n+1$ such functions $\psi_0,\ldots,\psi_n$ constitute a basis for $\bbP_n$, this basis is extremely ill-conditioned as $n \rightarrow \infty$. To be precise, the condition number of its Gram matrix $G = (\ip{\psi_j}{\psi_i}_{[-1,1],2} )^{n}_{i,j=0}$ is exponentially large in $n$. In turn, the matrix $A$ is also exponentially ill-conditioned in $n$, even when $m \gg n$. To understand why, observe that $A^* A = (\ip{\psi_j}{\psi_i}_{m,2} )^{n}_{i,j=0}$ is simply a discrete approximation to $G$.
Why, then, do we not consider Legendre polynomials on $[-1,1]$? These constitute a perfectly well-conditioned basis, which means that the corresponding matrix $A$ would also be well-conditioned for $m \gg n$. Thus there is no need for regularization, and we may simply compute the least-squares fit by solving \R{LScoeff}. Note that this simply corresponds to $\cP^{\epsilon,\gamma}_{m,n}$ with $\epsilon = 0$ and $\gamma = 1$. The problem is that such an approximation, which we term \textit{polynomial least-squares approximation}, behaves exactly as the impossibility theorem (Theorem \ref{t:impossibility}) predicts in the best case. It is ill-conditioned if $m = o(n^2)$ as $n \rightarrow \infty$; in particular, if $m \sim c n$ as $n \rightarrow \infty$, then the condition number of the mapping grows exponentially fast. On the other hand, with \textit{quadratic oversampling}, i.e.\ $m \sim c n^2$ as $n \rightarrow \infty$, the approximation is well-conditioned and its convergence rate is root-exponential in $m$ for all analytic functions. See \cite{adcock2019optimal} for a discussion of polynomial least-squares and the impossibility theorem. See also Remark \ref{r:least-squares-cond} below.
}
\subsection{Maximal behaviour of polynomials bounded at equispaced nodes}
As mentioned, both the impossibility theorem and the subsequent possibility theorem rely on estimates for the maximal behaviour of polynomials that are bounded at equispaced nodes. The former is based on a classical result due to Coppersmith and Rivlin concerning the maximal growth of a polynomial $p \in \bbP_n$ that is at most one at the $m+1$ equispaced nodes $\{ x_i \}^{m}_{i=0}$. Specifically, in \cite{coppersmith1992growth} they showed that there exist constants $\beta \geq \alpha > 1$ such that, if
\bes{
B(m,n) = \sup \{ \nm{p}_{[-1,1],\infty} : p \in \bbP_n,\ \nm{p}_{m,\infty} \leq 1 \},
}
then
\be{
\label{coppersmith-rivlin}
\alpha^{n^2/m} \leq B(m,n) \leq \beta^{n^2/m},\quad \forall 1 \leq n \leq m.
}
\rem{
[The condition number of polynomial least-squares approximation]
\label{r:least-squares-cond}
It is not difficult to show that the condition number of polynomial least-squares approximation $\cP_{m,n} = \cP^{0,1}_{m,n}$ satisfies
\bes{
B(m,n) \leq \kappa(\cP_{m,n}) \leq \sqrt{m+1} B(m,n),
}
where $B(m,n)$ is as in \R{coppersmith-rivlin} (this follows by setting $\gamma = 1$ and $\epsilon = 0$ in a result we show later, Lemma \ref{l:poly-frame-accuracy-stability}). Thus, \R{coppersmith-rivlin} immediately explains why this approximation is ill-conditioned when $m = o(n^2)$ as $n \rightarrow \infty$, and only well-conditioned when $m \sim c n^2$ (or faster) as $n \rightarrow \infty$.
}
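The growth described in this remark can be observed directly. The sketch below (Python/NumPy) computes the sup-norm condition number of polynomial least-squares approximation as the operator norm of the linear map from grid samples to the fitted polynomial, with the supremum over $[-1,1]$ discretized on a fine grid; the degrees and oversampling ratios are illustrative choices:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def lsq_condition(m, n):
    """Sup-norm condition number of degree-n least-squares fitting
    on m+1 equispaced points (the case gamma = 1, eps = 0)."""
    x = np.linspace(-1.0, 1.0, m + 1)
    scale = np.sqrt(np.arange(n + 1) + 0.5)        # orthonormal Legendre on [-1,1]
    A = np.sqrt(2.0 / (m + 1)) * legvander(x, n) * scale
    xf = np.linspace(-1.0, 1.0, 2000)              # fine grid approximating the sup
    Phi = legvander(xf, n) * scale
    L = Phi @ np.linalg.pinv(A)                    # maps scaled samples to p(xf)
    return np.sqrt(2.0 / (m + 1)) * np.abs(L).sum(axis=1).max()

# Linear oversampling m = 2n: the condition number grows exponentially with n,
k_lin = [lsq_condition(2 * n, n) for n in (10, 20, 30)]
assert k_lin[0] < k_lin[1] < k_lin[2]

# whereas quadratic oversampling m = n^2 keeps it moderate
assert lsq_condition(900, 30) < 100
```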
As described above, the polynomial frame approximation is constructed via an $\epsilon$-truncated SVD. We will see later in \S \ref{s:acc-stab-poly-frame} that such truncation means that the approximation $\cP^{\epsilon,\gamma}_{m,n}(f)$ of a function $f$ belongs to (a subspace of) the set of polynomials
\bes{
P^{\epsilon,\gamma}_{m,n} = \left \{ p \in \bbP_n : \nm{p}_{[-\gamma,\gamma],2} \leq \nm{p}_{m,2} / \epsilon \right \} \subseteq \bbP_n
}
whose $L^2$-norm over the extended interval $[-\gamma,\gamma]$ is at most $1/\epsilon$ times larger than their discrete $2$-norm over the equispaced grid. In other words, the effect of regularization via the truncated SVD is to restrict the type of polynomial the approximation can take to one that does not grow too large on the extended interval.
Replacing the $2$-norms by uniform norms, this observation motivates the study of the quantity
\be{
\label{C-def-general}
C(m,n,\gamma,\epsilon) = \sup \{ \nm{p}_{[-1,1],\infty} : p \in \bbP_n,\ \nm{p}_{m,\infty} \leq 1,\ \nm{p}_{[-\gamma,\gamma],\infty} \leq 1/\epsilon \}.
}
In \S \ref{s:acc-stab-poly-frame}, we show that the condition number of the polynomial frame approximation $\cP^{\epsilon,\gamma}_{m,n}$ satisfies
\bes{
\kappa(\cP^{\epsilon,\gamma}_{m,n}) \leq \sqrt{m+1} C(m,n,\gamma,\epsilon).
}
Notice that $C(m,n,\gamma,0) = B(m,n)$ and, in general,
\bes{
C(m,n,\gamma,\epsilon) \leq B(m,n),
}
where $B(m,n)$ is as in \R{coppersmith-rivlin}. However, whereas $B(m,n)$ is only bounded as $m,n\rightarrow\infty$ in the quadratic oversampling regime (i.e.\ $m \sim c n^2$ for some $c>0$), for $C(m,n,\gamma,\epsilon)$ we show that linear oversampling is sufficient. Specifically:
\thm{
[Maximal behaviour of polynomials bounded at equispaced nodes and extended intervals]
\label{t:polynomial-inequality-main}
Let $0 < \epsilon \leq 1/\E$, $\gamma > 1$ and $n \geq \sqrt{\gamma^2-1} \log(1/\epsilon)$, and consider the quantity $C(m,n,\gamma,\epsilon)$ defined in \R{C-def-general}. Suppose that
\be{
\label{m-n-eps-poly-growth}
m \geq 36 n \log(1/\epsilon) / \sqrt{\gamma^2-1}.
}
Then
\bes{
C(m,n,\gamma,\epsilon) \leq c,
}
for some numerical constant $c > 0$. Specifically, $c$ can be taken as $c = 4 \beta + 3$ for $\beta$ as in \R{coppersmith-rivlin}.
}
This theorem is a direct consequence of a more general result (Theorem \ref{t:poly-inequality-general}) that we state and prove in \S \ref{s:maximal-behaviour-proof}. Note that the factor $36$ appearing in \R{m-n-eps-poly-growth} was chosen for convenience. Theorem \ref{t:poly-inequality-general} describes in general how a condition roughly of the form $m \geq c_1 n \log(1/\epsilon) / \sqrt{\gamma^2-1}$ leads to a bound $C(m,n,\gamma,\epsilon) \leq h(c_1)$ for some function $h(c_1)$ depending on $c_1$.
\subsection{The motivations for considering polynomials on an extended interval}
Above we asserted that, by considering orthogonal polynomials on an extended interval and using regularization, the polynomial frame approximation constructs an approximation in a space within which the polynomials cannot grow too large on $[-\gamma,\gamma]$. As Theorem \ref{t:polynomial-inequality-main} makes clear, this prohibits such polynomials from behaving too wildly on $[-1,1]$ away from the equispaced grid, whenever $m$ scales linearly with $n$. Thus, using orthogonal polynomials on an extended interval allows for a reduction in the oversampling rate from quadratic in $n$ (as is the case for standard polynomial-least squares approximation -- see Remark \ref{r:least-squares-cond}) to linear in $n$.
On the other hand, by restricting the approximation space in this way, we potentially limit the ability of the scheme to accurately approximate analytic functions. In Theorem \ref{t:acc-cond-poly-frame}, we establish an error bound for the polynomial frame approximation that takes the form
\bes{
\nmu{f - \cP^{\epsilon',\gamma}_{m,n}(f)}_{[-1,1],\infty} \leq 2 c \sqrt{m+1} \inf_{p \in \bbP_n} \left \{ \nm{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty} \right \},
}
where $\epsilon' = \epsilon (n+1) / \sqrt{\gamma}$ (the reasons for this choice of $\epsilon'$ are discussed further below). This holds for any $c > 1$ and $f \in C([-1,1])$, provided $m$ and $n$ are chosen so that
\bes{
C(m,n,\gamma,\epsilon) \leq c.
}
Such a bound is very similar to those shown previously for both Fourier extensions \cite{adcock2014numerical} and polynomial frame approximations \cite{adcock2020approximating}, the main difference being the use of the $L^{\infty}$-norm instead of the $L^2$-norm. The key component of it is the best approximation term
\bes{
\inf_{p \in \bbP_n} \left \{ \nm{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty} \right \}.
}
In other words, the effect of the truncated SVD regularization is to replace the classical best approximation error term
\bes{
\inf_{p \in \bbP_n} \nm{f - p}_{[-1,1],\infty},
}
(which arises in the case $\epsilon = 0$, i.e.\ standard polynomial least-squares approximation)
by one that also involves a term depending on $\epsilon$ multiplied by the norm of $p$ over the extended interval. As a result, the overall approximation error depends on how well $f$ can be approximated by a polynomial $p \in \bbP_n$ uniformly on $[-1,1]$ (the term $\nm{f - p}_{[-1,1],\infty}$) that does not grow too large on the extended interval $[-\gamma,\gamma]$ (the term $ \nm{p}_{[-\gamma,\gamma],\infty} $).
Our main result, stated next, arises by bounding this best approximation term for functions that are analytic in sufficiently large complex regions.
\subsection{Main result: the possibility theorem}
We now state our main result (see \S \ref{s:main-thm-proof} for the proof). For this, we first recall the definition of the Bernstein ellipse with parameter $\theta > 1$:
\be{
\label{Bernstein-ellipse}
E_{\theta} = \left \{ \frac{z+z^{-1}}{2} : z \in \bbC,\ 1 \leq | z | \leq \theta \right \}.
}
A classical theorem in approximation theory states that any function $f \in B(E_{\theta})$ is approximated to exponential accuracy by polynomials. Specifically,
\be{
\label{poly-BA}
\inf_{p \in \bbP_n} \nm{f - p}_{[-1,1],\infty} \leq \frac{2}{\theta-1} \nm{f}_{E_{\theta},\infty} \theta^{-n},
}
(see also Lemma \ref{poly-approx-bounds} later). Our main result asserts that polynomial frame approximation can achieve a similar rate of decay in $n$, subject to linear oversampling in $m$. Specifically:
\thm{
[The possibility theorem]
\label{t:possibility-thm}
Let $0 < \epsilon \leq 1/\E$, $\gamma > 1$ and $n \geq \sqrt{\gamma^2-1} \log(1/\epsilon)$, and consider the polynomial frame approximation $\cP^{\epsilon',\gamma}_{m,n}$ defined in \R{poly-frame-approx-1}--\R{poly-frame-approx-2}, where
\be{
\label{m-n-relation}
m =
\left \lceil 36 n \log(1/\epsilon) \big / \sqrt{\gamma^2-1} \right \rceil ,\qquad \epsilon' = \frac{\epsilon(n+1)}{\sqrt{\gamma}}.
}
Then the condition number of the mapping $\cP^{\epsilon',\gamma}_{m,n}$ satisfies
\bes{
\kappa(\cP^{\epsilon',\gamma}_{m,n}) \leq c \sqrt{m+1} ,
}
where $c$ is as in Theorem \ref{t:polynomial-inequality-main}.
Moreover, if $E_{\theta}$ is a Bernstein ellipse with parameter
\be{
\label{theta-cond}
\theta > \gamma + \sqrt{\gamma^2-1}
}
then, for all $f \in B(E_{\theta})$,
\be{
\label{main-err-bd}
\begin{split}
\nmu{f - \cP^{\epsilon',\gamma}_{m,n}(f)}_{[-1,1],\infty} & \leq c g(\theta,\gamma) \sqrt{m} \left ( \theta^{-n} + n \epsilon \right ) \nm{f}_{E_{\theta},\infty}
\\
& \leq c g(\theta,\gamma) \sqrt{m} \left ( \rho^{1-m} + m \epsilon \right ) \nm{f}_{E_{\theta},\infty},
\end{split}
}
where $g(\theta,\gamma)$ depends on $\theta$ and $\gamma$ only and
\be{
\label{rho-main-thm}
\rho = \theta^{c_*},\qquad c_* = \frac{\sqrt{\gamma^2-1} }{36 \log(1/\epsilon)} .
}
}
This result shows that polynomial frame approximation is well-conditioned when $n$ scales linearly with $m$ (specifically, \R{m-n-relation} holds), with its condition number being at worst $\ordu{\sqrt{m}}$ as $m \rightarrow \infty$. Moreover, for functions that are analytic in $E_{\theta}$ (note that this region contains the extended interval $[-\gamma,\gamma]$ in its interior, due to the condition \R{theta-cond}) its error decreases exponentially fast in $m$ down to the level $\ordu{m^{3/2} \epsilon}$. Recall that the rate $\theta^{-n}$ in \R{main-err-bd} is the same as in \R{poly-BA} for the best polynomial approximation of a function in $B(E_{\theta})$. Thus, one can achieve a near-optimal error decay rate in $n$ with only linear oversampling in $m$.
Overall, Theorem \ref{t:possibility-thm} provides a positive counterpart to the impossibility theorem (Theorem \ref{t:impossibility}). It is important to emphasize that it does not violate Theorem \ref{t:impossibility}: indeed, exponential decay of the error is only guaranteed down to a finite accuracy.
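To see Theorem \ref{t:possibility-thm} in action, the following sketch (Python with NumPy) implements the full pipeline \R{poly-frame-approx-1}--\R{poly-frame-approx-2} with the parameter choices \R{m-n-relation}. The values $\gamma = 2$, $\epsilon = 10^{-10}$, $n = 20$ and the test function $f = \exp$ (entire, hence analytic in every Bernstein ellipse) are our own illustrative choices:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def poly_frame_approx(fvals, x, n, gamma, eps):
    """Polynomial frame approximation: returns a callable evaluating the
    eps-truncated SVD fit of the data fvals at the points x."""
    m = len(x) - 1
    scale = np.sqrt((np.arange(n + 1) + 0.5) / gamma)  # psi_j normalization
    A = np.sqrt(2.0 / (m + 1)) * legvander(x / gamma, n) * scale
    b = np.sqrt(2.0 / (m + 1)) * fvals
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s_inv = np.array([1.0 / si if si > eps else 0.0 for si in s])
    c = Vh.T @ (s_inv * (U.T @ b))
    return lambda t: (legvander(t / gamma, n) * scale) @ c

# Parameter scaling as in the possibility theorem
gamma, eps, n = 2.0, 1e-10, 20
m = int(np.ceil(36 * n * np.log(1.0 / eps) / np.sqrt(gamma ** 2 - 1)))
eps_prime = eps * (n + 1) / np.sqrt(gamma)

x = np.linspace(-1.0, 1.0, m + 1)
f = np.exp
P = poly_frame_approx(f(x), x, n, gamma, eps_prime)

xf = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(f(xf) - P(xf)))
assert err < 1e-6  # high accuracy, down to roughly the truncation level
```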
\rem{
[Varying $\epsilon$ with $n$]
\label{rem:varying-eps}
It is worth observing how the theorem changes if one strives to scale $\epsilon$ with $n$ so as to achieve exponential convergence of the error down to zero, rather than exponential decay down to a finite level of accuracy. If $\epsilon$ is chosen as $\epsilon = \theta^{-n}$ then the scaling between $m$ and $n$ becomes quadratic, since $\log(1/\epsilon) = n \log(\theta)$ in this case. When substituted into Theorem \ref{t:polynomial-inequality-main}, this implies quadratic scaling of $m$ with $n$, and therefore root-exponential convergence (to zero) of the approximation with respect to $m$. This is in precise agreement with the impossibility theorem.
}
Another key aspect of Theorem \ref{t:possibility-thm} is the dependence on $\epsilon$ in \R{m-n-relation} and, in turn, the exponential rate \R{rho-main-thm}. Since $\epsilon > 0$ dictates the limiting accuracy of the approximation scheme, it is often desirable to choose $\epsilon$ close to machine epsilon $\epsilon_{\mathrm{mach}}$, which in IEEE double precision is roughly $\epsilon_{\mathrm{mach}} \approx 1.1 \times 10^{-16}$. Thus, the scaling $\log(1/\epsilon)$ -- which is proportional to the number of digits of accuracy desired -- is highly appealing. A scaling of, for example, $1/\epsilon$, would be meaningless for practical purposes.
Note that Theorem \ref{t:possibility-thm} does not say anything about the rate of the decay of the error for functions that are not analytic in a Bernstein ellipse $E = E_{\theta}$ that is large enough to contain the extended interval $[-\gamma,\gamma]$. We discuss the behaviour of the error for such functions in \S \ref{ss:lower-regularity}. On the other hand, this theorem also offers some insight into the effect of the choice of $\gamma$ on the approximation. Specifically, choosing a smaller $\gamma$ means that \R{theta-cond} holds for smaller values of $\theta$, thus the analyticity requirement $f \in B(E_{\theta})$ becomes less stringent. However, this also leads to a slower rate of exponential convergence in $m$, since $\rho$ is an increasing function of both $\gamma$ and $\theta$. We discuss this matter further in \S \ref{s:numerical}.
Finally, we remark that this theorem actually considers a polynomial frame approximation with parameter $\epsilon' = \epsilon(n+1)/\sqrt{\gamma}$ that grows linearly in $n$. The reason for this can be traced to the need to switch between the $L^2$-norm (or corresponding discrete seminorm) and the $L^{\infty}$-norm (or corresponding discrete seminorm) at various stages in the proof. See the proofs of Lemma \ref{C1-C2-as-C} and Theorem \ref{t:acc-cond-poly-frame} for the precise details. This choice of scaling is made to ensure the first term in the error bound decreases exponentially fast in $m$, which in turn follows from the linear relationship $m \approx 36 n \log(1/\epsilon)/\sqrt{\gamma^2-1}$. It is also possible to use $\epsilon$ as the truncation parameter rather than $\epsilon'$. Following much the same arguments, one can show that this choice results in a log-linear relationship between $m$ and $n$, which in turn leads to subexponential convergence of the form $\sigma^{-m / \log(m)}$ for large $m$, where, like $\rho$, $\sigma > 1$ depends on $\theta$, $\gamma$ and $\epsilon$.
\subsection{Error decay rates for functions of lower regularity}\label{ss:lower-regularity}
Theorem \ref{t:possibility-thm} only asserts exponential decay of the error for functions that are analytic in complex regions containing the extended interval $[-\gamma,\gamma]$. We now consider arbitrary analytic functions. The following result shows that the error for such functions decays exponentially fast with the same rate $\theta^{-n}$, but only down to a larger tolerance.
\thm{
[Error decay for arbitrary analytic functions]
\label{t:possibility-slower-exp}
Consider the setup of Theorem \ref{t:possibility-thm}. Let $E_{\theta}$ be the Bernstein ellipse with parameter
\bes{
1 < \theta < \tau := \gamma + \sqrt{\gamma^2-1}.
}
Then, for all $f \in B(E_{\theta})$,
\be{
\label{main-err-bd2}
\begin{split}
\nmu{f - \cP^{\epsilon',\gamma}_{m,n}(f)}_{[-1,1],\infty} & \leq c g(\theta,\gamma) \sqrt{m} \left ( \theta^{-n} + n \epsilon^{\frac{\log(\theta)}{\log(\tau)}} \right ) \nm{f}_{E_{\theta},\infty}
\\
& \leq c g(\theta,\gamma) \sqrt{m} \left ( \rho^{1-m} + m \epsilon^{\frac{\log(\theta)}{\log(\tau)}} \right ) \nm{f}_{E_{\theta},\infty},
\end{split}
}
where $g(\theta,\gamma)$ and $\rho$ are also as in Theorem \ref{t:possibility-thm}.
}
This result shows that the error decreases exponentially fast down to roughly $\epsilon^{\frac{\log(\theta)}{\log(\tau)}}$, i.e.\ some fractional power of $\epsilon$ depending on the relative sizes of $\theta$ and $\tau = \gamma + \sqrt{\gamma^2-1}$.
This raises an immediate question: what happens after such an accuracy level is reached? As we show later through an extended impossibility theorem, we cannot expect exponential decay down to $\epsilon$ in general. However, we now show that superalgebraic decay -- i.e.\ faster than any algebraic rate $m^{-k}$ -- is indeed possible down to this level.
We do this by first noting that an analytic function is infinitely smooth, and then by establishing an error bound for functions that are $k$-times continuously differentiable.
Specifically, in the following result we consider the space $C^{k}([-1,1])$ of functions that are $k$-times continuously differentiable on $[-1,1]$. We define the norm on this space as
\bes{
\nm{f}_{C^k([-1,1])} = \max_{j = 0,\ldots,k} \nmu{f^{(j)}}_{[-1,1],\infty}.
}
\thm{
[Error decay for $C^k$ functions]
\label{t:possibility-algebraic}
Consider the setup of Theorem \ref{t:possibility-thm}. Then, for all $k \in \bbN$ and $f \in C^k([-1,1])$,
\bes{
\nmu{f - \cP^{\epsilon',\gamma}_{m,n}(f)}_{[-1,1],\infty} \leq c g(k,\gamma) \sqrt{m} \left ( n^{-k} + n \epsilon \right ) \nm{f}_{C^k([-1,1])},
}
where $g(k,\gamma)$ depends on $k$ and $\gamma$ only.
}
Proofs of Theorems \ref{t:possibility-slower-exp} and \ref{t:possibility-algebraic} can be found in \S \ref{s:main-thm-proof}.
Note that all these observations about the rate of error decay are seen in practice in numerical examples. We present a series of experiments confirming these results in \S \ref{s:numerical}.
We remark in passing that superalgebraic decay is slower than root-exponential decay, which is the best possible rate stipulated by the impossibility theorem for analytic function approximation. Whether or not polynomial frame approximation exhibits root-exponential decay in $m$ after the breakpoint $\epsilon^{\frac{\log(\theta)}{\log(\tau)}}$ is an open problem.
\subsection{An extended impossibility theorem}
Theorems \ref{t:possibility-thm} and \ref{t:possibility-slower-exp} show that polynomial frame approximation can achieve roughly the same exponential rate $\theta^{-n}$ as the best polynomial approximation for functions in $B(E_{\theta})$ when subject to linear oversampling. However, it only maintains this rate down to roughly $\epsilon$ for sufficiently large $\theta$. We now ask whether or not there exists an approximation scheme that can perform better than this: namely, whether an exponential rate $\theta^{-n}$ down to $\epsilon$ can be attained for any $\theta > 1$ in the linear oversampling regime. The following extended impossibility theorem shows that the answer to this question is no.
\thm{[Extended impossibility theorem]
\label{t:impossibility-extended}
Let $\{ \cR_m \}^{\infty}_{m=1}$ be an approximation procedure based on equispaced grids of $m+1$ points such that, for some $c > 0$, $C,\theta^* > 1$, $0 < \epsilon \leq (4C)^{-2}$ and $1/2 < \tau \leq 1$, we have
\be{
\label{extended-error-bound}
\nmu{f - \cR_m(f) }_{[-1,1],\infty} \leq C \left ( \theta^{-c m^{\tau}} + \epsilon \right ) \nm{f}_{E_{\theta},\infty},\quad
}
for all $m \in \bbN$, $f \in B(E_{\theta})$ and $1 < \theta \leq \theta^*$.
Then the condition numbers \R{kappa-def} satisfy
\bes{
\kappa(\cR_m) \geq \sigma^{m^{2 \tau - 1}}
}
for some $\sigma > 1$ and all sufficiently large $m$.
}
The proof of this theorem can be found in \S \ref{s:numerical}. It follows essentially the same steps as that of the impossibility theorem.
This theorem has several consequences. First, it extends the impossibility theorem by showing that exactly the same relationship between fast error decay and conditioning holds even when the overall error decreases only down to a constant tolerance $\epsilon$. To do this, it makes the stronger assumption that the scheme yields exponential decay for all analytic functions -- including those with singularities arbitrarily close to $[-1,1]$ -- with the rate of exponential decay being dependent on the size of the region of analyticity. Specifically, $f \in B(E_{\theta})$ implies a term of the form $\theta^{-c m^{\tau}}$ for \textit{all} $1 < \theta \leq \theta^*$.
Second, it implies the following. Any well-conditioned method must either (i) yield root-exponential decrease of the error down to $\epsilon$, i.e. $\theta^{-c \sqrt{m}} + \epsilon$; or (ii) fail to yield exponential convergence for all analytic functions, i.e.\ \R{extended-error-bound} holds with $\tau = 1$ only for $\theta^{**} \leq \theta \leq \theta^*$ for some $1 < \theta^{**} \leq \theta^*$. As discussed previously, (ii) is exactly how polynomial frame approximation behaves, up to small algebraic factors in $m$. In other words, it is nearly optimal.
\rem{
[Varying $\gamma$ with $n$]
Recall that Theorem \ref{t:possibility-thm} only asserts exponential decay down to $\epsilon$ for functions that are analytic in Bernstein ellipses containing the interval $[-\gamma,\gamma]$. A natural idea is therefore to decrease $\gamma$ with $n$ so that, for any fixed, analytic function, the interval $[-\gamma,\gamma]$ is included in its region of analyticity for all large $n$.
To determine a suitable scaling for $\gamma$, one can use similar ideas to those employed in so-called \textit{mapped polynomial} spectral methods \cite{adcock2016mapped,don1997accuracy,boyd1989chebyshev,hale2008new,kosloff1993modified}: namely, choose $\gamma$ to match the terms in the error bound \R{main-err-bd}. Ignoring the term $n$ (for simplicity), we therefore solve the equation
$(\gamma + \sqrt{\gamma^2-1})^{-n} = \epsilon$
with respect to $\gamma$, which yields
\be{
\label{gamma-scale}
\gamma = \gamma(n,\epsilon) = \frac{\epsilon^{1/n} + \epsilon^{-1/n}}{2}.
}
Observe that $\gamma \rightarrow 1^+$ as $n \rightarrow \infty$ for fixed $\epsilon$. By combining Theorems \ref{t:possibility-thm} (for large $n$, since $\gamma < \theta$ for all large $n$) and \ref{t:possibility-slower-exp} (for small $n$, noting that $\epsilon^{\frac{\log(\theta)}{\log(\tau)}} = \theta^{-n}$ with this choice of $\gamma$), one can show that
\be{
\label{gamma-scale-err-bound}
\nmu{f - \cP^{\epsilon',\gamma}_{m,n}(f)}_{[-1,1],\infty} \leq c g(\theta,\gamma) \sqrt{m} \left ( \theta^{-n} + n \epsilon \right ) \nm{f}_{E_{\theta},\infty},
}
for $f \in B(E_{\theta})$ and \textit{any} $\theta > 1$. Yet, scaling $\gamma$ as in \R{gamma-scale} causes the relation between $m$ and $n$ to become quadratic. Indeed,
$\sqrt{\gamma^2 - 1} = \log(1/\epsilon)/n + \ord{(\log(1/\epsilon)/n)^3}$ as $n \rightarrow \infty$,
and therefore the condition \R{m-n-eps-poly-growth} becomes, to leading order, $m \geq 36 n^2$
for all large $n$. Of course, this is to be expected in view of Theorem \ref{t:impossibility-extended}, since the error bound \R{gamma-scale-err-bound} holds for any $\theta > 1$.
}
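The algebra behind \R{gamma-scale} can be verified numerically: writing $a = \epsilon^{1/n}$, $b = \epsilon^{-1/n}$ so that $\gamma = (a+b)/2$ and $ab = 1$, one finds $\gamma + \sqrt{\gamma^2-1} = \epsilon^{-1/n}$ exactly, whence $(\gamma + \sqrt{\gamma^2-1})^{-n} = \epsilon$. A quick check (Python/NumPy; the values of $n$ and $\epsilon$ are illustrative):

```python
import numpy as np

def gamma_of(n, eps):
    """gamma(n, eps) = (eps^{1/n} + eps^{-1/n}) / 2."""
    return (eps ** (1.0 / n) + eps ** (-1.0 / n)) / 2.0

eps = 1e-12
for n in (5, 20, 100):
    g = gamma_of(n, eps)
    tau = g + np.sqrt(g * g - 1.0)
    # tau equals eps^{-1/n}, hence tau^{-n} = eps
    assert np.isclose(tau ** (-n), eps, rtol=1e-6)
    assert g > 1.0

# gamma -> 1+ as n -> infinity for fixed eps
assert gamma_of(10_000, eps) < 1.0 + 1e-5
```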
\section{Accuracy and conditioning of $\cP^{\epsilon,\gamma}_{m,n}$}\label{s:acc-stab-poly-frame}
The next three sections develop the proofs of Theorems \ref{t:polynomial-inequality-main}--\ref{t:impossibility-extended}. We commence in this section by analyzing the error and condition number of the polynomial frame approximation $\cP^{\epsilon,\gamma}_{m,n}$. Our analysis follows similar lines to that of \cite{adcock2020frames} and, in particular, \cite{adcock2020approximating}, the main difference being that we work in the $L^{\infty}$-norm, rather than the $L^2$-norm.
\subsection{Reformulation in terms of singular vectors}
Let $v_0,\ldots,v_{n} \in \bbC^{n+1}$ be the right singular vectors of the matrix $A$ defined in \R{Adef}. Then, to each such vector, we can associate a polynomial in $\bbP_n$:
\bes{
\xi_i = \sum^{n}_{j=0} (v_i)_j \psi_j \in \bbP_n,\quad i =0,\ldots,n.
}
Here the $\psi_j$ are as in \R{psii-def}.
Since the $\psi_j$ are orthonormal on $[-\gamma,\gamma]$ and the $v_i$ are orthonormal vectors, the functions $\xi_i$ are themselves orthonormal on $[-\gamma,\gamma]$:
\bes{
\ip{\xi_i}{\xi_j}_{[-\gamma,\gamma],2} = v^*_j v_i = \delta_{ij},\quad i,j = 0,\ldots,n.
}
However, the functions $\xi_i$ are also orthogonal with respect to the discrete inner product $\ip{\cdot}{\cdot}_{m,2}$. Indeed, since $v_j$ and $v_k$ are singular vectors, we have
\eas{
\ip{\xi_j}{\xi_k}_{m,2} = \frac{2}{m+1} \sum^{m}_{i=0} \xi_j(x_i) \overline{\xi_k(x_i)}
& = \frac{2}{m+1} \sum^{m}_{i=0} \sum^{n}_{s=0} \sum^{n}_{t=0} (v_j)_s \psi_s(x_i) \overline{(v_k)_t} \overline{\psi_t(x_i)}
\\
& = v^*_k A^* A v_j
\\
& = \sigma^2_j \delta_{jk}.
}
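This double orthogonality is easy to confirm numerically. A sketch (Python/NumPy) with $\gamma = 2$, $m = 50$, $n = 10$ as illustrative parameters, where the continuous inner products are computed by Gauss--Legendre quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import legvander, leggauss

gamma, m, n = 2.0, 50, 10  # illustrative parameters
x = np.linspace(-1.0, 1.0, m + 1)
scale = np.sqrt((np.arange(n + 1) + 0.5) / gamma)  # psi_j normalization
A = np.sqrt(2.0 / (m + 1)) * legvander(x / gamma, n) * scale
U, s, Vh = np.linalg.svd(A, full_matrices=False)
V = Vh.T  # columns are the right singular vectors v_0, ..., v_n

def xi(t):
    """Evaluate xi_i = sum_j (v_i)_j psi_j at the points t; column i is xi_i."""
    return (legvander(t / gamma, n) * scale) @ V

# Orthonormality on [-gamma, gamma], via Gauss-Legendre quadrature
q, wq = leggauss(2 * n + 2)
X = xi(gamma * q)
G = X.T @ (gamma * wq[:, None] * X)
assert np.allclose(G, np.eye(n + 1), atol=1e-10)

# Discrete orthogonality: <xi_j, xi_k>_{m,2} = sigma_j^2 delta_{jk}
Xg = xi(x)
Gd = (2.0 / (m + 1)) * Xg.T @ Xg
assert np.allclose(Gd, np.diag(s ** 2), atol=1e-10)
```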
With this in hand, we now define the subspace
\bes{
\bbP^{\epsilon,\gamma}_{m,n} = \spn \{ \xi_i : \sigma_i > \epsilon \} \subseteq \bbP_n.
}
We note in passing that this space coincides with $\bbP_n$ whenever $m \geq n$ and $\sigma_{\min} = \sigma_n > \epsilon$. In particular, $\bbP^{0,\gamma}_{m,n} = \bbP_n$ for $m \geq n$. On the other hand, when $\epsilon > 0$ we have $\bbP^{\epsilon,\gamma}_{m,n} \subseteq P^{\epsilon,\gamma}_{m,n}$,
where
\bes{
P^{\epsilon,\gamma}_{m,n} = \left \{ p \in \bbP_n : \nm{p}_{[-\gamma,\gamma],2} \leq \nm{p}_{m,2} / \epsilon \right \} \subseteq \bbP_n,
}
is, as before, the set of polynomials of degree at most $n$ whose $L^2$-norm over the extended interval $[-\gamma,\gamma]$ is at most $1/\epsilon$ times larger than their discrete $2$-norm over the equispaced grid. To see why this holds, let $p = \sum_{\sigma_i > \epsilon} c_i \xi_i \in \bbP^{\epsilon,\gamma}_{m,n}$. Then, by the double orthogonality of the $\xi_i$,
\bes{
\nm{p}^2_{[-\gamma,\gamma],2} = \sum_{ \sigma_i > \epsilon} |c_i|^2 \leq \frac{1}{\epsilon^2} \sum_{ \sigma_i > \epsilon} \sigma^2_i |c_i|^2 = \frac{\nm{p}^2_{m,2}}{\epsilon^2}.
}
Hence $p \in P^{\epsilon,\gamma}_{m,n}$, as required.
Finally, observe that the polynomial frame approximation $\cP^{\epsilon,\gamma}_{m,n}(f)$ of $f \in C([-1,1])$ belongs to the space
$\bbP^{\epsilon,\gamma}_{m,n}$. In fact, it is the orthogonal projection onto this space with respect to the discrete inner product $\ip{\cdot}{\cdot}_{m,2}$. Therefore, by orthogonality, we may write
\be{
\label{Feps-in-terms-of-SVs}
\cP^{\epsilon,\gamma}_{m,n}(f) = \sum_{\sigma_i > \epsilon} \frac{\ip{f}{\xi_i}_{m,2}}{\sigma^2_i} \xi_i,\qquad f \in C([-1,1]).
}
\subsection{Accuracy and conditioning up to constants}
We can now examine the accuracy and conditioning of $\cP^{\epsilon,\gamma}_{m,n}$. We first require the following:
\lem{
\label{l:poly-frame-accuracy-stability}
Let $\epsilon \geq 0$, $\gamma \geq 1$ and $\cP^{\epsilon,\gamma}_{m,n}$ the corresponding polynomial frame approximation. Then, for any $f \in C([-1,1])$,
\be{
\label{acc-bound}
\nmu{f - \cP^{\epsilon,\gamma}_{m,n}(f)}_{[-1,1],\infty} \leq \left ( 1 + \sqrt{m+1} C_1 \right )\nmu{f - p}_{[-1,1],\infty} + C_2 \epsilon \nmu{p}_{[-\gamma,\gamma],\infty},\quad \forall p \in \bbP_n,
}
where
\be{
\label{C1-C2-def}
\begin{split}
C_1 & = C_1(m,n,\gamma,\epsilon) = \sup \{ \nmu{p}_{[-1,1],\infty} : p \in \bbP^{\epsilon,\gamma}_{m,n},\ \nm{p}_{m,\infty} \leq 1 \},
\\
C_2 & = C_2(m,n,\gamma,\epsilon) = \sup \{ \nmu{p -\cP^{\epsilon,\gamma}_{m,n}(p)}_{[-1,1],\infty} : p \in \bbP_{n},\ \nm{p}_{[-\gamma,\gamma],\infty} \leq \epsilon^{-1} \}.
\end{split}
}
Moreover, the condition number $\kappa(\cP^{\epsilon,\gamma}_{m,n})$ satisfies
\be{
\label{stab-bound}
C_1 \leq \kappa(\cP^{\epsilon,\gamma}_{m,n}) \leq \sqrt{m+1} C_1.
}
}
These constants are interpreted as follows. The first, $C_1$, measures how large an element of the space $\bbP^{\epsilon,\gamma}_{m,n}$ can be uniformly on $[-1,1]$ relative to its discrete uniform norm over the grid. The second, $C_2$, examines the effect of the $\epsilon$-truncation. Specifically, it measures the error in reconstructing a polynomial $p \in \bbP_n$ relative to its size on the extended interval. Note, in particular, that $C_2 = 0$ when $\epsilon = 0$ and $m \geq n$. We also have $C_1(m,n,\gamma,0) = B(m,n)$ in this case, where $B(m,n)$ is as in \R{coppersmith-rivlin}.
\prf{
Let $f \in C([-1,1])$ and $p \in \bbP_n$. Then
\eas{
\| f - \cP^{\epsilon,\gamma}_{m,n}(f) \|_{[-1,1],\infty}
& \leq \nmu{f - p}_{[-1,1],\infty} + \nmu{\cP^{\epsilon,\gamma}_{m,n}(f) - \cP^{\epsilon,\gamma}_{m,n}(p)}_{[-1,1],\infty} + \nmu{p - \cP^{\epsilon,\gamma}_{m,n}(p)}_{[-1,1],\infty}
\\
& \leq \nmu{f - p}_{[-1,1],\infty} + C_1 \nmu{\cP^{\epsilon,\gamma}_{m,n}(f) - \cP^{\epsilon,\gamma}_{m,n}(p)}_{m,\infty} + C_2 \epsilon \nmu{p}_{[-\gamma,\gamma],\infty}.
}
Here, in the second step, we used the fact that $\cP^{\epsilon,\gamma}_{m,n}$ is linear and its range is the space $\bbP^{\epsilon,\gamma}_{m,n}$. Now recall that $\cP^{\epsilon,\gamma}_{m,n}$ is an orthogonal projection with respect to $\ip{\cdot}{\cdot}_{m,2}$. Thus, using \R{discnormbds},
\eas{
\nmu{\cP^{\epsilon,\gamma}_{m,n}(f) - \cP^{\epsilon,\gamma}_{m,n}(p)}_{m,\infty}
& \leq \sqrt{(m+1)/2} \nmu{\cP^{\epsilon,\gamma}_{m,n}(f) - \cP^{\epsilon,\gamma}_{m,n}(p)}_{m,2}
\\
& \leq \sqrt{(m+1)/2} \nmu{f - p}_{m,2}
\\
& \leq \sqrt{m+1} \nmu{f-p}_{m,\infty}
\leq \sqrt{m+1} \nmu{f-p}_{[-1,1],\infty}.
}
Substituting this into the previous expression now gives \R{acc-bound}.
For the second result, we use the fact that $\cP^{\epsilon,\gamma}_{m,n}$ is linear once more to write
\bes{
\kappa(\cP^{\epsilon,\gamma}_{m,n}) = \sup_{\substack{f \in C([-1,1]) \\ \nm{f}_{m,\infty} \neq 0}} \frac{\nmu{\cP^{\epsilon,\gamma}_{m,n}(f) }_{[-1,1],\infty} }{\nm{f}_{m,\infty}}.
}
Notice that $\cP^{\epsilon,\gamma}_{m,n}(p) = p$ for all $p \in \bbP^{\epsilon,\gamma}_{m,n}$. Hence
\bes{
\kappa(\cP^{\epsilon,\gamma}_{m,n}) \geq \sup_{\substack{p \in \bbP^{\epsilon,\gamma}_{m,n} \\ \nm{p}_{m,\infty} \neq 0}} \frac{\nm{p}_{[-1,1],\infty} }{\nm{p}_{m,\infty}} = C_1,
}
which gives the lower bound in \R{stab-bound}. On the other hand, using \R{discnormbds} and the fact that $\cP^{\epsilon,\gamma}_{m,n}$ is an orthogonal projection once more, we have
\eas{
\nmu{\cP^{\epsilon,\gamma}_{m,n}(f) }_{[-1,1],\infty} &\leq C_1 \nmu{\cP^{\epsilon,\gamma}_{m,n}(f) }_{m,\infty}
\leq C_1 \sqrt{(m+1)/2} \nmu{\cP^{\epsilon,\gamma}_{m,n}(f) }_{m,2}
\\
& \leq C_1 \sqrt{(m+1)/2} \nmu{f}_{m,2}
\\
& \leq C_1 \sqrt{m+1} \nm{f}_{m,\infty}.
}
This gives the upper bound in \R{stab-bound}.
}
\subsection{Bounding the constants}
The next step is to estimate the constants $C_1$ and $C_2$ appearing in this lemma.
\lem{
\label{C1-C2-as-C}
Consider the setup of the previous lemma. Then the constants $C_1$ and $C_2$ defined in \R{C1-C2-def} satisfy
\bes{
C_1 \leq C \left(m,n,\gamma,\epsilon / (\sqrt{2} c_{n,\gamma}) \right ),
}
and
\bes{
C_2 \leq \sqrt{\gamma(m+1)} \cdot C \left ( m , n , \gamma , \sqrt{m+1} \epsilon / (\sqrt{2} c_{n,\gamma}) \right ),
}
where $C$ is as in \R{C-def-general} and $c_{n,\gamma} = \frac{n+1}{\sqrt{2\gamma}}$.
}
\prf{
Consider $C_1$ first. Let $p \in \bbP^{\epsilon,\gamma}_{m,n}$ with $\nm{p}_{m,\infty} \leq 1$. Then we can write $p = \sum_{\sigma_i > \epsilon} c_i \xi_i$ and, using the orthogonality of the $\xi_i$, we see that
\bes{
\nm{p}^2_{m,2} = \sum_{\sigma_i > \epsilon} \sigma^2_i |c_i|^2 ,\qquad \nm{p}^2_{[-\gamma,\gamma],2} = \sum_{\sigma_i > \epsilon} |c_i|^2.
}
Therefore
\bes{
\nm{p}_{[-\gamma,\gamma],2} \leq \epsilon^{-1} \nm{p}_{m,2} \leq \sqrt{2} \epsilon^{-1} \nm{p}_{m,\infty} \leq \sqrt{2} \epsilon^{-1}.
}
We also recall the following inequality over $\bbP_n$:
\be{
\label{poly-inf-2}
\nm{q}_{[-\gamma,\gamma],\infty} \leq c_{n,\gamma} \nm{q}_{[-\gamma,\gamma],2},\quad \forall q \in \bbP_n.
}
This can be shown directly by recalling that the classical Legendre polynomial $P_i(x)$ attains its maximum at $x = 1$ and takes value $P_i(1) = 1$. Hence, writing $q \in \bbP_n$ as $q = \sum^{n}_{i=0} c_i \psi_i$ and recalling \R{psii-def}, we get
\eas{
\nm{q}_{[-\gamma,\gamma],\infty} & \leq \sum^{n}_{i=0} |c_i| \sqrt{\frac{i+1/2}{\gamma}}
\leq \left ( \sum^{n}_{i=0} |c_i|^2 \right )^{1/2} \left ( \sum^{n}_{i=0} \frac{i+1/2}{\gamma} \right )^{1/2}
= c_{n,\gamma} \nm{q}_{[-\gamma,\gamma],2},
}
which establishes \R{poly-inf-2}.
Therefore, since $p \in \bbP^{\epsilon,\gamma}_{m,n} \subseteq \bbP_n$, we get
\bes{
\nm{p}_{[-\gamma,\gamma],\infty} \leq c_{n,\gamma} \nm{p}_{[-\gamma,\gamma],2} \leq \sqrt{2} c_{n,\gamma} \epsilon^{-1}.
}
We deduce that
\eas{
C_1 &\leq \sup \{ \| p \|_{[-1,1],\infty} : p \in \bbP^{\epsilon,\gamma}_{m,n},\ \nm{p}_{m,\infty} \leq 1,\ \nm{p}_{[-\gamma,\gamma],\infty} \leq \sqrt{2} c_{n,\gamma} \epsilon^{-1} \}
\\
& = C(m,n,\gamma,\epsilon / (\sqrt{2} c_{n,\gamma} ) ),
}
which gives the first result.
We now consider $C_2$. Let $p \in \bbP_n$ with $\nm{p}_{[-\gamma,\gamma],\infty} \leq \epsilon^{-1}$. Since $p \in \bbP_n$, we may write
\bes{
p = \sum^{n}_{i = 0} \frac{\ip{p}{\xi_i}_{m,2}}{\sigma^2_i} \xi_i,
}
and using \R{Feps-in-terms-of-SVs}, we may also write
\bes{
\cP^{\epsilon,\gamma}_{m,n}(p) = \sum_{\sigma_i > \epsilon} \frac{\ip{p}{\xi_i}_{m,2}}{\sigma^2_i} \xi_i.
}
Therefore
\bes{
p - \cP^{\epsilon,\gamma}_{m,n}(p) = \sum_{\sigma_i \leq \epsilon} \frac{\ip{p}{\xi_i}_{m,2}}{\sigma^2_i} \xi_i.
}
Using the fact that the $\xi_i$ are orthonormal over $[-\gamma,\gamma]$, we deduce that
\be{
\label{p-minus-Feps-p-1}
\nmu{p - \cP^{\epsilon,\gamma}_{m,n}(p)}^2_{[-\gamma,\gamma],2} = \sum_{\sigma_i \leq \epsilon} \frac{|\ip{p}{\xi_i}_{m,2} |^2}{\sigma^4_i} \leq \sum^{n}_{i = 0} \frac{|\ip{p}{\xi_i}_{m,2} |^2}{\sigma^4_i} = \nm{p}^2_{[-\gamma,\gamma],2},
}
and, using the fact that the $\xi_i$ are orthogonal with respect to the discrete semi-inner product $\ip{\cdot}{\cdot}_{m,2}$, we see that
\be{
\label{p-minus-Feps-p-2}
\nmu{p - \cP^{\epsilon,\gamma}_{m,n}(p)}^2_{m,2} = \sum_{\sigma_i \leq \epsilon} \frac{|\ip{p}{\xi_i}_{m,2} |^2}{\sigma^2_i} \leq \epsilon^2 \sum_{\sigma_i \leq \epsilon} \frac{|\ip{p}{\xi_i}_{m,2} |^2}{\sigma^4_i} \leq \epsilon^2 \nm{p}^2_{[-\gamma,\gamma],2}.
}
Now observe that we can write
\bes{
C_2 = \max \{ \nmu{q}_{[-1,1],\infty} : q \in \cA \},\qquad \cA = \left \{ q : q = p - \cP^{\epsilon,\gamma}_{m,n}(p),\ p \in \bbP_n,\ \nmu{p}_{[-\gamma,\gamma],\infty} \leq \epsilon^{-1} \right \}.
}
Let $q = p - \cP^{\epsilon,\gamma}_{m,n}(p) \in \cA$. Then $q \in \bbP_n$ and, due to \R{poly-inf-2} and \R{p-minus-Feps-p-1},
\bes{
\nmu{q}_{[-\gamma,\gamma],\infty} \leq c_{n,\gamma} \nmu{q}_{[-\gamma,\gamma],2} \leq c_{n,\gamma} \nmu{p}_{[-\gamma,\gamma],2} \leq \sqrt{2 \gamma} c_{n,\gamma} \nmu{p}_{[-\gamma,\gamma],\infty} \leq \sqrt{2 \gamma} c_{n,\gamma} /\epsilon.
}
Also, by \R{discnormbds}, \R{p-minus-Feps-p-2} and the fact that $\nm{p}_{[-\gamma,\gamma],\infty} \leq \epsilon^{-1}$,
\eas{
\nmu{q}_{m,\infty}& \leq \sqrt{(m+1)/2} \nmu{q}_{m,2} \leq \sqrt{(m+1)/2} \epsilon \nm{p}_{[-\gamma,\gamma],2} \leq \sqrt{(m+1)/2} \sqrt{2 \gamma} \epsilon \nm{p}_{[-\gamma,\gamma],\infty},
}
and therefore $ \nmu{q}_{m,\infty} \leq \sqrt{(m+1)/2} \sqrt{2 \gamma}$.
Hence
\bes{
q \in \cB : = \left \{ q \in \bbP_n : \nmu{q}_{[-\gamma,\gamma],\infty} \leq \sqrt{2 \gamma} c_{n,\gamma} / \epsilon,\ \nmu{q}_{m,\infty} \leq \sqrt{(m+1)/2} \sqrt{2 \gamma} \right \},
}
which implies that $C_2 \leq \max \{ \nmu{q}_{[-1,1],\infty} : q \in \cB \}$ and, after renormalizing,
\bes{
C_2 \leq \sqrt{(m+1)/2} \sqrt{2 \gamma} \max \left \{ \nmu{p}_{[-\gamma,\gamma],\infty} : p \in \bbP_{n},\ \nm{p}_{m,\infty} \leq 1,\ \nm{p}_{[-\gamma,\gamma],\infty} \leq \frac{c_{n,\gamma} }{\sqrt{(m+1)/2} \epsilon} \right \}.
}
This gives the second result.
}
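The Nikolskii-type inequality \R{poly-inf-2} used in the proof above can be checked numerically. The following sketch (an illustration with an arbitrary choice of $n$ and $\gamma$) draws random polynomials of degree $n$ and verifies that the ratio of the sup-norm to the $L^2$-norm on $[-\gamma,\gamma]$ never exceeds $c_{n,\gamma} = (n+1)/\sqrt{2\gamma}$:

```python
import numpy as np
from numpy.polynomial.legendre import legvander, leggauss

rng = np.random.default_rng(0)
n, gamma = 20, 2.0
c_n_gamma = (n + 1) / np.sqrt(2.0 * gamma)

# Gauss-Legendre rule mapped to [-gamma, gamma]; exact for degree <= 2n + 1
xq, wq = leggauss(n + 1)
xq, wq = gamma * xq, gamma * wq
tt = np.linspace(-gamma, gamma, 5001)            # fine grid for the sup-norm

V_sup = legvander(tt / gamma, n)                 # Legendre basis on the fine grid
V_q = legvander(xq / gamma, n)                   # ... and at the quadrature nodes

worst_ratio = 0.0
for _ in range(200):
    coef = rng.standard_normal(n + 1)            # random degree-n polynomial
    sup_norm = np.max(np.abs(V_sup @ coef))
    l2_norm = np.sqrt(np.sum(wq * (V_q @ coef) ** 2))
    worst_ratio = max(worst_ratio, float(sup_norm / l2_norm))
```

No random polynomial violates the bound, consistent with \R{poly-inf-2}.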
\subsection{Main result on accuracy and conditioning}
We now summarize these two lemmas with the following theorem:
\thm{
[Accuracy and conditioning of polynomial frame approximation]
\label{t:acc-cond-poly-frame}
Let $\epsilon > 0$, $\gamma \geq 1$, $c > 1$ and $m , n \geq 1$ be such that
\be{
\label{C-cond-main-acc-stab}
C(m,n,\gamma,\epsilon ) \leq c.
}
Then the polynomial frame approximation $\cP^{\epsilon',\gamma}_{m,n}$ with $\epsilon' = \epsilon (n+1) / \sqrt{\gamma}$ satisfies
\bes{
\kappa(\cP^{\epsilon',\gamma}_{m,n}) \leq c\sqrt{m+1},
}
and, for any $f \in C([-1,1])$,
\bes{
\nmu{f - \cP^{\epsilon',\gamma}_{m,n}(f)}_{[-1,1],\infty} \leq 2 c \sqrt{m+1} \inf_{p \in \bbP_n} \left \{ \nm{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty} \right \}.
}
}
\prf{
Observe that $C(m,n,\gamma,\epsilon)$ is a decreasing function of $\epsilon$. Hence, by this, Lemma \ref{C1-C2-as-C} and the fact that $\epsilon' = \sqrt{2} c_{n,\gamma} \epsilon$,
\bes{
C_1(m,n,\gamma,\epsilon') \leq C(m,n,\gamma,\epsilon' / (\sqrt{2} c_{n,\gamma}) ) = C(m,n,\gamma,\epsilon),
}
and
\bes{
C_{2}(m,n,\gamma,\epsilon') \leq \sqrt{\gamma(m+1)}C(m,n,\gamma,\sqrt{m+1} \epsilon' / (\sqrt{2} c_{n,\gamma}) ) \leq \sqrt{\gamma(m+1)} C(m,n,\gamma,\epsilon).
}
Therefore, the condition \R{C-cond-main-acc-stab} implies that
\bes{
C_1(m,n,\gamma,\epsilon' ) \leq c,\quad C_2(m,n,\gamma,\epsilon' ) \leq c \sqrt{\gamma(m+1)}.
}
We now apply Lemma \ref{l:poly-frame-accuracy-stability} to get that $\kappa(\cP^{\epsilon',\gamma}_{m,n}) \leq c \sqrt{m+1}$. For the error bound, we have
\eas{
\nmu{f - \cP^{\epsilon',\gamma}_{m,n}(f)}_{[-1,1],\infty} \leq & \left ( 1 + c\sqrt{m+1} \right ) \nm{f-p}_{[-1,1],\infty} + c \sqrt{\gamma(m+1)} \epsilon' \nm{p}_{[-\gamma,\gamma],\infty}
\\
\leq & \left ( 1 + c\sqrt{m+1} \right ) \left ( \nm{f-p}_{[-1,1],\infty} + (n+1)\epsilon\nm{p}_{[-\gamma,\gamma],\infty} \right ),
}
where in the second step we used the definition of $\epsilon'$.
The result now follows, since $1 + c\sqrt{m+1} \leq 2 c \sqrt{m+1}$.
}
\section{Maximal behaviour of polynomials bounded at equispaced nodes}\label{s:maximal-behaviour-proof}
In this section, we prove Theorem \ref{t:polynomial-inequality-main}.
\subsection{Pointwise Markov inequality }
We first require a pointwise Markov inequality. The following lemma may be viewed as a generalization of the pointwise Bernstein inequality for algebraic polynomials $p \in \bbP_n$:
\be{\lb{B}
|p'(x)| \le \frac{n}{\sqrt{1-x^2}} \|p\|_{[-1,1],\infty}, \qquad |x| < 1,
}
to higher derivatives. It appeared in \cite{konyagin2021stable} in a slightly less general form. We provide essentially the same proof for completeness.
\begin{lemma} \lb{M}
For any $k,n \in \bbN$ and $\delta \in (0,1)$ such that $k < n \sqrt{(1-\delta^2)/2}$,
and for any polynomial $p \in \bbP_n$, we have
\be{\lb{k}
|p^{(k)}(x)| \le \frac{1.251 n^k}{(1-x^2)^{k/2}} \|p\|_{[-1,1],\infty}, \qquad |x| \le \delta.
}
\end{lemma}
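As a quick numerical sanity check of \rf[k] (a sketch of our own, not part of the proof), one can test the bound on the Chebyshev polynomial $T_n$, which is extremal for inequalities of Markov type:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

n, k, delta = 50, 5, 0.6
assert k < n * np.sqrt((1.0 - delta**2) / 2.0)     # hypothesis of the lemma

Tn = Chebyshev.basis(n)                            # ||T_n|| on [-1,1] equals 1
dTk = Tn.deriv(k)                                  # k-th derivative
x = np.linspace(-delta, delta, 2001)
bound = 1.251 * float(n)**k / (1.0 - x**2)**(k / 2.0)
violation = float(np.max(np.abs(dTk(x)) - bound))  # should be negative
```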
Note that we cannot get \rf[k] just by iterating the Bernstein inequality \rf[B]. Such iterated use produces a much weaker result
$$
|p^{(k)}(x)| \le \frac{n^k k^{k/2}}{(1-x^2)^{k/2}} \|p\|_{[-1,1],\infty}, \qquad |x| < 1.
$$
\prf{
We will use the following known results. First, Schaeffer and Duffin \cite{schaeffer1938some} proved that, for any $k,n \in \bbN$ and $p \in \bbP_n$,
\be{\label{sd1}
|p^{(k)}(x)| \le D_{n,k}(x) \|p\|_{[-1,1],\infty}, \quad |x| < 1,
}
where
\be{\lb{D}
D_{n,k}(x) = \left|(\cos n \arccos x)^{(k)} + i (\sin n \arccos x)^{(k)}\right|.
}
Second, Shadrin \cite{shadrin2004twelve} derived the explicit expression for $D_{n,k}(\cdot)$. Specifically,
\be{\lb{Df}
\frac{1}{n^2}\big(D_{n,k}(x)\big)^2
= \sum_{m=0}^{k-1} \frac{b_{m,n}}{(1-x^2)^{k+m}},
}
where
\ea{
b_{m,n}
&= c_{m,k} (n^2-(m+1)^2)\cdots (n^2-(k-1)^2), \lb{b} \\
c_{m,k}
&:= \begin{cases} 1 & m=0, \\
{k-1+m \choose 2m} (2m-1)!!^2 & m\geq 1
\end{cases}. \lb{c}
}
In particular,
\eas{
\frac{1}{n^2}(D_{n,1}(x))^2
& =
\frac{1}{1-x^2} , \\
\frac{1}{n^2}(D_{n,2}(x))^2
& =
\frac{(n^2-1)}{(1-x^2)^2} + \frac{1}{(1-x^2)^3},\\
\frac{1}{n^2} (D_{n,3}(x))^2
& =
\frac{(n^2-1)(n^2-4)}{(1-x^2)^3} + \frac{3(n^2-4)}{(1-x^2)^4}
+ \frac{9}{(1-x^2)^5}.
}
Now, using \rf[Df] and \rf[b], we see that
\eas{
(D_{n,k}(x))^2
&= n^2 \sum_{m=0}^{k-1}
\frac{c_{m,k}}{(1-x^2)^{k+m}}
(n^2-(m+1)^2)\cdots(n^2-(k-1)^2) \\
&= \frac{n^2(n^2\!-\!1^2)\cdots(n^2\!-\!(k\!-\!1)^2)} {(1-x^2)^k}
\left(1 + \sum_{m=1}^{k-1} \frac{c_{m,k}}{(1-x^2)^m}
\frac{1}{(n^2\!-\!1^2)\cdots(n^2\!-\!m^2)} \right) \\
&=: (A_{n,k}(x))^2 \left( 1 + B_{n,k}(x) \right).
}
Clearly
$$
(A_{n,k}(x))^2 \le \frac{n^{2k}}{(1-x^2)^k}.
$$
We now find an upper bound for the sum $B_{n,k}(x)$. Using \rf[c] we expand $c_{m,k}$ as
$$
c_{m,k}
= {k-1+m \choose 2m}(2m-1)!!^2
= \frac{(2m-1)!!^2}{(2m)!} \frac{(k-1+m)!}{(k-1-m)!}.
$$
In the latter expression, we estimate the first factor using Wallis' inequality (see, e.g., \cite[Chpt.\ 22]{bullen2015dictionary}):
$$
\frac{(2m-1)!!^2}{(2m)!} = \frac{(2m-1)!!}{(2m)!!} < \frac{1}{\sqrt{\pi m}} \le \frac{1}{\sqrt{\pi}}.
$$
For the second factor, we observe that
$$
\frac{(k-1+m)!}{(k-1-m)!}
= \frac{k}{k+m} (k^2-1^2)\cdots(k^2-m^2)
< (k^2-1^2)\cdots(k^2-m^2).
$$
Therefore
$$
B_{n,k}(x)
\le \frac{1}{\sqrt{\pi}} \sum_{m=1}^{k-1} \frac{1}{(1-x^2)^m}
\frac{(k^2-1^2)\cdots(k^2-m^2)}{(n^2\!-\!1^2)\cdots(n^2\!-\!m^2)}
\le \frac{1}{\sqrt{\pi}} \sum_{m=1}^{k-1} \left( \frac{1}{1-x^2} \frac{k^2}{n^2}\right)^m,
$$
where in the second step we used the inequality $\frac{k^2-s^2}{n^2-s^2} < \frac{k^2}{n^2}$.
Finally, if $k \le n \sqrt{(1-\delta^2)/2}$ and $|x| \le \delta$, then $\frac{1}{1-x^2} \frac{k^2}{n^2} \le \frac{1}{2}$, and
$$
B_{n,k}(x)
\le \frac{1}{\sqrt{\pi}} \sum_{m=1}^{k-1} \frac{1}{2^m} < \frac{1}{\sqrt{\pi}}.
$$
Altogether, this gives
$$
D_{n,k}(x) < \frac{c_1 n^k}{(1-x^2)^{k/2}}, \quad |x| \le \delta, \qquad c_1 = (1+ 1/\sqrt{\pi})^{1/2} < 1.251,
$$
as required.
}
\begin{corollary} \lb{1/2}
For any $k,n \in \bbN$ and $\delta \in (0,1)$ such that $k < n \sqrt{(1-\delta^2)/2}$,
and for any polynomial $p \in \bbP_n$, we have
\be{\lb{Markov}
\|p^{(k)}\|_{[-\delta,\delta],\infty}
\le \frac{1.251 n^k}{(1-\delta^2)^{k/2}} \|p\|_{[-1,1],\infty}.
}
\end{corollary}
\subsection{A lemma on best approximation}
Next, we require the following lemma on best approximation by polynomials. Although well known, we give a proof for completeness.
\begin{lemma} \lb{T}
Let $f \in C^r([a,b])$. Then
$$
E_{r-1}(f) := \inf_{p \in \bbP_{r-1}} \|f - p\|_{[a,b],\infty} \le \frac{2}{r!} \left( \frac{b-a}{4} \right)^r \|f^{(r)}\|_{[a,b],\infty}.
$$
\end{lemma}
\prf{
Let $p_\Delta \in \bbP_{r-1}$ be the polynomial that interpolates $f$ at the $r$ points of the set $\Delta = (x_i)^{r}_{i=1}$. By the Lagrange interpolation formula, for any $x \in [a,b]$ we have
$$
f(x) - p_\Delta(x) = \frac{1}{r!} \omega_\Delta(x) f^{(r)}(\xi), \qquad \omega_\Delta(x) := \prod_{i=1}^r (x-x_i),
$$
for some $\xi = \xi_x \in [a,b]$. It follows that
$$
E_{r-1}(f)
\le \inf_{\substack{\Delta\subset[a,b] \\ |\Delta| = r}} \|f - p_\Delta\|_{[a,b],\infty}
\le \frac{1}{r!} \inf_{\substack{\Delta\subset[a,b] \\ |\Delta| = r}} \|\omega_\Delta\|_{[a,b],\infty} \cdot \|f^{(r)}\|_{[a,b],\infty}.
$$
It is well-known that the infimum of $\|\omega_\Delta\|_{[a,b],\infty}$ is attained for the set $\Delta_*$
of zeros of the Chebyshev polynomial $T_r^*$ on $[a,b]$, and that for this set we have
$$
\|\omega_{\Delta_*}\|_{[a,b],\infty} = \frac{1}{2^{r-1}} \left( \frac{b-a}{2} \right)^r.
$$
This completes the proof.
}
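The proof suggests a direct numerical check (an illustrative sketch under the stated assumptions): interpolating at the $r$ Chebyshev zeros mapped to $[a,b]$ should produce an error within the bound of Lemma \ref{T}.

```python
import math
import numpy as np

def interp_error_and_bound(f, fr_max, a, b, r):
    # interpolate f at the r Chebyshev zeros mapped to [a, b]
    zeros = np.cos((2.0 * np.arange(1, r + 1) - 1.0) * np.pi / (2.0 * r))
    xi = 0.5 * (a + b) + 0.5 * (b - a) * zeros
    coef = np.polyfit(xi, f(xi), r - 1)            # degree r-1 interpolant
    t = np.linspace(a, b, 4001)
    err = float(np.max(np.abs(f(t) - np.polyval(coef, t))))
    bound = (2.0 / math.factorial(r)) * ((b - a) / 4.0)**r * fr_max
    return err, bound

# f = exp on [0, 1]: |f^{(5)}| <= e on this interval
err, bound = interp_error_and_bound(np.exp, math.e, 0.0, 1.0, 5)
```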
\subsection{Bounding the norm on a subinterval}
We are now in a position to state and prove the main result of this section. Note that, for convenience, we formulate this result in terms of intervals $[-\delta,\delta]$ and $[-1,1]$ as opposed to intervals $[-1,1]$ and $[-\gamma,\gamma]$. The analogous result for the latter is obtained by setting $\delta = 1/\gamma$.
\begin{theorem}\label{t:poly-inequality-general}
Given $\epsilon> 0$, let $p \in \bbP_n$ satisfy
$$
\|p\|_{[-1,1],\infty} \le 1/\epsilon,
$$
and assume that, for some $\delta \in (0,1)$ and $m \in \bbN$, we also have
$$
|p(x_i)| \le 1, \qquad x_i = - \delta + 2\delta i/m,\quad i = 0,\ldots, m.
$$
If
\be{\lb{m}
m \ge \max \left\{ 12 c_1 n \log (1/\epsilon)\frac{\delta}{\sqrt{1-\delta^2}}, c_1 \log^2(1/\epsilon) \right\},
}
for some $0 < \epsilon \leq 1/\E$ and $c_1 \ge \max \{1/\lfloor \log(1/\epsilon) \rfloor, 3/n\}$,
then
$$
\|p\|_{[-\delta,\delta],\infty} \le C_0 := \beta^{3/c_1}(1+8\epsilon) +8 \epsilon,
$$
where $\beta > 1$ is the upper constant in the Coppersmith and Rivlin bound \R{coppersmith-rivlin}.
\end{theorem}
This theorem is essentially a more general version of Theorem \ref{t:polynomial-inequality-main}, in which the dependence of the constant $C_0$ on the factor $c_1$ appearing in \R{m} is made explicit. We now show how it implies Theorem \ref{t:polynomial-inequality-main}.
\prf{
[Proof of Theorem \ref{t:polynomial-inequality-main}]
We use Theorem \ref{t:poly-inequality-general} with $\delta = 1/\gamma$ and $c_1 = 3$. Recall that $0 < \epsilon \leq 1/\E$ and $n \geq \sqrt{\gamma^2-1} \log(1/\epsilon) > 0$ by assumption. In particular, $n \geq 1$ since it is an integer. This implies that
\bes{
\max \{1/\lfloor\log(1/\epsilon) \rfloor, 3/n\} \leq \max \{ 1/\lfloor\log(\E) \rfloor, 3 \} = 3.
}
Hence the value $c_1 = 3$ is permitted. Observe now that, since $\delta = 1/\gamma$,
\eas{
\max \left\{ 12 c_1 n\log (1/\epsilon) \frac{\delta}{\sqrt{1-\delta^2}}, c_1 \log^2(1/\epsilon) \right\} &\leq 12 c_1 n \log (1/\epsilon) \frac{\delta}{\sqrt{1-\delta^2}}
\\
& = 36 n \log(1/\epsilon) \frac{1}{\sqrt{\gamma^2-1}}.
}
Therefore, \R{m-n-eps-poly-growth} implies that \R{m} holds. We deduce that
\bes{
C(m,n,\gamma,\epsilon) \leq C_0 = \beta (1+8\epsilon) +8\epsilon \leq \beta \left ( 1 + 8/\E \right ) +8/\E < 4 \beta + 3,
}
as required.
}
\prf{
[Proof of Theorem \ref{t:poly-inequality-general}]
For a given $p \in \bbP_n$ satisfying the assumptions of the theorem, set
$$
k := \lfloor \log (1/\epsilon) \rfloor.
$$
If $k > n \sqrt{(1-\delta^2)/2}$, then the condition \rf[m] implies that
$$
m > c_1 n^2 \max \left \{3\sqrt{2} \delta, (1-\delta^2)/2 \right\} > c_1 n^2 /3.
$$
Hence the polynomial $p$ is bounded on the interval $[-\delta, \delta]$ at $m+1 > c_1 n^2/3 + 1$ equidistant points. We deduce that
\be{\lb{p_i}
\|p\|_{[-\delta,\delta],\infty} \le \beta^{3/c_1},
}
by the Coppersmith and Rivlin bound \R{coppersmith-rivlin}.
We now suppose that $k \le n \sqrt{(1-\delta^2)/2}$, so that the Markov-type inequality \rf[Markov] is applicable. First, consider the partition of the interval $[-\delta,\delta]$ by the $m+1$ equispaced points
$$
x_i = - \delta + 2 \delta i/m, \qquad i = 0,\ldots,m, \qquad
m \ge \max \left \{12 c_1 n k \frac{\delta}{\sqrt{1-\delta^2}}, c_1 k^2 \right\}.
$$
Take any subinterval $I = [x_r, x_s] \subset [-\delta,\delta]$ containing $\lfloor c_1 k^2 \rfloor + 1$ points,
so that
\be{\lb{I}
|I| = \frac{\lfloor c_1 k^2 \rfloor}{m} 2 \delta \le \frac{c_1 k^2 }{m} 2\delta \le \frac{k \sqrt{1-\delta^2}}{6n},
}
where we used the inequality $m \ge 12 c_1 n k \frac{\delta}{\sqrt{1-\delta^2}}$.
Now, with the same $k = \lfloor \log(1/\epsilon) \rfloor$, let $Q \in \bbP_k$ be the polynomial of best approximation to $p\in\bbP_n$ from $\bbP_k$ on $I$:
$$
\|p - Q\|_{I,\infty} = \inf_{q \in \bbP_k} \|p -q\|_{I,\infty} =: E_k(p) \le E_{k-1}(p).
$$
By Lemma \ref{T},
$$
\|p-Q\|_{I,\infty} \le \frac{2}{k!} \left(\frac{|I|}{4}\right)^k \|p^{(k)}\|_{I,\infty},
$$
and by Corollary \ref{1/2}
$$
\|p^{(k)}\|_{I,\infty}
\le \|p^{(k)}\|_{[-\delta,\delta],\infty}
\le 1.251 n^k (1/\sqrt{1-\delta^2})^k \|p\|_{[-1,1],\infty}.
$$
Hence, using the well-known estimate $k! \ge \sqrt{2\pi}\sqrt{k} (k/\E)^k$, and the bound \rf[I] for $|I|$, we obtain
\eas{
\|p-Q\|_{I,\infty}
&\le \frac{2.502}{\sqrt{2\pi} \sqrt{k}} \frac{\E^k}{k^k} \left( \frac{k \sqrt{1-\delta^2}}{4 \cdot 6n} \right)^k
n^k (1/\sqrt{1-\delta^2})^k \|p\|_{[-1,1],\infty} \\
&\le \rho^k \|p\|_{[-1,1],\infty} , \qquad \rho = \frac{\E}{24} < \frac{1}{\E^2}.
}
Now, recalling that $k = \lfloor \log (1/\epsilon) \rfloor$, whereas $\|p\|_{[-1,1],\infty} \le 1/\epsilon$, we conclude that
\be{\lb{pQ}
\|p-Q\|_{I,\infty} \le \E^{-2 \log(1/\epsilon) + 2} 1/\epsilon \le \E^2 \epsilon < 8 \epsilon.
}
On the other hand, by construction the interval $I = [x_r, x_s]$
contains $\lfloor c_1 k^2 \rfloor +1$ equispaced points from the $(x_i)$. By \R{coppersmith-rivlin},
for $Q \in \bbP_k$, we have
\be{\lb{Qi}
\|Q\|_{I,\infty} \le C_2 \max_{x_i \in I} |Q(x_i)|, \qquad C_2 = \beta^{c_3}, \qquad
c_3 = \frac{k^2}{\lfloor c_1 k^2 \rfloor} \le \frac{k^2}{c_1 k^2/2} \le \frac{2}{c_1}.
}
Here, for the Coppersmith-Rivlin bound, the assumption $c_1k^2 \ge k$, i.e., boundedness of $Q \in \bbP_k$
on at least $k+1$ points, is required, and similarly we required $c_1 n^2/3 \ge n$ in obtaining \rf[p_i].
This is where the theorem's condition
$$
c_1 \ge \max\{1/k,3/n\} = \max\{1/ \lfloor\log(1/\epsilon) \rfloor, 3/n\}
$$
comes from. Also, since $c_1 k^2 \ge k \ge 1$, we have the inequality $\lfloor c_1 k^2 \rfloor \ge c_1 k^2/2$, which we used in \rf[Qi].
We now conclude the proof. Using \rf[pQ] and \rf[Qi], we obtain
\eas{
\|p\|_{I,\infty}
\le \|Q\|_{I,\infty} + 8 \epsilon
\le C_2 \max_{x_i \in I} |Q(x_i)| + 8 \epsilon
\le C_2 (\max_{x_i \in I} |p(x_i)| + 8 \epsilon) + 8 \epsilon
&\le C_2 (1+8\epsilon) + 8\epsilon,
}
as required.
}
\section{Proofs of the possibility and impossibility theorems}\label{s:main-thm-proof}
We now prove Theorems \ref{t:possibility-thm}--\ref{t:impossibility-extended}. For these, we first require the following result.
\lem{
\label{poly-approx-bounds}
Suppose that $E_{\theta} \subset \bbC$ is the Bernstein ellipse \R{Bernstein-ellipse} with parameter $\theta > 1$ and $f \in B(E_{\theta})$. Then there exists a polynomial $p \in \bbP_n$ for which
\bes{
\nmu{f - p}_{[-1,1],\infty} \leq \frac{2}{\theta - 1} \nm{f}_{E_{\theta},\infty} \theta^{-n}.
}
Moreover, this polynomial satisfies
\bes{
\nm{p}_{E_{\tau},\infty} \leq \frac{2}{1-\tau/\theta} \nm{f}_{E_{\theta},\infty},\qquad 1 < \tau < \theta
}
and
\bes{
\nm{p}_{E_{\tau},\infty} \leq \left ( \frac{\tau}{\theta} \right )^n \frac{2\tau/\theta}{\tau/\theta-1} \nm{f}_{E_{\theta},\infty},\qquad \tau > \theta.
}
}
\prf{
This result is essentially standard; we include the proof in order to obtain the explicit bounds for $p$.
Since $f \in B(E_{\theta})$, its Chebyshev expansion converges uniformly on $[-1,1]$, i.e.
\bes{
f(x) = \sum^{\infty}_{k=0} c_k \phi_k(x),\quad \phi_k(x) = \cos(k \arccos(x)),
}
and its coefficients satisfy $|c_k| \leq 2 \nm{f}_{E_{\theta},\infty} \theta^{-k}$ \cite[Thm.\ 8.1]{trefethen2013approximation}. Let $p = \sum^{n}_{k=0} c_k \phi_k$. Then
\bes{
\nmu{f-p}_{[-1,1],\infty} \leq 2 \nm{f}_{E_{\theta},\infty} \sum_{k > n} \theta^{-k} .
}
Evaluating the sum gives the first result.
For the other results, recall that the Bernstein ellipse is given by $E_{\tau} = \left \{ J(z) : z \in \bbC,\ 1 \leq |z | \leq \tau \right \}$, where $J(z) = \frac12(z+z^{-1})$ is the Joukowsky map, and the Chebyshev polynomials satisfy $\phi_k(J(z)) = \frac12 \left ( z^k + z^{-k} \right )$.
Hence $\nm{\phi_k }_{E_{\tau},\infty} \leq \tau^{k}$. Therefore
\bes{
\nm{p}_{E_{\tau},\infty} \leq 2\nm{f}_{E_{\theta},\infty} \sum^{n}_{k=0} (\tau / \theta)^k,
}
which yields the result.
}
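The coefficient bound $|c_k| \leq 2 \nm{f}_{E_{\theta},\infty} \theta^{-k}$ can be verified numerically for a simple function with a real pole. The sketch below (an illustration of our own) uses Chebyshev interpolation coefficients as a proxy for the true coefficients, which is accurate for small $k$, and stops before the rounding-error floor is reached:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebinterpolate

f = lambda x: 1.0 / (2.0 - x)   # pole at x = 2; analytic on E_theta for theta < 2 + sqrt(3)
theta = 3.0

# max of |f| over the boundary of the Bernstein ellipse, J(theta * e^{i phi})
phi = np.linspace(0.0, 2.0 * np.pi, 4001)
z = theta * np.exp(1j * phi)
M = float(np.max(np.abs(f(0.5 * (z + 1.0 / z)))))   # equals 3 for this f and theta

# interpolation coefficients approximate the true Chebyshev coefficients c_k
c = np.abs(chebinterpolate(f, 80))
k = np.arange(26)               # stay well above the rounding-error floor
decay_ok = bool(np.all(c[k] <= 2.0 * M * theta**(-k)))
```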
\prf{
[Proof of Theorem \ref{t:possibility-thm}]
We use Theorem \ref{t:acc-cond-poly-frame}. Note that the condition \R{m-n-relation} implies that \R{m-n-eps-poly-growth} holds. Therefore Theorem \ref{t:polynomial-inequality-main} implies that \R{C-cond-main-acc-stab} holds with $c$ as defined therein.
The desired bound for the condition number follows immediately. For the error bound, let $f \in B(E_{\theta})$, where $\theta > \gamma + \sqrt{\gamma^2-1}$, and let $p \in \bbP_n$ be as in the previous lemma. Since $[-\gamma,\gamma] \subset E_{\tau}$ where $\tau = \gamma + \sqrt{\gamma^2-1} < \theta$ by assumption, this lemma gives
\eas{
\nmu{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty} & \leq \left ( \frac{2}{\theta-1} \theta^{-n} + \frac{2(n+1)\epsilon}{1-(\gamma + \sqrt{\gamma^2-1}) / \theta} \right ) \nm{f}_{E_{\theta},\infty}
\\
& \leq g(\theta,\gamma) \left ( \theta^{-n} + n \epsilon \right ) \nm{f}_{E_{\theta},\infty},
}
for some function $g(\theta,\gamma)$ depending on $\theta$ and $\gamma$ only. Here we also use the fact that $n \geq 1$, which follows from the fact that $n$ is an integer and $n \geq \log(1/\epsilon) > 0$ by assumption. Supposing that \R{C-cond-main-acc-stab} holds, Theorem \ref{t:acc-cond-poly-frame} now gives, up to a constant change in $g(\theta,\gamma)$,
\bes{
\nmu{f - \cP^{\epsilon',\gamma}_{m,n}(f) }_{[-1,1],\infty} \leq c \sqrt{m} g(\theta,\gamma) \left ( \theta^{-n} + n \epsilon \right ) \nm{f}_{E_{\theta},\infty},
}
which is the desired error bound with respect to $n$. To obtain the error bound with respect to $m$, we simply notice that \R{m-n-relation} implies that $m \leq 36 n \log(1/\epsilon) / \sqrt{\gamma^2-1} + 1 = n/c^*+1$. Hence $\theta^{-n} \leq \theta^{c^*(1-m)} = \rho^{1-m}$, as required.
}
\prf{
[Proof of Theorem \ref{t:possibility-slower-exp}]
The overall argument is similar to the previous proof. The only difference is the estimation of the term
\be{
\label{best-approx-term}
\nmu{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty}.
}
Suppose that $f$ is analytic in $E = E_{\theta}$ for some $\theta$ with $1 < \theta < \tau : = \gamma + \sqrt{\gamma^2-1}$.
In other words, the Bernstein ellipse $E_{\theta}$ does not contain the extended interval $[-\gamma,\gamma]$. Let $1 \leq k \leq n$ and $p \in \bbP_k$ be the polynomial guaranteed by Lemma \ref{poly-approx-bounds}. Then
\eas{
\nmu{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty} & \leq \left ( \frac{2}{\theta-1} \theta^{-k} + \left ( \frac{\tau}{\theta} \right )^k \frac{2 \tau / \theta}{\tau/\theta-1} (n+1)\epsilon \right )\nm{f}_{E_{\theta},\infty}
\\
& \leq g(\theta,\gamma) \left ( \theta^{-k} + \left ( \frac{\tau}{\theta} \right )^k n \epsilon \right ) \nm{f}_{E_{\theta},\infty}.
}
We now choose
\bes{
k = \min \left \{ n , \left \lfloor \frac{\log(1/\epsilon)}{\log(\tau)} \right \rfloor \right \},
}
so that
\bes{
\left ( \frac{\tau}{\theta} \right )^k \epsilon \leq \epsilon^{1-\frac{\log(\tau/\theta) }{ \log(\tau) }} = \epsilon^{\frac{\log(\theta)}{\log(\tau)} },
}
and
\bes{
\theta^{-k} \leq \theta^{-n} + \theta^{1-\frac{\log(1/\epsilon)}{\log(\tau)}} = \theta^{-n} + \theta \epsilon^{\frac{\log(\theta)}{\log(\tau)}}.
}
Since $n \geq 1$, we deduce that
\bes{
\nmu{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty} \leq 2 g(\theta,\gamma) \left ( \theta^{-n} + n \epsilon^{\frac{\log(\theta)}{\log(\tau)}} \right ) \nm{f}_{E_{\theta},\infty},
}
as required.
}
\prf{
[Proof of Theorem \ref{t:possibility-algebraic}]
As in the previous proof, we only need to obtain the desired estimate for the term \R{best-approx-term}. Let $f \in C^{k}([-1,1])$. Then $f$ has a $C^k$-extension to the interval $[-\gamma,\gamma]$. Specifically, there is a function $\tilde{f} \in C^{k}([-\gamma,\gamma])$ satisfying $\tilde{f}(x) = f(x)$ for all $x \in [-1,1]$ and
\be{
\label{bounded-extension}
\nmu{\tilde{f}}_{C^{k}([-\gamma,\gamma])} \leq c_{k,\gamma} \nm{f}_{C^{k}([-1,1])},
}
for some constant $c_{k,\gamma} \geq 1$ depending on $k$ and $\gamma$ only. Since $\tilde{f} \in C^{k}([-\gamma,\gamma])$ a classical result (see, e.g., \cite[\S 4.6]{cheney1982introduction}) gives that
\bes{
\inf_{p \in \bbP_n} \nmu{\tilde{f} - p}_{[-\gamma,\gamma],\infty} \leq c'_{k,\gamma} n^{-k} \nmu{\tilde{f}^{(k)}}_{[-\gamma,\gamma],\infty},
}
for some $c'_{k,\gamma} > 0$.
Observe that
\bes{
\nmu{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty} \leq 2 \nmu{\tilde{f} - p}_{[-\gamma,\gamma],\infty} + (n+1) \epsilon \nmu{\tilde{f}}_{[-\gamma,\gamma],\infty}.
}
Therefore
\eas{
\inf_{p \in \bbP_n} \left \{ \nmu{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty} \right \} & \leq 2 c'_{k,\gamma} n^{-k} \nmu{\tilde{f}^{(k)}}_{[-\gamma,\gamma],\infty} + (n+1) \epsilon \nmu{\tilde{f}}_{[-\gamma,\gamma],\infty}
\\
& \leq \left ( 2 c'_{k,\gamma} n^{-k} + (n+1) \epsilon \right ) \nmu{\tilde{f}}_{C^k([-\gamma,\gamma])}
\\
& \leq 2 \max \{c'_{k,\gamma} ,1 \} c_{k,\gamma} \left ( n^{-k} + n \epsilon \right ) \nmu{f}_{C^k([-1,1])},
}
where in the last step we used \R{bounded-extension} and the fact that $n \geq 1$.
This gives the desired result.
}
We conclude this section with the proof of Theorem \ref{t:impossibility-extended}.
\prf{
[Proof of Theorem \ref{t:impossibility-extended}]
The proof is similar to that of the impossibility theorem shown in \cite{platte2011impossibility}. First observe that $\cR_m(0) = 0$ for any approximation procedure satisfying \R{extended-error-bound}. Now let $k \in \bbN_0$. Then the condition number \R{kappa-def} satisfies
\bes{
\kappa(\cR_m) \geq \lim_{\delta \rightarrow 0^+} \sup_{\substack{q \in \bbP_k \\ 0 < \nm{q}_{m,\infty} \leq \delta}} \frac{\nm{\cR_m(q)}_{[-1,1],\infty}}{\nm{q}_{m,\infty}}.
}
Let $p \in \bbP_k$ with $\nm{p}_{m,\infty} \neq 0$ and set $q = \delta p / \nm{p}_{m,\infty}$ so that $q \in \bbP_k$ with $\nm{q}_{m,\infty} = \delta$. Since polynomials are entire functions, \R{extended-error-bound} gives that
\eas{
\nmu{\cR_m(q)}_{[-1,1],\infty} & \geq \nm{q}_{[-1,1],\infty} - C \left ( \theta^{-c m^{\tau}} + \epsilon \right ) \nm{q}_{E_{\theta},\infty}
\geq \nm{q}_{[-1,1],\infty} \left ( 1 - C \left ( \theta^{-c m^{\tau}} + \epsilon \right ) \theta^k \right ).
}
Here, in the second step, we used the classical inequality $\nm{q}_{E_{\theta},\infty} \leq \theta^k \nm{q}_{[-1,1],\infty}$, $\forall q \in \bbP_k$. Recalling the definition of $q$, we deduce that
\bes{
\kappa(\cR_m) \geq \left ( 1 - C \left ( \theta^{-c m^{\tau}} + \epsilon \right ) \theta^k \right ) \sup_{\substack{p \in \bbP_k \\ \nm{p}_{m,\infty} \neq 0 }} \frac{\nm{p}_{[-1,1],\infty}}{\nm{p}_{m,\infty}}.
}
Now let
\bes{
k = \left \lfloor \min \left \{ \frac{\log(\frac{1}{4C \epsilon})}{\log(\theta)} , c m^{\tau} + \frac{\log(\frac{1}{4C})}{\log(\theta)} \right \} \right \rfloor.
}
This choice of $k$ gives $C \left ( \theta^{-c m^{\tau}} + \epsilon \right ) \theta^k \leq 1/2$,
and therefore
\bes{
\kappa(\cR_m) \geq \frac12 \alpha^{k^2/m},
}
where $\alpha > 1$ is as in \R{coppersmith-rivlin}. Observe that this holds for all $1 < \theta \leq \theta^*$. Now choose $\theta = \theta_m = \epsilon^{-\frac{1}{c m^{\tau}}}$,
and observe that $\theta_m \leq \theta^*$ for all large $m$. This value of $\theta$ and the fact that $0 < \epsilon \leq (4 C)^{-2}$ give
\bes{
k = \left \lfloor c m^{\tau} \left ( 1 + \frac{\log(\frac{1}{4C})}{\log(1/\epsilon)} \right )\right \rfloor \geq \left \lfloor \frac12 c m^{\tau} \right \rfloor ,
}
and therefore $k \geq \frac13 c m^{\tau}$ for all sufficiently large $m$. We deduce that
\bes{
\kappa(\cR_m) \geq \frac12 \alpha^{c^2 m^{2 \tau - 1}/9} \geq \sigma^{m^{2 \tau-1}},
}
for some $\sigma > 1$ and all large $m$, as required.
}
\section{Numerical examples}\label{s:numerical}
We conclude this paper with a series of experiments to examine the various theoretical results. Unless otherwise stated, we compute the discrete $L^{\infty}$-norm error of the approximation on a grid of 50,000 equispaced points in $[-1,1]$. Also, we consider the polynomial frame approximation threshold parameter $\epsilon$ rather than $\epsilon'$ (as used in the main theorems). Theoretically, this choice leads to a log-linear sample complexity, but in practice it appears to be adequate.
In Fig.\ \ref{f:fig1} we plot the error versus $n$ for different values of the \textit{oversampling} parameter $\eta = m/n$. We compare several different values for the extended domain parameter $\gamma$, and several different values of $\epsilon$. Notice that in all cases, we witness exponential decrease of the error down to some fixed limiting accuracy. The limiting accuracy is related to the stability of the approximation and the parameter $\epsilon$. Observe that it gets smaller with increasing $\eta$, and for sufficiently large $\eta$ it closely tracks the value of $\epsilon$ used. Moreover, it is larger when $\gamma$ is smaller and smaller when $\gamma$ is larger. Both observations are intuitively true. Increasing the number of sample points (i.e.\ larger $\eta$) reduces the maximal growth of a polynomial on $[-1,1]$ relative to its values on the equispaced grid. Similarly, increasing $\gamma$ lengthens the region within which the polynomial cannot exceed the value $1/\epsilon$, and therefore it also cannot grow as large on $[-1,1]$. We also notice that increasing the oversampling parameter makes less difference to the limiting accuracy when $\epsilon$ is larger than it does when $\epsilon$ is smaller. Again, this is intuitively true, since larger $\epsilon$ means the polynomial cannot grow as large on the extended interval. These three observations are also supported by Theorem \ref{t:polynomial-inequality-main}. Here, the sufficient scaling between $m$ and $n$ depends on $\log(1/\epsilon)/\sqrt{\gamma^2-1}$, i.e., it is a decreasing function of both $\epsilon$ and $\gamma$.
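To make the experimental setup concrete, the following Python sketch (ours, not the authors' implementation; the test function, parameter values and tolerances are illustrative) constructs $\cP^{\epsilon,\gamma}_{m,n}$ as an SVD-regularized least-squares fit of Legendre polynomials scaled to $[-\gamma,\gamma]$, sampled at $m+1$ equispaced points in $[-1,1]$:

```python
import numpy as np

def poly_frame_approx(f, n, m, gamma=1.25, eps=1e-14):
    """SVD-regularized least-squares fit of degree-n Legendre polynomials
    on [-gamma, gamma], sampled at m + 1 equispaced points in [-1, 1]."""
    x = np.linspace(-1.0, 1.0, m + 1)
    # Legendre polynomials on [-gamma, gamma] evaluated at the sample points.
    A = np.polynomial.legendre.legvander(x / gamma, n)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > eps * s[0]  # truncate singular values below eps * s_max
    c = Vt[keep].T @ ((U[:, keep].T @ f(x)) / s[keep])
    return lambda t: np.polynomial.legendre.legval(t / gamma, c)

f = lambda x: 1.0 / (1.0 + x ** 2)
p = poly_frame_approx(f, n=60, m=240)   # oversampling eta = m/n = 4
t = np.linspace(-1.0, 1.0, 2001)
err = np.max(np.abs(f(t) - p(t)))       # discrete uniform-norm error
```

For this analytic function the measured error should sit far below $10^{-6}$, near the limiting accuracy governed by $\epsilon$ and $\eta$, mirroring the behaviour in Fig.\ \ref{f:fig1}.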
\begin{figure}[t]
\begin{small}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width = 0.3\textwidth]{fig1_11.eps}&
\includegraphics[width = 0.3\textwidth]{fig1_12.eps}&
\includegraphics[width = 0.3\textwidth]{fig1_13.eps}
\\
\includegraphics[width = 0.3\textwidth]{fig1_21.eps}&
\includegraphics[width = 0.3\textwidth]{fig1_22.eps}&
\includegraphics[width = 0.3\textwidth]{fig1_23.eps}
\\
\includegraphics[width = 0.3\textwidth]{fig1_31.eps}&
\includegraphics[width = 0.3\textwidth]{fig1_32.eps}&
\includegraphics[width = 0.3\textwidth]{fig1_33.eps}
\\
$\gamma = 1.2$ & $\gamma = 1.4$ & $\gamma = 1.8$
\end{tabular}
\end{center}
\end{small}
\caption{Approximation error versus $n$ for approximating the function $f(x) = \frac{1}{1+x^2}$ via $\cP^{\epsilon,\gamma}_{m,n}$, where $m/n = \eta$, using various different values of $\eta$, $\gamma$ and $\epsilon$. The values of $\epsilon$ used are $\epsilon = 10^{-14}$ (top), $\epsilon = 10^{-10}$ (middle) and $\epsilon = 10^{-6}$ (bottom). The dashed line shows the quantity $\theta^{-n}$, where $\theta = \sqrt{2}+1$.
}
\label{f:fig1}
\end{figure}
In Fig.\ \ref{f:fig1} the function we consider has poles at $x = \pm \I$, meaning that it is analytic within any Bernstein ellipse $E_{\theta}$ for which $\frac12(\theta - \theta^{-1}) < 1$, i.e.\ $1 < \theta < \sqrt{2}+1 \approx 2.41$. Recall that the interval $[-\gamma,\gamma]$ is contained in the Bernstein ellipse $E_{\tau}$ with $\tau = \gamma + \sqrt{\gamma^2-1}$. In particular, $\tau < \sqrt{2}+1$ for $\gamma = 1.2$ and $\gamma = 1.4$. Our analysis in Theorem \ref{t:possibility-thm} therefore asserts exponential decay of the error with rate roughly $(\sqrt{2}+1)^{-n}$ for these two choices of $\gamma$. This is what we observe in practice in Fig.\ \ref{f:fig1}.
On the other hand, when $\gamma = 1.8$, $\tau = \gamma + \sqrt{\gamma^2-1} \approx 3.30 > 2.41 = \sqrt{2}+1$. In this case, our analysis in Theorem \ref{t:possibility-slower-exp} predicts exponential convergence with rate roughly $(\sqrt{2}+1)^{-n}$ down to roughly $\epsilon^{\frac{\log(\sqrt{2}+1)}{\log(\tau)}} \approx \epsilon^{0.74}$, and slower convergence below this level. This is again in agreement with the right column of Fig.\ \ref{f:fig1}.
To further investigate this effect of decreasing error decay for less regular functions, in Fig.\ \ref{f:fig2} we plot the error versus $n$ for several different functions and values of $\gamma$. We also plot the theoretical breakpoint described in Theorem \ref{t:possibility-slower-exp}, i.e.\ the value
\be{
\label{breakpoint}
\epsilon^{\frac{\log(\theta)}{\log(\tau)}},\qquad \tau = \gamma + \sqrt{\gamma^2-1}.
}
In all cases, we see reasonable agreement between the theoretical results and the empirical performance. First, the error decays with rate $\theta^{-n}$ down to the breakpoint, as in Theorem \ref{t:possibility-slower-exp}, before decreasing more slowly beyond it. This second phase is described by Theorem \ref{t:possibility-algebraic} (since analytic functions are infinitely differentiable).
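The breakpoint \R{breakpoint} is elementary to evaluate. The short Python sketch below (ours, with illustrative values) computes it from $\epsilon$, $\theta$ and $\gamma$:

```python
import math

def bernstein_param(gamma):
    # Bernstein-ellipse parameter tau of the interval [-gamma, gamma].
    return gamma + math.sqrt(gamma ** 2 - 1)

def breakpoint(eps, theta, gamma):
    # eps ** (log(theta) / log(tau)): the accuracy at which the theta^{-n}
    # decay is predicted to give way to slower convergence.
    return eps ** (math.log(theta) / math.log(bernstein_param(gamma)))

theta = math.sqrt(2) + 1            # f(x) = 1/(1 + x^2), poles at x = +-i
bp = breakpoint(1e-14, theta, 1.8)  # roughly (1e-14) ** 0.74
```

For $\gamma = 1.8$ this gives roughly $\epsilon^{0.74} \approx 5 \times 10^{-11}$, matching the plateau in the right column of Fig.\ \ref{f:fig1}.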
\begin{figure}[t]
\begin{small}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width = 0.3\textwidth]{fig2_11.eps}
&
\includegraphics[width = 0.3\textwidth]{fig2_12.eps}
&
\includegraphics[width = 0.3\textwidth]{fig2_13.eps}
\\
\includegraphics[width = 0.3\textwidth]{fig2_21.eps}
&
\includegraphics[width = 0.3\textwidth]{fig2_22.eps}
&
\includegraphics[width = 0.3\textwidth]{fig2_23.eps}
\\
\includegraphics[width = 0.3\textwidth]{fig2_31.eps}
&
\includegraphics[width = 0.3\textwidth]{fig2_32.eps}
&
\includegraphics[width = 0.3\textwidth]{fig2_33.eps}
\\
$\gamma = 1.25$ & $\gamma = 1.5$ & $\gamma = 2$
\end{tabular}
\end{center}
\end{small}
\caption{Approximation errors versus $n$ for approximating the functions $f_1(x) = \frac{1}{1+4x^2}$ (top), $f_2(x) = \frac{1}{10-9 x}$ (middle) and $f_3(x) = 25 \sqrt{9 x^2-10}$ (bottom) via $\cP^{\epsilon,\gamma}_{m,n}$, where $m/n = 4$, using various different values of $\gamma$ and $\epsilon$. The dot-dashed lines show the breakpoints \R{breakpoint} in each case and the dashed line shows the quantity $\theta^{-n}$. In this experiment, the values of $\theta$ are $\theta = \frac12(1+\sqrt{5})$ (top), $\theta = \frac{1}{9}(10 + \sqrt{19})$ (middle) and $\theta = \sqrt{10/9} + 1/3$ (bottom).}
\label{f:fig2}
\end{figure}
It is notable that the error decay rate after the breakpoint is quite different for the functions considered. This effect has also been observed and discussed in the case of Fourier extensions \cite{adcock2014parameter,adcock2014resolution}. It can be understood through Theorem \ref{t:possibility-algebraic}. Recall that this theorem asserts that the error is bounded by
\be{
\label{limiting-accuracy-analysis}
c g(k,\gamma) \sqrt{m} \left ( n^{-k} + n \epsilon \right ) \nm{f}_{C^k([-1,1])},
}
for any $k \in \bbN$ (since all functions considered are infinitely smooth). The derivatives of the first function $f_1(x) = \frac{1}{1+4x^2}$ do not grow too large on $[-1,1]$ with increasing $k$. Hence the constants $ \nm{f}_{C^k([-1,1])}$ in the error term remain reasonably small and we see rapid decrease in $n$. On the other hand, the derivatives of the functions $f_2$ and $f_3$ grow rapidly with $k$, meaning the constants $ \nm{f}_{C^k([-1,1])}$ also grow rapidly with $k$. Thus, \R{limiting-accuracy-analysis} suggests that the error decays progressively more slowly the closer it gets to the limiting value $\epsilon$. This is exactly the effect we observe in practice.
\begin{figure}[t]
\begin{small}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width = 0.3\textwidth]{fig3_11.eps}
&
\includegraphics[width = 0.3\textwidth]{fig3_12.eps}
&
\includegraphics[width = 0.3\textwidth]{fig3_13.eps}
\\
\includegraphics[width = 0.3\textwidth]{fig3_21.eps}
&
\includegraphics[width = 0.3\textwidth]{fig3_22.eps}
&
\includegraphics[width = 0.3\textwidth]{fig3_23.eps}
\\
\includegraphics[width = 0.3\textwidth]{fig3_31.eps}
&
\includegraphics[width = 0.3\textwidth]{fig3_32.eps}
&
\includegraphics[width = 0.3\textwidth]{fig3_33.eps}
\\
$\gamma = 1.25$ & $\gamma = 1.5$ & $\gamma = 2$
\end{tabular}
\end{center}
\end{small}
\caption{Approximation errors versus $n$ for approximating the functions $f(x) = \exp(\I \omega \pi x)$ via $\cP^{\epsilon,\gamma}_{m,n}$, where $m/n = 4$, using various different values of $\gamma$ and $\epsilon$. The values of $\omega$ are $\omega = 40$ (top), $\omega = 60$ (middle) and $\omega = 80$ (bottom).}
\label{f:fig3}
\end{figure}
In Fig.\ \ref{f:fig3} we consider approximating the oscillatory function $f(x) = \E^{\I \omega \pi x}$ for various different values of $\omega$. Oscillatory functions are an interesting case study for approximation procedures from equispaced nodes. They are entire functions, yet they grow extremely rapidly along the imaginary axis for large $|\omega|$, meaning that the term $\nm{f}_{E_{\theta},\infty}$ is extremely large unless $\theta \approx 1$. As we see in Fig.\ \ref{f:fig3}, the approximation error is order one until a minimum value of $n = n_0(\omega)$ is met. After this point, the function begins to be resolved and the error decreases rapidly. Determining the behaviour of $n_0(\omega)$ allows us to examine the \textit{resolution power} of the approximation scheme, i.e.\ the number of points needed before decay of the error sets in.
For the polynomial frame approximation, we determine this point by recalling the error bound
\bes{
\inf_{p \in \bbP_n} \left \{ \nmu{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty} \right \}.
}
Since $f$ is entire and $|f(x)| = 1$ for real $x$, we can write
\bes{
\inf_{p \in \bbP_n} \left \{ \nmu{f - p}_{[-1,1],\infty} + (n+1)\epsilon \nm{p}_{[-\gamma,\gamma],\infty} \right \} \leq 2 \inf_{p \in \bbP_n} \nmu{f - p}_{[-\gamma,\gamma],\infty} + (n+1)\epsilon.
}
Hence, the resolution power for polynomial approximation is related to the resolution power of best polynomial approximation on the extended interval $[-\gamma,\gamma]$. It is well known (see, e.g., \cite{hale2008new,boyd1989chebyshev,gottlieb1977numerical,adcock2014resolution}) that on the interval $[-1,1]$ an oscillatory function with frequency $\omega$ can be approximated by a polynomial once the degree $n$ exceeds the value $\pi \omega$. Since we consider the extended interval, this implies that $n_0(\omega) = \pi \gamma \omega$.
This value is also shown in Fig.\ \ref{f:fig3}. It closely predicts the point at which the error begins to decrease. Note that this suggests choosing a small value of $\gamma$ so as to obtain a higher resolution power. Yet, as seen, this worsens the condition number of the approximation. Or, to put it another way, a smaller $\gamma$ necessitates a more severe oversampling ratio $\eta$ so as to maintain the same condition number, thus worsening the resolution power with respect to the number of equispaced samples $m$.
{
Finally, we conclude in Fig.\ \ref{f:fig4} with an experiment that compares fixed $\epsilon$ (the main setting in this paper) with varying $\epsilon$, the latter chosen as in Remark \ref{rem:varying-eps} to decay like $\theta^{-n}$ for a given $\theta > 1$. In order to make a fair comparison, we define a maximum allowable condition number $\kappa^* = 100$. Then, given $n$, we find the smallest value of $m$ for each scheme such that its condition number is at most $\kappa^*$. In order to do this, we follow the approach of \cite[\S 8]{adcock2020approximating} and work in the discrete $L^2$-norm (as opposed to the discrete uniform norm considered in previous experiments) over a grid of 50,000 equispaced points in $[-1,1]$. Doing so means that the condition number can be computed as the norm of a certain matrix.
For the first function, $f_1$, we observe that varying $\epsilon$ can lead to a benefit whenever the parameter $\theta$ is chosen suitably: specifically, whenever it is chosen close to the value $\theta = \theta^*$, where $\theta^*$ is the parameter of the largest Bernstein ellipse within which the function is analytic. If $\theta$ is chosen too large, then the scheme behaves similarly to the standard polynomial frame approximation with fixed $\epsilon$. On the other hand, if $\theta$ is chosen too small, then the scheme performs significantly worse. In fact, it can even perform worse than standard polynomial least-squares approximation using orthonormal Legendre polynomials on $[-1,1]$ (see Remark \ref{r:why-not-unit-interval}).
For the second function, $f_2$, varying $\epsilon$ conveys no benefit over fixed $\epsilon$, and, as before, it can lead to worse performance if $\theta$ is chosen too small. It is notable that polynomial least-squares approximation outperforms any of the polynomial frame approximations for this function. This is not surprising. The function grows rapidly near the endpoint $x = + 1$. In polynomial frame approximation the approximating polynomial is constrained to be of a finite size on $[-\gamma,\gamma]$. Hence it cannot capture this rapid growth as effectively as in polynomial least-squares approximation, where there is no such constraint.
Finally, the third function, $f_3$, is oscillatory and therefore entire. Observe that when $\theta$ is small, the polynomial frame approximation with varying $\epsilon$ initially resolves the function using fewer samples than the scheme with fixed $\epsilon$. Yet, as $m$ increases the error decays more slowly, and is eventually larger than the error for the latter method.
Lastly, in the second row of Fig.\ \ref{f:fig4} we plot the scaling between $m$ and $n$. As expected, polynomial least-squares approximation exhibits a quadratic scaling, while polynomial frame approximation with fixed $\epsilon$ exhibits a linear scaling. Polynomial frame approximation with varying $\epsilon$ exhibits a quadratic scaling for small $\theta$. When $\theta$ is larger the scaling is at first quadratic and then linear. This arises because $\epsilon$ is constrained to be no smaller than $10^{-14}$ in this experiment, this being done in order to avoid numerical effects in the thresholded SVD. Note that in this regime, the two polynomial frame approximations coincide.
The main conclusion from this experiment is that varying $\epsilon$ with $n$ can lead to some benefit (for small $m$), as long as the parameter $\theta$ is chosen carefully. Unfortunately, such a choice is function dependent (compare, for instance, $f_1$ versus $f_3$), and may require knowledge of the domain of analyticity of the (unknown) function.
}
\begin{figure}[t]
\begin{small}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width = 0.3\textwidth]{fig4_f1_err.eps}
&
\includegraphics[width = 0.3\textwidth]{fig4_f2_err.eps}
&
\includegraphics[width = 0.3\textwidth]{fig4_f3_err.eps}
\\
\includegraphics[width = 0.3\textwidth]{fig4_f1_mvals.eps}
&
\includegraphics[width = 0.3\textwidth]{fig4_f2_mvals.eps}
&
\includegraphics[width = 0.3\textwidth]{fig4_f3_mvals.eps}
\\
$f_1(x) = \frac{1}{1+16 x^2}$, $\theta^* = \frac{\sqrt{17}+1}{4}$ & $f_2(x) = \frac{1}{30-29 x}$, $\theta^* = \frac{30 + \sqrt{59}}{29}$ & $f_3(x) = \E^{40 \pi \I x}$, $\theta^* = \infty$
\end{tabular}
\end{center}
\end{small}
\caption{{Comparison of various schemes for approximating the functions $f_1$, $f_2$ and $f_3$. The top row shows the approximation error (measured in the discrete $L^2$-norm) versus $m$. The bottom row shows the relationship between $m$ and $n$ where, given $n$, $m$ is the smallest integer such that the (discrete $L^2$-norm) condition number of each scheme is at most $100$. The schemes considered are: polynomial least-squares approximation (``PLS"), which uses orthonormal Legendre polynomials on $[-1,1]$ (see Remark \ref{r:why-not-unit-interval}); polynomial frame approximation with fixed $\epsilon = 10^{-14} $ (``PFF"); and polynomial frame approximation with varying $\epsilon = \max \{ \theta^{-n}, 10^{-14} \}$ (``PFV"). Both PFF and PFV use the value $\gamma = 1.25$. The quantity $\theta^*$ is the `optimal' value of $\theta$, in the sense that it is the parameter of the largest Bernstein ellipse within which the given function is analytic. } }
\label{f:fig4}
\end{figure}
\section{Conclusions and outlook}\label{s:conclusions}
In this work, we have shown a positive counterpart to the impossibility theorem of \cite{platte2011impossibility}. Namely, we have shown a possibility theorem (Theorem \ref{t:possibility-thm}) for polynomial frame approximation, which asserts stability and exponential convergence down to a finite, but user-controlled limiting accuracy. This holds for all functions that are analytic in a sufficiently large region -- a condition that is necessary for any scheme achieving this type of error decay, as shown by our extended impossibility theorem (Theorem \ref{t:impossibility-extended}). On the other hand, for insufficiently analytic functions, we have shown exponential decay down to some fractional power of $\epsilon$ (Theorem \ref{t:possibility-slower-exp}), and superalgebraic decay (Theorem \ref{t:possibility-algebraic}) beyond this point.
There are several avenues for further investigation. First, recall that our main error bounds involve (albeit small) algebraic factors in $m$ and $n$. It would be interesting to see if such factors could be removed by modifying the approximation scheme. Alternatively, one might instead work in the $L^2$-norm, which is arguably more natural for least-squares approximations.
Second, this work was inspired by so-called Fourier extensions \cite{adcock2014parameter,adcock2014numerical,matthysen2017function,lyon2012fast,huybrechs2010fourier,bruno2007accurate,boyd2002fourier,pasquetti1996spectral}, wherein a smooth, nonperiodic function on $[-1,1]$ is approximated by a Fourier series on $[-\gamma,\gamma]$. In practice, linear oversampling appears sufficient for accuracy and stability of $\epsilon$-regularized Fourier extensions, with exponential error decay down to $\epsilon$ \cite{adcock2014parameter,adcock2014numerical}. Proving a similar possibility theorem for this scheme is an open problem. Note that Fourier extension is equivalent to a polynomial approximation problem on an arc of the complex unit circle \cite{geronimo2020fourier,webb2020pointwise}. Fourier extension schemes have several advantages over the polynomial extension scheme studied herein. For example, they generally possess higher resolution power \cite{adcock2014resolution} (recall Fig.\ \ref{f:fig3} and the discussion in \S \ref{s:numerical}).
Third, we mention that equispaced points are not special. Similar impossibility theorems have been shown for scattered data, or more generally, any sample points that do not cluster quadratically near the endpoints $x = \pm 1$ \cite{adcock2019optimal}. Beyond pointwise samples, it is notable that an impossibility theorem also holds for reconstructing analytic functions from their Fourier samples \cite{adcock2012stable,adcock2014generalized,adcock2014stability}. The question of whether or not possibility theorems hold for these more general types of samples is also an open problem.
Fourth, notice that we have not concerned ourselves with fast computation of the polynomial frame approximation in this work. We anticipate, however, that a fast implementation may be possible, as it is with Fourier extensions \cite{matthysen2015fast,lyon2012fast,matthysen2017function}. One potential idea in this direction is the AZ algorithm \cite{coppe2020AZ}.
Fifth and finally, we note that the one-dimensional problem is, in some senses, a toy problem. Polynomial frame approximations were first formalized in \cite{adcock2020approximating} to accurately and stably approximate functions that are defined over general, compact domains in two or more dimensions. Here the domain is embedded in a hypercube, and a tensor-product orthogonal polynomial basis on the hypercube is used to construct the approximation. Such approximations are often used in practice in surrogate model construction problems in uncertainty quantification \cite{adcock2022sparse,adcock2020approximating}. Showing that linear oversampling is sufficient in two or more dimensions and a corresponding possibility theorem for analytic function approximation in arbitrary dimensions would be an interesting and practically-relevant extension.
\small
\bibliographystyle{plain}
| {
"timestamp": "2022-03-08T02:07:01",
"yymm": "2110",
"arxiv_id": "2110.03755",
"language": "en",
"url": "https://arxiv.org/abs/2110.03755",
"abstract": "We consider approximating analytic functions on the interval $[-1,1]$ from their values at a set of $m+1$ equispaced nodes. A result of Platte, Trefethen \\& Kuijlaars states that fast and stable approximation from equispaced samples is generally impossible. In particular, any method that converges exponentially fast must also be exponentially ill-conditioned. We prove a positive counterpart to this `impossibility' theorem. Our `possibility' theorem shows that there is a well-conditioned method that provides exponential decay of the error down to a finite, but user-controlled tolerance $\\epsilon > 0$, which in practice can be chosen close to machine epsilon. The method is known as \\textit{polynomial frame} approximation or \\textit{polynomial extensions}. It uses algebraic polynomials of degree $n$ on an extended interval $[-\\gamma,\\gamma]$, $\\gamma > 1$, to construct an approximation on $[-1,1]$ via a SVD-regularized least-squares fit. A key step in the proof of our main theorem is a new result on the maximal behaviour of a polynomial of degree $n$ on $[-1,1]$ that is simultaneously bounded by one at a set of $m+1$ equispaced nodes in $[-1,1]$ and $1/\\epsilon$ on the extended interval $[-\\gamma,\\gamma]$. We show that linear oversampling, i.e., $m = c n \\log(1/\\epsilon) / \\sqrt{\\gamma^2-1}$, is sufficient for uniform boundedness of any such polynomial on $[-1,1]$. This result aside, we also prove an extended impossibility theorem, which shows that such a possibility theorem (and consequently the method of polynomial frame approximation) is essentially optimal.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Fast and stable approximation of analytic functions from equispaced samples via polynomial frames"
} |
https://arxiv.org/abs/2111.11227 | On a discriminator for the polynomial $f(x)=x^3+x$ | Let $\Delta(n)$ denote the smallest positive integer $m$ such that $a^3+a(1\le a\le n)$ are pairwise distinct modulo $m$. The purpose of this paper is to determine $\Delta(n)$ for all positive integers $n$. | \section{Introduction}
\setcounter{lemma}{0}
\setcounter{theorem}{0}
\setcounter{corollary}{0}
\setcounter{equation}{0}
\medskip
For a polynomial $f(x)\in \Z[x]$ with all $f(a)(a\in \Z^{+})$ pairwise distinct, we introduce the discriminator $\Delta_f(n)$ defined to be the smallest positive integer $m$ such that $f(a)(1\le a\le n)$ are pairwise distinct modulo $m$.
As a simple application of Bertrand's postulate, Arnold, Benkoski and McCabe \cite{ABM} determined $\Delta_{f}(n)$ for $f(x)=x^2$, and they showed that for $n>4$, $\Delta_{f}(n)$ is the smallest positive integer $m\ge 2n$ such that $m$ is $p$ or $2p$ with $p$ an odd prime. Sun \cite{S13} studied $\Delta_f(n)$ for other quadratic polynomials. For example, it was proved in \cite{S13} that if $f(x)=2x(x-1)$ then $\Delta_{f}(n)$ is the least prime number greater than $2n-2$, and in particular $\Delta_{f}(n)$ runs over all prime values.
Among other things, Schumer \cite{S} studied $\Delta_f(n)$ with $f(x)=x^3$. For the study of the discriminator $\Delta_f(n)$ for other higher degree polynomials $f$, one may refer to \cite{BSW,Moree,MM,Zieve}. In this paper, we focus on $\Delta_f(n)$ with $f(x)=x^3+x$.
The main result in this paper is the following.
\begin{theorem}\label{theorem1}Let $\Delta(n)=\Delta_f(n)$ with $f(x)=x^3+x$. We have
\begin{align*}\Delta(n)=\begin{cases}7\cdot 3^{6s+4} & \ \textrm{ if } n=3^{6s+5}+1 \textrm{ or } n=3^{6s+5}+2 \textrm{ for some } s\in \N,
\\
3^{\lceil \log_3 n\rceil} & \ \textrm{ otherwise},\end{cases}\end{align*}
where $\lceil x \rceil$ denotes the smallest integer no less than $x$.
\end{theorem}
A closely related problem is to determine $D(n)$, which denotes the smallest positive integer $m$ such that $a^3+a(1\le a\le n)$ are pairwise distinct modulo $m^2$. The authors \cite{YZ} proved that $D(n)=3^{\lceil \log_3 \sqrt{n}\rceil}$, which was conjectured by Z.-W. Sun (see Conjecture 6.76 in \cite{S2021}). The present work is motivated by the above original conjecture of Sun. Different from $D(n)$, the discriminator $\Delta(n)$ is not always a power of three. For example, $\Delta(n)=7\cdot 3^{4}$ when $n=244$ or $245$. This was first observed by Sun (see the remark to Conjecture 6.76 in \cite{S2021}). According to Theorem \ref{theorem1}, the third example of $n$ satisfying $\Delta(n)\not=3^{\lceil \log_3 n\rceil}$ is over $10^5$.
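The exceptional values above are easy to confirm by direct computation. The following Python sketch (ours, not part of the paper) evaluates $\Delta(n)$ by brute force:

```python
def discriminator(n):
    # Smallest m such that a^3 + a (1 <= a <= n) are pairwise distinct mod m.
    vals = [a ** 3 + a for a in range(1, n + 1)]
    m = n  # m >= n is forced by the pigeonhole principle
    while len({v % m for v in vals}) < n:
        m += 1
    return m
```

One checks, for instance, that $\Delta(243)=3^5$, while $\Delta(244)=\Delta(245)=7\cdot 3^4=567$, in agreement with Theorem \ref{theorem1}.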
We prove Theorem \ref{theorem1} by combining methods from elementary number theory and analytic number theory. We point out that in order to deal with $\Delta(n)$ we have to study an incomplete character sum, which is not involved in the work \cite{YZ}. The incomplete character sum will be handled by the elementary method when the length of the summation is about $\frac{p}{4}$, and it will be handled by the analytic method when the length of the summation is about $\frac{p}{6}$. The details will be given in Section 3. Moreover, in the very special case $p=7$, we have to discuss the value of the Legendre symbol separately (see Lemma \ref{lemma51} in Section 5).
\bigskip
We use the following notations in this paper. Let $\Z^{+}$ denote the set of all positive integers and let $\N=\Z^{+}\cup\{0\}$. We use $e(\alpha)$ to denote $e^{2\pi i\alpha}$. The notation $\lceil x \rceil$ denotes the smallest integer no less than $x$, and $\lfloor x\rfloor$ denotes the greatest integer no more than $x$.
\bigskip
\section{Preparations}
\setcounter{lemma}{0}
\setcounter{theorem}{0}
\setcounter{corollary}{0}
\setcounter{equation}{0}
We introduce
\begin{align*}\mathcal{E}=\{3^{6s+5}+1:\ s\in \N\} \cup \{ 3^{6s+5}+2:\ s\in \N\}.\end{align*}
Throughout this paper, we use the letter $k$ to denote
\begin{align*}k=\lceil \log_3 n\rceil.\end{align*}
We first point out that $a^3+a(1\le a\le n)$ are pairwise distinct modulo $3^{k}$, and therefore $n\le \Delta(n)\le 3^k$. In order to establish Theorem \ref{theorem1}, it suffices to prove the following two results.
\begin{lemma}\label{lemma21}Let $n\not\in \mathcal{E}$. Suppose that
\begin{align}\label{inequality1} n\le m<3^k<3n.\end{align}
Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{lemma}\label{lemma22}Let $n=3^{6s+5}+1$ or $3^{6s+5}+2$ with $s\in \N$. Suppose that
\begin{align}\label{inequality2} n\le m<7\cdot 3^{6s+4}.\end{align}
Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$. Moreover, $a^3+a(1\le a\le n)$ are pairwise distinct modulo $7\cdot 3^{6s+4}$.
\end{lemma}
We shall consider the following $8$ cases.
(i) $m=\delta p$, where $\delta\ge 6$, $p\ge 5$ is a prime, $p\not=7$ and $p\nmid \delta$.
(ii) $m=\delta p^r$, where $\delta\ge 4$, $p\ge 5$ is a prime, $r\ge 2$ is a positive integer.
(iii) $m=2^r$, where $r\in \Z^{+}$.
(iv) $m=2^rt$, where $t\ge 5$ is an odd number and $r\ge 2$.
(v) $m=2^r3^s$, where $r,s\in \Z^{+}$.
(vi) $m=3^{r}\cdot 14$, where $r\in \N$.
(vii) $m=\delta p^{t}$, where $1\le \delta \le 3$, $p\ge 5$ is a prime, $t\in \Z^{+}$.
(viii) $m=3^{r}\cdot 7$, where $r\in \N$.
\noindent The letter $\delta$ always denotes a positive integer. Note that \eqref{inequality2} implies \eqref{inequality1}. Throughout this paper, we assume that \eqref{inequality1} holds.
\bigskip
\section{An incomplete character sum}
\setcounter{equation}{0}
\medskip
For $u\in \Z$, $\delta\in \Z^{+}$ and $p\ge 3$, we introduce
\begin{align*}A_p(\delta,u)=\sum_{-\frac{p-1}{2}\le x\le \frac{p-1}{2}}\Big(\frac{\delta^2x^2+4}{p}\Big)e\big(\frac{ux}{p}\big),\end{align*}
where $\big(\frac{\cdot}{p}\big)$ denotes the Legendre symbol.
\begin{lemma}\label{lemma31}Suppose that $p\ge 3$ is a prime and $p\nmid \delta$.
(i) If $p|u$, then $A_p(\delta,u)=-1$.
(ii) If $p\nmid u$, then $|A_p(\delta,u)|\le 2\sqrt{p}$.\end{lemma}
\begin{proof}For an odd prime $p$, it is well-known that
$$\sum_{1\le c\le p-1}\Big(\frac{c}{p}\Big)e(\frac{c}{p})=\sum_{1\le x\le p}e\big(\frac{x^2}{p}\big),$$
and $|\tau_p|=\sqrt{p}$, where $\tau_p$ denotes the above Gauss sum. By
$$\sum_{1\le c\le p-1}\Big(\frac{c}{p}\Big)e(\frac{c(\delta^2x^2+4)}{p})=\Big(\frac{\delta^2x^2+4}{p}\Big)\tau_p,$$
we deduce that
\begin{align*}A_p(\delta,u)=&\frac{1}{\tau_p}\sum_{-\frac{p-1}{2}\le x\le \frac{p-1}{2}}e(\frac{ux}{p})\sum_{1\le c\le p-1}\Big(\frac{c}{p}\Big)e(\frac{c(\delta^2x^2+4)}{p})
\\= & \frac{1}{\tau_p}\sum_{1\le c\le p-1}e(\frac{4c}{p})\Big(\frac{c}{p}\Big)\sum_{-\frac{p-1}{2}\le x\le \frac{p-1}{2}}e(\frac{c\delta^2x^2+ux}{p}).\end{align*}
Note that
\begin{align*}\sum_{-\frac{p-1}{2}\le x\le \frac{p-1}{2}}e(\frac{c\delta^2x^2+ux}{p})=\Big(\frac{c}{p}\Big)e(\frac{-\overline{4\delta^2c}\, u^2}{p})\tau_p,\end{align*}
where $\overline{d}$ means $\overline{d}\cdot d\equiv 1\pmod{p}$. Now we conclude that
\begin{align}\label{resultAp}A_p(\delta,u)=\sum_{1\le c\le p-1}e(\frac{-\overline{4\delta^2c}\, u^2+4c}{p}).\end{align}
If $p|u$, then the summation in \eqref{resultAp} is a Ramanujan sum and $A_p(\delta,u)=-1$. If $p\nmid u$, then by Weil's bound on Kloosterman sums (see (4.19) in \cite{I}) we have $|A_p(\delta,u)|\le 2\sqrt{p}$. This completes the proof.
\end{proof}
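Lemma \ref{lemma31} can also be checked numerically. The Python sketch below (ours) evaluates $A_p(\delta,u)$ directly from its definition, computing the Legendre symbol via Euler's criterion:

```python
import cmath
import math

def legendre(a, p):
    # Legendre symbol (a/p) via Euler's criterion, for an odd prime p.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def A(p, delta, u):
    # A_p(delta, u) = sum over |x| <= (p-1)/2 of ((delta^2 x^2 + 4)/p) e(ux/p).
    return sum(legendre(delta * delta * x * x + 4, p)
               * cmath.exp(2j * math.pi * u * x / p)
               for x in range(-(p - 1) // 2, (p - 1) // 2 + 1))
```

For small primes one finds $A_p(\delta,0)=-1$ (up to rounding) and $|A_p(\delta,u)|\le 2\sqrt{p}$ whenever $p\nmid u$, in line with the lemma.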
We remark that Lemma \ref{lemma31} (i) is a well-known result. For a prime $p\ge 5$ and $p\nmid \delta$, we define $\ell_p(\delta)$ to be the smallest positive integer $x$ such that
$$\Big(\frac{-3\delta^2x^2-12}{p}\Big)\in\{0,1\}.$$
We introduce
\begin{align*}L_p=\begin{cases}\frac{p+3}{4}\ \ \textrm{ if } p\equiv 1\pmod{12},
\\ \frac{p-1}{4}\ \ \textrm{ if } p\equiv 5\pmod{12},
\\ \frac{p+5}{4}\ \ \textrm{ if } p\equiv 7\pmod{12},
\\ \frac{p+1}{4}\ \ \textrm{ if } p\equiv 11\pmod{12}.\end{cases}\end{align*}
We point out that $L_p<\frac{p}{3}$ holds for $p\ge 5$ except $p=7$.
\begin{lemma}\label{lemma32}Suppose that $p\ge 5$ is a prime and $p\nmid \delta$. We have
$$\ell_p(\delta)\le L_p.$$\end{lemma}
\begin{proof}By Lemma \ref{lemma31} (i), we have
\begin{align*}A_p(\delta,0)=2\sum_{1\le x\le \frac{p-1}{2}}\Big(\frac{\delta^2x^2+4}{p}\Big)+1=-1,\end{align*}
and therefore,
\begin{align}\label{halfsum}\sum_{1\le x\le \frac{p-1}{2}}\Big(\frac{\delta^2x^2+4}{p}\Big)=-1.\end{align}
We introduce
\begin{align*}N_p^{+}=&|\{1\le x\le \frac{p-1}{2}:\ \big(\frac{\delta^2x^2+4}{p}\big)=+1\}|,
\\ N_p^{-}=&|\{1\le x\le \frac{p-1}{2}:\ \big(\frac{\delta^2x^2+4}{p}\big)=-1\}|,
\\ N_p^{0}=&|\{1\le x\le \frac{p-1}{2}:\ \big(\frac{\delta^2x^2+4}{p}\big)=0\}|.\end{align*}
In view of \eqref{halfsum}, we have the following conclusions. If $p\equiv 1\pmod{4}$, then $N_p^{0}=1$, $N_p^{+}=\frac{p-5}{4}$ and $N_p^{-}=\frac{p-1}{4}$. If $p\equiv 3\pmod{4}$, then $N_p^{0}=0$, $N_p^{+}=\frac{p-3}{4}$ and $N_p^{-}=\frac{p+1}{4}$.
Case $p\equiv 1\pmod{12}$. We have $\big(\frac{-3}{p}\big)=1$ and $\ell_p(\delta)$ is the smallest positive integer $x$ such that
$\big(\frac{\delta^2x^2+4}{p}\big)\in\{0,1\}$. Note that $N_p^{0}+N_p^{+}=\frac{p-1}{4}$. Now we conclude that $\ell_p(\delta)\le \frac{p-1}{2}-(N_p^{0}+N_p^{+})+1=\frac{p-1}{2}-\frac{p-1}{4}+1=L_p$.
Case $p\equiv 5\pmod{12}$. We have $\big(\frac{-3}{p}\big)=-1$ and $\ell_p(\delta)$ is the smallest positive integer $x$ such that
$\big(\frac{\delta^2x^2+4}{p}\big)\in\{0,-1\}$. Note that $N_p^{0}+N_p^{-}=\frac{p+3}{4}$. Now we conclude that $\ell_p(\delta)\le \frac{p-1}{2}-(N_p^{0}+N_p^{-})+1=\frac{p-1}{2}-\frac{p+3}{4}+1=L_p$.
Case $p\equiv 7\pmod{12}$. We have $\big(\frac{-3}{p}\big)=1$ and $\ell_p(\delta)$ is the smallest positive integer $x$ such that
$\big(\frac{\delta^2x^2+4}{p}\big)=1$. Note that $N_p^{+}=\frac{p-3}{4}$. Now we conclude that $\ell_p(\delta)\le \frac{p-1}{2}-N_p^{+}+1=\frac{p-1}{2}-\frac{p-3}{4}+1=L_p$.
Case $p\equiv 11\pmod{12}$. We have $\big(\frac{-3}{p}\big)=-1$ and $\ell_p(\delta)$ is the smallest positive integer $x$ such that
$\big(\frac{\delta^2x^2+4}{p}\big)=-1$. Note that $N_p^{-}=\frac{p+1}{4}$. Now we conclude that $\ell_p(\delta)\le \frac{p-1}{2}-N_p^{-}+1= \frac{p-1}{2}-\frac{p+1}{4}+1=L_p$.
We are done.
\end{proof}
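As a sanity check on the lemma, the sketch below recomputes $\ell_p(\delta)$ directly and compares it with $L_p$. Here we take $\ell_p(\delta)$ to be the least $x\ge 1$ with $\big(\frac{-3(\delta^2x^2+4)}{p}\big)\in\{0,1\}$, which is how the four cases above use it; this reconstruction of the definition is an assumption of the sketch.

```python
def legendre(a, p):
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def ell(p, delta):
    """Least x >= 1 with (-3(delta^2 x^2 + 4) / p) in {0, 1}."""
    x = 1
    while legendre(-3 * (delta * delta * x * x + 4), p) == -1:
        x += 1
    return x

def L(p):
    """The bound L_p extracted in each of the four cases of the proof."""
    return {1: (p + 3) // 4, 5: (p - 1) // 4,
            7: (p + 5) // 4, 11: (p + 1) // 4}[p % 12]

primes = [p for p in range(5, 200) if all(p % q for q in range(2, p))]
for p in primes:
    for delta in range(1, 25):
        if delta % p:
            assert ell(p, delta) <= L(p)
```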
\begin{lemma}\label{lemma33}Suppose that $m=\delta p^{r}$, where $\delta, r\in \Z^{+}$, $p\ge 5$ is a prime and $p\nmid \delta$.
(i) If $p^{r}+\delta\frac{p-1}{2}\le n$, then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.
(ii) If $r=1$ and $p+\delta \ell_p(\delta)\le n$, then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.
\end{lemma}
\begin{proof}We consider $1\le a\le p^{r}$ and $b=a+\delta c$ with $c\in \Z^{+}$. It suffices to find $a,c\in \Z^{+}$ such that
$a+\delta c\le n$ and
$$a^2+a(a+\delta c)+(a+\delta c)^2+1\equiv 0\pmod{p^{r}},$$
which is equivalent to
\begin{align}\label{congruencequad}(6a+3\delta c)^2\equiv -3\delta^2c^2-12\pmod{p^{r}}.\end{align}
In view of \eqref{halfsum}, we conclude that there exists $1\le c\le \frac{p-1}{2}$ such that $-3\delta^2c^2-12$ is a nonzero quadratic residue modulo $p$. By Hensel's lemma, it is then easy to deduce that there exists $1\le a\le p^{r}$ such that $(6a+3\delta c)^2\equiv -3\delta^2c^2-12\pmod{p^{r}}$. This completes the proof of conclusion (i).
By the definition of $\ell_p(\delta)$, we can find $1\le c\le \ell_p(\delta)$ such that $\big(\frac{-3\delta^2c^2-12}{p}\big)\in \{0,1\}$. Then we can find $1\le a\le p$ such that $(6a+3\delta c)^2\equiv -3\delta^2c^2-12\pmod{p}$. This proves conclusion (ii).
We are done.
\end{proof}
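The completing-the-square step behind \eqref{congruencequad} is the polynomial identity $12\big(a^2+a(a+\delta c)+(a+\delta c)^2+1\big)=(6a+3\delta c)^2+3\delta^2c^2+12$; since $p\ge 5$ is coprime to $12$, the two congruences are equivalent. A quick machine check of the identity on an integer grid (enough, since both sides are quadratic in each variable):

```python
# dc plays the role of the product delta*c
for a in range(-12, 13):
    for dc in range(-12, 13):
        lhs = 12 * (a * a + a * (a + dc) + (a + dc) ** 2 + 1)
        rhs = (6 * a + 3 * dc) ** 2 + 3 * dc * dc + 12
        assert lhs == rhs
```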
\begin{lemma}\label{lemma34}Suppose that $m=\delta p$, where $\delta \ge 39$, $p\ge 5$ is a prime, $p\not=7$ and $p\nmid \delta$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}By Lemma \ref{lemma32} and Lemma \ref{lemma33} (ii), we only need to verify $p+\delta L_p\le n$. By \eqref{inequality1}, $n> \frac{\delta p}{3}$ and it suffices to prove $p+\delta L_p\le \frac{\delta p}{3}$.
Indeed we can prove $\frac{p}{39}+L_p\le \frac{p}{3}$ for all $p\not=7$.
This completes the proof.
\end{proof}
Similarly, we have the following.
\begin{lemma}\label{lemma35}Suppose that $m=\delta p$, where $\delta \ge 13$, $p\ge 165$ is a prime and $p\nmid \delta$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}It suffices to prove $p+\delta L_p\le \frac{\delta p}{3}$.
Indeed we can prove $\frac{p}{13}+L_p\le \frac{p}{3}$ for all $p\ge 165$.
This completes the proof.
\end{proof}
\begin{lemma}\label{lemma36}If $p\ge 4000$ and $p\nmid \delta$, then we have
$$\ell_p(\delta)\le \frac{p}{6}.$$\end{lemma}
\begin{proof}We write
\begin{align*}Y=\lfloor\frac{p-1}{6}\rfloor.\end{align*}
It suffices to prove
\begin{align}\label{sumY}\Big|\sum_{1\le x\le Y}\Big(\frac{\delta^2x^2+4}{p}\Big)\Big|<Y-1,\end{align}
since \eqref{sumY} implies that $\Big(\frac{\delta^2x^2+4}{p}\Big)$ can take both $1$ and $-1$ in the range $1\le x\le Y$.
We define
\begin{align*}A=\sum_{-Y\le x\le Y}\Big(\frac{\delta^2x^2+4}{p}\Big).\end{align*}
Note that \eqref{sumY} is equivalent to $|A-1|<2Y-2$, which follows from $|A|<2Y-3$. For $c,x\in \Z$, we have
\begin{align*}\frac{1}{p}\sum_{u=1}^{p}e\big(\frac{u(c-x)}{p}\big)=\begin{cases}
1, \ & \textrm{ if } c\equiv x\pmod{p},
\\ 0, \ & \textrm{ if } c\not\equiv x\pmod{p},\end{cases}\end{align*}
and therefore,
\begin{align*}A=\frac{1}{p}\sum_{1\le u\le p}\sum_{1\le c\le p}\Big(\frac{\delta^2c^2+4}{p}\Big)e(\frac{uc}{p})\sum_{-Y\le x\le Y}e(-\frac{ux}{p}).\end{align*}
By Lemma \ref{lemma31}, we obtain
\begin{align*}|A|\le \frac{2\sqrt{p}}{p}\sum_{1\le u\le p}\Big|\sum_{-Y\le x\le Y}e(-\frac{ux}{p})\Big|,\end{align*}
and by Lemma 4.8 in \cite{YZ} we further have
\begin{align*}|A|\le 2\sqrt{p}(2+\ln p).\end{align*}
The inequality $|A|<2Y-3$ follows from
\begin{align*} 2\sqrt{p}(2+\ln p)<2Y-3.\end{align*}
Note that, since the prime $p\equiv \pm 1\pmod{6}$, we have $Y\ge \frac{p-5}{6}$ and hence
\begin{align*}2Y-3\ge\frac{p-14}{3}>\frac{p}{3}-5.\end{align*}
Now we need to prove
\begin{align}\label{check}2\sqrt{p}(2+\ln p)<\frac{p}{3}-5.\end{align}
It is easy to prove that \eqref{check} holds for $p\ge 4000$.
This completes the proof.
\end{proof}
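A short numerical confirmation of \eqref{check} (illustrative only; the left side grows like $\sqrt{p}\log p$ against a linear right side):

```python
import math

def lhs(p):
    return 2 * math.sqrt(p) * (2 + math.log(p))

def rhs(p):
    return p / 3 - 5

assert lhs(3000) > rhs(3000)      # the bound genuinely needs p large
for q in range(4000, 200000, 1000):
    assert lhs(q) < rhs(q)
# the difference rhs - lhs is increasing on this range:
gaps = [rhs(q) - lhs(q) for q in range(4000, 200000, 1000)]
assert all(b > a for a, b in zip(gaps, gaps[1:]))
```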
\begin{lemma}\label{lemma37}Suppose that $m=\delta p$, where $\delta \ge 6$, $p\ge 4000$ is a prime and $p\nmid \delta$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}By Lemma \ref{lemma33} (ii) and Lemma \ref{lemma36}, it suffices to prove $p+\frac{\delta p}{6}\le \frac{\delta p}{3}$, which holds for $\delta\ge 6$.
This completes the proof.
\end{proof}
\begin{lemma}[Case (i)]\label{lemma38}Let $n\ge 48000$. Suppose that $m=\delta p$, where $\delta \ge 6$, $p\ge 5$ is a prime, $p\not=7$ and $p\nmid \delta$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}In view of Lemma \ref{lemma34}, we only need to consider $\delta<39$. By \eqref{inequality1}, $m\ge n\ge 48000$. Since $\delta\le 38$, we deduce that $p=\frac{m}{\delta}\ge \frac{n}{\delta}\ge\frac{48000}{38}>165$.
By Lemma \ref{lemma35}, we only need to consider $\delta\le 12$. Now we further have $p=\frac{m}{\delta}\ge \frac{n}{\delta}\ge 4000$ and the desired conclusion follows from Lemma \ref{lemma37}. This completes the proof.
\end{proof}
\begin{remark}\label{remark}One can verify Theorem \ref{theorem1} for $n\le 48000$ with the help of a computer. In fact, Z.-W. Sun has verified the truth of Theorem \ref{theorem1} for $n\le 10^5$. Therefore, the condition $n\ge 48000$ in Lemma \ref{lemma38} can be removed.\end{remark}
\bigskip
\section{The Cases (ii)-(vii)}
\setcounter{lemma}{0}
\setcounter{theorem}{0}
\setcounter{corollary}{0}
\setcounter{equation}{0}
\medskip
The purpose of this section is to deal with cases (ii)-(vii).
\begin{lemma}[Case (ii)]\label{lemma41}Suppose that $m=\delta p^{r}$, where $\delta \ge 4$, $p\ge 5$ is a prime, $r\ge 2$ is a positive integer and $p\nmid \delta$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof} By Lemma \ref{lemma33}, it is sufficient to prove $p^{r}+\delta\frac{p-1}{2}\le n$. By \eqref{inequality1}, $n> \frac{\delta p^{r}}{3}$ and it suffices to prove $p^{r}+\frac{1}{2}\delta p\le \frac{\delta p^{r}}{3}$. This follows from
\begin{align}\label{ineqin41}(p^{r-1}-\frac{3}{2})(\delta-3)\ge \frac{9}{2}.\end{align}
Since $p^{r-1}-\frac{3}{2}\ge p-\frac{3}{2}\ge \frac{7}{2}$, \eqref{ineqin41} holds if $\delta\ge 5$. In the case $\delta=4$, \eqref{ineqin41} holds if $p^{r-1}\ge 6$. We now only need to consider $\delta=4$, $p=5$, $r=2$, and it is easy to verify that $p^{r}+\delta\frac{p-1}{2}\le \frac{\delta p^{r}}{3}\le n$ holds.
This completes the proof.
\end{proof}
\begin{lemma}[Case (iii)]\label{lemma42}Suppose that $m=2^r$, where $r\in \Z^{+}$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}Note that $2^3+2-1^3-1=2^3$ and $5^3+5-1^3-1=2^7$. For $r\le 3$, we can choose $a=1$ and $b=2$. For $4\le r\le 7$, we can choose $a=1$ and $b=5$.
Now we assume that $r\ge 8$. The proof is the same as that of Lemma 3.7 in \cite{YZ}, and thus we explain it briefly. Since $(a+4)^3+(a+4)-a^3-a=4(3(a+2)^2+5)$, it suffices to find $1\le a\le 2^{r-2}-3$ such that $3(a+2)^2+5\equiv 0\pmod{2^{r-2}}$. For $r\ge 8$, we can find $3\le x\le 2^{r-2}-1$ such that $3x^2+5\equiv 0\pmod{2^{r-2}}$. On choosing $a=x-2$, we obtain $3(a+2)^2+5\equiv 0\pmod{2^{r-2}}$. Note that $b=4+a=x+2\le 2^{r-2}+1\le n$. We are done.
\end{proof}
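The solution $x$ of $3x^2+5\equiv 0\pmod{2^{r-2}}$ can be produced constructively by a $2$-adic lifting. The sketch below is an illustration of this (it does not reproduce the argument of \cite{YZ}); since $x$ must be odd, adding $2^{j-2}$ shifts $3x^2+5$ by $2^{j-1}$ modulo $2^j$, which is exactly what is needed to lift a solution one power of $2$ at a time.

```python
def solve_pow2(j_max):
    """Return a dict j -> x with 3*x^2 + 5 divisible by 2^j, 3 <= j <= j_max."""
    sols = {3: 1}                    # 3*1^2 + 5 = 8 = 2^3
    x = 1
    for j in range(4, j_max + 1):
        if (3 * x * x + 5) % (1 << j):
            # x is odd, so adding 2^(j-2) changes 3x^2+5 by 2^(j-1) mod 2^j
            x += 1 << (j - 2)
        assert (3 * x * x + 5) % (1 << j) == 0
        sols[j] = x
    return sols

sols = solve_pow2(30)
# for j >= 6 the solution found lies in the range 3 <= x < 2^j used above
assert all(3 <= sols[j] < (1 << j) for j in range(6, 31))
```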
\begin{lemma}[Case (iv)]\label{lemma43}Suppose that $m=2^{r}t$, where $r\ge 2$ is an integer, $t\ge 5$ is an odd number. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}We consider $1\le a\le 2^{r}$ and $b=a+t$. Note that $2^{r}+t\le \frac{2^rt}{3}$ is equivalent to $(2^r-3)(t-3)\ge 9$, which holds except when $r=2$ and $t\le 11$. In view of Remark \ref{remark}, we may assume that $n>44$; since $n\le m=2^rt$ by \eqref{inequality1}, this rules out the exceptional case $m=4t\le 44$, and we have $b\le 2^{r}+t\le n$. It suffices to find $1\le a\le 2^{r}$ such that
$$a^2+a(a+t)+(a+t)^2+1\equiv 0\pmod{2^{r}},$$
and the proof of (3.1) in \cite{YZ} also implies the above conclusion. We are done.
\end{proof}
\begin{lemma}[Case (v)]\label{lemma44}Suppose that $m=2^r3^s$, where $r,s\in \Z^{+}$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}By Lemma \ref{lemma43}, we only need to consider either $r=1$ or $s=1$.
We first consider $s=1$. Note that $a^2+a(a+3)+(a+3)^2+1$ is equal to $40=2^3\cdot 5$ when $a=2$. If $r\le 3$, then the desired conclusion follows by choosing $a=2$ and $b=5$. Next we assume $r\ge 4$. Similarly to the proof of (3.1) in \cite{YZ}, we can obtain that for any $j\ge 3$, there exists $1\le a\le 2^{j}-6$ such that
\begin{align*}a^2+a(a+3)+(a+3)^2+1\equiv 0\pmod{2^{j}}.\end{align*}
In particular, there exists $1\le a\le 2^{r}-6$ such that
$a^2+a(a+3)+(a+3)^2+1\equiv 0\pmod{2^{r}}$. The desired conclusion follows by choosing $b=a+3$ and noting that $b\le 2^{r}-3<n$.
Now we consider $r=1$. By \eqref{inequality1}, $n>3^s$. We can choose $a=1$ and $b=1+3^s$.
The proof is complete.
\end{proof}
\begin{lemma}[Case (vi)]\label{lemma45}Suppose that $m=3^{r}\cdot 14$, where $r\in \N$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}When $m=14$, it suffices to choose $a=1$ and $b=3$. Now we assume that $r\ge 1$. By Lemma \ref{lemma32} (noting that $L_7=3$) and Lemma \ref{lemma33} (ii) (with $p=7$ and $\delta=2\cdot 3^{r}$),
we only need to verify $7+3^{r+1}\cdot 2\le n$. By \eqref{inequality1}, $r=k-3$ and $n>3^{k-1}=3^{r+2}$. Note that $7+3^{r+1}\cdot 2<3^{r+2}$ if $r\ge 1$.
This completes the proof.
\end{proof}
The last task in this section is to consider Case (vii). The proof is the same as that in Section 4 of \cite{YZ}. We introduce
\begin{align}X:=X_p=\lfloor\frac{p}{3}\rfloor p^{t-1}.\end{align}
We aim to find $1\le a\not=b\le \frac{n}{\delta}$ such that $\delta^2(a^2+ab+b^2)+1\equiv 0\pmod{p^{t}}$. Then on choosing $a'=\delta a$, $b'=\delta b$, we obtain $a'^3+a'\equiv b'^3+b'\pmod{m}$. By \eqref{inequality1}, we have $X<\frac{n}{\delta}$.
Let
\begin{align}\label{definef}f(a,b)=\delta^2(a^2+ab+b^2).\end{align}Now we introduce
\begin{align}\label{defineN}\mathcal{N}=\sum_{\substack{1\le a,b\le X \\ f(a,b)+1\equiv 0\pmod{p^{t}}}}1\end{align}
and
\begin{align*}\mathcal{N}^{\not=}=\sum_{\substack{1\le a\not=b\le X \\ f(a,b)+1\equiv 0\pmod{p^{t}}}}1.\end{align*}
Note that $\mathcal{N}^{\not=}\ge \mathcal{N}-2$. The main objective is to prove $\mathcal{N}>2$ (and thus $\mathcal{N}^{\not=}>0$).
For $j\ge 1$, we define
\begin{align}\label{defineTj}T_j=\sum_{\substack{1\le c\le p^j \\ (c,p)=1}}\sum_{\substack{1\le a,b\le X }}e\big(\frac{cf(a,b)+c}{p^j}\big).\end{align}
\begin{lemma}\label{lemma46}Let $\mathcal{N}$ and $T_j$ be given in \eqref{defineN} and \eqref{defineTj} respectively. We have
\begin{align*}\mathcal{N}=\frac{X^2}{p^{t}}+\frac{1}{p^{t}}\sum_{j=1}^{t}T_j.\end{align*}
If $1\le j\le t-1$, then
\begin{align*}T_j=X^2p^{-j}\Big(\frac{-3}{p^j}\Big)\mu(p^j),\end{align*}
where $\mu(\cdot)$ is the M\"obius function.
Moreover, we also have
\begin{align*}|T_t|\le 2p^{\frac{3t}{2}}(2+\ln p^t)^2.\end{align*}
\end{lemma}
\begin{proof}The three conclusions correspond to Lemma 4.2, Lemma 4.4 and Lemma 4.9 in \cite{YZ}, respectively. Although we only considered the case $t=2r$ in \cite{YZ}, both the proofs and the conclusions of these lemmas remain valid for all $t\in \Z^{+}$.\end{proof}
\begin{lemma}\label{lemma47}If $t\ge 2$, then
\begin{align}\label{finalineq1}\mathcal{N}\ge \frac{X^2}{p^{t}}-\Big(\frac{-3}{p}\Big)\frac{X^2}{p^{t+1}}-2p^{t/2}(2+\ln p^{t})^2. \end{align}
If $t=1$, then
\begin{align}\label{finalineq2}\mathcal{N}\ge \frac{X^2}{p^{t}}-2p^{t/2}(2+\ln p^{t})^2. \end{align}
\end{lemma}
\begin{proof}The desired conclusions follow from Lemma \ref{lemma46}.\end{proof}
\begin{lemma}\label{lemma48}Suppose that $m=\delta p^{t}$, where $1\le \delta \le 3$, $p\ge 5$ is a prime, $t\in \Z^{+}$. Suppose further that $p^t\ge 20000^2$. Then we have
\begin{align*}\mathcal{N}^{\not=}>0.\end{align*}
\end{lemma}
\begin{proof}
For $p\ge 7$, we have $\lfloor\frac{p}{3}\rfloor\ge \frac{3}{11}p$ (the equality holds with $p=11$) and $1-\frac{1}{p}\ge \frac{6}{7}$. Thus for $p\ge 7$, we have
\begin{align}\label{check1}\lfloor\frac{p}{3}\rfloor^2 p^{-2}\Big(1-\big(\frac{-3}{p}\big)\frac{1}{p}\Big)\ge \lfloor\frac{p}{3}\rfloor^2 p^{-2}(1-\frac{1}{p})\ge \frac{3^2\cdot 6}{11^2 \cdot 7}>\frac{6}{125}.\end{align}
For $p=5$, we have
\begin{align}\label{check2}\lfloor\frac{p}{3}\rfloor^2 p^{-2}\Big(1-\big(\frac{-3}{p}\big)\frac{1}{p}\Big)=\frac{6}{125}.\end{align}
We deduce from \eqref{finalineq1}, \eqref{finalineq2}, \eqref{check1} and \eqref{check2} that (for all $p\ge 5$)
\begin{align*}\mathcal{N}\ge \frac{6}{125}p^{t}-2p^{t/2}(2+\ln p^{t})^2. \end{align*}
Since $\mathcal{N}^{\not=}\ge \mathcal{N}-2$, we need to prove
$$\frac{6}{125}p^{t}>2p^{t/2}(2+\ln p^t)^2+2,$$
which follows from
\begin{align}\label{check4}\sqrt{p^t}>\frac{125}{3}(2+\ln p^t)^2+30.\end{align}
On writing $q=\sqrt{p^t}$, our task is to prove $q>\frac{500}{3}(1+\ln q)^2+30$.
Let $g(x)=\sqrt{x-30}-\sqrt{\frac{500}{3}}(1+\ln x)$. Then
$g'(x)=\frac{1}{2\sqrt{x-30}}-\sqrt{\frac{500}{3}}\cdot\frac{1}{x}>\frac{1}{2\sqrt{x}}-\sqrt{\frac{500}{3}}\cdot\frac{1}{x}$ for $x>30$ and $g$ is increasing when $x> \frac{2000}{3}$. Note that $g(20000)>0$. Therefore, $q>\frac{500}{3}(1+\ln q)^2+30$ holds for $q\ge 20000$ and \eqref{check4} holds due to $p^t\ge 20000^2$.
The proof is complete.
\end{proof}
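The final numerical step can be confirmed directly (an illustration, not the proof):

```python
import math

def g(x):
    return math.sqrt(x - 30) - math.sqrt(500 / 3) * (1 + math.log(x))

assert g(20000) > 0
# g is increasing past 2000/3, so positivity at 20000 propagates upward:
assert all(g(x) < g(x + 1000) for x in range(1000, 10**6, 1000))
# hence q > (500/3)(1 + ln q)^2 + 30 at q = 20000 itself:
assert 20000 > (500 / 3) * (1 + math.log(20000)) ** 2 + 30
```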
In view of \eqref{inequality1}, for $m=\delta p^t$ (with $1\le \delta \le 3$ and $p\ge 5$ a prime) we have
$$3^{k-1}<n\le \delta p^t<3^{k},$$
and we define
\begin{align}\label{Nstar}\mathcal{N}^\ast=\sum_{\substack{1\le a<b\le 1+3^{k-1} \\ a^3+a\equiv b^3+b\pmod{\delta p^{t}}}}1.\end{align}
We verify $\mathcal{N}^\ast>0$ for $p^t<20000^2$ with the help of a computer.
\begin{lemma}\label{lemma49}Let $\mathcal{N}^\ast$ be given in \eqref{Nstar}. Suppose that $m=\delta p^{t}$, where $1\le \delta \le 3$, $p\ge 5$ is a prime, $t\in \Z^{+}$. Suppose further that $p^t<20000^2$. Then we have
\begin{align*}\mathcal{N}^\ast>0.\end{align*}
\end{lemma}
\begin{proof}This was verified by a computer search written in C++.\end{proof}
\begin{lemma}[Case (vii)]\label{lemma410}Suppose that $m=\delta p^{t}$, where $1\le \delta \le 3$, $p\ge 5$ is a prime, $t\in \Z^{+}$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}The desired conclusion follows from Lemma \ref{lemma48} and Lemma \ref{lemma49}. \end{proof}
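A small-scale Python analogue of the C++ verification (illustrative; it checks a handful of moduli $m=\delta p^t$ rather than the full range $p^t<20000^2$):

```python
def has_collision(m):
    """Is there 1 <= a < b <= 1 + 3^(k-1) with b^3+b == a^3+a mod m,
    where k is determined by 3^(k-1) < m < 3^k?"""
    k = 1
    while 3 ** k <= m:
        k += 1
    bound = 1 + 3 ** (k - 1)
    seen = set()
    for a in range(1, bound + 1):
        v = (a * a * a + a) % m
        if v in seen:
            return True
        seen.add(v)
    return False

# a few moduli of the form delta * p^t with 1 <= delta <= 3, p >= 5 prime:
for m in (5, 25, 125, 15, 26, 33, 98):
    assert has_collision(m)
```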
\bigskip
\section{The Case (viii)}
\setcounter{lemma}{0}
\setcounter{theorem}{0}
\setcounter{corollary}{0}
\setcounter{equation}{0}
\medskip
It is in Case (viii) that the proof needs to distinguish whether or not $n\in \mathcal{E}$. For $m=3^r\cdot 7$, by \eqref{inequality1} we have $r=k-2$ and
$$n>3^{r+1}.$$
\begin{lemma}\label{lemma51}(i) If $r\equiv 0\pmod{3}$ or $r\equiv 2\pmod{3}$, then we have $\ell_7(3^r)\le 2$.
(ii) If $r\equiv 1\pmod{3}$, then we have $\ell_7(3^r)=3$.
\end{lemma}
\begin{proof}Since $\big(\frac{-3}{7}\big)=1$, $\ell_7(\delta)$ is the smallest positive integer $x$ such that
$\big(\frac{\delta^2x^2+4}{7}\big)=1$.
If $r\equiv 0\pmod{3}$, then $3^{2r}x^2+4\equiv x^2+4\equiv 1\pmod{7}$ for $x=2$ and thus $\ell_7(3^r)\le 2$ (indeed $\ell_7(3^r)=2$ in this case).
If $r\equiv 2\pmod{3}$, then $3^{2r}x^2+4\equiv 4x^2+4\equiv 1\pmod{7}$ for $x=1$ and thus $\ell_7(3^r)=1$.
If $r\equiv 1\pmod{3}$, then $3^{2r}x^2+4\equiv 2x^2+4\pmod{7}$. Note that $\big(\frac{2\cdot 1^2+4}{7}\big)=\big(\frac{2\cdot 2^2+4}{7}\big)=-1$ and $\big(\frac{2\cdot 3^2+4}{7}\big)=1$. Therefore, $\ell_7(3^r)=3$.
This completes the proof.\end{proof}
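Lemma \ref{lemma51} is easy to confirm by machine; the sketch below computes $\ell_7(3^r)$ as the least $x\ge 1$ with $\big(\frac{3^{2r}x^2+4}{7}\big)=1$ (the value $0$ being impossible modulo $7$, as noted above):

```python
def legendre7(a):
    """Legendre symbol (a/7); the quadratic residues mod 7 are 1, 2, 4."""
    a %= 7
    return 0 if a == 0 else (1 if a in (1, 2, 4) else -1)

def ell7(r):
    d2 = pow(3, 2 * r, 7)              # (3^r)^2 mod 7
    x = 1
    while legendre7(d2 * x * x + 4) != 1:
        x += 1
    return x

for r in range(30):
    if r % 3 == 1:
        assert ell7(r) == 3
    else:
        assert ell7(r) <= 2
```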
\begin{lemma}\label{lemma52}Suppose that $m=3^{r}\cdot 7$, where $r\equiv 0\pmod{3}$ or $r\equiv 2\pmod{3}$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}For $r=0$, we can choose $a=1$ and $b=3$. By Lemma \ref{lemma51} (i), $\ell_7(3^r)\le 2$. For $r\ge 2$, the desired conclusion follows from Lemma \ref{lemma33} (ii) on noting that $7+3^r\cdot 2\le 3^{r+1}<n$.\end{proof}
\begin{lemma}\label{lemma53}Suppose that $m=3^{r}\cdot 7$, where $r\equiv 1\pmod{3}$. Suppose further that either $r\equiv 1\pmod{6}$ or $n\not\in \mathcal{E}$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}Note that $a^2+a(a+3^{r+1})+(a+3^{r+1})^2+1=3a^2+3^{r+2}a+3^{2r+2}+1$. It suffices to find $a\in \Z^{+}$ such that $3a^2+3^{r+2}a+3^{2r+2}+1\equiv 0\pmod{7}$ and $a+3^{r+1}\le n$. Note that for $r\equiv 1\pmod{3}$, we have $3^{2r+2}\equiv 4\pmod{7}$. On writing $r=6s+1+3t$ with $s\in \N$ and $t\in \{0,1\}$, we have
\begin{align}\label{conginlemma53}3a^2+3^{r+2}a+3^{2r+2}+1 \equiv 3a^2+3^{3t+3}a+5\equiv 3a^2+(-1)^{t+1}a+5\pmod{7}.\end{align}
If $t=0$, then by \eqref{conginlemma53} we can choose $a=1$ such that $3a^2+3^{r+2}a+3^{2r+2}+1\equiv 0\pmod{7}$ and $a+3^{r+1}=1+3^{r+1}\le n$.
If $t=1$, then $n\not\in\mathcal{E}$ and $n\ge 3^{r+1}+3$. By \eqref{conginlemma53}, we choose $a=3$ such that $3a^2+3^{r+2}a+3^{2r+2}+1\equiv 0\pmod{7}$. Note that
$b=3+3^{r+1}\le n$. We are done.
\end{proof}
\begin{lemma}[Case (viii), Part 1]\label{lemma54}Let $n\not\in \mathcal{E}$. Suppose that $m=3^{r}\cdot 7$. Then there exist $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$.\end{lemma}
\begin{proof}The desired conclusion follows from Lemmas \ref{lemma52}-\ref{lemma53}.\end{proof}
\begin{lemma}[Case (viii), Part 2]\label{lemma55}Let $n=3^{6s+5}+1$ or $n=3^{6s+5}+2$. Suppose that $m=3^{r}\cdot 7$. Then $a^3+a(1\le a\le n)$ are pairwise distinct modulo $m$.\end{lemma}
\begin{proof}By \eqref{inequality1}, we have $r=6s+4$. Suppose otherwise that we can find $1\le a<b\le n$ such that $b^3+b\equiv a^3+a\pmod{m}$. Note that $3\nmid (a^2+ab+b^2+1)$ for any $a,b\in \Z$, and we conclude that $b=a+3^{6s+4}c$ for some $c\in \Z^{+}$. Since $b=a+3^{6s+4}c\le n$, we have $c\le 3$. Write $\delta=3^{6s+4}$. Then $b^3+b\equiv a^3+a\pmod{m}$ implies that $a^2+ab+b^2+1\equiv 0\pmod{7}$, which is equivalent to
\begin{align*}(6a+3\delta c)^2\equiv -3\delta^2c^2-12\pmod{7}.\end{align*}
Therefore, we have $\Big(\frac{-3\delta^2c^2-12}{7}\Big)\in\{0,1\}$. By Lemma \ref{lemma51} (ii), $\ell_7(\delta)=3$ for $\delta=3^{6s+4}$, and we obtain $c\ge \ell_7(\delta)=3$. Now we conclude that $c=3$ and $b=a+3^{6s+5}$.
Then we deduce that $a^2+ab+b^2+1\equiv 3a^2+a+5\equiv 0\pmod{7}$, which implies $a\ge 3$ and $b=a+3^{6s+5}\ge 3+3^{6s+5}$. This contradicts $b\le n$. The proof is complete.
\end{proof}
\bigskip
\noindent {\it Proof of Lemmas \ref{lemma21}-\ref{lemma22}.} In view of Lemma \ref{lemma38}, Remark \ref{remark}, Lemma \ref{lemma41}, Lemma \ref{lemma42}, Lemma \ref{lemma43}, Lemma \ref{lemma44}, Lemma \ref{lemma45}, Lemma \ref{lemma410}, Lemma \ref{lemma54} and Lemma \ref{lemma55}, we only need to prove that each positive integer $m$ restricted by \eqref{inequality1} must satisfy (at least) one of the $8$ cases in Section 2. By \eqref{inequality1}, $m$ is not a power of $3$.
If $m$ has no prime factor greater than $3$, then $m$ belongs to Case (iii) or (v). Next we assume that $m$ has at least two distinct prime factors greater than $3$. We write $m=m'p_1^{r_1}p_2^{r_2}$, where $p_1\not =p_2$ are two primes greater than $3$, $p_1\nmid m'$, $p_2\nmid m'$ and $r_1,r_2\in \Z^{+}$. Without loss of generality, we further assume that $p_1\not=7$ and $p_2\ge 7$. Let $\delta=m'p_2^{r_2}$. Then $m=\delta p_1^{r_1}$, $\delta\ge 7$ and $p_1\nmid \delta$. We can see that $m$ belongs to either Case (i) or Case (ii).
Now we assume that $m$ has only one prime factor greater than $3$. We write $m=2^{i}3^{j}p^{r}$, where $p\ge 5$ is a prime, $r\in \Z^{+}$ and $i,j\in \N$. Note that if $i\ge 2$, then $m$ satisfies the condition of Case (iv). We discuss $i=1$ and $i=0$ below.
We first consider $i=1$. If $j=0$, then $m$ belongs to Case (vii). If $j\ge 1$ and $r\ge 2$, then $m$ satisfies the condition of Case (ii). If $j\ge 1$, $r=1$ and $p\not=7$, then $m$ satisfies the condition of Case (i). If $j\ge 1$, $r=1$ and $p=7$, then $m$ satisfies the condition of Case (vi).
Now we consider $i=0$. If $0\le j\le 1$, then $m$ belongs to Case (vii). If $j\ge 2$ and $r\ge 2$, then $m$ satisfies the condition of Case (ii). If $j\ge 2$, $r=1$ and $p\not=7$, then $m$ satisfies the condition of Case (i). If $j\ge 2$, $r=1$ and $p=7$, then $m$ satisfies the condition of Case (viii).
We have proved that $m$ subject to \eqref{inequality1} must satisfy (at least) one of the $8$ cases in Section 2.
According to the remark before Lemma \ref{lemma21}, we also complete the proof of Theorem \ref{theorem1}.
\vskip3mm
\bigskip
| {
"timestamp": "2021-11-25T02:11:59",
"yymm": "2111",
"arxiv_id": "2111.11227",
"language": "en",
"url": "https://arxiv.org/abs/2111.11227",
"abstract": "Let $\\Delta(n)$ denote the smallest positive integer $m$ such that $a^3+a(1\\le a\\le n)$ are pairwise distinct modulo $m$. The purpose of this paper is to determine $\\Delta(n)$ for all positive integers $n$.",
"subjects": "Number Theory (math.NT)",
"title": "On a discriminator for the polynomial $f(x)=x^3+x$"
} |
https://arxiv.org/abs/1201.5989 | Nonconvexity of the set of hypergraph degree sequences | It is well known that the set of possible degree sequences for a graph on $n$ vertices is the intersection of a lattice and a convex polytope. We show that the set of possible degree sequences for a $k$-uniform hypergraph on $n$ vertices is not the intersection of a lattice and a convex polytope for $k \geq 3$ and $n \geq k+13$. We also show an analogous nonconvexity result for the set of degree sequences of $k$-partite $k$-uniform hypergraphs and the generalized notion of $\lambda$-balanced $k$-uniform hypergraphs. | \section{Introduction}
The \emph{degree sequence} of a graph $G$ on vertices $v_1, v_2, \dots, v_n$ is the sequence $d(G)=(d_1, d_2, \dots, d_n)$, where $d_i$ is the degree of the vertex $v_i$ in $G$. The Erd\H{o}s-Gallai Theorem~\cite{ErdosGallai} states that a sequence $(d_1, d_2, \dots, d_n)$ is the degree sequence of a (simple) graph if and only if $\sum_i d_i$ is even and the $d_i$ satisfy a certain set of inequalities. Koren~\cite{Koren} showed that these inequalities define a convex polytope $D_n(2)$, so that the sequences with even sum lying in this polytope are exactly the degree sequences of graphs on $n$ vertices. (For more on this polytope, see \cite{Stanley}.)
We consider the analogous question for $k$-uniform hypergraphs when $k>2$. Klivans and Reiner~\cite{KlivansReiner} verified computationally that the set of degree sequences for $k$-uniform hypergraphs is the intersection of a lattice and a convex polytope for $k=3$ and $n \leq 8$ and asked whether this holds in general. We will show in Section 2 that it does not hold for $k \geq 3$ and $n \geq k+13$.
Similarly, we can associate to a bipartite graph a pair of degree sequences giving the degrees of the vertices in each part. The Gale-Ryser Theorem~\cite{Ryser} gives necessary and sufficient conditions in the form of a system of linear inequalities for a pair of degree sequences to arise from a bipartite graph, so that the set of these pairs of degree sequences can again be described as the intersection of a lattice and a convex polytope. We will show in Section 3 that the analogous result does not hold for $k$-partite $k$-uniform hypergraphs if there exist three parts of sizes at least 5, 6, and 6, respectively. We also generalize the notion of $k$-partite $k$-uniform hypergraphs to that of $\lambda$-balanced $k$-uniform hypergraphs and prove a similar statement in this case.
\section{Hypergraph degree sequences}
A \emph{(simple) $k$-uniform hypergraph} $K$ on the set $[n]=\{1, 2, \dots, n\}$ is a collection of distinct elements (called \emph{hyperedges}) of $\binom{[n]}{k}$, the $k$-element subsets of $[n]$. The \emph{degree sequence} of $K$ is $d(K)=(d_1, d_2, \dots, d_n)$, where $d_i$ is the number of hyperedges in $K$ containing $i$.
We consider degree sequences as points in $\mathbf R^n$. Let $e_i$ be the $i$th standard basis vector, and for any $S = \{i_1, \dots, i_k\} \subset [n]$, write $e_S=e_{i_1i_2\dotsm i_k}=e_{i_1}+e_{i_2}+\dotsb+e_{i_k}$. Each degree sequence $d(K)$ is the sum of some subset of the $e_S$'s, so the convex hull of all such degree sequences is the zonotope
\[D=D_n(k)=\Big\{ \sum_{S \in \binom{[n]}{k}} c_Se_S \mid 0 \leq c_S \leq 1\Big\}.\]
(For more on this polytope, see \cite{BhanuMurthySrinivasan}.)
Moreover, if we let $L \subset \mathbf Z^n$ be the lattice generated by the $e_S$ consisting of lattice points whose coordinates have sum divisible by $k$ (as long as $n>k$), then each $d(K)$ lies in $D \cap L$. Our main result will be to show that $D \cap L$ contains a point that is not the degree sequence of a $k$-uniform hypergraph when $k \geq 3$.
As a remark, this is closely related to the weaker question of whether every point of $L$ lying in the real cone generated by the $e_S$ lies in the semigroup generated by the $e_S$. This is well known to be the case and is equivalent to normality of the monomial algebra generated by the $\mathbf x^S=x_{i_1}x_{i_2}\cdots x_{i_k}$. (See, for instance, \cite{Sturmfels}.) It is also easy to derive the affirmative answer to this question for $\lambda$-balanced hypergraphs as defined in the next section. The essential difference with the present question is that here we are restricted to using each hyperedge at most once.
For a hypergraph $K$, we will define $D(K)$ to be the zonotope generated by the hyperedges in $K$, so
\[D(K) = \left\{\sum_{S \in K} c_Se_S \mid 0 \leq c_S \leq 1 \right\}.\]
\begin{lemma} \label{face}
Let $K$ be a $k$-uniform hypergraph on $n$ vertices. Then any nonempty face $F$ of $D(K)$ is a translate of $D(K^0)$ for some $K^0 \subset K$. Moreover, $F \cap L$ contains a point that is not the degree sequence of a subhypergraph of $K$ if and only if $D(K^0) \cap L$ contains a point that is not the degree sequence of a subhypergraph of $K^0$.
\end{lemma}
\begin{proof}
Choose any weight vector $w \in (\mathbf R^*)^n$. To maximize $w\left(\sum c_Se_S\right)=\sum (c_S \cdot w(e_S))$ for $0 \leq c_S \leq 1$, we must take $c_S = 1$ when $w(e_S)>0$ and $c_S=0$ when $w(e_S)<0$, while $c_S$ can be arbitrary if $w(e_S)=0$. Thus the face on which $w$ is maximized is a translate of $D(K^0)$ by $\sum_{S \in K^+} e_S \in L$, where $K^+$ is the set of hyperedges $S$ on which $w$ is positive and $K^0$ is the set of hyperedges on which $w$ vanishes. The same argument gives the result for degree sequences (simply restrict each $c_S$ to be 0 or 1).
\end{proof}
Therefore it suffices to exhibit a weight vector $w$ to maximize and a point of $D(K^0) \cap L$ that is not the degree sequence of a subhypergraph of $K^0$.
\begin{prop} \label{example1}
Let $k=3$ and $n=16$. If
\[w=(8,6,6,4,1,1,0,0,0,0,-2,-2,-3,-3,-5,-12),\]
then
\[p=(2,1,1,2,1,1,1,1,1,1,1,1,2,2,2,1)\] lies in $D(K^0)\cap L$ but is not the degree sequence of a subhypergraph of $K^0$.
\end{prop}
\begin{proof}
Since the sum of the entries of $p$ is $21 = 3\cdot 7$, $p$ lies in $L$. Also,
\begin{multline*}
p = \frac 13 (e_{2,3,16}+e_{4,5,15}+e_{4,6,15}+e_{5,6,11}+e_{5,6,12}+e_{7,8,9}+e_{7,8,10}+e_{7,9,10}+e_{8,9,10})\\
+\frac 23 (e_{1,4,16}+e_{1,13,15}+e_{1,14,15}+e_{2,13,14}+e_{3,13,14}+e_{4,11,12}).
\end{multline*}
Since $w$ vanishes on each $e_S$ on the right side, it follows that $p \in D(K^0)$. However, $p$ is not the degree sequence of a subhypergraph of $K^0$: since $w_7=w_8=w_9=w_{10}=0$ and otherwise $w_i\neq-w_j$, we have $(e_7+e_8+e_9+e_{10})\cdot e_S$ is 0 or 3 for any $S \in K^0$. But $(e_7+e_8+e_9+e_{10})\cdot p = 4$, which is not divisible by 3, so it cannot be the sum of some $e_S$ for $S \in K^0$.
\end{proof}
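All computations in the proof can be checked by machine. The following Python sketch (vertex labels $1$-based as in the text) verifies that $w$ vanishes on every hyperedge used, that the convex combination equals $p$, and that the mod-$3$ obstruction holds:

```python
from fractions import Fraction
from itertools import combinations

w = [8, 6, 6, 4, 1, 1, 0, 0, 0, 0, -2, -2, -3, -3, -5, -12]
p = [2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 1]

third = [(2, 3, 16), (4, 5, 15), (4, 6, 15), (5, 6, 11), (5, 6, 12),
         (7, 8, 9), (7, 8, 10), (7, 9, 10), (8, 9, 10)]
two_thirds = [(1, 4, 16), (1, 13, 15), (1, 14, 15),
              (2, 13, 14), (3, 13, 14), (4, 11, 12)]

# w vanishes on every hyperedge used in the convex combination:
for S in third + two_thirds:
    assert sum(w[i - 1] for i in S) == 0

# the combination really equals p (exact rational arithmetic):
acc = [Fraction(0)] * 16
for S in third:
    for i in S:
        acc[i - 1] += Fraction(1, 3)
for S in two_thirds:
    for i in S:
        acc[i - 1] += Fraction(2, 3)
assert acc == p and sum(p) == 21

# the obstruction: every S in K^0 meets {7,8,9,10} in 0 or 3 vertices,
# while p has coordinate sum 4 there, and 4 is not divisible by 3.
zero_set = {7, 8, 9, 10}
for S in combinations(range(1, 17), 3):
    if sum(w[i - 1] for i in S) == 0:
        assert len(zero_set & set(S)) in (0, 3)
assert sum(p[i - 1] for i in zero_set) == 4
```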
Using this, we can easily derive the following.
\begin{thm} \label{main}
For $k \geq 3$ and $n \geq k+13$, the set of degree sequences of $k$-uniform hypergraphs on $n$ vertices is not the intersection of a lattice and a convex polytope.
\end{thm}
\begin{proof}
It suffices to show that there is a point in $D \cap L$ that is not a degree sequence (since $D$ and $L$ are the smallest convex polytope and lattice containing all degree sequences). Combining Lemma~\ref{face} and Proposition~\ref{example1} gives the result for $k=3$ and $n=16$. Since $D_n(k)$ is the face of $D_{n+1}(k)$ with last coordinate 0, Lemma~\ref{face} also gives the result for $k=3$ and $n\geq 16$.
Consider the map $f\colon (d_1, d_2,\dots, d_n) \mapsto (d_1,d_2,\dots,d_n,\frac{1}{k}(d_1+\dots+d_n))$. Then $d$ is a $k$-uniform hypergraph degree sequence on $n$ vertices if and only if $f(d)$ is a $(k+1)$-uniform hypergraph degree sequence on $n+1$ vertices (simply add vertex $n+1$ to all hyperedges). Since $f$ is linear, it also sends $D_n(k)$ into $D_{n+1}(k+1)$, so any counterexample for $(n,k)$ yields a counterexample for $(n+1,k+1)$. An easy induction completes the proof.
\end{proof}
It is possible that with additional work or computation the constant 13 may be improved.
In the next section, we will prove an analogous result for $k$-partite $k$-uniform hypergraphs as well as the more general $\lambda$-balanced hypergraphs. (Our construction below can also be used to prove Theorem~\ref{main} but with a constant of 14 instead of 13.)
\section{$\lambda$-balanced hypergraphs}
Let $\lambda=(\lambda_1, \lambda_2, \dots, \lambda_p)$ be a partition of $k$. We say a $k$-uniform hypergraph is \emph{$\lambda$-balanced} if its vertex set can be partitioned into $p$ sets $V_1, \dots, V_p$ such that each hyperedge contains $\lambda_i$ vertices from $V_i$. (We will also call a hyperedge $\lambda$-balanced if it satisfies this property.) A $(1,1,\dots, 1)$-balanced partition is called \emph{$k$-partite}. Note that every $k$-uniform hypergraph is $(k)$-balanced.
Let $n_i=|V_i|$, and label the vertices in $V_i$ by $v^i_1, v^i_2, \dots, v^i_{n_i}$. We then associate to a $\lambda$-balanced hypergraph $K$ a degree sequence \[d=(d^1_1, d^1_2, d^1_3, \dots;\quad d^2_1, d^2_2, \dots;\quad \dots;\quad d^p_1, d^p_2, \dots),\]
where $d^i_j$ gives the number of hyperedges in $K$ containing vertex $v^i_j$. As before, this degree sequence is $\sum_{S \in K} e_S$, where $e_S$ is the sum of the standard basis vectors in $\mathbf R^{n_1}\times \mathbf R^{n_2} \times \dots \times \mathbf R^{n_p}$ corresponding to vertices in the hyperedge $S$. When $n_i>\lambda_i$ for all $i$, the lattice $L$ generated by all possible $e_S$ consists of all sequences $d$ for which there exists $q\in \mathbf Z$ such that $\sum_{j=1}^{n_i} d^i_j = \lambda_iq$ for all $i$. (In other words, the sum of the degrees of the vertices in $V_i$ must be the same integer multiple of $\lambda_i$.)
As before, we let $D$ be the zonotope generated by all $e_S$ for $\lambda$-balanced hyperedges $S$ and ask whether all points in $D \cap L$ are degree sequences for $\lambda$-balanced hypergraphs. We will again find that this is not the case for any $\lambda$ when $k\geq 3$ and the $n_i$ are sufficiently large. We first consider a special case.
\begin{prop} \label{example2}
Let $\lambda=(1,1,1)$ and $(n_1, n_2, n_3)=(5,6,6)$. Also let
\[w = (-7,-7,-7,-7,-7;\quad 1, 1, 2, 2, 3, 3;\quad 6, 6,5,5,4,4)\]
and define $K^0$ as in Lemma~\ref{face}. Then
\[p=(11,9,6,3,1;\quad 2,4,6,8,3,7;\quad 2,4,6,8,3,7)\]
lies in $D(K^0) \cap L$ but is not the degree sequence of a subhypergraph of $K^0$.
\end{prop}
\begin{proof}
Define points
\begin{alignat*}{2}
p^-&=(10,8,4,2,0;\quad&1,3,5,7,2,6;\quad&1,3,5,7,2,6),\\
p^+&=(12,10,8,4,2;\quad&3,5,7,9,4,8;\quad&3,5,7,9,4,8),
\end{alignat*}
so $p=\frac12(p^-+p^+)$.
Note that the sum of the coordinates of the three parts of $p^-$ are all 24, so $p^- \in L$. Likewise, $p^+$ and $p$ also lie in $L$.
Let
\[A=(a_{rs})=
\begin{pmatrix}
1&2&0&0&0&0\\
2&3&0&0&0&0\\
0&0&3&4&0&0\\
0&0&4&5&0&0\\
0&0&0&0&1&3\\
0&0&0&0&3&5\end{pmatrix}.\]
Note that $K^0 = \{\{v^1_q,v^2_r,v^3_s\} \mid 1 \leq q \leq 5, 1\leq r,s \leq 6, a_{rs} \neq 0\}$.
Then $p^- = \sum e_S$, where the sum ranges over all $S=\{v^1_q, v^2_r, v^3_s\}$ such that $q < a_{rs}$. Likewise $p^+ = \sum e_S$, where instead $q \leq a_{rs}$. Therefore $p^-$, $p^+$, and their midpoint $p$ lie in $D(K^0)$.
We will now show that $p$ is not the degree sequence of a hypergraph that uses only hyperedges in $K_0$. Suppose it were, so that we could write $p=\sum_{S \in K} e_S$ for some $K \subset K_0$. Let $B=(b_{rs})$ be the $6 \times 6$ matrix such that $b_{rs}$ counts the number of $q$ for which $\{v^1_q,v^2_r,v^3_s\} \in K$. Then the sequences of row sums and of column sums of $B$ must both equal $(2,4,6,8,3,7)$. Since we also know that $0 \leq b_{rs}\leq 5$, this means that:
\begin{align*}
B_1=\begin{pmatrix}
b_{11}&b_{12}\\b_{21}&b_{22}
\end{pmatrix}
&\in
\left\{
\begin{pmatrix}0&2\\2&2\end{pmatrix},
\begin{pmatrix}1&1\\1&3\end{pmatrix},
\begin{pmatrix}2&0\\0&4\end{pmatrix}
\right\}\\
B_2=\begin{pmatrix}
b_{33}&b_{34}\\b_{43}&b_{44}
\end{pmatrix}
&\in
\left\{
\begin{pmatrix}1&5\\5&3\end{pmatrix},
\begin{pmatrix}2&4\\4&4\end{pmatrix},
\begin{pmatrix}3&3\\3&5\end{pmatrix}
\right\}\\
B_3=\begin{pmatrix}
b_{55}&b_{56}\\b_{65}&b_{66}
\end{pmatrix}
&\in
\left\{
\begin{pmatrix}0&3\\3&4\end{pmatrix},
\begin{pmatrix}1&2\\2&5\end{pmatrix}
\right\}.
\end{align*}
Moreover, for $1 \leq r,s \leq 6$, the pair $\{v^2_r,v^3_s\}$ can appear in at most $\min \{q, b_{rs}\}$ hyperedges whose vertex from $V_1$ lies in $\{v^1_1, \dots, v^1_q\}$. Therefore, if we let $\mu=(11,9,6,3,1)$, then $\mu_1+\dots+\mu_q \leq \sum_{r,s} \min \{q, b_{rs}\}$. In other words, if $\nu = (\nu_1, \dots, \nu_5)$ is the partition such that $\nu_q$ counts the number of $b_{rs}$ that are at least $q$, then $\mu_1 + \dots + \mu_q \leq \nu_1 + \dots + \nu_q$.
It is now straightforward to show that there are no possible choices of $B_1$, $B_2$, and $B_3$ satisfying these conditions: if
$B_3 = (\begin{smallmatrix}0&3\\3&4\end{smallmatrix})$, we cannot choose $B_1$ such that both $\mu_1 \leq \nu_1$ and $\mu_1+\mu_2 \leq \nu_1+\nu_2$. Similarly if $B_3=(\begin{smallmatrix}1&2\\2&5\end{smallmatrix})$, we cannot choose $B_2$ such that both $\mu_1+\mu_2+\mu_3 \leq \nu_1+\nu_2+\nu_3$ and $\mu_1+\mu_2+\mu_3+\mu_4 \leq \nu_1+\nu_2+\nu_3+\nu_4$. Thus $p \in D(K_0) \cap L$ is not the degree sequence of a hypergraph using only hyperedges in $K_0$.
\end{proof}
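The case analysis in the proof can also be replayed exhaustively. The sketch below (plain Python, written for this presentation) enumerates every matrix $B$ with the required block structure and row and column sums, and checks that the necessary condition $\mu_1+\dots+\mu_q \leq \nu_1+\dots+\nu_q$ fails in every case.

```python
from itertools import product

mu = [11, 9, 6, 3, 1]  # prescribed degrees on V_1, in decreasing order

# All 2x2 blocks with row sums (r1, r2) and column sums (c1, c2),
# entries between 0 and 5 (there are only 5 choices for the V_1 vertex).
def blocks(r1, r2, c1, c2):
    out = []
    for b11 in range(6):
        b12, b21 = r1 - b11, c1 - b11
        b22 = r2 - b21
        if all(0 <= b <= 5 for b in (b12, b21, b22)) and b12 + b22 == c2:
            out.append((b11, b12, b21, b22))
    return out

B1s = blocks(2, 4, 2, 4)
B2s = blocks(6, 8, 6, 8)
B3s = blocks(3, 7, 3, 7)
assert (len(B1s), len(B2s), len(B3s)) == (3, 3, 2)  # the matrices listed above

# For each combined B, test mu_1 + ... + mu_q <= nu_1 + ... + nu_q,
# where nu_q counts the entries b_rs that are at least q.
feasible = 0
for B1, B2, B3 in product(B1s, B2s, B3s):
    b = list(B1) + list(B2) + list(B3)
    nu = [sum(1 for x in b if x >= q) for q in range(1, 6)]
    if all(sum(mu[:q]) <= sum(nu[:q]) for q in range(1, 6)):
        feasible += 1
assert feasible == 0  # no B survives, so p is not a degree sequence
```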
Combining Lemma~\ref{face} and Proposition~\ref{example2} gives our desired result for $3$-partite $3$-uniform hypergraphs, and we can easily extend this result to $k$-partite $k$-uniform hypergraphs.
\begin{thm} \label{partite}
For $k \geq 3$, consider $k$-partite $k$-uniform hypergraphs with parts of sizes $n_1, n_2, \dots, n_k$ for which $n_1 \geq 5$, $n_2 \geq 6$, $n_3 \geq 6$, and $n_i \geq 1$ otherwise. The corresponding set of degree sequences is not the intersection of a lattice and a convex polytope.
\end{thm}
\begin{proof}
As in Theorem~\ref{main}, combining Lemma~\ref{face} and Proposition~\ref{example2} gives the result for $k=3$ and $(n_1, n_2, n_3)=(5,6,6)$. Also note that the polytopes and lattices for $k\geq 3$ with $(n_1, n_2, n_3, n_4, \dots, n_k) = (5,6,6,1,\dots, 1)$ are all identical to the $k=3$ case (by projecting away the last $k-3$ coordinates) so this also proves those cases. Finally, increasing any $n_i$ but restricting to the face of the zonotope where the new vertices have degree 0 again reduces to the same case by Lemma~\ref{face}, completing the proof.
\end{proof}
Theorem~\ref{partite} is also easy to extend to $\lambda$-balanced hypergraphs for all $\lambda$ when $k \geq 3$. Consider a $\lambda$-balanced hypergraph on vertex sets $V_1, \dots, V_p$ of sizes $n_1, n_2, \dots, n_p$. We will say that $(n_1, \dots, n_p)$ is a \emph{$\lambda$-coarsening} of $(m_1, \dots, m_k)$ if each $V_i$ can be partitioned into $\lambda_i$ sets such that the sizes of all the resulting sets are $m_1, \dots, m_k$.
\begin{thm}\label{balanced}
Consider $\lambda$-balanced hypergraphs with parts of sizes $n_1, \dots, n_p$, where $(n_1, \dots, n_p)$ is a $\lambda$-coarsening of $(m_1, m_2, \dots, m_k)$ such that Theorem~\ref{partite} holds for parts of sizes $m_1, \dots, m_k$. (In particular, this will hold whenever the $n_i$ are sufficiently large.) Then the corresponding set of degree sequences is not the intersection of a lattice and a convex polytope.
\end{thm}
\begin{proof}
Let the vertex sets $V_1, \dots, V_p$ have corresponding coarsening $W_1, \dots, W_k$. It suffices to exhibit a weight vector $w$ such that the corresponding $K_0$ as in Lemma~\ref{face} is the complete $k$-partite $k$-uniform hypergraph on $W_1, \dots, W_k$. Indeed, any hyperedge in $K_0$ will be $\lambda$-balanced by the definition of $\lambda$-coarsening, and the lattice generated by hyperedges in $K_0$ is a sublattice of the lattice generated by all $\lambda$-balanced hyperedges. Therefore any counterexample for $K_0$ will yield a counterexample for $\lambda$-balanced hypergraphs as in Lemma~\ref{face}.
To exhibit such a weight vector, let $N$ be an integer larger than every $m_i$. Let the weight of each vertex in $W_1$ be $-(1+N+N^2+\dots+N^{k-2})$ and the weight of each vertex in $W_i$ be $N^{i-2}$ for $2 \leq i \leq k$. By uniqueness of base-$N$ representations, the only way to pick $k$ vertices whose weights sum to $0$ is to take one from each $W_i$. In other words, the only hyperedges in $K_0$ are those that have one vertex from each $W_i$, as desired.
\end{proof}
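The base-$N$ argument in the last proof can be sanity-checked by brute force for small parameters. In the sketch below (plain Python), the part sizes $m=(2,3,2)$ and the value $N=4$ are arbitrary choices with $N$ larger than every $m_i$.

```python
from itertools import combinations

k, N = 3, 4
m = (2, 3, 2)          # part sizes, each smaller than N
assert N > max(m)

# weight -(1 + N + ... + N^(k-2)) on W_1 and N^(i-2) on W_i for i >= 2
weights = [-sum(N**j for j in range(k - 1))] + [N**(i - 2) for i in range(2, k + 1)]
assert sum(weights) == 0   # one vertex from each part always has weight sum 0
vertices = [(i, v) for i in range(k) for v in range(m[i])]  # (part, label)

for S in combinations(vertices, k):
    if sum(weights[i] for i, _ in S) == 0:
        # every zero-weight k-subset must hit each part exactly once
        assert sorted(i for i, _ in S) == list(range(k))
```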
\section{Acknowledgments}
The author would like to thank Victor Reiner for suggesting this direction of study, as well as for useful discussions and overall encouragement. This work was supported by a National Science Foundation Mathematical Sciences Postdoctoral Research Fellowship.
% Source: https://arxiv.org/abs/2105.03086

\title{Pseudorandom sequences derived from automatic sequences}

\begin{abstract}
Many automatic sequences, such as the Thue-Morse sequence or the Rudin-Shapiro sequence, have some desirable features of pseudorandomness such as a large linear complexity and a small well-distribution measure. However, they also have some disastrous properties in view of certain applications. For example, the majority of possible binary patterns never appears in automatic sequences and their correlation measure of order 2 is extremely large. Certain subsequences, such as automatic sequences along squares, may keep the good properties of the original sequence but avoid the bad ones. In this survey we investigate properties of pseudorandomness and non-randomness of automatic sequences and their subsequences and present results on their behaviour under several measures of pseudorandomness including linear complexity, correlation measure of order $k$, expansion complexity and normality. We also mention some analogs for finite fields.
\end{abstract}

\section{Introduction}
Pseudorandom sequences are sequences generated by deterministic algorithms which shall simulate randomness.
In contrast to truly random sequences they are not random at all but guarantee certain desirable features and are reproducible.
Automatic sequences, see Section~\ref{sec:automatic_sequences} below for the definition, have some of these desirable features but also some undesirable ones.
For example, the Thue-Morse sequence $(t_n)$, defined by \eqref{tmdef} below,
\begin{itemize}
\item has large $N$th linear complexity, see Section~\ref{sec:linear_complexity},
\item has large $N$th maximum-order complexity, see Section~\ref{sec:max-order_complexity},
\item is balanced and has a small well-distribution measure, see Section~\ref{sec:correlation}.
\end{itemize}
However, the Thue-Morse sequence
\begin{itemize}
\item has a very large correlation measure of order $2$,
see Section~\ref{sec:correlation},
\item a very small expansion complexity, see Section~\ref{sec:expansion_complexity},
\item and there are short patterns such as $000$ and $111$ which do not appear in the sequence and its subword complexity is only linear, see Section~\ref{sec:normality}.
\end{itemize}
Hence, despite some nice features, this sequence does not look random at all, see Figure~\ref{tmfig}. The same is true for the Rudin-Shapiro sequence $(r_n)$ defined by~\eqref{rsdef} below and many other related sequences.
\begin{figure}
\begin{center}
\includegraphics[scale=0.15]{TM.png} \qquad
\includegraphics[scale=0.15]{RS.png}
\end{center}
\caption{The first $4096$ elements of the Thue-Morse (left) and Rudin-Shapiro (right) sequence split into $64$ rows of $64$ sequence elements each. Zeros are represented by white, ones are represented by black.}
\label{tmfig}
\end{figure}
Taking suitable subsequences may destroy the non-random structure of the original sequence but may keep the desirable features of pseudorandomness.
Promising candidates for such subsequences are
\begin{itemize}
\item along squares, cubes, bi-squares, \dots, or more generally along the values of any polynomial $f$ of degree at least $2$ with $f(\mathbb{N}_0)\subset \mathbb{N}_0$,
\item along primes,
\item along the Piatetski-Shapiro sequence $\lfloor n^c\rfloor$, $1<c<2$,
\item and along geometric sequences such as $3^n$.
\end{itemize}
For example, the Thue-Morse sequence and the Rudin-Shapiro sequence along squares still
\begin{itemize}
\item have a large maximum-order complexity and thus a large linear complexity, see Section~\ref{sec:max-order_complexity},
\item and are asymptotically balanced, see Section~\ref{sec:normality}.
\end{itemize}
Moreover, in contrast to the original sequence they
\begin{itemize}
\item have unbounded expansion complexity, see Section~\ref{sec:expansion_complexity},
\item and are normal, that is, asymptotically each pattern appears with the right frequency in the sequence, see Section~\ref{sec:normality}.
\end{itemize}
Roughly speaking, they look much more random than the original sequences, see Figure~\ref{squarefig}.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.15]{TM_along_squares.png} \qquad
\includegraphics[scale=0.15]{RS_along_squares.png}
\end{center}
\caption{The first $4096$ elements of the Thue-Morse (left) and Rudin-Shapiro (right) sequence along squares
split into $64$ rows of $64$ sequence elements each. Zeros are represented by white, ones are represented by black.}
\label{squarefig}
\end{figure}
Still some questions about these sequences remain open such as upper bounds on the correlation measure of order $k$ and on the expansion complexity. We will state explicitly some selected open problems to motivate future research.
We also look for further directions in Section~\ref{sec:finite_fields}. In particular, we discuss analogs of the Thue-Morse and Rudin-Shapiro sequence and their subsequences in the setting of finite fields.
For general background on automatic sequences and finite automata we refer to the monograph of Allouche and Shallit \cite{alsh2} and also to \cite{alsh,alshya,ev,fo}.
For surveys on pseudorandom sequences see \cite{gy,merisa,niwi,sh,towi}.
\section{Finite automata and automatic sequences}\label{sec:automatic_sequences}
Roughly speaking, a sequence is {\em automatic} if it is generated by a finite automaton, see Definition~\ref{def:sequence} below.
\begin{definition}
Let $k\geq 2$ be an integer.
A \emph{finite $k$-automaton} ${\mathcal A}$ is a $6$-tuple
$$
{\mathcal A}=(Q,\Sigma, \delta, q_0, \varphi,\Delta),
$$
where
\begin{itemize}
\item $Q$ is a finite set of states,
\item $\Sigma=\{0,1,\ldots,k-1\}$ is the input alphabet,
\item $\delta: Q \times \Sigma \rightarrow Q$ is the transition function,
\item $q_0\in Q$ is the initial state,
\item $\Delta$ is the output alphabet
\item and $\varphi:Q \rightarrow \Delta$ is the output function.
\end{itemize}
\end{definition}
For example, the \emph{Thue-Morse automaton}, see Figure~\ref{fig:TM}, is a $2$-automaton with $2$ states and the \emph{Rudin-Shapiro automaton}, see Figure~\ref{fig:RS}, is a $2$-automaton with $4$ states, both with inputs and outputs in $\Sigma=\Delta=\{0,1\}$.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[auto,thick]
\node (E) at (-2,0) [circle] {};
\node (A) at (0,0) [circle, draw] {$A/0$};
\node (B) at (3,0) [circle, draw] {$B/1$};
\draw [->,bend left] (A) to node {1} (B);
\draw [->,bend left] (B) to node {1} (A);
\path (B) edge [loop above] node {0} (B);
\path (A) edge [loop above] node {0} (A);
\draw [->] (E) to node
[midway,above,align=center ] {\texttt{start}}
(A);
\end{tikzpicture}
\end{center}
\caption{Thue-Morse automaton} \label{fig:TM}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[auto,thick]
\node (E) at (-2,0) [circle] {};
\node (A) at (0,0) [circle, draw] {$A/0$};
\node (B) at (3,0) [circle, draw] {$B/0$};
\node (C) at (6,0) [circle, draw] {$C/1$};
\node (D) at (9,0) [circle, draw] {$D/1$};
\draw [->,bend left] (A) to node {1} (B);
\draw [->,bend left] (B) to node {0} (A);
\draw [->,bend left] (B) to node {1} (C);
\draw [->,bend left] (C) to node {1} (B);
\draw [->,bend left] (C) to node {0} (D);
\draw [->,bend left] (D) to node {1} (C);
\path (D) edge [loop above] node {0} (D);
\path (A) edge [loop above] node {0} (A);
\draw [->] (E) to node
[midway,above,align=center ] {\texttt{start}}
(A);
\end{tikzpicture}
\end{center}
\caption{Rudin-Shapiro automaton} \label{fig:RS}
\end{figure}
\begin{definition}\label{def:sequence}
Let $\Delta$ be a finite set. A sequence $(s_n)$ over $\Delta$ is called a \emph{$k$-automatic sequence} if there is a $k$-automaton ${\mathcal A}$ such that on input of the digits $n_0,n_1,\ldots$ of the $k$-ary expansion of $n\geq 0$,
\begin{equation}\label{eq:k-ary}
n=\sum_{i\geq 0} n_i k^i, \quad n_i\in\{0,1,\dots, k-1\},
\end{equation}
${\mathcal A}$ outputs the sequence element $s_n\in \Delta$.
Reading of the digits of $n$ starting with the most significant digit is called {\em direct} whereas reading starting with the least significant digit $n_0$ is called {\em reverse}.
If not stated otherwise, we use reverse reading.
Finally, a sequence is called {\em automatic} if it is $k$-automatic for some $k$.
\end{definition}
\begin{example}[Thue-Morse sequence]
The \emph{Thue-Morse sequence}~$(t_n)$ is a $2$-automatic sequence generated by the Thue-Morse automaton, see Figure~\ref{fig:TM}. It is the {\em sequence of binary digit sums modulo $2$}. The sequence begins with
$$
011010011001 \dots,
$$
see also Figure~\ref{tmfig} for a picture of the first $4096$ sequence elements.
It follows from the defining automaton, see Figure~\ref{fig:TM}, that $(t_n)$
satisfies the following recurrence relation
\begin{equation}\label{tmdef}
t_n=
\left\{
\begin{array}{cl}
t_{n/2} & \mbox{if $n$ is even},\\
t_{(n-1)/2}+1 \bmod 2 & \mbox{if $n$ is odd},
\end{array}\right.
\quad n=1,2,\ldots
\end{equation}
with initial value $t_0=0$.
\end{example}
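The equivalence of the recurrence \eqref{tmdef} with the digit-sum description can be checked numerically; the following sketch (plain Python, written for this survey) computes $t_n$ both ways.

```python
# Thue-Morse: t_n = sum of binary digits of n mod 2, equivalently
# t_0 = 0, t_{2n} = t_n, t_{2n+1} = t_n + 1 (mod 2).
M = 1 << 12
t = [0] * M
for n in range(1, M):
    t[n] = t[n // 2] if n % 2 == 0 else (t[(n - 1) // 2] + 1) % 2

assert all(t[n] == bin(n).count("1") % 2 for n in range(M))
assert "".join(map(str, t[:12])) == "011010011001"  # as displayed above
```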
\begin{example}[Rudin-Shapiro sequence]
The \emph{Rudin-Shapiro sequence} $(r_n)$ is a $2$-automatic sequence generated by the Rudin-Shapiro automaton, see Figure~\ref{fig:RS}. The sequence begins with
$$
000100100001 \dots,
$$
see also Figure~\ref{tmfig} for a picture of the first $4096$ sequence elements.
It follows from the defining automaton, see Figure~\ref{fig:RS}, that $(r_n)$
satisfies the following recurrence relation
\begin{equation}\label{rsdef}
r_n=
\left\{
\begin{array}{cl}
r_{\lfloor n/2\rfloor}+1 \bmod 2 & \mbox{if $n\equiv 3 \bmod 4$},\\
r_{\lfloor n/2\rfloor} & \mbox{otherwise},
\end{array}\right.
\quad n=1,2,\ldots
\end{equation}
with initial value $r_0=0$.
\end{example}
The sequence $((-1)^{r_n})$ over $\{-1,+1\}$ is also called the Rudin-Shapiro sequence in the literature. Here we study only the sequence $(r_n)$ over $\{0,1\}$.
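The recurrence \eqref{rsdef} can likewise be cross-checked against the description of $r_n$ as the parity of the number of (overlapping) occurrences of the pattern $11$ in the binary expansion of $n$; the sketch below is plain Python written for this survey.

```python
# Rudin-Shapiro: r_n = (number of occurrences of 11 in binary n) mod 2,
# equivalently r_n = r_{floor(n/2)} + [n = 3 mod 4] (mod 2), r_0 = 0.
M = 1 << 12
r = [0] * M
for n in range(1, M):
    r[n] = (r[n // 2] + (1 if n % 4 == 3 else 0)) % 2

def r_direct(n):  # parity of the number of pairs of adjacent ones
    b = bin(n)[2:]
    return sum(b[i] == b[i + 1] == "1" for i in range(len(b) - 1)) % 2

assert all(r[n] == r_direct(n) for n in range(M))
assert "".join(map(str, r[:12])) == "000100100001"  # as displayed above
```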
\begin{example}[Pattern sequences]
For a pattern
$P\in \Delta^\ell\setminus\{(0,\dots,0)\}$ of length $\ell$
over $\Delta=\{0,1,\dots, k-1\}$
define the sequence $(p_n)$ by
\begin{equation*}
p_n= e_P(n) \bmod k, \quad 0\leq p_n<k, \quad n=0,1,\dots,
\end{equation*}
where $e_P(n)$ is the number of occurrences of $P$ in the $k$-ary expansion of $n$.
The sequence~$(p_n)$ over~$\Delta$ satisfies the following recurrence relation
\begin{equation}\label{eq:recurrence}
p_n=
\left\{
\begin{array}{cl}
p_{\lfloor n/k\rfloor}+1 \bmod k & \text{if } n\equiv a \bmod k^\ell,\\
p_{\lfloor n/k\rfloor} & \text{otherwise,}
\end{array}
\right.
n=1,2,\dots
\end{equation}
with initial value $p_0=0$, where $a=a(P)$ is the integer $0< a <k^\ell$ such that its $k$-ary expansion corresponds to the pattern $P$.
Classical examples for binary pattern sequences are the Thue-Morse sequence with
$$k=2,\quad \ell=1, \quad P=1 \quad \mbox{and}\quad a=1,$$ and the Rudin-Shapiro sequence with
$$k=2,\quad \ell=2,\quad P=11\quad \mbox{and}\quad a=3.$$
In particular, if $n_0,n_1,\dots$ are the bits of the non-negative integer $n$ in \eqref{eq:k-ary} with $k=2$,
then
\begin{equation}\label{sumofdigitsdef}
t_n=\sum_{i=0}^\infty n_i \bmod 2 \quad \mbox{and}\quad r_n=\sum_{i=0}^\infty n_in_{i+1} \bmod 2.
\end{equation}
\end{example}
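The recurrence \eqref{eq:recurrence} simply peels off the last digit of $n$ and tests whether the final $\ell$ digits form the pattern $P$. The sketch below (plain Python, written for this survey; the patterns chosen have no leading zero digit) verifies this against direct pattern counting for the Thue-Morse and Rudin-Shapiro parameters and for one ternary pattern.

```python
def digits_msb(n, k):  # k-ary digits of n, most significant first
    d = []
    while n:
        d.append(n % k)
        n //= k
    return d[::-1]

def pattern_seq(k, P, M):  # p_n via the recurrence (eq:recurrence)
    a = sum(c * k**i for i, c in enumerate(reversed(P)))  # expansion of a is P
    ell = len(P)
    p = [0] * M
    for n in range(1, M):
        p[n] = (p[n // k] + (1 if n % k**ell == a else 0)) % k
    return p

def count_occurrences(n, k, P):  # e_P(n), counting overlapping occurrences
    d = digits_msb(n, k)
    return sum(d[i:i + len(P)] == P for i in range(len(d) - len(P) + 1))

for k, P in [(2, [1]), (2, [1, 1]), (3, [1, 2])]:  # Thue-Morse, Rudin-Shapiro, ternary
    p = pattern_seq(k, P, 2000)
    assert all(p[n] == count_occurrences(n, k, P) % k for n in range(2000))
```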
\begin{example}[Rudin-Shapiro-like sequence]
Lafrance, Rampersad and Yee \cite{laraye} introduced a {\em Rudin-Shapiro-like sequence}
$(\ell_n)$ which is based on the number of occurrences of the pattern $10$ as a scattered subsequence in the binary representation, \eqref{eq:k-ary} with $k=2$, of
$n$. That is,~$\ell_n$ is
the parity of the number of pairs $(i,j)$ with $i>j$ and $(n_i,n_j)=(1,0)$. See Figure~\ref{fig:RS-like} for its defining automaton.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[auto,thick]
\node (E) at (-2,0) [circle] {};
\node (A) at (0,0) [circle, draw] {$A/0$};
\node (B) at (3,0) [circle, draw] {$B/0$};
\node (C) at (6,0) [circle, draw] {$C/1$};
\node (D) at (9,0) [circle, draw] {$D/1$};
\draw [->,bend left] (A) to node {1} (B);
\draw [->,bend left] (B) to node {1} (A);
\draw [->,bend left] (B) to node {0} (C);
\draw [->,bend left] (C) to node {0} (B);
\draw [->,bend left] (C) to node {1} (D);
\draw [->,bend left] (D) to node {1} (C);
\path (D) edge [loop above] node {0} (D);
\path (A) edge [loop above] node {0} (A);
\draw [->] (E) to node
[midway,above,align=center ] {\texttt{start}}
(A);
\end{tikzpicture}
\end{center}
\caption{Rudin-Shapiro-like automaton with direct reading} \label{fig:RS-like}
\end{figure}
This sequence can also be defined by
\begin{equation}\label{eq:rslike}\ell_{2n+1}=\ell_n \quad \mbox{and}\quad \ell_{2n}=\ell_n+t_n\bmod 2,
\end{equation}
see \cite[$(1)$ and $(2)$]{laraye},
with initial value $\ell_0=0$ and where $(t_n)$ is the Thue-Morse sequence.
\end{example}
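The scattered-subsequence description and the recurrence \eqref{eq:rslike} can be checked against each other; the sketch below is plain Python written for this survey.

```python
# Rudin-Shapiro-like: l_n = parity of the number of index pairs i > j with
# (n_i, n_j) = (1, 0), i.e. "10" as a scattered subsequence of the binary
# expansion; cross-checked against l_{2n+1} = l_n, l_{2n} = l_n + t_n (mod 2).
def l_direct(n):
    b = bin(n)[2:]           # most significant bit first
    ones = parity = 0
    for c in b:
        if c == "1":
            ones += 1
        else:
            parity ^= ones & 1   # each 1 seen so far pairs with this 0
    return parity

M = 1 << 12
t = [bin(n).count("1") % 2 for n in range(M)]  # Thue-Morse
l = [0] * M
for n in range(1, M):
    l[n] = l[n // 2] if n % 2 == 1 else (l[n // 2] + t[n // 2]) % 2

assert all(l[n] == l_direct(n) for n in range(M))
```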
\begin{example}[Baum-Sweet sequence]
The \emph{Baum-Sweet sequence} $(b_n)$ is a $2$-automatic sequence defined by the rule $b_0=1$ and for $n\ge 1$
$$
b_n=
\left\{
\begin{array}{cl}
1& \text{if the binary representation of $n$ contains no block of} \\
& \text{consecutive $0$'s of odd length,}\\
0& \text{otherwise.}
\end{array}
\right.
$$
Equivalently, we have for $n\geq 1$ of the form $n=4^\ell m$ with $4\nmid m$ that
\begin{equation}\label{bsdef}
b_n=
\left\{
\begin{array}{cl}
0& \text{if $m$ is even}, \\
b_{(m-1)/2}& \text{if $m$ is odd.}
\end{array}
\right.
\end{equation}
The sequence
$(b_n)$ is generated by the Baum-Sweet automaton in Figure~\ref{fig:BS}.
\end{example}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[auto,thick]
\node (E) at (-2,0) [circle] {};
\node (A) at (0,0) [circle, draw] {$A/1$};
\node (B) at (3,0) [circle, draw] {$B/1$};
\node (C) at (6,0) [circle, draw] {$C/0$};
\path (A) edge [loop above] node {1} (A);
\draw [->] (E) to node
[midway,above,align=center ] {\texttt{start}}
(A);
\draw [->,bend left] (A) to node {0} (B);
\draw [->,bend left] (B) to node {0} (A);
\draw [->,left] (B) to node [midway,above,align=center ] {1} (C);
\path (C) edge [loop above] node {0,1} (C);
\end{tikzpicture}
\end{center}
\caption{Baum-Sweet automaton} \label{fig:BS}
\end{figure}
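The block description of the Baum-Sweet sequence and the recurrence \eqref{bsdef} can be checked against each other; the sketch below is plain Python written for this survey.

```python
# Baum-Sweet: b_n = 1 iff the binary expansion of n contains no (maximal)
# block of zeros of odd length; cross-checked against the recurrence:
# for n = 4^l m with 4 not dividing m, b_n = 0 if m is even, b_{(m-1)/2} if m is odd.
import re

def b_direct(n):
    if n == 0:
        return 1
    blocks = re.findall("0+", bin(n)[2:])   # maximal blocks of zeros
    return 1 if all(len(z) % 2 == 0 for z in blocks) else 0

M = 4096
b = [1] + [0] * (M - 1)
for n in range(1, M):
    m = n
    while m % 4 == 0:        # strip factors of 4
        m //= 4
    b[n] = 0 if m % 2 == 0 else b[(m - 1) // 2]

assert all(b[n] == b_direct(n) for n in range(M))
```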
\begin{example}[Characteristic sequence of sums of three squares]
Consider the {\em characteristic sequence $(c_n)$ of the set of integers which are sums of three squares of integers},
that is,
$$
c_n=
\left\{
\begin{array}{cl}
1& \text{if } n=a^2+b^2+c^2 \text{ for some non-negative integers $a,b,c$,} \\
0& \text{otherwise.}
\end{array}
\right.
$$
By Legendre's three-square theorem, we have the equivalent definition
\begin{equation}\label{eq:cndef}
c_n=
\left\{
\begin{array}{cl}
1& \text{if $n$ is not of the form $n=4^\ell(8k+7)$,} \\
0& \text{otherwise.}
\end{array}
\right.
\end{equation}
See Figure~\ref{fig:3squares} for the defining automaton.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[auto,thick]
\node (AA) at (-2,0) [circle] {};
\node (A) at (0,0) [circle, draw] {$A/1$};
\node (B) at (3,0) [circle, draw] {$B/1$};
\node (C) at (0,-3) [circle, draw] {$C/1$};
\node (D) at (3,-3) [circle, draw] {$D/1$};
\node (E) at (6,-1.5) [circle, draw] {$E/1$};
\node (F) at (9,-1.5) [circle, draw] {$F/0$};
\draw [->] (A) to node {1} (B);
\draw [->,bend left] (A) to node {0} (C);
\draw [->] (B) to node {1} (E);
\draw [->] (B) to node {0} (D);
\draw [->] (C) to node {1} (D);
\draw [->,bend left] (C) to node {0} (A);
\draw [->] (E) to node {1} (F);
\draw [->] (E) to node {0} (D);
\path (D) edge [loop below] node {0,1} (D);
\path (F) edge [loop below] node {0,1} (F);
\draw [->] (AA) to node
[midway,above,align=center ] {\texttt{start}}
(A);
\end{tikzpicture}
\end{center}
\caption{Automaton of the characteristic sequence of sums of three squares with reverse reading} \label{fig:3squares}
\end{figure}
\end{example}
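Legendre's criterion \eqref{eq:cndef} can be checked against a brute-force search for representations $n=a^2+b^2+c^2$; the sketch below is plain Python written for this survey.

```python
# Characteristic sequence of sums of three squares: compare brute force
# with Legendre's criterion "n is not of the form 4^l (8k + 7)".
from math import isqrt

def is_sum_of_three_squares(n):
    for a in range(isqrt(n) + 1):
        for b in range(a, isqrt(n - a * a) + 1):
            c2 = n - a * a - b * b
            s = isqrt(c2)
            if s * s == c2:
                return 1
    return 0

def legendre(n):
    if n == 0:
        return 1           # 0 = 0^2 + 0^2 + 0^2
    while n % 4 == 0:      # strip factors of 4
        n //= 4
    return 0 if n % 8 == 7 else 1

assert all(is_sum_of_three_squares(n) == legendre(n) for n in range(2000))
```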
\begin{example}[Regular paper-folding sequence]
The {\em regular paper-folding sequence} $(v_n)$
with initial value $v_0\in \{0,1\}$ is defined as follows. If $n=2^km$ with an odd $m$, then
\begin{equation}\label{pfdef}
v_n=\left\{\begin{array}{ll}1,& m\equiv 1\bmod 4,\\ 0, &m\equiv 3\bmod 4,\end{array}\right.\quad n=1,2,\ldots
\end{equation}
Its defining automaton with four states is given in Figure~\ref{fig:RPF}.
\end{example}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[auto,thick]
\node (E) at (-2,0) [circle] {};
\node (A) at (0,0) [circle, draw] {$A/v_0$};
\node (B) at (3,0) [circle, draw] {$B/1$};
\node (C) at (6,1.5) [circle, draw] {$C/0$};
\node (D) at (6,-1.5) [circle, draw] {$D/1$};
\path (A) edge [loop above] node {0} (A);
\draw [->] (E) to node
[midway,above,align=center ] {\texttt{start}}
(A);
\draw [->] (A) to node {1} (B);
\draw [->] (B) to node [midway,above,align=center ] {1} (C);
\draw [->] (B) to node [midway,above,align=center ] {0} (D);
\path (C) edge [loop above] node {0,1} (C);
\path (D) edge [loop above] node {0,1} (D);
\end{tikzpicture}
\end{center}
\caption{Regular paper-folding automaton} \label{fig:RPF}
\end{figure}
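The closed form \eqref{pfdef} can be checked against a direct simulation of the automaton of Figure~\ref{fig:RPF} with reverse reading; the transition table below is read off the figure (plain Python, written for this survey).

```python
# Regular paper-folding sequence: n = 2^k m with m odd, v_n = 1 iff m = 1 mod 4,
# versus a simulation of the four-state automaton with reverse reading.
def v_formula(n):
    while n % 2 == 0:
        n //= 2
    return 1 if n % 4 == 1 else 0

def v_automaton(n, v0=0):
    delta = {"A": {0: "A", 1: "B"}, "B": {0: "D", 1: "C"},
             "C": {0: "C", 1: "C"}, "D": {0: "D", 1: "D"}}
    out = {"A": v0, "B": 1, "C": 0, "D": 1}
    state = "A"
    while n:
        state = delta[state][n % 2]  # reverse reading: least significant bit first
        n //= 2
    return out[state]

assert all(v_formula(n) == v_automaton(n) for n in range(1, 4096))
```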
\begin{example}[An automatic apwenian sequence]
Any binary sequence $(a_n)$ satisfying $a_0=1$
and
$$
a_{2n+2}=a_{2n+1}+a_n \bmod 2,\quad n=0,1,\ldots
$$
is called {\em apwenian}, see for example \cite{alhani}. Apwenian sequences which are $2$-automatic are characterized in \cite{alhani}. For example, the sequence $(w_n)$ defined by
\begin{equation}\label{apdef}
w_{2n}=1\quad \mbox{and}\quad w_{2n+1}=w_n+1\bmod 2,\quad n=0,1,\ldots
\end{equation}
is apwenian and defined by the automaton in Figure~\ref{fig:apw}.
\end{example}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[auto,thick]
\node (E) at (-2,0) [circle] {};
\node (A) at (0,0) [circle, draw] {$A/1$};
\node (B) at (3,2) [circle, draw] {$B/1$};
\node (C) at (3,-2) [circle, draw] {$C/0$};
\node (D) at (6,-2) [circle, draw] {$D/0$};
\draw [->] (E) to node
[midway,above,align=center ] {\texttt{start}}
(A);
\draw [->] (A) to node [midway,above,align=center ] {0} (B);
\draw [->,bend left] (A) to node [midway,above,align=center ] {1} (C);
\draw [->,bend left] (C) to node [midway,above,align=center ] {1} (A);
\draw [->] (C) to node [midway,above,align=center ] {0} (D);
\path (B) edge [loop above] node {0,1} (B);
\path (D) edge [loop above] node {0,1} (D);
\end{tikzpicture}
\end{center}
\caption{Apwenian automaton} \label{fig:apw}
\end{figure}
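That the sequence \eqref{apdef} is indeed apwenian can be verified directly; the sketch below is plain Python written for this survey.

```python
# w from (apdef): w_{2n} = 1, w_{2n+1} = w_n + 1 (mod 2); check the apwenian
# condition w_0 = 1 and w_{2n+2} = w_{2n+1} + w_n (mod 2).
M = 4096
w = [0] * (2 * M)
for n in range(M):
    w[2 * n] = 1
    w[2 * n + 1] = (w[n] + 1) % 2

assert w[0] == 1
assert all(w[2 * n + 2] == (w[2 * n + 1] + w[n]) % 2 for n in range(M - 1))
```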
In addition to the examples above, all ultimately periodic sequences are
$k$-automatic for all integers $k\geq 2$, see \cite[Theorem~5.4.2]{alsh2}.
Moreover, by Cobham's theorem \cite[Theorem~11.2.1]{alsh2}, if a sequence $(s_n)$ is both $k$-automatic and $\ell$-automatic and $k$ and $\ell$ are multiplicatively
independent,\footnote{Two integers $k$ and $\ell$ are {\em multiplicatively dependent} if $k^r=\ell^s$ for some positive integers $r$ and~$s$. Otherwise they are {\em multiplicatively independent}.}
then $(s_n)$ is ultimately periodic.
For a prime power $k=q$, $k$-automatic sequences $(s_n)$ over the finite field\footnote{For a prime power $q$ we denote the finite field of size $q$ by $\mathbb{F}_q$.} $\Delta=\mathbb{F}_q$ can be characterized
by a result of Christol, see \cite{christol} for prime $q$ and \cite{chka} for prime power $q$ as well as \cite[Theorem~12.2.5]{alsh2}.
\begin{theorem}\label{thm:christol}
Let
\footnote{We denote by $\mathbb{F}_q \llbracket x \rrbracket$ the ring of formal power series over $\mathbb{F}_q$.}
$$
G(x)=\sum_{n=0}^{\infty}s_nx^n \in \mathbb{F}_q \llbracket x \rrbracket
$$
be the {\em generating function} of the sequence $(s_n)$ over $\mathbb{F}_q$.
Then $(s_n)$ is $q$-automatic if and only if $G(x)$ is algebraic over $\mathbb{F}_q(x)$, that is, there is a polynomial
$h(x,y)\in \mathbb{F}_q[x,y]\setminus\{0\}$
such that $h(x,G(x))=0$.
\end{theorem}
Note that for all $m=1,2,\ldots$ a sequence is $k$-automatic if and only if it is $k^m$-automatic by \cite[Theorem~6.6.4]{alsh2} and even a slightly more general version of Christol's result holds:
For a prime $p$ and positive integers $m$ and $r$, $(s_n)$ is $p^m$-automatic over $\mathbb{F}_{p^r}$ if and only if $G(x)$ is algebraic over $\mathbb{F}_{p^r}(x)$.
\begin{example}
The generating function $G(x)$ of the Thue-Morse sequence $(t_n)$ over $\mathbb{F}_2$ satisfies $h(x,G(x))=0$ with
\begin{equation}\label{eq:TH-equation}
h(x,y)=(x+1)^3y^2 + (x+1)^2y+x.
\end{equation}
The generating function $G(x)$ of the Rudin-Shapiro sequence $(r_n)$ over $\mathbb{F}_2$ satisfies $h(x,G(x))=0$ with
\begin{equation}\label{eq:RS-equation}
h(x,y)=(x+1)^{5}y^2 + (x+1)^4 y + x^3.
\end{equation}
In general, for prime $p$ the generating function $G(x)$ of the $p$-ary pattern sequence $(p_n)$ over $\mathbb{F}_p$ with respect to the pattern $P$ of length $\ell$ satisfies $h(x,G(x))=0$ with
\begin{equation}\label{eq:pattern-equation}
h(x,y)=(x-1)^{p^\ell +p -1}y^p - (x-1)^{p^\ell} y - x^{a(P)}.
\end{equation}
The generating function $G(x)$ of the Rudin-Shapiro-like
sequence $(\ell_n)$ over $\mathbb{F}_2$ defined by~\eqref{eq:rslike} satisfies $h(x,G(x))=0$ with
\begin{equation}\label{eq:rslh}
h(x,y)=(x+1)^8y^4+(x^6+x^5+x^2+x)y^2+(x+1)^4y+x^2,
\end{equation}
see \cite[Proof of Theorem 2]{suzeli}.
The generating function $G(x)$ of the Baum-Sweet sequence $(b_n)$ over $\mathbb{F}_2$ satisfies \linebreak[4] $h(x,G(x))=0$ with
\begin{equation}\label{eq:BS-equation}
h(x,y)=y^3+xy+1.
\end{equation}
The generating function $G(x)$ of the characteristic sequence $(c_n)$ of sums of three
squares~\eqref{eq:cndef} over $\mathbb{F}_2$ satisfies $h(x,G(x))=0$ with
\begin{equation}\label{eq:cnh}
h(x,y)=(x+1)^8(y+y^4)+x^6+x^5+x^3+x^2+x,
\end{equation}
see \cite[Equation (7)]{howi}.
The generating function $G(x)$ of the regular paper-folding sequence $(v_n)$ over $\mathbb{F}_2$ satisfies $h(x,G(x))=0$ with
\begin{equation}\label{pfh}
h(x,y)=(x+1)^4(y^2+y)+x.
\end{equation}
The generating function $G(x)$ of the apwenian sequence $(w_n)$ over $\mathbb{F}_2$ defined by \eqref{apdef}
satisfies
\begin{equation}\label{aph} h(x,y)=(x+1)(xy^2+y)+1.
\end{equation}
\end{example}
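The algebraic equations \eqref{eq:TH-equation} and \eqref{eq:RS-equation} can be tested numerically by truncating the generating functions. The sketch below (plain Python, written for this survey; polynomials over $\mathbb{F}_2$ are encoded as integer bit masks with carry-less multiplication) verifies $h(x,G(x))\equiv 0 \pmod{x^N}$ in both cases.

```python
# Polynomials over F_2 encoded as Python ints: bit i is the coefficient of x^i.
def clmul(a, b):  # carry-less ("XOR") polynomial multiplication over F_2
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

N = 256
mask = (1 << N) - 1
tm_bit = lambda n: bin(n).count("1") % 2             # Thue-Morse t_n
rs_bit = lambda n: bin(n & (n >> 1)).count("1") % 2  # Rudin-Shapiro r_n
G_tm = sum(tm_bit(n) << n for n in range(N))         # G(x) truncated mod x^N
G_rs = sum(rs_bit(n) << n for n in range(N))

# (x+1)^3 y^2 + (x+1)^2 y + x with y = G_tm  (eq:TH-equation):
h_tm = clmul(0b1111, clmul(G_tm, G_tm)) ^ clmul(0b101, G_tm) ^ 0b10
# (x+1)^5 y^2 + (x+1)^4 y + x^3 with y = G_rs  (eq:RS-equation):
h_rs = clmul(0b110011, clmul(G_rs, G_rs)) ^ clmul(0b10001, G_rs) ^ 0b1000
assert h_tm & mask == 0 and h_rs & mask == 0  # both vanish mod x^N
```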
\section{Linear complexity}\label{sec:linear_complexity}
The linear complexity is a figure of merit for pseudorandom sequences
introduced to capture undesirable linear structure in a sequence. It originates in cryptography and provides a standard test of randomness used to filter out sequences with non-random properties; it is implemented in many test suites such as the NIST suite and TestU01~\cite{nist,testU01}.
\begin{definition}
The \emph{$N$th linear complexity} $L(s_n, N)$ of a sequence $(s_n)$ over $\mathbb{F}_q$ is the length~$L$ of a shortest linear recurrence relation satisfied by the first $N$ elements of $(s_n)$,
$$
s_{n+L}=c_{L-1}s_{n+L-1}+\dots +c_1s_{n+1}+c_0s_n, \quad 0\leq n\leq N-L-1,
$$
for some $c_0,\ldots,c_{L-1}\in \mathbb{F}_q$.
We use the convention that $L(s_n,N)=0$ if the first $N$ elements of $(s_n)$ are all zero and $L(s_n,N)=N$ if $s_0=\dots=s_{N-2}=0\ne s_{N-1}$.
The sequence~$(L(s_n,N))_{N=1}^\infty$
is called {\em linear complexity profile} of $(s_n)$ and
$$
L(s_n)=\sup_{N\ge 1} L(s_n,N)
$$
is the {\em linear complexity} of $(s_n)$.
\end{definition}
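The $N$th linear complexity can be computed with the Berlekamp-Massey algorithm. The sketch below (a textbook $\mathbb{F}_2$ implementation in plain Python, not taken from the cited sources) computes the profile of the Thue-Morse sequence and checks it against the bounds \eqref{eq:lin-compl_TM} derived later in this section.

```python
# N-th linear complexity over F_2 via the Berlekamp-Massey algorithm.
def linear_complexity(s):
    n = len(s)
    c, b = [0] * n, [0] * n   # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]               # discrepancy of the next sequence element
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# Thue-Morse prefixes: ceil((N-1)/2) <= L(t_n, N) <= floor(N/2) + 1.
t = [bin(n).count("1") % 2 for n in range(64)]
for N in range(1, 65):
    assert N // 2 <= linear_complexity(t[:N]) <= N // 2 + 1
```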
Clearly, $0\leq L(s_n,N) \leq N$ and $L(s_n,N)\leq L(s_n,N+1)$.
For truly random sequences $(s_n)$ the expected value of its $N$th linear complexity $L$ is
$$\frac{N}{2}+O(1),
$$
see for example \cite[Theorem~10.4.42]{handbookFF}.
Deviations of order of magnitude $\log N$ must appear for infinitely many $N$.
More precisely, for a prime power $q$ consider the following probability measure of sequences over $\mathbb{F}_q$ determined by
\begin{equation}\label{eq:prob_space}
\mathbb{P}\left[(s_n)\in \mathbb{F}_q^{\infty}: (s_0,\dots, s_{\ell-1})= (c_0,\dots, c_{\ell-1}) \right]=q^{-\ell}, \quad c_0,\dots, c_{\ell-1}\in \mathbb{F}_q.
\end{equation}
Then we have the following result on the deviation from the expected value, see~\cite[Theorem~10]{Niederreiter88}.
\begin{theorem}
We have
$$
\limsup_{N\rightarrow \infty }\frac{L(s_n, N) -N/2}{\log N}=\frac{1}{2 \log q},
$$
and
$$
\liminf_{N\rightarrow \infty }\frac{L(s_n, N) -N/2}{\log N}=\frac{-1}{2 \log q}
$$
with probability one with respect to the probability measure \eqref{eq:prob_space}.
\end{theorem}
It is well-known \cite[Lemma~1]{Niederreiter88-b} that $L(s_n)<\infty$ if and only if $(s_n)$ is ultimately periodic, that is,
its generating function is rational: $G(x)=g(x)/f(x)$ with polynomials $g(x),f(x)\in \mathbb{F}_q[x]$.
The $N$th linear complexity is a measure for the unpredictability of a sequence.
A large $N$th linear complexity, up to sufficiently large $N$,
is necessary, but not sufficient, for cryptographic applications. Sequences of small linear complexity are also weak in view of Monte-Carlo methods, see \cite{do,domewi,dowi,DorferWinterhof}.
For more background on linear complexity and related measures of pseudorandomness we refer to
\cite[Section~10.4]{handbookFF}
and \cite{Niederreiter2003,towi,wi1}.
M\'erai and Winterhof \cite{mewi18} showed that automatic sequences which are not ultimately periodic possess large $N$th linear complexity.
\begin{theorem}\label{thm:MW-lin-compl-gen}
Let $q$ be a prime power and $(s_n)$ be a $q$-automatic sequence over $\mathbb{F}_q$ which is not ultimately periodic.
Let $h(x,y)=h_0(x)+h_1(x)y+\dots + h_d(x)y^d\in\mathbb{F}_q[x,y]$ be a non-zero polynomial with $h(x,G(x))=0$ which has no rational zero.
Put $$
M=\max_{0\leq i\leq d}\{\deg h_i-i\}.
$$
Then we have
\[
\frac{\displaystyle N-M}{d}\leq L(s_n,N)\leq \frac{\displaystyle (d-1)N+M+1}{d}.
\]
\end{theorem}
See also \cite{XingLam} for the special case $d=2$.
The idea of the proof of Theorem \ref{thm:MW-lin-compl-gen} is that a small $N$th linear complexity gives a good rational approximation to the generating function. However, elements transcendental over $\mathbb{F}_q(x)$ satisfying such an algebraic equation cannot be approximated too well by rational functions.
Namely, since $(s_n)$ is not ultimately periodic, $G(x)=\sum_{n=0}^\infty s_nx^n\not\in \mathbb{F}_q(x)$ is not rational by \cite[Lemma~1]{Niederreiter88-b}.
Let $g(x)/f(x)\in\mathbb{F}_q(x)$ be a rational zero of $h(x,y)$ modulo $x^N$ with $\deg(f)\le L(s_n,N)$ and $\deg(g)<L(s_n,N)$.
More precisely, put $L=L(s_n,N)$. Then we have
$$
\sum_{\ell=0}^Lc_{\ell}s_{n+\ell}=0 \quad \mbox{for }0\le n\le N-L-1
$$
for some $c_0,\ldots,c_{L}\in \mathbb{F}_q$ with $c_L=-1$.
Take
\[
f(x)=\sum_{\ell=0}^L c_{\ell}x^{L-\ell}
\]
and
\[
g(x)=\sum_{m=0}^{L-1}\left(\sum_{\ell=L-m}^Lc_\ell s_{m+\ell-L}\right)x^m
\]
and verify
$$
f(x)G(x)\equiv g(x)\bmod x^N.
$$
Then
\[
h_0(x)f(x)^d+h_1(x)g(x)f(x)^{d-1}+\dots + h_d(x)g(x)^d=K(x)\, x^N.
\]
Here $K(x)\neq 0$ since $h(x,y)$ has no rational zero. Comparing the degrees of both sides we get
\[
dL+M\geq N
\]
which gives the lower bound.
The upper bound for $N=1$ is trivial. For $N\geq 2$ the result follows from the well-known bound, see for example \cite[Lemma 3]{DorferWinterhof},
\[
L(s_n,N)\leq \max\left\{L(s_n,N-1), N-L(s_n,N-1) \right\}
\]
by induction.
The bound in Theorem \ref{thm:MW-lin-compl-gen} combined with \eqref{eq:TH-equation}-\eqref{aph}
gives the following estimates for the $N$th linear complexity of the Thue-Morse sequence $(t_n)$ defined by~\eqref{tmdef}
\begin{equation}\label{eq:lin-compl_TM}
\left\lceil \frac{N-1}{2} \right\rceil\leq L(t_n,N)\leq \left\lfloor \frac{N}{2} \right\rfloor+1,
\end{equation}
of the Rudin-Shapiro sequence $(r_n)$ defined by \eqref{rsdef} and the regular paper-folding sequence $(v_n)$ defined by \eqref{pfdef}
\begin{equation}\label{eq:lin-compl_RS}
\left\lceil\frac{N-3}{2}\right\rceil\leq L(r_n,N), L(v_n,N)\leq \left\lfloor\frac{N}{2}\right\rfloor+2,
\end{equation}
of the $p$-ary pattern sequence $(p_n)$ defined by \eqref{eq:recurrence} with any pattern $P$ of length $\ell$
$$
\left\lceil\frac{N+1}{p}\right\rceil-p^{\ell-1}\leq L(p_n,N)\leq \left\lfloor\frac{(p-1)N}{p}\right\rfloor+p^{\ell-1},
$$
of the Rudin-Shapiro-like sequence $(\ell_n)$ defined by \eqref{eq:rslike}
$$
\left\lceil \frac{N}{4}\right\rceil-1\le L(\ell_n,N)\le \left\lfloor\frac{3N+5}{4}\right\rfloor,
$$
of the Baum-Sweet sequence $(b_n)$ defined by \eqref{bsdef}
$$
\left\lceil\frac{N}{3}\right\rceil\leq L(b_n,N)\leq \left\lfloor\frac{2N+1}{3}\right\rfloor,
$$
of the characteristic sequence $(c_n)$ of sums of three squares defined by \eqref{eq:cndef}
$$
\left\lceil\frac{N-7}{4}\right\rceil\le L(c_n,N)\le \left\lfloor\frac{3N}{4}\right\rfloor+2
$$
and of the apwenian sequence $(w_n)$ defined by \eqref{apdef}
\begin{equation}\label{aplin} L(w_n,N)=\left\lfloor \frac{N+1}{2}\right\rfloor.
\end{equation}
Note that the bound \eqref{eq:lin-compl_TM} is also true for the dual $(t'_n)$ of the Thue-Morse sequence, that is, $t'_n=1-t_n$, and apwenian sequences are characterized by the property \eqref{aplin}, see \cite{alhani}. Note that not all apwenian sequences are automatic.
The bounds \eqref{eq:lin-compl_TM} for the Thue-Morse sequence and \eqref{eq:lin-compl_RS}
for the Rudin-Shapiro sequence are optimal.
Using the continued fraction expansions of their generating functions, M\'erai and Winterhof
\cite{mewi18} determined the exact $N$th linear complexity profiles of the Thue-Morse and Rudin-Shapiro sequences.
\begin{theorem}
The $N$th linear complexity of the Thue-Morse sequence is
$$
L(t_n,N)=2\left \lfloor \frac{N+2}{4}\right\rfloor, \quad N=1,2,\dots
$$
and the $N$th linear complexity of the Rudin-Shapiro sequence is
$$L(r_n,N)=\left\{\begin{array}{cc} 6\left\lfloor N/12\right\rfloor+4, & N\equiv 4,5,6,7,8,9\bmod 12,\\
6\left\lfloor (N+2)/12\right\rfloor, & \mbox{otherwise}.
\end{array}\right.$$
\end{theorem}
The result can be extended to binary pattern sequences $(p_n)$ defined by \eqref{eq:recurrence} with the all one pattern of length $\ell\ge 3$, that is, $a=2^\ell-1$.
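The exact Thue-Morse profile above can be checked numerically. The following sketch (Python for concreteness; illustrative only, not taken from the cited papers) computes the $N$th linear complexity of a binary sequence with the Berlekamp--Massey algorithm over $\mathbb{F}_2$ and compares it with the closed formula $L(t_n,N)=2\lfloor (N+2)/4\rfloor$.

```python
def berlekamp_massey_gf2(s):
    """N-th linear complexity of the binary list s over GF(2)."""
    C, B = [1], [1]   # current and previous connection polynomials
    L, m = 0, 1       # current complexity, steps since last length change
    for n in range(len(s)):
        # discrepancy between s_n and the LFSR prediction
        d = s[n]
        for i in range(1, L + 1):
            if i < len(C):
                d ^= C[i] & s[n - i]
        if d:
            T = C[:]
            if len(C) < len(B) + m:
                C += [0] * (len(B) + m - len(C))
            for i, b in enumerate(B):
                C[i + m] ^= b          # C(x) <- C(x) + x^m B(x)
            if 2 * L <= n:
                L, B, m = n + 1 - L, T, 1
            else:
                m += 1
        else:
            m += 1
    return L

thue_morse = [bin(n).count("1") & 1 for n in range(64)]
for N in range(1, 65):
    assert berlekamp_massey_gf2(thue_morse[:N]) == 2 * ((N + 2) // 4)
```

The same routine can be used to plot the linear complexity profiles of the other sequences discussed in this section.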
It follows from Theorem~\ref{thm:MW-lin-compl-gen} that if an automatic sequence is not ultimately periodic and its generating function has a quadratic minimal polynomial, that is, $d=2$ in Theorem~\ref{thm:MW-lin-compl-gen}, then the deviation of the $N$th linear complexity from its expected value $N/2$ is bounded by $(M+1)/2$:
$$
\left|L(s_n,N)-\frac{N}{2}\right|\leq \frac{M+1}{2}.
$$
Such sequences
are said to have \emph{almost perfect} or \emph{$(M+1)$-perfect} linear complexity profile,
see \cite{Niederreiter88-b,alhani}.
Apwenian sequences are those sequences having $1$-perfect or just {\em perfect} linear complexity profile. The bounds \eqref{eq:lin-compl_TM} and
\eqref{eq:lin-compl_RS} imply that the Thue-Morse sequence has $2$-perfect linear complexity profile and the Rudin-Shapiro sequence and the paper-folding sequence both have $4$-perfect linear complexity profile.
Although automatic sequences have some good pseudorandom properties including a desirable linear complexity profile, these sequences have also some strong non-randomness properties, see Sections~\ref{sec:correlation}, \ref{sec:expansion_complexity} and \ref{sec:normality} below.
Such randomness flaws may be avoided by considering subsequences of automatic sequences. For example, the Thue-Morse and Rudin-Shapiro sequences along squares are not automatic, see Section~\ref{sec:normality} below, and seem to have $N$th linear complexity $N/2+O(\log N)$, see Figure~\ref{fig:L}.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.51]{TM_along_squares_L.png}
\includegraphics[scale=.51]{RS_along_squares_L.png}
\end{center}
\caption{The $N$th linear complexity of the Thue-Morse (left) and
Rudin-Shapiro (right) sequence along squares.}
\label{fig:L}
\end{figure}
\begin{problem}
Prove that the $N$th linear complexities of the Thue-Morse and Rudin-Shapiro sequences along squares satisfy
\footnote{$f(k)=o(g(k))$ is equivalent to $f(k)/ g(k)\rightarrow 0$ as $k\rightarrow \infty$.}
$$
L(t_{n^2},N)=\frac{N}{2}+o(N)\quad \mbox{and}\quad L(r_{n^2},N)=\frac{N}{2}+o(N).
$$
\end{problem}
We remark that lower bounds on the $N$th linear complexities of $(t_{n^2})$ and $(r_{n^2})$ of order of magnitude $\sqrt{N}$ follow from Theorem~\ref{thm:suwi} and \eqref{eq:ML} in the next section.
The same problem is also open for other subsequences, for example along higher-degree polynomial values or along primes.
\section{Maximum order complexity}\label{sec:max-order_complexity}
Maximum order (or nonlinear) complexity is a refinement of the linear complexity, allowing not only linear but \emph{arbitrary} recurrence relations.
\begin{definition}
The {\em $N$th maximum order complexity} $M(s_n,N)$ is the smallest positive integer~$M$ with
$$
s_{n+M}=f(s_{n+M-1},\ldots,s_n),\quad 0\le n\le N-M-1,
$$
for some mapping $f:\mathbb{F}_2^M \rightarrow \mathbb{F}_2$.
The sequence $(M(s_n,N))_{N=1}^\infty$ is called {\em maximum order complexity profile}.
\end{definition}
Obviously, we have
\begin{equation}\label{eq:ML}
M(s_n,N)\le L(s_n,N)
\end{equation}
and the maximum order complexity is a finer measure for the unpredictability of a sequence than the linear complexity.
However, often the linear complexity is easier to analyze both theoretically and algorithmically.
Clearly, a sufficiently large maximum order complexity is needed for unpredictability and suitability in cryptography.
However, sequences of very large maximum order complexity also have a very large autocorrelation or correlation measure of order $2$, see \eqref{C2M} below, and are not suitable for many applications including cryptography, radar, sonar and wireless communications.
The maximum order complexity was introduced by Jansen in \cite[Chapter 3]{ja}, see also~\cite{jabo}.
The typical value for the $N$th maximum order complexity is of order of magnitude $\log N$, see
\cite{ja,jabo}. An algorithm for calculating the maximum order complexity profile in linear time and memory was presented by
Jansen \cite{ja,jabo} using the graph algorithm introduced by Blumer et al.\ \cite{bl}.
The maximum order complexity of the Thue-Morse sequence was determined in \cite[Theorem 1]{suwi19}.
\begin{theorem}\label{thm:suwi-tm}
For $N\ge 4$,
the $N$th maximum order complexity of the Thue-Morse sequence~$(t_n)$
satisfies
$$M(t_n,N)=2^\ell+1,$$
where
$$\ell=\left\lceil \frac{\log (N/5)}{\log 2}\right\rceil.$$
\end{theorem}
It is easy to see that
\begin{equation}\label{tmmoc}\frac{N}{5}+1\le M(t_n,N)\le \frac{2(N-1)}{5}+1
\quad\text{for}\;\; N\ge 4.
\end{equation}
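For moderate $N$ the maximum order complexity can be computed directly from the definition: $M(s_n,N)$ is the smallest $M$ such that no length-$M$ window among the first $N$ terms occurs twice with different successors. The brute-force sketch below (Python, illustrative only) checks the exact value of Theorem~\ref{thm:suwi-tm} for the Thue-Morse sequence.

```python
def max_order_complexity(s, N):
    """Smallest M such that every length-M window of s_0..s_{N-1}
    determines its successor; brute force over M."""
    for M in range(1, N):
        succ = {}
        if all(succ.setdefault(tuple(s[n:n + M]), s[n + M]) == s[n + M]
               for n in range(N - M)):
            return M
    return max(N - 1, 1)   # M = N-1 always satisfiable for N >= 2

t = [bin(n).count("1") & 1 for n in range(64)]
for N in range(4, 65):
    ell = 0
    while 5 * 2 ** ell < N:   # ell = ceil(log2(N/5)), computed exactly
        ell += 1
    assert max_order_complexity(t, N) == 2 ** ell + 1
```

The dictionary-based check runs in time $O(N^2)$ overall, which suffices for such small ranges; Jansen's suffix-automaton approach mentioned above achieves linear time.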
In Section~\ref{sec:correlation} we will see that such a large maximum order complexity points to undesirable structure in a sequence.
The $N$th maximum order complexity of the Rudin-Shapiro sequence and some generalizations is also of order of magnitude $N$, see \cite[Theorem 2]{suwi19}. In particular we have
\begin{equation}\label{rsmoc}
M(r_n,N)\ge \frac{N}{6}+1,\quad N\ge 4.
\end{equation}
The maximum order complexities of the subsequences of the Thue-Morse and the Rudin-Shapiro sequence along squares are still fairly large, see \cite{suwi}.
\begin{theorem}\label{thm:suwi}
The $N$th maximum order complexities $M(t_{n^2},N)$ and $M(r_{n^2},N)$
of the subsequences $(t_{n^2})$ and $(r_{n^2})$ of the Thue-Morse and the Rudin-Shapiro sequence along squares satisfy
\begin{align*}
&M(t_{n^2},N)\geq \sqrt{\frac{2N}{5}},\quad N\ge 21, \quad \mbox{and} \\
&M(r_{n^2},N)\geq \sqrt{\frac{N}{8}},\quad N\ge 64.
\end{align*}
\end{theorem}
We sketch the proof.
First, let $t$ be the length of the longest subsequence of~$(t_{n^2})$ that
occurs at least twice with different successors among the first $N$ sequence elements. Then
$M(t_{n^2},N)\ge t + 1$. Hence
the first inequality follows from
$$
t_{(i+2^{\ell+1})^2}=t_{(i+2^{\ell+2})^2},\quad i=0,1,\ldots,\left\lfloor \sqrt{2^{\ell+2}-1}\right\rfloor$$
$$\mbox{and}\quad t_{(2^{\ell}+2^{\ell+1})^2}\ne t_{(2^\ell+2^{\ell+2})^2},
$$
which can be shown by induction over $\ell\ge 2$,
where $\ell$ is defined by $5\cdot 2^\ell<N\le 5\cdot 2^{\ell+1}$.
The second bound follows from
$$r_{(i+2^{\ell+3})^2}=r_{(i+2^{\ell+4})^2},\quad i=0,1,\ldots,\left\lfloor\sqrt{2^{\ell+3}-1}\right\rfloor,$$
$$\mbox{and}\quad
r_{(2^{\ell+2}+2^{\ell+3})^2}\ne r_{(2^{\ell+2}+2^{\ell+4})^2},$$
where $\ell$ is defined by $2^{\ell+5}\le N<2^{\ell+6}$.
Figure \ref{maxordersquare} suggests that $\sqrt{N}$ is the right order of magnitude for the $N$th maximum order complexities of $(t_{n^2})$ and $(r_{n^2})$.
For $N\ge 2^{2\ell+2}$ the same lower bound $\sqrt{N/8}$ is true for binary pattern sequences along squares with the all one pattern of length $\ell$, that is, $a=2^\ell-1$ for~$\ell\ge 3$, see \cite{suwi}.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.5]{TM_along_squares_M.png}
\includegraphics[scale=.5]{RS_along_squares_M.png}
\end{center}
\caption{The $N$th maximum order complexity of the Thue-Morse (left) and Rudin-Shapiro (right) sequence along squares.}
\label{maxordersquare}
\end{figure}
This result was extended by Popoli \cite{po} to sequences along polynomial values of higher degrees $d$. However, the lower bounds are of order of magnitude $N^{1/d}$.
Note that no better lower bounds are known for the $N$th linear complexity of these subsequences of automatic sequences.
The problem is still open for other subsequences, for example along primes.
\begin{problem}
Study the maximum-order complexity of the subsequences of the Thue-Morse and the Rudin-Shapiro sequence along primes.
\end{problem}
The maximum order complexity of other automatic sequences has also been studied.
Sun, Zeng and Lin \cite{suzeli} showed that the $N$th maximum order complexity of the Rudin-Shapiro-like sequence $(\ell_n)$ defined by \eqref{eq:rslike} is of order of magnitude~$N$.
\bigskip
We remark that, in addition to automatic sequences based on the $k$-ary expansion~\eqref{eq:k-ary} of integers, one can analogously consider sequences based on other numeration systems.
In particular, consider the \emph{Fibonacci numbers} defined by
$$
F_0=0,~F_1=1\quad \mbox{and}\quad F_n=F_{n-1}+F_{n-2}\mbox{ for }n\ge 2.
$$
Then every positive integer~$n$ has a unique {\em Zeckendorf expansion} (or {\em Fibonacci expansion}), see for example \cite[Theorem~3.8.1]{alsh2},
$$
n=\sum_{i=0}^\infty e_i F_{i+2},\quad \mbox{where }e_i\in \{0,1\} \mbox{ and } e_ie_{i+1}=0\mbox{ for }i=0,1,\ldots
$$
Analogously to the Thue-Morse sum-of-digits sequence $(t_n)$ and the Rudin-Shapiro sequence~$(r_n)$
which can be defined by~\eqref{sumofdigitsdef} we can define and study the {\em Zeckendorf
sum-of-digits sequences modulo $2$} $(z_n)$ and $(u_n)$ defined by
\begin{equation}\label{zeck}
z_n=\sum_{i=0}^\infty e_i \bmod 2
\quad\mbox{and}\quad
u_n=\sum_{i=0}^\infty e_ie_{i+2}\bmod 2.
\end{equation}
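The Zeckendorf digits, and hence $(z_n)$, can be computed greedily: repeatedly subtracting the largest Fibonacci number not exceeding $n$ automatically enforces the condition $e_ie_{i+1}=0$. A short sketch (Python; illustrative, not part of the cited works):

```python
def zeckendorf_digits(n):
    """Digits e_0, e_1, ... with n = sum e_i F_{i+2}, computed greedily;
    the greedy choice automatically yields no two adjacent ones."""
    fibs = [1, 2]                      # F_2, F_3, F_4, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = [0] * len(fibs)
    for i in reversed(range(len(fibs))):
        if fibs[i] <= n:
            digits[i] = 1
            n -= fibs[i]
    return digits

def z(n):
    """Zeckendorf sum-of-digits sequence modulo 2."""
    return sum(zeckendorf_digits(n)) % 2

# sanity checks: the expansion represents n and has no adjacent ones
for n in range(1, 200):
    d = zeckendorf_digits(n)
    fibs = [1, 2]
    while len(fibs) < len(d):
        fibs.append(fibs[-1] + fibs[-2])
    assert sum(e * F for e, F in zip(d, fibs)) == n
    assert all(d[i] * d[i + 1] == 0 for i in range(len(d) - 1))
```

The sequence $(u_n)$ is obtained the same way, replacing the digit sum by $\sum_i e_ie_{i+2} \bmod 2$.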
Very recently the maximum-order complexity of $(z_n)$ and its subsequences along polynomial values has been studied by Jamet, Popoli and Stoll in \cite{japost}.
A lower bound on $M(u_n,N)$ and some generalizations can be obtained along the same lines and will be contained in Popoli's thesis.
\section{Well-distribution and correlation measures}\label{sec:correlation}
Mauduit and S\'ark\"ozy \cite{masa} introduced two measures
of pseudorandomness for finite sequences over $\{-1,+1\}$, the well-distribution measure and the correlation measure of order $k$. We adjust these definitions to infinite binary sequences $(s_n)$ over $\mathbb{F}_2$.
\begin{definition}
The {\em $N$th well-distribution measure} of $(s_n)$ is defined as
$$W(s_n,N)=\max_{a,b,t}\left|\sum_{j=0}^{t-1} (-1)^{s_{a+jb}}\right|,$$
where the maximum is taken over all integers $a,b,t$, $b\ge 1$, such that $0\le a \le a+(t-1)b\le N-1$.
\end{definition}
The well-distribution measure provides information on the balance\footnote{Note that the term {\em balanced} is used with a different meaning in combinatorics on words, see for example \cite[Definition~10.5.4]{alsh2}.} of the sequence, that is, the distribution of zeros and ones along arithmetic progressions.
For random sequences it is expected to be small. More precisely, Alon et al.\ \cite[Theorem~1]{alko} proved the following result on the typical value of the well-distribution measure.
\begin{theorem}
For all $\varepsilon>0$, there are numbers $N_0=N_0(\varepsilon)$ and $\delta=\delta(\varepsilon)>0$ such that for $N\ge N_0$ we have
$$\delta \sqrt{N}<W(s_n,N) <\frac{\sqrt{N}}{\delta}$$
with probability at least $1-\varepsilon$ with respect to the probability measure \eqref{eq:prob_space}.
\end{theorem}
Moreover, Aistleitner \cite{aistleitner} showed that
there exists a continuous limit distribution
of~$\frac{W(s_n,N)}{\sqrt{N}}$.
More precisely,
for any $t\in \mathbb{R}$ the limit
$$
F(t)=\lim_{N\rightarrow \infty} {\mathbb P}\left(\frac{W(s_n,N)}{\sqrt{N}}\le t\right)
$$
exists and satisfies
$$
\lim_{t\rightarrow \infty}t(1-F(t))e^{t^2/2}=\frac{8}{\sqrt{2\pi}},
$$
with respect to the probability measure~\eqref{eq:prob_space}.
\begin{definition}
For $k\geq 1$,
the {\em $N$th correlation measure of order~$k$} of a binary sequence~$(s_n)$ is
$$C_k(s_n,N)=\max_{M,D}\left|\sum^{M-1}_{n=0}(-1)^{s_{n+d_1}}\cdots (-1)^{s_{n+d_k}}\right|,$$
where the maximum is taken over all $D=(d_1,d_2,\ldots,d_k)$ with integers satisfying
$0\le d_1<d_2<\cdots<d_k$ and $1\le M\le N-d_k$.
\end{definition}
The correlation measure of order $k$ provides information about the similarity of parts of the sequence and their shifts.
For a random sequence this similarity and thus the correlation measure of order $k$ is expected to be small.
More precisely, Alon et al.\ \cite[Theorem 2]{alko} proved the following result on the typical
value of the correlation measure of
order $k$.
\begin{theorem}
For any $\varepsilon>0$, there exists an $N_0=N_0(\varepsilon)$ such that for all $N\ge N_0$
we have for a randomly chosen sequence $(s_n)$ and any $k$ with $2\le k\le N/4$,
$$
\frac{2}{5}\sqrt{N\log{N\choose k}}<C_k(s_n,N)<\frac{7}{4}\sqrt{N\log{N\choose k}}
$$
with probability at least $1-\varepsilon$ with respect to the probability measure \eqref{eq:prob_space}.
\end{theorem}
Moreover, Schmidt \cite[Theorem 1.1]{schmidt} showed that for fixed $k$, we have
$$
\lim_{N\rightarrow \infty}\frac{C_k(s_n,N)}{\sqrt{2N\log \binom{N}{k-1}}} =1
$$
with probability $1$ with respect to the probability measure \eqref{eq:prob_space}.
A large well-distribution measure implies a large correlation measure of order $2$. More precisely we have by \cite[Theorem 1]{ms03}
\footnote{$f(k)=O(g(k))$ is equivalent to $|f(k)|\le c g(k)$ for some constant $c>0$.}
$$
W(s_n,N)=O\left(\sqrt{NC_2(s_n,N)}\right).
$$
Mauduit and S\'ark\"ozy \cite{masa98} obtained bounds on the well-distribution measure and the correlation measure of order $2$ of the Thue-Morse sequence $(t_n)$ and the Rudin-Shapiro sequence~$(r_n)$.
For example, as a consequence of the bound
\begin{equation}\label{eq:exp-TM}
\left|\sum_{n=0}^{N-1}(-1)^{t_n}z^n\right|\leq (1+\sqrt{3}) N^{\log 3/\log 4},\quad |z|=1,
\end{equation}
of Gel'fond~\cite[p.~262]{gelfond}, see \cite{foma} for the explicit constant $1+\sqrt{3}$, they obtained a bound on~$W(t_n,N)$.
\begin{theorem}\label{thm:masa-thm1}
We have
$$
W(t_n,N) \leq 2(1+\sqrt{3}) N^{\log 3/\log 4}.
$$
\end{theorem}
Also, using the bound
\begin{equation}\label{eq:exp-RS}
\left|\sum_{n=0}^{N-1}(-1)^{r_n}z^n\right|\leq (2+\sqrt{2}) N^{1/2},\quad |z|=1,
\end{equation}
obtained by Rudin~\cite{rudin} and Shapiro~\cite{shapiro}, see also \cite[Theorem 3.3.2]{alsh2},
they proved a bound on~$W(r_n,N)$.
\begin{theorem}\label{thm:masa-thm2}
We have
$$
W(r_n,N)\leq 2(2+\sqrt{2}) N^{1/2}.
$$
\end{theorem}
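Both bounds can be confirmed numerically for small $N$ by exhaustive search over all arithmetic progressions, as in the following sketch (Python, illustrative only; roughly $O(N^2\log N)$ time, so only for small $N$), with $(r_n)$ computed as the parity of the number of occurrences of the block $11$ in the binary expansion of $n$ (the standard definition):

```python
import math

def W(s, N):
    """Brute-force N-th well-distribution measure of s_0..s_{N-1}."""
    best = 0
    for b in range(1, N + 1):          # common difference
        for a in range(N):             # starting index
            total = 0
            for j in range(a, N, b):   # running sums cover all lengths t
                total += 1 - 2 * s[j]  # (-1)^{s_j}
                best = max(best, abs(total))
    return best

t = [bin(n).count("1") & 1 for n in range(128)]
r = []                                 # Rudin-Shapiro: parity of '11' blocks
for n in range(128):
    bits = bin(n)[2:]
    r.append(sum(bits[i] == bits[i + 1] == "1"
                 for i in range(len(bits) - 1)) % 2)

for N in (16, 32, 64, 128):
    assert W(t, N) <= 2 * (1 + math.sqrt(3)) * N ** (math.log(3) / math.log(4))
    assert W(r, N) <= 2 * (2 + math.sqrt(2)) * math.sqrt(N)
```

Tracking the running partial sums inside the innermost loop covers all admissible lengths $t$ at once, so no separate loop over $t$ is needed.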
In general, following the proofs of \cite{masa98} we get
\begin{equation}\label{eq:W-idea}
W(s_n,N)
=O\left(
\sup_{|z|=1, m\le N}\left|\sum_{n=0}^{m-1} (-1)^{s_n}z^n\right| \right)
\end{equation}
and thus Theorems \ref{thm:masa-thm1} and \ref{thm:masa-thm2} follow, up to the constant, from \eqref{eq:exp-TM} and \eqref{eq:exp-RS}.
However, for $(t_n)$ and $(r_n)$ Mauduit and S\'ark\"ozy \cite{masa98} detected non-randomness properties by showing that the correlation measure of order $2$ of these sequences is large.
\begin{theorem}
We have
\begin{equation}\label{ctm} C_2(t_n,N)>\frac{N}{12},\quad N\ge 5,
\end{equation}
and
\begin{equation}\label{crs}
C_2(r_n,N)>\frac{N}{6},\quad N\ge 4.
\end{equation}
\end{theorem}
Mérai and Winterhof~\cite{mewi2} showed that all automatic
sequences share the property of having a large correlation measure of order $2$. They provided the following lower bound in terms of the defining automaton.
\begin{theorem}\label{thm:gen_lowe_bound_on_C}
Let $(s_n)$ be a $k$-automatic binary sequence generated by the finite automaton $(Q,\Sigma,\delta,q_0,\varphi,\{0,1\})$.
Then
\[
C_2(s_n,N)\geq \frac{N}{k(|Q|+1)} \quad \text{for } N\geq k(|Q|+1).
\]
\end{theorem}
This result applied to $(t_n)$ and $(r_n)$ gives the following bounds
$$
C_2(t_n,N)\geq \frac{N}{6}, \quad N\geq 6, \quad\text{and}\quad C_2(r_n,N)\geq \frac{N}{10}, \quad N\geq 10,
$$
which improves \eqref{ctm}.
Figures~\ref{fig:well} and \ref{fig:cor} may lead to the conjecture that the well-distribution measure and the correlation measure of order $2$ of both $(t_{n^2})$ and $(r_{n^2})$ are of order of magnitude $N^{1/2}$ and $(N\log N)^{1/2}$, respectively.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.51]{TM_along_squares_W.png}
\includegraphics[scale=.51]{RS_along_squares_W.png}
\end{center}
\caption{The $N$th well-distribution measure of the Thue-Morse (left) and Rudin-Shapiro (right) sequence along squares.}
\label{fig:well}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.51]{TM_along_squares_C2.png}
\includegraphics[scale=.51]{RS_along_squares_C2.png}
\end{center}
\caption{The $N$th second order correlation measure of the Thue-Morse (left) and Rudin-Shapiro (right) sequence along squares.}
\label{fig:cor}
\end{figure}
\begin{problem}
For fixed $k=2,3,\ldots$
show that
$$
C_k(t_{n^2},N)=o(N)\quad \mbox{and}\quad C_k(r_{n^2},N)=o(N).
$$
\end{problem}
Mauduit and Rivat \cite{mari18} showed that
$$
\left|\sum_{n=0}^{N-1}(-1)^{r_{n^2}}z^n\right|= O\left( N^{1-\eta}\right),\quad |z|=1,\quad \mbox{for some $\eta>0$},
$$
which, together with \eqref{eq:W-idea}, gives
a bound on $W(r_{n^2},N)$ of the same order of magnitude.
More precisely, \cite{mari18} deals with the more general case of binary pattern sequences $(p_n)$ defined by \eqref{eq:pattern-equation} with either the all one pattern of length $k\ge 2$, that is, $a=2^k-1$, or the patterns $10\ldots 01$ of length $k\ge 3$, that is, $a=2^{k-1}+1$, and the constants depend on $k$.
For the Thue-Morse sequence along squares $(t_{n^2})$ one
can easily derive a nontrivial bound on
$$
\left|\sum_{n=0}^{N-1}(-1)^{t_{n^2}}z^n\right|,\quad |z|=1,
$$
and thus on $W(t_{n^2},N)$: the proof of \cite[Th\'eor\`eme 1]{mari1} for $z=1$
also works for $z\neq 1$, since after applying a variant of the van der Corput inequality, \cite[Lemma 15]{mari1}, one obtains an expression which no longer depends on the variable $z$, that is, the same expression as for $z=1$.
Theorem~\ref{thm:suwi-tm} in Section~\ref{sec:max-order_complexity} above shows that the Thue-Morse sequence has maximum order complexity $M(t_n,N)$ of order of magnitude $N$.
Although a large maximum order complexity is desirable, it should not be too large, since otherwise the correlation measure of order $2$ is large.
Namely, we have
\begin{equation}\label{C2M} C_2(s_n,N)\ge M(s_n,N)-1
\end{equation}
since by \cite[Proposition 3.1]{ja} there exist $0\le n_1<n_2\le N-M(s_n,N)-1$ with
$$s_{n_1+i}=s_{n_2+i},\quad i=0,\ldots,M(s_n,N)-2,\quad \mbox{but }s_{n_1+M(s_n,N)-1}\ne s_{n_2+M(s_n,N)-1}$$
and thus
$$M(s_n,N)-1=\sum_{i=0}^{M(s_n,N)-2}(-1)^{s_{n_1+i}+s_{n_2+i}}\le C_2(s_n,N).$$
Combining \eqref{tmmoc} and \eqref{C2M} we get for the Thue-Morse sequence
$$
C_2(t_n,N)\ge \frac{N}{5},\quad N\ge 4,
$$
which further improves the constant in \eqref{ctm}.
Combining \eqref{rsmoc} and \eqref{C2M} recovers~\eqref{crs}.
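The lower bound $C_2(t_n,N)\ge N/5$ can be confirmed by exhaustive search over all pairs of shifts for small $N$ (Python sketch, cubic time, illustrative only):

```python
def C2(s, N):
    """Brute-force N-th correlation measure of order 2."""
    best = 0
    for d1 in range(N - 1):
        for d2 in range(d1 + 1, N):
            total = 0
            for n in range(N - d2):      # running sums cover all M
                total += (1 - 2 * s[n + d1]) * (1 - 2 * s[n + d2])
                best = max(best, abs(total))
    return best

t = [bin(n).count("1") & 1 for n in range(64)]
for N in range(5, 65):
    assert C2(t, N) > N / 12      # Mauduit-Sarkozy bound
    assert C2(t, N) >= N / 5      # via the maximum order complexity
```

Extending the loop over $k$-tuples of shifts gives a (much slower) brute force for $C_k$.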
The correlation measure of order $2$ with bounded lags of some generalizations of the Rudin-Shapiro sequence has recently been studied in \cite{mastta}.
In contrast to the Thue-Morse and Rudin-Shapiro sequences, the well-distribution measure of some other binary automatic sequences is very large.
For example,
the Baum-Sweet sequence $(b_n)$,
the characteristic sequence $(c_n)$ of the sums of three squares,
the paper-folding sequence $(v_n)$
and the apwenian sequence $(w_n)$ defined by \eqref{apdef} are very unbalanced and thus
all have a well-distribution measure of order of magnitude $N$.
However, it seems interesting to study the well-distribution measure of
arbitrary apwenian sequences.
For the Rudin-Shapiro-like sequence $(\ell_n)$ defined by \eqref{eq:rslike} Lafrance, Rampersad and Yee \cite{laraye} proved
$$\liminf_{N\rightarrow \infty} \frac{\sum_{n=0}^{N-1}(-1)^{\ell_n}}{\sqrt{N}}=\frac{\sqrt{3}}{3}
\quad \mbox{and}\quad
\limsup_{N\rightarrow \infty} \frac{\sum_{n=0}^{N-1}(-1)^{\ell_n}}{\sqrt{N}}=\sqrt{2}.
$$
However, a bound on $W(\ell_n,N)$ is not known and in contrast to \eqref{eq:exp-RS} for the Rudin-Shapiro sequence~$(r_n)$, for $(\ell_n)$
the absolute values
$$\left|\sum_{n=0}^{N-1}(-1)^{\ell_n} z^n\right|$$
can be of much larger order of magnitude than $\sqrt{N}$ for some $z$ with $|z|=1$, see \cite[Theorem~2]{al16} as well as \cite{chgr}.
Finally, we remark that the result of Theorem~\ref{thm:gen_lowe_bound_on_C} provides an estimate on the \emph{state complexity} of sequences in terms of the correlation measure of order $2$.
\begin{definition}
Let $k\geq 2$. Then the \emph{$N$th state complexity} $SC_k(s_n,N)$ of a sequence~$(s_n)$ over $\mathbb{F}_2$ is the minimum of the number of states of any finite $k$-automaton which generates the first $N$ sequence elements.
\end{definition}
\begin{cor}
Let $(s_n)$ be a binary sequence. Then for all $k\geq 2$ we have
\[
SC_k(s_n,N)\geq\frac{N}{k\cdot C_2(s_n,N)}-1 \quad \text{for } N\geq 3.
\]
\end{cor}
\section{Expansion complexity}\label{sec:expansion_complexity}
Theorem~\ref{thm:MW-lin-compl-gen} indicates that automatic sequences possess good properties in terms of the linear complexity profile.
However, the results of Section~\ref{sec:correlation} show that these sequences have a serious lack of
pseudorandomness.
Diem~\cite{di} showed that these sequences are
not just statistically auto-correlated, but
are completely predictable from a relatively short initial segment.
He introduced the notion of \emph{expansion complexity}
to turn this security flaw into a quantitative form.
\begin{definition}
Let $(s_n)$ be a sequence over $\mathbb{F}_q$ with generating function
$$
G(x)=\sum_{n=0}^{\infty}s_nx^n \in \mathbb{F}_q \llbracket x \rrbracket .
$$
For a positive integer $N$, the \emph{$N$th expansion complexity $E(s_n,N)$ of $(s_n)$} is $E(s_n,N)=0$ if $s_0=\dots=s_{N-1}=0$ and otherwise the least total degree of a non-zero polynomial $h(x,y)\in\mathbb{F}_q[x,y]$ such that
\begin{equation}\label{eq:h}
h(x,G(x))\equiv 0 \bmod x^N.
\end{equation}
The sequence $(E(s_n,N))_{N=1}^\infty$ is called \emph{expansion complexity profile of $(s_n)$} and
$$
E(s_n)=\sup_{N\geq 1} E(s_n,N)
$$
is the \emph{expansion complexity of $(s_n)$}.
\end{definition}
By Christol's Theorem~\ref{thm:christol}, a sequence is automatic if and only if its expansion complexity is finite. For example, we have for the Thue-Morse sequence $(t_n)$, the Rudin-Shapiro sequence $(r_n)$, the $p$-ary pattern sequence $(p_n)$, the Baum-Sweet sequence $(b_n)$, the Rudin-Shapiro-like sequence $(\ell_n)$, the characteristic sequence $(c_n)$ of sums of three squares, the paper-folding sequence $(v_n)$ and the apwenian sequence $(w_n)$ that
$$
E(t_n)= 5, \quad E(r_n)= 7, \quad E(p_n)\le p^\ell+2p-1, \quad E(b_n)= 3, \quad E(\ell_n)\le 12,
$$
$$
E(c_n)\le 12, \quad E(v_n)= 6 \quad \mbox{and}\quad E(w_n)= 4,
$$
which follows from \eqref{eq:TH-equation}, \eqref{eq:RS-equation}, \eqref{eq:pattern-equation}, \eqref{eq:rslh},
\eqref{eq:BS-equation}, \eqref{eq:cnh}, \eqref{pfh} and \eqref{aph}. The equalities follow from
the fact that there is no polynomial of lower degree with this property, since~$h(x,y)$ is irreducible in these cases, see \cite[Proposition~4]{di}.
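For instance, the value $E(t_n)=5$ comes from the quadratic relation satisfied by the Thue-Morse generating function over $\mathbb{F}_2$, which in one common normalization reads $x+(1+x)^2G(x)+(1+x)^3G(x)^2=0$, so that $h(x,y)=x+(1+x)^2y+(1+x)^3y^2$ has total degree $5$. The relation can be verified to any truncation order with carry-less polynomial arithmetic, encoding a power series mod $x^N$ as a Python integer whose $n$th bit is the coefficient of $x^n$ (illustrative sketch):

```python
N = 256
MASK = (1 << N) - 1

def mul(a, b):
    """Carry-less (GF(2)[x]) product, truncated mod x^N."""
    result = 0
    i = 0
    while b >> i:
        if (b >> i) & 1:
            result ^= a << i
        i += 1
    return result & MASK

# G(x) = sum t_n x^n with t_n the Thue-Morse sequence
G = 0
for n in range(N):
    G |= (bin(n).count("1") & 1) << n

p2 = mul(3, 3)        # (1+x)^2 = 1 + x^2 over GF(2)
p3 = mul(p2, 3)       # (1+x)^3 = 1 + x + x^2 + x^3
lhs = 2 ^ mul(p2, G) ^ mul(p3, mul(G, G))   # x + (1+x)^2 G + (1+x)^3 G^2
assert lhs == 0       # the relation holds mod x^256
```

Since truncated products agree with the true products mod $x^N$, a vanishing left-hand side confirms the relation up to that order.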
Diem showed \cite{di} that if a sequence has small expansion complexity, then
long initial segments of the sequence can be computed efficiently from short ones. We summarize his
results.
\begin{theorem}\label{thm:Diem}
Let $(s_n)$ be a sequence over $\mathbb{F}_q$ with expansion complexity $E(s_n)=d$. From the first $d^2$ elements, one can compute an irreducible polynomial $h(x,y)\in \mathbb{F}_q[x,y]$ of degree $\deg h \leq d$ with $h(x,G(x))=0$ in polynomial time in $d\cdot \log q$.
Moreover, an initial segment of the sequence of arbitrary length $M$ can be determined from $h$ and the $d^2$ initial values in polynomial time in $d \cdot \log q$ and in linear time in~$M$.
\end{theorem}
Theorem~\ref{thm:Diem} shows that automatic sequences have a strong non-randomness property. The expansion complexity profile is defined to capture this non-randomness property locally, that is, for initial segments of sequences.
For the $N$th expansion complexity, we have the trivial bound $E(s_n,N)\leq N-1$ realized by the polynomial
$$
h(x,y)=y-\sum_{n=0}^{N-1}s_nx^n.
$$
Moreover, one can show the stronger upper bound
\begin{equation}\label{eq:exp_upper}
\binom{E(s_n,N)+1}{2}\leq N,
\end{equation}
which holds for all sequences $(s_n)$ and all $N\geq 1$, see \cite[Theorem~1]{GoMeNi}.
The $N$th expansion complexity of random sequences is concentrated near its upper bound \eqref{eq:exp_upper}, see \cite[Theorem~2]{GoMe}.
\begin{theorem}
We have
$$\liminf_{N\rightarrow \infty} \frac{E(s_n,N)}{\sqrt{N}} \geq \frac{\sqrt{2}}{2},
$$
with probability one
with respect to the probability measure~\eqref{eq:prob_space}.
\end{theorem}
One can estimate the $N$th expansion complexity $E(s_n,N)$ in terms of the $N$th linear complexity $L(s_n,N)$, see \cite[Theorem~3]{meniwi}.
\begin{theorem}\label{thm:MeWiNi}
Let $(s_n)$ be a sequence over $\mathbb{F}_q$ and let $G(x)$ be its generating function. For $N\geq 2$, assume that
$$
G(x)\not\equiv 0\bmod x^N.
$$
Let $L_N=L(s_n,N)$ be the $N$th linear complexity
and let
$$
\sum_{\ell=t_N}^{L_N} c_\ell s_{i+\ell}=0,\quad 0\le i\le N-L_N-1,
$$
be a shortest linear recurrence for the first $N$ terms of $(s_n)$, where $c_{L_N}=1$ and $c_{t_N}\ne 0$.
Then
$$
E(s_n,N)\ge \left\{\begin{array}{ll} L_N-t_N+1 & \mbox{for } N>(L_N-t_N)(L_N-\min\{1,t_N-1\}),\\
\left\lceil \frac{N}{L_N-\min\{1,t_N-1\}}\right\rceil & \mbox{otherwise,}
\end{array}\right.
$$
and
$$
E(s_n, N)\le \min\{L_N+\max\{-1,-t_N+1\},N-L_N+2\}.
$$
\end{theorem}
The result expresses quantitatively that a \emph{very} large $N$th linear complexity, that is, an $N$th linear complexity close to $N$, is a non-randomness property. Moreover, it enables us to estimate the $N$th expansion complexity from below if the $N$th linear complexity is not too close to either $0$ or $N$ (on a logarithmic scale), say, of order of magnitude $\sqrt{N}$.
We refer to \cite{meniwi, howi} for applications of Theorem~\ref{thm:MeWiNi} for estimating the $N$th expansion complexity of certain sequences.
Certain subsequences of automatic sequences, for example the Thue-Morse and Rudin-Shapiro sequences along squares, are not automatic, see Section~\ref{sec:normality} below, and thus have an unbounded expansion complexity profile. However, the growth rates of their expansion complexity profiles are not known.
\begin{problem}
Estimate the expansion complexity profiles of the subsequences $(t_{n^2})$ and $(r_{n^2})$ of the Thue-Morse and Rudin-Shapiro sequence along squares.
\end{problem}
Figure~\ref{fig:exp} suggests that $E(t_{n^2},N)$ and $E(r_{n^2},N)$ are both of order of magnitude $\sqrt{N}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.51]{TM_along_squares_E.png}
\includegraphics[scale=.51]{RS_along_squares_E.png}
\end{center}
\caption{The $N$th expansion complexity of the Thue-Morse (left) and Rudin-Shapiro (right) sequence along squares.}
\label{fig:exp}
\end{figure}
Finally, we remark that in order to use the full strength of Theorem~\ref{thm:Diem} for inferring sequence elements, one needs to require the irreducibility of the polynomial $h(x,y)$ in \eqref{eq:h}. In \cite{GoMe, GoMeNi}, the authors studied this variant of the $N$th expansion complexity and the relation between these two complexity measures.
\section{Subword complexity and normality}\label{sec:normality}
The results of Section~\ref{sec:correlation} show that many automatic sequences, including the Thue-Morse and Rudin-Shapiro sequences, are balanced, that is, the frequencies of the symbols are close to the expected values. However, the frequencies of longer patterns are far from uniform. This phenomenon can be made precise by the notion of \emph{subword complexity}.
\begin{definition}
For a sequence $(s_n)$ over the alphabet $\Delta$ the \emph{subword complexity} $p(s_n,k)$ is the number of distinct subwords (contiguous blocks) of length $k$ appearing in the sequence.
\end{definition}
Trivially we have $1\le p(s_n,k)\le |\Delta|^k$ and for ultimately periodic sequences we have $p(s_n,k)=O(1)$.
By \cite[Corollary 10.3.2]{alsh2} the subword complexity $p(s_n,k)$ of automatic sequences $(s_n)$ is of order of magnitude $k$.
\begin{theorem}\label{thm:subword} If $(s_n)$ is an automatic sequence that is not ultimately periodic,
then we have
\footnote{$f(k)=\Theta(g(k))$ is equivalent to $c_1g(k)\le f(k)\le c_2g(k)$ for some constants $c_2\ge c_1>0$.}
$$p(s_n,k)=\Theta(k).$$
\end{theorem}
For the Thue-Morse sequence $(t_n)$, the exact value of its subword complexity $p(t_n,k)$ was independently determined by Brlek \cite[Proposition~4.4]{br} and by de Luca and Varricchio \cite[Proposition~4.4]{deva}, see also \cite[Exercise~10.11.10]{alsh2}.
De Luca and Varricchio \cite[Property~3.3]{deva} also showed that patterns such as $000$ and $111$ do not appear in the Thue-Morse sequence
and, more generally, the following result.
\begin{theorem}
The Thue-Morse sequence is cube-free, that is, no pattern of the form $www$ with $w\in \{0,1\}^k$ for some $k\ge 1$ appears in the sequence.
\end{theorem}
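Cube-freeness can be checked by brute force on a prefix (a sketch of our own; the absence of cubes in a finite prefix is of course only consistent with, not a proof of, the theorem):

```python
def thue_morse(n_terms):
    """First n_terms of the Thue-Morse sequence."""
    return [bin(n).count("1") % 2 for n in range(n_terms)]

def has_cube(seq):
    """True if seq contains a block www for some nonempty word w."""
    n = len(seq)
    for i in range(n):
        for k in range(1, (n - i) // 3 + 1):
            if seq[i:i + k] == seq[i + k:i + 2 * k] == seq[i + 2 * k:i + 3 * k]:
                return True
    return False

assert has_cube([0, 1, 1, 1, 0])          # sanity check: 111 is a cube
assert not has_cube(thue_morse(1 << 9))   # no cube in the first 512 terms
```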
The papers \cite{br,deva} contain also several other results on the non-existence of certain patterns in
the Thue-Morse sequence.
The subword complexity and the correlation measure of order $\ell$ are related by the following
result of Cassaigne et al. \cite[Theorem 6]{cafe}.
\begin{theorem}
If for some positive integers $k$ and $N$
$$
C_{\ell}(s_n,N)\le \frac{N}{2^{2k+1}},\quad \ell=1,2,\ldots,k,
$$
then
$$p(s_n,k)=2^k.$$
\end{theorem}
For automatic sequences we can have $p(s_n,k)=2^k$ only for finitely many $k$ since $p(s_n,k)=\Theta(k)$.
However, certain subsequences of automatic sequences are \emph{normal}, that is, all patterns appear in the sequence with the expected frequencies. More formally, a sequence $(s_n)$ is called \emph{normal} if for any fixed length $k$ and any pattern $\mathbf{e}\in \Delta^k$
$$
N_k(s_n, \mathbf{e}, N) =\frac{
\#\{0\le n< N: (s_{n},s_{n+1},\ldots,s_{n+k-1})=\mathbf{e}\}}{N}\rightarrow \frac{1}{|\Delta|^k} \quad \text{as } N \rightarrow \infty.
$$
Drmota et al.\ \cite{drmari} and M\"ullner \cite{mu} proved the normality of the Thue-Morse and the Rudin-Shapiro sequences along squares, that is
\begin{equation}\label{eq:normality}
\lim_{N\rightarrow \infty}N_k(t_{n^2}, \mathbf{e}, N) =2^{-k}
\quad \text{and} \quad
\lim_{N\rightarrow \infty}N_k(r_{n^2}, \mathbf{e}, N)=2^{-k}
\end{equation}
for any $\mathbf{e}\in \{0,1\}^k$.
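Numerically (our own illustration, not part of the cited proofs), the convergence in \eqref{eq:normality} is already visible for small $k$ and moderate $N$:

```python
from itertools import product

def t(n):
    """Thue-Morse sequence."""
    return bin(n).count("1") % 2

N, k = 20000, 2
s = [t(n * n) for n in range(N + k)]          # Thue-Morse along squares
freq = {e: sum(tuple(s[n:n + k]) == e for n in range(N)) / N
        for e in product((0, 1), repeat=k)}
print(freq)  # each frequency should be close to 2^{-k} = 0.25
```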
The main tool to obtain the results \eqref{eq:normality} is to prove estimates on the sums $$
\sum_{n<N}(-1)^{e_0t_{n^2}+\dots + e_{k-1}t_{(n+k-1)^2} }
\quad \text{and} \quad
\sum_{n<N}(-1)^{e_0r_{n^2}+\dots + e_{k-1}r_{(n+k-1)^2} }
$$
for any $e_0,\dots, e_{k-1}\in\{0,1\}$.
These sums can be estimated via a Fourier analytic method of Mauduit and Rivat which has its origin in \cite{mari1,mari2}.
For more details we refer to the survey \cite{dr} of Drmota and the original papers \cite{drmari,mu}.
In particular, the normality results \eqref{eq:normality} yield the subword complexities
\begin{equation}\label{eq:subword}
p(t_{n^2},k)=p(r_{n^2},k)=2^k.
\end{equation}
It is conjectured but not yet proved that the subsequences of the Thue-Morse sequence~$(t_{f(n)})$ and the Rudin-Shapiro sequence~$(r_{f(n)})$ along any polynomial~$f$ of degree $d\ge 3$
are normal, see \cite[Conjecture 1]{drmari}.
Even the weaker problem of determining the frequency of $0$ and $1$ in the subsequences $(t_{f(n)})$ and $(r_{f(n)})$ along any polynomial $f(x)$ of degree $d\ge 3$ with $f(\mathbb{N}_0)\subset \mathbb{N}_0$ seems to be very intricate, see
\cite[above Conjecture 1]{drmari}.
\begin{problem}
Show that the subsequences of the Thue-Morse and Rudin-Shapiro sequences along cubes, fourth powers, and more generally along the values of any polynomial of degree at least~$3$, are normal.
\end{problem}
However, Moshe \cite{mo} proved the following lower bound on the subword complexity of~$(t_{f(n)})$ for any polynomial $f$ of degree $d\ge 2$ with $f(\mathbb{N}_0)\subset \mathbb{N}_0$,
\begin{equation}\label{moshe}
p(t_{f(n)},k)\ge 2^{k/2^{d-2}}.
\end{equation}
Stoll \cite{st12,st16} showed that the number of zeros (resp.\ ones) among the first $N$ sequence elements of both $(t_{f(n)})$ and $(r_{f(n)})$
is at least of order of magnitude $N^{4/(3d+1)}$, $d\ge 3$.
For subsequences $(z_{f(n)})$ of the Zeckendorf sum of digits sequence $(z_n)$ defined by \eqref{zeck} the numbers of zeros and ones among the first~$N$ sequence elements are both lower bounded by~$N^{4/(6d+1)}$, see Stoll~\cite{st13}.
M\"ullner and Spiegelhofer \cite{musp,sp} addressed the normality problem for the Thue-Morse sequence along the Piateski-Shapiro sequence $\lfloor n^c\rfloor$ for $1<c<3/2$. Moreover, it is asymptotically balanced (or simply normal)\cite[Theorem~1.2]{sp20} for $1<c<2$.
For results on the Thue-Morse and Rudin-Shapiro sequences along primes see \cite{bo1,bo2,mari2,mari} and references therein.
In particular, the Thue-Morse sequence $(t_p)$ along primes is balanced, see Mauduit and Rivat \cite{mari2}. However, it is not known whether $(t_{f(p)})_{p}$
is normal for any nonconstant polynomial~$f$.
From Theorem~\ref{thm:subword} and \eqref{eq:subword} we know that $(t_{n^2})$ and $(r_{n^2})$ are not automatic
and by Theorem~\ref{thm:christol} these subsequences are, in contrast to the original sequences, not of bounded expansion complexity, that is,
$$
\lim\limits_{N\rightarrow \infty}E(t_{n^2},N)=\lim\limits_{N\rightarrow \infty}E(r_{n^2},N)=\infty.
$$
Theorem \ref{thm:subword} combined with \eqref{moshe} implies that $(t_{f(n)})$ is not automatic
and
$$\lim\limits_{N\rightarrow \infty}E(t_{f(n)},N)=\infty$$
for any polynomial $f$ of degree at least $2$ with $f(\mathbb{N}_0)\subset \mathbb{N}_0$.
Note that it was shown in \cite{al82} that $(t_{f(n)})$ is not $2^r$-automatic and in
\cite{alsa} that $(r_{f(n)})$ is not $2^r$-automatic and thus we also have
$$
\lim\limits_{N\rightarrow \infty}E(r_{f(n)},N)=\infty.
$$
Subsequences of the Thue-Morse sequence along geometric sequences such as $(t_{3^n})$ seem to be even more difficult to analyze.
For example, Lagarias \cite[Conjecture~1.12]{la} conjectured that
each pattern appears at least once in $(t_{3^n})$.
For other related results see \cite{duwe,kast}.
For more details on the normality of automatic sequences and their subsequences we refer to \cite{dr}.
\section{Analogs for finite fields}\label{sec:finite_fields}
An analog for finite fields of the problem on the distribution of automatic sequences and their subsequences was introduced by Dartyge and S\'ark\"ozy \cite{dasa}. It has been further investigated in \cite{damewi,mawi,sw18,sw1,ma}, see also
\cite{dielsh,ga17,sw2,damasa,os}.
In the finite field setting, some problems can be solved whose integer analogs seem to be out of reach, including the normality problem for the analog of the Thue-Morse sequence and the frequency problem for the analog of the Rudin-Shapiro sequence, both along polynomial values. Hence, these finite field analogs are further attractive sources of pseudorandomness.
For a prime $p$ and $q=p^r$ with $r\geq 2$
let $(\beta_1,\ldots,\beta_r)$ be an ordered basis of $\mathbb{F}_q$ over~$\mathbb{F}_p$. Then one can write all elements $\xi \in \mathbb{F}_q$ as
\begin{equation}\label{eq:xi}
\xi=\sum_{i=1}^r x_i\beta_i , \quad x_1,\dots, x_r\in \mathbb{F}_p.
\end{equation}
It is natural to consider the coefficients $x_1,\dots, x_r$ as digits with respect to the basis~$(\beta_1,\dots, \beta_r)$.
Then, in analogy to the Thue-Morse and Rudin-Shapiro sequences satisfying \eqref{sumofdigitsdef}, we define the {\em Thue-Morse function}
$$
T\left(\sum_{i=1}^r x_i\beta_i\right)=
\sum_{i=1}^rx_i
,\quad x_1,\ldots,x_r\in \mathbb{F}_p,
$$
and {\em Rudin-Shapiro function}
$$
R\left(\sum_{i=1}^r x_i\beta_i\right)
=\sum_{i=1}^{r-1}x_ix_{i+1}
,\quad x_1,\ldots,x_r\in \mathbb{F}_p,
$$
on $\mathbb{F}_q$.
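In coordinates, both functions are easy to tabulate (a sketch of our own, identifying $\xi$ with its digit vector $(x_1,\dots,x_r)\in\mathbb{F}_p^r$). Note that $T$, being $\mathbb{F}_p$-linear, is exactly balanced on $\mathbb{F}_q$, while $R$ is not:

```python
from itertools import product

def T(x, p):
    """Thue-Morse function: digit sum mod p."""
    return sum(x) % p

def R(x, p):
    """Rudin-Shapiro function: sum of products of adjacent digits mod p."""
    return sum(a * b for a, b in zip(x, x[1:])) % p

p, r = 3, 3
counts_T = [0] * p
counts_R = [0] * p
for x in product(range(p), repeat=r):
    counts_T[T(x, p)] += 1
    counts_R[R(x, p)] += 1
print(counts_T, counts_R)  # T is exactly balanced; R is not
```

For $p=3$, $r=3$ one gets $[9,9,9]$ for $T$ but $[15,6,6]$ for $R$ (here $R(x)=x_2(x_1+x_3)$), which already hints at why the two functions require different tools.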
Dartyge and S\'ark\"ozy \cite{dasa} studied the balance of the Thue-Morse function along polynomial values. They derived results using the Weil bound \cite[Theorem~5.38]{lini} on additive character sums:
\begin{lemma}\label{lemma:weil}
Let $f\in\mathbb{F}_q[x]$ be of degree $d\geq 1$ with $\gcd(d,q)=1$ and $\psi$ be a nontrivial additive character of $\mathbb{F}_q$. Then
$$
\left|\sum_{\xi\in\mathbb{F}_q}\psi(f(\xi))\right|\leq (d-1)\sqrt{q}.
$$
\end{lemma}
Put
$$
e(\alpha)=\exp(2\pi i \alpha), \quad \alpha\in\mathbb{R},
$$
and
note that $\psi(x)=e(T(x)/p)$ is a nontrivial additive character of $\mathbb{F}_q$.
Then from
$$
\sum_{h=0}^{p-1} e\left(\frac{ha}{p}\right)=\left\{\begin{array}{cc}0,& a\neq 0,\\ p, &a=0,\end{array}\right.
\quad a\in \mathbb{F}_p,
$$
we get
$$
\#\{\xi\in \mathbb{F}_q: T(f(\xi))=c\}= \frac{1}{p}\sum_{h=0}^{p-1}\sum_{\xi\in \mathbb{F}_q}\psi\left(hf(\xi)\right)e\left(\frac{-hc}{p}\right).
$$
The contribution of $h=0$ is trivially $p^{r-1}$, which is the expected number of solutions. The other terms for $h\neq 0$ contribute to the error term and can be bounded by Lemma~\ref{lemma:weil}. We immediately get \cite[Theorem~1.2]{dasa}:
\begin{theorem}\label{thm:dasa}
Let $f\in \mathbb{F}_q[x]$ be of degree $d$ with $\gcd(d, q) = 1$. Then for all $c\in \mathbb{F}_p$, we
have
$$
\big|\#\{\xi\in \mathbb{F}_q: T(f(\xi))=c\}-p^{r-1}\big|\le (d-1)p^{r/2}.
$$
\end{theorem}
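Theorem~\ref{thm:dasa} can be verified by brute force in the smallest nontrivial case (our own toy computation, not from \cite{dasa}): $q=9$ with $\mathbb{F}_9=\mathbb{F}_3[\alpha]/(\alpha^2+1)$, basis $(1,\alpha)$, and $f(\xi)=\xi^2$, so that $d=2$ and $\gcd(d,q)=1$:

```python
p, r, d = 3, 2, 2            # q = p^r = 9, f(xi) = xi^2, gcd(d, q) = 1

def mul(u, v):
    """Multiply u = a + b*alpha and v = c + e*alpha in F_9, where alpha^2 = -1."""
    a, b = u
    c, e = v
    return ((a * c - b * e) % p, (a * e + b * c) % p)

def T(u):
    """Thue-Morse function: coordinate sum mod p."""
    return sum(u) % p

counts = [0] * p
for a in range(p):
    for b in range(p):
        counts[T(mul((a, b), (a, b)))] += 1   # tally T(f(xi)) for f(xi) = xi^2

expected = p ** (r - 1)                 # main term p^{r-1} = 3
bound = (d - 1) * p ** (r / 2)          # error bound (d-1) p^{r/2} = 3
assert all(abs(c - expected) <= bound for c in counts)
print(counts)
```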
Later, Dartyge, M\'erai and Winterhof \cite{damewi} investigated this problem for the Rudin-Shapiro function. The main difference between the two problems is that the Rudin-Shapiro function, contrary to the Thue-Morse function, is not a linear map, and standard character sum techniques fail in this situation. Namely, consider $R(f(\xi))$, with $\xi$ of the form \eqref{eq:xi}, as a polynomial in the $r$ variables $x_1,\dots, x_r$. Then applying Lemma~\ref{lemma:weil} to a single coordinate $x_i$ yields an error term larger than the main term. Stronger bounds in higher dimensions, such as the Deligne bound \cite[Th\'eor\`eme 8.4]{De74}, cannot be applied either, as they require more technically intricate conditions which are not satisfied in our situation. However, sacrificing the explicit dependence on the degree $d$, one can use an affine version of the Hooley-Katz theorem, see \cite{ho} or \cite[Theorem~7.1.14]{handbookFF}.
First recall that the \emph{(affine) singular locus} ${\mathcal L}(F)$ of a polynomial $F\in\mathbb{F}_p[x_1,\dots, x_r]$ is the set of common zeros in $\overline{\mathbb{F}_p}^r$
of the polynomials\footnote{$\overline{\mathbb{F}_p}=\bigcup_{n=1}^\infty \mathbb{F}_{p^n}$ denotes the algebraic closure of $\mathbb{F}_p$.}
$$
F,\frac{\partial F}{\partial x_1},\ldots,\frac{\partial F}{\partial x_r}.
$$
We also recall that the dimension of ${\mathcal L}(F)$ is the largest $d$ for which there exist $1\le i_1<i_2<\ldots<i_d\le r$ such that there is no nonzero polynomial $P$ in $d$ variables with
$P(y_{i_1},\ldots,y_{i_d})=0$ for all $(y_1,\ldots,y_r)\in {\mathcal L}(F)$, see \cite[Corollary~9.5.4]{ideal}.
\begin{lemma}\label{lemma:HK}
Let $Q\in\mathbb{F}_p[x_1,\dots, x_r]$ be of degree~$d\ge 1$ such that the dimensions of the singular loci of $Q$ and its homogeneous part $Q_d$ of degree $d$ satisfy
$$
\max\{\dim({\mathcal L}(Q)),\dim({\mathcal L}(Q_d))-1\}\le s.
$$
Then the number $N$ of zeros of~$Q$ in $\mathbb{F}_p^r$ satisfies
$$
\left|N-p^{r-1}\right|\le C_{d,r}p^{(r+s)/2},
$$
where $C_{d,r}$ is a constant depending only on $d$ and $r$.
\end{lemma}
Then using Lemma~\ref{lemma:HK}, one can show that the Rudin-Shapiro function is also asymptotically balanced on polynomial values, see \cite[Theorem~1]{damewi}.
\begin{theorem}\label{thm:damewi}
Let $f\in \mathbb{F}_q[x]$ be of degree $d$ with $\gcd(d, q) = 1$. Then for all $c\in \mathbb{F}_p$, we
have
$$
\big|\#\{\xi\in \mathbb{F}_q: R(f(\xi))=c\}-p^{r-1}\big|\le C_{d,r} p^{(3r+1)/4},
$$
where the constant $C_{d,r}$ depends only on the degree $d$ of $f$ and $r$.
\end{theorem}
Theorem \ref{thm:damewi} is nontrivial if $r$ is fixed and $p\rightarrow \infty$. Contrary to Theorem \ref{thm:dasa}, nothing is known in the dual situation where $p$ is fixed and $r\rightarrow\infty$.
\begin{problem}\label{prob:RS_FF}
For fixed prime $p$ show that if $r$ is large enough, then the Rudin-Shapiro function along polynomial values is balanced possibly under some natural restrictions on the polynomial.
\end{problem}
Analogously to the normality results of Section~\ref{sec:normality}, Makhul and Winterhof \cite{mawi} obtained results on the normality of the Thue-Morse function along polynomial values. For the sake of simplicity we state the case when the polynomial $f$ has degree $d$ smaller than the characteristic $p$, see \cite[Corollary~1]{mawi}.
\begin{theorem}
Assume $1\leq d < p$ and $s\leq d$. For any polynomial $f\in\mathbb{F}_q[x]$ of degree $d$ and any pairwise distinct $\alpha_1, \dots, \alpha_s\in\mathbb{F}_q$ and any $c_1,\dots, c_s\in \mathbb{F}_p$ we have
$$
\big|\#\{\xi\in \mathbb{F}_q: T(f(\xi+\alpha_i))=c_i, 1\leq i\leq s\}-p^{r-s}\big|\le (d-1) p^{r/2}.
$$
\end{theorem}
Note that the restriction $s\le d$ is natural and counterexamples for $s>d$ are easy to construct.
The case of the Rudin-Shapiro function is much more intricate.
\begin{problem}
Study the normality of the Rudin-Shapiro function at $f(x)$. Namely, show that
$$
\frac{\#\{\xi\in \mathbb{F}_q: R(f(\xi+\alpha_i))=c_i, 1\leq i\leq s\}}{p^{r-s}} \rightarrow 1 \quad \text{as } p\rightarrow \infty
$$
for some $s\geq 2$ and any $f\in\mathbb{F}_q[x]$ of fixed degree.
\end{problem}
Of course, this problem is also open for fixed $p$ and $r\rightarrow \infty$ even in the simplest
case~$s=1$, see Problem~\ref{prob:RS_FF}.
It is natural to define the
{\em Rudin-Shapiro function} on the polynomial ring~$\mathbb{F}_p[t]$ by assigning the coefficients of the polynomial $f(t)\in\mathbb{F}_p[t]$ to $(x_1,\dots, x_r)$, that is,
$$
R(t^r+x_{1}t^{r-1}+\dots + x_r )=\sum_{i=1}^{r-1}x_ix_{i+1},
$$
for $x_1,\dots, x_r\in \mathbb{F}_p$.
Analogously to the result of Mauduit and Rivat \cite{mari} on the Rudin-Shapiro sequence along prime numbers, it is natural to investigate the balance and the normality of the Rudin-Shapiro function along irreducible polynomials. As the number of monic irreducible polynomials of degree $r$ is $p^r/r+o(p^r)$, see for example \cite[Theorem~3.25]{lini},
we expect that the frequency of each element $c$ is $\frac{p^{r-1}}{r}+o(p^{r-1})$.
For $r=2$ and fixed $c\in \mathbb{F}_p$ we have to count the number of $x_2\in \mathbb{F}_p^*$ such that $t^2+x_2^{-1}ct+x_2$ is irreducible over $\mathbb{F}_p$ or equivalently the discriminant $x_2^{-2}c^2-4x_2$ is a quadratic non-residue modulo~$p$. This number is
$$\frac{1}{2}\sum_{x_2\in \mathbb{F}_p^*} \left(1-\left(\frac{c^2-4x_2^3}{p}\right)\right)=
\left\{\begin{array}{cc} \frac{p-1}{2}, & c=0,\\
\frac{p-1}{2}+O(p^{1/2}), & c\ne 0,\end{array}\right.$$
by the Weil bound for multiplicative character sums \cite[Theorem 5.41]{lini}, where~$\left(\frac{.}{.}\right)$ is the Legendre symbol.
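The displayed count can be confirmed numerically (our own sketch; the modulus $p=101$ is an arbitrary choice):

```python
p = 101

def legendre(a):
    """Legendre symbol (a/p) via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def count_irreducible(c):
    """# of x2 in F_p^* with t^2 + x2^{-1} c t + x2 irreducible over F_p."""
    total = 0
    for x2 in range(1, p):
        disc = (pow(x2, p - 3, p) * c * c - 4 * x2) % p   # x2^{-2} c^2 - 4 x2
        total += legendre(disc) == -1                     # non-residue <=> irreducible
    return total

assert count_irreducible(0) == (p - 1) // 2               # exact for c = 0
for c in range(1, 6):                                     # O(p^{1/2}) deviation otherwise
    assert abs(count_irreducible(c) - (p - 1) // 2) <= 3 * int(p ** 0.5)
```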
\begin{problem}
Prove that for all $c\in\mathbb{F}_p$ and $r\ge 3$ we have
$$
\lim_{p\rightarrow\infty}\frac{ \#\{f\in \mathbb{F}_p[t]: \deg f=r, f \text{ monic and irreducible over $\mathbb{F}_p$}, R(f)=c \} }{p^{r-1}}=\frac{1}{r}.$$
\end{problem}
We remark that one can define the {\em Thue-Morse function}
by
$$T(f)=T(t^r+x_1t^{r-1}+\ldots+x_r)=x_1+\ldots+x_r=f(1)-1.$$
Note that for an irreducible polynomial $f(x)$ of degree $r\ge 2$ we have $T(f)\ne -1$, since $T(f)=-1$ would mean $f(1)=0$, that is, $1$ would be a root of $f$. For $c\ne -1$ the number of monic irreducible polynomials of degree $r=2$
with $T(f)=c$ is
$$
\frac{1}{2}\sum_{u\in \mathbb{F}_p\atop u^2\ne c+1}\left(1-\left(\frac{u^2-c-1}{p}\right)\right) =\frac{p-\left(\frac{c+1}{p}\right)}{2},
$$
where we used a well-known result on sums of Legendre symbols of quadratic polynomials, see for example \cite[Theorem~5.48]{lini}.
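A brute-force check of this count (our own illustration, with $p=7$): enumerate the monic quadratics $t^2+x_1t+x_2$ with $T(f)=x_1+x_2=c$, test irreducibility via the discriminant, and compare with the displayed formula:

```python
p = 7

def legendre(a):
    """Legendre symbol (a/p) via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def irreducible(x1, x2):
    """t^2 + x1 t + x2 is irreducible over F_p iff its discriminant is a non-residue."""
    return legendre(x1 * x1 - 4 * x2) == -1

total = 0
for c in range(p):
    count = sum(irreducible(x1, (c - x1) % p) for x1 in range(p))
    if c == p - 1:                       # c = -1: T(f) = -1 forces f(1) = 0
        assert count == 0
    else:
        assert count == (p - legendre(c + 1)) // 2
    total += count
assert total == (p * p - p) // 2         # all 21 monic irreducible quadratics
```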
In general, since $f(x)$ is irreducible if and only if $f(x-1)$ is irreducible, we have to estimate the number $I_c$ of monic irreducible polynomials with fixed constant term $c\ne 0$, which satisfies
$$\frac{1}{r}\left(\frac{p^r-1}{p-1}-2p^{r/2}\right)\le I_c\le \frac{p^r-1}{r(p-1)},$$
see \cite{car} or \cite[Theorem~3.5.9]{handbookFF}, and we get the desired
$$
I_c=\frac{p^{r-1}}{r}+o(p^{r-1})
$$
for $r\ge 3$ as well.
Moreover, the corresponding normality problem is trivial since for any polynomial~$g(x)$ of degree at most $r-1$ the value $T(f+g)=f(1)+g(1)-1$ is uniquely defined by $T(f)=f(1)-1$ and $g(1)$.
For other results on 'digits' along irreducible polynomials see for example
\cite[Chapter~3]{handbookFF} and
\cite{gakuwa,gr,pol,tuwa,ha,por}.
\section*{Acknowledgment}
The authors were supported by the Austrian Science Fund FWF grants P 30405 and P~31762.
They wish to thank Jean-Paul Allouche, Harald Niederreiter, Igor Shparlinski, Cathy Swaenepoel, Thomas Stoll and Steven Wang for very useful discussions.
% https://arxiv.org/abs/0712.3618
\title{Network Tomography: Identifiability and Fourier Domain Estimation}
\begin{abstract}
The statistical problem for network tomography is to infer the distribution of $\mathbf{X}$, with mutually independent components, from a measurement model $\mathbf{Y}=A\mathbf{X}$, where $A$ is a given binary matrix representing the routing topology of a network under consideration. The challenge is that the dimension of $\mathbf{X}$ is much larger than that of $\mathbf{Y}$, so the problem is often called ill-posed. This paper studies some statistical aspects of network tomography. We first address the identifiability issue and prove that the $\mathbf{X}$ distribution is identifiable up to a shift parameter under mild conditions. We then use a mixture model of characteristic functions to derive a fast algorithm for estimating the distribution of $\mathbf{X}$ based on the General Method of Moments. Through extensive model simulation and real Internet trace driven simulation, the proposed approach is shown to compare favorably with previous methods using simple discretization for inferring link delays in a heterogeneous network.
\end{abstract}
\section*{Appendix: A counterexample}
Based on the proof of Lemma 1, we can construct a counterexample showing
that the distributions of $\{ X_1,X_2,X_3\}$ are not identifiable. Let $c(t;a,\lambda)=e^{-\lambda|t|}I(|t|\leq a)+\lambda e^{-\lambda a}(a+\frac{1}{\lambda}-|t|)I(a<|t|\leq a+\frac{1}{\lambda})$
be a continuous function defined for $t\in\mathcal{R}$. It is easy
to check using P\'olya's condition (\cite{lukacs:1970}) that for any $a\geq0,\lambda>0$, $c(t;a,\lambda)$ is a
characteristic function corresponding to a symmetric, non-vanishing,
bounded continuous density function. Let $\phi_{X_1}(t)=c(t;2,1)$,
$\phi_{X'_1}(t)=c(t;3,1)$,
$\phi_{X_2}(t)=\phi_{X'_2}(t)=\phi_{X_3}(t)=\phi_{X'_3}(t)=c(t;0,1)$.
Both groups of distributions corresponding to characteristic functions
$\{\phi_{X_k}\}$ and $\{\phi_{X'_k}\}$ for $\{ X_{k}\}$ respectively can
generate the same joint
distribution of $(Y_{1},Y_{2})$. Figure \ref{fig:counter} shows both
their characteristic
functions and probability density functions on a two-leaf tree:
$\phi_{X_1}$ and $\phi_{X_1'}$ are the two curves in the first box of
the second row, and $\phi_{X_2}$ and $\phi_{X_3}$ are in the second
and third box of the second row respectively, with corresponding
density functions plotted in the first row. Notice that
these distributions cannot be used for link delays which do
not permit negative values. It is an open question whether there
exist link delays whose distribution shapes are
not identifiable even for the simple two-leaf tree tomography.
\begin{figure}
\begin{center}
\includegraphics[width=3.6in]{counterexample1.eps}
\end{center}
\caption{A counterexample to identifiability for the two-leaf tree
model where $(X_1+X_2,X_1+X_3)$ and $(X_1'+X_2',X_1'+X_3')$ have the same
joint distribution: The bottom three figures plot the characteristic
functions: the first one for $\phi_{X_1}$ and $\phi_{X_1'}$, and
the second and third ones for
$\phi_{X_2}=\phi_{X_2'}$ and $\phi_{X_3}=\phi_{X_3'}$
respectively; The top three figures plot the
corresponding probability density functions.}
\label{fig:counter}
\end{figure}
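The construction can be confirmed numerically (a sketch of our own, reproducing the definitions above): since $c(t;0,1)=(1-|t|)_+$ vanishes for $|t|\ge 1$ while $c(t;2,1)$ and $c(t;3,1)$ agree for $|t|\le 2$, the two joint characteristic functions of $(Y_1,Y_2)$ coincide everywhere:

```python
import math

def c(t, a, lam=1.0):
    """The characteristic function c(t; a, lambda) from the appendix."""
    t = abs(t)
    if t <= a:
        return math.exp(-lam * t)
    if t <= a + 1.0 / lam:
        return lam * math.exp(-lam * a) * (a + 1.0 / lam - t)
    return 0.0

def joint_cf(t1, t2, a1):
    """cf of (X1+X2, X1+X3): phi_{X1}(t1+t2) * phi_{X2}(t1) * phi_{X3}(t2)."""
    return c(t1 + t2, a1) * c(t1, 0) * c(t2, 0)

grid = [i / 10 for i in range(-40, 41)]
max_diff = max(abs(joint_cf(t1, t2, 2) - joint_cf(t1, t2, 3))
               for t1 in grid for t2 in grid)
assert max_diff < 1e-12          # identical joint cfs for (Y1, Y2)
assert c(2.5, 2) != c(2.5, 3)    # although the X1-marginals differ
```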
\section{Fast algorithms derived from the General Method of
Moments}\label{sec:algorithm}
In this section, we discuss how to estimate the
parameters of mixture coefficients. It is worth pointing out that by
Theorem \ref{thm:delayiden} the parameter of the link delay mixture models
defined above is identifiable when link delays have positive probabilities
at zero, which is usually true.
\subsection{The General Method of Moments}\label{sec:estimation}
Following \cite{bu:2002}, it is not hard to show that the
computational complexity of MLE using an EM algorithm for the
above flexible mixture model is of order $O(\max_jn_j^J)$, which is
too expensive.
In this section, we present an estimation approach for network
tomography using Fourier transform, following the pioneering work of
\cite{feuerverger:1977}. The estimators using this approach can be
computed easily as shown below and also exhibit good statistical properties.
The motivation is that the characteristic function of $\mathbf{Y}$ is
simply the {\em product} of the characteristic function of components
of $\mathbf{X}$ as shown in Equation (\ref{eq:CFY}), though the
distribution function is a high order convolution of those of $X_j$s.
We derive the estimator from the General Method of Moments formally described
in \cite{hansen:1982} and \cite{carrasco.florens.2000}. To be
self-contained, we give a formal description of our estimator for the
tomography model below and discuss its advantages over previous
approaches.
Suppose that each $X_{j}$ is modeled by a probability density function
$f_{X_{j}}(x_j;\theta_{j})$ with an unknown parameter $\theta_j$.
Let $\theta=\{\theta_j: j=1,\ldots,J\}$.
By Equation~(\ref{eq:CFY}), the joint characteristic function of
$\mathbf{Y}$ is
\[
\phi_{\mathbf{Y}}(\mathbf{t};\theta)=\prod_{j=1}^J\phi_{X_j}(t_j;\theta_j)
\]
where $\phi_{X_j}$ is the characteristic function with respect to $f_{X_j}$.
Let $\{ \mathbf{Y}(n):1\leq n\leq N\}$ be the independent measurements of $\mathbf{Y}$. The empirical characteristic function of $\mathbf{Y}$ is
\[
\hat{\phi}_{\mathbf{Y}}(\mathbf{t})=\frac{1}{N}\sum_{n=1}^{N}\exp(i\mathbf{t}^{T}\mathbf{Y}(n)).
\]
Similar to the maximum likelihood estimate which is derived by
minimizing the Kullback-Leibler divergence between the empirical
distribution and the model distribution of $\mathbf{Y}$, an estimate
of $\theta$ can
be obtained
by minimizing an $L_2$ distance between the empirical
characteristic function and the model characteristic function of
$\mathbf{Y}$, i.e.,
\begin{eqnarray} \hat{\theta} & = &
\arg\min\int\left|\epsilon_{N}(\mathbf{t};\theta)\right|^{2}d\mu(\mathbf{t}),\label{eq:estimator}\end{eqnarray}
where
\[
\epsilon_{N}(\mathbf{t};\theta)=\sqrt{N}(\hat{\phi}_{\mathbf{Y}}(\mathbf{t})-\phi_{\mathbf{Y}}(\mathbf{t};\theta)),
\]
and $\mu(\mathbf{t})$ is a specified probability distribution
function on $\mathcal{R}^{I}$ (we use the subscript $N$ to show the
dependence on the sample size $N$).
For a continuous measure $\mu$, the right hand side of
(\ref{eq:estimator}) does not have a closed form in general. To
evaluate the integral, a Monte Carlo approximation can be used:
first randomly draw $K$ samples from $\mu(\mathbf{t})$, say
$\{\mathbf{t}_{k}:k=1,2,\cdots, K\}$, and then replace $\mu(\mathbf{t})$ by
its empirical distribution based on these samples.
Let $\epsilon_{N}(\theta)\equiv(\epsilon_{N}(\mathbf{t}_{k};\theta), k=1,\cdots,K)^T$
be a column vector. We can rewrite (\ref{eq:estimator}) as
\begin{eqnarray} \label{eq:CF}
\hat{\theta} =
\arg\min_{\theta}\epsilon_{N}^{T}(\theta)\epsilon_{N}^{*}(\theta),
\end{eqnarray}
where $\epsilon_{N}^*(\theta)$ is the conjugate of
$\epsilon_{N}(\theta)$. We call it the {\em CF-estimator}, since it
is based on the characteristic function.
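As a minimal simulation sketch (the two-leaf tree and the exponential delay model are our own choices for illustration, not the paper's experiment): the empirical characteristic function converges to the model characteristic function at rate $O(N^{-1/2})$, which is the basic fact the CF-estimator exploits:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0, 3.0])          # rates of X1, X2, X3 ~ Exp(lam_j)
N = 100_000
X = rng.exponential(1.0 / lam, size=(N, 3))
Y = np.stack([X[:, 0] + X[:, 1], X[:, 0] + X[:, 2]], axis=1)

def model_cf(t1, t2):
    # phi_Y(t) = phi_{X1}(t1+t2) phi_{X2}(t1) phi_{X3}(t2), phi_Exp(lam)(t) = lam/(lam - it)
    def phi(lmb, t):
        return lmb / (lmb - 1j * t)
    return phi(lam[0], t1 + t2) * phi(lam[1], t1) * phi(lam[2], t2)

def empirical_cf(t1, t2):
    return np.mean(np.exp(1j * (t1 * Y[:, 0] + t2 * Y[:, 1])))

ts = rng.normal(size=(20, 2))            # sample t from a Gaussian measure mu
resid = [abs(empirical_cf(a, b) - model_cf(a, b)) for a, b in ts]
assert max(resid) < 0.02                 # O(N^{-1/2}) fluctuations
```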
The CF-estimator can be considered as a least square estimator based
on the residuals evaluated at $\mathbf{t}_1,\ldots,\mathbf{t}_K$,
which are obviously correlated. Let $W$ be the covariance matrix of $\epsilon_N(\theta)$; it is easy to show that
\[
W_{jk}
=\phi_{\mathbf{Y}}(\mathbf{t}_{j}-\mathbf{t}_{k};\theta)-\phi_{\mathbf{Y}}(\mathbf{t}_{j};\theta)\phi_{\mathbf{Y}}^{*}(\mathbf{t}_{k};\theta).
\]
This motivates a weighted version of the CF-estimator, called {\em WCF},
\begin{eqnarray}
\hat{\theta}^{(W)} & = &
\arg\min_{\theta}\epsilon_{N}^{T}(\theta)(W+\delta_N I_{K})^{-1}\epsilon_{N}^{*}(\theta),\label{eq:WCF}
\end{eqnarray}
where $I_{K}$ is the $K\times K$ identity matrix and $\delta_N$, a
tuning parameter, is used to make sure the inversion is well defined.
$\delta$ should be small and we typically choose $\delta_N$ of order
$N^{-1/2}$. In practice $W$ cannot be calculated precisely since
$\theta$ is unknown. We can either use a $W$ estimated from an
initial estimate of $\theta$ such as the CF-estimator, or iterate this
process using iteratively reweighted least squares, which is a
common technique used in generalized linear models.
{\em 1) Statistical properties}.
The characteristic-function based estimators presented in this section
fall into the class of Generalized Method of Moments (GMM)
estimators. There is a considerable body of work on their statistical
properties, see \cite{feuerverger:1977} and
\cite{carrasco.florens.2000}, from which the consistency and
asymptotic normality of
both CF-estimator \eqref{eq:CF} and WCF-estimator \eqref{eq:WCF}, can
be established. In addition, it has been
proved by \cite{carrasco.florens.2000} that when the probability
measure $\mu$ in
Equation~(\ref{eq:estimator}) has a density all over $\mathcal{R}^{I}$, the
WCF estimator is asymptotically as efficient as MLE with $K=\infty$
and an appropriate choice of $\delta_N$.
{\em 2) Sampling of $\mathbf{t}$}. For both the CF and WCF estimators, the
points $\mathbf{t}_k,k=1,\ldots,K$ are sampled based on a probability
measure $\mu$. In general, how to choose $\mu$ or sample $\mathbf{t}$
efficiently is a hard problem (\cite{feuerverger:1977}).
In the following, we suggest the choice of $\mu$ based
on our simulation experiences.
Since the scales of components of $\mathbf{Y}$ may be different,
we normalize $\mathbf{Y}$ by its empirical covariance
matrix and use an elliptic distribution for $\mu$, such as Gaussian.
From simulations we notice that sampling $\mathbf{t}$
directly from a probability measure in $\mathcal{R}^I$ does not easily
yield good results. This is due to the sparsity of $\mathbf{t}$ in the
high dimensional space so that the characteristic functions
$\phi_{\mathbf{Y}}(\mathbf{t})$ evaluated at most of the points are
close to zero. Since the variance of the residual
$\epsilon_N(\mathbf{t};\theta)$ is equal to
$1-|\phi_{\mathbf{Y}}(\mathbf{t})|^2$, the closer the
characteristic function $|\phi_{\mathbf{Y}}(\mathbf{t})|^2$ is to zero, the
larger the variance and the less the information.
Although it may lose some efficiency, simulations suggest that better
performance can be achieved by sampling
$\mathbf{t}$ from lower-dimensional subspaces,
for example 2-dimensional
subspaces. When we draw $\mathbf{t}$ from a lower-dimensional
subspace, we minimize the residuals (the differences
between the model and empirical characteristic functions) only over these
subspaces. This may be viewed as a counterpart, in the Fourier domain, of
the pseudo-likelihood approach for network tomography proposed
by \cite{liang:2003}.
\subsection{A Fast Algorithm by Quadratic Programming} \label{sec:fastalg}
With the mixture model described in Section \ref{sec:mixture}, the
unknown parameter for the model is
$\theta=\{\theta_{j}:j=1,\cdots,J\}$, the mixture coefficients. Now we
describe how to estimate $\theta$
iteratively using the approach developed in Section
\ref{sec:estimation}.
By Equation (\ref{eq:CFY}) and (\ref{eq:cf-mixture}), the model
characteristic function of $\mathbf{Y}$ is
\[
\phi_{\mathbf{Y}}(\mathbf{t};\theta) =
\prod_{j=1}^J\theta_{j}^{T}\Phi_{j}(\mathbf{t}^TA^j),
\]
where
$\Phi_j(t)=(\phi_{j1}(t),\cdots,\phi_{jn_j}(t))^T$.
The objective function in obtaining the CF estimate defined in
Equation (\ref{eq:CF}) can then be written as
\[
\sum_{k=1}^K|\epsilon_N(\mathbf{t}_k;\theta)|^2 =
\sum_{k=1}^K\left|\hat{\phi}_{\mathbf{Y}}(\mathbf{t}_k)-\prod_{j=1}^{J}\left(\theta_j^{T}\Phi_{j}(\mathbf{t}_k^TA^j)\right)\right|^{2}.
\]
It is easy to see that for each $\theta_j$, if the rest of the
parameters are known, the optimization function is a quadratic
function of $\theta_j$. Specifically, given all other parameters
$\{\theta_l:l=1,\cdots,J,l\neq j\}$, the optimal $\theta_j$
can be obtained by minimizing
\begin{eqnarray} \label{eq:quadprog}
C(\theta_j)
& = &
\theta_j^{T}D_j\theta_j-2\theta_j^{T}\mathbf{d}_j,
\end{eqnarray}
where \begin{eqnarray*}
D_j & = & \sum_{k=1}^K \left|\prod_{l\neq
j}\phi_{X_{l}}(\mathbf{t}_k^{T}A^{l})\right|^{2}Re\{\Phi_{j}^{*}(\mathbf{t}_k^TA^j)\Phi_{j}^T(\mathbf{t}_k^TA^j)\}
\end{eqnarray*} is an $n_{j}\times n_{j}$ matrix, and
\begin{eqnarray*} \mathbf{d}_j & = & \sum_{k=1}^K
Re\{\hat{\phi}_{\mathbf{Y}}^{*}(\mathbf{t}_k)\prod_{l\neq
j}\phi_{X_{l}}(\mathbf{t}_k^{T}A^{l})\Phi_{j}(\mathbf{t}_k^TA^j)\}
\end{eqnarray*}
is an $n_{j}$-dim column vector.
This is a standard quadratic programming problem and can be solved
quickly.
Therefore, estimation of $\theta$ can be obtained by an iterative algorithm as follows.
\begin{alg} Iterative quadratic programming\label{alg:quadprog}
\begin{itemize}
\item[(1)] Choose an initial value for $\theta_j$, $j=1,\ldots, J$.
\item[(2)] For each $j=1,\ldots, J$,
estimate $\theta_j$ by minimizing (\ref{eq:quadprog})
using quadratic programming.
\item[(3)] Repeat Step 2 until convergence.
\end{itemize}
\end{alg}
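A toy instance of Algorithm~\ref{alg:quadprog} (a sketch under our own simplifying assumptions: two fixed exponential atoms per link on a two-leaf tree, with the per-coordinate quadratic minimization replaced by a grid search so that no QP solver is needed). The monotone decrease of the objective is visible directly:

```python
import numpy as np

rng = np.random.default_rng(1)
rates = (1.0, 5.0)                         # two fixed mixture atoms Exp(rate)
true_w = np.array([0.3, 0.7, 0.5])         # true first-atom weight per link
N, K = 50_000, 25

pick = rng.random((N, 3)) < true_w         # which atom each delay comes from
X = np.where(pick, rng.exponential(1 / rates[0], (N, 3)),
                   rng.exponential(1 / rates[1], (N, 3)))
Y = np.stack([X[:, 0] + X[:, 1], X[:, 0] + X[:, 2]], axis=1)

ts = rng.normal(size=(K, 2))               # sample points t_k for the residuals
emp = np.array([np.mean(np.exp(1j * (a * Y[:, 0] + b * Y[:, 1]))) for a, b in ts])

def phi_mix(w, t):                         # cf of a 2-atom exponential mixture
    return w * rates[0] / (rates[0] - 1j * t) + (1 - w) * rates[1] / (rates[1] - 1j * t)

def objective(w):
    model = (phi_mix(w[0], ts[:, 0] + ts[:, 1])
             * phi_mix(w[1], ts[:, 0]) * phi_mix(w[2], ts[:, 1]))
    return float(np.sum(np.abs(emp - model) ** 2))

grid = np.linspace(0.0, 1.0, 51)
w = np.array([0.5, 0.5, 0.5])              # start on the grid
history = [objective(w)]
for _ in range(5):                         # outer iterations
    for j in range(3):                     # coordinate-wise minimization
        vals = [objective(np.where(np.arange(3) == j, g, w)) for g in grid]
        w[j] = grid[int(np.argmin(vals))]
    history.append(objective(w))
assert all(b <= a + 1e-12 for a, b in zip(history, history[1:]))
print(np.round(w, 2), history[-1])
```

With an actual QP solver over the simplex of mixture coefficients this becomes the CF-estimator of the paper; the grid search here only illustrates the descent property.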
A nice property of Algorithm \ref{alg:quadprog} is that it
always converges to a local solution
because the objective function never increases after each iteration
and is bounded below by 0. This is similar to EM algorithms, but care
is needed in order to obtain the global minimal. Simulations show that
$\{ p_{jk}=1/n_{j}\}$ can serve as a good starting value.
The computational complexity of each iteration in Algorithm
\ref{alg:quadprog} is
$O(KIJ\max_j(n_{j})^{3})$ for the {\em CF-estimator}. For the {\em
WCF-estimator}, a similar iterative algorithm by quadratic
programming can be obtained. Due to the weight matrix, the complexity
of each iteration becomes $O(K^3IJ\max_j(n_j)^3)$.
\section{Identifiability}
In this section, we study the identifiability issue for model
\eqref{eq:tomo1} and prove that the distribution of $\mathbf{X}$ is
identifiable up to a shift parameter under mild conditions. The main
tool we use is characteristic function whose basic properties are
reviewed below.
\subsection{Characteristic Function}
A characteristic function of a univariate random variable $Z$ is defined by
\[
\phi_{Z}(t)=E[e^{itZ}]=\int_{-\infty}^{\infty}e^{itz}f_{Z}(z)dz,\,\,\,\,
t\in\mathcal{R},\] where $E[\cdot]$ denotes the expectation with
respect to $Z$ and
$f_{Z}(\cdot)$ is the probability density function of $Z$. By
convention, $\phi_Z$ and $f_Z$ denote the characteristic function and
probability density function of $Z$, respectively.
The characteristic function for a random vector
$\mathbf{Z}\in\mathcal{R}^D$ can be defined in a similar
manner by considering $\mathbf{t}\in \mathcal{R}^D$ instead of
$\mathcal{R}$. It is well known that a probability distribution can be
uniquely specified by its characteristic function and vice versa.
Suppose $Z_{1}$ and
$Z_{2}$ are two independent random variables. Then the joint
characteristic function of $\mathbf{Z}=(Z_1,Z_2)$ is
\[
\phi_\mathbf{Z}(\mathbf{t}) = \phi_{Z_1}(t_1) \phi_{Z_2}(t_2), \ \
\mathbf{t}=(t_1,t_2),
\]
which is the product of the marginal characteristic functions. Let
$V=Z_{1}+Z_{2}$; then the characteristic function of $V$ is simply the
product of the characteristic functions of $Z_{1}$ and $Z_{2}$, i.e.,
\[
\phi_{V}(t)=\phi_{Z_{1}}(t)\phi_{Z_{2}}(t).
\]
This is much easier to compute than the density function
of $V$, say $f_{V}(\cdot)$, which is
a convolution of densities of $Z_{1}$ and $Z_{2}$, i.e.,
\[
f_{V}(v)=\int_{z_{1}\in\mathcal{R}}f_{Z_{1}}(z_{1})f_{Z_{2}}(v-z_{1})dz_{1}.
\]
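As a quick numerical sanity check of this product rule (an illustrative sketch, not part of the paper's analysis), one can compare the empirical characteristic function of $V=Z_1+Z_2$ with the product of the marginal ones, here assuming exponential and Gamma summands purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z1 = rng.exponential(scale=1.0, size=n)       # Z1 ~ Exp(1)
z2 = rng.gamma(shape=2.0, scale=1.0, size=n)  # Z2 ~ Gamma(2, 1), independent

def ecf(samples, t):
    """Empirical characteristic function E[exp(i t Z)] at a scalar t."""
    return np.mean(np.exp(1j * t * samples))

t = 0.7
phi_v = ecf(z1 + z2, t)             # cf of the sum V = Z1 + Z2
phi_prod = ecf(z1, t) * ecf(z2, t)  # product of the marginal cfs
assert abs(phi_v - phi_prod) < 0.01
# Exact check: Exp(1) has cf 1/(1-it) and Gamma(2,1) has cf 1/(1-it)^2,
# so the sum has cf 1/(1-it)^3.
assert abs(phi_v - 1 / (1 - 1j * t) ** 3) < 0.01
```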
For the tomography model \eqref{eq:tomo1}, since the components of
$\mathbf{X}$ are mutually independent,
it is easy to evaluate the characteristic function of $\mathbf{Y}$ by
\begin{eqnarray}\label{eq:CFY}
\phi_{\mathbf{Y}}(\mathbf{t}) = E[e^{i\mathbf{t}^{T}\mathbf{Y}}] =
E[e^{i(\mathbf{t}^{T}A)\mathbf{X}}] =
\prod_{j=1}^{J}\phi_{X_{j}}(\mathbf{t}^{T}A^{j}),
\end{eqnarray}
where $A^j$ is the $j$th column of $A$. However, it is in general
difficult to evaluate the distribution of $\mathbf{Y}$ directly because it is a
high-order convolution of the distributions of the components of $\mathbf{X}$.
Below we will use the formula \eqref{eq:CFY} for both the identifiability
proof and estimation in network tomography.
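A minimal sketch of evaluating \eqref{eq:CFY} follows, assuming, purely for illustration, a two-leaf routing matrix and exponential link-delay laws (the rates below are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical routing matrix for the two-leaf tree: Y1 = X1+X2, Y2 = X1+X3.
A = np.array([[1, 1, 0],
              [1, 0, 1]])

def phi_exp(u, rate):
    """cf of an Exp(rate) random variable at scalar u (assumed delay model)."""
    return rate / (rate - 1j * u)

def phi_Y(t, A, rates):
    """Evaluate phi_Y(t) = prod_j phi_{X_j}(t^T A^j), A^j the j-th column of A."""
    u = t @ A  # vector with entries t^T A^j, j = 1..J
    return np.prod([phi_exp(u[j], rates[j]) for j in range(A.shape[1])])

t = np.array([0.3, -0.5])
rates = [1.0, 2.0, 0.5]
val = phi_Y(t, A, rates)  # closed form, no convolution needed
```

The same quantity estimated by Monte Carlo from samples of $\mathbf{Y}$ agrees with `val` up to sampling error, which is what makes \eqref{eq:CFY} usable for estimation.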
\subsection{Identifiability}
By identifiability, we mean that the distribution of $\mathbf{X}$ can be
uniquely determined by the distribution of $\mathbf{Y}$. It is
important to establish the identifiability. Otherwise, the distribution
of $\mathbf{X}$ may not be estimable from the distribution of
$\mathbf{Y}$. In the following, we present our general theorems for
identifiability and discuss related issues.
We assume that $E|X_j|<\infty$, $j=1,\cdots,J$, and that the distribution of $\mathbf{X}$ satisfies one of the
two conditions, namely C1 and C2, defined below.
\begin{itemize}
\item[(C1)] the characteristic function of each $X_j$ is analytic\footnote{An analytic characteristic function corresponds to a distribution function which has moments $m_k$ of all orders $k$ such that
$\limsup_{k\rightarrow\infty}[|m_{k}|/k!]^{1/k}$ is finite};
\item[(C2)] the characteristic function of each $X_j$ has no zeros in $\mathcal{R}$.
\end{itemize}
We first address the identifiability issue in Lemma 1 for the
simple two-leaf
tree tomography model described earlier with Figure
\ref{fig:twoleaf}. The result will serve as the basis for Theorems 1
and 2 below, where the routing topology is not a simple two-leaf tree.
\begin{lemma}\label{lemma:lemma}
If \(Y_{1}=X_{1}+X_{2}\) and \(Y_{2}=X_{1}+X_{3}\), then the distributions of $X_1,X_2,X_3$ can be identified up to a shift parameter.
\end{lemma}
\begin{proof}
Suppose there exist both $\mathbf{X}=(X_1,X_2,X_3)^T$ and $\mathbf{X}'=(X_1',X_2',X_3')^T$ with mutually independent
components that give rise to the same distribution of $\mathbf{Y}=(Y_1,Y_2)$. We show
that the distributions of $X_j$ and $X_j'$, $j=1,2,3$, are the same up to a shift parameter.
By (\ref{eq:CFY}), we have for $t,s\in\mathcal{R}$,
\begin{equation}
\label{eq:pf}
\phi_{X_{1}}(t+s)\phi_{X_{2}}(t)\phi_{X_{3}}(s)=\phi_{X_{1}'}(t+s)\phi_{X_{2}'}(t)\phi_{X_{3}'}(s).
\end{equation}
Notice that $\varphi_{j}(t)\equiv\log[\phi_{X_{j}}(t)/\phi_{X_{j}'}(t)]$
is well defined in a neighborhood of the origin with $\varphi_{j}(0)=0$,
$j=1,2,3$. Thus for $t$ and $s$ in a neighborhood of zero, \begin{eqnarray*}
\varphi_{1}(t+s)+\varphi_{2}(t)+\varphi_{3}(s) & \equiv & 0.\end{eqnarray*}
By an argument using finite differences (cf.\ Lemma 1.5.1 of
\cite{kagan:1973}), each $\varphi_{j}$ is a linear complex function
in a neighborhood of zero, and thus in all of $\mathcal{R}$ under the given conditions.
That is, there exist complex
numbers $a_{j},b_{j}$ such that
$\phi_{X_{j}}(t)=\phi_{X_{j}'}(t)e^{a_{j}+ib_{j}t}$
for any $t\in\mathcal{R}$. By evaluating both sides at $t=0$, we get
$a_{j}=0$.
By taking the first order derivative on both sides
at zero, $iE[X_{j}]=iE[X_{j}']+ib_{j}$ and thus $b_{j}\in\mathcal{R}$,
due to $X_{j},X_{j}'\in\mathcal{R}$. Hence $X_{j}$ and $X_{j}'+b_{j}$
have the same distribution. Further, $AE[\mathbf{X}]=AE[\mathbf{X}']$ implies $b_{2}=b_{3}=-b_{1}$.
\end{proof}
For network delay tomography, as a generalization of the simple
two-leaf tree model, let $A$ correspond to a routing matrix
derived from a multicast tree (\cite{presti:2002}), where each node,
except for the root and leaves, must have at least two
children. Let us take the four-leaf tree in Figure \ref{fig:4leaf} as
an example of a
multicast tree, which will be used for simulation purposes later. Let
$X_1,\cdots, X_7$ denote the link delays on the edges from top to
bottom and from left to right in the tree, i.e., the link delay on the
edge with end node $j$ is denoted by $X_j$. Let $Y_1,\cdots, Y_4$
denote the end-to-end delays from the root node 0 to end node 4, 5, 6 and 7,
respectively. Then each element of $\mathbf{Y}=(Y_1,\cdots,Y_4)^T$ is
a partial sum of $\mathbf{X}=(X_1,\cdots,X_7)^T$, for example,
$Y_1=X_1+X_2+X_4$. This can be written in the form of
\eqref{eq:tomo1}, where $A$ is a $4\times 7$ binary matrix and can be derived
from the linear equations. From Lemma \ref{lemma:lemma}, the
distributions of $X_4,X_5$ are determined up to a
shift parameter by the joint distribution of
$(Y_1,Y_2)$, so are the distributions of $X_6,X_7$. Using a bottom-up
induction on the tree, it follows that the distributions of all
components of $\mathbf{X}$
are determined by that of $\mathbf{Y}$ up to shift ambiguity. The same
argument leads to the following theorem.
\begin{thm}\label{thm:delayiden}
Let \(A\) be the routing matrix derived from a multicast tree. Then
the distribution of $\mathbf{X}$ is identifiable up to shift
ambiguity.
\end{thm}
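As a concrete check of the four-leaf example above (an illustrative sketch, not part of the proof), the routing matrix and the Lemma 1 structure of a sibling pair can be written out as:

```python
import numpy as np

# 4 x 7 routing matrix of the four-leaf tree; columns correspond to X1..X7.
A = np.array([[1, 1, 0, 1, 0, 0, 0],   # Y1 = X1 + X2 + X4
              [1, 1, 0, 0, 1, 0, 0],   # Y2 = X1 + X2 + X5
              [1, 0, 1, 0, 0, 1, 0],   # Y3 = X1 + X3 + X6
              [1, 0, 1, 0, 0, 0, 1]])  # Y4 = X1 + X3 + X7

# The pair (Y1, Y2) has the two-leaf structure of Lemma 1: the common part
# X1 + X2 plays the role of the shared link, with X4 and X5 on the branches.
common = A[0] * A[1]  # component-wise product selects the shared links
assert list(common) == [1, 1, 0, 0, 0, 0, 0]
```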
Theorem \ref{thm:iden} below provides a general identifiability result for the
traffic demand tomography model, where the routing topology is more
general than a multicast tree, as studied in \cite{cao:2000a}.
\begin{thm}\label{thm:iden}
Let \(B\) be the \([I(I+1)/2]\times J\) matrix whose rows consist of
the rows of \(A\) and the component-wise products of each different
pair of rows from \(A\). If \(B\) has full column rank, then the
distribution of \(\mathbf{X}\) is identifiable up to shift ambiguity. The
shift ambiguity satisfies the constraint \(E[\mathbf{Y}]=AE[\mathbf{X}]\).
\end{thm}
\begin{proof}
For ease of exposition, we ignore the shift ambiguity. Let
$A_{i}A_{k}$ denote the element-wise product of the rows $A_{i}$ and $A_{k}$ of $A$. Notice
that $(A_{i}A_{k})\mathbf{X}$ is the common part of
$(A_{i}\mathbf{X},A_{k}\mathbf{X})$,
i.e., of $(Y_{i},Y_{k})$. Since the $\{ X_{j}\}$ are mutually independent,
by Lemma 1, the distribution of $(A_{i}A_{k})\mathbf{X}$ is identifiable.
Thus the distribution of each component of $B\mathbf{X}$ is identifiable.
Let $\psi_{k}$ denote the characteristic function of $B_{k}\mathbf{X}$,
where $B_{k}$ is the $k$th row of $B$. Then for $k=1,\cdots,I(I+1)/2$
and for $t$ in a neighborhood of zero,
\begin{eqnarray*}
\log\psi_{k}(t) & = & \sum_{j}B_{kj}\log\phi_{X_{j}}(t),\end{eqnarray*}
where $B_{kj}\in\{0,1\}$ is the $(k,j)$th element of $B$. Since
$B$ has full column rank, $\{\log\phi_{X_{j}}(t):j=1,\cdots,J\}$
can be uniquely solved from the above linear equations.
Then under either (C1) or (C2), $\phi_{X_j}$ is uniquely determined.
That is, the
distribution of each $X_{j}$ can be uniquely identified.
\end{proof}
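The rank condition of Theorem \ref{thm:iden} is mechanical to check. A sketch for the four-leaf tree of the delay-tomography example (restated here so the check is self-contained):

```python
import numpy as np
from itertools import combinations

# Four-leaf-tree routing matrix from the delay-tomography example.
A = np.array([[1, 1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 1, 0, 0],
              [1, 0, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])

# B stacks the I rows of A with the component-wise products of each
# distinct pair of rows, giving I(I+1)/2 rows in total.
products = [A[i] * A[k] for i, k in combinations(range(A.shape[0]), 2)]
B = np.vstack([A] + products)

assert B.shape == (10, 7)             # I(I+1)/2 = 10 rows, J = 7 columns
assert np.linalg.matrix_rank(B) == 7  # full column rank: Theorem 2 applies
```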
We now discuss three issues related to the above identifiability results.
{\em 1) Location ambiguity}.
The location ambiguity of the tomography problem has been recognized
in previous works. To avoid such ambiguity, \cite{vardi:1996}
assumed a Poisson distribution whose mean is the same as its variance,
\cite{cao:2000a} used a power relation between mean and
variance, and all previous discrete link delay models assume
probabilities starting from zero delay. The important message here is
that, despite the location ambiguity, Theorems 1 and 2 state that the
distributional shape of each $X_j$ can be determined, for example, all
orders of central moments that exist are uniquely identified. In
practice, to completely identify the distribution including the
location, one can bring in additional information, such as
achievable lower bounds on $\mathbf{X}$ (e.g., in delay
tomography) or a relationship between mean and variance (e.g., in
traffic demand estimation).
{\em 2) Conditions on the $\mathbf{X}$ distribution}.
The distributional assumption on $\mathbf{X}$ is very
weak. Many well-known distributions have analytic characteristic
functions, such as the Poisson, Gaussian and discrete distributions
used in the literature. The
mixture distribution that we later use to model the link delays in
Section 4 also has an analytic characteristic function. Although
heavy-tailed distributions do not
satisfy (C1), some of them, such as $\alpha$-stable
distributions, satisfy (C2). Despite the generality of our conditions,
we note that they are not necessary. For theoretical
interest, we have constructed a counterexample of a distribution of
$\mathbf{X}=(X_1,X_2,X_3)$ that cannot be identified from
$\mathbf{Y}=(Y_1,Y_2)$ for the simple two-leaf tree model $Y_1=X_1+X_2$
and $Y_2=X_1+X_3$; see the Appendix.
{\em 3) Condition on the routing matrix $A$}. \cite{cao:2000a} has
shown that the full rank condition
in Theorem \ref{thm:iden} is necessary in the context of traffic
demand tomography when $\mathbf{X}$ is Gaussian.
In practice, such a condition is easily
satisfied for routing matrices derived from realistic network
topologies. A more general condition on $A$ has been developed to prove
the identifiability for Poisson distributions in the context of
traffic demand estimation \cite{vardi:1996}. We conjecture that under
Vardi's more general condition on $A$, the distribution of
$\mathbf{X}$ is identifiable up to mean and variance ambiguity under
condition (C1), but we leave the investigation for future work.
\section{Introduction}
Network performance monitoring and diagnosis is challenging due to the
size and decentralized nature of the Internet.
The service providers may collect their link level
statistics using tools such as Cisco Netflow, whereas the end users
can obtain the end-to-end performance by probing the
network. Unfortunately, none of them has a global view of the
Internet. For instance, when an end-to-end measurement indicates
performance degradation of an Internet path, the exact cause is hard
to uncover because the path may traverse several autonomous
systems (AS) that are often owned by different entities and the
service providers generally do not share
their internal performance. Even if they do, there is no scalable way to
correlate the link level measurements to end-to-end performance in a
large network like the Internet. Similarly, the service providers may
be interested in the end-to-end path characteristics that they cannot
observe directly.
Network tomography addresses these issues by inferring
unobservable characteristics from easily available measurements.
Two forms of network tomography have been studied in the
literature. One, called {\it network delay tomography},
estimates the link-level characteristics based on
end-to-end measurements,
and the other, called {\it traffic demand tomography}, predicts
end-to-end path-level traffic intensities based on link-level traffic
measurements.
The key advantage of network tomography is that it does not require
the collaboration between network internal elements and end users.
See \cite{castro:2004}, \cite{denby:2007} and references
therein for an excellent review.
We focus on network delay tomography in this paper, while the proposed
approach may also be applied to traffic demand tomography.
Network delay tomography aims to estimate network internal
characteristics such as loss and delay\footnote{To be precise, the
delay here is the queuing delay, which excludes the constant link
propagation delay; we omit ``queuing'' when the context is clear.} from
end-to-end measurements by exploiting the inherent correlation in
performance. Consider a tree spanning a source of probes (the root)
and a set of receivers (the leaves); probe packets are potentially subject
to queuing delay and loss at each link.
The end-to-end (source-to-receiver) measurements may be made passively
or actively. The
probes for the active measurements can be sent using either multicast
or unicast routing\footnote{With multicast, a packet is sent from a
source to multiple destinations simultaneously; with unicast, a
packet is sent to different destinations separately}. See
\cite{lawrence.et.al.2006} and \cite{denby:2007} for examples of how
unicast and multicast
probes can be designed and sent. Because only
one copy of a probe is transmitted
on the common links, multicast probing based tomography has the
advantage of perfect correlation on the common links, less
overhead, and better scalability. Following \cite{presti:2002} and
\cite{liang:2003}, we assume that
measurements are collected from multicast probes, although the multicast
routing is not widely enabled in today's Internet. It has been shown
in \cite{bu:2002} how to apply the tomography algorithms developed for
multicast measurements when only unicast measurements are available.
The statistical models for both types of network tomography can be
unified as follows:
\begin{equation}\label{eq:tomo1}
\mathbf{Y}=A\mathbf{X},
\end{equation}
where
$\mathbf{X}=(X_{1},\ldots,X_{J})^T$ is a $J$-dimensional vector of
network dynamic parameters, and $\mathbf{Y}=(Y_{1},\ldots,Y_{I})^T$ is
an $I$-dimensional vector of measurements and $A$ is an $I\times J$
matrix with elements 0 or 1 which represents the routing topology of
the network under
consideration. Here we use the superscript $T$
to denote the transpose. In most network tomography scenarios, the
components of $\mathbf{X}$ are assumed independent but
unobservable. Usually $J$ can be as large as $I^{2}$ for traffic demand
tomography and as large as $2I-1$ for network delay tomography. In
network delay tomography, each component of $\mathbf{X}$ represents an
internal link delay and each component of $\mathbf{Y}$ represents a
delay measurement from a source to a destination.
The objective of network tomography is to estimate the distribution of
$\mathbf{X}$ given independent observations from the distribution of
$\mathbf{Y}$.
As a simple example,
Figure \ref{fig:twoleaf} shows a two-leaf tree topology, on which a
probing packet is sent from the root node 0 (source) to leaf nodes 2
and 3 (receivers). When the packet arrives at node 1 from the source, it is
replicated and transmitted to nodes 2 and 3
simultaneously, as the red arrows show. Let $X_1$ denote the link delay
from node 0 to node 1 and
let $X_i$ denote the link delay from node 1 to node $i$, $i=2,3$.
Let $Y_1, Y_2$ be the end-to-end delays from node 0 to
node 2 and 3 respectively. Then $Y_1=X_1+X_2$ and $Y_2=X_1+X_3$, which
can be written in the form of \eqref{eq:tomo1} with $A$ a $2\times 3$
binary matrix, i.e. $A=[1,1,0;1,0,1]$.
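This tiny system can be written out directly (an illustrative sketch with made-up delay values):

```python
import numpy as np

# Routing matrix of the two-leaf tree: rows are paths, columns are links.
A = np.array([[1, 1, 0],   # Y1 = X1 + X2
              [1, 0, 1]])  # Y2 = X1 + X3

X = np.array([2.0, 0.5, 1.5])  # illustrative link delays (X1, X2, X3)
Y = A @ X
assert np.allclose(Y, [2.5, 3.5])  # Y1 = 2 + 0.5, Y2 = 2 + 1.5
```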
\begin{figure}[t]
\begin{minipage}[t]{.42\textwidth}
\begin{center}
\epsfig{file=twoleaf1.eps, scale=0.35}
\caption{Two-leaf tree}
\label{fig:twoleaf}
\end{center}
\end{minipage}
\begin{minipage}[t]{.42\textwidth}
\begin{center}
\epsfig{file=tree41.eps, scale=0.3}
\caption{Four-leaf tree}
\label{fig:4leaf}
\end{center}
\end{minipage}
\end{figure}
There has been a significant amount of work on network tomography in
recent years. Network tomography was first proposed by
\cite{vardi:1996}, followed notably by \cite{tebaldi:1998},
\cite{cao:2000a} and \cite{liang:2003}
for traffic matrix estimation, i.e., traffic demand tomography.
\cite{caceresIT:1999}, \cite{zhu.geng.2005} and \cite{Xi.et.al.2006} among others
studied it for inferring network internal loss.
Network delay tomography has also been studied
extensively. \cite{presti:2002} developed a fast algebraic
algorithm, but it is statistically inefficient. \cite{bu:2002} showed that
the maximum likelihood estimate (MLE) requires exponential computational
complexity. \cite{tsang:2003} proposed a penalized maximum
likelihood method. \cite{liang:2003} proposed a pseudo-likelihood method with
multicast measurements, and recently \cite{lawrence.et.al.2006} proposed a
local likelihood method with both unicast and multicast measurements,
both of which were shown to be fast and quite efficient compared with the MLE.
These studies are based on a discrete distribution with equally
spaced bins for modeling link delays, where the {\it same}
bin width is used for all the links for the ease of
computation.
\cite{duffield:2001b} pointed out, however, that a single fixed bin
width is not appropriate for heterogeneous networks such as the
Internet because it does not scale well across both fast and
slow links.
They proposed a varying-bin discrete model for estimating link delay
distributions based on unicast measurements. Their estimation idea is to use
structured bins such that they can iteratively estimate a segment
of delay distributions by truncating the delays from both sides,
i.e. rounding the left of the segment to zero and the right to
infinity. However,
the performance of their estimation approach is not better than that
using an equal-bin discrete model with an appropriate bin width
as they reported, probably due to the
bias introduced by their brute-force truncation. Our approach is
also based on varying-bin type models but does not suffer from such bias.
\cite{shih.hero.2001} proposed to estimate cumulant
generating functions (similar to the characteristic functions used in this
paper) of link delays, but
they did not estimate link delay distributions. \cite{shih.hero:2003}
also proposed finite mixture models with Gaussian components for link delay
distributions based on unicast measurements.
There are several previous works that have considered the identifiability issue
for the network tomography problem, for example \cite{vardi:1996},
\cite{cao:2000a} and \cite{presti:2002}. These authors considered
instances of the tomography problem by assuming specific parametric
(such as Poisson and Gaussian) or discrete distributions.
We will unify these results and extend the identifiability
condition to general distributions under mild assumptions.
The contributions of this paper are as follows.
First, we prove that the
distribution of $\mathbf{X}$ is identifiable up to a shift parameter
under general conditions.
Second, we
propose flexible mixture models of characteristic
functions for network delay
tomography and develop a fast algorithm
for estimation based on the Generalized Method of Moments (GMM).
The new approach allows one to model continuous
delays on {\it heterogeneous} network links conveniently, where delays
may not have the same scale across all network links.
Extensive model simulation and real Internet trace-driven
simulation suggest that our new approach can yield more accurate
estimates of link delay distributions yet is computationally less
expensive than previous approaches.
The remaining sections of the paper are structured as follows.
In Section 2, we address the
identifiability issue. We describe the mixture models for link delays
in Section 3 and develop a fast algorithm for estimating the delay
distributions in Section 4. In Section 5, we present
extensive experimental studies for evaluating the proposed method. Section 6
concludes the paper.
\section{Network delay tomography using mixture modeling} \label{sec:mixture}
Below we focus on network delay tomography and describe a
class of flexible mixture models for modeling link delays.
It is well known that there does not exist a standard parametric model
that can sufficiently
model the distributions of network link delays (see
\cite{duffield:2001b} and \cite{tsang:2003} among others). But it is
possible to define a mixture model that is flexible enough to capture
link delay distributions.
Assume that for each link $j$, the link delay $X_j$ follows a mixture density
function with $n_j$ components, $X_j \sim f_{X_j}$, defined by
\begin{eqnarray}
f_{X_j}(x;\theta_j) = \sum_{l=1}^{n_j} p_{jl}\kappa_{jl}(x), \ \ x>0\label{eq:mixture}
\end{eqnarray}
where $\theta_j\equiv (p_{j1},\cdots,p_{jn_j})^T$ contains the mixing
probabilities
with constraints $p_{jl}\geq 0$, $\sum_{l}p_{jl}=1$, and
$\{\kappa_{jl}\}$ are some basis density functions.
There is another practical reason that we use a mixture model for
link delays:
the characteristic function of a mixture distribution is a mixture of
characteristic functions of the basis distributions and thus can be
computed conveniently once the basis distributions are chosen
appropriately, as shown later. In
this case, the characteristic function of $X_j$
can be expressed as
\begin{eqnarray} \phi_{X_j}(t;\theta_j) & = &
\sum_{l=1}^{n_j}p_{jl}\phi_{jl}(t),
\label{eq:cf-mixture}
\end{eqnarray}
where $\phi_{jl}$ is the characteristic function of the basis function
$\kappa_{jl}$.
The basis functions are chosen as follows for modeling link
delays. For $j=1,\cdots,J$, let
$0=b_{j1}<b_{j2}<\ldots<b_{j(n_j-1)}<\infty$. Define the basis
function as
\begin{eqnarray}\label{eq:kernel}
\left\{
\begin{array}{ll}
\kappa_{j1}(x) \ = & \mbox{point mass at zero (for zero)}\\
\kappa_{jk}(x) \ = & \mbox{uniform on $[b_{j(k-1)}, b_{jk}]$} \\
& \hspace*{0in} \mbox{$\ 2 \leq k \leq n_j-1$ (for body)}\\
\kappa_{jn_{j}}(x) \ = & \mbox{exponential with scale $\alpha_j$}\\
& \hspace*{0.1in}\mbox{on $[b_{j(n_j-1)}, \infty)$ (for tail)}\\
\end{array}
\right.
\end{eqnarray}
The point mass at zero link delay is used here because it is well
known that for a FIFO queue (First In, First Out), the steady state
queuing distribution has
zero delay with probability one minus the utilization of the queue.
For the body of the distribution, we choose the piecewise uniform
model because of its simplicity and flexibility. Finally, an
exponential distribution is used to model the tail because it is the
right model for short-range dependent traffic, while for
long-range dependent traffic it represents a trade-off between accuracy
and simplicity.
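Each of these basis functions has a closed-form characteristic function, so \eqref{eq:cf-mixture} is cheap to evaluate. A sketch with hypothetical endpoints and mixing probabilities (not values from the paper):

```python
import numpy as np

def phi_mixture(t, p, b, alpha):
    """cf of the mixture in the text at scalar t != 0: point mass at zero,
    uniform bins on [b_{k-1}, b_k], and a shifted exponential tail."""
    phis = [1.0 + 0j]                  # point mass at zero: cf is 1
    for lo, hi in zip(b[:-1], b[1:]):  # uniform on [lo, hi]
        phis.append((np.exp(1j * t * hi) - np.exp(1j * t * lo))
                    / (1j * t * (hi - lo)))
    c = b[-1]                          # exponential tail on [c, infinity)
    phis.append(np.exp(1j * t * c) / (1 - 1j * t * alpha))
    return sum(pk * ph for pk, ph in zip(p, phis))

# Illustrative parameters (assumed): n_j = 5 components.
b = [0.0, 1.0, 2.5, 5.0]       # endpoints b_{j1} = 0 < ... < b_{j(n_j-1)}
p = [0.3, 0.2, 0.2, 0.2, 0.1]  # mixing probabilities, summing to 1
val = phi_mixture(0.4, p, b, alpha=3.0)
assert abs(val) <= 1.0 + 1e-12  # any characteristic function has |phi| <= 1
```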
In order to reduce the computational complexity, we choose the bin
endpoints $\{b_{jk}\}$ in advance. The parameter of interest is
composed of the mixture coefficients, denoted as
$\theta=\{\theta_1,\cdots,\theta_J\}$. To our advantage, we do not
require
the bins to be equally spaced. In fact, it is important to choose the
bins that are adaptive to individual link delay distributions in order
to obtain accurate estimates. Such a varying bin strategy is
especially important for a heterogeneous network environment whose
link delay distributions vary widely across links: a single
bin width could be simultaneously too coarse-grained for a
high-bandwidth link with small delays and too fine-grained to
efficiently capture the essential characteristics of the delay along a
low-bandwidth link (\cite{presti:2002}). In addition, since a typical
delay distribution may have a density that varies considerably across
regions, it is important to be able to place more bins in high-density
regions and fewer bins in low-density regions.
Note that the equal-bin distribution used by most previous
researchers is also a mixture model. Figure \ref{fig:delaypdf} shows
two link delay distributions (solid lines), where one ranges from
0 to 12 (top, the fast link) and the other from 0 to 240 (bottom, the
slow link). The dashed lines are fitted
curves using an equal-bin model with bin width 1, which accommodates the
scales of both links, but with 12 bins for the fast link and as many
as 240 bins for
the slow link. The dotted lines are fitted curves using
varying-bin models where only 10 bins are used for each link. It is
clear that the equal-bin model fits the slow link very well, but not
the fast link, while the varying-bin model with a small number of bins
fits both links well. It is possible to use a very small bin width for
the equal-bin model, but it would require too many bins for the slow
link. The varying bins here are chosen based on quantiles of the delay
distribution, which works very well in general in our
simulation experience.
In reality the quantiles are unknown and
we can only obtain an approximation using an initial estimate of the
link delay distribution. This process can be iterated until we get a
good estimate.
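The quantile-based placement described above can be sketched as follows (using, purely for illustration, exponential samples as a stand-in for an initial estimate of one link's delays):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for an initial estimate of one link's delay distribution.
delays = rng.exponential(scale=5.0, size=10_000)

# Place interior bin endpoints at equally spaced quantiles of the (estimated)
# delay distribution: high-density regions automatically get narrower bins.
n_bins = 10
endpoints = np.quantile(delays, np.arange(1, n_bins) / n_bins)
assert len(endpoints) == n_bins - 1
assert np.all(np.diff(endpoints) > 0)  # strictly increasing endpoints
```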
The scale parameter $\alpha_j$ for the tail basis in
Equation~(\ref{eq:kernel}) is unknown and needs to be estimated.
However, the accuracy of the scale estimate is less important if the
endpoint $b_{j(n_j-1)}$ of the last bin can be placed at the far end of the
tail. For a further simplification, we can fix the tail basis
with a crude estimate of $\alpha_j$ for each link $j$ and only
estimate the mixing probabilities $\{p_{jk}\}$, which is described next.
\begin{figure}[h]\begin{center}
\psfig{figure=compevbin.eps,width=3in}
\caption{Fitting two link delay distributions using an equal-bin
model and a varying-bin model, where the delay on the slow link
(bottom) is on average 20 times that on the fast link (top): the
solid lines are the link delay density functions, the dashed lines
the estimated densities using bin width equal to 1 (12 bins for the fast
link and 240 bins for the slow one), and the dotted lines the
estimated densities using varying bin widths (only 10 bins for each).}
\label{fig:delaypdf}\end{center}
\end{figure}
\section{Simulation and Experimental Studies}
\label{sec:simulation}
In Section \ref{sec:algorithm}, we developed simple and fast
algorithms using a flexible mixture model for network delay tomography.
In this section, we evaluate the performance of
the proposed algorithms in terms of statistical efficiency and
accuracy.
To measure the accuracy of the estimation as compared to
the true distributions, we use an $L_1$ distance for discrete link
delay distributions and a normalized Mallows distance for continuous
link delay distributions.
Our evaluation is divided into three pieces.
First, we study the efficiency of our estimates by comparing them
with that of the MLE for a discrete link delay distribution with equally
spaced bins. We show that our estimators have efficiency comparable to
that of the MLE, which is statistically efficient and also computable in
this setting.
Second, we examine the performance of our estimators using model
simulations for continuous link delays in an ideal scenario where both
temporal and spatial independence hold. The model simulations
demonstrate the importance of a varying-bin selection that adapts
not only to the delay distributions of individual links but also to the
different scales of delays across links in order to achieve satisfactory estimates.
Finally, we use real trace driven simulations to
examine the accuracy of our estimators under more realistic scenarios
where the independence assumptions may not be strictly true, as is the
case in the Internet. Results from our trace-driven simulations
demonstrate that the estimates made by our algorithms closely match
the real distributions.
\subsection{Efficiency Evaluation}
We study the efficiency of our estimators using a discrete link delay
distribution with equally spaced bins on a four-leaf tree
(Figure~\ref{fig:4leaf}). For link $j, j=1,\ldots,7$, the link delay
has a discrete distribution on $\{0,1,\cdots,5\}$ with probabilities
generated uniformly from the simplex $\sum_{k=1}^6 p_{jk}=1$,
$0<p_{jk}<1$. A total of 500 delay samples
are generated for each link from its specified delay distribution and
the end-to-end delays are computed according to the model
\eqref{eq:tomo1}. The delay distributions of all
seven links are estimated using the MLE, the CF-estimator, and the
WCF-estimator.
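Drawing the bin probabilities uniformly from the simplex, as in the setup above, can be done with a flat Dirichlet (a sketch of the simulation setup, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Dirichlet(1, ..., 1) is the uniform distribution on the probability
# simplex {p : p_k > 0, sum_k p_k = 1}, matching how the bin
# probabilities are drawn for each link.
P = rng.dirichlet(np.ones(6), size=7)  # one pmf on {0,...,5} per link, 7 links
assert P.shape == (7, 6)
assert np.allclose(P.sum(axis=1), 1.0)
```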
We repeat the experiment 100 times with different random seeds. Both
the MLE and the CF-estimator use the uniform distribution as
starting values whereas WCF uses the CF estimates as starting
values. The weight matrix $W$ for WCF is also derived from the CF
estimates. For both the CF and WCF estimators, a total of 3000 samples of
$\mathbf{t}$ are drawn randomly from the two-dimensional subspaces of the $I$-dimensional
end-to-end delays using a Gaussian distribution with a scale parameter of
5 after normalizing $\mathbf{Y}$. We have
also run the recursive algorithm developed in \cite{presti:2002}, but
we do not report the result here except to state that it often yields
much poorer estimates (similar to observations made by
\cite{liang:2003}).
Figure~\ref{fig:mn_pdf} shows the estimated and the simulated
delay density functions of the seven links in one simulation
experiment. We observe that all methods give reasonably accurate estimates. To
compare errors of the different estimators, we calculate the $L_{1}$
distances between the estimated link delay density functions and the
ground truth for each of the 100 experiments. Figure~\ref{fig:mnL1err} reports
the 25\% and 75\% quantiles of the $L_{1}$ errors for each link as
vertical line segments, whose midpoints represent the median errors.
The MLE has the smallest median error, and the median errors of CF and WCF
are 50\% and 22\% higher than that of the MLE.
The results suggest that both CF and WCF are, as expected, somewhat
worse than, but still comparable to, the MLE.
\subsection{Accuracy Evaluation Using Model Simulation}
In this subsection, we investigate the performance of the estimators
for continuous delay distributions, which are more realistic than
discrete ones since network delays are essentially continuous except at zero.
Delay tomography in a heterogeneous network is intrinsically more
challenging than in a homogeneous network because links
with small delays are not represented as prominently as links with large
delays in the end-to-end delay measurements. In addition, the
heterogeneous environment also represents a situation where most of
the existing methods such as MLE do not work well because they rely on
simple discretization. After all, the real Internet is a heterogeneous
network. Thus we report model simulations on a four-leaf tree that resemble a
heterogeneous network environment. For simplicity, we do not consider
the point mass at zero for model simulations, but we will treat this
in later real trace driven simulations.
To conduct a comprehensive evaluation, we run simulations for link
delay distributions of different shapes. Due to space limitations, we
only report the results for two representative distributions:
i) exponential (uni-modal) and ii) a mixture of an exponential and a
Gamma with shape parameter 2
(multi-modal). In
both cases, the average link delays on the four-leaf tree are 3, 1, 5,
10, 6, 4 and 20 respectively for links 1 to 7, assigned from top to
bottom and left to right, which resembles a heterogeneous network
with the average link delays varying by a factor of 20. We generate
2000 delay samples for each link from the specified delay
distributions, and we estimate the seven link delay
density functions from the resulting end-to-end delays.
We use four different estimates of link delay distributions: {\em
CF\_equal\_bin, WCF\_equal\_bin, CF\_varying\_bin,
WCF\_varying\_bin}. All four estimates are obtained using a mixture
model for the link delays of the same form as \eqref{eq:kernel} with
$n_j=12$, except that the point mass at 0 is removed.
The difference in the mixture model
for the estimates lies in the bin placement. For both {\em
CF\_equal\_bin} and {\em WCF\_equal\_bin}, the 12 bins are equally
spaced using a bin width selected for each link based on variance
estimates, which are
obtained by solving systems of linear equations, following
\cite{duffield:2004}. For both {\em CF\_varying\_bin} and {\em
WCF\_varying\_bin}, the bins are located at the quantiles of
the delay distributions corresponding to probabilities $i/13$,
$i=1,\ldots, 12$.
Figures \ref{fig:expcdf} and \ref{fig:expgamcdf} plot the estimated
cumulative distribution functions for each link delay for case i) and
ii) respectively, along with the ground truth in one simulation run.
From the figures, we observe that the estimates
using varying bins are almost identical to the true distributions. The
estimates using equal bins give satisfactory
estimates for case i) but not quite as good for the more complex case ii).
To measure the accuracy of the estimates, we use the Mallows
distance defined for a cumulative distribution $F$ and its estimate
$\hat{F}$ by
\[
M(F,\hat{F}) = \int_{0}^1 \left|F^{-1}(p) - \hat{F}^{-1}(p)\right| dp,
\]
where $F^{-1}$ and $\hat{F}^{-1}$ are the inverse cumulative
distributions. The Mallows distance can be viewed as the average
absolute difference in quantiles between two distributions. Because
the Mallows distance scales linearly with the distributions, we use
$M(F,\hat{F})/\sigma_F$, the normalized Mallows distance, to measure
the difference between $F$ and $\hat{F}$, where
$\sigma_F$ is the standard deviation of $F$.
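For two samples of equal size, the Mallows (1-Wasserstein) distance between the corresponding empirical distributions reduces to the average absolute difference of order statistics, which gives a simple way to compute this error metric. A minimal Python sketch (the sample values below are illustrative only):

```python
import statistics

def normalized_mallows(true_sample, est_sample):
    """M(F, F_hat) / sigma_F for the empirical distributions of two
    equal-size samples: the Mallows distance between empirical CDFs is
    the mean absolute difference of the sorted values."""
    assert len(true_sample) == len(est_sample)
    xs, ys = sorted(true_sample), sorted(est_sample)
    m = sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)
    return m / statistics.stdev(true_sample)

# Shifting a sample by c gives Mallows distance c; the normalized
# version is invariant under a common rescaling of both samples.
```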
We repeat the simulation 100 times and compute the normalized Mallows
distance between the estimated and true distributions as the error
metric for all
links. Figures~\ref{fig:expmallow} and \ref{fig:expgammallow} report,
for cases i) and ii) respectively, the first and third quartiles of
the errors as well as the median error for each link, similar to Figure
\ref{fig:mnL1err}.
It is clear that the varying bins improve the
quality of the estimates significantly over equal bins. (Note, though, that the
difference between CF and WCF is not significant.)
This suggests that
selecting bins based on characteristics of the underlying delay
distributions is important for improving accuracy in a heterogeneous
network.
\subsection{Accuracy Evaluation Using Real Internet Traces}
We next investigate how the algorithms perform in a realistic network
environment
where some of the assumptions may not hold completely.
For instance, due to the closed-loop control
nature of the TCP protocol, the packets within the same TCP connection
have strong temporal dependency. Although the dependency is weakened
when many TCP connections are multiplexed as they arrive at a link,
it may not disappear completely. We approximate a real
scenario by simulating the behavior of a link using real traces
collected from the Internet. Since the traces include the arrival time
and the size of each packet, the simulated link reproduces the exact
behavior of the original link from which the trace was collected,
provided we set the bandwidth and the buffers to match those of the
original link.
We use traces from the NLANR web site
\footnote{http://pma.nlanr.net/Traces/} that archives packet header
traces collected from about ten links at different locations of the
Internet. The links differ in both bandwidth and traffic. A 90-second
trace is recorded every one or two hours for each of the links. In our
experiment, we first assign traces collected from different sites to
the links of the simulated network. We then simulate the
links, using
the assigned traces as input, with the standard network simulator
\cite{ns}. We superpose the probes onto
the traces and record their per-link queuing delays as well as
end-to-end delays; the latter are used as input for the estimation,
while the former are used for comparison with the estimates.
Notice that in a real network the delays on
the edge links may vary more than those on the core
links due to their lower
bandwidth. In addition, the average delay may also differ dramatically
across links. We mimic a real network with a symmetric binary
8-leaf tree by
assuming that both the root and the leaves of the tree are on the edge
of the network whereas the interior links are in the core. We assign
traces of high rates to links in the core and traces of low rates to
the edge links.
Figure~\ref{fig:8leaf_cdf} shows both the CF and WCF estimates of the delay
distributions using the varying bin strategy laid out in
Section~\ref{sec:mixture}, along with the simulated distributions. The
throughputs across different links vary by a factor of 40. It is
easy to see that the estimates are extremely good for most links,
except for link 9, which has the smallest average link delay and
shows some marginal error. The average normalized Mallows distance
over all links is 0.065, which also suggests a good match between the
estimates and the simulated results. We have also simulated the four-leaf
tree network and a
symmetric binary 16-leaf tree network using different
traces, and the proposed algorithms give satisfactory results.
In addition, our real trace driven simulations suggest that the link
delay distributions, excluding the tail, can be well approximated by a
Weibull distribution with a shape parameter slightly smaller than
1. This is not surprising because it has been shown that the queuing
delay for a FIFO queue with a Fractional Brownian Motion traffic input
has a Weibullian tail. The Weibullian form is also consistent with the
findings in \cite{cao:2004}.
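One standard way to check such a Weibull approximation is a Weibull plot: for a Weibull distribution with shape $k$, $\log(-\log(1-F(t)))$ is linear in $\log t$ with slope $k$. The Python sketch below (shape 0.8 and scale 5 are made-up values, not fitted to our traces) recovers the shape parameter of a synthetic sample by least squares on the middle of the empirical distribution:

```python
import math
import random

# Weibull(shape k, scale s) via inverse transform: s * E^(1/k), E ~ Exp(1).
random.seed(2)
k_true, scale, n = 0.8, 5.0, 4000
sample = sorted(scale * random.expovariate(1.0) ** (1 / k_true)
                for _ in range(n))

xs, ys = [], []
for i in range(n // 10, 9 * n // 10):   # drop both tails of the sample
    p = (i + 0.5) / n                   # plotting position for F(t_(i))
    xs.append(math.log(sample[i]))
    ys.append(math.log(-math.log(1 - p)))

mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
# The least-squares slope estimates the Weibull shape parameter.
```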
\section{Conclusion}
This paper has presented a general identifiability result and
introduced a general estimation approach for the network
tomography problem. For network delay tomography, a fast algorithm
based on GMM has been developed for estimating the link delay
distributions using mixture models of characteristic functions.
In comparison with likelihood-based approaches, the most significant
feature of the new method is that it affords varying
bin widths, which adapt to the delay variabilities of individual links,
and it has low computational complexity. The new approach can be applied to
traffic demand estimation as well.
\section*{Acknowledgments}
We would like to thank Gang Liang for sharing his simulation codes and
Michael Greenwald for helpful discussions. A conference version of the
main results appeared in the Proceedings of IEEE INFOCOM
(\cite{chen.et.al.2007}).
\input{appendix}
\bibliographystyle{Chicago}
https://arxiv.org/abs/1909.06347

Steiner's formula and a variational proof of the isoperimetric inequality

Abstract: We give a new proof of the isoperimetric inequality in the plane, based on Steiner's formula for the area of a convex neighborhood. This proof establishes the isoperimetric inequality directly, without requiring that we separately establish the existence of an optimal domain. In doing so, this proof bypasses the main difficulty in all of the proofs Steiner outlined for the plane isoperimetric inequality.

\section{Introduction}
\label{intro}
The classical isoperimetric inequality states that among all simple closed curves of length $L$ in the plane, the unique curve enclosing the largest area is the circle of circumference $L$:
\smallskip
\begin{theorem}[The Isoperimetric Inequality]
\label{ie}
Let $\gamma$ be a simple closed curve in the plane of length $L$, enclosing a domain $D$ of area $A$.
\medskip
Then $L^{2} \geq 4\pi A$, with equality precisely if $\gamma$ is a circle.
\end{theorem}
\smallskip
This paper gives a proof of the isoperimetric inequality based on Steiner's formula, which describes the area of a neighborhood of a convex domain in $\R^{2}$:
\smallskip
\begin{theorem}[Steiner's Formula, \cite{St}]
\label{sf}
Let $D$ be a bounded, convex domain in $\R^{2}$, of area $A$ and perimeter $L$, and let $D_{r}$ be the $r$-neighborhood of $D$, i.e. the points in $\R^{2}$ whose distance from $D$ is $r$ or less. Then:
\smallskip
\begin{itemize}
\item[{\bf A.}] $Area(D_{r}) = \pi r^{2} + L r + A$,
\medskip
\item[{\bf B.}] $Length(\partial D_{r}) = 2\pi r + L$.
\end{itemize}
\end{theorem}
\smallskip
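Theorem \ref{sf}.A is easy to test numerically. The following Python sketch (an illustration, not part of the proof) estimates the area of the $r$-neighborhood of the unit square ($A = 1$, $L = 4$) by Monte Carlo and compares it with $\pi r^{2} + Lr + A$:

```python
import math
import random

random.seed(1)
r, n = 0.5, 200_000
hits = 0
for _ in range(n):
    # Sample uniformly from the bounding box of the r-neighborhood.
    x = random.uniform(-r, 1 + r)
    y = random.uniform(-r, 1 + r)
    dx = max(-x, 0.0, x - 1)   # distance from x to the interval [0, 1]
    dy = max(-y, 0.0, y - 1)
    if math.hypot(dx, dy) <= r:
        hits += 1
mc_area = (1 + 2 * r) ** 2 * hits / n
steiner_area = math.pi * r**2 + 4 * r + 1
# The two agree to Monte Carlo accuracy (about 0.01 at this sample size).
```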
Jakob Steiner (March $18^{th}$, 1796 - April $1^{st}$, 1863) proved Theorem \ref{sf} for convex polygons and a similar formula for convex polyhedra in $\R^{3}$. By polygonal approximation, Theorem \ref{sf} then follows for any compact, convex set in $\R^{2}$, and in fact a version of Theorem \ref{sf} holds in much greater generality -- for more about Steiner's formula, see \cite{Gr,Sc}. Steiner was fascinated by the isoperimetric inequality, and he sketched several ideas for proving it -- cf. \cite{Tr,Bl}. The isoperimetric problem was already ancient when Steiner considered it in the nineteenth century, but Theorem \ref{ie} had never been proven rigorously. It remained unproven in Steiner's lifetime, and all of Steiner's ideas for proving the isoperimetric inequality required the same additional step, which he never provided: one must show that the isoperimetric problem has a solution. \\
More precisely, we define the {\bf isoperimetric ratio} of a domain $D$ with area $A$ and perimeter $L$ to be:
\begin{equation}
\label{ir}
\displaystyle {\Huge \frac{L^{2}}{4\pi A}}.
\end{equation}
\medskip
The isoperimetric ratio is scale-invariant -- we formulate the isoperimetric inequality in terms of $L^{2}$ and $A$, as in Theorem \ref{ie}, because $L^{2}$ and $A$ transform the same under rescalings. The isoperimetric inequality then states that the isoperimetric ratio of any plane domain is greater than or equal to $1$, with equality precisely for disks. Steiner developed many proofs that no domain other than a disk could minimize the isoperimetric ratio, but he didn't establish the existence of a domain that minimizes (\ref{ir}). \\
The first proof of the existence of a domain minimizing the isoperimetric ratio seems to have been in unpublished lecture notes of Weierstrass in 1879, cf. \cite{Bl}. The existence of an optimal isoperimetric domain in the plane is now known to be a consequence of several compactness theorems in metric geometry and geometric measure theory, however the proof below does not require that we establish the existence of a minimizer for the isoperimetric ratio -- we show directly that no domain can have an isoperimetric ratio less than $1$. We believe part of the significance of our proof is that it shows how one of Steiner's ideas from convex geometry can be used to prove the isoperimetric inequality without separately establishing the existence of an optimal domain. \\
The basic observation for our proof is the following: if $D$ is a bounded convex domain in $\R^{2}$, we can use Theorem \ref{sf} to calculate the isoperimetric ratio $\mathcal{I}(r)$ of the $r$-neighborhood of $D$ as a function of $r$. Letting $A$ be the area of $D$ and $L$ its perimeter, we have:
\begin{equation}
\label{sfir}
\displaystyle \mathcal{I}(r) = \frac{\left( 2\pi r + L \right)^{2}}{4\pi \left( \pi r^{2} + Lr + A \right)} = \frac{4\pi^{2} r^{2} + 4\pi L r + L^{2}}{4\pi^{2} r^{2} + 4\pi L r + 4\pi A} .
\end{equation}
\medskip
Differentiating with respect to $r$, we have:
\begin{equation}
\label{sfird}
\displaystyle \mathcal{I}'(r) = \frac{\left( 4\pi A - L^{2} \right) \left( 8\pi^{2} r + 4\pi L \right)}{\left( 4\pi^{2} r^{2} + 4\pi L r + 4\pi A \right)^{2}} = \frac{\left( 4\pi A - L^{2} \right) \left( 2\pi r + L \right)}{4\pi \left( \pi r^{2} + L r + A \right)^{2}}.
\end{equation}
\medskip
This implies that $\mathcal{I}(r)$ is a monotone function of $r$, decreasing if the isoperimetric ratio of $D$ is greater than $1$ and constant if the isoperimetric ratio of $D$ is equal to $1$. If $D$ were a convex domain with an isoperimetric ratio less than $1$, $\mathcal{I}(r)$ would increase monotonically to $1$, the isoperimetric ratio of the disk, as $r$ goes to infinity. As $r$ goes to infinity, the $r$-neighborhoods of any convex domain $D$, when rescaled to have constant area, converge to a disk -- see Proposition \ref{goodvar}. We will see that this gives a variation of the disk, as an argument for the functional on plane domains given by the isoperimetric ratio. We will use Steiner's formula to find its first and second variations -- in particular, we will relate them to the isoperimetric ratio of the domain $D$ in question. We will then be able to deduce Theorem \ref{ie} from the fact that the disk is a critical point, with non-negative second variation, for the isoperimetric ratio on plane domains. For later reference, the quantity $L^{2} - 4\pi A$ whose negative appears in (\ref{sfird}) is called the {\bf isoperimetric deficit} of a domain. \\
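The derivative of $\mathcal{I}(r)$ can be confirmed numerically. The Python sketch below compares the closed form $\mathcal{I}'(r) = (4\pi A - L^{2})(2\pi r + L)\big/\big(4\pi(\pi r^{2} + Lr + A)^{2}\big)$, which is the first expression in (\ref{sfird}) simplified, with a central finite difference for the unit square ($A = 1$, $L = 4$, so $L^{2} > 4\pi A$ and $\mathcal{I}$ is decreasing):

```python
import math

A, L = 1.0, 4.0   # the unit square: L^2 - 4*pi*A > 0

def iso_ratio(r):
    """I(r) = (2*pi*r + L)^2 / (4*pi*(pi*r^2 + L*r + A))."""
    return ((2 * math.pi * r + L) ** 2
            / (4 * math.pi * (math.pi * r**2 + L * r + A)))

def iso_ratio_prime(r):
    """Closed form of I'(r)."""
    return ((4 * math.pi * A - L**2) * (2 * math.pi * r + L)
            / (4 * math.pi * (math.pi * r**2 + L * r + A) ** 2))

r, h = 0.7, 1e-5
fd = (iso_ratio(r + h) - iso_ratio(r - h)) / (2 * h)  # central difference
# fd agrees with iso_ratio_prime(r), which is negative here.
```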
It will be important in our proof that, in the plane, the convex hull $conv(D)$ of a non-convex domain $D$ always has a smaller isoperimetric ratio than $D$ itself: $conv(D)$ encloses a larger area than $D$ with a smaller perimeter. Therefore, to prove Theorem \ref{ie}, it is enough to show that the isoperimetric inequality holds for convex domains. Steiner was aware of this fact and used it in several of his ideas for proving the isoperimetric inequality. In dimensions greater than $2$, this is no longer true: the isoperimetric ratio of a $3$-dimensional domain with volume $V$ and surface area $A$ is defined to be $\frac{A^{3}}{36 \pi V^{2}}$. Like (\ref{ir}) for plane domains, the isoperimetric ratio of a domain in $\R^{3}$ is scale-invariant and the ball has isoperimetric ratio equal to $1$. The isoperimetric inequality in $\R^{3}$ states that the isoperimetric ratio of any domain is greater than or equal to $1$, with the ball being the unique minimizer. For a ball with a long spike in $\R^{3}$, both the volume and surface area, and thus the isoperimetric ratio, can be made arbitrarily close to that of the ball by making the spike narrow enough. On the other hand, the convex hull of such a domain will be approximately a cone with a hemispherical cap, with an isoperimetric ratio significantly greater than $1$: for a spike of length $\eta$ on the unit ball, the isoperimetric ratio of its convex hull will be approximately $\frac{\eta + 3}{4}$ for $\eta$ very large. \\
The outline of this paper and our proof of the isoperimetric inequality is as follows: \\
In Section \ref{variations}, we will calculate the first and second variations of the isoperimetric ratio of the disk. We will show that the disk is a stable critical point of the isoperimetric ratio and that any variation has positive second variation unless, to first order, the variation is the sum of a translation and a rescaling of the disk. \\
In Section \ref{monotonicity}, we will use the $r$-neighborhoods of a compact, convex domain $D$ in the plane to construct a variation of the disk of the type analyzed in Section \ref{variations}. We will use Steiner's formula to relate its first and second variations to the isoperimetric deficit of $D$, and in doing so, we will show that the isoperimetric deficit of $D$ is non-negative. \\
Once we know that the isoperimetric inequality $L^{2} - 4\pi A \geq 0$ holds, any of Steiner's arguments then prove that the disk is the only domain for which equality holds. However, we will show in Section \ref{uniqueness} that the uniqueness of the disk as a minimizing domain also follows from our proof. \\
We will prove that the perimeter $L$ and area $A$ of a plane domain $D$ satisfy $L^{2} \geq 4\pi A$ under the assumption that its boundary $\partial D$ is smooth, and we will make the further simplifying assumption that the curvature of $\partial D$ is strictly positive -- that is, the curvature vector of $\partial D$ always points into $D$ and never vanishes. However by approximation (and the reduction to the convex case) this inequality then follows immediately for any plane domain with a rectifiable boundary. The corresponding issue is more difficult in higher dimensions -- this is discussed in Section 2 of \cite{Os}. In all dimensions, however, the boundary of a compact, convex domain can be realized as the Lipschitz image of a round sphere and is therefore rectifiable. \\
Throughout the paper, we will discuss the relationship between this proof and other known proofs of the isoperimetric inequality. Robert Osserman's article \cite{Os} gives an overview of the isoperimetric inequality, its generalizations and their significance in mathematics. Isaac Chavel's \cite{Ch} and Luis Santal\'o's \cite{San} books both discuss many results and questions in geometry and analysis which are based on the isoperimetric inequality and give several proofs of the classical isoperimetric inequality. Bl\r{a}sj\"o discusses the history of the isoperimetric inequality in \cite{Bl}, and Howards, Hutchings and Morgan in \cite{HHM} and Andrejs Treibergs in \cite{Tr} present several proofs of the classical isoperimetric inequality. \\
{\bf Acknowledgments:} I am very happy to thank Christopher Croke, Joseph H.G. Fu and Peter McGrath for their feedback about this work and Isaac Chavel, Frank Morgan and Franz Schuster for their input about the history of the isoperimetric inequality.
\section{The First and Second Variations of the Isoperimetric Ratio}
\label{variations}
We will calculate the first and second variations of the isoperimetric ratio of the disk for variations through families of convex domains -- in particular, we will see that the disk is a critical point of the isoperimetric ratio and, infinitesimally, a minimizer. \\
A compact, convex domain $D$ can be described by its {\bf support function} $p(\theta) : S^{1} \rightarrow \R$, defined as follows:
\begin{align*}
\displaystyle p(\theta) = \text{max} \left(\lbrace h_{\theta}(x) := x_0 \cos(\theta) + x_1 \sin(\theta) \ | \ x = (x_0, x_1) \in D \rbrace \right).
\end{align*}
\smallskip
If the boundary $\partial D$ of $D$ is smooth and has strictly positive curvature, then $p(\theta) + p''(\theta)$ is its radius of curvature. In this case, the area $A$ and perimeter $l$ of $D$ are given by:
\begin{equation}
\label{sup_area}
\displaystyle A = (\frac{1}{2})\int\limits_{0}^{2\pi} p(\theta) \left( p(\theta) + p''(\theta) \right) d\theta = (\frac{1}{2})\int\limits_{0}^{2\pi} p(\theta)^{2} - p'(\theta)^{2} d\theta,
\end{equation}
\begin{equation}
\label{sup_perim}
\displaystyle l = \int\limits_{0}^{2\pi} p(\theta)d\theta.
\end{equation}
\smallskip
This is described in Chapter 1 of \cite{San}. A variation of the unit disk $\mathcal{D}_{0}$ through a family of such domains $\lbrace \mathcal{D}_{t} \rbrace_{t \geq 0}$ can therefore be described by a smooth function $p(\theta, t)$, with $p(\theta, t)$ the support function of the domain $\mathcal{D}_{t}$. In particular, $p(\theta, 0) \equiv 1$.
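Formulas (\ref{sup_area}) and (\ref{sup_perim}) can be checked numerically on an ellipse with semi-axes $a, b$, whose support function is $p(\theta) = \sqrt{a^{2}\cos^{2}\theta + b^{2}\sin^{2}\theta}$. A Python sketch (trapezoidal quadrature on a smooth periodic integrand, illustration only):

```python
import math

a, b, N = 2.0, 1.0, 4096
h = 2 * math.pi / N
thetas = [i * h for i in range(N)]

def p(t):   # support function of the ellipse x^2/a^2 + y^2/b^2 <= 1
    return math.sqrt((a * math.cos(t)) ** 2 + (b * math.sin(t)) ** 2)

def dp(t):  # p'(theta), computed analytically
    return (b**2 - a**2) * math.sin(t) * math.cos(t) / p(t)

area = 0.5 * h * sum(p(t)**2 - dp(t)**2 for t in thetas)  # (1/2)∫(p^2 - p'^2)
perim = h * sum(p(t) for t in thetas)                     # ∫ p dθ
# Arclength of the parametrization (a cos t, b sin t), for comparison:
arc = h * sum(math.hypot(a * math.sin(t), b * math.cos(t)) for t in thetas)
# area ≈ pi*a*b and perim ≈ arc, to quadrature accuracy.
```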
\begin{proposition}
\label{12var}
Let $\mathcal{D}_{t}$ be a family of compact, convex domains in the plane, with the boundary $\partial \mathcal{D}_{t}$ of each domain smooth and with positive curvature, which give a variation of the disk $\mathcal{D}_{0}$ as above. Let $I(t)$ be the isoperimetric ratio of the domain $\mathcal{D}_{t}$.
\medskip
Then $I'(0) = 0$ and $I''(0) \geq 0$, with equality if and only if, to first order, the family of domains coincides with a rescaling and translation of the disk.
\end{proposition}
\begin{proof} Let $p(\theta,t)$ be the support function of $\mathcal{D}_{t}$ as above. Then letting $A(t)$ be the area and $l(t)$ the perimeter of $\mathcal{D}_{t}$, by (\ref{sup_area}) and (\ref{sup_perim}) we have:
\begin{equation}
\label{var_area}
\displaystyle A(t) = (\frac{1}{2})\int\limits_{0}^{2\pi} p(\theta, t)^{2} - \frac{\partial p}{\partial \theta} (\theta, t)^{2} d\theta,
\end{equation}
\begin{equation}
\label{var_perim}
\displaystyle l(t) = \int\limits_{0}^{2\pi} p(\theta, t) d\theta.
\end{equation}
\smallskip
Because $p(\theta, 0) \equiv 1$ and $\frac{\partial p}{\partial \theta}(\theta, 0) \equiv 0$, $A'(0)$ and $l'(0)$ are both equal to $\int_{0}^{2\pi} \frac{\partial p}{\partial t}(\theta, 0) d\theta$. \\
We then have that $I'(0) = \frac{2A(0) l(0) l'(0) - A'(0) l(0)^{2}}{4 \pi A(0)^{2}} $ is equal to:
\begin{align*}
\displaystyle \frac{2 \times \pi \times 2\pi \left( \int\limits_{0}^{2\pi} \frac{\partial p}{\partial t}(\theta, 0) d\theta \right) - 2\pi \times 2\pi \left( \int\limits_{0}^{2\pi} \frac{\partial p}{\partial t}(\theta, 0) d\theta\right)}{4\pi^{3}} = 0.
\end{align*}
\smallskip
$l''(0)$ is equal to $\int_{0}^{2\pi} \frac{\partial^{2} p}{\partial t^{2}}(\theta, 0) d\theta$ and, using again that $p(\theta, 0) \equiv 1$ and $\frac{\partial p}{\partial \theta}(\theta, 0) \equiv 0$, we have:
\begin{equation}
\label{area_second_derivative}
\displaystyle A''(0) = \int\limits_{0}^{2\pi} \left[ \frac{\partial p}{\partial t}(\theta, 0)^{2} + \frac{\partial^{2} p}{\partial t^{2}}(\theta, 0) - \frac{\partial^{2} p}{\partial t \partial \theta}(\theta, 0)^{2} \right] d\theta.
\end{equation}
\smallskip
We then have that $I''(0) = \frac{ \left(2 A'(0) - l'(0) \right)^{2} + 2\pi \left(l''(0) - A''(0) \right)}{2 \pi^{2}}$ is equal to:
\begin{equation}
\label{wi}
\displaystyle \frac{\left(\int\limits_{0}^{2\pi} \frac{\partial p}{\partial t}(\theta, 0) d\theta \right)^{2} + 2\pi \left(\int\limits_{0}^{2\pi} \frac{\partial^{2} p}{\partial \theta \partial t}(\theta, 0)^{2} - \frac{\partial p}{\partial t}(\theta, 0)^{2} d\theta \right)}{2 \pi^{2}}.
\end{equation}
\smallskip
{\bf Wirtinger's inequality} states that if $\varphi(\theta)$ is a $2\pi$-periodic, continuously differentiable function with $\int_{0}^{2\pi} \varphi(\theta) d\theta = 0$, then:
\begin{align*}
\displaystyle \text{\Large $\int\limits_{0}^{2\pi}$}\varphi'(\theta)^{2} d\theta \geq \text{\Large $\int\limits_{0}^{2\pi}$} \varphi(\theta)^{2} d\theta.
\end{align*}
\smallskip
Equality holds precisely if $\varphi(\theta) = a_{0}\cos(\theta) + a_{1}\sin(\theta)$ for some constants $a_{0}, a_{1}$. Wirtinger's inequality thus implies by (\ref{wi}) that $I''(0) \geq 0$ and is strictly positive unless $\frac{\partial p}{\partial t}(\theta, 0) = a_{0}\cos(\theta) + a_{1}\sin(\theta) + \widehat{p}$, where $\widehat{p} = \frac{1}{2\pi}\int_{0}^{2\pi}\frac{\partial p}{\partial t}(\theta, 0) d\theta$. The variation corresponding to $a_{0}\cos(\theta) + a_{1}\sin(\theta)$ gives a translation of the disk, in the direction whose argument is $\arctan(\frac{a_{1}}{a_{0}})$ at speed $\sqrt{a_{0}^{2} + a_{1}^{2}}$, and the variation corresponding to $\widehat{p}$ rescales the disk, by a factor $1 + t_{0} \widehat{p}$ when $t = t_{0}$. \end{proof}
Wirtinger's inequality can be proved by comparing the Fourier series of a $2\pi$-periodic function with that of its derivative, cf. \cite{Fo}. Wirtinger's inequality also implies the isoperimetric inequality directly. This was discovered by Hurwitz, who gave the first proof of the isoperimetric inequality based on Fourier analysis and Wirtinger's inequality in \cite{Hu}. A variant of this proof, in which the role of Wirtinger's inequality is made explicit, can be found in \cite{Os} and \cite{BG}. As with our proof, Hurwitz's proof of the isoperimetric inequality does not require that one separately establish the existence of a minimizing domain -- his argument shows directly that $l^{2} \geq 4\pi A$ for any plane domain, with equality precisely when the domain is a disk.
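Wirtinger's inequality is also easy to verify numerically for particular zero-mean functions. The sketch below checks one strict case and the equality case $\varphi(\theta) = \cos\theta$ (an illustration only, not a proof):

```python
import math

N = 4096
h = 2 * math.pi / N
ts = [i * h for i in range(N)]

def wirtinger_sides(phi, dphi):
    """Return (∫ phi'^2, ∫ phi^2) over [0, 2*pi] by the trapezoid rule,
    which is exact up to rounding for trigonometric polynomials."""
    return (h * sum(dphi(t) ** 2 for t in ts),
            h * sum(phi(t) ** 2 for t in ts))

# Strict inequality: phi = sin(2t) + 0.3*cos(3t) has zero mean.
lhs, rhs = wirtinger_sides(lambda t: math.sin(2*t) + 0.3 * math.cos(3*t),
                           lambda t: 2 * math.cos(2*t) - 0.9 * math.sin(3*t))
# Equality case: phi = cos(t).
lhs_eq, rhs_eq = wirtinger_sides(math.cos, lambda t: -math.sin(t))
```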
\section{Steiner's Formula and the Monotonicity of the Isoperimetric Ratio}
\label{monotonicity}
To prove Theorem \ref{ie}, we begin by confirming that the $r$-neighborhoods of a bounded, convex domain $D$, when rescaled to have constant area, give a variation of the disk of the type considered in Proposition \ref{12var}:
\begin{proposition}
\label{goodvar}
Let $D$ be a compact, convex domain in the plane whose boundary is smooth and has positive curvature. For $t > 0$, let $\mathcal{D}_{t}$ be the $r = \frac{1}{t}$-neighborhood of $D$, rescaled to have the same area as $D$, and let $\mathcal{D}_{0}$ be a disk with the same area as $D$.
\medskip
Then $\lbrace \mathcal{D}_{t} \rbrace_{t \geq 0}$ gives a variation of the disk $\mathcal{D}_{0}$, as in Proposition \ref{12var}. More precisely, if $q(\theta)$ is the support function of $D$, this variation is described by:
\begin{equation}
\label{var_function}
\displaystyle p(\theta, t) = \text{\footnotesize $\sqrt{\frac{A}{A t^{2} + lt + \pi}}$} \left(q(\theta) t + 1 \right),
\end{equation}
\smallskip
where $A$ is the area and $l$ is the perimeter of $D$.
\end{proposition}
\begin{proof}
Let $D$ be as above -- without loss of generality, suppose $D$ has area $\pi$. Note first that each $r$-neighborhood of $D$ is also convex, cf. Remark \ref{ms} below, so that the variation in question is through a family of convex sets. If $q(\theta)$ is the support function of $D$, then $q(\theta) + r$ is the support function of $D_{r}$ and, by Theorem \ref{sf}, $\scriptstyle \sqrt{\frac{\pi}{\pi r^{2} + lr + \pi}}$$(q(\theta) + r)$ is the support function of the rescaling of $D_{r}$ whose area is equal to that of $D$. Rewriting this in terms of $t = \frac{1}{r}$ for $r > 0$, we have:
\begin{equation}
\label{good_var_eqn}
\displaystyle p(\theta, t) = \text{\footnotesize $\sqrt{\frac{\pi}{\pi (\frac{1}{t})^{2} + l\frac{1}{t} + \pi}}$} \left( q(\theta) + \frac{1}{t} \right) = \text{\footnotesize $\sqrt{\frac{\pi}{\pi t^{2} + lt + \pi}}$} \left(q(\theta) t + 1 \right).
\end{equation}
\smallskip
We then have:
\begin{equation*}
\displaystyle p(\theta,t) + \frac{\partial^{2} p}{\partial \theta^{2}}(\theta,t) = \text{\footnotesize $\sqrt{\frac{\pi}{\pi t^{2} + lt + \pi}}$} \left(t(q(\theta) + q''(\theta)) + 1 \right).
\end{equation*}
\smallskip
Since the curvature of $\partial D$ is positive, $q(\theta) + q''(\theta) > 0$, so for all $t > 0$ we also have that $p(\theta,t) + \frac{\partial^{2} p}{\partial \theta^{2}}(\theta,t) > 0$, and that $\partial \mathcal{D}_{t}$ has positive curvature. $p(\theta,t)$ extends smoothly to $t=0$, where it is equal to the support function of the unit disk, and gives a variation of the disk as in Proposition \ref{12var}. \end{proof}
\begin{remark}
\label{ms}
The $r$-neighborhood $D_{r}$ of a compact, convex set $D$ is the {\bf Minkowski sum} of $D$ with a disk of radius $r$ in $\R^{2}$. Minkowski summation of convex sets is discussed extensively in \cite{Sc} and many other texts on convex and integral geometry.
\end{remark}
We now prove the inequality in Theorem \ref{ie} -- that for a compact domain in $\R^{2}$ with perimeter $l$ and area $A$, $l^{2} \geq 4\pi A$. We will then address the characterization of the equality case in Section \ref{uniqueness}.
\begin{proof}[Proof of Theorem \ref{ie}, Part 1]
Let $D$ be a compact, convex domain in the plane with area $A$ and boundary length $l$, and suppose $\partial D$ is smooth and has positive curvature as above. By (\ref{sfir}), for $t > 0$, the isoperimetric ratio $I(t)$ of the $(\frac{1}{t})$-neighborhood of $D$ is:
\begin{equation}
\label{sfir2}
\displaystyle I(t) = \frac{l^{2}t^{2} + 4\pi l t + 4\pi^{2}}{4\pi A t^{2} + 4\pi l t + 4\pi^{2}}.
\end{equation}
\smallskip
Letting $\delta$ be the least absolute value of the roots of $f(t) = 4\pi A t^{2} + 4\pi l t + 4\pi^{2}$, the denominator of (\ref{sfir2}) (see Remark \ref{roots} below), the function of $t$ defined by (\ref{sfir2}) extends smoothly to $(-\delta, \infty)$. In particular, (\ref{sfir2}) extends smoothly to $t = 0$ to give the isoperimetric ratio of the variation $\lbrace \mathcal{D}_{t} \rbrace_{t \geq 0}$ of the disk described in Proposition \ref{goodvar}. $I(t)$ is a monotone function of $t \geq 0$, with the sign of $I'(t)$ determined by the isoperimetric deficit of $D$:
\begin{equation}
\label{sfird2}
\displaystyle I'(t) = \frac{\left( l^{2} - 4\pi A \right)\left( lt^{2} + 2\pi t \right)}{4\pi\left(At^{2} + lt + \pi\right)^{2}}.
\end{equation}
\smallskip
Therefore, $I'(0) = 0$ (which also follows from Propositions \ref{12var} and \ref{goodvar}) and for $t > 0$, $I'(t)$ has the same sign as the isoperimetric deficit of $D$. To show that $l^{2} \geq 4\pi A$, we calculate the second derivative of $I(t)$:
\begin{equation}
\displaystyle I''(t) = \left( \frac{l^{2} - 4\pi A}{2\pi} \right) \left( \frac{\pi^{2} - 3\pi A t^{2} - Alt^{3}}{(At^{2} + lt + \pi)^{3}} \right).
\end{equation}
\smallskip
In particular, $\displaystyle I''(0) = \frac{l^{2} - 4\pi A}{2\pi^{2}}$. The sign of $l^{2} - 4\pi A$ is the same as that of $I''(0)$, which by Proposition \ref{12var} is greater than or equal to $0$. \end{proof}
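The value $I''(0) = \frac{l^{2} - 4\pi A}{2\pi^{2}}$ can also be confirmed by applying a second central difference to (\ref{sfir2}); a Python sketch for the unit square ($A = 1$, $l = 4$):

```python
import math

A, l = 1.0, 4.0   # the unit square

def I(t):
    """I(t) from (sfir2), the isoperimetric ratio of the rescaled
    (1/t)-neighborhoods, extended smoothly through t = 0."""
    return ((l**2 * t**2 + 4 * math.pi * l * t + 4 * math.pi**2)
            / (4 * math.pi * A * t**2 + 4 * math.pi * l * t + 4 * math.pi**2))

h = 1e-4
second = (I(h) - 2 * I(0.0) + I(-h)) / h**2   # central second difference
closed = (l**2 - 4 * math.pi * A) / (2 * math.pi**2)
# second ≈ closed; both are positive since the square is not a disk.
```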
\begin{remark}
\label{roots}
The roots of the denominator of (\ref{sfir2}), $f(t) = 4\pi A t^{2} + 4\pi l t + 4\pi^{2}$, are:
\begin{equation}
\displaystyle \frac{-l \pm \sqrt{l^{2} - 4\pi A}}{2A}.
\end{equation}
\smallskip
The isoperimetric inequality is equivalent to the statement that the roots of this polynomial are real, and thus negative, and are distinct unless the domain in question is a disk. For our purposes, it is enough simply to note that any real roots of $f(t)$ are negative since $f(t) \geq 4\pi^{2}$ when $t \geq 0$. The roots of the Steiner polynomial were studied by Green and Osher in \cite{GO} (the Steiner polynomial of a domain with area $A$ and perimeter $l$ is $\pi r^{2} + lr + A$, with roots $\frac{-l \pm \sqrt{l^{2} - 4\pi A}}{2\pi}$). They note that Steiner's formula implies the isoperimetric deficit of the $r$-neighborhood of $D$ is equal to that of $D$.
\end{remark}
\section{The Uniqueness of the Disk}
\label{uniqueness}
Once we have shown that $l^{2} \geq 4\pi A$ for all plane domains with perimeter $l$ and area $A$, and thus that the disk minimizes the isoperimetric ratio, any of Steiner's arguments then show that it is the unique minimizer. The uniqueness of the disk as a minimizing domain for the isoperimetric ratio also follows from our argument, subject to some mild technical assumptions:
\begin{proof}[Proof of Theorem \ref{ie}, Part 2] Let $D$ be a bounded domain in the plane with smooth (or $C^{2}$) boundary whose area $A$ and boundary length $l$ satisfy $l^{2} = 4\pi A$. We can suppose $A = \pi$ and $l = 2\pi$. Suppose in addition that the curvature of $\partial D$ is positive, as above. By (\ref{sfir2}), in the variation $\lbrace \mathcal{D}_{t} \rbrace_{t \geq 0}$ of the disk constructed from $D$ as in Section \ref{monotonicity}, the isoperimetric ratio of $\mathcal{D}_{t}$ is equal to $1$ for all $t \geq 0$, and therefore $l(t) \equiv 2\pi$. Therefore,
\begin{equation}
\label{l_deriv}
\displaystyle l'(t) = \int\limits_{0}^{2\pi} \frac{\partial p}{\partial t}(\theta,t) d\theta \equiv 0,
\end{equation}
\begin{equation}
\label{l_second_deriv}
\displaystyle l''(t) = \int\limits_{0}^{2\pi} \frac{\partial^{2} p}{\partial t^{2}}(\theta,t) d\theta \equiv 0.
\end{equation}
\smallskip
By (\ref{area_second_derivative}) and (\ref{l_second_deriv}), we then have:
\begin{equation}
\label{a_second_deriv}
\displaystyle \int\limits_{0}^{2\pi} \left[ \frac{\partial p}{\partial t}(\theta, t)^{2} - \frac{\partial^{2} p}{\partial t \partial \theta}(\theta, t)^{2} \right] d\theta = A''(t) \equiv 0.
\end{equation}
\smallskip
By (\ref{l_deriv}), (\ref{a_second_deriv}) and Wirtinger's inequality, $\frac{\partial p}{\partial t}(\theta, t) = c_{0}(t) \cos(\theta) + c_{1}(t) \sin(\theta)$ for some functions $c_{0}(t), c_{1}(t)$ of $t$. Letting $q(\theta)$ be the support function of $D$, by (\ref{var_function}),
\begin{equation}
\displaystyle \frac{q(\theta) - 1}{(t + 1)^{2}} = c_{0}(t) \cos(\theta) + c_{1}(t) \sin(\theta).
\end{equation}
\smallskip
This then implies that $c_{0}(t) = \frac{d_{0}}{(t + 1)^{2}}$, $c_{1}(t) = \frac{d_{1}}{(t + 1)^{2}}$ for some constants $d_{0}, d_{1}$, and that $q(\theta) = d_{0} \cos(\theta) + d_{1} \sin(\theta) + 1$. $D$ is therefore the unit disk centered at $(d_{0}, d_{1})$. \end{proof}
We conclude with a few remarks about the technical assumptions in the proof of the characterization of equality above: \\
We have assumed the domain $D$ to be convex, and to have $C^{2}$ boundary whose curvature is strictly positive, so that it can be described by a $C^{2}$ support function $q(\theta)$. However, by the reduction to the convex case, any domain realizing equality in the isoperimetric inequality must be convex. Moreover, for any compact, convex set $D$ and $r >0$, the $r$-neighborhood $D_{r}$ of $D$ has $C^{1,1}$ boundary, which is therefore twice-differentiable almost everywhere. If $D$ realizes equality in the isoperimetric inequality, then by (\ref{sfir}) each of its $r$-neighborhoods $D_{r}$ does as well, and by the convexity of $D_{r}$, the curvature of $\partial D_{r}$ is non-negative at all points where it is defined. Thus, if one can show that a domain which realizes equality in the isoperimetric inequality, whose boundary is twice-differentiable almost everywhere, and has non-negative curvature at all points where its curvature is defined is a disk, one will have shown that $D_{r}$ is a disk for all $r > 0$, and thus that $D$ is a disk as well. \\
The relationship between the regularity of the boundary of a domain and the regularity of its support function and the smoothness properties of $\partial D_{r}$ are both discussed in \cite{Sc}. Osserman discusses the significance of the regularity assumed on the boundaries of domains in the isoperimetric inequality in Section 2 of \cite{Os}. He notes that one can modify a smooth domain by adding ``wiggles" to its boundary, increasing its perimeter while leaving its area unchanged -- thus, ``one has the ironic situation that the more irregular the boundary, the stronger will be the isoperimetric inequality, but the harder it is to prove. The fact is, the isoperimetric inequality holds in the greatest generality imaginable, but one needs suitable definitions even to state it."
% Source: ``Steiner's formula and a variational proof of the isoperimetric inequality'' (arXiv:1909.06347)

% Source: ``Orange Peels and Fresnel Integrals'' (arXiv:1202.3033)
\section{Summary Paragraph}
{\bf
There are two standard ways of peeling an orange: either cut the skin
along meridians, or cut it along a spiral. We consider here the
second method, and study the shape of the spiral strip, when unfolded
on a table. We derive a formula that describes the corresponding
flattened-out spiral. Cutting the peel with progressively thinner
strip widths, we obtain a sequence of increasingly long spirals. We
show that, after rescaling, these spirals tend to a definite shape,
known as the Euler spiral. The Euler spiral has applications in many
fields of science. In optics, the illumination intensity at a point
behind a slit is computed from the distance between two points on the
Euler spiral. The Euler spiral also provides optimal curvature for
train tracks between a straight run and an upcoming bend. It is
striking that it can also be obtained with an orange and a kitchen
knife.}
\section{Outline}
Cut the skin of an orange along a thin spiral of constant width
(fig.~\ref{fig:orange}) and lay it flat on a table
(fig.~\ref{fig:peel}). A natural breakfast question, for a
mathematician, is what shape the spiral peel will have when
flattened out. We derive a formula that, for a given cut width,
describes the corresponding spiral's shape.
For the analysis, we parametrize the spiral curve by a constant speed
trajectory, and express the curvature of the flattened-out spiral as a
function of time.
\begin{figure}[h]
\[
\begin{tikzpicture}
\node[scale = .925] at (-.4,0) {\includegraphics[scale=0.15]{sphere.jpg}};
\draw[<->,white,very thick] (0.95,1.45) -- node[right, pos=.3, scale=.97] {$1/N$} (1.6,0.4);
\end{tikzpicture}
\]
\caption{An orange, assumed to be a sphere of radius one, with spiral of width $1/N$.\vspace{-1cm}}\label{fig:orange}
\end{figure}
This is achieved by comparing a revolution of the
spiral on the orange with a corresponding spiral on a cone tangent to
the surface of the orange (fig.~\ref{fig:cone}, left). Once we know the
curvature, we derive a differential equation for our spiral, which we
solve analytically (fig.~\ref{fig:spiral}, left).
We then consider what happens to our spirals when we vary the strip
width. Two properties are affected: the overall size, and the shape.
\begin{figure}[h]
\[
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=0.098]{spiral.jpg}};
\draw[very thick] (2.5,-2) -- (2.5,-2.05) -- node[below, scale=.97] {1\,cm} (2.8,-2.05) -- (2.8,-2);
\end{tikzpicture}
\]
\caption{The flattened-out orange peel.}\label{fig:peel}
\end{figure}
Taking finer and finer widths of strip, we obtain a sequence of
increasingly long spirals; rescale these spirals to make them all of
the same size. We show that, after rescaling, the shape of these
spirals tends to a well defined limit. The limit shape is a classical
mathematical curve, known as the \emph{Euler spiral} or the
\emph{Cornu spiral} (fig.~\ref{fig:spiral}, right). Its coordinate
functions are given by the \emph{Fresnel integrals}.
The Euler spiral has many applications. In optics, it occurs in the
study of light diffracting through a
slit~\cite{hecht:optics}*{\S10.3.8}. More precisely, the illumination
intensity at a point behind a slit is the square of the distance
between two points on the Euler spiral, easily determined from the
slit's geometry.
\begin{figure}[h]
\[
\begin{tikzpicture}
\draw (0,0) circle (1);
\coordinate (c) at (0,2);
\begin{scope}
\pgftransformcm{1}{0}{0}{.3}{\pgfpoint{0}{0}}
\draw (-1,0) arc (-180:0:1);
\pgftransformcm{1.07}{0}{0}{1.07}{\pgfpoint{0}{12}}
\draw[densely dotted] (-1,0) arc (175:5:1);
\foreach \x in {180, 188.5, ..., 360}{\draw[gray!50] (\x:1) -- (c);}
\draw (-190:1) arc (-190:10:1) -- (c) -- cycle;
\pgftransformcm{.9}{0}{0}{.9}{\pgfpoint{0}{16}}
\pgftransformcm{.95}{.1}{0}{1}{\pgfpoint{-.4}{8.2}}
\draw[red, densely dotted] (-110:1) arc (-110:-90:1);
\draw[red] (-90:1) arc (-90:10:1);
\pgftransformcm{.95}{0}{0}{.95}{\pgfpoint{0}{8.2}}
\draw[red] (-190:1) arc (-190:10:1);
\pgftransformcm{.95}{0}{0}{.95}{\pgfpoint{0}{8.2}}
\draw[red] (-190:1) arc (-190:10:1);
\pgftransformcm{.95}{0}{0}{.95}{\pgfpoint{0}{8.2}}
\draw[red] (-190:1) arc (-190:-50:1);
\draw[red, densely dotted] (-50:1) arc (-50:-30:1);
\end{scope}
\draw[<->] (0,2)+(30:.1) -- node[scale=.85, right, pos=.45]{$\scriptstyle R$} (30:1.1);
\draw[->,decorate,decoration={snake,post length=.5mm,amplitude=.5mm,segment length=2mm}] (0,-1.3) -- (0,-1.75);
\begin{scope}[xshift=-3.95cm,yshift=-2.6cm]
\foreach \x in {-15, -22.5, ..., -165}{\draw [gray!50] (4,.5) -- +(\x:1.5);}
\draw (4,.5) -- +(-10:1.5) arc (-10:-170:1.5) -- cycle;
\draw [red] (3.95,.5) +(-80:1.2) arc (-80:-9.5:1.2);
\draw [red] (3.95,.5) +(-169.5:1.1) arc (-169.5:-9.5:1.1);
\draw [red] (3.95,.5) +(-169.5:1.0) arc (-169.5:-9.5:1.0);
\draw [red] (3.95,.5) +(-169.5:.9) arc (-169.5:-60:.9);
\draw[red, densely dotted] (3.95,.5) +(-60:.9) arc (-60:-45:.9);
\draw[red, densely dotted] (3.95,.5) +(-96:1.2) arc (-96:-80:1.2);
\draw[<->] (4,.5) -- node[scale=.85, right, pos=.5]{$\scriptstyle R$} +(-78.7:1.09);
\end{scope}
\begin{scope}[xshift=3cm,yshift=-1.75cm]
\pgftransformcm{.93}{0}{0}{.93}{\pgfpoint{0}{0}}
\clip (-1,-2.02) rectangle (2.1,4.2);
\draw (0,0) circle (2cm);
\draw (0,0) -- (0,1);
\draw[<->] (-.14,0) -- node[left, scale=.93] {$s$} (-.14,1);
\draw (0,1) -- (0,4);
\draw (0,1) -- node[above, xshift=-5,yshift=-2, scale=.9] {$\sqrt{1-s^2}$} (30:2);
\draw (0,0) -- node[below, scale=.93] {$1$} (30:2);
\draw (30:2) -- (0,4);
\filldraw (0,0) circle (.03);
\pgftransformxshift{3.6}
\pgftransformyshift{1.8}
\draw[<->] (30:2) -- node[above,sloped, scale=.9] {$R=\sqrt{1-s^2\,}\!\big/s$} (0,4);
\end{scope}
\end{tikzpicture}
\]
\caption{\emph{left:} Spiral on the sphere, transferred to the tangent cone, and developed
on the plane, for computing its radius of curvature;\\
\emph{right:} The computation of the radius of curvature $R$ of the flattened spiral.}
\label{fig:cone}
\end{figure}
The same spiral is also used in civil engineering: it provides optimal
curvature for train tracks~\cite{profillidis:railway}*{\S14.1.2}. A
train that travels at constant speed and increases the curvature of
its trajectory at a constant rate will naturally follow an arc of the
Euler spiral. The review~\cite{Levien:EECS-2008-111} describes the
history of the Euler spiral and its three independent discoveries.
\begin{figure}[h]
\[
\begin{tikzpicture}[scale=0.8]
\useasboundingbox (-2.2,-2.1) rectangle (1.7,1.9);
\node at (-0.3,-0.1) {\includegraphics[scale=0.8]{solution.pdf}};
\draw[->] (0,-.3) node[anchor=north, scale=.6] {$t=0$} -- (0,0);
\draw[->] (-0.8,-1.24) node[anchor=west, scale=.6] {$t=-2\pi N$} --
(-1.31,-1.24);
\draw[->] (0.65,1.21) node[anchor=east, scale=.6] {$t=2\pi N$} -- (1.17,1.21);
\end{tikzpicture}\quad\,\,\,
\begin{tikzpicture}[scale=0.865]
\useasboundingbox (-2.3,-2.0) rectangle (1.6,1.8);
\pgftransformyshift {-1.7}
\node at (-0.4445,-0.1) {\includegraphics[scale=0.865]{euler.pdf}};
\fill[red] (1.05,1.12) circle (.1) (-1.19,-1.13) circle (.1);
\end{tikzpicture}
\]
\caption{\emph{left:} Maple plot of the orange peel spiral ($N=3$);\\
\emph{right:} the Euler spiral; limit $N \to \infty$.}\label{fig:spiral}
\end{figure}
\section{Analysis}
For the purpose of our mathematical treatment, we shall replace the
orange by a sphere of radius one. The spiral on the sphere is taken
of width $1/N$, see (fig.~\ref{fig:orange}). The area of the sphere
is $4\pi$, so the spiral has a length of roughly $4\pi N$. We
describe the flattened-out orange peel spiral by a curve $(x(t),y(t))$
in the plane, parameterized at unit-speed from time $t=-2\pi N$ to
$t=2\pi N$.
On a sphere of radius one, the area between two horizontal planes at
heights $h_1$ and $h_2$ is ${2\pi(h_2-h_1)}$, see (fig.~\ref{fig:area}). It
follows that, at time $t$, the point on the sphere has height
$s:=t/(2\pi N)$.
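In more detail (filling in the short computation): the strip laid down between times $-2\pi N$ and $t$ has area (length times width) $(t+2\pi N)/N$, while the part of the sphere below height $s$ has area $2\pi(s+1)$; equating the two gives
\[
\frac{t + 2\pi N}{N} = 2\pi\,(s+1)
\quad\Longrightarrow\quad
s = \frac{t}{2\pi N}.
\]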
\begin{figure}[h]
\[
\begin{tikzpicture}
\draw (0,0) circle (2cm);
\draw (30:2) -- (150:2);
\draw (36:2) -- (144:2);
\fill[gray!20!white] (150:2) arc (150:144:2cm) -- (36:2) arc (36:30:2cm) -- cycle;
\draw[->] (0,2.8) node[anchor=south] {Perimeter $\approx 2\pi\sqrt{1-s^2}$} -- (0,0 |- -36:-2);
\draw[->] (-2.5,1.8) node[anchor=south] {Width $\approx \epsilon/\sqrt{1-s^2}$} -- (147:2);
\draw[->] (-2.5,0) -- node[left] {$h_1$} (-2.5,0 |- 30:2);
\draw[<->] (-2.5,0 |- 30:2) -- node[left] {$\epsilon$} (-2.5,0 |- 36:2);
\draw[->] (-2.3,0) -- node[right, yshift=-2.5, xshift=2, fill=white, inner sep=1] {$h_2=h_1+\epsilon$} (-2.3,0 |- 36:2);
\draw[dashed] (-2.9,0) -- (1.5,0);
\draw[->] (.5,0) --node[anchor=west, fill=white, inner sep=1, xshift=3] {Height $\approx s$} (.5,0 |- 33:2);
\node[fill=white, inner sep=2] at (-1.38,-1) {Area = Width $\times$ Perimeter $\approx 2\pi\epsilon$};
\filldraw (0,0) circle (.03);
\end{tikzpicture}
\]
\caption{Area of a thin circular strip on the sphere.}
\label{fig:area}
\end{figure}
Our first goal is to find a differential equation for
$(x(t),y(t))$. For that, we compute the radius of curvature $R(t)$ of
the flattened-out spiral at time $t$: this is the radius of the circle
with best contact to the curve at time $t$. For example, $R(-2\pi
N)=R(2\pi N)=0$ at the poles, and $R(0)=\infty$ at the equator.
For $N$ large, the spiral at time $t$ follows roughly a parallel at
height $s$ on the orange. The surface of the sphere can be
approximated by a tangent cone whose development on the plane is a
disk sector (fig.~\ref{fig:cone}, left). The radius
\[
R(t)=\sqrt{1-s^2}/s=\sqrt{(2\pi N)^2-t^2}/t
\]
of that disk equals the radius of curvature of the spiral at time $t$,
and can be computed using Thales' theorem (fig.~\ref{fig:cone},
right). The radius $R(t)$ is in fact only determined up to sign; our
choice reflects the NE-SW orientation of the spiral on the sphere.
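Explicitly, substituting $s = t/(2\pi N)$ gives the second expression for $R$ from the first (a one-line check):
\[
R(t) = \frac{\sqrt{1-s^{2}}}{s}
= \frac{\sqrt{1 - t^{2}/(2\pi N)^{2}}}{t/(2\pi N)}
= \frac{\sqrt{(2\pi N)^{2} - t^{2}}}{t}.
\]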
Now, the condition that we move at unit speed on the sphere ---
and on the plane --- is $(\dot x)^2+(\dot y)^2=1$, and the condition that
the spiral has a curvature of $R(t)$ is $\dot x\ddot y-\ddot x\dot y=1/R$.
Here, $\dot x$ and $\dot y$ are the speeds of $x$ and $y$ respectively, and $\ddot x$ and $\ddot y$ are their accelerations.
In fact, introducing the complex path $z(t)=x(t)+iy(t)$, the
conditions can be expressed as $|\dot z|^2=1$ and $\ddot z \dot {\bar z}=i/R$.
The solution has the general form \[z(t)=\int_0^t\exp(i\phi(u))du,\]
for a real function $\phi$; indeed, its derivative is computed as $\dot
z=\exp(i\phi(t))$ and has norm $1$. As $\ddot z\dot{\bar z}=i\dot\phi(t)$,
we have $\dot\phi(t)=s/\sqrt{1-s^2}$, which has as elementary solution
$\phi(t)=-\sqrt{(2\pi N)^2-t^2}$. We have deduced that the
flattened-out spiral has parameterization
\[\left\{\begin{array}{l}
\displaystyle x(t)\,=\,\,\int_0^t\cos\sqrt{(2\pi N)^2-u^2}du,\\
\displaystyle y(t)\,=\,-\int_0^t\sin\sqrt{(2\pi N)^2-u^2}du.
\end{array}\right.\]
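The closed form for $\phi$ can be sanity-checked numerically (a small illustrative script; the choice $N=3$ matches the example below, and the sampled times are arbitrary):

```python
import math

N = 3                      # example strip count (illustrative choice)
a = 2 * math.pi * N        # the spiral is parameterized over t in [-a, a]

def phi(t):
    # claimed closed-form solution: phi(t) = -sqrt((2*pi*N)^2 - t^2)
    return -math.sqrt(a * a - t * t)

# check the ODE  phi'(t) = s / sqrt(1 - s^2),  s = t/(2*pi*N),
# at a few interior times, using central differences
for t in (-7.0, 0.5, 3.0, 10.0):
    h = 1e-6
    numeric = (phi(t + h) - phi(t - h)) / (2 * h)
    s = t / a
    exact = s / math.sqrt(1 - s * s)
    assert abs(numeric - exact) < 1e-5
```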
The flattened-out peel of an orange is shown in (fig.~\ref{fig:peel}),
and the corresponding analytic solution, computed by
\textsc{Maple}~\cite{Maple10}, is shown in (fig.~\ref{fig:spiral},
left). The orange's radius was 3cm, and the peel was 1cm wide, giving
$N=3$.
\section{Limiting behaviour}
What happens if $N$ tends to infinity, that is, if we peel the
orange with an ever thinner spiral? For that, we recall the power
series approximation
\[
\sqrt{a^2-u^2}=a-\frac{u^2}{2a}+\mathcal O(\frac{u^4}{a^3}),
\]
which we substitute with $a=2\pi N$ in the above expression:
\begin{align*}
z(t)&=\int_0^t\exp\Big(\!-i\sqrt{(2\pi N)^2-u^2}\,\Big)du\\&
\approx\int_0^t\exp \Big(\!-i\Big(2\pi N-\frac{u^2}{2\cdot2\pi N}\Big)\!\Big)du.
\end{align*}
Taking only values of $N$ that are integers, this simplifies to
$\int_0^t\exp(iu^2/4\pi N)du$. We then set $v=u/\sqrt{4\pi N}$ to
obtain
\[
z(t)\approx \sqrt{4\pi N}\int_0^{t/\sqrt{4\pi N}}\exp(iv^2)dv.
\]
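Spelled out, the simplification above uses $e^{-2\pi i N} = 1$ for integer $N$:
\[
\exp\Big(\!-i\Big(2\pi N - \frac{u^{2}}{4\pi N}\Big)\!\Big)
= e^{-2\pi i N}\exp\Big(\frac{iu^{2}}{4\pi N}\Big)
= \exp\Big(\frac{iu^{2}}{4\pi N}\Big),
\]
after which the substitution $v = u/\sqrt{4\pi N}$, $du = \sqrt{4\pi N}\,dv$, turns $iu^{2}/(4\pi N)$ into $iv^{2}$.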
The approximation error is $\int_0^t\mathcal
O(\frac{u^4}{a^3})du=\mathcal O(t^5/N^3)$, which becomes negligible
compared to the size $\mathcal O(\sqrt N)$ of the spiral for $|t|\ll
N^{0.7}$.
The above curve is, up to scaling and parameterization speed,
given by the classical Fresnel integrals
\[(X(t),Y(t))=\left(\int_0^t\cos u^2du,\int_0^t\sin u^2du\right),\]
defined by the condition that the radius of curvature at time $t$ is
$1/(2t)$; here the parameterization is over $t$ from $-\infty$ to
$+\infty$. The corresponding curve is called the Euler spiral and
winds infinitely often around the points
$\pm(\sqrt{\frac\pi8},\sqrt{\frac\pi8})$. Setting $T:=t/\sqrt{4\pi
N}$, the condition $|t|\ll N^{0.7}$ becomes $|T|\ll N^{0.2}$. We
have thus proven:
\begin{theorem}
If $T\ll N^{0.2}$, then the part of the orange peel of width $1/N$
parameterized between $-\sqrt{4\pi N}\,T$ and $\sqrt{4\pi N}\,T$ is
a good approximation for the part of the Euler spiral parameterized
between $-T$ and $T$.
\end{theorem}
\noindent Note that for large $N$, the piece of the orange peel
parameterized between $-\sqrt{4\pi N}\,T$ and $\sqrt{4\pi N}\,T$ forms
a rather thin band around the orange's equator. The contribution of
the rest of the orange disappears due to the rescaling process.
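As a rough numerical illustration of the winding points of the Euler spiral (plain trapezoidal quadrature of the Fresnel cosine integral; the truncation $T=60$ and the step count are arbitrary choices, and the tail beyond $T$ contributes roughly $1/(2T)$):

```python
import math

def fresnel_cos(T, n):
    """Trapezoidal approximation of the Fresnel integral  int_0^T cos(u^2) du."""
    h = T / n
    total = 0.5 * (math.cos(0.0) + math.cos(T * T))
    for k in range(1, n):
        u = k * h
        total += math.cos(u * u)
    return total * h

# the Euler spiral winds toward (sqrt(pi/8), sqrt(pi/8)); truncating at
# T = 60 should already land within about 0.01 of the limit
approx = fresnel_cos(60.0, 400_000)
assert abs(approx - math.sqrt(math.pi / 8)) < 0.02
```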
\section{Conclusion}
The Euler spiral is a well known mathematical curve. In this article,
we explained how to construct it with an orange and a kitchen knife.
Flattened fruit peels have already been considered, e.g.\ those of
apples~\cite{Turrell}, but were never studied analytically. The Euler
spiral that we obtained has been discovered many times throughout
history~\cite{Levien:EECS-2008-111}; ours occurred over breakfast.
\begin{bibdiv}
\begin{biblist}
\bib{hecht:optics}{book}{
author={Hecht, Eugene},
title={Optics},
date={2002},
publisher={Pearson Educat.},
edition={4th}
}
\bib{Levien:EECS-2008-111}{techreport}{
Author = {Levien, Raph},
Title = {The Euler spiral: a mathematical history},
Institution = {EECS Department, University of California, Berkeley},
Year = {2008},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-111.html},
Number = {UCB/EECS-2008-111},
}
\bib{Maple10}{book}{
author = {Michael B.~Monagan},
author = {Keith O.~Geddes},
author = {K.~Michael Heal},
author = {George Labahn},
author = {Stefan M.~Vorkoetter},
author = {James McCarron},
author = {Paul DeMarco},
title = {Maple~10 Programming Guide},
publisher = {Maplesoft},
year = {2005},
address = {Waterloo ON, Canada},
}
\bib{profillidis:railway}{book}{
author={Profillidis, Vassilios A.},
title={Railway management and engineering},
date={2006},
publisher={Ashgate Publishing Ltd.},
pages={469}
}
\bib{Turrell}{article}{
author={F. M. Turrell},
title={The definite integral symbol},
journal = {Amer. Math. Monthly},
volume = {67},
year = {1960},
number = {7},
pages = {656--658}
}
\end{biblist}
\end{bibdiv}
\end{document}
% Source: ``A historical note on the 3/2-approximation algorithm for the metric traveling salesman problem'' (arXiv:2004.02437)
\section{Introduction}
\noindent
One of the most fundamental problems
in combinatorial optimization
is the traveling salesman problem,
formalized as early as 1832
\citep[cf.][Chapter~1]{ABCC06}:
given $n$~cities
and their pairwise distances,
find a shortest tour{}
to visit each city exactly once
and return to the starting point.
Finding the \emph{shortest} tour{}
is a computationally intractable problem
even in the special case
where the distances between the cities
satisfy the triangle inequality \citep{GJ79}.
\citet{Chr76}
presented an $O(n^3)$-time \emph{3/2\hyp approximation algorithm}
for this special case:
it yields a tour{}
that is at most 3/2 times longer
than the shortest one.
It %
is
a prime example for approximation algorithms
that entered
textbooks
and encyclopedias as
``the Christofides algorithm''
or ``the Christofides heuristic''
\citep{GJ79,Chr79,Gut09,WS11,Bla16}.
Quite some efforts have been made
trying to improve it
\citep[cf.\ the surveys of][]{Vyg12,Sve13}.
One line of research aims for improving its running time:
there are many faster heuristics,
which
cannot guarantee 3/2-approximate solutions \citep{JM07},
yet $(3/2+\varepsilon)$-approximate solutions for
any~$\varepsilon>0$
are computable
by a randomized algorithm
in $O(n^2\log^4n/\varepsilon^2)$~time \citep{CQ17}.
Another line of research aims for improving the approximation factor,
which was successful only in special cases:
in polynomial time,
one can compute 8/7-approximate solutions if the distances are in~$\{1,2\}$
\citep{BK06},
7/5-approximate solutions if
the distances are lengths of shortest paths
in an unweighted graph \citep{SV14},
and $(1+\varepsilon)$\hyp approximate solutions
for any fixed~$\varepsilon>0$
if the cities are points in fixed\hyp dimensional
Euclidean \citep{Aro98,Mit99} or
doubling spaces \citep[using a randomized algorithm]{BGK16},
or if the distances are lengths of shortest paths
in a graph excluding some fixed minor \citep{DHK11}.
For general distances satisfying the triangle inequality,
the 3/2 approximation factor of \citeauthor{Chr76}' algorithm
remains the state of the art.\footnote{Actually,
\citet{Wol80} showed
that the length of the computed tours
is within a factor 3/2
not only of the optimum,
but
of a lower bound
given by the optimal solution to a relaxation
of an integer linear programming model.}
Recently,
a small but growing group of authors started referring
to it
as ``the Christofides-Serdyukov algorithm''
\citep{BDW98,DT10,BBMV19,GW19,SZ19,Tar19,TV19},%
\footnote{We deliberately omit articles coauthored by
Serdyukov's former colleagues from this list.}
claiming that it was independently obtained
in the USSR by \citet{Ser78}.
\looseness=-1
On the one hand,
this claim is plausible:
in the beginning of the 70s,
a~lot of research on computationally intractable
problems was carried out in parallel in the USSR,
leading to independent proofs of seminal results
like the Cook-Levin theorem
about the NP\hyp completeness
of the satisfiability problem for Boolean formulas
\citep{Tra84}.
Moreover,
the submission date
given in
the journal article of \citet{Ser78},
January 27th, 1976,
predates
the report of \citet{Chr76}, dated February, 1976.
On the other hand,
such claims should be treated with caution:
for example,
the wide\hyp spread claim
that Kuratowski's theorem
was earlier proved in the USSR
has little support \citep{KQS85}.
We give some historic background
on \citeauthor{Ser78}'s findings,
which indeed supports the claim
of his independent discovery of the 3/2\hyp approximation algorithm
and sheds some light on the close timing
of the publications of \citeauthor{Chr76} and \citeauthor{Ser78}.
We also provide a translation of \citeauthor{Ser78}'s article
in the appendix.
\section{Anatoliy I.\ Serdyukov (1951--2001)}
\noindent
The following information
about Serdyukov
can be found in \citet{BGS07}, \citet{Dem17},%
\footnote{The birth year 1952 given in the book edited by \citet{Dem17}
is incorrect.}
the archives
of the Department of Mechanics and Mathematics
of Novosibirsk State University,
and the
State Public Scientific Technological Library
of the Siberian Branch of the Russian Academy of Sciences.
Anatoliy Ivanovich Serdyukov was born
on October 29th, 1951,
in Prokopyevsk,
a city in Kemerovo region (Western Siberia), USSR.
He graduated from
Novosibirsk State University in 1973,
after which
he was employed
in the structures of the Siberian Branch
of the Lenin Academy of Agricultural Sciences,
then at the
Institute of Cytology and Genetics
of the Siberian Branch of the Academy of Sciences of the USSR
(SB AS USSR),
and finally
at the Institute of Mathematics of the SB AS USSR
(now named the Sobolev Institute of Mathematics,
Siberian Branch of the Russian Academy of Sciences),
where he was working
until his death on February 7th, 2001.
In 1980,
already working at the Institute of Mathematics,
he was awarded the academic degree
of candidate of physico\hyp mathematical sciences.
\citeauthor{Ser80}'s \citeyearpar{Ser80} thesis
is on the complexity of finding Hamiltonian and Eulerian cycles in graphs.
His best known results are
approximation algorithms
for finding \emph{longest}
traveling salesman tours \citep[surveyed by][]{BGS07}.
Taking into account his graduation year
and the submission date of \citeauthor{Ser78}'s \citeyearpar{Ser78}
article, January 27th, 1976,
\citeauthor{Ser78}
must have obtained his 3/2\hyp approximation algorithm
as a young graduate student in about 1975.
\section{Circulation of Christofides' result between 1976 and 1979}
\noindent
Authors usually refer to \citeauthor{Chr76}' \citeyear{Chr76}
technical report at Carnegie-Mellon University (CMU)
as the source of the 3/2\hyp approximation algorithm
for the metric traveling salesman problem,
which some authors
do not consider as published
\citep[cf.][who also claims that ``Christofides never published his algorithm'']{Bla16}.
\looseness=-1
Apparently,
\citeauthor{Chr76}' technical report was not known
to a wide audience up to 1978. %
For example,
\citet{Kar77} and \citet{RSL77}
refer to Christofides' abstract
in the proceedings
of a symposium held at CMU in April 1976.
The proceedings
were published only in December 1976 \citep{Tra76}.
\citet{FHK76}
refer to the same abstract,
whereas later,
in the journal version of their article,
\citet{FHK78} refer to the technical report.
\citeauthor{Chr76}' technical report could have been popularized
in 1977,
when its abstract
stating the 3/2-approximation
was indexed by the NASA abstract journal
Scientific and Technical Aerospace Reports
\citep{Chr77}.
Some authors of that time,
for example \citet{LR79},
refer to a journal article of Christofides
that is to appear in the journal \emph{Mathematical Programming}.
In a combinatorial optimization textbook,
\citet{Chr79}
describes his algorithm
without proving the approximation factor,
referring to an article in press in \emph{Mathematical Programming}
for the proof,
not mentioning his technical report.
Interestingly,
according to the archives of \emph{Mathematical Programming},
his article was not published.
The algorithm, with complete proof details,
reached a wide audience at the latest
with the seminal textbook of \citet{GJ79}.
Summarizing,
\citet{Ser78} submitted his journal article in January 1976,
which predates all traces of \citeauthor{Chr76}' publications on this topic.
Thus,
it is plausible that \citet{Ser78}
obtained the result independently.
\section{Serdyukov's work between 1974 and 1978}
\noindent
\looseness=-1
We give some historic background
on the findings of \citeauthor{Ser78}
to shed some light on the
close timing of
the publications
of \citet{Chr76} and \citet{Ser78}.
To this end,
it is helpful to interpret the 3/2-approximation algorithm
for the traveling salesman problem as follows:
A first step
computes a minimum\hyp cost spanning tree
that connects all the cities.
A second step
computes a shortest tour{} in the input graph
that traverses the edges of the spanning tree.
The second step is solved
using an approach earlier developed for the closely
related Chinese postman problem
of computing a shortest tour{}
traversing \emph{all} edges of a graph:
\citet{Chr73} and \citet{Ser74},
but also \citet{EJ73}, were actively
studying the Chinese postman problem at that time.
They all reduce it
to
the problem of finding a minimum\hyp cost perfect matching
on the
complete edge\hyp weighted graph
on all odd\hyp degree vertices of the input graph.%
\footnote{Notably,
\cite{Ser74} explicitly introduces
the problem
that forty years later
is intensively studied as the Eulerian extension problem
\citep{SBNW11,HJM12,SBNW12,DMNW13,GWY17,BFTxx}.}
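The two-step scheme just described can be sketched in code (a self-contained toy on Euclidean points, with a brute-force matching standing in for \citeauthor{Edm65b}' polynomial-time algorithm, so it is usable only for small inputs; this illustrates the textbook scheme, not \citeauthor{Chr76}' or \citeauthor{Ser78}'s own implementation):

```python
from math import hypot

def christofides_like_tour(points):
    """3/2-approximation sketch: MST + matching on odd-degree vertices,
    then shortcut an Eulerian circuit. Brute-force matching (exponential)
    replaces Edmonds' algorithm, so this is for small instances only."""
    n = len(points)

    def d(i, j):
        return hypot(points[i][0] - points[j][0], points[i][1] - points[j][1])

    # step 1: minimum spanning tree of the complete graph (Prim's algorithm)
    edges, best = [], {j: (d(0, j), 0) for j in range(1, n)}
    while best:
        j = min(best, key=lambda k: best[k][0])
        _, i = best.pop(j)
        edges.append((i, j))
        for k in best:
            if d(j, k) < best[k][0]:
                best[k] = (d(j, k), j)

    # step 2: minimum-weight perfect matching on the odd-degree vertices
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    odd = [v for v in range(n) if deg[v] % 2 == 1]

    def match(vs):
        if not vs:
            return 0.0, []
        v, rest = vs[0], vs[1:]
        best_w, best_m = float("inf"), []
        for idx, u in enumerate(rest):
            w, m = match(rest[:idx] + rest[idx + 1:])
            if d(v, u) + w < best_w:
                best_w, best_m = d(v, u) + w, [(v, u)] + m
        return best_w, best_m

    # MST + matching has all degrees even: take an Eulerian circuit
    # (Hierholzer's algorithm) and shortcut repeated vertices
    adj = {v: [] for v in range(n)}
    for i, j in edges + match(odd)[1]:
        adj[i].append(j)
        adj[j].append(i)
    stack, circuit = [0], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)
            stack.append(u)
        else:
            circuit.append(stack.pop())
    tour = list(dict.fromkeys(circuit))   # keep first visit of each vertex
    return tour + [tour[0]]
```

By the triangle inequality, shortcutting the Eulerian circuit never increases its length, which is where the metric assumption enters.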
Surprisingly,
while
\citeauthor{Chr73}, \citeauthor{EJ73}
solve the matching problem
using the polynomial\hyp time algorithm of \citet{Edm65b},
\citeauthor{Ser74}
reduces it
to an exponential number of matching problems in bipartite graphs.%
\footnote{In contemporary terms of parameterized complexity theory
\citep[cf.][]{CFK+15},
\cite{Ser74} merely describes a fixed\hyp parameter algorithm
for the Chinese postman problem
parameterized by the number of odd\hyp degree vertices in the input graph.}
Apparently,
in 1974
neither
\citeauthor{Ser74}
nor his reviewers
were aware
of the work of \citet{Chr73}, \citet{EJ73},
or
the
polynomial\hyp time
algorithm for computing maximum\hyp weight
matchings in general graphs,
published by \citeauthor{Edm65b} nine years earlier.
Since \citet{Ser78} uses \citeauthor{Edm65b}' algorithm
to solve the matching problem
in his 3/2\hyp approximation algorithm
for the traveling salesman problem
but was unaware of it in 1974,
he must have learned about \citeauthor{Edm65b}' algorithm
in 1974 or 1975.
One scenario is
that he learned about it
via the article of \citet{Chr73},
which \citet{Ser76} cites
in an article studying
reductions between matching, covering,
the Chinese postman, and the traveling salesman problems.
In this scenario,
\citeauthor{Ser78} obtained
his 3/2\hyp approximation
independently of \citeauthor{Chr73}
but because of him.
Another scenario is that
\citet{Ser78} learned about \citeauthor{Edm65b}' algorithm
from \citet{Kar76},
whose $O(n^3\log n)$\hyp time implementation
of \citeauthor{Edm65b}' algorithm
he uses in his 3/2\hyp approximation.
\citeauthor{Kar76}'s
article was probably not yet published in January 1976,
when \citeauthor{Ser78} submitted his article,
but he might have had access to a preliminary copy,
which is supported by the fact
that the titles given by \citeauthor{Ser78} and \citeauthor{Kar76}
differ slightly.
\section{Conclusion}
\noindent
Our findings support the claim that
\citet{Ser78} discovered the 3/2\hyp approximation algorithm
for the metric traveling salesman problem
independently of \citet{Chr76}.
Concerning the close timing
of the publications of \citeauthor{Chr76} and \citeauthor{Ser78},
we conclude that,
on the one hand,
it was impossible for
\citeauthor{Ser78}
to find the algorithm
much earlier
than \citeauthor{Chr76},
since he was still unaware of \citeauthor{Edm65b}'
polynomial-time matching algorithm
in 1974.
On the other hand,
actively working on
the Chinese postman before,
he found the 3/2\hyp approximation
for the traveling salesman problem
as soon as he became aware of
\citeauthor{Edm65b}' algorithm.
\looseness=-1
An English abstract of \citeauthor{Ser78}'s \citeyearpar{Ser78}
article
was indexed in zbMATH only in 1982 \citep{Ser82}.
At~this time,
``the \citeauthor{Chr76} algorithm''
had already entered
fundamental textbooks like that of \citet{GJ79}.
Moreover,
the English abstract
does not mention any approximation factors.
Thus,
it is not surprising that \citeauthor{Ser78}'s result
remained largely unknown beyond the USSR.
\paragraph{Acknowledgments}
We thank Edward Kh.\ Gimadi and Oxana Yu.\ Tsidulko
for helpful input.
\paragraph{Funding}
René van Bevern is supported by the
Mathematical Center in Akademgorodok,
agreement No.\ 075-15-2019-1675 with the Ministry of Science and
Higher Education of the Russian Federation.
Viktoriia A.\ Slugina is supported by
grant No.\ 19-39-60006
of the Russian Foundation for Basic Research.
\bibliographystyle{tsp-history}
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}
{-30pt \@plus -1ex \@minus -.2ex}
{2.3ex \@plus.2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname. }
\makeatother
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{conjecture}{Conjecture}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\newtheorem{definition}{Definition}
\setlength{\footskip}{35pt}
\allowdisplaybreaks
\begin{document}
\begin{center}
{\Large \bf Consecutive primes}
\vskip 5pt
{\Large \bf which are widely digitally delicate}
\vskip 20pt
{\bf Michael Filaseta}\\
{\smallit Dept.~Mathematics,
University of South Carolina,
Columbia, SC 29208, USA}\\
{\tt filaseta@math.sc.edu}\\
\vskip 20pt
{\bf Jacob Juillerat}\\
{\smallit Dept.~Mathematics,
University of South Carolina,
Columbia, SC 29208, USA}\\
{\tt juillerj@email.sc.edu}\\
\end{center}
\vskip 5pt
\centerline{\phantom{\smallit Received: , Revised: , Accepted: , Published: }}
\vskip 12pt
\centerline{\textit{Dedicated to the fond memory of Ronald Graham}}
\vskip 15pt
\centerline{\bf Abstract}
\vskip 5pt\noindent
We show that for every positive integer $k$, there exist $k$ consecutive primes
having the property that if any digit of any one of the primes, including any of the infinitely many
leading zero digits, is changed, then that prime becomes composite.
\pagestyle{myheadings}
\thispagestyle{empty}
\baselineskip=12.875pt
\vskip 30pt
\section{Introduction}
In 1978, M.~S.~Klamkin \cite{klamkin} posed the following problem.
\vskip 5pt
\centerline{\parbox[t]{12cm}{\textit{Does there exist any prime number such that if any digit (in base $10$) is changed to any other digit, the resulting number is always composite?
}}}
\vskip 8pt\noindent
In addition to computations establishing the existence of such a prime, the published solutions in 1979 to this problem included a proof by P.~Erd\H{o}s \cite{Erd79} that there exist infinitely many such primes.
Borrowing the terminology from J.~Hopper and P.~Pollack \cite{hopperpollack}, we call such primes \textit{digitally delicate}.
The first digitally delicate prime is $294001$.
Thus, $294001$ is a prime and, for every $d \in \{ 0, 1, \ldots, 9 \}$, each of the numbers
\[
d\hspace{.1em}94001, \quad 2d4001, \quad 29d\hspace{.1em}001, \quad 294d\hspace{.1em}01, \quad 2940d1, \quad 29400d
\]
is either equal to $294001$ or composite. The proof provided by Erd\H{o}s consisted of creating a partial covering system of the integers (defined in the next section) followed by a sieve argument.
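Since the defining property involves only finitely many digit changes for a fixed prime, it can be checked mechanically. The following sketch (the function names are ours, for illustration) uses trial division, which is ample for six-digit numbers:

```python
def is_prime(n):
    """Trial division; fine for the small numbers involved here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def is_digitally_delicate(p):
    """True if p is prime and every single-digit change in its base-10
    representation (without leading zeros) yields a composite number."""
    s = str(p)
    for i in range(len(s)):
        for c in "0123456789":
            if c == s[i]:
                continue
            q = int(s[:i] + c + s[i + 1:])
            if is_prime(q):
                return False
    return is_prime(p)

assert is_digitally_delicate(294001)
assert not is_digitally_delicate(13)   # changing 13 to 11 gives a prime
```

Verifying the \textit{widely} digitally delicate property defined below is genuinely harder, since infinitely many leading-zero positions must be handled; that is exactly what covering-system arguments accomplish.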
In 2011, T.~Tao \cite{tao} showed by refining the sieve argument of Erd\H{o}s that a positive proportion (in terms of asymptotic density) of the primes are digitally delicate.
In 2013, S.~Konyagin \cite{konyagin}
pointed out that a similar approach implies that a positive proportion of composite numbers $n$, coprime to $10$, satisfy the property that if any digit in the base $10$
representation of $n$ is changed, then the resulting number remains composite.
For example, the number $n=212159$ satisfies this property.
Thus, every number in the set \[\{d12159, 2d2159, 21d159, 212d59, 2121d9, 21215d: d\in\{0,1, 2, \dots,9\}\}\] is composite.
Later, in 2016, J.~Hopper and P.~Pollack \cite{hopperpollack} resolved a question of Tao's on digitally delicate primes allowing for an arbitrary but fixed number of digit changes to the beginning and end of the prime.
All of these results and their proofs hold for numbers written in an arbitrary base $b$ rather than base $10$, though the proof provided by Erd\H{o}s \cite{Erd79} only addresses the argument in base $10$.
In 2020, the first author and J.~Southwick \cite{filsou} showed that a positive proportion of primes $p$ are \textit{widely digitally delicate}, which they define as having the property that
if any digit of $p$, \textit{including any one of the infinitely many leading zeros of $p$}, is replaced by any other digit, then the resulting number is composite.
The proof was specific to base $10$, though they elaborate on other bases for which the analogous argument produces a similar result, including for example base $31$;
however, it is not even clear whether widely digitally delicate primes exist in every base.
Observe that the first digitally delicate prime, $294001$, is not widely digitally delicate since $10294001$ is prime.
It is of some interest to note that even though a positive proportion of the primes are widely digitally delicate, no specific examples of widely digitally delicate primes are known.
Later in 2020, the authors with J.~Southwick \cite{filjuisou} gave a related argument showing that there are infinitely many (though not necessarily a positive proportion of) composite numbers $n$ in base $10$
such that when any digit is inserted in the decimal expansion of $n$, including between two of the infinitely many leading zeros of $n$ and to the right of the units digit of $n$, the number $n$ remains composite
(see also \cite{Fil10}).
In this paper, we show the following.
\begin{theorem}\label{maintheorem}
For every positive integer $k$, there exist $k$ consecutive primes all of which are widely digitally delicate.
\end{theorem}
Let $\mathcal P$ be a set of primes. It is not difficult to see that
if $\mathcal P$ has an asymptotic density of $1$ in the set of primes, then there exist $k$ consecutive primes in $\mathcal P$ for each $k \in \mathbb Z^{+}$.
On the other hand, for every $\varepsilon \in (0,1)$, there exists $\mathcal P$ having asymptotic density $1-\varepsilon$ in the set of primes such that
there do not exist $k$ consecutive primes in $\mathcal P$ for $k$ sufficiently large (more precisely, for $k \ge 1/\varepsilon$; for example, one obtains such a set by removing roughly every $(1/\varepsilon)$-th prime).
Thus, the prior results stated above are not sufficient to establish Theorem~\ref{maintheorem}.
The main difficulty in using the prior methods to obtain Theorem~\ref{maintheorem} is in the application of sieve techniques in the prior work.
We want to bypass the use of sieve techniques and instead give complete covering systems to show that there is an arithmetic progression containing infinitely many primes such that every
prime in the arithmetic progression is a widely digitally delicate prime. This then gives an alternative proof of the result in \cite{filsou}. After that, the main
driving force behind the proof of Theorem~\ref{maintheorem}, work of D.~Shiu \cite{shiu}, can be applied. D.~Shiu \cite{shiu} showed that in any arithmetic
progression containing infinitely many primes (that is, $an+b$ with $\gcd(a,b) = 1$ and $a > 0$) there are arbitrarily long sequences of consecutive primes.
Thus, once we establish through covering systems that such an arithmetic progression exists where every prime in the arithmetic progression is widely
digitally delicate, D.~Shiu's result immediately applies to finish the proof of Theorem~\ref{maintheorem}.
Our main focus in this paper is on the proof of Theorem~\ref{maintheorem}.
However, in part, the purpose of this paper is to emphasize that the remarkable work of Shiu \cite{shiu} provides for a nice application to a number of results
established via covering systems. One can also take these applications further by looking at the strengthening of Shiu's work by J.~Maynard \cite{maynard}.
To illustrate the application of Shiu's work in other contexts, we give some further examples before closing this introduction.
A Riesel number is a positive odd integer $k$ with the property that $k \cdot 2^{n}-1$ is composite for all positive integers $n$.
A Sierpi\'nski number is a positive odd integer $k$ with the property that $k \cdot 2^{n}+1$ is composite for all nonnegative integers $n$.
The existence of such $k$ was established in \cite{riesel} and \cite{sierpinski}, respectively,
though the former is a rather direct consequence of P.~Erd{\H o}s's work in \cite{pe} and the latter is a somewhat less direct application of this same work,
an observation made by A.~Schinzel (cf.~\cite{FFK}).
A Brier number is a number $k$ which is simultaneously Riesel and Sierpi\'nski, named after Eric Brier who first considered them (cf.~\cite{FFK}).
The smallest known Brier number, discovered by Christophe Clavier in 2014 (see \cite{sloantwo}), is
\[
3316923598096294713661.
\]
As is common with all of these numbers, examples typically come from covering systems, which produce an entire arithmetic progression of examples.
In particular, Clavier established that every number in the arithmetic progression
\[
3770214739596601257962594704110\,n +
3316923598096294713661, \quad n \in \mathbb Z^{+} \cup \{ 0 \}
\]
is a Brier number. Since the numbers $3770214739596601257962594704110$ and $3316923598096294713661$ are coprime, Shiu's theorem gives the following.
\begin{theorem}
For every positive integer $k$, there exist $k$ consecutive primes all of which are Brier numbers.
\end{theorem}
Observe that as an immediate consequence the same result holds if Brier numbers are replaced by Riesel or Sierpi\'nski numbers.
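The coprimality hypothesis needed to invoke Shiu's theorem for Clavier's progression is quick to confirm; the constants below are copied from the progression above:

```python
from math import gcd

modulus = 3770214739596601257962594704110
offset = 3316923598096294713661  # smallest known Brier number

assert gcd(modulus, offset) == 1
assert offset % 2 == 1  # Brier numbers are odd by definition
```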
As another less obvious result to apply Shiu's theorem to, we recall a result of R.~Graham \cite{graham} from 1964.
He showed that there exist relatively prime positive integers $a$ and $b$ such that the recursive Fibonacci-like sequence
\begin{equation}\label{grahamrecursion}
u_{0} = a, \quad u_{1} = b, \quad \text{and} \quad u_{n+1} = u_{n} + u_{n-1} \quad \text{for integers $n \ge 1$},
\end{equation}
consists entirely of composite numbers.
The known values for admissible $a$ and $b$ have decreased over the years through the work of
others including D.~Knuth \cite{knuth}, J.~W.~Nicol \cite{nicol} and M.~Vsemirnov \cite{vsemirnov}, the latter giving the smallest known such $a$ and $b$
(but notably the same number of digits as the $a$ and $b$ in \cite{nicol}).
The result has also been generalized to other recursions; see
A.~Dubickas, A.~Novikas and J.~\v{S}iurys \cite{dns},
D.~Ismailescu, A.~Ko, C.~Lee and J.~Y.~Park \cite{iklp} and
I.~Lunev \cite{lunev}.
As the Graham result concludes with all $u_{n}$ being composite, the initial elements of the sequence, $a$ and $b$, are composite.
However, there is still a sense in which one can apply Shiu's result. To be precise, the smallest known example given by Vsemirnov is
done by taking
\[
a = 106276436867 \quad \text{and} \quad b = 35256392432.
\]
With $u_{n}$ defined as above, one can check that each $u_{n}$ is divisible by a prime from the set
\[
\mathcal P = \{ 2, 3, 5, 7, 11, 17, 19, 23, 31, 41, 47, 61, 107, 181, 541, 1103, 2521 \}.
\]
Setting
\[
N = \prod_{p \in \mathcal P} p = 1821895895860356790898731230,
\]
the value of $a$ and $b$ can be replaced by any integers $a$ and $b$ satisfying
\[
a \equiv 106276436867 {\hskip -4pt}\pmod{N} \quad \text{and} \quad
b \equiv 35256392432 {\hskip -4pt}\pmod{N}.
\]
As $\gcd(106276436867,N) = 31$ and $\gcd(35256392432,N) = 2$, these congruences are equivalent to taking
$a = 31 a'$ and $b = 2 b'$ where $a'$ and $b'$ are integers satisfying
\[
a' \equiv 3428272157 {\hskip -4pt}\pmod{58770835350334090028991330}
\]
and
\[
b' \equiv 17628196216 {\hskip -4pt}\pmod{910947947930178395449365615}.
\]
As a direct application of D.~Shiu's result, we have the following.
\begin{theorem}
For every $k \in \mathbb Z^{+}$, there are $k$ consecutive primes $p_{1}, p_{2}, \ldots, p_{k}$
and $k$ consecutive primes $q_{1}, q_{2}, \ldots, q_{k}$ such that for any $i \in \{ 1, 2, \ldots, k \}$,
the numbers $a = 31 p_{i}$ and $b = 2 q_{i}$ satisfy
$\gcd(a,b) = 1$ and have the property that
the $u_{n}$ defined by \eqref{grahamrecursion}
are all composite.
\end{theorem}
\vskip 0pt \noindent
This latter result is not meant to be particularly significant but rather an indication that Shiu's work
does provide information in cases where covering systems are used to form composite numbers.
Regarding open problems,
given the recent excellent works surrounding the non-existence of covering systems of particular forms
(cf.~\cite{BBMST, BBMST2, hough, houghnielsen}),
the authors are not convinced that widely digitally delicate primes exist in every base.
Thus, a tantalizing question is whether they exist or whether a positive proportion of the primes
in every base are widely digitally delicate.
In the opposite direction, as noted in \cite{filsou}, Carl Pomerance has asked for an unconditional proof that
there exist infinitely many primes which are not digitally delicate or which are not widely digitally delicate.
For other open problems in this direction, see the end of the introductions in \cite{filjuisou} and \cite{filsou}.
\section{The first steps of the argument}
As noted in the introduction, to prove Theorem~\ref{maintheorem},
the work of D.~Shiu \cite{shiu} implies that it suffices to obtain an arithmetic progression $An+B$,
with $A$ and $B$ relatively prime positive integers, such that every prime in the arithmetic progression
is widely digitally delicate. We will determine such an $A$ and $B$ by finding relatively prime
positive integers $A$ and $B$ satisfying property ($*$) given by
\vskip 5pt
\centerline{($*$){\ }\parbox[t]{11cm}{If $d \in \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$, then each number
in the set
\[
\mathcal A_{d} = \big\{ An+B + d\cdot 10^{k}: n \in \mathbb Z^{+}, k \in \mathbb Z^{+} \cup \{ 0 \} \big\}
\]
is composite.}}
\vskip 8pt\noindent
As changing a digit of $An+B$, including any one of its infinitely many leading zero digits, corresponds to adding or subtracting
one of the numbers $1, 2, \ldots, 9$ from a digit of $An+B$, we see that relatively prime positive integers $A$ and $B$
satisfying property ($*$) also satisfy the property we want, that every prime in $An+B$ is widely digitally delicate.
To find relatively prime positive integers $A$ and $B$ satisfying property ($*$), we make use of covering systems which we
define as follows.
\begin{definition}
A covering system (or covering) is a finite set of congruences
\[
x \equiv a_{1} {\hskip -4pt}\pmod{m_{1}}, \quad x \equiv a_{2} {\hskip -4pt}\pmod{m_{2}}, \quad \ldots, \quad x \equiv a_{r} {\hskip -4pt}\pmod{m_{r}},
\]
where $r \in \mathbb Z^{+}$, each $a_{j} \in \mathbb Z$, and each $m_{j} \in \mathbb Z^{+}$, such that every integer
satisfies at least one congruence in the set of congruences.
\end{definition}
\noindent
In other contexts in the literature, further restrictions can be made on the $m_{j}$, so we emphasize here that we
want to allow for $m_{j} = 1$ and for repeated moduli (so that the $m_{j}$ are not necessarily distinct). There will
be restrictions on the $m_{j}$ that will arise in the covering systems we build due to the approach we are using. We
will see these as we proceed.
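Whether a finite set of congruences covers $\mathbb Z$ can always be decided by testing one full period, i.e.~the residues modulo the least common multiple of the moduli. The sketch below (the helper name is ours) checks the classical five-congruence example with distinct moduli, often attributed to Erd\H{o}s:

```python
from math import lcm

def is_covering(congruences):
    """Check whether every integer satisfies some x ≡ a (mod m).
    Testing 0 <= x < lcm of the moduli suffices, by periodicity."""
    L = lcm(*(m for _, m in congruences))
    return all(any(x % m == a for a, m in congruences) for x in range(L))

classic = [(0, 2), (0, 3), (1, 4), (5, 6), (7, 12)]
assert is_covering(classic)
assert not is_covering([(0, 2), (1, 4)])  # misses x ≡ 3 (mod 4)
```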
For each $d \in \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$, we will create a separate covering system to show that the elements
of $\mathcal A_{d}$ in ($*$) are composite.
Table~\ref{tablenumcong} indicates, for each $d$, the number of different congruences in the covering system corresponding to $d$.
\begin{table}[!hbt]
\centering
\caption{Number of congruences for each covering}\label{tablenumcong}
\begin{minipage}{3 cm}
\centering
\begin{tabular}{|c|c|}
\hline
$d$ & \# cong. \\
\hline \hline
$-9$ & $232$ \\ \hline
$-8$ & $441$ \\ \hline
$-7$ & $1$ \\ \hline
$-6$ & $257$ \\ \hline
$-5$ & $268$ \\ \hline
$-4$ & $1$ \\ \hline
\end{tabular}
\end{minipage}
\begin{minipage}{3 cm}
\centering
\begin{tabular}{|c|c|}
\hline
$d$ & \# cong. \\
\hline \hline
$-3$ & $739$ \\ \hline
$-2$ & $289$ \\ \hline
$-1$ & $1$ \\ \hline
$1$ & $37$ \\ \hline
$2$ & $1$ \\ \hline
$3$ & $203$ \\ \hline
\end{tabular}
\end{minipage}
\begin{minipage}{3 cm}
\centering
\begin{tabular}{|c|c|}
\hline
$d$ & \# cong. \\
\hline \hline
$4$ & $26$ \\ \hline
$5$ & $1$ \\ \hline
$6$ & $19$ \\ \hline
$7$ & $137$ \\ \hline
$8$ & $1$ \\ \hline
$9$ & $4$ \\ \hline
\end{tabular}
\end{minipage}
\end{table}
The integers we are covering for each $d$ are the exponents $k$ on $10$ in the definition of $\mathcal A_{d}$.
In other words, we will want to view each exponent $k$ as satisfying one of the congruences in our covering system for a given $\mathcal A_{d}$.
In the end, the values of $A$ and $B$ will be determined by the congruences we choose for the covering systems as well as certain primes that
arise in our method.
We clarify that the work on digitally delicate primes in prior work mentioned in the introduction used a partial covering of the integers $k$,
that is, a set of congruences where most but not all integers $k$ satisfy at least one of the congruences,
together with a sieve argument.
The work in \cite{filsou} on widely digitally delicate primes used covering systems for $d \in \{ 1, 2, \ldots, 9 \}$ and the same approach
of partial coverings and sieves for $d \in \{ -9, -8, \ldots, -1 \}$.
The work in \cite{filjuisou}, like we will use in this paper, made use of covering systems for all $d \in \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$.
For \cite{filjuisou}, some of the covering systems could be handled rather easily by taking advantage of the fact that we were looking for
composite numbers satisfying a certain property rather than primes.
Next, we explain more precisely how we create and take advantage of a covering system for a given fixed $d \in \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$.
We begin with a couple of illustrative examples.
Table~\ref{tablenumcong} indicates that a number of the $d$ are handled with just one congruence.
This is accomplished by taking
\[
A \equiv 0 {\hskip -5pt}\pmod{3}
\qquad \text{and} \qquad
B \equiv 1 {\hskip -5pt}\pmod{3}.
\]
Observe that each element of $\mathcal A_{d}$ in ($*$) is divisible by $3$ whenever $d \equiv 2 \pmod{3}$.
Thus, since $A$ and $B$ are positive, as long as we also have $B > 3$, the elements of $\mathcal A_{d}$ for such $d$ are all composite, which is our goal.
Note the crucial role of the order of $10$ modulo the prime $3$. The order is $1$, and the covering system for each of these $d$ is simply $k \equiv 0 \pmod{1}$.
Every integer satisfies this congruence, so it is a covering system.
The modulus corresponds to the order of $10$ modulo $3$. Note also that we cannot use the prime $3$ in an analogous way to cover another digit $d$
because the choices for $A$ and $B$, and hence the congruences on $A$ and $B$ above, are to be independent of $d$.
For example, if $d = 4$, then $An+B + d\cdot 10^{k} \equiv 1 + 4 \equiv 2 \pmod{3}$ and, hence, $An+B + d\cdot 10^{k}$ will not be divisible by $3$.
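This one-congruence case can be checked numerically. In the sketch below, the residue $1$ stands for $B \equiv 1 \pmod{3}$ (and $A \equiv 0 \pmod 3$ contributes nothing):

```python
# 10 ≡ 1 (mod 3), so A*n + B + d*10**k ≡ 0 + 1 + d (mod 3) when
# A ≡ 0 and B ≡ 1 (mod 3); this vanishes exactly when d ≡ 2 (mod 3).
for d in (-7, -4, -1, 2, 5, 8):          # the digits with d ≡ 2 (mod 3)
    for k in range(10):
        assert (1 + d * 10**k) % 3 == 0
# ...and, as noted for d = 4, the prime 3 does not help for other digits:
assert (1 + 4 * 10**0) % 3 != 0
```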
As a second illustration, we see from Table~\ref{tablenumcong} that we handle the digit $d = 9$ with $4$ congruences. The congruences for $d = 9$ are
\[
k \equiv 0 {\hskip -5pt}\pmod{2}, \quad \
k \equiv 3 {\hskip -5pt}\pmod{4}, \quad \
k \equiv 1 {\hskip -5pt}\pmod{8}, \quad \
k \equiv 5 {\hskip -5pt}\pmod{8}.
\]
One easily checks that this is a covering system, that is, every integer $k$ satisfies one of these congruences. To take advantage of this covering system,
we choose a different prime $p$ for each congruence with $10$ having order modulo $p$ equal to the modulus. We used
the prime $11$ with $10$ of order $2$,
the prime $101$ with $10$ of order $4$,
the prime $73$ with $10$ of order $8$, and
the prime $137$ with $10$ of order $8$.
We take $A$ divisible by each of these primes. For ($*$), with $d = 9$, we want $An+B + 9 \cdot 10^{k}$ composite.
For $k \equiv 0 \pmod{2}$, we accomplish this by taking $B \equiv 2 \pmod{11}$ and $B > 11$ since then $An+B + 9 \cdot 10^{k} \equiv B + 9 \equiv 0 \pmod{11}$.
For $k \equiv 3 \pmod{4}$, we accomplish this by taking $B \equiv 90 \pmod{101}$ and $B > 101$ since then $An+B + 9 \cdot 10^{k} \equiv 90 + 9 \cdot 10^{3} \equiv 9090 \equiv 0 \pmod{101}$.
Similarly, for $k \equiv 1 \pmod{8}$ and $B \equiv 56 \pmod{73}$, we obtain $An+B + 9 \cdot 10^{k} \equiv 0 \pmod{73}$;
and for $k \equiv 5 \pmod{8}$ and $B \equiv 90 \pmod{137}$, we obtain $An+B + 9 \cdot 10^{k} \equiv 0 \pmod{137}$.
Thus, taking $B > 137$, we see that ($*$) holds with $d = 9$.
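All three ingredients of this $d = 9$ argument, namely that the moduli are the orders of $10$ modulo the chosen primes, that the congruences cover $\mathbb Z$, and that the stated residues of $B$ force a prime divisor, can be verified with a short sketch (the helper name is ours):

```python
def order_of_10(p):
    """Multiplicative order of 10 modulo a prime p not dividing 10."""
    k, x = 1, 10 % p
    while x != 1:
        x = x * 10 % p
        k += 1
    return k

# (residue a, modulus m, prime p, residue of B modulo p) for d = 9
data = [(0, 2, 11, 2), (3, 4, 101, 90), (1, 8, 73, 56), (5, 8, 137, 90)]

# each modulus equals the order of 10 modulo the associated prime
assert all(order_of_10(p) == m for _, m, p, _ in data)

# the four congruences on k cover the integers (one period suffices)
assert all(any(k % m == a for a, m, _, _ in data) for k in range(8))

# for every exponent k, the congruence covering k supplies a prime
# divisor of B + 9*10**k, since B ≡ b (mod p)
for k in range(40):
    assert any(k % m == a and (b + 9 * 10**k) % p == 0
               for a, m, p, b in data)
```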
Of some significance to our explanations later, we note that we could have interchanged the roles of the primes $73$ and $137$ since $10$ has the same order for each of these primes.
In other words, we could associate $137$ with the congruence $k \equiv 1 \pmod{8}$ above and associate $73$ with the congruence $k \equiv 5 \pmod{8}$.
Then for $k \equiv 1 \pmod{8}$ and $B \equiv 47 \pmod{137}$, we would have $An+B + 9 \cdot 10^{k} \equiv 0 \pmod{137}$; and
for $k \equiv 5 \pmod{8}$ and $B \equiv 17 \pmod{73}$, we would have $An+B + 9 \cdot 10^{k} \equiv 0 \pmod{73}$.
In general, in our construction of widely digitally delicate primes, we want each congruence
$k \equiv a \pmod{m}$ in a covering system associated with a prime $p$ for which the order of $10$ modulo $p$ is $m$,
but how we choose the ordering of those primes (which prime goes to which congruence) for a fixed modulus $m$ is irrelevant.
For each $d \in \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$,
we determine a covering system of congruences for $k$,
where each modulus $m$ corresponds to the order of $10$ modulo some prime $p$.
This imposes a condition on $A$, namely that $A$ is divisible by each of these primes $p$.
Fixing $d$, a congruence from our covering system $k \equiv a \pmod{m}$, and a corresponding prime $p$ with $10$ having order $m$ modulo $p$,
we determine $B$ such that $A n + B + d\cdot 10^{k} \equiv B + d\cdot 10^{a} \equiv 0 \pmod{p}$.
Note that the values of $d$, $a$ and $p$ dictate the congruence condition for $B$ modulo $p$.
Each prime $p$ will correspond to a unique congruence condition $B \equiv - d\cdot 10^{a} \pmod{p}$,
so the Chinese Remainder Theorem implies the existence of a $B \in \mathbb Z^{+}$ simultaneously satisfying all
the congruence conditions modulo primes on $B$.
As long as $B$ is large enough, then the condition ($*$) will hold.
To make sure that there is a prime of the form $An+B$, we will want $\gcd(A,B) = 1$.
For $k \equiv a \pmod{m}$ and a corresponding prime $p$ as above,
we will have $A$ divisible by $p$ and
$B \equiv - d\cdot 10^{a} \pmod{p}$.
Since $d \in \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$ and $p \nmid 10$, if $p \ge 11$, then $p \nmid d\cdot 10^{a}$ and, hence, $p \nmid B$.
We will not be using the primes $p \in \{ 2,5 \}$ as $10$ does not have an order modulo these primes.
We have already seen that we are using the prime $p = 3$ for $d \equiv 2 \pmod{3}$, so this ensures that $3 \nmid B$.
We will use $p = 7$ for $d \in \{-9, -8, -6, -5, -3, 3, 4 \}$, which then implies $7 \nmid B$.
Therefore, the condition $\gcd(A,B) = 1$ will hold.
Recall that we used the same congruence and corresponding prime in our covering system for each $d \equiv 2 \pmod{3}$.
There is no obstacle to repeating a congruence for different $d$ if the corresponding prime, having $10$ of order the modulus, is different.
But in the case of $d \equiv 2 \pmod{3}$, the same prime $3$ was used for different $d$. To illustrate how we can repeat the use of a prime,
we return to how we used the prime $p = 11$ above for $d = 9$. We ended up with $A \equiv 0 \pmod{11}$ and $B \equiv 2 \pmod{11}$.
In order for us to take advantage of the prime $p = 11$ for $d$, we therefore want
$A n + B + d\cdot 10^{k} \equiv 2 + d\cdot 10^{k} \equiv 0 \pmod{11}$.
It is easy to check that this holds for $(d,k) \in \{ (-9,1), (-2,0), (2,1), (9,0) \}$.
The case $(d,k) = (9,0)$ is from our example with $d = 9$ above.
The case $(d,k) = (2,1)$ does not serve a purpose for us as $d = 2$ was covered by our earlier example using the prime $3$ for all $d \equiv 2 \pmod{3}$.
The cases where $(d,k) \in \{ (-9,1), (-2,0) \}$ are significant, and we make use of congruences modulo $11$ in the covering systems for $d = -9$ and $d = -2$.
Thus, we are able to repeat the use of some primes for different values of $d$.
However, this is not the case for most primes we used. A complete list of the primes which we were able to use for more than one value of $d$
is given in Table~\ref{tablerepeatprimes}, together with the list of corresponding $d$'s. The function $\rho(p,m)$ in this table will be explained in the next section.
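Which pairs $(d,k)$ the prime $11$ can serve is itself a two-line computation; since $10$ has order $2$ modulo $11$, only the parity of $k$ matters:

```python
# with A ≡ 0 and B ≡ 2 (mod 11), we need 2 + d*10**k ≡ 0 (mod 11)
sols = sorted({(d, k % 2)
               for d in range(-9, 10) if d != 0
               for k in range(2)
               if (2 + d * 10**k) % 11 == 0})
assert sols == [(-9, 1), (-2, 0), (2, 1), (9, 0)]
```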
\begin{table}[!hbt]
\centering
\caption{Primes used for more than one digit $d$}\label{tablerepeatprimes}
\begin{minipage}{7 cm}
\centering
\begin{tabular}{|c|c|c|}
\hline
prime & $d$'s & $\rho(p,m)$ \\
\hline \hline
$3$ & $-7, -4, -1, 2, 5, 8$ & $1$ \\ \hline
$7$ & $-9, -8, -6, -5, -3, 3, 4$ & $1$ \\ \hline
$11$ & $-9, -2, 9$ & $1$ \\ \hline
$13$ & $-9, -3, 3, 4$ & $2$ \\ \hline
$17$ & $-8, -6, -3, -2, 7$ & $1$ \\ \hline
$19$ & $-6, 4$ & $1$ \\ \hline
$23$ & $-9, -8, -6, -3, 3, 7$ & $1$ \\ \hline
$29$ & $-9, -8, -6, 1, 3$ & $1$ \\ \hline
$31$ & $-8, -2, 6$ & $1$ \\ \hline
$37$ & $3, 4$ & $1$ \\ \hline
$43$ & $-8, -3, 1$ & $1$ \\ \hline
$53$ & $-8, -5, 3$ & $1$ \\ \hline
$61$ & $-6, 3, 6$ & $1$ \\ \hline
$67$ & $-9, 7$ & $1$ \\ \hline
$79$ & $-9, -5$ & $2$ \\ \hline
$89$ & $-6, -3, 7$ & $1$ \\ \hline
$103$ & $-9, -8, -3$ & $1$ \\ \hline
\end{tabular}
\end{minipage}
\begin{minipage}{5.5 cm}
\centering
\vspace{-.43cm}
\begin{tabular}{|c|c|c|}
\hline
prime & $d$'s & $\rho(p,m)$ \\
\hline \hline
$199$ & $-6, -3, 7$ & $1$ \\ \hline
$211$ & $-6, 6$ & $1$ \\ \hline
$241$ & $-6, 6$ & $2$ \\ \hline
$331$ & $-8, 7$ & $1$ \\ \hline
$353$ & $-6, 7$ & $1$ \\ \hline
$409$ & $-8, -3$ & $1$ \\ \hline
$449$ & $-9, 7$ & $2$ \\ \hline
$2161$ & $-6, 6$ & $3$ \\ \hline
$3541$ & $-6, 6$ & $1$ \\ \hline
$9091$ & $-6, 6$ & $1$ \\ \hline
$27961$ & $-6, 6$ & $2$ \\ \hline
$1676321$ & $-6, 6$ & $1$ \\ \hline
$3762091$ & $-6, 6$ & $2$ \\ \hline
$4188901$ & $-6, 6$ & $2$ \\ \hline
$39526741$ & $-6, 6$ & $3$ \\ \hline
$5964848081$ & $-6, 6$ & $2$ \\ \hline
\end{tabular}
\end{minipage}
\end{table}
Recalling that the modulus in a covering system is equal to the order of $10$ modulo a prime $p$,
the role of primes and the order of $10$ modulo those primes is significant in coming up with covering systems to deduce ($*$).
A modulus $m$ can be used in a given covering system as many times as there are primes with $10$ of order $m$.
Thus, for the covering system for $d = 9$, we saw the modulus $8$ being used twice as there are two primes
with $10$ of order $8$, namely the primes $73$ and $137$.
One can look at a list of primitive prime factors of $10^{k}-1$ such as in \cite{brill}, but we needed much more
extensive data than what is contained there.
Our approach uses that the complete list of primes for which $10$ has a given order $m$ is the same as the list
of primes dividing $\Phi_{m}(10)$ and not dividing $m$ where $\Phi_{m}(x)$ is the $m$-th cyclotomic polynomial (cf.~\cite{brill, filjuisou, filsou}).
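This characterization is easy to experiment with on small moduli. The sketch below (function names are ours) evaluates $\Phi_m(10)$ via M\"obius inversion, $\Phi_m(x) = \prod_{d \mid m} (x^{d}-1)^{\mu(m/d)}$, and recovers the primes used for the moduli $m = 6$ and $m = 8$:

```python
def mobius(n):
    """Möbius function by trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def cyclotomic_at_10(m):
    """Phi_m(10) via Phi_m(x) = prod over d | m of (x**d - 1)**mu(m/d)."""
    num = den = 1
    for d in range(1, m + 1):
        if m % d == 0:
            mu = mobius(m // d)
            if mu == 1:
                num *= 10**d - 1
            elif mu == -1:
                den *= 10**d - 1
    return num // den

assert cyclotomic_at_10(6) == 91 == 7 * 13       # 10 has order 6 mod 7, 13
assert cyclotomic_at_10(8) == 10001 == 73 * 137  # the primes used for d = 9
```

In general one must still discard the possible prime factor of $\Phi_m(10)$ that divides $m$, as noted above.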
We used Magma V2.23-1 on a 2017 MacBook Pro to determine different primes dividing $\Phi_{m}(10)$.
We did not always get a complete factorization but used that if the remaining unfactored part of $\Phi_{m}(10)$ is composite,
relatively prime to the factored part of $\Phi_{m}(10)$ and $m$, and not a prime power,
then there must be at least two further distinct prime factors of $\Phi_{m}(10)$.
This allowed us then to determine a lower bound on the number of distinct primes of a given order $m$.
Though we used most of these in our coverings, sometimes we found extra primes that we did not need to use.
In total, we made use of $673$ different moduli $m$ and $2596$ different primes dividing $\Phi_{m}(10)$ for such $m$.
Of the $2596$ different primes, there are $590$ which came from $295$ composite numbers arising from an unfactored part of some $\Phi_{m}(10)$,
and there are $63$ other composite numbers for which only one prime factor of each of the composite numbers was used.
The largest explicit prime (not coming from the $295+63 = 358$ composite numbers) has $1700$ digits, arising from testing what was
initially a large unfactored part of $\Phi_{m}(10)$ for primality and determining it is a prime.
The largest of the $358$ composite numbers has $17234$ digits. For obvious reasons, we will avoid listing
these primes and composites in this paper, though to help with verification of the results, we are providing the data
from our computations in \cite{Filweb}; more explicit tables can also be found in \cite{juillerat}.
Table~\ref{orderofprimes} in the appendix gives, for each of the $673$ different moduli $m$,
the detailed information on the number of distinct primes we used with $10$ of order $m$,
which we denote by $L(m)$. Thus, $L(m)$ is a lower bound on the total number of distinct primes with $10$ of order $m$.
Note that $L(m)$ is less than or equal to the number of distinct primes dividing $\Phi_{m}(10)$ but not dividing $m$.
For each $d \in \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$, the goal is to find a covering system so that ($*$) holds.
We have already given the covering systems we obtained for $d \equiv 2 \pmod{3}$ and for $d = 9$.
In the next section and the appendix, we elaborate on the covering systems for the remaining $d$.
We also explain how the reader can verify the data showing these covering systems satisfy the conditions needed for ($*$).
\section{Finishing the argument}
To finish the argument, we need to present a covering system for each value of $d$ in $\{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$
as described in the previous section. For the purposes of keeping the presentation of these covering systems manageable,
for each $m$ listed in Table~\ref{orderofprimes},
we take the $L(m)$ primes we found with $10$ of order $m$ and order them in some way.
Corresponding to the discussion concerning $d = 9$ and the primes $73$ and $137$,
the particular ordering is not important to us (for example, increasing order would be fine). Suppose the primes corresponding to $m$ are
ordered in some way as $p_{1}, p_{2}, \ldots, p_{L(m)}$. We define $\rho(p_{j},m) = j$; in other words,
$\rho(p,m)$ records the position of the prime $p$ in our ordering of the primes with $10$ of order $m$.
The particular values we used for $\rho(p_{j},m)$ are not important to the arguments. However, so that the entries in
Table~\ref{tablerepeatprimes} are correct, the entries for $\rho(p,m)$ there indicate the values we actually used for those primes.
For example, Table~\ref{orderofprimes} indicates there are $2$ primes of order $6$. One of them is $7$.
Table~\ref{tablerepeatprimes} indicates then that $\rho(7,6) = 1$. Thus, we put $7$ as the first of the $2$ primes
with $10$ of order $6$. The other prime with $10$ of order $6$ is $13$, and as Table~\ref{tablerepeatprimes}
indicates we set $13$ as the second of the $2$ primes with $10$ of order $6$.
Tables~\ref{covforminus9}-\ref{covfor9} give the covering systems used for each $d \in \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$ with
$d \not\equiv 2 \pmod{3}$. Rather than indicating the prime, which in some cases has thousands of digits, corresponding to each congruence
$k \equiv a \pmod{m}$ listed, we simply wrote the value of $\rho(p,m)$. As $m$ corresponds to the modulus used in the given congruence
$k \equiv a \pmod{m}$ and the ordering of the primes is not significant to our arguments (any ordering will do),
this is enough information to confirm the covering arguments.
That said, the time-consuming task of coming up with the $L(m)$ primes to order for each $m$ is nontrivial (at least at this point in time).
So that this work does not need to be repeated, a complete list of the $L(m)$ primes for each $m$ is given in \cite{Filweb}.
Further, the tables in the form of lists can be found there as well, with the third column in each case replaced by the prime we used
with $10$ of order the modulus of the congruence in the second column.
For clarity, recall that the primes were not explicitly computed in the case that the unfactored part of $\Phi_{m}(10)$ was
tested to be composite; instead the composite number is listed in place of both primes in \cite{Filweb}.
For the remainder of this section, we clarify how to verify the information in Tables~\ref{covforminus9}-\ref{covfor9}.
We address both verification of the covering systems and the information on the primes as listed in \cite{Filweb}.
\subsection{Covering Verification.}
The most direct way to check that a system $\mathcal C$ of congruences
\[
x \equiv a_{1} {\hskip -5pt}\pmod{m_{1}}, \quad x \equiv a_{2} {\hskip -5pt}\pmod{m_{2}}, \quad \ldots, \quad x \equiv a_{s} {\hskip -5pt}\pmod{m_{s}}
\]
is a covering system is to set $\ell = \text{lcm}(m_{1}, m_{2}, \ldots, m_{s})$ and then to check if every integer in the interval $[0,\ell-1]$ satisfies
at least one congruence in $\mathcal C$. If not, then $\mathcal C$ is not a covering system.
If on the other hand, every integer in $[0,\ell-1]$ satisfies a congruence in $\mathcal C$, then $\mathcal C$ is a covering system. To see the latter,
let $n$ be an arbitrary integer, and write $n = \ell q + r$ where $q$ and $r$ are integers with $0 \le r \le \ell-1$.
Since $r \in [0,\ell-1]$ satisfies some $x \equiv a_{j} \pmod{m_{j}}$ and since $\ell \equiv 0 \pmod{m_{j}}$, we deduce for this same $j$ that
$n = \ell q + r \equiv a_{j} \pmod{m_{j}}$.
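The direct verification just described is immediate to implement. The sketch below (a minimal Python illustration, not the Maple code actually used) represents a system of congruences as a list of pairs $(a_{j}, m_{j})$ and tests every residue in $[0,\ell-1]$.

```python
from math import lcm  # Python 3.9+

def is_covering(congruences):
    """congruences: list of pairs (a, m) representing x = a (mod m).
    Tests every residue modulo ell = lcm of the moduli."""
    ell = lcm(*(m for _, m in congruences))
    return all(any((r - a) % m == 0 for a, m in congruences)
               for r in range(ell))
```

For instance, the classical covering system $0 \pmod 2$, $0 \pmod 3$, $1 \pmod 4$, $5 \pmod 6$, $7 \pmod{12}$ passes this test, while dropping any one of its congruences causes it to fail.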
The above is a satisfactory approach if $\ell$ is not too large.
For the values of $d$ in $ \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$ with $d \not\equiv 2 \pmod{3}$, the least common multiples $\ell$ of the moduli of the congruences in
Tables~\ref{covforminus9}-\ref{covfor9}
are listed in Table~\ref{lcmtable}.
The maximum prime divisor of $\ell$ is also listed in the fourth column of Table~\ref{lcmtable}.
The value of $\ell$ can exceed $10^{12}$, so we found a more efficient way to test whether one of our systems $\mathcal C$ of congruences,
where $\ell$ is large, is a covering system.
\begin{table}[!hbt]
\centering
\caption{Least common multiple of the moduli for the coverings in each table}\label{lcmtable}
\begin{minipage}{6.6 cm}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$d$ & Table & $\ell$ & max $p$ \\
\hline \hline
$-9$ & $5$ & $14433138720$ & $31$ \\ \hline
$-8$ & $6$ & $699847948800$ & $17$ \\ \hline
$-6$ & $7$ & $1045044000$ & $29$ \\ \hline
$-5$ & $8$ & $56216160$ & $13$ \\ \hline
$-3$ & $9$ & $1486147703040$ & $19$ \\ \hline
$-2$ & $10$ & $321253732800$ & $23$ \\ \hline
\end{tabular}
\end{minipage}
\begin{minipage}{5.6 cm}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$d$ & Table & $\ell$ & max $p$ \\
\hline \hline
$1$ & $11$ & $5040$ & $7$ \\ \hline
$3$ & $12$ & $133333200$ & $37$ \\ \hline
$4$ & $13$ & $1296$ & $3$ \\ \hline
$6$ & $14$ & $360$ & $5$ \\ \hline
$7$ &$15$ & $18295200$ & $11$ \\ \hline
$9$ & $16$ & $8$ & $2$ \\ \hline
\end{tabular}
\end{minipage}
\end{table}
Suppose $\ell > 10^{6}$ in Table~\ref{lcmtable} and
the corresponding collection of congruences coming from the table indicated in the second column is $\mathcal C$.
Let $q$ be the largest prime divisor of $\ell$ as indicated in the fourth column.
Let $w = 4 \cdot 3 \cdot 5 \cdot q$.
This choice of $w$ was made on the basis of some trial and error;
other choices are certainly reasonable.
We do, however, want (and indeed have) that $w$ divides $\ell$.
Based on the comments above, we would like to know if every integer in the interval $[0,\ell-1]$ satisfies
at least one congruence in $\mathcal C$.
The basic idea is to take each $u \in [0,w-1]$ and to consider the integers that are congruent to $u$ modulo $w$ in $[0,\ell-1]$.
One advantage of doing this is that not every congruence in $\mathcal C$ needs to be considered.
For example, take $d = -3$. Then Table~\ref{lcmtable} indicates $\ell = 1486147703040$
and Table~\ref{tablenumcong} indicates the number of congruences in $\mathcal C$ is $739$.
From Table~\ref{covforminus3}, the first few of the congruences in $\mathcal C$ are
\[
k \equiv 4 {\hskip -5pt}\pmod{6}, \quad
k \equiv 5 {\hskip -5pt}\pmod{6}, \quad
k \equiv 0 {\hskip -5pt}\pmod{16}, \quad
k \equiv 11 {\hskip -5pt}\pmod{21}.
\]
Here, $w = 4 \cdot 3 \cdot 5 \cdot 19 = 1140$.
If we take $u = 0$, then only the third of these congruences can be satisfied by an integer $k$ congruent to $u$ modulo $w$,
as each of the other ones requires $k \not\equiv 0 \pmod{3}$ whereas $k \equiv u \pmod{w}$ requires $k \equiv 0 \pmod{3}$.
Let $\mathcal C'$ be the congruences in $\mathcal C$ which are consistent with $k \equiv u \pmod{w}$.
One can determine these congruences by using that there exist integers satisfying both
$k \equiv a \pmod{m}$ and $k \equiv u \pmod{w}$ if and only if $a \equiv u \pmod{\gcd(m,w)}$.
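This criterion (an instance of the Chinese remainder theorem) is simple to apply. The sketch below rechecks the $d = -3$, $u = 0$ example above, where among the four displayed congruences only $k \equiv 0 \pmod{16}$ is consistent with $k \equiv 0 \pmod{1140}$.

```python
from math import gcd

def consistent(a, m, u, w):
    """True iff k = a (mod m) and k = u (mod w) have a common solution,
    i.e. iff a = u (mod gcd(m, w))."""
    return (a - u) % gcd(m, w) == 0
```

Here `consistent(0, 16, 0, 1140)` returns `True`, while `consistent(4, 6, 0, 1140)`, `consistent(5, 6, 0, 1140)` and `consistent(11, 21, 0, 1140)` all return `False`.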
Observe that, with $u \in [0,w-1]$ fixed, we would like to know if each integer $v$ of the form
\begin{equation}\label{vequat}
v = w t + u, \quad \text{ with } 0 \le t \le (\ell/w)-1
\end{equation}
satisfies at least one congruence in $\mathcal C'$.
The main advantage of this approach is that, as we shall now see, not all $\ell/w$ values of $t$ need to be considered.
First, we note that if $\mathcal C'$ is the empty set, then the integers in \eqref{vequat} are not covered and therefore
$\mathcal C$ is not a covering system.
Suppose then that $|\mathcal C'| \ge 1$.
Let $\ell'$ denote the least common multiple of the moduli in $\mathcal C'$.
Let $\delta = \gcd(w,\ell')$.
We claim that we need only consider $v = w t + u$ where $0 \le t \le (\ell'/\delta)-1$.
To see this, suppose we know that every $v = w t + u$ with $0 \le t \le (\ell'/\delta)-1$ satisfies one of the congruences in $\mathcal C'$.
There are integers $q$, $q'$, $r$ and $r'$ satisfying
$t = \ell' q' + r'$ where $0 \le r' \le \ell'-1$ and $r' = (\ell'/\delta) q + r$, where $0 \le r \le (\ell'/\delta)-1$.
Then
\[
v = w t + u = w \ell' q' + w r' + u = w \ell' q' + (w/\delta) \ell' q + w r + u.
\]
The definition of $\delta$ implies that $w/\delta \in \mathbb Z$.
As each modulus in $\mathcal C'$ divides $\ell'$, we see that $v$ satisfies a congruence in $\mathcal C'$ if and only if
$w r + u$ does. Here, $w$ and $u$ are fixed and $0 \le r \le (\ell'/\delta)-1$.
Thus, we see that for each $u \in [0,w-1]$, we can restrict to determining whether $v$ in \eqref{vequat} satisfies a congruence
in $\mathcal C'$ for $0 \le t \le (\ell'/\delta)-1$.
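Putting the pieces together, the faster covering test can be sketched as follows (again our own Python illustration; it assumes, as in our setting, that $w$ divides the least common multiple $\ell$ of the moduli).

```python
from math import gcd, lcm  # Python 3.9+

def is_covering_fast(congruences, w):
    """Covering test via the residue-class reduction: for each u modulo w,
    restrict to the consistent congruences C' and check only ell'/delta
    values of t, rather than all ell/w of them."""
    for u in range(w):
        Cp = [(a, m) for a, m in congruences
              if (a - u) % gcd(m, w) == 0]   # congruences consistent with u
        if not Cp:                           # class of u is entirely uncovered
            return False
        ellp = lcm(*(m for _, m in Cp))
        delta = gcd(w, ellp)
        for t in range(ellp // delta):
            v = w * t + u
            if not any((v - a) % m == 0 for a, m in Cp):
                return False
    return True
```

For example, with the classical covering system $0 \pmod 2$, $0 \pmod 3$, $1 \pmod 4$, $5 \pmod 6$, $7 \pmod{12}$ and $w = 4$ the test succeeds, and it correctly rejects systems that are not coverings.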
Returning to the example of $d = -3$, $\ell = 1486147703040$ and $|\mathcal C| = 739$,
where $w = 1140$ and we considered $u = 0$, one can check that $|\mathcal C'| = 19$,
$\ell' = 12640320$, $\delta = w$ and $\ell'/\delta = 11088$. Thus, what started out as ominously checking whether over $10^{12}$ integers
each satisfy at least one of $739$ different congruences is reduced in the case of $u = 0$ to looking at whether $11088$ integers
each satisfy at least one of $19$ different congruences.
As $u \in [0,w-1]$ varies, the number of computations does as well. An extreme case for $d = -3$ occurs for $u = 75$, where
we get $\ell'/\delta = 14325696$ and $|\mathcal C'| = 47$. As $d$ and $u$ vary, though, this computation becomes manageable for
determining that we have covering systems for each $d$ in $ \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$ with $d \not\equiv 2 \pmod{3}$
and $\ell > 10^{6}$. On a 2017 MacBook Pro running Maple 2019 with a 2.3 GHz Dual-Core Intel Core i5 processor, the total cpu time for
determining that the systems of congruences in Tables~\ref{covforminus9}-\ref{covfor9} are all covering systems was approximately
$2.9$ cpu hours, with almost all of this time spent on the case $d = -3$, which took $2.7$ hours. The largest value of $\ell'/\delta$
encountered was $\ell'/\delta = 14325696$ which occurred precisely for $d = -3$ and $u \in \{ 75, 303, 531, 759, 987 \}$.
\subsection{Data check.}
The most cumbersome task for us was the determination of the data in Table~\ref{orderofprimes}.
As noted earlier, although the reader can check the data there directly, we have made the list of primes corresponding to each $m$
available through \cite{Filweb}. With the list of such primes for each $m$, it is still worth indicating how the data can be checked.
Recall, in particular, the list of primes is not explicit in the case that there was an unfactored part of $\Phi_{m}(10)$.
In this subsection, we elaborate on what checks should be and were done.
All computations below were done with the MacBook Pro mentioned at the end of the last subsection and using Magma V2.23-1.
For each modulus $m$ used in our constructions (listed in Table~\ref{orderofprimes}), we made a list of primes
$p_{1}, p_{2}, \ldots, p_{s}$, written in increasing order, together with up to two additional primes $q_{1}$ and $q_{2}$,
included after $p_{s}$ on the list but not written explicitly (as we will discuss). Each prime came from a factorization or
partial factorization of $\Phi_{m}(10)$.
The primes $p_{1}, p_{2}, \ldots, p_{s}$ are the distinct primes appearing in the factored part of $\Phi_{m}(10)$,
and as noted earlier do not include primes dividing $m$.
In some cases, a complete factorization was found for $\Phi_{m}(10)$.
For such $m$, there are no additional primes $q_{1}$ and $q_{2}$.
If $\Phi_{m}(10)$ had an unfactored part $Q > 1$ (already tested to be composite), then we checked that
$Q$ is relatively prime to $m p_{1} p_{2} \cdots p_{s}$ and that $Q$ is not of the form $N^{k}$ with
$N \in \mathbb Z^{+}$ and $k$ an integer greater than or equal to $2$.
As this was always the case for the $Q$ tested, we knew each such $Q$ had two distinct prime factors $q_{1}$ and $q_{2}$.
We deduce that there are at least two more primes $q_{j}$, $j \in \{ 1,2 \}$, different from $p_{1}, p_{2}, \ldots, p_{s}$ for which
$10$ has order $m$ modulo $q_{j}$. As the data only contains the primes used in the covering systems,
we only included the primes $q_{1}$ and $q_{2}$ that were used. Thus, despite $Q$ having at least two distinct prime divisors,
we may have listed anywhere from $0$ to $2$ of them. The question arises, however, as to how one can list primes that we do not know;
there are primes $q_{1}$ and $q_{2}$ dividing $Q$, but we were unable to (or chose not to) factor $Q$ to determine them explicitly.
Instead of listing $q_{1}$ and $q_{2}$ then, we opted to list $Q$. Thus, for each $m$ we associated a list of one of the forms
\[
[p_{1}, p_{2}, \ldots, p_{s}], \quad [p_{1}, p_{2}, \ldots, p_{s},Q], \quad [p_{1}, p_{2}, \ldots, p_{s},Q,Q],
\]
depending on whether we used no prime factor of $Q$ (including the case that $Q$ did not exist), exactly one prime factor of $Q$, or two prime factors of $Q$, respectively.
It is possible that $s=0$; for example, the lists associated with the moduli $2888$ and $2976$ each take the middle form with no $p_{j}$ and one composite number.
For a fixed $m$, given such a list, say from \cite{Filweb}, one merely needs to check:
\begin{itemize}
\setlength\itemsep{-0.25em}
\item
Each element of the list divides $\Phi_{m}(10)$.
\item
Each element of the list is relatively prime to $m$.
\item
There is at most one composite number, say $Q > 1$, in the list, which may appear at most twice. The other numbers in the list are distinct primes.
\item
If the composite number $Q$ exists, then $\gcd(Q, p_{1} p_{2} \cdots p_{s}) = 1$.
\item
If the composite number $Q$ exists twice, then $Q^{1/k} \not\in \mathbb Z^{+}$ for every integer $k \in [2, \log(Q)/\log(2)]$.
\end{itemize}
\noindent
The upper bound in the last item above is simply because $k > \log(Q)/\log(2)$ implies $1 < Q^{1/k} < 2$ and, hence, $Q^{1/k}$ is not an integer.
For each $m$, the value of $L(m)$ in Table~\ref{orderofprimes} is simply the number of elements in the list associated with $m$.
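The five checks above are straightforward to carry out. The sketch below is our own minimal Python illustration (the actual verification used Magma; the naive primality and perfect-power tests here are stand-ins suitable only for small inputs). It computes $\Phi_m(10)$ via the standard M\"obius formula $\Phi_m(x) = \prod_{d \mid m}(x^d - 1)^{\mu(m/d)}$.

```python
from math import gcd

def mobius(n):
    """Mobius function by trial division (fine for the small moduli m)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0                      # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def cyclotomic_at_10(m):
    """Phi_m(10) from Phi_m(x) = prod_{d | m} (x^d - 1)^{mu(m/d)}."""
    num = den = 1
    for d in range(1, m + 1):
        if m % d == 0:
            mu = mobius(m // d)
            if mu == 1:
                num *= 10 ** d - 1
            elif mu == -1:
                den *= 10 ** d - 1
    return num // den

def is_prime(n):
    """Naive trial division; a stand-in for a serious primality test."""
    if n < 2:
        return False
    p = 2
    while p * p <= n:
        if n % p == 0:
            return False
        p += 1
    return True

def is_perfect_power(Q):
    """True iff Q = N**k for integers N >= 1 and k >= 2."""
    k = 2
    while 2 ** k <= Q:
        lo, hi = 1, Q
        while lo < hi:                        # integer k-th root by bisection
            mid = (lo + hi + 1) // 2
            if mid ** k <= Q:
                lo = mid
            else:
                hi = mid - 1
        if lo ** k == Q:
            return True
        k += 1
    return False

def verify_list(m, entries):
    """Run the five bullet-point checks on the list associated with m."""
    phi = cyclotomic_at_10(m)
    assert all(phi % q == 0 for q in entries)        # divides Phi_m(10)
    assert all(gcd(q, m) == 1 for q in entries)      # coprime to m
    comps = [q for q in entries if not is_prime(q)]
    primes = [q for q in entries if is_prime(q)]
    assert len(set(comps)) <= 1 and len(comps) <= 2  # one Q, at most twice
    assert len(set(primes)) == len(primes)           # distinct primes
    assert all(gcd(Q, p) == 1 for Q in comps for p in primes)
    if len(comps) == 2:
        assert not is_perfect_power(comps[0])
    return True
```

For example, `verify_list(6, [7, 13])` succeeds, since $\Phi_6(10) = 91 = 7 \cdot 13$.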
With the data from the tables in the Appendix, also available in \cite{Filweb} with the indicated primes $p_{1}, p_{2}, \ldots, p_{s}, q_{1}, q_{2}$ depending on $m$ as above,
some further details need to be checked to fully justify the computations.
We verified that whenever $m$ is used as a modulus in a table, it was associated with one of the primes dividing $\Phi_{m}(10)$.
Furthermore, for any given $d \in \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$, the primes used as the congruences vary are all distinct,
noting that $q_{1}$ and $q_{2}$, for a given $m$, will be denoted by the same number $Q$ but represent two distinct prime divisors of $Q$.
As $d$ varies, a given modulus $m$ and a prime $p$ dividing $\Phi_{m}(10)$ can be used more than once as indicated in Table~\ref{tablerepeatprimes}.
To elaborate, suppose such an $m$ and $p$ are used for each $d \in \mathcal D \subseteq \{ -9, -8, \ldots, -1 \} \cup \{ 1, 2, \ldots, 9 \}$.
For each $d \in \mathcal D$, then, there corresponds a congruence $k \equiv a \pmod{m}$, where $a = a(d)$ will depend on $d$, as well as $m$ and $p$.
As noted earlier, this is permissible if and only if the values of $d \cdot 10^{a(d)}$ are congruent modulo $p$ for all $d \in \mathcal D$.
Thus, for each $p$ that occurs in more than one table, as in Table~\ref{tablerepeatprimes}, a check is done to verify the corresponding values of $d \cdot 10^{a(d)}$
are congruent modulo $p$.
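This last check amounts to computing the residues $d \cdot 10^{a(d)} \bmod p$ and comparing them. A Python sketch follows; the $(d, a(d))$ pairs in the example are hypothetical, chosen only to illustrate the check, and do not come from the actual tables.

```python
def compatible_uses(p, uses):
    """uses: list of pairs (d, a) recording that the congruence with shift a
    was paired with the prime p in the table for digit d.  The pairing is
    permissible iff all values d * 10**a agree modulo p."""
    residues = {(d * pow(10, a, p)) % p for d, a in uses}
    return len(residues) == 1
```

For instance, `compatible_uses(7, [(1, 0), (3, 5)])` returns `True`, since $1 \cdot 10^{0} \equiv 3 \cdot 10^{5} \equiv 1 \pmod 7$, while `compatible_uses(7, [(1, 0), (2, 0)])` returns `False`.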
The verification of the covering systems needed for Theorem~\ref{maintheorem} is complete, and the work of D.~Shiu \cite{shiu} now
implies the theorem.
% https://arxiv.org/abs/2108.00987
\title{Threshold Ramsey multiplicity for odd cycles}
\begin{abstract}
The Ramsey number $r(H)$ of a graph $H$ is the minimum $n$ such that any two-coloring of the edges of the complete graph $K_n$ contains a monochromatic copy of $H$. The threshold Ramsey multiplicity $m(H)$ is then the minimum number of monochromatic copies of $H$ taken over all two-edge-colorings of $K_{r(H)}$. The study of this concept was first proposed by Harary and Prins almost fifty years ago. In a companion paper, the authors have shown that there is a positive constant $c$ such that the threshold Ramsey multiplicity for a path or even cycle with $k$ vertices is at least $(ck)^k$, which is tight up to the value of $c$. Here, using different methods, we show that the same result also holds for odd cycles with $k$ vertices.
\end{abstract}
\section{Introduction}
The \emph{Ramsey number} $r(H)$ of a graph $H$ is the minimum positive integer $n$ such that any two-coloring of the edges of the complete graph $K_n$ on $n$ vertices contains a monochromatic copy of $H$. Ramsey in 1930 proved that these numbers exist.
However, determining or even estimating Ramsey numbers remains a formidable challenge for most graphs. For instance, the Ramsey number of $K_5$ is already not known, while the longstanding bounds $2^{k/2} \leq r(K_k) \leq 4^k$ have only been improved by lower-order factors~\cite{ConlonUp, Sah, Spencer}.
To date, there are only a few non-trivial families of graphs for which the Ramsey number is known exactly, including stars, paths, and cycles.
Let $P_k$ and $C_k$ denote the path and cycle on $k$ vertices, respectively. In 1967, Gerencs\'er and Gy\'arf\'as~\cite{GG} determined the Ramsey number of paths, namely,
\[r(P_k) = k - 1+ \lfloor k/2 \rfloor.\]
For cycles, the general case was solved independently by Rosta~\cite{Ros} and by Faudree and Schelp~\cite{cycle2}, who showed that
\[ r(C_k) = 3k/2- 1 \text{ if } k \geq 6 \text{ is even \ \ and \ \ } r(C_k)=2k-1 \text{ if } k \geq 5 \text{ is odd}.\]
A more general problem than computing Ramsey numbers is to determine the {\it Ramsey multiplicity} $M(H, n)$, the minimum number of monochromatic copies of $H$ guaranteed in any two-edge-coloring of $K_n$. Indeed, it is easy to check that $M(H, n) = 0$ if and only if $n < r(H)$.
The asymptotic behaviour of $M(H, n)$ when $H$ is fixed and $n$ tends to infinity has attracted considerable attention. This is in part because of a famous conjecture of Erd\H{o}s~\cite{Erdos62} stating that if $H$ is a clique, then the value of $M(H, n)$ is asymptotically equal to the expected number of monochromatic copies of $H$ in a uniformly random two-edge-coloring of $K_n$. Unfortunately, this conjecture (and a later generalization to all graphs~\cite{BRcommon}) is false already for $H = K_4$, as first shown by Thomason~\cite{Tcommon} (see also~\cite{K4common, Sidcommon}). However, it remains an interesting open problem to determine which graphs satisfy the conjecture, known in the literature as \emph{common graphs}. For instance, the non-three-colorable $5$-wheel is known to be common~\cite{HHKNR} and some hope remains that all bipartite graphs are common because of a connection to a celebrated conjecture of Sidorenko and Erd\H{o}s--Simonovits~\cite{Sidorenko,Sidorenko2, Simon} (see~\cite{CFS, CKLL, CL20, KLL, LS, S1} for some recent results towards this conjecture). We refer the interested reader to~\cite[Section 2.6]{Ramseysurvey} for more on this fascinating subject.
Another much-studied problem concerns the value of $M(H, n)$ when it first becomes positive, i.e., when $n = r(H)$. As in our companion paper \cite{CFSW}, we refer to this value as the \emph{threshold Ramsey multiplicity}.
\begin{definition}
The \emph{threshold Ramsey multiplicity} $m(H)$ of a graph $H$ is the minimum number of monochromatic copies of $H$ in any two-coloring of the edges of $K_n$ with $n=r(H)$. In other words, \[m(H) = M(H, r(H)).\]
\end{definition}
The threshold Ramsey multiplicity was first studied systematically by Harary and Prins \cite{HP} almost fifty years ago. The exact value of the threshold Ramsey multiplicity is known for all graphs with at most $4$ vertices~\cite{Hsurvey, HP, K4}, but, in general, determining or even providing a non-trivial lower bound on the threshold Ramsey multiplicity appears to be quite challenging. In fact, the behavior of $m(H)$ can be rather erratic. For instance, Harary and Prins \cite{HP} proved that $m(K_2) = 1$ and $m(K_{1,k}) =1$ for $k$ even, but $m(K_{1,k})=2k$ for $k \geq 3$ odd.
In the same paper \cite{HP}, Harary and Prins asked for a determination of $m(P_k)$ and $m(C_k)$. It is this question that concerns us in this paper and its companion~\cite{CFSW}. Indeed, in~\cite{CFSW}, not only did we provide the first non-trivial bound for the Ramsey multiplicity of paths and even cycles, but the bound is tight up to a lower-order factor.
\begin{theorem}[\cite{CFSW}]\label{thm:even}
There is a positive constant $c$ such that, for every positive integer $k$, the threshold Ramsey multiplicity of the path with $k$ vertices satisfies $m(P_k) \geq (ck)^k$ and, if $k$ is even, the threshold Ramsey multiplicity of the cycle on $k$ vertices satisfies $m(C_k) \geq (ck)^k$.
\end{theorem}
In this paper, we address Harary and Prins' question for odd cycles. Unlike the cases studied in~\cite{CFSW}, this odd-cycle case has received considerable previous attention, with Rosta and Sur\'anyi \cite{RStm} already proving the exponential lower bound $m(C_k)\geq 2^{c k}$ in the 1970's. This was later improved to a superexponential bound in an unpublished work of Rosta (see~\cite{KR}). More recently, K\'arolyi and Rosta~\cite{KR} improved the lower bound to $m(C_k) \geq k^{c k}$. To the best of our knowledge, this was the state-of-the-art prior to our result, which we now state.
\begin{theorem}\label{thm:main}
There is a positive constant $c$ such that, for every odd positive integer $k$, the threshold Ramsey multiplicity of the cycle on $k$ vertices satisfies $m(C_k) \geq (ck)^k$.
\end{theorem}
As for paths and even cycles, this bound is tight up to the constant $c$. However, it is proved using rather different methods to those employed in~\cite{CFSW}, because the Ramsey numbers, and the associated extremal colorings, are quite different for odd cycles and for paths and even cycles. To describe the extremal colorings in the odd setting, consider the red/blue edge-coloring $\chi(a,b)$ of the complete graph on $n=a+b$ vertices with vertex set $A \cup B$, $|A|=a$ and $|B|=b$, where $A$ and $B$ form blue cliques and all edges between $A$ and $B$ are red. Let $k \geq 5$ be an odd positive integer. Then $\chi(k-1, k-1)$ is a coloring of the complete graph on $r(C_k) - 1 = 2k - 2$ vertices with no monochromatic $C_k$, while $\chi(k, k-1)$ is a coloring of the complete graph on $r(C_k) = 2k-1$ vertices with exactly $(k-1)!/2$ monochromatic copies of $C_k$, as all monochromatic $C_k$ are in the blue clique of order $k$.
This provides an upper bound on $m(C_k)$ showing that the bound in Theorem~\ref{thm:main} is tight apart from a lower-order factor. It also suggests that our bound can be strengthened, as follows.
\begin{conjecture} \label{conj:ck}
For any sufficiently large odd integer $k$, $m(C_k) = (k-1)!/2$.
\end{conjecture}
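As a brute-force sanity check on small cases (our own illustration; it is far too slow for large $k$), one can count the monochromatic copies of $C_5$ in $\chi(5,4)$ and $\chi(4,4)$ directly and recover the values $4!/2 = 12$ and $0$ discussed above.

```python
from itertools import combinations, permutations

def mono_cycle_count(k, a, b):
    """Count monochromatic k-cycles in the coloring chi(a, b): A and B are
    blue cliques and all edges between A and B are red."""
    n = a + b

    def blue(u, v):                 # same side of the partition => blue edge
        return (u < a) == (v < a)

    count = 0
    for S in combinations(range(n), k):
        first, rest = S[0], S[1:]
        for perm in permutations(rest):     # cycles through S, first fixed
            cyc = (first,) + perm
            if len({blue(cyc[i], cyc[(i + 1) % k]) for i in range(k)}) == 1:
                count += 1
    return count // 2               # each cycle was counted in two directions
```

Here `mono_cycle_count(5, 5, 4)` returns $12$ and `mono_cycle_count(5, 4, 4)` returns $0$, matching the discussion of $\chi(k, k-1)$ and $\chi(k-1, k-1)$ for $k = 5$.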
\section{Proof of Theorem \ref{thm:main}}
\subsection{Preliminaries}
As in our proof of Theorem \ref{thm:even} in \cite{CFSW}, we will use Szemer\'edi's regularity lemma, an important tool which gives a rough structural decomposition for all graphs. Roughly speaking, for any graph, the regularity lemma outputs a vertex partition of the graph into a small number of parts, where the bipartite graph between almost every pair of parts behaves like a random graph. Among its many applications (see, for example,~\cite{Regularity}), this decomposition is useful for embedding and counting copies of sparse graphs, such as the cycles that concern us here.
To formally state the regularity lemma, we first need some definitions to quantify what is meant by a ``random-like" bipartite graph.
For a pair of disjoint vertex subsets $(X, Y)$ of a graph, let $d(X, Y) = e(X,Y)/|X||Y|$
denote the density of edges between $X$ and $Y$.
\begin{definition}[$\epsilon$-regular pair]
A pair $(X, Y)$ of disjoint vertex subsets of a graph is said to be $\epsilon$-regular if, for all subsets $U \subset X, V \subset Y$ such that $|U| \geq \epsilon |X|$ and $|V| \geq \epsilon |Y|$, $|d(U, V) - d(X,Y)| \leq \epsilon$.
\end{definition}
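For very small graphs, $\epsilon$-regularity can be checked directly from the definition by enumerating all admissible subset pairs. The sketch below (our own illustration; exponential in $|X| + |Y|$, so usable only on tiny examples) does exactly this.

```python
from itertools import combinations

def density(adj, U, V):
    """Edge density d(U, V) between disjoint vertex sets U and V, where
    adj maps each vertex of U to its set of neighbors."""
    edges = sum(1 for u in U for v in V if v in adj[u])
    return edges / (len(U) * len(V))

def is_eps_regular(adj, X, Y, eps):
    """Brute-force check that the pair (X, Y) is eps-regular."""
    d = density(adj, X, Y)
    for i in range(1, len(X) + 1):
        if i < eps * len(X):
            continue                         # subset U too small to matter
        for U in combinations(X, i):
            for j in range(1, len(Y) + 1):
                if j < eps * len(Y):
                    continue
                for V in combinations(Y, j):
                    if abs(density(adj, U, V) - d) > eps:
                        return False
    return True
```

A complete bipartite pair has $d(U, V) = 1$ for all subsets and is therefore $\epsilon$-regular for every $\epsilon > 0$, whereas concentrating all the edges on a few vertices destroys regularity.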
The next lemma sets out two basic facts about $\epsilon$-regular pairs that will be useful later.
\begin{lemma}\label{lem:epsregprop}
If $(X,Y)$ is an $\epsilon$-regular pair and $d(X, Y)=d$, then the following hold:
\begin{enumerate}[(i)]
\item If $Y' \subset Y$ satisfies $|Y'| \geq \epsilon |Y|$, then the number of vertices in $X$ with degree in $Y'$ greater than $(d+\epsilon)|Y'|$ is less than $\epsilon |X|$ and the number of vertices in $X$ with degree in $Y'$ less than $(d-\epsilon)|Y'|$ is less than $\epsilon |X|$.
\item If $X' \subset X$ and $Y'\subset Y$ are such that $|X'| \geq \alpha |X|$ and $|Y'| \geq \alpha |Y|$, then $(X', Y')$ is $\max(\epsilon/\alpha, 2\epsilon)$-regular.
\end{enumerate}
\end{lemma}
A vertex partition is called {\it equitable} if each pair of parts differ in size by at most one. We are now ready to state the regularity lemma.
\begin{lemma}[Szemer\'edi’s regularity lemma]\label{reglem}
For every $\epsilon>0$ and positive integer $l$, there are positive integers $n_0$ and
$M_0$ such that every graph $G$ with at least $n_0$ vertices admits an equitable vertex partition $V(G) = V_1 \cup \dots \cup V_M$ into $M$ parts with $l \leq M \leq M_0$ where
all but at most $\epsilon \binom{M}{2}$ pairs of parts $(V_i, V_j )$ with $1 \leq i < j \leq M$ are $\epsilon$-regular.
\end{lemma}
In practice, we will use the following standard colored version of the regularity lemma.
\begin{lemma}[Colored regularity lemma] \label{reglem-twocolor}
For every $\epsilon>0$ and positive integer $l$, there are positive integers $n_0$ and $M_0$ such that every two-edge-coloring of the complete graph $K_n$ with $n \geq n_0$ in red and blue admits an equitable vertex partition $V(G)=V_1 \cup \dots \cup V_M$ into $M$ parts with $l \leq M \leq M_0$ where all but at most $\epsilon{M \choose 2}$ pairs of parts $(V_i,V_j)$ with $1 \leq i < j \leq M$ are $\epsilon$-regular in both the red and blue subgraphs.
\end{lemma}
Lemmas \ref{reglem} and \ref{reglem-twocolor} are in fact equivalent, since a pair $(V_i,V_j)$ in an edge-coloring of $K_n$ with colors red and blue is $\epsilon$-regular in red if and only if it is $\epsilon$-regular in blue.
\subsection{The stability lemma}
The main ingredient in the proof of Theorem~\ref{thm:main} is a stability lemma, Lemma \ref{lem:main2} below, which implies that any two-edge-coloring of $K_n$ (where, for us, $n$ will be $r(C_k) = 2k-1$ for some sufficiently large odd $k$) either has a regularity partition whose reduced graph contains a long monochromatic path or the edge-coloring of $K_n$ is close to the coloring $\chi(k, k-1)$ described before Conjecture \ref{conj:ck}. In either case, we can show that the conclusion of Theorem~\ref{thm:main} must hold.
Before introducing the stability lemma, we need
to make precise what we mean by saying that a two-edge-coloring of the complete graph on $2k-1$ vertices is close to $\chi(k, k-1)$. In the definition, we will refer to the density of a set $X$, given by $d(X,X) = 2e(X)/|X|^2$.
\begin{definition}[Extremal coloring with parameter $\lambda$] \label{def:ec2}
A two-edge-coloring of the complete graph on $n$ vertices is an {\it extremal coloring with parameter $\lambda$}
if there exists a partition $ A \cup B$ of the vertex set such that
\begin{itemize}
\item $|A| \geq (1/2-\lambda)n$ and $|B| \geq (1/2 - \lambda)n$;
\item the graph induced on $A$ has density at least $(1-\lambda)$ in some color, the graph induced on $B$ has density at least $(1-\lambda)$ in the same color, and the bipartite graph between $A$ and $B$ has density at least $(1-\lambda)$ in the other color.
\end{itemize}
\end{definition}
Our key stability lemma is now as follows.
\begin{lemma}\label{lem:main2}
For any $0<\epsilon < 10^{-20}$, there is a constant $M_0=M_0(\epsilon)$ such that if $\alpha = 20 \sqrt{\epsilon}$,
then, for $n$ sufficiently large in terms of $\epsilon$, any two-edge-coloring of the complete graph $K_n$ falls into one of the following two cases:
\begin{itemize}
\item \textbf{Case 1:} There is a positive integer $\epsilon^{-1} \leq M \leq M_0$
such that if $t$ is the odd integer with
\begin{equation}
(1/2 + \alpha) M \geq t > (1/2 + \alpha) M - 2, \label{eqn:reducedcycle}
\end{equation}
then there are disjoint vertex sets $V_0, \dots, V_{t-1}$, indexed by the elements of $\mathbb{Z}/t\mathbb{Z}$, and a color $\chi$ such that, for each $i \in\mathbb{Z} / t \mathbb{Z}$, $|V_i| \geq \lfloor n/M \rfloor$, the pair $(V_i, V_{i+1})$ is $\epsilon$-regular in the color $\chi$, and the edge density between $V_i$ and $V_{i+1}$ in the color $\chi$ is at least $11\epsilon^{1/2}$.
\item \textbf{Case 2:} The graph is an extremal coloring with parameter $300 \sqrt{\alpha}$.
\end{itemize}
\end{lemma}
We will hold off on proving Lemma~\ref{lem:main2} until Section~\ref{sec:stab}, first showing, across the next two sections, how Theorem~\ref{thm:main} follows from either of the conclusions in the lemma.
\subsection{Theorem~\ref{thm:main} for colorings satisfying Case 1 of Lemma~\ref{lem:main2}} \label{subsec:case1odd}
In this section, we prove Theorem~\ref{thm:main} for colorings satisfying Case 1 of Lemma~\ref{lem:main2}. We will repeatedly work in a setting where we have disjoint vertex sets $V_0, \dots, V_{t-1}$ from a graph where the indices of the $V_i$ are the elements of $\mathbb{Z} / t \mathbb{Z}$. We say that a path $P$ of length $\ell$ with vertices $w_0, w_1, \dots, w_\ell$ and edges $w_0 w_1, w_1 w_2, \dots, w_{\ell - 1} w_\ell$ is {\it $(V_0, \dots, V_{t-1})$-transversal} if $w_i \in V_i$ for each $0 \leq i \leq \ell$. Note that we will typically have $\ell > t$, so the path may pass through each of the vertex sets multiple times.
\begin{lemma}\label{lem:countpath2}
Suppose that $0<\epsilon < 10^{-5}$ and $t$ and $n$ are integers with $t\geq 2$ and $n \geq \epsilon^{-2}$. Suppose also that $V_0, \dots, V_{t-1}$ are disjoint vertex sets in a graph where the indices of the $V_i$ are the elements of $\mathbb{Z} / t \mathbb{Z}$
and, for each $i \in\mathbb{Z} / t \mathbb{Z}$, $|V_i| \geq n$, $(V_i, V_{i+1})$ is $\epsilon$-regular, and $d(V_i,V_{i+1}) \geq d$ for some $d \geq 5\epsilon^{1/2}$. Then the following hold:
\begin{enumerate}
\item For any integer $\ell$ with $2\leq \ell \leq t (1-\sqrt{\epsilon}) n$ and any vertex $w_0 \in V_0$ with at least $(d-\epsilon)|V_1|$ neighbors in $V_1$, the number of $(V_0, \dots, V_{t-1})$-transversal paths of length $\ell$ starting from $w_0$ is at least $(d-\epsilon - \sqrt{\epsilon})^\ell \prod_{i=1}^\ell (n-\lfloor i/t \rfloor)$.
\item For any integer $\ell$ with $4 \leq \ell \leq t (1-3\sqrt{\epsilon}) n$ which is divisible by $t$ and any two (not necessarily distinct) vertices $w_0, w_0' \in V_0$ such that $w_0$ has at least $(d-\epsilon)|V_1|$ neighbors in $V_1$ and $w'_0$ has at least $(d-\epsilon)|V_{t-1}|$ neighbors in $V_{t-1}$, the number of $(V_0, \dots, V_{t-1})$-transversal paths of length $\ell$ with end vertices $w_0$ and $w_0'$ is at least $(d-5\sqrt{\epsilon})^{\ell-1} (1-2\sqrt{\epsilon})^{\ell-2} (\epsilon n)\prod_{i=1}^{\ell-2} (n-\lfloor i/t \rfloor).$
\end{enumerate}
\end{lemma}
\begin{proof}
For any integer $0 \leq l\leq \ell$, let $N_l$ be the number of good paths of length $l$ starting from $w_0$, where a $(V_0, \dots, V_{t-1})$-transversal path $w_0, w_1, \dots, w_l$ of length $l$ starting from $w_0$ is {\it good} if there are at least $(d-\epsilon)\big(|V_{l+1}| - \lfloor (l+1)/t \rfloor\big)$ ways to extend the path to $V_{l+1}$. We will prove by induction that $N_l \geq (d-\epsilon - \sqrt{\epsilon})^l \prod_{i=1}^l (n-\lfloor i/t \rfloor)$ for $0 \leq l \leq \ell$, which will settle Part 1.
For the base case, note that $N_0=1$, since the path with zero edges starting from $w_0$ is $w_0$ itself and, by assumption, the vertex $w_0 \in V_0$ has at least $(d-\epsilon)|V_1|$ neighbors in $V_1$. Suppose now that the required lower bound holds for $N_{l-1}$ and we wish to deduce the lower bound for $N_l$.
Fix any good path $P$ of length $l-1$. By the definition of goodness, there are at least $(d - \epsilon) (|V_{l}| - \lfloor l/t \rfloor)$ choices of $w_l \in V_l$ that extend $P$. To bound $N_l$, we need a lower bound on the number of vertices $w_l \in V_l$ such that the path formed by extending $P$ to $w_l$ is also good.
Let $U$ be the set of vertices in $V_{l}$ which have degree less than $(d - \epsilon) |V_{l+1} \setminus V(P)| = (d - \epsilon) (|V_{l+1}| - \lfloor (l+1)/t \rfloor)$ in $V_{l+1} \setminus V(P)$. Note that, since $\ell \leq t (1-\sqrt{\epsilon}) n$,
\begin{align*}
|V_{l+1} \setminus V(P)| & = |V_{l+1}| - \lfloor (l+1)/t \rfloor \geq |V_{l+1}| - \ell /t - 1 \geq |V_{l+1}| - t(1-\sqrt{\epsilon})n/t - 1\\
& \geq |V_{l+1}| - (1-\sqrt{\epsilon})|V_{l+1}| - 1 = \sqrt{\epsilon} |V_{l+1}| - 1 \geq \epsilon |V_{l+1}|.
\end{align*}
Together with the fact that $(V_{l}, V_{l+1})$ is $\epsilon$-regular with density at least $d$, Lemma~\ref{lem:epsregprop} (i) implies that
$|U| \leq \epsilon |V_{l}|$.
Hence, the number of choices for $w_{l}$ such that $P$ extended to $w_l$ is also good is at least
\[ (d-\epsilon)(|V_l| - \lfloor l/t \rfloor) - |U| \geq (d-\epsilon)(|V_l| - \lfloor l/t \rfloor) - \epsilon |V_l| \geq (d-\epsilon - \sqrt{\epsilon})(|V_l| - \lfloor l/t \rfloor).
\]
The last inequality is equivalent to $(\sqrt{\epsilon} - \epsilon) |V_l| \geq \sqrt{\epsilon} \lfloor l/t \rfloor$, which again follows from $l \leq \ell \leq t (1-\sqrt{\epsilon}) n$. Thus,
\begin{equation*}
N_l \geq (d-\epsilon-\sqrt{\epsilon}) (n-\lfloor l/t \rfloor) N_{l-1} \geq (d-\epsilon-\sqrt{\epsilon})^l \prod_{i=1}^l (n-\lfloor i/t \rfloor),
\end{equation*}
establishing Part 1.
For Part 2, we pass from the $V_i$ to a collection of subsets $V'_i$. By assumption, $|N_{V_{t-1}}(w_0')|\geq (d-\epsilon) |V_{t-1}|\geq \epsilon |V_{t-1}|$, so we may set aside a subset $W_{t-1}$ of $N_{V_{t-1}}(w_0')$ of size $\epsilon |V_{t-1}|$
and let $V'_{t-1} = V_{t-1} \setminus W_{t-1}$. If $w'_0$ is distinct from $w_0$, we let $V'_0 = V_0 \setminus \{w'_0\}$, while, in all remaining cases, we let $V'_i = V_i$, noting that $|V'_i| \geq (1 - \epsilon) |V_i|$ for all $i$.
Therefore, if we set $\epsilon' = 2\epsilon$ and $d' = d-\epsilon$, Lemma~\ref{lem:epsregprop} (ii) now tells us that for each $i \in \mathbb{Z} /t\mathbb{Z}$ the pair of sets $(V_i', V_{i+1}')$ is $\epsilon'$-regular with edge density at least $d'$.
As in Part 1, for any integer $0 \leq l \leq \ell-3$, let $N_l$ be the number of good paths of length $l$ starting from $w_0$, though with the condition now reading that there are at least $(d'-\epsilon')(|V_{l+1}'| - \lfloor (l+1)/t \rfloor)$ ways to extend the path to a vertex in $V_{l+1}'$. Since $w_0$ has at least $(d-\epsilon)|V_1| - |V_1 \setminus V'_1| \geq (d-2\epsilon) |V_1| > (d'-\epsilon') |V'_1|$ neighbors in $V'_1$ and $\ell \leq t(1-3\sqrt{\epsilon})n \leq t(1-\sqrt{2\epsilon})((1-\epsilon)n) = t(1-\sqrt{\epsilon'}) ((1-\epsilon)n) $, we may apply Part 1 to conclude that
\begin{equation}
N_{\ell-3} \geq (d'-\epsilon'-\sqrt{\epsilon'})^{\ell-3} \prod_{i=1}^{\ell-3} ((1-\epsilon)n-\lfloor i/t \rfloor). \label{eqn:p1Nl3}
\end{equation}
Fix any such path $P$ of length $\ell-3$.
Suppose its vertices are $w_0, w_1, \dots, w_{\ell-3}$ in order, where $w_j \in V_j'$. By definition, there are at least $(d'-\epsilon')(|V_{\ell-2}'| - \lfloor (\ell-2)/t \rfloor)$ ways to extend the path to a vertex $w_{\ell-2} \in V_{\ell-2}'$. Denote this set of candidates for $w_{\ell-2}$ by $C$. Since $\ell$ is divisible by $t$, we have that $C \subset V_{t-2}$. Using that $d \geq 5\sqrt{\epsilon}$ and $\ell/t \leq (1-3\sqrt{\epsilon})n$, we have that $|C| \geq (d'-\epsilon')(|V_{\ell-2}'|- \lfloor (\ell-2)/t \rfloor)
\geq \epsilon' |V_{t-2}'| \geq \epsilon |V_{t-2}|$. Since $|W_{t-1}| \geq \epsilon |V_{t-1}|$ and $(V_{t-2}, V_{t-1})$ is $\epsilon$-regular with density at least $d$, the number of edges between $C$ and $W_{t-1}$ is at least
\begin{equation}
(d - \epsilon) |C||W_{t-1}| \geq (d-\epsilon) (d'-\epsilon')(|V_{\ell-2}'|- \lfloor (\ell-2)/t \rfloor) \cdot \epsilon |V_{t-1}|. \label{eqn:CVb}
\end{equation}
Note now that $P$ together with the two end vertices of any edge in $E(C, W_{t-1})$ results in a path of length $\ell-1$ that can be extended to $w_0'$.
Therefore, using \eqref{eqn:p1Nl3} and \eqref{eqn:CVb}, we see that the number of paths with end vertices $w_0$ and $w_0'$ is at least
\begin{align*}
N_{\ell-3}\cdot (d - \epsilon) |C||W_{t-1}|
& \geq (d'-\epsilon'-\sqrt{\epsilon'})^{\ell-3} \prod_{i=1}^{\ell-3} ((1-\epsilon)n-\lfloor i/t \rfloor) \\
& \ \ \ \ \ \ \cdot (d-\epsilon) (d'-\epsilon')(|V_{\ell-2}'|- \lfloor (\ell-2)/t \rfloor) \cdot \epsilon |V_{t-1}| \\
& \geq
(d-3\epsilon-2\sqrt{\epsilon})^{\ell-1} (\epsilon n) \prod_{i=1}^{\ell-2} ((1-\epsilon)n-\lfloor i/t \rfloor) \\
& \geq (d-3\epsilon-2\sqrt{\epsilon})^{\ell-1}(1-\epsilon - \sqrt{\epsilon} )^{\ell-2} (\epsilon n) \prod_{i=1}^{\ell-2} (n-\lfloor i/t \rfloor) \\
& \geq
(d-5\sqrt{\epsilon})^{\ell-1} (1-2\sqrt{\epsilon})^{\ell-2} (\epsilon n)\prod_{i=1}^{\ell-2} (n-\lfloor i/t \rfloor),
\end{align*}
where the second to last inequality holds because $(1-\epsilon)n - \lfloor i/t \rfloor \geq (1-\epsilon - \sqrt{\epsilon}) (n - \lfloor i/t \rfloor )$ for $i \leq t(1-2\sqrt{\epsilon})n$.
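To verify this, write $x = \lfloor i/t \rfloor$ and note that
\[ (1-\epsilon)n - x \geq (1-\epsilon - \sqrt{\epsilon})(n - x) \iff \sqrt{\epsilon}\, n \geq (\epsilon + \sqrt{\epsilon})\, x \iff x \leq \frac{n}{1+\sqrt{\epsilon}}, \]
and the last condition holds for $i \leq t(1-2\sqrt{\epsilon})n$ because $(1-2\sqrt{\epsilon})(1+\sqrt{\epsilon}) = 1 - \sqrt{\epsilon} - 2\epsilon \leq 1$.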
\end{proof}
Part 2 of Lemma \ref{lem:countpath2} with $w_0 = w_0'$ implies that there are many cycles of length $\ell$ when $\ell$ is divisible by $t$.
The next lemma shows that the same result holds even when $\ell$ is not divisible by $t$.
\begin{lemma}\label{lem:countcycle1}
Suppose that $0< \epsilon <10^{-5}$, $t$ is an odd integer with $t\geq 3$, and $n$ is a positive integer with $n \geq t\epsilon^{-2}$. Suppose also that $V_0, \dots, V_{t-1}$ are disjoint vertex sets in a graph where the indices of the $V_i$ are the elements of $\mathbb{Z}/t\mathbb{Z}$ and, for each $i\in \mathbb{Z}/t\mathbb{Z}$, $|V_i| \geq n$, $(V_i, V_{i+1})$ is $\epsilon$-regular, and $d(V_i,V_{i+1}) \geq d$ for some $d \geq 10\epsilon^{1/2}$. Then, for any odd positive integer $p$ with
\begin{equation}
2t+6 \leq p \leq t(1-5\sqrt{\epsilon})n, \label{eqn:q}
\end{equation}
the number of cycles of length $p$ is at least $\frac{\epsilon^2}{4} n^4 (d-10\sqrt{\epsilon})^{p-2}(1-3\sqrt{\epsilon})^{2p} \prod_{i=1}^{p-4} (n-\lfloor i/t \rfloor)$.
\end{lemma}
\begin{proof}
Suppose $p \equiv r \bmod t$ for some $0 \leq r \leq t-1$. If $r = 0$, then we can choose $w_0=w_0'$ in Part 2 of Lemma \ref{lem:countpath2} to be any vertex of $V_0$ with at least $(d-\epsilon)|V_1|$ neighbors in $V_1$ and at least $(d-\epsilon)|V_{t-1}|$ neighbors in $V_{t-1}$ and, by Lemma \ref{lem:epsregprop} (i), there are at least $(1-2\epsilon)|V_0| \geq (1-2\epsilon)n$ such vertices. Part 2 of that lemma then implies that the number of cycles of length $p$ is at least $(1-2\epsilon)\epsilon n^2 (d-5\sqrt{\epsilon})^{p-1} (1-2\sqrt{\epsilon})^{p-2} \prod_{i=1}^{p-2} (n-\lfloor i/t \rfloor)$, which is stronger than the required bound.
We may therefore assume that $p$ is not divisible by $t$, i.e., that $r >0$.
Define an auxiliary constant $h$ by $h= r+t$ if $r$ is odd, $h = r$ if $r > 2$ and even, and $h =2+2t$ if $r = 2$. Note that $h$ is always a positive even integer with $4 \leq h \leq 2t+ 2$, so a cycle of length $p$ can be constructed by combining a path $L_1$ of even length $h$ alternating between $V_0$ and $V_{t-1}$ with a $(V_0, \dots, V_{t-1})$-transversal path $L_2$ of length $p-h$ with the same end vertices as $L_1$. Since $p-h$ is divisible by $t$, the path $L_2$ will use exactly $(p-h)/t + 1$ vertices of $V_0$ and exactly $(p-h)/t$ vertices of each other $V_i$.
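To spell out the stated properties of $h$: since $t \geq 3$ is odd, $p$ is odd, and $p \equiv r \bmod t$, in each of the three cases $h$ is even and $h \equiv r \bmod t$:
\begin{align*}
& r \text{ odd:} && h = r+t \text{ is even with } t+1 \leq h \leq 2t-2;\\
& r \text{ even, } r > 2\text{:} && h = r \text{ with } 4 \leq h \leq t-1;\\
& r = 2\text{:} && h = 2+2t \equiv 2 \bmod t.
\end{align*}
In particular, $p - h$ is divisible by $t$ and, since $p \geq 2t+6$ and $h \leq 2t+2$, also $p - h \geq 4$.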
By Lemma \ref{lem:epsregprop} (i), the set $U_0$ of vertices in $V_0$ with at least $(d -\epsilon)|V_1|$ neighbors in $V_1$ and at least $(d -\epsilon)|V_{t-1}|$ neighbors in $V_{t-1}$ has size at least $(1-2\epsilon)|V_0|$. We now fix two vertices $u, v \in U_0$ and bound the number of paths of the types $L_1$ and $L_2$ described above with end vertices $u$ and $v$.
We first bound the number of paths of type $L_1$ with end vertices $u$ and $v$. Since $h \geq 4$ and $h \leq 2t + 2 \leq 2(1-3\sqrt{\epsilon})n$, we may apply Part 2 of Lemma \ref{lem:countpath2} to $(V_0, V_{t-1})$ to conclude that the number of paths of length $h$ alternating between $V_0$ and $V_{t-1}$ with end vertices $u$ and $v$ is at least
\begin{equation}
(d-5\sqrt{\epsilon})^{h-1} (1-2\sqrt{\epsilon})^{h-2} (\epsilon n)\prod_{i=1}^{h-2} (n-\lfloor i/2 \rfloor). \label{eq:part1}
\end{equation}
We now bound the number of paths of type $L_2$ available for each such $L_1$.
For a given $L_1$, remove its $h-1$ interior vertices from $V_0$ and $ V_{t-1}$, calling the updated vertex sets $V'_0$ and $V'_{t-1}$, respectively. Since $h \leq 2t+2$, we have $|V_0'| \geq |V_0| - (t+1) \geq (1-\epsilon/2)|V_0|$ and, similarly, $|V_{t-1}'| \geq (1-\epsilon/2)|V_{t-1}|$. Hence, by Lemma \ref{lem:epsregprop} (ii), each of the pairs $(V_0', V_{t-1}')$, $(V_0', V_1)$, and $(V_{t-1}', V_{t-2})$ is $2\epsilon$-regular with density at least $d-\epsilon$. Furthermore, $u$ and $v$ each have at least $(d-\epsilon)|V_{t-1}| - (t+1) \geq (d-2\epsilon)|V_{t-1}| \geq (d-2\epsilon)|V_{t-1}'|$ neighbors in $V_{t-1}'$.
Since also
\[ 4 \leq p-h \leq t(1-5\sqrt{\epsilon})n \leq t(1-3\sqrt{2\epsilon})(1-\epsilon/2)n,\]
we may apply Part 2 of Lemma \ref{lem:countpath2} with $\epsilon$, $\ell$, $d$, and $n$ replaced by $2\epsilon$, $p-h$, $d-\epsilon$, and $(1-\epsilon/2)n$, respectively, to conclude that the number of $(V'_0, V_1, \dots, V_{t-2}, V'_{t-1})$-transversal paths of length $p-h$ with end vertices $u$ and $v$, each of which is a valid choice for $L_2$, is at least
\begin{align}
& (d-\epsilon - 5\sqrt{2\epsilon})^{p-h-1} (1-2\sqrt{2\epsilon})^{p-h-2} (2\epsilon (1-\epsilon/2)n)\prod_{i=1}^{p-h-2} ((1-\epsilon/2)n-\lfloor i/t \rfloor) \nonumber \\
& \geq (d-10\sqrt{\epsilon})^{p-h-1} (1-3\sqrt{\epsilon})^{p-h-2} (\epsilon n)\prod_{i=1}^{p-h-2} ((1-\epsilon/2)n-\lfloor i/t \rfloor) \nonumber\\
& \geq (d-10\sqrt{\epsilon})^{p-h-1} (1-3\sqrt{\epsilon})^{p-h-2} (\epsilon n) (1-\epsilon/2 - \sqrt{\epsilon/2})^{p-h-2} \prod_{i=1}^{p-h-2} (n-\lfloor i/t \rfloor) \nonumber \\
& \geq (d-10\sqrt{\epsilon})^{p-h-1} (1-3\sqrt{\epsilon})^{p-h-2} (\epsilon n) (1- 2\sqrt{\epsilon})^{p-h-2} \prod_{i=1}^{p-h-2} (n-\lfloor i/t \rfloor) \nonumber \\
& \geq (d-10\sqrt{\epsilon})^{p-h-1} (1-3\sqrt{\epsilon})^{2(p-h-2)} (\epsilon n) \prod_{i=1}^{p-h-2} (n-\lfloor i/t \rfloor),
\label{eq:part2}
\end{align}
where the second inequality holds because $(1-\epsilon/2)n - \lfloor i/t \rfloor \geq (1-\epsilon/2 - \sqrt{\epsilon/2}) (n - \lfloor i/t \rfloor )$ for $i \leq t(1-2\sqrt{\epsilon/2})n$.
Therefore, since the number of choices for $u$ and $v$ is at least $\frac{1}{2} (1-2\epsilon)|V_0| ((1-2\epsilon)|V_0|-1)$, we may combine \eqref{eq:part1} and \eqref{eq:part2} to conclude that the number of cycles of length $p$ is at least
\begin{align*}
& \frac{1}{2} (1-2\epsilon)|V_0| ((1-2\epsilon)|V_0|-1) \cdot (d-5\sqrt{\epsilon})^{h-1} (1-2\sqrt{\epsilon})^{h-2} (\epsilon n)\prod_{i=1}^{h-2} (n-\lfloor i/2 \rfloor) \\
& \ \ \ \ \ \cdot (d-10\sqrt{\epsilon})^{p-h-1} (1-3\sqrt{\epsilon})^{2(p-h-2)} (\epsilon n) \prod_{i=1}^{p-h-2} (n-\lfloor i/t \rfloor) \\
&\geq \frac{1}{4} (1-2\epsilon)^2 n^2 (d-10\sqrt{\epsilon})^{p-2}(1-3\sqrt{\epsilon})^{2p-h-2}(\epsilon n)^2 \prod_{i=1}^{p-4} (n-\lfloor i/t \rfloor) \\
& \geq \frac{\epsilon^2}{4} n^4 (d-10\sqrt{\epsilon})^{p-2}(1-3\sqrt{\epsilon})^{2p} \prod_{i=1}^{p-4} (n-\lfloor i/t \rfloor),
\end{align*}
as required.
\end{proof}
Finally, we can show that Theorem~\ref{thm:main} holds for colorings satisfying Case 1 of Lemma \ref{lem:main2}.
\begin{proof}[Proof of Theorem \ref{thm:main} for colorings satisfying Case 1 of Lemma \ref{lem:main2}.]
Suppose, for concreteness, that $\epsilon = 10^{-30}$ and let $V_0, \dots, V_{t-1}$ be as in Case 1 of Lemma~\ref{lem:main2} with $n = 2k - 1$, where $k$ (and hence $n$) is a sufficiently large odd integer.
We wish to apply Lemma~\ref{lem:countcycle1} to show there are many cycles of length $k = (n+1)/2$.
To confirm that the conditions of Lemma \ref{lem:countcycle1} hold, we only have to check \eqref{eqn:q}, i.e., that
\begin{equation}
2t+6 \leq k \leq t(1-5\sqrt{\epsilon}) \lfloor n/M \rfloor.\label{eqn:knM}
\end{equation}
By \eqref{eqn:reducedcycle}, $(1/2 + \alpha) M \geq t > (1/2 + \alpha) M - 2$, so it suffices to show that
\[k = (n+1)/2 \geq 2(1/2+\alpha)M+6\]
and
\[ k \leq \left( (1/2 + \alpha) M - 2 \right) (1-5\sqrt{\epsilon})(n/M-1).\]
The first inequality easily holds for $n$ sufficiently large in terms of $\epsilon$, while the second inequality holds because
\begin{align*}
k = (n+1)/2 \leq (1+\epsilon)n/2 \leq (1/2 + \alpha/2)M (1-5\sqrt{\epsilon})(1-\epsilon)(n/M),
\end{align*}
where we used that $\alpha=20\sqrt{\epsilon}$ and
$
(1+\alpha)(1-5\sqrt{\epsilon})(1-\epsilon) \geq (1 + 20\sqrt{\epsilon})(1-6\sqrt{\epsilon}) > 1+ \epsilon
$.
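In more detail, $(1-5\sqrt{\epsilon})(1-\epsilon) \geq 1-6\sqrt{\epsilon}$ and
\[ (1+20\sqrt{\epsilon})(1-6\sqrt{\epsilon}) = 1 + 14\sqrt{\epsilon} - 120\epsilon > 1+\epsilon, \]
since $14\sqrt{\epsilon} > 121\epsilon$ for $0 < \epsilon < 10^{-5}$.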
Hence, since $M \geq 2/\alpha$ and assuming $n \geq M/\epsilon$,
\begin{align*}
k \leq & (1/2 + \alpha/2)M (1-5\sqrt{\epsilon})(1-\epsilon)(n/M)
\leq \left( (1/2 + \alpha) M - 2 \right) (1-5\sqrt{\epsilon})(n/M-1),
\end{align*}
as required.
We may therefore apply Lemma \ref{lem:countcycle1} with parameters $\epsilon$, $p$, $d$, and $n$ replaced by $\epsilon$, $k$, $d$, and $\lfloor n/M \rfloor$, respectively, to conclude that the number of cycles of length $k$ is at least
\begin{equation}
\frac{\epsilon^2}{4} ( \lfloor n/M \rfloor)^4 (d-10\sqrt{\epsilon})^{k-2}(1-3\sqrt{\epsilon})^{2k} \prod_{i=1}^{k-4} ( \lfloor n/M \rfloor-\lfloor i/t \rfloor). \label{eq:cyclek}
\end{equation}
Since $ \lfloor n/M \rfloor \geq k/t$ by (\ref{eqn:knM}), the last term in
(\ref{eq:cyclek}) satisfies
\begin{equation*}
\prod_{i=1}^{k-4} ( \lfloor n/M \rfloor-\lfloor i/t \rfloor) \geq
\prod_{i=1}^{k-4} ( \lfloor n/M \rfloor- i/t) \geq
t^{-(k-4)} (k-4)!.
\end{equation*}
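The second inequality here can be seen factor by factor: since $\lfloor n/M \rfloor \geq k/t$, each factor satisfies $\lfloor n/M \rfloor - i/t \geq (k-i)/t$, so
\[ \prod_{i=1}^{k-4} \left( \lfloor n/M \rfloor - i/t \right) \geq t^{-(k-4)} \prod_{i=1}^{k-4} (k-i) = t^{-(k-4)} \, \frac{(k-1)!}{3!} \geq t^{-(k-4)} (k-4)!, \]
where the last step uses $(k-1)(k-2)(k-3) \geq 3!$.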
Therefore, (\ref{eq:cyclek}) is lower bounded by
\begin{align*}
& \frac{\epsilon^2}{4} ( \lfloor n/M \rfloor)^4 (d-10\sqrt{\epsilon})^{k-2}(1-3\sqrt{\epsilon})^{2k} t^{-(k-4)}(k-4)! \\
& \geq \frac{\epsilon^2}{4} \frac{n^4}{(2M)^4} (d-10\sqrt{\epsilon})^{k-2}(1-3\sqrt{\epsilon})^{2k}t^{-(k-4)} ((k-4)/e)^{k-4} \\
& \geq \frac{\epsilon^2}{4(2M)^4} (d-10\sqrt{\epsilon})^{k-2}(1-3\sqrt{\epsilon})^{2k}t^{-(k-4)} ((k-4)/e)^{k}.
\end{align*}
Hence, since $\epsilon$ is a constant, $d \geq 11 \sqrt{\epsilon}$, and $M$ and $t$ are bounded in terms of $\epsilon$, there is a constant $c_1$ depending only on $\epsilon$ such that the number of cycles of length $k$ is at least $(c_1 k)^k$, as required.
\end{proof}
\subsection{Theorem~\ref{thm:main} for colorings satisfying Case 2 of Lemma~\ref{lem:main2}} \label{subsec:case2odd}
The proof of Theorem~\ref{thm:main} for colorings satisfying Case 2 of Lemma~\ref{lem:main2} has several cases. To make the presentation cleaner, we first prove some simple claims.
\begin{claim}\label{claim:redgeneral}
Let $S$ and $T$ be two disjoint vertex sets in a graph $F$ such that any two vertices in $S$ have at least $s$ common neighbors in $T$. If there is an edge within $S$, then the number of cycles of length $l$ is at least $\left( s-l/2 +3/2\right)^{(l-1)/2} (|S| - (l-1)/2)^{(l-3)/2}$ for any odd integer $3 \leq l \leq \min(2s+1, 2|S|-1)$.
\end{claim}
\begin{proof}
Suppose that $(u,u')$ is an edge in $S$. To construct cycles of length $l$, we will find paths $P$ of the form $u, v_1, u_1, v_2, u_2, \dots, v_{(l-1)/2}, u_{(l-1)/2} = u'$ alternating between $S$ and $T$ with end vertices $u$ and $u'$. Each such path together with the edge $(u,u')$ clearly gives rise to a cycle of length $l$.
To estimate the number of paths $P$ of the required form,
suppose that $u, u_1, u_2, \dots, u_{(l-1)/2-1}$, $u_{(l-1)/2} = u'$ is a fixed sequence of distinct vertices in $S$. Note that any of the $s$ common neighbors of $u$ and $u_1$ in $T$ can be chosen as $v_1$. More generally, given choices for $v_1, \dots, v_{i-1}$, the number of choices for $v_i$ is at least $s - (i-1)$ for $1 \leq i \leq (l-1)/2$, since we may pick any vertex in the common neighborhood of $u_{i-1}$ and $u_{i}$ in $T$ except $v_1, \dots, v_{i-1}$. Therefore, given $u, u_1, u_2, \dots, u_{(l-1)/2-1}$, $u_{(l-1)/2} = u'$, the number of choices for $P$ is at least
\[ \prod_{i=1}^{(l-1)/2} (s - (i-1)) \geq \left( s-((l-1)/2-1) \right)^{(l-1)/2} = \left( s-l/2 +3/2 \right)^{(l-1)/2} .\]
Since the number of choices for $u_1, \dots, u_{(l-1)/2-1}$ is
\[(|S|-2)(|S|-3) \cdots (|S| - (l-1)/2) \geq (|S| - (l-1)/2)^{(l-3)/2},\]
the claim follows.
\end{proof}
\begin{claim}\label{claim:ABedgegeneral}
Let $S$ and $T$ be two disjoint cliques in a graph $F$. Suppose that there are two vertex-disjoint paths $P_1$ and $P_2$ such that each path has length at most $2$, has one end vertex in $S$, the other end vertex in $T$, and the rest of the vertices outside $S \cup T$. Then the number of cycles of length $l$ is at least $(((l-1)/2-3)/e)^{l-6}$ for any integer $7 \leq l \leq \min(2|S|-1, 2|T|-1)$.
\end{claim}
\begin{proof}
Suppose that $P_1$ has length $l_1$ and endpoints $a_1 \in S$ and $b_1 \in T$, while $P_2$ has length $l_2$ and endpoints $a_2 \in S$ and $b_2 \in T$.
Let $s = \lfloor l/2 \rfloor$. We will construct cycles of length $l$ by concatenating the following four paths: (1) a path $L_1$ of length $s - l_1$ in $S$ with end vertices $a_1$ and $a_2$, (2) the path $P_2$, (3) a path $L_2$ in $T$ of length $l-s-l_2$ with end vertices $b_1$ and $b_2$, and (4) the path $P_1$. This process clearly yields a cycle of length $l$.
Since $S$ is a clique, we can always find paths $L_1$ of length $s-l_1$ with end vertices $a_1$ and $a_2$ and all interior vertices in $S\setminus \{a_1, a_2\}$.
Indeed, the number of such $L_1$ is exactly the number of ordered sequences of $s-l_1-1$ distinct vertices in $S \setminus \{a_1, a_2\}$. Since $|S| - 2 \geq s-l_1 -1$, such a sequence exists. Furthermore, the number of such sequences is exactly $(|S| - 2)!/(|S| - 2 -(s-l_1-1))! \geq (s-l_1-1)! \geq ((s-l_1-1)/e)^{s-l_1-1}$, where we used the inequality $(x-1)\cdots (x-y) \geq y! \geq (y/e)^y$ for positive integers $x, y$ with $x \geq y+1$. Similarly, the number of choices for $L_2$ is at least $((l-s-l_2-1)/e)^{l-s-l_2-1}$. In total, the number of cycles of length $l$ is at least
$ ((s-l_1-1)/e)^{s-l_1-1} ((l-s-l_2-1)/e)^{l-s-l_2-1}$. Since $l_1, l_2 \leq 2$ and $l-s\geq s$, this quantity is at least $((s-3)/e)^{s-l_1-1} ((s-3)/e)^{l-s-l_2-1}$, which, if $s-3 \geq e$, is at least $((s-3)/e)^{l-6}$. Otherwise, $s-3 < e$, so the bound in the claim is less than $1$ and the single cycle of length $l$ that we have already constructed suffices.
\end{proof}
\begin{claim}\label{claim:PbothABgeneral}
Let $S$ and $T$ be two disjoint vertex sets in a graph $F$. Suppose that $w \in S$ and there is a complete bipartite graph between $S \setminus \{w\}$ and $T$ and at least one edge between $w$ and $T$ (so, in particular, there may be a complete bipartite graph between $S$ and $T$). If there is a path $P'$ of length two with one end vertex in $S$, the other end vertex in $T$, and the rest of the vertices outside $S \cup T$, then the number of cycles of length $l$ is at least $((l-5)/2e)^{l-5}$ for any odd integer $l$ with $7 \leq l \leq \min(2|S|+1, 2|T|+1)$.
\end{claim}
\begin{proof}
Suppose the two end vertices of $P'$ are $a\in S$ and $b\in T$. We will construct cycles of length $l$ by concatenating $P'$ with paths $P$ of length $l-2$ alternating between $S$ and $T$ with end vertices $a$ and $b$. Fix a neighbor $x$ of $w$ in $T$. If $a = w$, each path $P$ will start with $w$ and then $x$ before returning to some $a' \in S$, while if $a \neq w$, we will avoid $w$ while building our paths. In either case, a lower estimate for the number of cycles of length $l$ is given by estimating the number of paths of length $l - 4$ starting at a fixed $a' \neq w$ and ending at $b$ alternating between $S \setminus \{w\}$ and $T \setminus \{x\}$.
Since there is a complete bipartite graph between $S \setminus \{w\}$ and $T \setminus \{x\}$, any ordered sequence of $(l-5)/2$ distinct vertices in $S\setminus \{a', w\}$ and any ordered sequence of $(l-5)/2$ distinct vertices in $T \setminus \{b, x\}$ give rise to a relevant path by alternating between the two sequences as interior vertices. Such sequences exist because $|S| -2 \geq (l-5)/2$ and $|T| - 2 \geq (l-5)/2$.
Thus, the number of choices for the path is at least the product of the number of such sequences in
$S \setminus \{a', w\}$ and $T \setminus \{b, x\}$, which is
\begin{align*}
\frac{(|S|-2)!}{(|S| - (l-5)/2 - 2)!} \cdot \frac{(|T|-2)!}{(|T| - (l-5)/2 - 2)!}
& \geq ((l-5)/2)! ((l-5)/2)! \\
& \geq ((l-5)/2e)^{l-5},
\end{align*}
where we again used that $(x-1)\cdots (x-k) \geq k! \geq (k/e)^k$ for positive integers $x, k$ with $x \geq k+1$.
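For completeness, both parts of this standard estimate are easy to check: since $x \geq k+1$, the $i$th factor satisfies $x - i \geq k+1-i$, so
\[ (x-1)(x-2)\cdots(x-k) \geq k(k-1)\cdots 1 = k!, \qquad\text{and}\qquad k! \geq (k/e)^k \ \text{ since } \ e^k = \sum_{j \geq 0} \frac{k^j}{j!} \geq \frac{k^k}{k!}. \]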
\end{proof}
\begin{claim}\label{claim:tworedABgeneral}
Let $S$ and $T$ be two disjoint vertex sets in a graph $F$. If there are no two vertex-disjoint edges between $S$ and $T$, then, by removing at most one vertex from $S\cup T$, there is no edge between $S$ and $T$.
\end{claim}
\begin{proof}
If there is no vertex in $S$ with a neighbor in $T$, the claim trivially holds.
If there is exactly one vertex $a \in S$ with neighbors in $T$, then there is no edge between $S \setminus \{a\}$ and $T$. If there is more than one vertex in $S$ with a neighbor in $T$, then all of them have the same neighbor $b \in T$ and there is no edge between $S$ and $T \setminus \{b\}$. In each case, the claim follows.
\end{proof}
We are now ready to prove Theorem \ref{thm:main} for colorings satisfying Case 2 of Lemma \ref{lem:main2}.
\begin{proof}[Proof of Theorem \ref{thm:main} for colorings satisfying Case 2 of Lemma \ref{lem:main2}.]
Suppose again that $\epsilon = 10^{-30}$ and $n = 2k - 1$ for $k$ a sufficiently large odd integer, but we now have an extremal coloring of $K_n$ with parameter $\lambda = 300\sqrt{\alpha}$ and vertex partition $A \cup B$, as in Case 2 of Lemma~\ref{lem:main2}.
Without loss of generality, we will assume that the red densities within $A$ and $B$ are both at least $1-\lambda$ and the blue density between $A$ and $B$ is at least $1-\lambda$, where $|A|, |B| \geq (1/2 - \lambda)n$.
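For later estimates, it is worth recording the numerical sizes involved (recalling that $\alpha = 20\sqrt{\epsilon}$):
\[ \alpha = 2 \times 10^{-14}, \qquad \lambda = 300\sqrt{\alpha} < 5 \times 10^{-5}, \qquad \sqrt{\lambda} < 10^{-2}. \]
In particular, $1/4 - 5\sqrt{\lambda} > 1/5$, an estimate used, for example, in the proof of Claim~\ref{claim:red} below.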
We first conduct a simple cleaning-up procedure.
\begin{claim}
\label{claim:updateEC}
There is a vertex partition of $K_n$ as $A'\cup B' \cup X \cup Y$ satisfying the following conditions:
\begin{enumerate}
\item $A = A' \cup X$ and $B = B' \cup Y$.
\item $|X | \leq 2\sqrt{\lambda}|A|$ and $|Y| \leq 2\sqrt{\lambda}|B|$.
\item $|A'| \geq (1/2 - 2\sqrt{\lambda})n$ and $|B'| \geq (1/2 - 2\sqrt{\lambda})n$.
\item Each vertex in $A'$ has red degree at least $ (1-3\sqrt{\lambda})|A|$ in $A'$ and blue degree at least $ (1-3\sqrt{\lambda})|B|$ in $B'$.
Similarly, each vertex in $B'$ has red degree at least $ (1-3\sqrt{\lambda})|B|$ in $B'$ and blue degree at least $ (1-3\sqrt{\lambda})|A|$ in $A'$.
\end{enumerate}
\end{claim}
\begin{proof}
Suppose that there are $x|A|$ vertices in $A$ whose red degree in $A$ is at most $(1-\sqrt{\lambda})|A|$. Then,
\[
x|A|(1-\sqrt{\lambda})|A| + (1-x)|A||A| \geq (1-\lambda) |A|^2,
\]
which implies that $x \leq \sqrt{\lambda}$.
Similarly, there are at most $\sqrt{\lambda}|A|$ vertices in $A$ whose blue degree in $B$ is at most $(1-\sqrt{\lambda})|B|$.
Letting $X$ be the union of these two bad sets of vertices, we see that $|X | \leq 2\sqrt{\lambda}|A|$. We define $Y \subset B$ similarly, again noting that $|Y| \leq 2\sqrt{\lambda}|B|$. Letting $A' = A \setminus X$ and $B' = B \setminus Y$, we see that items 1 and 2 hold.
To verify item 3, note that $|A'| = |A| - |X| \geq (1-2\sqrt{\lambda})|A|$. Since $|A| \geq (1/2 - \lambda) n$, we have
$$|A'| \geq (1-2\sqrt{\lambda})(1/2-\lambda)n \geq (1/2 - 2\sqrt{\lambda})n,$$
as required. Similarly, $|B'| \geq (1/2 - 2\sqrt{\lambda})n$.
Finally, for item 4, note, for example, that each vertex in $A' = A \setminus X$ has red degree at least $(1-\sqrt{\lambda})|A| - |X| \geq (1-3\sqrt{\lambda})|A|$ in $A'$, while each vertex in $A'$ has blue degree at least $(1-\sqrt{\lambda})|B| - |Y| \geq (1-3\sqrt{\lambda})|B|$ in $B'$.
\end{proof}
The following claim allows us to assume that all the edges in $A'$ and $B'$ are red, i.e., that $A'$ and $B'$ are both red cliques, as otherwise we would be done.
\begin{claim}\label{claim:red}
If there is a blue edge within $A'$ or $B'$, then the number of blue cycles of length $k$ with $k = (n+1)/2$ is at least $(n/5)^{k-2}.$
\end{claim}
\begin{proof}
Without loss of generality, suppose that there is a blue edge $(u,u')$ in $A'$.
We will apply Claim \ref{claim:redgeneral} to the blue subgraph with $S = A'$, $T = B'$, and $l = k$. Since, by Claim \ref{claim:updateEC}, every vertex in $A'$ has at least $(1-3\sqrt{\lambda})|B|$ blue neighbors in $B'$, the size of the common blue neighborhood in $B'$ of any two vertices in $A'$ is at least \[2(1-3\sqrt{\lambda})|B| - |B'| \geq (1-6\sqrt{\lambda})|B'|.\]
Thus, again in reference to Claim \ref{claim:redgeneral}, we may take $s = (1-6\sqrt{\lambda})|B'|$.
It remains to verify that the conditions of Claim \ref{claim:redgeneral} hold, that is, that $k \leq \min(2(1-6\sqrt{\lambda})|B'|+1, 2|A'|-1)$.
But this is simple, since
\begin{equation} (1-6\sqrt{\lambda})|B'| - (k-1)/2 \geq (1-6\sqrt{\lambda})(1/2 - 2\sqrt{\lambda})n - n/4 > (1/4 - 5\sqrt{\lambda})n \label{eq:eq1}
\end{equation}
and
\begin{equation} |A'| - (k+1)/2 \geq (1/2 - 2\sqrt{\lambda})n - (k+1)/2 \geq (1/4 - 5\sqrt{\lambda})n.\label{eq:eq2}
\end{equation}
Therefore, by Claim \ref{claim:redgeneral} and the estimates \eqref{eq:eq1} and \eqref{eq:eq2}, the number of cycles of length $k$ is at least
\[
((1/4 - 5\sqrt{\lambda})n)^{(k-1)/2} \cdot ((1/4 - 5\sqrt{\lambda})n)^{(k-3)/2} = ((1/4 - 5\sqrt{\lambda})n)^{k-2} \geq (n/5)^{k-2},
\]
as required.
\end{proof}
Our next claim is as follows.
\begin{claim}\label{claim:ABedge}
Suppose that $A'$ and $B'$ are both red cliques. If there are two vertex-disjoint red paths $P_1$ and $P_2$ such that each has length at most $2$ and each has one end vertex in $A'$ and the other in $B'$, then there are at least $(n/5e)^{k-6}$ red cycles of length $k$.
\end{claim}
\begin{proof}
We will apply Claim \ref{claim:ABedgegeneral} to the red subgraph with $S = A'$, $T = B'$, and $l = k$.
To check that the condition $7 \leq k \leq \min(2|A'| -1, 2|B'|-1)$ holds, note that
$$2|A'|-1 \geq (1 - 4\sqrt{\lambda})n-1 > (n+1)/2 = k$$
for $n$ sufficiently large and, similarly, $k \leq 2|B'|-1$. Therefore, we may apply Claim~\ref{claim:ABedgegeneral} to conclude that the number of red cycles of length $k$ is at least
$$(((k-1)/2-3)/e)^{k-6}=\left( ((n-1)/4 - 3)/e\right)^{k-6} \geq (n/5e)^{k-6}$$
for $n$ sufficiently large.
\end{proof}
Therefore,
we are done if the assumptions of Claim \ref{claim:ABedge} are satisfied, so we can and will assume that if $A'$ and $B'$ are both red cliques, then there are no two vertex-disjoint red edges between $A'$ and $B'$.
By applying Claim \ref{claim:tworedABgeneral} to the red subgraph with $S = A'$ and $T = B'$, we see that we may remove at most one vertex from $A' \cup B'$ to make all the edges between $A'$ and $B'$ blue. Without loss of generality, we may therefore assume that there is a vertex $v$ in $A'$ such that all the edges between $A' \setminus \{v\}$ and $B'$ are blue. In what follows, we let $A'' = A' \setminus \{v\}$.
\begin{claim}\label{claim:PbothAB}
Suppose that all the edges between $A''$ and $B'$ are blue. If there is a blue path $P'$ of length two with one end vertex in $A''$ and the other in $B'$, then there are at least $(n/8e)^{k-5}$ blue cycles of length $k$.
\end{claim}
\begin{proof}
We will apply Claim \ref{claim:PbothABgeneral} to the blue subgraph with $S = A''$, $T = B'$, and $l = k$, using the fact that the bipartite graph between $A''$ and $B'$ is complete in blue. To check that the condition $7 \leq k \leq \min(2|A''|+1, 2|B'|+1)$ holds, note, for instance, that
$$2|A''| + 1 \geq 2|A'| - 1 \geq (1 - 4 \sqrt{\lambda})n -1 \geq (n+1)/2 = k$$
for $n$ sufficiently large. Therefore, by Claim~\ref{claim:PbothABgeneral}, the number of blue cycles of length $k$ is at least
$((k-5)/2e)^{k-5} \geq (n/8e)^{k-5}$, as required.
\end{proof}
Since we are done if the assumptions of Claim \ref{claim:PbothAB} hold, we can now assume that there is no blue path $P'$ of length two with one end vertex in $A''$ and the other in $B'$. This means that any vertex in $\{v\} \cup X \cup Y$ is either completely red to $A''$ or completely red to $B'$. Therefore, there is a vertex partition of $\{v\} \cup X \cup Y$ into two sets $Z_1 \cup Z_2$ such that each vertex in $Z_1$ is completely red to $A''$ and each vertex in $Z_2$ is completely red to $B'$.
By another application of Claim \ref{claim:ABedge}, we can also assume that there are no two vertex-disjoint red paths of length at most two each with one end vertex in $A''$ and the other in $B'$. Therefore, if $Z_1$ and $Z_2$ are both non-empty, either $Z_1$ is completely blue to $B'$ or $Z_2$ is completely blue to $A''$. Without loss of generality, suppose that $Z_1$ is completely blue to $B'$. If now $|Z_2| \geq 1$, either there is at most one vertex in $Z_2$ with red neighbors in $A''$ or there is a vertex $a\in A''$ which is the only red neighbor in $A''$ of any vertex in $Z_2$. Therefore, by removing at most one vertex from $V(G)$, we may assume that it is also completely blue between $Z_2$ and $A''$.
In summary, we have disjoint sets $Z_1' \subset Z_1$, $Z_2' \subset Z_2$, $A''' \subset A''$, and $B'' \subset B'$ (at most one of which differs from its superset) such that $|Z_1' \cup Z_2'\cup A''' \cup B''| \geq n-1$ and the following conditions hold:
\begin{enumerate}
\item $Z_1'$ is completely red to $A'''$ and completely blue to $B''$.
\item $Z_2'$ is completely blue to $A'''$ and completely red to $B''$.
\item It is completely blue between $A'''$ and $B''$.
\item $A'''$ and $B''$ are both red cliques.
\end{enumerate}
Let $\tilde A = A''' \cup Z_1'$ and $\tilde B = B'' \cup Z_2'$. Then it is completely blue between $\tilde A$ and $B''$ and between $\tilde B$ and $A'''$.
Furthermore,
\[
|\tilde A| \geq |A'''| \geq |A'| - 2 \geq (1/2 -2 \sqrt{\lambda})n - 2 > (1/2 - 3 \sqrt{\lambda})n
\]
and, similarly, $|\tilde B| \geq (1/2 - 3\sqrt{\lambda})n$. Finally,
\begin{equation}
|\tilde A| + |\tilde B| \geq n-1 = 2k-2. \label{eqn:almost}
\end{equation}
By following the proofs of Claims~\ref{claim:red} and~\ref{claim:ABedge}, we obtain the following two results.
\begin{claim}\label{claim:tildeABblue}
If there is a blue edge within either $\tilde A$ or $\tilde B$, then there are at least $(n/5)^{k-2}$ blue cycles of length $k$.
\end{claim}
\begin{claim} \label{claim:tildeABred}
Suppose that $\tilde A$ and $\tilde B$ are both red cliques, each with $k-1$ vertices. If there are two vertex-disjoint red edges between $\tilde A$ and $\tilde B$, then there are at least $(n/5e)^{k-6}$ red cycles of length $k$.
\end{claim}
By Claim \ref{claim:tildeABblue},
we can assume that there is no blue edge within $\tilde A$ or $\tilde B$. That is, $\tilde A$ and $\tilde B$ are both red cliques. If either of these cliques has order at least $k$ we are done, as we then get at least $(k-1)!/2 \geq ((k-1)/2e)^{k-1}$ red cycles of length $k$. Hence, by (\ref{eqn:almost}), we can assume that $|\tilde A| = |\tilde B| = k-1$.
By Claim \ref{claim:tildeABred},
we can assume that there are no two vertex-disjoint red edges between $\tilde A$ and $\tilde B$. Therefore, applying Claim \ref{claim:tworedABgeneral} to the red subgraph with $S = \tilde A$ and $T = \tilde B$, we see that by removing at most one vertex, say $w$, all the edges between $\tilde A$ and $\tilde B$ are blue. Without loss of generality, we will assume that $w \in \tilde A$, noting that $w$ must have at least one blue neighbor in $\tilde B$, since otherwise $\tilde B \cup \{w\}$ would be a red clique of order $k$, again completing the proof.
We require one final observation, proved in the same manner as Claim~\ref{claim:PbothAB}.
\begin{claim} \label{claim:tildePbothAB}
Suppose that $w \in \tilde A$ and all the edges between $\tilde A \setminus \{w\}$ and $\tilde B$ are blue, while at least one edge between $w$ and $\tilde B$ is blue. If there is a blue path $P'$ of length two with one end vertex in $\tilde A$ and the other in $\tilde B$, then there are at least $(n/8e)^{k-5}$ blue cycles of length $k$.
\end{claim}
Suppose that $u$ is the single vertex of $K_n$ which is not in $\tilde A \cup \tilde B$. If $u$ has a blue neighbor in both $\tilde A$ and $\tilde B$, then Claim \ref{claim:tildePbothAB} implies that we are done. Therefore, we can assume that $u$ is completely red to either $\tilde A$ or $\tilde B$. If $u$ is completely red to $\tilde A$, then $\tilde A \cup \{ u\}$ is a red clique with $k$ vertices, in which case there are at least $(k-1)!/2$ red cycles of length $k$. Since this is also true if $u$ is completely red to $\tilde B$, this completes the proof.
\end{proof}
\subsection{Proof of Lemma \ref{lem:main2}} \label{sec:stab}
The following stability lemma of Nikiforov and Schelp \cite{NS} is an essential ingredient in our proof.
\begin{lemma}[Theorem 13, \cite{NS}]\label{lem:NS}
Let $0 < \alpha < 5 \cdot 10^{-6}$, $0 \leq \beta \leq \alpha/25$, and $n \geq \alpha^{-1}$.
If $G$ is a graph with $n$ vertices and $e(G) > (1/4 - \beta) n^2$, then one of the following holds:
\begin{enumerate}
\item There are cycles $C_t \subset G$ for every $t \in [3, \lceil(1/2 + \alpha) n\rceil]$.
\item There exists a partition $V(G) = U_0 \cup U_1 \cup U_2$ such that
\[|U_0| < 2000\alpha n,\]
\[ \left( 1/2 - 10\sqrt{\alpha+\beta}\right) n < |U_1| \leq |U_2| < \left( 1/2 + 10\sqrt{\alpha+\beta}\right) n,\]
and the induced subgraph $G- U_0$ on vertex set $V(G) \setminus U_0$ is a subgraph of either the complete bipartite graph between $U_1$ and $U_2$ or its complement.
\end{enumerate}
\end{lemma}
With this preliminary in place, we can begin the proof of Lemma \ref{lem:main2}. We apply the colored regularity lemma, Lemma~\ref{reglem-twocolor}, with parameters $\epsilon$ and $l = \lceil \epsilon^{-1} \rceil$ to the given red/blue coloring of $K_n$. This implies that there exist $n_0(\epsilon)$ and $M_0(\epsilon)$ such that, for any $n \geq n_0$, there is a regular partition of $K_n$ into $M$ parts $V_1, \dots, V_M$ with $\epsilon^{-1} \leq M \leq M_0$. We now consider a reduced graph $H$ with $M$ vertices $v_1, \dots, v_M$ corresponding to $V_1, \dots, V_M$, placing an edge between $v_i$ and $v_j$ if and only if the pair $(V_i, V_j)$ is $\epsilon$-regular. We then color the edge $(v_i, v_j)$ red if the density of red edges between $V_i$ and $V_j$ is at least $d = 12 \epsilon^{1/2}$ and we color an edge blue under the analogous condition with blue in place of red. By the regularity lemma, all but at most $\epsilon \binom{M}{2}$ pairs of distinct vertices of $H$ are edges and, since $d < 1/2$, all edges of $H$ are colored red, blue, or, perhaps, both red and blue. We say an edge is red-only if it is colored red but not blue; blue-only is defined similarly.
Let the subgraph of $H$ induced by edges containing the color red be $H_R$ and the subgraph induced by edges containing the color blue be $H_B$. Hence,
\[ |E(H_R) | + |E(H_B) | \geq (1-\epsilon)\binom{M}{2} > (1 - 2\epsilon) M^2/2,\]
where we used that $M \geq \epsilon^{-1}$. Thus, without loss of generality, we can assume that
\[ |E(H_R) | > (1-2\epsilon)M^2/4.\]
We now apply Lemma~\ref{lem:NS} to $H_R$ with $\beta = \epsilon/2$ and $\alpha = 20 \sqrt{\epsilon}$. There are two cases:
\paragraph{Case 1 of Lemma~\ref{lem:NS}.}
In this case, we can find a red cycle $C_t$ for every $t \in [3, \lceil(1/2 + \alpha) M\rceil]$. In particular, we can find an odd cycle $C_t$ with
\[ (1/2 + \alpha) M \geq t > (1/2 + \alpha) M - 2.\]
But this means that there are disjoint vertex sets $V_{k_0}, \dots, V_{k_{t-1}}$ such that, for each $0 \leq i \leq t-1$, $|V_{k_i}| \geq \lfloor n/M \rfloor$ and each pair $(V_{k_i}, V_{k_{i+1}})$ (with addition taken mod $t$) is $\epsilon$-regular in red with red density at least $12 \epsilon^{1/2}$.
Thus, we are in Case 1 of Lemma \ref{lem:main2}.
\paragraph{Case 2 of Lemma~\ref{lem:NS}.}
In this case, there exists a partition $V(H_R) = U_0 \cup U_1 \cup U_2$ such that $|U_0| < 2000\alpha M$ and
\begin{equation}
\left( 1/2 - 10\sqrt{2\alpha}\right) M < |U_1| \leq |U_2| < \left( 1/2 + 10\sqrt{2\alpha}\right) M. \label{eqn:U1U2}
\end{equation}
Furthermore, the induced subgraph $H_R-U_0$ is a subgraph of the disjoint cliques on $U_1$ and $U_2$ or a subgraph of the complete bipartite graph between $U_1$ and $U_2$. We will assume that the induced subgraph $H_R-U_0$ is a subgraph of the graph consisting of disjoint cliques on $U_1$ and $U_2$. The other case, where all edges of $H_R - U_0$ are between $U_1$ and $U_2$, can be handled similarly.
Thus, by assumption, any edges between $U_1$ and $U_2$ are blue-only. Moreover, since the number of non-adjacent pairs is at most $\epsilon \binom{M}{2}$, the number of blue-only edges between $U_1$ and $U_2$ is at least
\begin{equation}
|U_1||U_2| - \epsilon \binom{M}{2}. \label{eqn:U1U2blue}
\end{equation}
Let $U_1' \subset U_1$ be the set of vertices in $U_1$ that have blue degree at least $(1-\sqrt{\epsilon}) |U_2|$ in $U_2$.
Suppose $|U_1 \setminus U_1'| = x |U_1|$. Then
\[
(1-\sqrt{\epsilon} )|U_2| x |U_1| + |U_2|(1-x)|U_1| \geq |U_1||U_2| - \epsilon M^2/2,
\]
which implies that $x \leq \sqrt{\epsilon}M^2 / (2|U_1||U_2|)$.
Since $|U_1|, |U_2| \geq (1/2 - 10\sqrt{2\alpha}) M$, we have
\[
x \leq \sqrt{\epsilon}M^2 / (2|U_1||U_2|) \leq \sqrt{\epsilon}M^2 / (2 (1/2 - 10\sqrt{2\alpha})^2 M^2 )< \sqrt{\epsilon}(2 + 200\sqrt{\alpha}) < 3 \sqrt{\epsilon},
\]
where we used that $\alpha < 5\cdot 10^{-6}$. Defining $U_2' \subset U_2$ analogously, we therefore have
\begin{equation}
|U_1 \setminus U_1'| \leq 3\sqrt{\epsilon}|U_1|, \ \ |U_2 \setminus U_2'| \leq 3\sqrt{\epsilon}|U_2|. \label{eq:U1U1'}
\end{equation}
Thus, each vertex in $U_1'$ has at least $(1-\sqrt{\epsilon})|U_2| - |U_2 \setminus U_2'| \geq (1-4\sqrt{\epsilon})|U_2|$ blue neighbors in $U_2'$ and, similarly, each vertex in $U_2'$ has at least $(1-4\sqrt{\epsilon})|U_1|$ blue neighbors in $U_1'$.
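As a quick numerical spot-check of the chain of inequalities bounding $x$ above (the values of $\alpha$ are illustrative; only the constraint $\alpha < 5\cdot 10^{-6}$ from the stability lemma is used):

```python
import math

# worst case of M^2 / (2|U1||U2|) given |U1|, |U2| >= (1/2 - 10*sqrt(2*alpha)) M,
# compared against the claimed intermediate bound 2 + 200*sqrt(alpha) < 3
ratios = []
for alpha in [1e-8, 1e-7, 1e-6, 4.9e-6]:
    lower = 0.5 - 10.0 * math.sqrt(2.0 * alpha)
    ratio = 1.0 / (2.0 * lower ** 2)
    ratios.append((ratio, 2.0 + 200.0 * math.sqrt(alpha)))
ok = all(r < bound < 3.0 for r, bound in ratios)
```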
\begin{claim}\label{claim:U1U2red'}
If there is a blue edge within $U_1'$ or $U_2'$, then Case 1 of Lemma \ref{lem:main2} holds.
\end{claim}
\begin{proof}
It will suffice to show that there is a blue cycle $C_t$ in $H$, where $t$ is the odd integer with $(1/2 + \alpha)M \geq t > (1/2+\alpha)M-2$. Suppose that there is a blue edge $(u,u')$ in $U_1'$. We will apply Claim \ref{claim:redgeneral} to $H_B$ with $(S, T, l)$ being $(U_1', U_2', t)$. Since each vertex in $U_1'$ has at least $ (1-4\sqrt{\epsilon})|U_2|$ blue neighbors in $U_2'$, any two vertices in $U_1'$ have a common blue neighborhood in $U_2'$ of order at least
\[
2(1-4\sqrt{\epsilon})|U_2| - |U_2'| \geq (1-8\sqrt{\epsilon})|U_2'|.
\]
Thus, we can let $s$ in Claim \ref{claim:redgeneral} be $(1-8\sqrt{\epsilon})|U_2'|$.
To check that Claim \ref{claim:redgeneral} applies, we need to show that $(1/2 + \alpha)M \leq \min(2|U_1'| -1, 2s+1)$.
First, by (\ref{eqn:U1U2}), (\ref{eq:U1U1'}), and the fact that $M \geq \epsilon^{-1}$,
\[
2|U_1'|-1 \geq 2 (1-3\sqrt{\epsilon})|U_1| -1\geq 2(1-3\sqrt{\epsilon}) (1/2 - 10\sqrt{2\alpha})M -1 > 0.6M > (1/2+\alpha)M.
\]
Similarly,
\[
2s+1 \geq 2(1-8\sqrt{\epsilon}) (1-3\sqrt{\epsilon})|U_2| \geq 2(1-11\sqrt{\epsilon}) (1/2 - 10\sqrt{2\alpha})M > 0.6M > (1/2+\alpha)M.
\]
Thus, by Claim \ref{claim:redgeneral}, there is a cycle of length $t$ in $H_B$, as required.
\end{proof}
We may therefore assume that there is no blue edge inside $U_1'$ or $U_2'$. That is, all the edges within $U_1'$ and $U_2'$ are red-only. We move the vertices in $U_0$ arbitrarily to $U_1$ and $U_2$ to obtain $\tilde U_1$ and $\tilde U_2$. Thus, $\tilde U_1 \cup \tilde U_2$ is a vertex partition of $V(H)$.
Let $X_1 \subset V(K_n)$ be the vertices in $K_n$ corresponding to $\tilde U_1$ in $H$ and let $X_2 \subset V(K_n)$ be the vertices corresponding to $\tilde U_2$. We will conclude the proof by showing that the partition $X_1 \cup X_2$ induces an extremal coloring.
\begin{claim}
The vertex partition $X_1 \cup X_2$ induces an extremal coloring with parameter $\lambda$, where $\lambda = 300 \sqrt{\alpha}$.
\end{claim}
\begin{proof}
By Claim \ref{claim:U1U2red'}, any edge in $U_1'$ is red-only and at most $\epsilon \binom{M}{2}$ pairs of distinct vertices in $U_1'$ are non-adjacent.
Moreover, for any red-only edge $(i,j)$ in $U_1'$, the red density between $V_i$ and $V_j$ is at least $1-d$, since otherwise $(i,j)$ would also be colored blue. Since $n/M - 1 < |V_i| \leq n/M+1$, the number of red edges in $X_1$ is at least
\[(1-d) (n/M-1)^2 \binom{|U_1'|}{2} - (n/M+1)^2 \epsilon \binom{M}{2}.\]
Note also, by (\ref{eqn:U1U2}), that
\begin{align*}
|X_1| & \leq |\tilde U_1| \cdot (n/M+1) \leq (|U_1|+|U_0|)(n/M+1) \leq (|U_1| + 2000\alpha M) (n/M+1) \nonumber \\
& \leq \left( 1/2 + 10\sqrt{2\alpha} + 2000\alpha \right) M (n/M+1) \leq (1/2 + 20\sqrt{\alpha}) n.
\end{align*}
Combining the two inequalities above with (\ref{eqn:U1U2}) and (\ref{eq:U1U1'}),
we see that the red density in $X_1$ is at least
\begin{align*}
\frac{(1-d) \binom{|U_1'|}{2} (n/M-1)^2 - (n/M+1)^2\epsilon M^2/2}{ |X_1|^2/2}
& \geq \frac{2(1-d) \binom{(1-3\sqrt{\epsilon})|U_1|}{2} (n/M-1)^2 - (n/M+1)^2\epsilon M^2}{ (1/2 + 20\sqrt{\alpha})^2 n^2 } \\
& \geq 1 - d - 200 \sqrt{\alpha} - 10\sqrt{\epsilon} > 1 - 300 \sqrt{\alpha}.
\end{align*}
Similarly, $|X_2| \leq (1/2 + 20\sqrt{\alpha}) n$ and the red density in $X_2 \subset V(G)$ is at least $1 - 300 \sqrt{\alpha}$.
It only remains to lower bound the blue density between $X_1$ and $X_2$. By (\ref{eqn:U1U2}) and (\ref{eqn:U1U2blue}), the number of blue edges between $X_1$ and $X_2$ is at least
\begin{align*}
(1-d)\left(|U_1||U_2| - \epsilon \binom{M}{2}\right) (n/M-1)^2
& \geq (1-d)\left((1/2 - 20\sqrt{\alpha})^2 M^2 - \epsilon \binom{M}{2}\right) (n/M - 1)^2 \\
& >(1-d) (1/4 - 25 \sqrt{\alpha})n^2.
\end{align*}
Thus, by a similar computation to before, the blue density between $X_1$ and $X_2$ is at least
\[\frac{(1-d)(1/4 - 25 \sqrt{\alpha})n^2}{|X_1||X_2|}
\geq \frac{(1-d)(1/4 - 25 \sqrt{\alpha})n^2}{n^2/4} > 1 - 300\sqrt{\alpha},\]
as required.
\end{proof}
https://arxiv.org/abs/1902.01687 | Optimal Nonparametric Inference via Deep Neural Network | The deep neural network is a state-of-the-art method in modern science and technology. Much of the statistical literature has been devoted to understanding its performance in nonparametric estimation, whereas the results are suboptimal due to a redundant logarithmic sacrifice. In this paper, we show that such log-factors are not necessary. We derive upper bounds for the $L^2$ minimax risk in nonparametric estimation. Sufficient conditions on network architectures are provided such that the upper bounds become optimal (without log-sacrifice). Our proof relies on an explicitly constructed network estimator based on tensor product B-splines. We also derive asymptotic distributions for the constructed network and a related hypothesis testing procedure. The testing procedure is further proven to be minimax optimal under suitable network architectures. | \section{Introduction}
With the remarkable development of modern technology, difficult learning problems can nowadays be tackled effectively via deep learning architectures.
For instance, deep neural networks have led to impressive performance
in fields such as computer vision, natural language processing, image/speech/audio recognition, social network filtering, machine translation, bioinformatics, drug design, and medical image analysis,
where they have demonstrated performance superior to that of human experts.
The success of deep networks hinges on their rich expressiveness
(see \cite{db11}, \cite{rpkgdj17}, \cite{mpcb14}, \cite{bs14}, \cite{t16}, \cite{l17} and \cite{y17, y18}).
Recently, deep networks have played an increasingly important role in statistics,
particularly in nonparametric curve fitting (see \cite{kk05, hk06, kk17, km11, s17}).
Applications of deep networks in other fields such as image processing or pattern recognition include, to name a few, \cite{lbh15}, \cite{dlhyysszhw13}, \cite{wwhwzzl14}, \cite{gg16}, etc.
A fundamental problem in statistical applications of deep networks is how accurately they can estimate a nonparametric regression function.
To describe the problem, let us consider i.i.d. observations $(Y_i, \mathbf{X}_i)$, $i=1,2,\ldots, n$ generated from the following nonparametric model:
\begin{eqnarray}
Y_i=f_0(\mathbf{X}_i)+\epsilon_i, \label{eq:model}
\end{eqnarray}
where $\mathbf{X}_i \in [0,1]^d$ are i.i.d. $d$-dimensional predictors for a fixed $d\ge1$,
$\epsilon_i$ are i.i.d. random noise with $E(\epsilon_i)=0$ and $Var(\epsilon_i)=\tau^2$,
and $f_0\in\mathcal{H}$ is an unknown function.
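To make the setup concrete, here is a minimal simulation from model (\ref{eq:model}); the particular $f_0$, the uniform design, and the Gaussian noise are illustrative assumptions, not requirements of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, tau = 500, 2, 0.1

def f0(X):
    # hypothetical regression function on [0,1]^2 (illustrative choice only)
    return np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2

X = rng.uniform(size=(n, d))        # X_i i.i.d. on [0,1]^d
eps = tau * rng.standard_normal(n)  # E(eps_i) = 0, Var(eps_i) = tau^2
Y = f0(X) + eps                     # Y_i = f0(X_i) + eps_i
```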
For any $L\in\mathbb{N}$ and $\mathbf{p}=(p_1,\ldots,p_L)\in\mathbb{N}^L$,
let $\mathcal{F}(L,\mathbf{p})$ denote the collection of network functions from $\mathbb{R}^d$ to $\mathbb{R}$ consisting
of $L$ hidden layers with the $l$th layer including $p_l$ neurons.
The problem of interest is to find an order $R_n$ that controls the $L^2$ minimax risk:
\begin{equation}\label{eqn:goal}
\inf_{\widehat{f}\in\mathcal{F}(L,\mathbf{p})}\sup_{f_0\in \mathcal{H}}\mathbb{E}_{f_0}\bigg(\|\widehat{f}-f_0\|_{L^2}^2\bigg|\mathbb{X}\bigg)=O_P(R_n),
\end{equation}
where $\mathbb{X}=\{\mathbf{X}_1,\ldots,\mathbf{X}_n\}$ and the infimum is taken over all estimators
$\widehat{f}\in\mathcal{F}(L,\mathbf{p})$. In other words, we are interested in the performance of the ``best'' network estimator
in the ``worst'' scenario.
Existing results regarding (\ref{eqn:goal}) are sub-optimal.
For instance, when $\mathcal{H}$ is a $\beta$-smooth H\"{o}lder class and $L,\mathbf{p}$ are properly selected,
it has been argued that $R_n=n^{-\frac{2\beta}{2\beta+d}}(\log{n})^s$ for some constant $s>0$;
see \cite{kk05, hk06, kk17, km11, s17, su18, flm18}. Such results are mostly proved based on empirical processes techniques
in which the logarithmic factors
arise from the entropy bound of the neural network class.
The aim of this paper is to fully remove the redundant logarithmic factors, i.e.,
under proper selections of $L,\mathbf{p}$ one actually has $R_n=n^{-\frac{2\beta}{2\beta+d}}$ in (\ref{eqn:goal}).
This means that neural network estimators can exactly achieve the minimax estimation rate.
Our proof relies on an explicitly constructed neural network which is proven minimax optimal.
Some interesting byproducts are worth mentioning.
First, the rate $R_n$ can be further improved when $f_0$ satisfies additional structures.
Specifically, we will show that $R_n=n^{-\frac{2\beta}{2\beta+1}}$ if $f_0$ satisfies
additive structure, i.e., $f_0$ is a sum of univariate $\beta$-H\"{o}lder functions.
Such a rate is minimax according to \cite{s85}.
Second, we will derive the pointwise asymptotic distribution of the constructed neural network estimator
which will be useful for constructing pointwise confidence intervals.
Third, the constructed neural network estimator will be further used
as a test statistic which is proven optimal when $L,\mathbf{p}$ are properly selected.
As far as we know, these are the first provably valid confidence interval and test statistic
based on neural networks in nonparametric regression.
This paper is organized as follows.
Section \ref{sec:prelim} includes some preliminaries on
deep networks and function spaces.
In Section \ref{sec:optimal:rate:convergence}, we derive
upper bounds for the minimax risk and investigate their optimality.
Both multivariate regression and additive regression are considered.
Section \ref{sec:basis:approximation} contains the main proof strategy,
which covers the construction of the (optimal) network and related results on the network approximation
of tensor product B-splines.
As byproducts, we also provide limit distributions and optimal testing results in Section \ref{sec:byproducts}.
The proofs of some of the main results and technical lemmas are deferred to Appendices A--D.
\section{Preliminaries}\label{sec:prelim}
In this section, we review some notation and background on deep networks and function spaces.
Throughout let $\sigma$ denote the rectified linear unit (ReLU) activation function,
i.e., $\sigma(x)=(x)_+$ for $x\in\mathbb{R}$.
For any real vectors $\mathbf{v}=(v_1,\ldots,v_r)^T$ and $\mathbf{y}=(y_1,\ldots,y_r)^T$, define the shift activation function
$\sigma_\mathbf{v}(\mathbf{y})=(\sigma(y_1-v_1),\ldots,\sigma(y_r-v_r))^T$.
Let $\mathbf{p}=(p_1,\ldots,p_L)\in\mathbb{N}^L$.
Any $f\in\mathcal{F}(L,\mathbf{p})$ has an expression
\[
f(\mathbf{x})=W_{L+1}\sigma_{\mathbf{v}_{L}}W_{L}\sigma_{\mathbf{v}_{L-1}}\ldots W_2\sigma_{\mathbf{v}_1}W_1 \mathbf{x},\,\,\mathbf{x} \in \mathbb{R}^{d},
\]
where $\mathbf{v}_{l}\in \mathbb{R}^{p_l}$ is a shift vector and $W_{l}\in \mathbb{R}^{p_{l}\times p_{l-1}}$ is a weight matrix.
Here we have adopted the convention $p_0=d$ and $p_{L+1}=1$.
For simplicity, we only consider fully connected networks and do not make any sparsity assumptions on the entries of $\mathbf{v}_l$ and $W_l$.
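The composition above is easy to sketch in code; the helper below is a generic forward pass over the class $\mathcal{F}(L,\mathbf{p})$ and is our own illustration, not code from the paper.

```python
import numpy as np

def relu_net(x, weights, shifts):
    """Evaluate f(x) = W_{L+1} sigma_{v_L} W_L ... sigma_{v_1} W_1 x.

    weights: [W_1, ..., W_{L+1}] with W_l of shape (p_l, p_{l-1}), p_0 = d, p_{L+1} = 1.
    shifts:  [v_1, ..., v_L]; the output layer is linear (no shift, no activation).
    """
    h = np.asarray(x, dtype=float)
    for W, v in zip(weights[:-1], shifts):
        h = np.maximum(W @ h - v, 0.0)  # shifted ReLU: sigma_v(y) = (y - v)_+
    return weights[-1] @ h

# a one-hidden-layer example with d = 1, p_1 = 1: f(x) = sigma(x) = max(x, 0)
W1, v1, W2 = np.array([[1.0]]), np.array([0.0]), np.array([[1.0]])
out = relu_net(np.array([2.0]), [W1, W2], [v1])
```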
Next let us introduce various function spaces under which the estimation rates will be derived.
We will consider two types of function spaces: H\"{o}lder space and additive space.
Let $\Omega=[0,1]^d$ denote the domain of the functions.
For $f$ defined on $\Omega$, define the supnorm and $L^2$-norm of $f$ by $\|f\|_{\sup}=\sup_{\mathbf{x}\in\Omega}|f(\mathbf{x})|$
and $\|f\|_{L^2}^2=\int_\Omega f(\mathbf{x})^2Q(\mathbf{x})d\mathbf{x}$ respectively.
Here $Q(\cdot)$ is the probability density for the predictor $\mathbf{X}_i$'s.
For any $\bm{\alpha}=(\alpha_1, \alpha_2, \ldots, \alpha_d)\in \mathbb{N}^d$, define $|\bm{\alpha}|=\sum_{j=1}^d\alpha_j$ and
\begin{eqnarray*}
\partial^{\bm{\alpha}}f=\frac{\partial^{|\bm{\alpha}|}f}{\partial x_1^{\alpha_1}\ldots \partial x_d^{\alpha_d}},
\end{eqnarray*}
whenever the partial derivative exists.
For any $\beta>1$ and $F>0$, let $\Lambda^\beta(F, \Omega)$ denote the ball of $\beta$-H\"{o}lder functions with radius $F$, i.e.,
\begin{eqnarray}
\Lambda^\beta(F, \Omega)=\bigg\{f: \Omega\to \mathbb{R} \bigg|\;\sum_{\bm{\alpha}:|\bm{\alpha}|\leq \floor{\beta}}\|\partial^{\bm{\alpha}}f\|_{\sup}+\sum_{\bm{\alpha}:|\bm{\alpha}|=\floor{\beta}}\sup_{\mathbf{x}_1\neq \mathbf{x}_2\in \Omega}\frac{|\partial^{\bm{\alpha}}f(\mathbf{x}_1)-\partial^{\bm{\alpha}}f(\mathbf{x}_2)|}{\|\mathbf{x}_1-\mathbf{x}_2\|_2^{\beta-\floor{\beta}}}\leq F\bigg\},\nonumber
\end{eqnarray}
in which $\floor{\beta}$ is the largest integer smaller than $\beta$ and
$\|v\|_2$ denotes the Euclidean norm of a real vector $v$.
For any $F>0$ and $\boldsymbol\beta=(\beta_1, \ldots, \beta_d)\in (1,\infty)^d$,
define
\begin{eqnarray}
\Lambda^{\boldsymbol\beta}_+(F, \Omega)=\left\{f: [0,1]^d \to \mathbb{R}|\; f(\mathbf{x})=a+\sum_{j=1}^dg_{j}(x_j) \textrm{ with } g_j \in \Lambda^{\beta_j}(F, [0,1]), \textrm{ for } j=1,\ldots, d\right\}. \nonumber
\end{eqnarray}
Clearly, any $f\in\Lambda^{\boldsymbol\beta}_+(F, \Omega)$ has an expression $f(\mathbf{x})=a+\sum_{j=1}^dg_{j}(x_j)$
with the $j$th additive component belonging to the ball of univariate $\beta_j$-H\"{o}lder functions with radius $F$.
\section{Minimax Neural Network Estimation}\label{sec:optimal:rate:convergence}
In this section, we derive an upper bound for the $L^2$ minimax risk in the problem (\ref{eqn:goal}).
The risk bound will be proven optimal under suitable circumstances.
To simplify the expressions, we only consider networks with architecture $(L,\mathbf{p}(T))$,
where $\mathbf{p}(T):=(T,\ldots,T)\in\mathbb{N}^L$ for any $T\in\mathbb{N}$.
In other words, we focus on networks whose $L$ layers each have $T$ neurons.
Our results hold under suitable conditions on $L$ and $T$ as well as the following assumption on the design and model error.
\begin{Assumption}\label{A0}
The probability density $Q(\mathbf{x})$ of $\mathbf{X}$ is supported on $\Omega$.
There exists a constant $c>0$ such that $c^{-1}\le Q(\mathbf{x})\le c$ for any $\mathbf{x}\in\Omega$.
The error terms $\epsilon_i$'s are independent of $\mathbf{X}_i$'s.
\end{Assumption}
\begin{theorem}\label{thm:neural:estimator:rate:of:convergence:main:text}
Let Assumption \ref{A0} be satisfied and $F>0$ be a fixed constant.
Suppose that $T\to\infty$ and $T\log T=o(n)$ as $n\to\infty$. Then it follows that
\begin{eqnarray}\label{thm1:risk:bound}
\inf_{\widehat{f}\in \mathcal{F}(L,\mathbf{p}(T))}\sup_{f_0\in \Lambda^\beta(F, \Omega)}
\mathbb{E}_{f_0}\bigg(\|\widehat{f}(\mathbf{x})-f_0(\mathbf{x})\|_{L^2}^2\bigg|\mathbb{X}\bigg)=O_P\bigg(\frac{1}{T^{\frac{2\beta}{d}}}+\frac{T}{n}+\frac{T^2}{2^{\frac{L}{d+k}}}\bigg).
\end{eqnarray}
As a consequence, if $T\asymp n^{\frac{d}{2\beta+d}}$ and $n^{\frac{2\beta+2d}{2\beta+d}}=o(2^{\frac{L}{d+k}})$, then the following holds:
\begin{eqnarray*}
\inf_{\widehat{f}\in \mathcal{F}(L,\mathbf{p}(T))}\sup_{f_0\in \Lambda^\beta(F, \Omega)}
\mathbb{E}_{f_0}\bigg(\|\widehat{f}(\mathbf{x})-f_0(\mathbf{x})\|_{L^2}^2\bigg|\mathbb{X}\bigg)=O_P(n^{-\frac{2\beta}{2\beta+d}}).
\end{eqnarray*}
\end{theorem}
Proof of Theorem \ref{thm:neural:estimator:rate:of:convergence:main:text}
relies on an explicitly constructed network estimator
based on tensor product B-splines; see Section \ref{sec:basis:approximation}.
The minimax risk bound in (\ref{thm1:risk:bound}) consists of three components ${T^{-\frac{2\beta}{d}}}, n^{-1}T, 2^{-\frac{L}{d+k}}T^2$
corresponding to the bias, variance and approximation error of the constructed network.
The optimal risk bound is achieved through balancing the three terms.
The approximation error of the constructed network decreases exponentially in $L$.
Networks constructed based on other methods
such as local Taylor approximations (\cite{y17}, \cite{y18} and \cite{s17}) have similar approximation performance.
However, their statistical properties are more challenging to deal with due to the unbalanced eigenvalues of the corresponding basis matrix.
In contrast, the eigenvalues of the tensor product B-spline basis matrix are known to have balanced orders, e.g., see \cite{h98},
which plays an important role in deriving the risk bounds.
Also notice that the risk bounds will blow up when $L$ is fixed,
which partially explains the superior performance of deep networks compared with shallow ones; see \cite{es16}.
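Concretely, the balancing argument works out as follows: equating the bias and variance terms in (\ref{thm1:risk:bound}) recovers the stated choice of $T$, and the condition on $L$ is exactly what makes the approximation term negligible:
\begin{align*}
T^{-\frac{2\beta}{d}} \asymp \frac{T}{n} \;&\Longleftrightarrow\; T \asymp n^{\frac{d}{2\beta+d}}, \quad\text{so that}\quad T^{-\frac{2\beta}{d}}+\frac{T}{n} \asymp n^{-\frac{2\beta}{2\beta+d}},\\
\frac{T^2}{2^{\frac{L}{d+k}}} = o\Big(n^{-\frac{2\beta}{2\beta+d}}\Big) \;&\Longleftrightarrow\; T^2\, n^{\frac{2\beta}{2\beta+d}} \asymp n^{\frac{2d}{2\beta+d}}\, n^{\frac{2\beta}{2\beta+d}} = n^{\frac{2\beta+2d}{2\beta+d}} = o\Big(2^{\frac{L}{d+k}}\Big).
\end{align*}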
The optimal rate in Theorem \ref{thm:neural:estimator:rate:of:convergence:main:text} suffers from the `curse' of dimensionality.
The following theorem demonstrates that this issue can be addressed when $f_0$ has an additive structure.
For $\boldsymbol\beta=(\beta_1, \ldots, \beta_d)\in (1,\infty)^d$, let $\beta^*=\min_{1\le j\le d}\beta_j$.
\begin{theorem}\label{thm:rate:convergence:additive:main:text}
Let Assumption \ref{A0} be satisfied and $F>0$ be a fixed constant. Suppose that $T\to\infty$ and $T\log T=o(n)$ as $n\to\infty$.
Then it follows that
\begin{eqnarray*}
\inf_{\widehat{f}\in\mathcal{F}(L,\mathbf{p}(T))}\sup_{f_0\in \Lambda^{\boldsymbol\beta}_+(F, \Omega)}
\mathbb{E}_{f_0}\bigg(\|\widehat{f}(\mathbf{x})-f_0(\mathbf{x})\|_{L^2}^2\bigg|\mathbb{X}\bigg)=O_P\bigg(\frac{1}{T^{2\beta^*}}+\frac{T}{n}+\frac{T^2}{2^{\frac{L}{1+k}}}\bigg).
\end{eqnarray*}
As a consequence, if $T\asymp n^{\frac{1}{2\beta^*+1}}$ and $n^{\frac{2\beta^*+2}{2\beta^*+1}}=o(2^{\frac{L}{1+k}})$, then
\begin{eqnarray*}
\inf_{\widehat{f}\in\mathcal{F}(L,\mathbf{p}(T))}\sup_{f_0\in \Lambda^{\boldsymbol\beta}_+(F, \Omega)}\mathbb{E}_{f_0}\bigg(\|\widehat{f}(\mathbf{x})-f_0(\mathbf{x})\|_{L^2}^2\bigg|\mathbb{X}\bigg)=O_P\left(n^{-\frac{2\beta^*}{2\beta^*+1}}\right).
\end{eqnarray*}
\end{theorem}
The rate $n^{-\frac{2\beta^*}{2\beta^*+1}}$ in Theorem \ref{thm:rate:convergence:additive:main:text} is optimal
in nonparametric additive estimation.
When $\beta_1=\cdots=\beta_d=\beta$, the rate simply becomes $n^{-\frac{2\beta}{2\beta+1}}$
whose optimality has been proven by \cite{s85}.
Otherwise, the optimal rate relies on the least order of smoothness of the $d$ univariate functions.
The proof of Theorem \ref{thm:rate:convergence:additive:main:text}
is deferred to Appendix C.
\section{Construction of Optimal Networks}\label{sec:basis:approximation}
In this section, we explicitly construct a network estimator $\widehat{f}_{\textrm{net}}\in\mathcal{F}(L,\mathbf{p}(T))$ and derive its risk bound.
Theorems \ref{thm:neural:estimator:rate:of:convergence:main:text} and \ref{thm:rate:convergence:additive:main:text}
will then follow immediately from the trivial fact
\begin{equation}
\inf_{\widehat{f}\in\mathcal{F}(L,\mathbf{p}(T))}\sup_{f_0}
\mathbb{E}_{f_0}\bigg(\|\widehat{f}(\mathbf{x})-f_0(\mathbf{x})\|_{L^2}^2\bigg|\mathbb{X}\bigg)\le \sup_{f_0}
\mathbb{E}_{f_0}\bigg(\|\widehat{f}_{\textrm{net}}(\mathbf{x})-f_0(\mathbf{x})\|_{L^2}^2\bigg|\mathbb{X}\bigg).\label{eq:basic:inequality}
\end{equation}
The construction process starts from a pilot estimator $\widehat{f}_{\textrm{pilot}}$ based on tensor product B-splines.
The tensor product B-spline basis functions are further approximated through explicitly constructed multi-layer networks,
which will be aggregated to obtain the network estimator $\widehat{f}_{\textrm{net}}$.
The key step is to show that the discrepancies between the tensor product B-spline basis functions and the corresponding network approximations are reasonably small
such that $\widehat{f}_{\textrm{net}}$ will perform similarly as $\widehat{f}_{\textrm{pilot}}$, and thus, optimally.
Our construction is different from \cite{y17} and \cite{s17}, where the basis functions are obtained through local Taylor approximation.
We find that the eigenvalues of the local Taylor basis matrix are difficult to quantify, so the corresponding pilot estimator
cannot be used effectively. Instead, the pilot estimator based on tensor product B-splines is more convenient to deal with.
Other bases such as wavelets or smoothing splines may also work, but this will be explored elsewhere.
\subsection{A Pilot Estimator Through Tensor Product B-splines}
In this subsection, we review tensor product $B$-splines and construct the corresponding pilot estimator.
For any integer $M\geq 2$,
let $0=t_0<t_1<\cdots<t_{M-1}<t_M=1$ be knots that form a partition of the unit interval.
The definition of univariate B-splines of order $k\ge2$ depends on
additional knots $t_{-k+1}<t_{-k+2}<\ldots<t_{-1}<0$ and $1<t_{M+1} <\ldots< t_{M+k-1}$. Given knots $t=(t_{-k+1},\ldots, t_{M+k-1})\in \mathbb{R}^{M+2k-1}$, the univariate $B$-spline basis functions of order $k$, denoted $B_{i,k}(x)$, $i=-k+1,-k+2,\ldots, M-1$, can be defined by induction on the order $s=2,3,\ldots, k$. For $s=2$ and $-k+1\le i\le M+k-3$, define
\begin{eqnarray*}
B_{i, 2}(x)=\begin{cases}
\frac{x-t_i}{t_{i+1}-t_i}, & \textrm{if } x\in [t_i, t_{i+1}]\\
\frac{t_{i+2}-x}{t_{i+2}-t_{i+1}}, & \textrm{if } x\in [t_{i+1}, t_{i+2}]\\
0,& \textrm{elsewhere}
\end{cases}.
\end{eqnarray*}
Suppose that $B_{i,s}(x)$, $i=-k+1,\ldots, M+k-s-1$ have been defined. Define
\begin{eqnarray}\label{induction:formula}
B_{i, s+1}=a_{i,s}B_{i,s}+b_{i,s}B_{i+1,s},\,\,\textrm{for $i=-k+1,-k+2,\ldots, M+k-s-2$,}
\end{eqnarray}
where
\begin{eqnarray*}
{a}_{i,s}(x)=\begin{cases}
0, & \textrm{if } x<t_i\\
\frac{x-t_i}{t_{i+s}-t_i}, &\textrm{if } t_i\leq x \leq t_{i+s}\\
0,&\textrm{if } x> t_{i+s}\\
\end{cases},\;\;\;{b}_{i,s}(x)=\begin{cases}
0, & \textrm{if } x<t_{i+1}\\
\frac{t_{i+s+1}-x}{t_{i+s+1}-t_{i+1}}, &\textrm{if } t_{i+1}\leq x \leq t_{i+s+1}\\
0,&\textrm{if } x> t_{i+s+1}\\
\end{cases}.
\end{eqnarray*}
Proceeding with this construction, we can obtain $B_{i,k}(x)$.
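A direct transcription of this induction (our own sketch, with uniformly extended knots $t_j = j/M$, which are an illustrative choice) lets one check basic B-spline properties, such as nonnegativity and the partition of unity on $[0,1]$:

```python
import numpy as np

M, k = 5, 3  # M knot intervals on [0,1]; spline order k
t = {j: j / M for j in range(-k + 1, M + k)}  # extended knots t_{-k+1} < ... < t_{M+k-1}

def B(i, s, x):
    """Order-s B-spline B_{i,s}(x), built by the induction starting from the hat B_{i,2}."""
    if s == 2:
        if t[i] <= x <= t[i + 1]:
            return (x - t[i]) / (t[i + 1] - t[i])
        if t[i + 1] < x <= t[i + 2]:
            return (t[i + 2] - x) / (t[i + 2] - t[i + 1])
        return 0.0
    # B_{i,s} = a_{i,s-1} B_{i,s-1} + b_{i,s-1} B_{i+1,s-1}
    a = (x - t[i]) / (t[i + s - 1] - t[i]) if t[i] <= x <= t[i + s - 1] else 0.0
    b = (t[i + s] - x) / (t[i + s] - t[i + 1]) if t[i + 1] <= x <= t[i + s] else 0.0
    return a * B(i, s - 1, x) + b * B(i + 1, s - 1, x)

grid = np.linspace(0.0, 1.0, 101)
# the M + k - 1 basis functions B_{i,k}, i = -k+1, ..., M-1, sum to one on [0,1]
sums = [sum(B(i, k, x) for i in range(-k + 1, M)) for x in grid]
```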
To approximate a multivariate function, we adopt tensor product $B$-splines.
Define $\Gamma=\{-k+1, -k+2, \ldots, 0, 1, \ldots, M-1\}^d$ and $q=|\Gamma|=(M+k-1)^d$.
For $\mathbf{i}=(i_1, i_2, \ldots, i_d)\in \Gamma$, define
$D_{\mathbf{i},k}(\mathbf{x})=\prod_{j=1}^dB_{i_j,k}(x_j)$
and obtain the corresponding pilot estimator
\begin{eqnarray}
\widehat{f}_{\textrm{pilot}}(\mathbf{x})=\sum_{\mathbf{i} \in \Gamma}\widehat{b}_{\mathbf{i}}D_{\mathbf{i}, k}(\mathbf{x}),\label{eq:pilot:estimator}
\end{eqnarray}
where $\widehat{b}_{\mathbf{i}}, \mathbf{i}\in \Gamma$ are the basis coefficients obtained
by the following least squares estimation:
\begin{equation}\label{LSE:eqn}
\widehat{C}:=[\widehat{b}_{\mathbf{i}}]_{\mathbf{i}\in\Gamma}=\argmin_{b_\mathbf{i}\in\mathbb{R}^q}\sum_{i=1}^n \left(Y_i-\sum_{\mathbf{i}\in\Gamma}b_\mathbf{i} D_{\mathbf{i},k}(\mathbf{X}_i)\right)^2.
\end{equation}
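A univariate ($d=1$) toy version of the pilot estimator, using order-2 (piecewise linear, $k=2$) B-splines so that the design matrix is easy to write down; the sample size, knot number, noise level, and $f_0$ below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M, tau = 2000, 20, 0.1
knots = np.linspace(0.0, 1.0, M + 1)  # q = M + k - 1 = M + 1 basis functions for k = 2

def design(x):
    # order-2 B-splines on uniform knots are the "hat" functions centered at the knots
    h = 1.0 / M
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

f0 = lambda x: np.sin(2 * np.pi * x)
X = rng.uniform(size=n)
Y = f0(X) + tau * rng.standard_normal(n)

D = design(X)                                  # n x q basis matrix
b_hat, *_ = np.linalg.lstsq(D, Y, rcond=None)  # least-squares coefficients
grid = np.linspace(0.0, 1.0, 201)
f_pilot = design(grid) @ b_hat                 # the pilot estimator on a grid
rmse = float(np.sqrt(np.mean((f_pilot - f0(grid)) ** 2)))
```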
\subsection{Network Approximation of Tensor Product B-splines}
In this subsection, we approximate $B_{i,k}$'s through multilayer neural networks.
We first construct networks that approximate the univariate B-spline basis functions,
and then multiply these networks through a product network $\xmark_s$ introduced by \cite{y17} to approximate the tensor product B-spline basis.
Unlike \cite{y17} and \cite{s17}, our construction proceeds in an inductive manner due to the intrinsic induction structure of B-splines.
For any $s\ge1$, the product network
$\xmark_s(x_1, x_2, \ldots, x_s)$ is constructed to approximate the monomial $\prod_{j=1}^s x_j$.
The following Proposition \ref{proposition:appriximation:product:k}, which is due to \cite{y17}, provides guarantees for $\xmark_s$.
\begin{proposition}\label{proposition:appriximation:product:k}
For any integers $m\ge1$ and $s\geq 2$, there exists a neural network function $\xmark_s$ with $(s-1)(2m+3)-1$ hidden layers and $10+s$ nodes in each hidden layer such that
for all $x_1, x_2, \ldots, x_s\in [0, 1]$, $0\leq \xmark_s(x_1, x_2, \ldots, x_s)\leq 1$ and
\begin{eqnarray*}
\bigg|\xmark_s(x_1, x_2, \ldots, x_s)-\prod_{j=1}^s x_j\bigg|\leq (s-1)4^{-m+1}.
\end{eqnarray*}
As a consequence, if $|\widetilde{x}_j-x_j|\leq \delta$ and $\widetilde{x}_j \in [0,1]$ for $j=1,2,\ldots,s$, then
\begin{eqnarray*}
\bigg|\xmark_s(\widetilde{x}_1, \widetilde{x}_2, \ldots, \widetilde{x}_s)-\prod_{j=1}^s x_j\bigg|\leq (s-1)4^{-m+1}+s\delta.
\end{eqnarray*}
\end{proposition}
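The construction behind $\xmark_s$ goes back to the squaring trick of \cite{y17}: approximate $x^2$ on $[0,1]$ by $x-\sum_{s=1}^m g_s(x)/4^s$, where $g_s$ is the $s$-fold composition of a ReLU-expressible `tooth' map, and then recover products via $xy = 2((x+y)/2)^2 - x^2/2 - y^2/2$. The sketch below (our own, for $s=2$ only, and not necessarily the paper's exact network) checks the resulting error numerically against the bound in the proposition.

```python
import numpy as np

def tooth(x):
    # tent map g on [0,1]; as a ReLU net: g(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1)
    return np.where(x < 0.5, 2.0 * x, 2.0 * (1.0 - x))

def sq_approx(x, m):
    # Yarotsky-style approximation of x^2 on [0,1]: x - sum_{s=1}^m g_s(x)/4^s,
    # with uniform error at most 4^{-(m+1)}
    out = np.array(x, dtype=float)
    g = np.array(x, dtype=float)
    for s in range(1, m + 1):
        g = tooth(g)
        out -= g / 4.0 ** s
    return out

def prod2(x, y, m):
    # x*y = 2((x+y)/2)^2 - x^2/2 - y^2/2, each square replaced by sq_approx
    return 2.0 * sq_approx((x + y) / 2.0, m) - sq_approx(x, m) / 2.0 - sq_approx(y, m) / 2.0

m = 4
xs = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(xs, xs)
err = float(np.max(np.abs(prod2(X, Y, m) - X * Y)))  # within (s-1) 4^{-m+1} for s = 2
```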
In what follows, we will approximate the $k$th order univariate B-spline basis $B_{i,k}$. Fixing an integer $m\geq 1$,
our method is based on the induction formula (\ref{induction:formula}) which allows us to start from approximating $B_{i,2}$.
Specifically, we approximate $B_{i,2}$ by $\widetilde{B}_{i, 2}$ defined as
\begin{eqnarray*}
\widetilde{B}_{i, 2}(x)=c_1\sigma(x-t_i)+c_2\sigma(x-t_{i+1})+c_3\sigma(x-t_{i+2}),
\end{eqnarray*}
where
\begin{eqnarray}
c_1=\frac{1}{t_{i+1}-t_i}, \;\;c_2=-\frac{t_{i+2}-t_{i}}{t_{i+2}-t_{i+1}}c_1, \;\;c_3=-(t_{i+2}-t_{i}+1)c_1-(t_{i+2}-t_{i+1}+1)c_2.\label{eq:definition:c1c2c3}
\end{eqnarray}
The piecewise linear function $\widetilde{B}_{i,2}$ is exactly a neural network with one hidden layer consisting of three nodes.
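In fact, for the base case the three-node network reproduces the linear B-spline exactly; a quick numerical check, assuming the standard piecewise formula for the order-2 (hat-function) B-spline on knots $t_i<t_{i+1}<t_{i+2}$ (variable names are ours):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def B_tilde_2(x, ti, ti1, ti2):
    # the one-hidden-layer, three-node ReLU net with c1, c2, c3 as in the text
    c1 = 1.0 / (ti1 - ti)
    c2 = -(ti2 - ti) / (ti2 - ti1) * c1
    c3 = -(ti2 - ti + 1) * c1 - (ti2 - ti1 + 1) * c2
    return c1 * relu(x - ti) + c2 * relu(x - ti1) + c3 * relu(x - ti2)

def B_hat(x, ti, ti1, ti2):
    # the order-2 (piecewise-linear) B-spline on knots ti < ti1 < ti2
    return np.where((x >= ti) & (x <= ti1), (x - ti) / (ti1 - ti),
           np.where((x > ti1) & (x <= ti2), (ti2 - x) / (ti2 - ti1), 0.0))

x = np.linspace(-0.5, 1.5, 401)
err = np.max(np.abs(B_tilde_2(x, 0.0, 0.4, 1.0) - B_hat(x, 0.0, 0.4, 1.0)))
```

The agreement is exact up to floating-point error: the choice of $c_3$ makes the total slope $c_1+c_2+c_3$ vanish beyond $t_{i+2}$, so only the subsequent induction steps introduce approximation error.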
Suppose that we have
constructed $\widetilde{B}_{i,s}(x)$, a neural network approximation of $B_{i,s}$.
Next we will approximate $B_{i,s+1}$. For $-k+1\leq i\leq M+k-s-1$, define
\begin{eqnarray*}
\widetilde{a}_{i,s}(x)=\begin{cases}
0, & \textrm{if } x<t_i\\
\frac{x-t_i}{t_{i+s}-t_i}, &\textrm{if } t_i\leq x \leq t_{i+s}\\
1,&\textrm{if } x> t_{i+s}\\
\end{cases},
\;\;\;\widetilde{b}_{i,s}(x)=\begin{cases}
1, & \textrm{if } x<t_{i+1}\\
\frac{t_{i+s+1}-x}{t_{i+s+1}-t_{i+1}}, &\textrm{if } t_{i+1}\leq x \leq t_{i+s+1}\\
0, &\textrm{if } x> t_{i+s+1}\\
\end{cases}.
\end{eqnarray*}
In terms of the ReLU activation function, we can rewrite the above as
$\widetilde{a}_{i,s}(x)=\frac{1}{t_{i+s}-t_i}\sigma(x-t_i)-\frac{1}{t_{i+s}-t_i}\sigma(x-t_{i+s})$ and $\widetilde{b}_{i,s}(x)=-\frac{1}{t_{i+s+1}-t_{i+1}}\sigma(x-t_{i+1})+\frac{1}{t_{i+s+1}-t_{i+1}}\sigma(x-t_{i+s+1})+1$, which implies that $\widetilde{a}_{i,s}$ and $\widetilde{b}_{i,s}$
are exactly neural networks with one hidden layer consisting of two nodes (see Figure \ref{figure:tildeBi2:to:tildeBi3}).
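The ReLU rewrites can be verified directly against the piecewise definitions; a short check (note that both $\sigma$-terms of $\widetilde{b}_{i,s}$ are scaled by $1/(t_{i+s+1}-t_{i+1})$; knot values below are arbitrary):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def a_piecewise(x, ti, tis):
    # clipped up-ramp: 0 below ti, linear on [ti, tis], 1 above tis
    return np.clip((x - ti) / (tis - ti), 0.0, 1.0)

def a_relu(x, ti, tis):
    # two-node ReLU net equal to a~_{i,s}
    return (relu(x - ti) - relu(x - tis)) / (tis - ti)

def b_piecewise(x, ti1, tis1):
    # clipped down-ramp: 1 below ti1, linear on [ti1, tis1], 0 above tis1
    return np.clip((tis1 - x) / (tis1 - ti1), 0.0, 1.0)

def b_relu(x, ti1, tis1):
    # two-node ReLU net equal to b~_{i,s}
    return 1.0 - (relu(x - ti1) - relu(x - tis1)) / (tis1 - ti1)

x = np.linspace(-1.0, 3.0, 801)
err_a = np.max(np.abs(a_relu(x, 0.0, 1.2) - a_piecewise(x, 0.0, 1.2)))
err_b = np.max(np.abs(b_relu(x, 0.3, 2.0) - b_piecewise(x, 0.3, 2.0)))
```

Both identities hold exactly, so the two-node nets contribute no approximation error of their own.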
For $i=-k+1,\ldots, M+k-s-2$, define
\begin{eqnarray*}
\widetilde{B}_{i, s+1}(x)=\frac{\xmark_2(\widetilde{a}_{i,s}(x), \widetilde{B}_{i,s}(x))+\xmark_2(\widetilde{b}_{i,s}(x), \widetilde{B}_{i+1,s}(x))+2\times 4^{-m+1}+\frac{8^{s}}{7}4^{-m}}{1+4\times 4^{-m+1}+\frac{8^{s}}{14}4^{-m+1}}, x\in[0,1].
\end{eqnarray*}
The seemingly strange normalizing constant forces $\widetilde{B}_{i, s+1}(x)$ to take values in $[0,1]$.
We repeat the above steps until we reach the construction of $\widetilde{B}_{i, k}$
(see Figure \ref{figure:tildeBi2:to:tildeBi3} for an illustration of such induction).
We then approximate ${B}_{i,k}$ by $\widetilde{B}_{i,k}$.
Note that $\widetilde{B}_{i,k}$ has $(2m+4)(k-2)+1$ hidden layers and $8(M+k-3)$ nodes on each hidden layer.
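For reference, the exact B-splines being approximated can be generated by the same induction with exact products in place of $\xmark_2$ — the Cox--de Boor recursion. A sketch, checked against known values of the order-3 cardinal B-spline (function and variable names are ours):

```python
import numpy as np

def bspline(x, knots, i, order):
    # order-s B-spline B_{i,s} on the given knot sequence, via Cox-de Boor recursion
    x = np.asarray(x, float)
    if order == 1:
        return np.where((knots[i] <= x) & (x < knots[i + 1]), 1.0, 0.0)
    s = order - 1
    a = (x - knots[i]) / (knots[i + s] - knots[i])
    b = (knots[i + s + 1] - x) / (knots[i + s + 1] - knots[i + 1])
    return a * bspline(x, knots, i, s) + b * bspline(x, knots, i + 1, s)

knots = np.arange(-2.0, 6.0)  # cardinal knots with spacing 1
# order-3 (quadratic) cardinal B-spline supported on [0, 3] (array index 2 here)
vals = bspline(np.array([0.5, 1.0, 1.5]), knots, 2, 3)  # -> 0.125, 0.5, 0.75
```

The values $0.125$, $0.5$, and $0.75$ at $x=0.5,1,1.5$ are the standard quadratic cardinal B-spline values, confirming the recursion.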
\begin{figure}[ht!]
\centering
\subfigure[]{\includegraphics[width=2 in, height=1.5 in]{p1.pdf}}
\hspace{1cm}
\subfigure[]{\includegraphics[width=2 in, height=1.5 in]{p2.pdf}}
\vspace{0.5cm}
\subfigure[]{\includegraphics[width=2 in, height=1.5 in]{p3.pdf}}
\hspace{1cm}
\subfigure[]{\includegraphics[width=2 in, height=1.5 in]{p4.pdf}}
\caption{Construction of $\widetilde{B}_{i,3}$ through induction.
(a) and (b) demonstrate the architectures of the networks $\widetilde{a}_{i,2}$ and $\widetilde{b}_{i,2}$.
(c) demonstrates the architecture of the network $\widetilde{B}_{i,2}$ with $c_1, c_2, c_3$ defined in (\ref{eq:definition:c1c2c3}).
(d) demonstrates the induction relationship between $\widetilde{B}_{i,3}$ and $\widetilde{B}_{i,2}$.
}
\label{figure:tildeBi2:to:tildeBi3}
\end{figure}
We next approximate the tensor product B-spline basis $D_{\mathbf{i}}(\mathbf{x})=\prod_{j=1}^dB_{i_j,k}(x_j)$ by
\begin{eqnarray*}
\widetilde{D}_{\mathbf{i}, k}(\mathbf{x})=\xmark_d(\widetilde{B}_{i_1,k}(x_1),\widetilde{B}_{i_2,k}(x_2),\ldots, \widetilde{B}_{i_d,k}(x_d)), \textrm{ for each } \mathbf{i}=(i_1,\ldots, i_d) \in \Gamma.
\end{eqnarray*}
Note that
$\widetilde{D}_{\mathbf{i}, k}$ has $(2m+3)(k+d-3)+k-1$ hidden layers each consisting of $(d+7)(M+2k-3)^d$ nodes.
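A sketch of the tensor-product basis $D_{\mathbf{i}}$ itself (with exact multiplication in place of $\xmark_d$), assuming cardinal knots with spacing $h=1/M$; the partition-of-unity property $\sum_{\mathbf{i}} D_{\mathbf{i}}(\mathbf{x})=1$ then holds on $[0,1]^d$:

```python
import numpy as np

def bspline(x, knots, i, order):
    # Cox-de Boor recursion for the order-s B-spline B_{i,s}
    x = np.asarray(x, float)
    if order == 1:
        return np.where((knots[i] <= x) & (x < knots[i + 1]), 1.0, 0.0)
    s = order - 1
    a = (x - knots[i]) / (knots[i + s] - knots[i])
    b = (knots[i + s + 1] - x) / (knots[i + s + 1] - knots[i + 1])
    return a * bspline(x, knots, i, s) + b * bspline(x, knots, i + 1, s)

k, M = 3, 8
h = 1.0 / M
knots = h * np.arange(-(k - 1), M + k)  # t_{-k+1}, ..., t_{M+k-1}, spacing h

def D(xvec, ivec):
    # tensor-product basis: product of univariate order-k B-splines
    return np.prod([bspline(xj, knots, ij, k) for xj, ij in zip(xvec, ivec)])

# partition of unity at an interior point of [0,1]^2 (d = 2, M+k-1 splines per axis)
x0 = (0.37, 0.61)
total = sum(D(x0, (i1, i2))
            for i1 in range(M + k - 1) for i2 in range(M + k - 1))
```

The sum over all $(M+k-1)^2$ tensor-product basis functions equals $1$ at interior points, which is the property underlying the boundedness of the basis used in the proofs below.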
Finally, parallelizing $\widetilde{D}_{\mathbf{i}, k}(\mathbf{x})$, $\mathbf{i} \in \Gamma$, according to (\ref{eq:pilot:estimator}),
we construct $\widehat{f}_{\textrm{net}}$ as
\begin{eqnarray}
\widehat{f}_{\textrm{net}}(\mathbf{x})=\sum_{\mathbf{i}\in \Gamma}\widehat{b}_{\mathbf{i}}\widetilde{D}_{\mathbf{i}, k}(\mathbf{x}),\,\,\,\,
\mathbf{x}\in\Omega.\label{eq:optimal:DNN:estimator}
\end{eqnarray}
Comparing (\ref{eq:pilot:estimator}) with (\ref{eq:optimal:DNN:estimator}), if
we can show that ${D}_{\mathbf{i}, k}$ and $\widetilde{D}_{\mathbf{i},k}$ are close enough,
then one can expect $\widehat{f}_{\text{net}}$ to perform similarly to $\widehat{f}_{\textrm{pilot}}$.
A rich class of statistical results in the literature enables an efficient analysis of $\widehat{f}_{\textrm{pilot}}$.
In the rest of our analysis, we focus on cardinal B-splines for convenience.
\begin{Assumption}\label{Assumption:A1}
\label{A1:c}
The knots $\{t_i, i=-k+1,\ldots, M+k-1\}$ have constant separation $h=M^{-1}$.
\end{Assumption}
\begin{remark}
Assumption \ref{Assumption:A1} can be relaxed to $\max_{i}(t_{i+1}-t_{i})/\min_{i}(t_{i+1}-t_{i})\leq c$ for some constant $c>0$,
in which case one redefines the separation as
$h=\max_{i}(t_{i+1}-t_{i})$; the results in this section continue to hold.
This is a standard assumption in the B-spline literature; see \cite{h98}.
\end{remark}
The following Theorem \ref{thm:approximation:sieve:DNN}
is the main technical result of this paper,
based on which Theorems \ref{thm:neural:estimator:rate:of:convergence:main:text},
\ref{thm:rate:convergence:additive:main:text}, \ref{thm:asymptotic:normality:neural:main:text}
and \ref{thm:optimal:test:neural:main:text} will be proved.
\begin{theorem}\label{thm:approximation:sieve:DNN}
For fixed positive integer $m$,
$\widehat{f}_{\textrm{net}}\in \mathcal{F}(L,\mathbf{p}(T))$ with $L=(2m+3)(k+d-1)+1$ and $T=3(M+2k)^d$. Under Assumptions \ref{A0} and \ref{Assumption:A1},
if $k>\beta$ and $F>0$, then it holds that
\begin{eqnarray*}
\sup_{f_0\in\Lambda^\beta(F, \Omega)}\mathbb{E}_{f_0}\left\{\sup_{\mathbf{x} \in \Omega}|\widehat{f}_{\text{net}}(\mathbf{x})-\widehat{f}_{\textrm{pilot}}(\mathbf{x})|^2
\bigg|\mathbb{X}\right\}=O_P(h^{-2d}4^{-2m}).
\end{eqnarray*}
\end{theorem}
Theorem \ref{thm:approximation:sieve:DNN} says that $\widehat{f}_{\text{net}}$
is a neural network with $L=(2m+3)(k+d-1)+1$ hidden layers, each consisting of at most $T=3(M+2k)^d$ nodes.
The theorem also provides an explicit upper bound in terms of $(h,d,m)$ for the difference between $\widehat{f}_{\textrm{net}}$ and $\widehat{f}_{\textrm{pilot}}$.
The proof of Theorem \ref{thm:approximation:sieve:DNN} relies on the following Lemmas \ref{lemma:approximation:b:spline:d:1},
\ref{lemma:approximation:b:spline:d:d}, \ref{lemma:spline:approximation} and \ref{lemma:difference:c:hat:c:0:main:text}.
Let $\mathbf{i}_1, \mathbf{i}_2,\ldots, \mathbf{i}_q$ be the elements of $\Gamma$.
Define
\begin{eqnarray*}
\mathbf{B}_k(x)&=&(B_{-k+1,k}(x), B_{-k+2,k}(x),\ldots, B_{0,k}(x),B_{1,k}(x),\ldots, B_{M-1,k}(x))^T\in \mathbb{R}^{M+k-1},\\
\mathbf{D}_{k}(\mathbf{x})&=&(D_{\mathbf{i}_1, k}(\mathbf{x}),D_{\mathbf{i}_2, k}(\mathbf{x}),\ldots, D_{\mathbf{i}_q, k}(\mathbf{x}))^T\in \mathbb{R}^{q},\\
\widetilde{\mathbf{B}}_k(x)&=&(\widetilde{B}_{-k+1,k}(x), \widetilde{B}_{-k+2,k}(x),\ldots, \widetilde{B}_{M-1,k}(x))^T\in \mathbb{R}^{M+k-1},\\
\widetilde{\mathbf{D}}_k(\mathbf{x})&=&(\widetilde{D}_{\mathbf{i}_1,k}(\mathbf{x}), \widetilde{D}_{\mathbf{i}_2,k}(\mathbf{x}),\ldots, \widetilde{D}_{\mathbf{i}_q,k}(\mathbf{x}))^T\in \mathbb{R}^q.
\end{eqnarray*}
Lemma \ref{lemma:approximation:b:spline:d:1} quantifies the differences between $\widetilde{\mathbf{B}}_k(\cdot)$ and $\mathbf{B}_k(\cdot)$.
For convenience, for $L,p_0,\ldots,p_{L+1}\in\mathbb{N}$, let $\mathcal{N}\mathcal{N}(L,(p_0,p_1,\ldots,p_L,p_{L+1}))$
denote the class of $p_0$-input-$p_{L+1}$-output ReLU neural network functions of $L$ hidden layers,
with the $j$th layer consisting of $p_j$ nodes, for $j=1,\ldots,L$. For any $v=(v_1, \ldots, v_p)\in \mathbb{R}^p$, let $\|v\|_\infty=\max_{1\leq i \leq p}|v_i|$.
\begin{lemma}\label{lemma:approximation:b:spline:d:1}
Given integers $k, M\geq 2$ and knots $t_{-k+1}<t_{-k+2}<\ldots< t_0<t_1< \ldots<t_M< t_{M+1} <\ldots< t_{M+k-1}$ such that $t_0=0, t_M=1$,
there exists a $\widetilde{\mathbf{B}}_k\in \mathcal{N}\mathcal{N}(k(2m+3), (1, 3(M+2k), \ldots, 3(M+2k), M+k-1))$
taking values in $[0,1]$, such that
\begin{eqnarray*}
\sup_{x\in[0,1]}\|\widetilde{\mathbf{B}}_k(x)-\mathbf{B}_{k}(x)\|_\infty\leq \frac{8^{k}}{14}4^{-m}.
\end{eqnarray*}
\end{lemma}
The proof of Lemma \ref{lemma:approximation:b:spline:d:1} is given in Appendix A. Based on Lemma \ref{lemma:approximation:b:spline:d:1}, we can bound the approximation error between $\widetilde{\mathbf{D}}_k$ and $\mathbf{D}_k$, which is summarized in Lemma \ref{lemma:approximation:b:spline:d:d}.
\begin{lemma}\label{lemma:approximation:b:spline:d:d}
Given integers $k, M\geq 2$ and knots $t_{-k+1}<t_{-k+2}<\ldots< t_0<t_1< \ldots<t_M< t_{M+1} <\ldots< t_{M+k-1}$ with
$t_0=0, t_M=1$, there exists a
$\widetilde{\mathbf{D}}_k\in \mathcal{N}\mathcal{N}((2m+3)(k+d-1), (d, 3(M+2k)^d, \ldots, 3(M+2k)^d, (M+k-1)^d))$ such that
\begin{eqnarray*}
\bigg\|\widetilde{\mathbf{D}}_k(\mathbf{x})-\mathbf{D}_{k}(\mathbf{x})\bigg\|_{\infty}\leq [4(d-1)+8^k]4^{-m},\;\textrm{ for all } \mathbf{x} \in \Omega.
\end{eqnarray*}
Furthermore, each element of $\widetilde{\mathbf{D}}_k$ is in $[0,1]$.
\end{lemma}
{\it Proof of Lemma \ref{lemma:approximation:b:spline:d:d}.}
Let $\widetilde{\mathbf{B}}_k(x_1), \widetilde{\mathbf{B}}_k(x_2), \ldots, \widetilde{\mathbf{B}}_k(x_d)$ be the neural networks provided
in Lemma \ref{lemma:approximation:b:spline:d:1}, which satisfy $|\widetilde{B}_{i, k}(x)-{B}_{i, k}(x)|\leq \delta_m$, where
$\delta_m=8^k4^{-m}/14$. For each $(i_1, i_2, \ldots, i_d)\in \{-k+1, -k+2, \ldots, 1,2, \ldots, M-1\}^d$, we apply the
product network $\xmark_d$ given in Proposition \ref{proposition:appriximation:product:k} to
$(\widetilde{B}_{i_1,k}(x_1), \widetilde{B}_{i_2,k}(x_2),\ldots, \widetilde{B}_{i_d,k}(x_d))$. According to Proposition \ref{proposition:appriximation:product:k}, we have
\begin{eqnarray*}
\bigg|\xmark_d(\widetilde{B}_{i_1,k}(x_1), \widetilde{B}_{i_2,k}(x_2),\ldots, \widetilde{B}_{i_d,k}(x_d))-\prod_{j=1}^d B_{i_j,k}(x_j)\bigg|&\leq& (d-1)4^{-m+1}+d\delta_m\\
&\leq& [4(d-1)+8^k]4^{-m}.
\end{eqnarray*}
Now we deploy the networks $\xmark_d(\widetilde{B}_{i_1,k}(x_1), \widetilde{B}_{i_2,k}(x_2),\ldots, \widetilde{B}_{i_d,k}(x_d))$ in parallel to construct the network $\widetilde{\mathbf{D}}_k$. Since the network $\xmark_d$ is applied to the output of $\widetilde{\mathbf{B}}_k$, the total number of hidden layers is at most $k(2m+3)+1+(d-1)(2m+3)-1=(2m+3)(d+k-1)$. Moreover, the number of nodes in each hidden layer is not greater than the number of nodes in the
output layer, which is further bounded by $3(M+2k)^d$. This completes the proof.
The following Lemma \ref{lemma:spline:approximation}
is a consequence of \cite[Theorem 15.1 and Theorem 15.2]{gkkw06} and \cite[Theorem 12.8 and (13.69)]{s07}; it
quantifies the approximation error
of tensor-product B-splines.
\begin{lemma}\label{lemma:spline:approximation}
Suppose that Assumption \ref{Assumption:A1} is satisfied.
For any $f\in \Lambda^\beta(F, \Omega)$ and any integer $k\geq \beta$, there exists a real sequence $(c_{\mathbf{i}})_{\mathbf{i}\in\Gamma}$ such that
$\sup_{\mathbf{x} \in \Omega}\bigg|\sum_{\mathbf{i}\in \Gamma}c_{\mathbf{i}} D_{\mathbf{i}}(\mathbf{x})-f(\mathbf{x})\bigg|\leq A_f h^\beta$,
where $A_f>0$ depends only on the partial derivatives of $f$ up to order $k$. Moreover, the sequence satisfies $|c_{\mathbf{i}}|\leq A_f$, and $\sup_{f\in \Lambda^\beta(F, \Omega)}A_f<\infty$, where the upper bound depends only on $F$ and $\beta$.
\end{lemma}
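The $h^\beta$ rate of Lemma \ref{lemma:spline:approximation} is easy to see numerically; a toy illustration with a smooth target, noiseless least-squares fitting on cardinal quadratic splines, and our own helper names (not the paper's estimator):

```python
import numpy as np

def bspline(x, knots, i, order):
    # Cox-de Boor recursion for the order-s B-spline B_{i,s}
    x = np.asarray(x, float)
    if order == 1:
        return np.where((knots[i] <= x) & (x < knots[i + 1]), 1.0, 0.0)
    s = order - 1
    a = (x - knots[i]) / (knots[i + s] - knots[i])
    b = (knots[i + s + 1] - x) / (knots[i + s + 1] - knots[i + 1])
    return a * bspline(x, knots, i, s) + b * bspline(x, knots, i + 1, s)

def sup_error(f, M, k=3):
    # sup-norm error of the least-squares spline fit with knot spacing h = 1/M
    h = 1.0 / M
    knots = h * np.arange(-(k - 1), M + k)
    x = np.linspace(0.0, 1.0 - 1e-9, 2000)
    Phi = np.column_stack([bspline(x, knots, i, k) for i in range(M + k - 1)])
    c, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)
    return np.max(np.abs(Phi @ c - f(x)))

f = lambda x: np.sin(2 * np.pi * x)
e8, e16 = sup_error(f, 8), sup_error(f, 16)  # roughly an 8-fold drop for k = 3
```

Halving $h$ shrinks the sup-norm error by roughly $2^k=8$, matching the $h^\beta$ bound with $\beta \le k = 3$ for this smooth target.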
Define $\Phi=(\mathbf{D}_k(\mathbf{x}_1),\ldots, \mathbf{D}_k(\mathbf{x}_n))^T\in \mathbb{R}^{n\times q}$ and $\mathbf{Y}=(Y_1,\ldots, Y_n)^T$.
The following Lemma \ref{lemma:difference:c:hat:c:0:main:text}
quantifies the magnitude of $\widehat{C}=(\Phi^T\Phi)^{-1}\Phi^T\mathbf{Y}$;
recall that $\widehat{C}$ is the solution to the least squares problem (\ref{LSE:eqn}).
Its proof is provided in Appendix B.
\begin{lemma}\label{lemma:difference:c:hat:c:0:main:text}
Under Assumptions \ref{A0} and \ref{Assumption:A1},
if $h=o(1)$ and $\log(h^{-1})=o(nh^d)$, then
\[
\sup_{f_0\in\Lambda^\beta(F,\Omega)}
\mathbb{E}_{f_0}\left(\widehat{C}^T\widehat{C}\big|\mathbb{X}\right)=O_P(h^{-d}).
\]
\end{lemma}
We are now ready to prove Theorem \ref{thm:approximation:sieve:DNN}.
{\it Proof of Theorem \ref{thm:approximation:sieve:DNN}.}
For any $f_0\in\Lambda^\beta(F, \Omega)$, let $\mathbf{f}_0=(f_0(\mathbf{x}_1),\ldots,f_0(\mathbf{x}_n))^T$.
Also let
$\bm{\epsilon}=(\epsilon_1,\ldots,\epsilon_n)^T$,
$\widehat{\mathbf{f}}_{\textrm{pilot}}=(\widehat{f}_{\textrm{pilot}}(\mathbf{x}_1),
\ldots,\widehat{f}_{\textrm{pilot}}(\mathbf{x}_n))^T$.
According to Lemma \ref{lemma:spline:approximation} and since $k\geq \beta$,
there exists a $C=(c_1, c_2,\ldots, c_q)^T\in \mathbb{R}^q$
such that for any $\mathbf{x}\in\Omega$,
$|C^T\mathbf{D}_k(\mathbf{x})-f_0(\mathbf{x})|\leq A_{f_0}h^\beta$. By the least squares problem (\ref{LSE:eqn}), we have
\begin{eqnarray*}
\widehat{\mathbf{f}}_{\textrm{pilot}}=\Phi(\Phi^T \Phi)^{-1}\Phi^T \mathbf{Y}&=&\Phi(\Phi^T \Phi)^{-1}\Phi^T (\Phi C+\mathbf{E}+\bm{\epsilon})\\
&=&\Phi C+\Phi(\Phi^T \Phi)^{-1}\Phi^T \mathbf{E}+\Phi(\Phi^T \Phi)^{-1}\Phi^T \bm{\epsilon}\\
&=&\mathbf{f}_0-(I-\Phi(\Phi^T \Phi)^{-1}\Phi^T)\mathbf{E}+\Phi(\Phi^T \Phi)^{-1}\Phi^T \bm{\epsilon},
\end{eqnarray*}
where $\mathbf{E}=(E_1, E_2, \ldots, E_n)^T\in \mathbb{R}^n$ with $E_i=f_0(\mathbf{x}_i)-C^T\mathbf{D}_k(\mathbf{x}_i)$.
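The display above is a standard projection decomposition; it can be checked numerically with synthetic quantities (all data below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 50, 6
Phi = rng.standard_normal((n, q))          # design matrix of basis evaluations
C = rng.standard_normal(q)                 # spline coefficients
E = 0.01 * rng.standard_normal(n)          # approximation residual, f0 = Phi C + E
eps = rng.standard_normal(n)               # noise
f0 = Phi @ C + E
Y = f0 + eps

P = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)   # projection onto col(Phi)
fit = P @ Y                                      # Phi (Phi^T Phi)^{-1} Phi^T Y
decomp = f0 - (np.eye(n) - P) @ E + P @ eps      # claimed decomposition
```

Since $P\Phi C = \Phi C = \mathbf{f}_0 - \mathbf{E}$, the two expressions agree to machine precision.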
It follows from (\ref{eq:pilot:estimator}), (\ref{eq:optimal:DNN:estimator}) and (\ref{LSE:eqn}) that
$\widehat{f}_{\textrm{pilot}}(\mathbf{x})=\widehat{C}^T\mathbf{D}_k(\mathbf{x})$ and $\widehat{f}_{\textrm{net}}(\mathbf{x})=\widehat{C}^T\widetilde{\mathbf{D}}_k(\mathbf{x})$.
Therefore, for any $\mathbf{x}\in\Omega$, we have
\begin{eqnarray*}
|\widehat{f}_{\textrm{pilot}}(\mathbf{x})-\widehat{f}_{\textrm{net}}(\mathbf{x})|^2&=&\big\|\widehat{C}^T\left(\mathbf{D}_k(\mathbf{x})-\widetilde{\mathbf{D}}_k(\mathbf{x})\right)\big\|_2^2\\
&\leq& \widehat{C}^T\widehat{C}\left(\mathbf{D}_k(\mathbf{x})-\widetilde{\mathbf{D}}_k(\mathbf{x})\right)^T\left(\mathbf{D}_k(\mathbf{x})-\widetilde{\mathbf{D}}_k(\mathbf{x})\right) \quad \textrm{(Cauchy--Schwarz)}\\
&\leq&q\widehat{C}^T\widehat{C} \sup_{\mathbf{x} \in [0,1]^d}\big\|\mathbf{D}_k(\mathbf{x})-\widetilde{\mathbf{D}}_k(\mathbf{x})\big\|_{\infty}^2
\le q\widehat{C}^T\widehat{C}[4(d-1)+8^k]^24^{-2m},
\end{eqnarray*}
where the last inequality follows from Lemma \ref{lemma:approximation:b:spline:d:d}.
By Lemma \ref{lemma:difference:c:hat:c:0:main:text} and the fact that $q\asymp h^{-d}$, we have
\begin{eqnarray*}
\sup_{f_0\in\Lambda^\beta(F,\Omega)}
\mathbb{E}_{f_0}\bigg(\sup_{\mathbf{x} \in \Omega}|\widehat{f}_{\textrm{pilot}}(\mathbf{x})-\widehat{f}_{\textrm{net}}(\mathbf{x})|^2 \bigg|\mathbb{X}\bigg)&\leq&
q[4(d-1)+8^k]^2 4^{-2m}\sup_{f_0\in\Lambda^\beta(F,\Omega)}\mathbb{E}_{f_0}\left(\widehat{C}^T\widehat{C}\big|\mathbb{X}\right)\\
&=&O_P(h^{-2d}4^{-2m}),
\end{eqnarray*}
which completes the proof.
Theorem \ref{thm:neural:estimator:rate:of:convergence:main:text} is a simple consequence of Theorem \ref{thm:approximation:sieve:DNN}
and Lemma \ref{lemma:rate:of:convergence:sieve} below, with $h\asymp T^{-1/d}$ and $m\asymp \frac{L}{3(k+d)}$.
The proof of Lemma \ref{lemma:rate:of:convergence:sieve} is deferred to Appendix B.
\begin{lemma}\label{lemma:rate:of:convergence:sieve}
Under Assumptions \ref{A0} and \ref{Assumption:A1}, if $h=o(1)$ and $\log(h^{-1})=o(nh^d)$, then
\begin{eqnarray*}
\sup_{f_0\in\Lambda^\beta(F,\Omega)}
\mathbb{E}_{f_0}\bigg\{\|\widehat{f}_{\textrm{pilot}}-f_0\|_{L^2}^2\bigg|\mathbb{X}\bigg\}=O_P\bigg(h^{2\beta}+\frac{1}{nh^d}\bigg).
\end{eqnarray*}
\end{lemma}
\section{Asymptotic Distribution and Optimal Testing}\label{sec:byproducts}
In this section,
we derive asymptotic distributions for $\widehat{f}_{\textrm{net}}$ and a
corresponding hypothesis testing procedure.
The results are simply byproducts of Theorem \ref{thm:approximation:sieve:DNN}.
Theorem \ref{thm:asymptotic:normality:neural:main:text} below establishes a pointwise asymptotic distribution for
$\widehat{f}_{\textrm{net}}(\mathbf{x})$ for any $\mathbf{x} \in \Omega$. The result is a direct consequence of Theorem \ref{thm:approximation:sieve:DNN}
and the asymptotic distribution of $\widehat{f}_{\textrm{pilot}}(\mathbf{x})$.
\begin{theorem}\label{thm:asymptotic:normality:neural:main:text}
Under Assumptions \ref{A0} and \ref{Assumption:A1}, if $hn^{\frac{1}{2\beta+d}}=o(1)$, $\log(h^{-1})=o(nh^d)$ and $n^{1/2}h^{-d/2}=o(4^{m})$, then
for any $\mathbf{x} \in \Omega$, we have
\begin{eqnarray*}
\frac{\widehat{f}_{\textrm{net}}(\mathbf{x})-f_0(\mathbf{x})}{\sqrt{\mathbf{D}^T_k(\mathbf{x})(\Phi^T\Phi)^{-1}\mathbf{D}_k(\mathbf{x})}}\xrightarrow[]{D}N(0,1),
\end{eqnarray*}
where $\Phi=(\mathbf{D}_k(\mathbf{x}_1),\mathbf{D}_k(\mathbf{x}_2),\ldots, \mathbf{D}_k(\mathbf{x}_n))^T\in \mathbb{R}^{n\times q}$.
\end{theorem}
{\it Proof of Theorem \ref{thm:asymptotic:normality:neural:main:text}.}
For fixed $\mathbf{x}\in [0,1]^d$, let $V(\mathbf{x})=\mathbf{D}^T_k(\mathbf{x})(\Phi^T\Phi)^{-1}\mathbf{D}_k(\mathbf{x})$.
By \cite[Theorems 3.1 and 5.2]{h03}, it follows that
\begin{eqnarray}\label{thm11:eqn:1}
\frac{\widehat{f}_{\textrm{pilot}}(\mathbf{x})-f_0(\mathbf{x})}{\sqrt{V(\mathbf{x})}}\xrightarrow[]{D}N(0,1).
\end{eqnarray}
By Lemma \ref{lemma:empirical:eigen:value} in Appendix B, with probability approaching 1, we have
$V(\mathbf{x})\geq \frac{2}{a_1nh^d}\mathbf{D}_k(\mathbf{x})^T\mathbf{D}_k(\mathbf{x})\geq \frac{2b^2}{a_1nh^d}$.
By the proof of Theorem \ref{thm:approximation:sieve:DNN} we get that
$|\widehat{f}_{\textrm{pilot}}(\mathbf{x})-\widehat{f}_{\textrm{net}}(\mathbf{x})|^2=O_P(h^{-2d}4^{-2m})$.
Therefore,
\begin{equation}\label{thm11:eqn:2}
\frac{\widehat{f}_{\textrm{pilot}}(\mathbf{x})-\widehat{f}_{\textrm{net}}(\mathbf{x})}{\sqrt{V(\mathbf{x})}}=O_P(n^{1/2}h^{-d/2}4^{-m})=o_P(1).
\end{equation}
Theorem \ref{thm:asymptotic:normality:neural:main:text} follows from (\ref{thm11:eqn:1}) and (\ref{thm11:eqn:2}). This completes the proof.
In what follows, we consider the hypothesis testing problem $H_0: f_0=0$ vs.\ $H_1: f_0\neq0$.
Consider a test statistic $T_n=\|\widehat{f}_{\textrm{net}}\|_n^2$, where $\|f\|_n^2=\sum_{i=1}^n f(\mathbf{x}_i)^2/n$
is the empirical norm.
The following Theorem \ref{thm:optimal:test:neural:main:text}
derives the null distribution of $T_n$ and analyzes its power under a sequence of local alternatives.
Again, this result is a byproduct of Theorem \ref{thm:approximation:sieve:DNN}.
\begin{theorem}\label{thm:optimal:test:neural:main:text}
Suppose that $n^{\frac{4\beta+2d}{4\beta+d}}=O(4^m)$ and $h\asymp n^{-\frac{2}{4\beta+d}}$. Then the following hold:
\begin{enumerate}
\item Under $H_0: f_0=0$, it follows that
\begin{eqnarray}\label{thm:11:1}
\frac{nT_n-q}{\sqrt{2q}}\xrightarrow[]{D}N(0,1).
\end{eqnarray}
\item For any $\delta>0$, there exists a $C_\delta>0$ such that, under $H_1$ with
$\|f_0\|_n\geq C_\delta n^{-\frac{2\beta}{4\beta+d}}$, it holds that
\begin{eqnarray}\label{thm:11:2}
\mathbb{P}\bigg(\bigg|\frac{nT_n-q}{\sqrt{2q}}\bigg|>z_{\alpha/2}\bigg)\geq 1-\delta,
\end{eqnarray}
where $z_{\alpha/2}$ is the $(1-\alpha/2)$-quantile of the standard normal distribution.
\end{enumerate}
\end{theorem}
Part (\ref{thm:11:1}) of Theorem \ref{thm:optimal:test:neural:main:text} suggests a testing rule at significance level $\alpha$:
reject $H_0$ if and only if
\[
\bigg|\frac{nT_n-q}{\sqrt{2q}}\bigg|\ge z_{\alpha/2}.
\]
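The $(nT_n-q)/\sqrt{2q}$ normalization is the usual chi-square standardization: under $H_0$ with unit-variance Gaussian noise, $nT_n=\|P\bm{\epsilon}\|^2\sim\chi^2_q$ for a rank-$q$ projection $P$. A simulation sketch with a synthetic design ($\sigma=1$ assumed; this is not the paper's spline design):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 400, 25
Phi = rng.standard_normal((n, q))
Q, _ = np.linalg.qr(Phi)                 # orthonormal basis of col(Phi)

reps = 2000
stats = np.empty(reps)
for r in range(reps):
    eps = rng.standard_normal(n)         # Y = eps under H0: f0 = 0
    nTn = np.sum((Q.T @ eps) ** 2)       # n * ||f_hat||_n^2 = ||P eps||^2 ~ chi2_q
    stats[r] = (nTn - q) / np.sqrt(2 * q)

m_hat, v_hat = stats.mean(), stats.var() # approximately 0 and 1, as in the CLT
```

The simulated statistics have mean near $0$ and variance near $1$, consistent with the $N(0,1)$ limit in (\ref{thm:11:1}).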
Part (\ref{thm:11:2}) of Theorem \ref{thm:optimal:test:neural:main:text}
says that the power of $T_n$ is at least $1-\delta$ provided that the
null and alternative hypotheses are separated by $C_\delta n^{-\frac{2\beta}{4\beta+d}}$
in terms of $\|\cdot\|_n$-norm.
The separation rate is optimal in the sense of \cite{Ingster93}.
{\it Proof of Theorem \ref{thm:optimal:test:neural:main:text}.}
Observe that
\begin{eqnarray}\label{eq:them:optimal:test:neural}
\frac{n\|\widehat{f}_{\textrm{net}}\|_n^2-q}{\sqrt{2q}}=\frac{n\|\widehat{f}_{\textrm{pilot}}\|_n^2-q}{\sqrt{2q}}+\frac{n\|\widehat{f}_{\textrm{net}}\|_n^2-n\|\widehat{f}_{\textrm{pilot}}\|_n^2}{\sqrt{2q}}.
\end{eqnarray}
By Theorem \ref{thm:approximation:sieve:DNN}, $\|\widehat{f}_{\textrm{net}}-\widehat{f}_{\textrm{pilot}}\|_n=O_P(h^{-d}4^{-m})$, and by
Lemma \ref{lemma:rate:of:convergence:sieve} (see Appendix B for its proof), $\|\widehat{f}_{\textrm{pilot}}-f_0\|_n=O_P(1)$.
Hence,
\begin{eqnarray*}
|\|\widehat{f}_{\textrm{net}}\|_n^2-\|\widehat{f}_{\textrm{pilot}}\|_n^2|&=&
|\|\widehat{f}_{\textrm{net}}\|_n-\|\widehat{f}_{\textrm{pilot}}\|_n|\times \left(\|\widehat{f}_{\textrm{net}}\|_n+\|\widehat{f}_{\textrm{pilot}}\|_n\right)\\
&\leq&\|\widehat{f}_{\textrm{net}}-\widehat{f}_{\textrm{pilot}}\|_n\times \left(\|\widehat{f}_{\textrm{net}}-\widehat{f}_{\textrm{pilot}}\|_n+2\|\widehat{f}_{\textrm{pilot}}\|_n\right)\\
&\leq&\|\widehat{f}_{\textrm{net}}-\widehat{f}_{\textrm{pilot}}\|_n\times
\left(\|\widehat{f}_{\textrm{net}}-\widehat{f}_{\textrm{pilot}}\|_n+2\|\widehat{f}_{\textrm{pilot}}-f_0\|_n+2\|f_0\|_n\right)\\
&=&\|\widehat{f}_{\textrm{net}}-\widehat{f}_{\textrm{pilot}}\|_n\times O_P(1)\\
&=&O_P(h^{-d}4^{-m}).
\end{eqnarray*}
Therefore, the second term in (\ref{eq:them:optimal:test:neural})
is of order $O_P(nh^{-d}4^{-m}q^{-1/2})=O_P\left(n^{\frac{4\beta+2d}{4\beta+d}}4^{-m}\right)=o_P(1)$,
where we have used the fact $q\asymp h^{-d}$.
The result then follows by Lemma \ref{lemma:optimal:test:sieve} in Appendix D. This completes the proof.
\newpage
| {
"timestamp": "2019-02-06T02:12:29",
"yymm": "1902",
"arxiv_id": "1902.01687",
"language": "en",
"url": "https://arxiv.org/abs/1902.01687",
"abstract": "Deep neural network is a state-of-art method in modern science and technology. Much statistical literature have been devoted to understanding its performance in nonparametric estimation, whereas the results are suboptimal due to a redundant logarithmic sacrifice. In this paper, we show that such log-factors are not necessary. We derive upper bounds for the $L^2$ minimax risk in nonparametric estimation. Sufficient conditions on network architectures are provided such that the upper bounds become optimal (without log-sacrifice). Our proof relies on an explicitly constructed network estimator based on tensor product B-splines. We also derive asymptotic distributions for the constructed network and a relating hypothesis testing procedure. The testing procedure is further proven as minimax optimal under suitable network architectures.",
"subjects": "Machine Learning (cs.LG); Machine Learning (stat.ML)",
"title": "Optimal Nonparametric Inference via Deep Neural Network"
} |
https://arxiv.org/abs/1509.02945 | Low-Dimensional Galerkin Approximations of Nonlinear Delay Differential Equations | This article revisits the approximation problem of systems of nonlinear delay differential equations (DDEs) by a set of ordinary differential equations (ODEs). We work in Hilbert spaces endowed with a natural inner product including a point mass, and introduce polynomials orthogonal with respect to such an inner product that live in the domain of the linear operator associated with the underlying DDE. These polynomials are then used to design a general Galerkin scheme for which we derive rigorous convergence results and show that it can be numerically implemented via simple analytic formulas. The scheme so obtained is applied to three nonlinear DDEs, two autonomous and one forced: (i) a simple DDE with distributed delays whose solutions recall Brownian motion; (ii) a DDE with a discrete delay that exhibits bimodal and chaotic dynamics; and (iii) a periodically forced DDE with two discrete delays arising in climate dynamics. In all three cases, the Galerkin scheme introduced in this article provides a good approximation by low-dimensional ODE systems of the DDE's strange attractor, as well as of the statistical features that characterize its nonlinear dynamics. | \section{Introduction}
Systems of delay differential equations (DDEs) are widely used in many fields such as the biosciences,
climate dynamics, chemistry, control theory, economics, and engineering
\cite{Bhattacharya_al82, Diaz2014, Diekmann_al95, GCStep15, Ghil_Childress'87, GZT08, Hale_Lunel93, Sieber2014, Kuang93, LS10, MacDonald89, Michiels_al07, Roques_al15, Smith11, Stepan89}. In particular, certain DDEs or more general differential equations with retarded arguments can be derived from hyperbolic partial differential equations that support wave propagation \cite{chekroun_glatt-holtz,Galanti_al00,Hale_Lunel93}.
In contrast to ordinary differential equations (ODEs), the state space associated even with a scalar DDE is infinite-dimensional, due to the presence of time-delay terms, which require providing initial data over an interval $ [-\tau,0],$ where $\tau >0$ is the delay.
It is often desirable, though, to have low-dimensional ODE systems that capture qualitative features,
as well as approximating certain quantitative aspects of the DDE dynamics.
The derivation of ODE approximations of DDEs involves, in general, two types of function spaces as state space:
that of continuous functions $C([-\tau, 0]; \mathbb{R}^d)$, and the Hilbert space $L^2([-\tau, 0); \mathbb{R}^d)$.
The former spaces have been extensively used in the case of bifurcation analysis \cite{Casal_al80,Chow_al77,Das_al02,Faria_al95,Kazarinoff_al78,Nayfeh08,wischert1994delay}, while the latter are typically adopted in situations where quantitative accuracy is an important factor, such as in optimal control \cite{Banks_al78,kappel1978autonomous,Banks_al84,Kappel86,Kappel_al87,Kunisch82,Ito_Teglas86,banks1979spline}.
Within the Hilbert space setting, different basis functions have been proposed to decompose
the state space; these include, among others, step functions \cite{Banks_al78,kappel1978autonomous}, splines \cite{banks1979spline,Banks_al84}, and orthogonal polynomial functions, such as Legendre polynomials \cite{Kappel86,Ito_Teglas86}. Compared with step functions or splines, the use of orthogonal polynomials typically leads to ODE approximations of lower dimension, for a given precision \cite{banks1979spline,Ito_Teglas86}. On the other hand, classical polynomial basis functions
do not live in the domain of the linear operator underlying the DDE, which leads to technical complications in establishing convergence results \cite{Kappel86,Ito_Teglas86}; see Remark~\ref{Rmk_problems_to_overcome}(iii) below.
In the present article, we propose to avoid these technical difficulties in
approximating DDEs as systems of ODEs by using an alternative polynomial basis: the elements of this basis belong naturally to the domain of the underlying linear operator, but they have not been used in the DDE literature so far. The polynomials we shall use are named after Koornwinder~\cite{Koo84}, who investigated polynomials that are orthogonal with respect to weight functions adjoining point masses, as discussed in Section~\ref{sect:basis} below. This polynomial basis turns out to be particularly useful not only for the rigorous analysis of polynomial-based Galerkin approximations of nonlinear systems of DDEs, as shown in Section~\ref{Sec_Galerkin_approx}, but also for their numerical treatment, cf. Section~\ref{Sect_Numerics}.
Useful new properties of the Koornwinder polynomials are identified in Lemma~\ref{Fundamental_lemma} for the scalar case, and a generalization of these polynomials to the vector case is given in Section~\ref{Sec_Vectorization}; the latter includes
the multi-dimensional extension of Lemma~\ref{Fundamental_lemma}, namely Lemma~\ref{Super_Fundamental_lemma}.
We show that these properties are essential for checking key stability and convergence conditions in Lemmas~\ref{Lem_A2} and \ref{Lem_A1}. Standard Galerkin approximation results for abstract nonlinear ODEs in Hilbert spaces are recalled in Theorem \ref{ParisVI_thm} and the rest of Section~\ref{Subsect_ODE_Galerkin}. They are then applied, with the help of Lemmas~\ref{Lem_A2} and \ref{Lem_A1}, to nonlinear systems of DDEs in Section~\ref{Subsect_DDE_Galerkin}.
Finite-time uniform convergence results are then derived for the proposed Galerkin approximations of nonlinear systems of DDEs, subject to simple and checkable conditions on the nonlinear term.
These conditions are identified in Section~\ref{Subsect_DDE_Galerkin};
see Corollaries~\ref{Cor_DDE_local_Lip_Case1} and \ref{Cor_DDE_local_Lip_Case2}.
The results apply to a broad class of nonlinear systems of DDEs, as discussed in Section~\ref{Sec_examples}.
The proposed framework yields a simple numerical calculation of the corresponding Galerkin approximations. Their coefficients
are easily computable from the original system of DDEs by relying on simple recurrence formulas, cf.~Proposition~\ref{thm:Pn}, and by solving upper triangular systems of linear equations, cf.~Proposition~\ref{prop:dPn}; see Section \ref{Sec_Galerkin_analytic} and Appendix~\ref{Appendix_systems}.
Finally, we outline here a useful
interpretation of our proposed scheme regarding the finite-dimensional approximation of the linear part $\mathcal{A}$ of general systems of DDEs, when considered in the framework of Hilbert spaces, cf.~\eqref{Def_A2}. This interpretation relies on a formulation of the evolution in time of the
initial state $\{x(\theta): \theta \in [-\tau, 0]\}$ as the solution of a partial differential equation (PDE);
see also Remark \ref{PDE_rem}.
To do so, we first distinguish between the {\it historic part} of the evolving state, $\{x(t + \theta): \theta \in [-\tau, 0)\}$, and the {\it state part}, $\{x(t)\}$. Denoting by $u(t,\theta)$ the historic part, one can then rewrite, for instance, the simple linear DDE
\begin{equation}\label{lin_case}
\dot{x}=x(t-\tau), \quad \tau>0,
\end{equation}
as the linear PDE
\begin{equation}\label{lin_PDE}
\partial_t u = \partial_{\theta}u, \quad -\tau \le \theta < 0,
\end{equation}
with the boundary condition
\begin{equation}\label{PDE_BC}
\partial_{t} u|_{\theta = 0} = u(t,-\tau), \;\; t \geq 0.
\end{equation}
The key point is that, roughly speaking, the local differential operator $v\mapsto \partial_{\theta} v$ --- obtained as the history component of $\mathcal{A}$, and written out explicitly in~Eq.~\eqref{Def_A}, for instance ---
is approximated here by the nonlocal differential operator
\begin{equation}\label{nonlocal_PDE_intro}
v \mapsto \partial_{\theta} v +b_N(\theta)\Big(v(-\tau)- \partial_{\theta} v\big\vert_{\theta=0}\Big), \quad \theta \in[-\tau,0),
\end{equation}
and that $b_N(\theta)$ --- expressed by means of Koornwinder polynomials, cf.~\eqref{bN-coeff} --- is a bounded oscillatory coefficient that vanishes in $L^2$ as $N\rightarrow \infty$; see Lemma \ref{Fundamental_lemma}. This nonlocal operator is the PDE representation for the history component of our Koornwinder-based Galerkin approximation $\mathcal{A}_N$ given in \eqref{Eq_AN}.
Note that terms such as $v(-\tau) - \partial_{\theta} v\big\vert_{\theta=0}$, which is responsible for the nonlocal aspect of
\eqref{nonlocal_PDE_intro}, play an important role in the theory of numerical approximation of DDEs; see, for instance, \cite[Appendix A]{GZT08} and references therein. In particular, this term provides the exact value of the jump
associated with the boundary condition \eqref{PDE_BC}.
The first such jump occurs at $t=0$ in the derivative of solutions of Eq.~\eqref{lin_case} that emanate from a constant history\footnote{\
For instance, if $\tau =1$ and the history
is given by $\{\psi(\theta)\equiv c, \; - \tau \le \theta < 0\}$, then the solution $x(t)$ to Eq.~\eqref{lin_case} is equal to $c(t+1)$ on $[0,1]$. This leads to a jump $\psi(-1)-\partial_{\theta} \psi\big\vert_{\theta=0} =
c$ in its time derivative at $t=0$.}; this discontinuity propagates to higher-order derivatives at subsequent,
integer multiples of the delay, $t=k\tau$, $k\in \mathbb{Z}^+$.
The fact that this jump term is weighted by a vanishing term
suggests that, for a given degree of accuracy, good approximation can be expected
when using relatively low-dimensional Koornwinder-based approximations, as long as $b_N$ vanishes sufficiently quickly. We do not address such numerical considerations here; see, however, Table \ref{table} in Remark \ref{PDE_rem} for results in the case of Eq.~\eqref{lin_case}.
Instead, in Section~\ref{Sect_Numerics}, we
provide several applications showing that the proposed approximation is not only rigorously justified, but also very effective
in nonlinear cases exhibiting quasi-periodic, chaotic, and nearly Brownian dynamics. In each case, low-dimensional
ODE systems succeed in approximating important topological as well as statistical features of the corresponding DDE's nonlinear dynamics.
The article is organized as follows. In Section~\ref{sect:preliminaries}, we introduce the functional framework that will be adopted in Section \ref{Subsect_DDE_Galerkin} to recast a system of nonlinear DDEs into an abstract ODE. This framework relies on Hilbert spaces endowed with a natural inner product involving a point mass. Koornwinder polynomials are then introduced in Section~\ref{sect:basis}. The convergence to the original DDEs of the Galerkin ODE systems obtained by projecting onto these polynomials is proven in Section \ref{Sec_Galerkin_approx}.
We provide explicit expressions of the Galerkin approximation
in Section~\ref{Sec_Galerkin_analytic} for the scalar case, and in Appendix~\ref{Appendix_systems} for nonlinear systems of DDEs. Finally, numerical applications to three nonlinear DDEs
are provided in Section~\ref{Sect_Numerics}. These applications involve: (i) a simple DDE with distributed delays whose solutions recall Brownian motion \cite{sprott2007simple}; (ii) a DDE with a discrete delay that
illustrates bimodal, as well as chaotic dynamics \cite{sprott2007simple}; and (iii) a periodically forced DDE with two discrete delays as a highly idealized model of the El Ni\~no-Southern Oscillation (ENSO: \cite[and references therein]{GZT08}). In all three
examples, it is shown that our Galerkin scheme provides, via low-dimensional ODE systems, a good approximation
of the DDE's strange attractor, as well as of the statistical features that characterize the associated nonlinear dynamics.
\section{Background and motivation} \label{sect:preliminaries}
We introduce in this section the functional framework that will be adopted in Section \ref{Subsect_DDE_Galerkin} for the derivation of Galerkin approximations of a given nonlinear system of DDEs. Several function spaces can serve as the state space for the reformulation of a system of DDEs into an abstract ODE; among the most standard ones, those built out of the space of continuous functions on the interval $[-\tau,0]$ play an important role in DDE theory; see e.g.~\cite{Diekmann_al95,Hale_Lunel93}.
In this article, we adopt instead Hilbert spaces, which are more classically used in the control and approximation theory of DDEs; see e.g., \cite{Banks_al78,burns1983linear,curtain1995,Kappel86,kappel1986equivalence,Kappel_al87,nakagiri1989controllability}.
For a didactic exposition of the associated semigroup theory for (linear) systems of DDEs in this functional setting, we refer to \cite[Sect.~2.4]{curtain1995}; see also \cite{burns1983linear}.
More precisely, the following Hilbert product space
\begin{equation} \label{H_space}
\mathcal{H} := L^2([-\tau,0); \mathbb{R}^d) \times \mathbb{R}^d,
\end{equation}
will serve as our state space, and will be endowed with the inner product defined for any $(f_1, \gamma_1),\, (f_2, \gamma_2) \in \mathcal{H}$, as:
\begin{equation} \label{H_inner}
\langle (f_1, \gamma_1), (f_2, \gamma_2) \rangle_{\mathcal{H}} := \frac{1}{\tau} \int_{-\tau}^0\langle f_1(\theta), f_2(\theta) \rangle \mathrm{d} \theta + \langle \gamma_1,\gamma_2\rangle,
\end{equation}
where $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product of $ \mathbb{R}^d$.
We will also make use of the following subspace of $\mathcal{H}$:
\begin{equation}
\mathcal{V} := H^1([-\tau,0); \mathbb{R}^d) \times \mathbb{R}^d,
\end{equation}
where $H^1([-\tau,0); \mathbb{R}^d)$ denotes the standard Sobolev subspace of $L^2([-\tau,0); \mathbb{R}^d)$; see, e.g.~\cite[Chap.~8]{brezis_book}. This space consists of square-integrable functions whose first-order weak (distributional) derivatives are also square integrable.
Instead of presenting right away the general nonlinear systems of DDEs considered in this article (see Section \ref{Subsect_DDE_Galerkin}), we introduce below a class of scalar DDEs that will serve to identify, within a simple context, the issues inherent to the Galerkin approximation of DDEs; see Remark \ref{Rmk_problems_to_overcome} hereafter.
\begin{ex}\label{Ex_DDE_into_ODE}
In this example, we recall how a scalar DDE can be recast into an abstract ODE.
For simplicity, we will focus on the following autonomous scalar DDE ($d=1$):
\begin{equation} \label{Eq_DDE}
\frac{\mathrm{d} x(t)}{\mathrm{d} t} = a x(t) + bx(t-\tau) + c \int_{t-\tau}^t x(s)\mathrm{d} s + F\Big(x(t), \int_{t-\tau}^t x(s) \mathrm{d} s \Big),
\end{equation}
where $a$, $b$ and $c$ are real numbers, $\tau> 0$ is the delay parameter, and $F$
is a given scalar nonlinear function. The case of nonlinear systems of DDEs will be dealt with in Section \ref{Subsect_DDE_Galerkin}.
The reformulation of Eq.~\eqref{Eq_DDE} into an abstract ODE is classical and proceeds as follows. Let us denote by $x_t$ the history segment at time $t$ of a solution to Eq.~\eqref{Eq_DDE}, namely
\begin{equation} \label{shift}
x_t(\theta):=x(t+\theta), \qquad t \ge 0, \qquad \theta \in [-\tau, 0].
\end{equation}
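As a concrete illustration of the shift \eqref{shift} (a sketch of ours, with a hypothetical trajectory $x(s)=\sin(s)$ standing in for an actual DDE solution):

```python
import numpy as np

# Discretized history segment x_t(theta) = x(t + theta) on a theta-grid;
# the trajectory x(s) = sin(s) and the values tau = 1, t = 2.5 are
# illustrative choices, not taken from the paper.
tau, t = 1.0, 2.5
theta = np.linspace(-tau, 0.0, 11)
x = np.sin                           # stand-in for a solution of the DDE
x_t = x(t + theta)                   # discretized history segment (shift)
assert abs(x_t[-1] - x(t)) < 1e-12   # compatibility: x_t(0) = x(t)
assert abs(x_t[0] - x(t - tau)) < 1e-12
```

The compatibility $x_t(0)=x(t)$ checked above is precisely the continuity requirement that will reappear in the definition of $D(\mathcal{A})$ below.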
Now, by introducing the new variable
\begin{equation}
u(t) := (x_t,x(t))=(x_t, x_t(0)),
\end{equation}
Eq.~\eqref{Eq_DDE} can be rewritten as the following abstract ODE:
\begin{equation} \label{eq:DDE_abs}
\frac{\mathrm{d} u}{\mathrm{d} t} = \mathcal{A} u + \mathcal{F}(u),
\end{equation}
where the linear operator $\mathcal{A} \colon D(\mathcal{A}) \subset \mathcal{V} \rightarrow \mathcal{H}$ is defined by
\begin{equation} \begin{aligned} \label{Def_A}
\lbrack \mathcal{A} \Psi \rbrack (\theta) & := \begin{cases}
{\displaystyle \frac{\mathrm{d}^+ \Psi^D}{\mathrm{d} \theta}}, & \theta \in[-\tau, 0), \vspace{0.4em}\\
{\displaystyle a \Psi^S + b\Psi^D(-\tau) + c \int_{-\tau}^0 \Psi^D(s)\mathrm{d} s}, & \theta = 0,
\end{cases}
\end{aligned} \end{equation}
with the domain $D(\mathcal{A})$ given by (cf. \cite[Prop.~2.6]{Kappel86})
\begin{equation} \label{D_of_A}
D(\mathcal{A}) = \Big \{(\Psi^D, \Psi^S) \in L^2([-\tau, 0); \mathbb{R})\times \mathbb{R} : \Psi^D \in H^1([-\tau, 0); \mathbb{R}), \lim_{\theta \rightarrow 0^-} \Psi^D(\theta) = \Psi^S
\Big \};
\end{equation}
and with the nonlinearity $\mathcal{F} \colon \mathcal{H} \rightarrow \mathcal{H}$ defined by
\begin{equation} \begin{aligned} \label{Def_F}
[\mathcal{F} (\Psi) ](\theta) & := \begin{cases}
0, & \theta \in[-\tau, 0), \vspace{0.4em}\\
F \Big(\Psi^S, \int_{-\tau}^0 \Psi^D(s) \mathrm{d} s \Big), & \theta = 0,
\end{cases} \quad \text{ } \forall \: \Psi = (\Psi^D, \Psi^S) \in \mathcal{H}.
\end{aligned} \end{equation}
With $D(\mathcal{A})$ as given in \eqref{D_of_A}, the operator $\mathcal{A}$ generates a linear $C_0$-semigroup on $\mathcal{H}$, so that the Cauchy problem associated with the linear equation $\dot{u}=\mathcal{A} u$ is well-posed in Hadamard's sense; see e.g~\cite[Thm.~2.4.6]{curtain1995}. The well-posedness of the nonlinear equation depends, of course, on the nonlinear term $\mathcal{F}$, and we refer to Section \ref{Subsect_DDE_Galerkin} for a solution to this problem within our functional framework; see also \cite{Webb76}.
\end{ex}
\needspace{1\baselineskip}
\begin{rem}\label{Rmk_problems_to_overcome}
\hspace*{2em} \vspace*{-0.4em}
\begin{itemize}
\item[(i)] It is important to note that when, instead of $L^2([-\tau, 0); \mathbb{R}^d)$, the space of continuous functions $X=C([-\tau, 0]; \mathbb{R}^d)$ endowed with the supremum norm is retained \cite{Hale_Lunel93}, the continuity requirement at $0$ in \eqref{D_of_A} is naturally satisfied. On the other hand, $X$ is not a Hilbert space, and the analysis of the {\it adjoint eigenvalue problem} \cite[Sect.~7.5]{Hale_Lunel93} is required for the derivation of low-dimensional ODE systems that no longer contain memory terms \cite{wischert1994delay}. By working within the framework of Hilbert spaces, we avoid the technicalities inherent to the analysis of this adjoint problem.
\item[(ii)] When we consider the Hilbert space $\mathcal{H}$, a natural choice of functions in which to decompose the solutions of \eqref{eq:DDE_abs} is constituted by the eigenfunctions of the operator $\mathcal{A}$ with domain $D(\mathcal{A})$. When $\mathcal{A}$ does not contain distributed delay terms, these eigenfunctions are well known and can be found in e.g.~\cite[Thm.~2.4.6]{curtain1995}. In the case where the eigenvalues of $\mathcal{A}$ are all simple, this set of eigenfunctions actually corresponds to the set $\mathcal{E}$ of eigenfunctions in $C([-\tau, 0]; \mathbb{R}^d)$ \cite[Thm.~4.2, p.~207]{Hale_Lunel93}. The latter set may fail, however, to approximate continuous functions \cite[Cor.~8.1, p.~222]{Hale_Lunel93} and can even be finite-dimensional \cite[p.~220]{Hale_Lunel93}, which seriously limits its usage in practice\footnote{For the purpose of low-dimensional approximations.} if, for instance, snippets of solutions to Eq.~\eqref{eq:DDE_abs} are spanned by elements outside of $\mathcal{E}$.\footnote{See however \cite[Thm.~2.5.10]{curtain1995} for a sufficient condition for the set of (generalized) eigenfunctions to be dense in $\mathcal{H}$, still for the case when $\mathcal{A}$ does not contain distributed delay terms.}
\item[(iii)] Due to the aforementioned limitations of the eigenfunctions, other basis functions are often used for the derivation of ODE systems to approximate the dynamics of the underlying DDE. Choices proposed in the literature include step functions \cite{Banks_al78,kappel1978autonomous}, splines \cite{banks1979spline,Banks_al84}, and orthogonal polynomial functions such as Legendre polynomials \cite{Kappel86,Ito_Teglas86}.\footnote{It is also worth mentioning the more recent works \cite{Vyasarayani12,Wahi_al05}, in which interesting approximation schemes based on linear and sine functions have been proposed for the case of state dependent delays, and for which successful numerical performances have been reported although rigorous convergence results seem still to be lacking, within this approach.}
In most cases, a version of the Trotter--Kato theorem (see e.g.~\cite[Thm.~4.5, p.~88]{Pazy83}) is used to obtain finite-time uniform approximation results for the semigroup generated by $\mathcal{A}$.
In the cases of step functions and splines, the conditions required by the Trotter--Kato theorem (see e.g.~Conditions {\bf (A1)} and {\bf (A2)} in Theorem~\ref{ParisVI_thm} below) have been analyzed in \cite{Banks_al78} and \cite{banks1979spline}.
For Legendre polynomials, technical complications have been encountered in checking these conditions, either in the Galerkin setting \cite{Kappel86} or in the tau-method setting \cite{Ito_Teglas86}, largely because the basis functions do not live in the domain of $\mathcal{A}$. As noted in \cite[p.~168]{Kappel86} and in \cite[Sect.~5]{Ito_Teglas86}, either $X_N\not\subset D(\mathcal{A})$ or $\Pi_N$ is not orthogonal for the polynomial functions considered in \cite{Kappel86} and \cite{Ito_Teglas86}, respectively.
On the other hand, at a given precision, the use of a polynomial basis typically leads to ODE approximations of lower dimension than those built out of step functions or splines \cite{banks1979spline,Ito_Teglas86}. \qed
\end{itemize}
\end{rem}
The problems discussed in item (iii) of the above remark, already encountered in the linear case, have limited the application of polynomial bases to the approximation of nonlinear systems of DDEs. The above discussion leads naturally to the question of whether there exists an orthogonal polynomial basis for which standard approximation results for abstract nonlinear systems, such as those recalled in Theorem \ref{ParisVI_thm} below, can be applied to nonlinear systems of DDEs.
The next section introduces orthogonal polynomials that will allow us to answer this question in the affirmative, leading to direct and explicit formulas for rigorous Galerkin approximations of a broad class of nonlinear systems of DDEs; see Sections~\ref{Subsect_DDE_Galerkin} and \ref{Sec_Galerkin_analytic}. As explained next, the key is to seek polynomials that live in the domain of $\mathcal{A}$, which is achieved here by requiring the polynomials to be orthogonal with respect to the inner product \eqref{H_inner}, i.e.~an inner product with a point mass.
\section{Orthogonal polynomials for inner products with a point mass} \label{sect:basis}
The inner product given in \eqref{H_inner} is naturally associated with the measure
\begin{equation}\label{Eq_dx+point-mass}
\nu(\mathrm{d} \theta)=\mathrm{d} \theta +\delta_0,
\end{equation}
where $\delta_0$ denotes the Dirac measure concentrated at $\theta=0$.
Orthogonal polynomials with respect to the Lebesgue measure $\mathrm{d} \theta$, or with respect to measures having a smooth density with respect to it,
have a long history \cite{Szego75}. Orthogonal polynomials with respect to a measure including a point mass, such as \eqref{Eq_dx+point-mass}, have been studied only more recently \cite[Chap.~2.9]{Ismail05}. It was in particular noticed that orthogonal polynomials with respect to such a measure can be expressed in terms of polynomials orthogonal with respect to the smooth part of the measure; see \cite{Uvarov69} for an early contribution on the topic.
Koornwinder in \cite{Koo84} dealt with the case of orthogonal polynomials on $[-1,1]$ associated with measures given by
\begin{equation}\label{Eq_Koorn_dx}
\nu(\mathrm{d} x)= \frac{\Gamma(\alpha+\beta+2)}{2^{\alpha+\beta+1}\Gamma(\alpha+1)\Gamma(\beta+1)}(1-x)^\alpha (1+x)^\beta \mathrm{d} x + M \delta_{-1}+ N\delta_{1}, \; \alpha, \beta > -1,
\end{equation}
i.e.~with measures having a Jacobi weight on $[-1,1]$ and two point masses added at the endpoints of the interval.
Although many properties --- such as three-term recurrence relationships or differential equations satisfied by such polynomials --- remain valid
in the case of a measure with a point mass, subtle but important qualitative and quantitative differences arise. For instance, \cite[Thm.~ 3 c)]{Alfaro_al97} shows that the zeroes closest to $1$ of polynomials orthogonal with respect to the measure $\nu$ given in \eqref{Eq_Koorn_dx} converge to $1$ faster than those associated with the standard Jacobi polynomials.
It is our goal to show that orthogonal polynomials with respect to the measure $\nu$ in \eqref{Eq_Koorn_dx} allow us to work within a simpler and more direct framework than those found in the literature for building Galerkin approximations of DDEs. Indeed, approximations of DDEs by systems of ODEs built from orthogonal polynomials have not, so far, relied on classical Galerkin schemes, as noted in \cite[p.~168]{Kappel86} and in \cite[Sect.~5]{Ito_Teglas86}.
The results given below correspond to the case $\alpha = \beta = M=0$, and $N = 1$ considered in \cite{Koo84}.
\subsection{Koornwinder polynomials}
We recall next from \cite[Eq.~(2.1)]{Koo84} the following sequence of Koornwinder polynomials $\{K_n\}$ that can be built from
the Legendre polynomials $L_n$ according to
\begin{equation} \label{eq:Pn}
K_n(s) := -(1+s)\frac{\mathrm{d}}{\mathrm{d} s} L_n(s) +( n^2 + n + 1) L_n(s), \; s \in [-1, 1], \; n \in \mathbb{N}.
\end{equation}
As recalled above, this polynomial sequence is known to be orthogonal when a Dirac point mass at the right endpoint, $\delta_{1}$, is adjoined
to the Lebesgue measure \cite{Koo84}; in other words,
\begin{equation} \begin{aligned}
\int_{-1}^{1} K_n(s) K_m(s) \mathrm{d} \nu (s)& = \frac{1}{2} \int_{-1}^{1} K_n(s) K_m(s) \mathrm{d} s + K_n(1) K_m(1)\\
&=0, \, \mbox{ if } m\neq n.
\end{aligned} \end{equation}
This orthogonality property, together with the main properties of $\{K_n\}$ on which we will rely, is
summarized from \cite{Koo84} in the proposition below.
\begin{prop} \label{thm:Pn}
The polynomial $K_n$ defined in \eqref{eq:Pn} {\color{black} is of degree $n$} and admits the following expansion in terms of the Legendre polynomials:
\begin{equation} \label{eq:Pn2}
K_n(s) = - \sum_{j = 0}^{n-1} (2j+1)L_j(s) + (n^2 + 1) L_n(s), \qquad n \in \mathbb{N};
\end{equation}
and the following normalization property holds:
\begin{equation} \label{eq:Pn_normalization}
K_n(1) = 1, \qquad n \in \mathbb{N}.
\end{equation}
Moreover, the sequence given by
\begin{equation} \label{eq:Pn_prod}
\{\mathcal{K}_n := (K_n, K_n(1)) : n \in \mathbb{N}\}
\end{equation}
forms an orthogonal basis of the product space
\begin{equation} \label{eq:E}
\mathcal{E} := L^2([-1,1); \mathbb{R}) \times \mathbb{R},
\end{equation}
where $\mathcal{E}$ is {\color{black} endowed} with the following inner product:
\begin{equation} \label{eq:inner_E}
\langle (f, a), (g, b) \rangle_{\mathcal{E}} = \frac{1}{2} \int_{-1}^1 f(s)g(s) \mathrm{d} s + ab, \quad (f,a), (g, b) \in \mathcal{E}.
\end{equation}
{\color{black} Moreover $\Big\{\frac{\mathcal{K}_n}{\|\mathcal{K}_n\|_{\mathcal{E}}}\Big\}$ forms a Hilbert basis of $\mathcal{E}$ where
the norm $\|\mathcal{K}_n\|_{\mathcal{E}}$ of $\mathcal{K}_n$ induced by $\langle \cdot, \cdot \rangle_{\mathcal{E}}$ possesses the following analytic expression:}
\begin{equation} \label{eq:Pn_norm}
\|\mathcal{K}_n\|_{\mathcal{E}} = \sqrt{\frac{(n^2+1)((n+1)^2+1)}{2n+1}}, \qquad n \in \mathbb{N}.
\end{equation}
\end{prop}
\begin{proof}
Based on \eqref{eq:Pn}, the proof consists essentially of algebraic manipulations relying on the following standard properties of the Legendre polynomials \cite[Sect.~3.3]{Shen_al11}:
\begin{itemize}
\item Orthogonality:
\begin{equation} \label{eq:Ln_orth}
\int_{-1}^1 L_m(s) L_n(s) \mathrm{d} s = \frac{2}{2n+1} \delta_{mn}, \qquad m,n \in \mathbb{N},
\end{equation}
where $\delta_{mn}$ denotes the Kronecker delta.
\medskip
\item Normalization:
\begin{equation} \label{eq:Ln_normalization}
L_n(1) = 1, \qquad n \in \mathbb{N}.
\end{equation}
\item Three-term recurrence relation:
\begin{equation} \label{eq:Ln_recur}
(n+1) L_{n+1}(s) = (2n+1) s L_n(s) - n L_{n-1}(s), \; s\in [-1,1], \; n \ge 1,
\end{equation}
{\color{black} where} the first two Legendre polynomials are {\color{black} given by}
\begin{equation}
L_0 \equiv 1 \qquad \text{and} \qquad L_1(s) = s.
\end{equation}
\item First order derivative recurrence relation:
\begin{equation} \label{eq:dLn}
\frac{\mathrm{d} L_n}{\mathrm{d} s}(s) = \sum_{k \in I_n} (2k+1)L_k(s),
\end{equation}
where
\begin{equation} \label{eq:idx_n}
I_n:=\{k \in \mathbb{N} : 0\le k \le n-1, k+n \text{ is odd}\}.
\end{equation}
\end{itemize}
Standard density arguments, outlined in Appendix~\ref{sect:thm_Pn_proof}, allow us then to conclude the proof.
\end{proof}
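The statements of Proposition~\ref{thm:Pn} are easy to check numerically. The sketch below (ours, using NumPy's Legendre-series utilities) builds $K_n$ from the defining formula \eqref{eq:Pn} and verifies the expansion \eqref{eq:Pn2}, the normalization \eqref{eq:Pn_normalization}, the orthogonality, and the norm formula \eqref{eq:Pn_norm}:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def koornwinder(n):
    """Legendre-series coefficients of K_n(s) = -(1+s) L_n'(s) + (n^2+n+1) L_n(s)."""
    ln = np.zeros(n + 1); ln[-1] = 1.0                  # L_n
    dln = leg.legder(ln)                                # L_n'
    one_plus_s_dln = leg.legadd(dln, leg.legmulx(dln))  # (1+s) L_n'(s)
    return leg.legsub((n**2 + n + 1) * ln, one_plus_s_dln)

def inner_E(p, q):
    """<(p, p(1)), (q, q(1))>_E = (1/2) * int_{-1}^{1} p q ds + p(1) q(1)."""
    anti = leg.legint(leg.legmul(p, q))
    return (0.5 * (leg.legval(1.0, anti) - leg.legval(-1.0, anti))
            + leg.legval(1.0, p) * leg.legval(1.0, q))

pts = np.linspace(-1.0, 1.0, 9)
K = [koornwinder(n) for n in range(8)]
for n in range(8):
    # expansion (eq:Pn2): coefficient -(2j+1) on L_j for j < n, and n^2+1 on L_n
    expansion = np.array([-(2 * j + 1) for j in range(n)] + [n**2 + 1], float)
    assert np.allclose(leg.legval(pts, K[n]), leg.legval(pts, expansion))
    assert abs(leg.legval(1.0, K[n]) - 1.0) < 1e-10       # K_n(1) = 1
    norm2 = (n**2 + 1) * ((n + 1)**2 + 1) / (2 * n + 1)   # eq:Pn_norm, squared
    assert abs(inner_E(K[n], K[n]) - norm2) < 1e-8
    for m in range(n):
        assert abs(inner_E(K[n], K[m])) < 1e-8            # point-mass orthogonality
```

For instance, $K_1(s)=2s-1$ is not orthogonal to $K_0\equiv 1$ in $L^2$ alone; it is the point-mass term $K_0(1)K_1(1)=1$ that restores orthogonality in $\mathcal{E}$.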
\subsection{Rescaled Koornwinder basis} \label{Sect_rescaled_basis}
From the original Koornwinder basis defined on the interval $[-1, 1]$, orthogonal polynomials on the interval $[-\tau, 0]$ for the inner product \eqref{H_inner} can be easily obtained by using the simple affine transformation $\mathcal{T}$ defined by:
\begin{equation} \label{eq:linear_transf}
\mathcal{T} \colon [-\tau, 0] \rightarrow [-1, 1], \qquad \theta \mapsto 1 + \frac{2 \theta }{\tau}.
\end{equation}
Indeed, for $K_n$ given in \eqref{eq:Pn2}, let us define the polynomial $K_n^\tau$ by
\begin{equation} \begin{aligned} \label{eq:Pn_tilde}
K^\tau_n\colon [-\tau, 0] & \rightarrow \mathbb{R}, \\
\theta & \mapsto K_n \Bigl( 1 + \frac{2 \theta }{\tau} \Bigr), \qquad n \in \mathbb{N}.
\end{aligned} \end{equation}
Since the sequence $\{\mathcal{K}_n = (K_n, K_n(1)) : n \in \mathbb{N}\}$ forms an orthogonal basis for $\mathcal{E}$ (cf.~Proposition~\ref{thm:Pn}), it follows then that the polynomial sequence
\begin{equation} \label{eq:Pn_tilde_prod}
\{\mathcal{K}_n^\tau := (K_n^\tau, K_n^\tau(0)) : n \in \mathbb{N}\}
\end{equation}
forms an orthogonal basis for the space $\mathcal{H} = L^2([-\tau,0); \mathbb{R}) \times \mathbb{R}$ endowed with the inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ given in \eqref{H_inner} for $d=1$.
Since $K_n(1)=1$ from \eqref{eq:Pn_normalization}, we have
\begin{equation}\label{Eq_normalization}
K_n^\tau(0) = 1.
\end{equation}
Moreover, by applying the transformation $\mathcal{T}$, we readily get
\begin{equation}\label{Eq_inv_innerproduct}
\|\mathcal{K}_n^\tau \|_{\mathcal{H}} = \| \mathcal{K}_n \|_{\mathcal{E}}.
\end{equation}
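The identities \eqref{Eq_normalization} and \eqref{Eq_inv_innerproduct} can be verified numerically; in the sketch below (ours; $\tau=0.5$ and $n=4$ are arbitrary illustrative choices), the $\mathcal{H}$-norm is computed by Gauss--Legendre quadrature mapped to $[-\tau,0]$:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Rescaled Koornwinder polynomial K_4^tau and its H-norm; tau = 0.5 is an
# arbitrary illustrative choice. K_4 is entered via its Legendre expansion
# (eq:Pn2): coefficients -(2j+1) for j < 4, and 4^2 + 1 = 17 for j = 4.
tau = 0.5
coef = np.array([-1.0, -3.0, -5.0, -7.0, 17.0])
K = lambda s: leg.legval(s, coef)
K_tau = lambda theta: K(1.0 + 2.0 * theta / tau)      # eq:Pn_tilde

assert abs(K_tau(0.0) - 1.0) < 1e-12                  # Eq_normalization

# ||K_4^tau||_H^2 = (1/tau) int_{-tau}^0 (K_4^tau)^2 dtheta + K_4^tau(0)^2,
# via Gauss-Legendre quadrature mapped from [-1, 1] to [-tau, 0];
# the weight 1/tau times the Jacobian tau/2 gives the factor 1/2 below.
s, w = leg.leggauss(20)
theta = tau * (s - 1.0) / 2.0
norm2_H = np.sum(w * K_tau(theta) ** 2) / 2.0 + K_tau(0.0) ** 2
norm2_E = (4**2 + 1) * (5**2 + 1) / (2 * 4 + 1)       # eq:Pn_norm, squared
assert abs(norm2_H - norm2_E) < 1e-9                  # Eq_inv_innerproduct
```

The quadrature is exact here since $ (K_4^\tau)^2$ is a polynomial of degree $8$.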
We then have the following fundamental lemma.
\begin{lem}\label{Fundamental_lemma}
The rescaled Koornwinder polynomials $\{K_j^{\tau}\}_{j\geq 0}$ satisfy the following properties:
\begin{equation} \label{Eq_identity1_0}
\boxed{
\sum_{j=0}^{\infty} \frac{K^\tau_j}{\|\mathcal{K}_j^\tau \|_{\mathcal{H}}^2}=0, \quad \text{ in the $L^2$ sense},}
\end{equation}
and
\begin{equation} \label{Eq_identity2_0}
\boxed{
\sum_{j=0}^{\infty} \frac{1}{\|\mathcal{K}_j^\tau \|_{\mathcal{H}}^2}=1.}
\end{equation}
Moreover, each function in $L^2([-\tau, 0]; \mathbb{R})$ enjoys the following decomposition in terms of the Koornwinder polynomials $K_j^\tau$:
\begin{equation} \label{L2_decomp_0}
\boxed{
f = \sum_{j=0}^\infty \frac{\langle f, K_j^\tau \rangle_{L^2} }{\tau \|\mathcal{K}_j^\tau \|_{\mathcal{H}}^2 } K_j^\tau,\qquad \text{ } \forall \: f \in L^2([-\tau, 0]; \mathbb{R}).}
\end{equation}
\end{lem}
\begin{proof}
For any $\Psi=(\Psi^D,\Psi^S) \in \mathcal{H}$, we have\footnote{Note that the equality in \eqref{Eq_decomp} holds in the sense that $ \Big \|\Psi - \sum_{j=0}^{\infty} \frac{\langle \Psi,\mathcal{K}^\tau_j \rangle_{\mathcal{H}}}{\|\mathcal{K}^\tau_j\|_{\mathcal{H}}^2}\mathcal{K}_j^\tau \Big\|_\mathcal{H}$ = 0, which is equivalent to $\Big \|\Psi^D - \sum_{j=0}^{\infty} \frac{\langle \Psi,\mathcal{K}^\tau_j \rangle_{\mathcal{H}}}{\|\mathcal{K}^\tau_j\|_{\mathcal{H}}^2} K_j^\tau \Big\|_{L^2} = 0$ and
$\Big |\Psi^S - \sum_{j=0}^{\infty} \frac{\langle \Psi,\mathcal{K}^\tau_j \rangle_{\mathcal{H}}}{\|\mathcal{K}^\tau_j\|_{\mathcal{H}}^2}\Big| = 0$.}
\begin{equation} \begin{aligned}\label{Eq_decomp}
\Psi &=\sum_{j=0}^{\infty} \frac{\langle \Psi,\mathcal{K}^\tau_j \rangle_{\mathcal{H}}}{\|\mathcal{K}^\tau_j\|_{\mathcal{H}}^2}\mathcal{K}_j^\tau\\
& =\sum_{j=0}^{\infty} \Big(\frac{1}{\tau}\langle \Psi^D,K_j^\tau \rangle_{L^2}+\Psi^S K_j^\tau(0)\Big) \frac{\mathcal{K}_j^\tau }{\|\mathcal{K}_j^\tau \|_{\mathcal{H}}^2}.
\end{aligned} \end{equation}
Now, let $\Psi^D$ be the zero function on $[-\tau,0]$ and let $\Psi^S$ be $1$.
For such a $\Psi$, by equating the $D$-components and the $S$-components of the two sides of \eqref{Eq_decomp}, one obtains from \eqref{Eq_normalization} that
\begin{equation}
\sum_{j=0}^{\infty} \frac{K^\tau_j}{\|\mathcal{K}_j^\tau \|_{\mathcal{H}}^2}=0, \quad \text{ in the $L^2$ sense},
\end{equation}
and
\begin{equation}
\sum_{j=0}^{\infty} \frac{1}{\|\mathcal{K}_j^\tau \|_{\mathcal{H}}^2}=1.
\end{equation}
The decomposition of $L^2$ functions given in \eqref{L2_decomp_0} follows directly from \eqref{Eq_decomp} by considering $\Psi := (f, 0) \in \mathcal{H}$. Again, the equality holds in the $L^2$ sense.
\end{proof}
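The identity \eqref{Eq_identity2_0} can also be checked directly from the closed-form norms \eqref{eq:Pn_norm}; a minimal sketch (ours):

```python
import numpy as np

# Partial sums of the boxed identity (Eq_identity2_0), using the norms
# ||K_j^tau||_H^2 = (j^2+1)((j+1)^2+1)/(2j+1) from eq:Pn_norm; the terms
# decay like 2/j^3, so the partial sums approach 1 at the rate 1/N^2.
norm2 = lambda j: (j**2 + 1) * ((j + 1)**2 + 1) / (2 * j + 1)
partial = np.cumsum([1.0 / norm2(j) for j in range(400)])
assert np.all(np.diff(partial) > 0) and partial[-1] < 1.0   # monotone, below 1
assert abs(partial[-1] - 1.0) < 1e-4
```

The first few partial sums are $1/2$, $4/5$, $9/10,\dots$, consistent with the slow oscillatory cancellation visible in Figure~\ref{fig:Cancel_prop} for \eqref{Eq_identity1_0}.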
As an illustration of the identities \eqref{Eq_identity1_0} and \eqref{Eq_identity2_0}, Figure~\ref{fig:Cancel_prop} displays numerical computations of the partial sum $\sum_{j=0}^{N-1} \frac{K^\tau_j}{\|\mathcal{K}_j^\tau \|_{\mathcal{H}}^2}$ for $N=20$ and $N=60$, in the case $\tau = 0.5$.
\begin{figure}[hbtp]
\centering
\includegraphics[height=0.4\textwidth,width=.75\textwidth]{Cancel_prop.pdf}
\caption{{\footnotesize Sum of the first $N$ rescaled Koornwinder polynomials: blue curve corresponds to $N=20$, and red curve to $N=60$.}} \label{fig:Cancel_prop}
\end{figure}
\needspace{1\baselineskip}
\begin{rem} \label{Rmk_KoornwinderBasis}
\hspace*{2em} \vspace*{-0.4em}
\begin{itemize}
\item[(i)] Note that the continuity condition, $\lim_{\theta \rightarrow 0^-} \Psi^D(\theta) = \Psi^S $, required in \eqref{D_of_A} for $\Psi$ to belong to $D(\mathcal{A})$, is here naturally satisfied by the Koornwinder basis functions $\mathcal{K}_n^\tau = (K_n^\tau, K_n^\tau(0))$. They thus constitute, for the inner product \eqref{H_inner}, an orthogonal polynomial basis whose elements live in the domain of the linear operator $\mathcal{A}$ given in \eqref{Def_A}.
As explained at the beginning of Section~\ref{sect:basis}, the key element for such a construction relies on the inclusion of a point mass adjoined to the continuous part of the measure. When this point mass is absent, the corresponding orthogonal polynomials are (rescaled) Legendre polynomials. The associated basis in this latter case is given by (cf.~\cite{Kappel86})
\begin{equation} \label{Legendre-based basis}
\mathfrak{B} := \{ \psi_1:= (0_{\mathcal{H}}, 1) \} \cup \{\psi_n:= (L^\tau_{n-2}, 0) \ \vert \ n=2, 3, \cdots \},
\end{equation}
where $0_\mathcal{H}$ denotes the zero function on $[-\tau, 0)$, and $L^\tau_n$ is the Legendre polynomial of degree $n$ rescaled to the interval $[-\tau, 0]$. Clearly, none of the elements of $\mathfrak{B}$ belongs to $D(\mathcal{A})$, since $\lim_{\theta \rightarrow 0^-} L^\tau_n(\theta) \neq 0$ while the corresponding $S$-component is $0$; the element $\psi_1$ fails the continuity condition as well.
\item[(ii)] The fact that the Koornwinder basis functions live in $D(\mathcal{A})$ allows us to construct {\it standard} Galerkin approximations, whereas extra correction terms are required in the Galerkin approximation built from the Legendre-based basis given in \eqref{Legendre-based basis} (see e.g.~\cite[p.~169]{Kappel86}). Moreover, technical complications such as those pointed out in Remark~\ref{Rmk_problems_to_overcome}-(iii) do not arise for the Koornwinder basis. Indeed, it will be shown in Section~\ref{Subsect_DDE_Galerkin} that the properties of the Koornwinder polynomials summarized in Lemma~\ref{Fundamental_lemma}, as well as the vectorized version given by Lemma~\ref{Super_Fundamental_lemma} below, turn out to be sufficient to obtain finite-time uniform approximation results for the semigroup generated by the linear operator $\mathcal{A}$; see Lemmas~\ref{Lem_A2} and \ref{Lem_A1}.
\item[(iii)] It is also worth mentioning that Lemma~\ref{Fundamental_lemma} and Lemma~\ref{Super_Fundamental_lemma}, as well as the rigorous convergence results presented in Section~\ref{Subsect_DDE_Galerkin}, are not limited to the Koornwinder basis constructed here. Given any
polynomial basis on $[-\tau, 0]$ that is orthogonal with respect to a measure of the form $\nu(\mathrm{d} \theta) = \mathrm{d} \rho(\theta) + \delta_0$, with $\rho$ a positive non-decreasing function on $[-\tau, 0]$, the aforementioned results would still hold. We refer to \cite{Uvarov69} for the construction of such polynomials when orthogonal polynomials with respect to $\widetilde{\nu}(\mathrm{d} \theta) = \mathrm{d} \rho(\theta)$ are known. \qed
\end{itemize}
\end{rem}
\subsection{Vectorization of Koornwinder polynomials}\label{Sec_Vectorization}
We introduce here a generalization of the Koornwinder polynomials that will turn out to be useful for
the approximation of nonlinear DDE systems.
The purpose here is to build, from the Koornwinder polynomials introduced above, linear subspaces $\mathcal{H}_{N}$ that approximate $\mathcal{H} := L^2([-\tau,0); \mathbb{R}^d) \times \mathbb{R}^d$ for $d>1$.
Each function $\Psi$ in $\mathcal{H}$ has $d$ components, each of which can be approximated by a series of Koornwinder polynomials as in \eqref{Eq_decomp}. If we restrict such an approximation to the first $N$ Koornwinder polynomials, $\mathcal{H}_{N}$ becomes an $N\times d$-dimensional subspace; see \eqref{subspace_HNd} below.
Our goal here is also to introduce a vectorization of the Koornwinder polynomials that allows for a natural extension of Lemma \ref{Fundamental_lemma} to the case $d>1$. This extension of Lemma \ref{Fundamental_lemma} will be particularly useful to provide finite-dimensional approximations of the linear part of systems of DDEs; see Lemma \ref{Lem_A2}.
To do so, with each $j\in\{1,\cdots,Nd\}$ we associate a Koornwinder polynomial of degree $j_q \in\{0,\cdots,N-1\}$ by writing
\begin{equation}\label{index_relation}
j=d j_q +j_r,
\end{equation}
where $j_r \in \{1,\cdots,d\}$ is given by
\begin{equation}\label{index_relationb}
j_r := \begin{cases}
\mathrm{mod}(j, d), & \text{if } \mathrm{mod}(j, d) \neq 0, \\
d, & \text{otherwise.}
\end{cases}
\end{equation}
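In code, the index map \eqref{index_relation}--\eqref{index_relationb} reads as follows (a sketch; the function name is ours):

```python
def split_index(j, d):
    """Return (j_q, j_r) with j = d * j_q + j_r and j_r in {1, ..., d},
    following (index_relation)-(index_relationb)."""
    j_r = j % d if j % d != 0 else d
    j_q = (j - j_r) // d
    return j_q, j_r

# With d = 3: the j-th vectorized polynomial carries K_{j_q} in slot j_r.
assert split_index(1, 3) == (0, 1)   # degree 0, first component
assert split_index(3, 3) == (0, 3)   # degree 0, last component
assert split_index(4, 3) == (1, 1)   # degree 1, first component
# consistency with j = d * j_q + j_r:
assert all(3 * split_index(j, 3)[0] + split_index(j, 3)[1] == j
           for j in range(1, 13))
```

Each block of $d$ consecutive indices thus shares the same polynomial degree $j_q$ while cycling through the $d$ vector components.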
Let us now introduce the following $d$-dimensional mapping from $[-\tau,0]$ to $\mathbb{R}^d$:
\begin{equation} \label{vectorized_K}
\mathbf{K}^\tau_{j}(\theta):= (\underbrace{0, \cdots, 0}_{\text{$j_r-1$ entries}}, K^\tau_{j_q}(\theta), \underbrace{0, \cdots, 0}_{\text{$d-j_r$ entries}})^\mathrm{tr}, \qquad \theta \in [-\tau,0].
\end{equation}
The vector $\mathbf{K}^\tau_{j}(\theta)$ is thus a $d$-dimensional vector whose $j_r^{th}$ entry is given by the value at $\theta$ of the (rescaled) Koornwinder polynomial of degree $j_q$, all other entries being zero; the integers $j_q$ and $j_r$ are related to $j$ according to \eqref{index_relation}--\eqref{index_relationb}. Based on these vectorized (rescaled) Koornwinder polynomials $\mathbf{K}^\tau_{j}$, we also introduce
\begin{equation}\label{Eq_superKoor}
\mathbb{K}^\tau_{j}:= \big( \mathbf{K}^\tau_{j}, \mathbf{K}^\tau_{j}(0) \big), \qquad j \ge 1.
\end{equation}
In the remainder of this section, we summarize some key properties of $\mathbf{K}^\tau_{j}$ and $\mathbb{K}^\tau_{j}$ for later use. Hereafter, we use $\mathcal{H}_1$ to denote the space $\mathcal{H}$ defined in \eqref{H_space} for the case $d=1$, i.e.,
\begin{equation}
\mathcal{H}_1 = L^2([-\tau,0); \mathbb{R}) \times \mathbb{R},
\end{equation}
which is again endowed with the inner product given in \eqref{H_inner} (still with $d=1$).
Since the sequence $\{\mathcal{K}^\tau_j = (K_j^\tau, K_j^\tau(0)) : j \in \mathbb{N}\}$ forms an orthogonal basis for $\mathcal{H}_1$ (cf.~Section~\ref{Sect_rescaled_basis}), one can readily check that $\{\mathbb{K}^\tau_j : j \in \mathbb{N}^*\}$
forms an orthogonal basis for the space $\mathcal{H}$.
Note also that
\begin{equation}\label{Eq_norm_superKoor}
\|\mathbb{K}^\tau_{j}\|_\mathcal{H} = \|\mathcal{K}^\tau_{j_q}\|_{\mathcal{H}_1}, \qquad j \in \mathbb{N}^*.
\end{equation}
Given this vectorization of the Koornwinder polynomials, we can now formulate the following extension of Lemma \ref{Fundamental_lemma}, which summarizes the key properties of the $\mathbf{K}^\tau_j$'s that will be used for the rigorous approximation of nonlinear systems of DDEs as described in Section \ref{Subsect_DDE_Galerkin}.
\begin{lem}\label{Super_Fundamental_lemma}
The vectorized rescaled Koornwinder polynomials $\{\mathbf{K}^\tau_j \}_{j\geq 1}$ satisfy the following properties:
\begin{equation} \label{Eq_identity1}
\boxed{
\sum_{j=1}^{\infty} \frac{ \langle \alpha, \mathbf{K}^\tau_j(0) \rangle }{\|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2} \mathbf{K}^\tau_j =0 \quad \text{ in the $L^2([-\tau,0); \mathbb{R}^d)$ sense}, \quad \text{ } \forall \: \alpha \in \mathbb{R}^d,}
\end{equation}
and
\begin{equation} \label{Eq_identity2}
\boxed{
\sum_{j=1}^{\infty} \frac{ \langle \alpha, \mathbf{K}^\tau_j(0) \rangle }{\|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2} \mathbf{K}^\tau_j(0)= \alpha, \quad \text{ } \forall \: \alpha \in \mathbb{R}^d.}
\end{equation}
Moreover, each function in $L^2([-\tau, 0]; \mathbb{R}^d)$ enjoys the following decomposition in terms of the vectorized Koornwinder polynomials $\mathbf{K}_j^\tau$:
\begin{equation} \label{L2_decomp}
\boxed{
f = \sum_{j=1}^\infty \frac{\langle f,\mathbf{K}_j^\tau \rangle_{L^2} }{\tau \|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2 } \mathbf{K}_j^\tau,\qquad \text{ } \forall \: f \in L^2([-\tau, 0]; \mathbb{R}^d);}
\end{equation}
and the following identity holds:
\begin{equation} \label{Eq_identity3}
\boxed{
\sum_{j=1}^\infty \frac{\langle f,\mathbf{K}_j^\tau \rangle_{L^2} }{\tau \|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2 } \mathbf{K}_j^\tau(0) = 0,\qquad \text{ } \forall \: f \in L^2([-\tau, 0]; \mathbb{R}^d).}
\end{equation}
\end{lem}
\begin{proof}
The above identities can be obtained by using the same type of reasoning as given in the proof of Lemma~\ref{Fundamental_lemma} for the scalar case.
Indeed, by noting that $\{\mathbb{K}^\tau_j \,:\, j \in \mathbb{N}^*\}$ forms an orthogonal basis of $\mathcal{H}$, any $\Psi \in \mathcal{H}$ admits the following decomposition:
\begin{equation} \begin{aligned}\label{Eq_decomp_vec}
\Psi &=\sum_{j=1}^{\infty} \frac{\langle \Psi,\mathbb{K}^\tau_j \rangle_{\mathcal{H}}}{\|\mathbb{K}^\tau_j\|_{\mathcal{H}}^2}\mathbb{K}_j^\tau\\
& =\sum_{j=1}^{\infty} \Big(\frac{1}{\tau}\langle \Psi^D,\mathbf{K}_j^\tau \rangle_{L^2}+ \langle \Psi^S, \mathbf{K}_j^\tau(0) \rangle \Big) \frac{\mathbb{K}_j^\tau}{\|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2},
\end{aligned} \end{equation}
where we have used the identity \eqref{Eq_norm_superKoor} in the last equality above.
Now, let $\Psi^D \in L^2([-\tau, 0]; \mathbb{R}^d)$ be the zero function and $\Psi^S$ be an arbitrary vector $\alpha \in \mathbb{R}^d$. For such a $\Psi$, equating the $D$-components and the $S$-components of the two sides of \eqref{Eq_decomp_vec} yields \eqref{Eq_identity1} and \eqref{Eq_identity2}, respectively.
The identities \eqref{L2_decomp} and \eqref{Eq_identity3}
also follow directly from \eqref{Eq_decomp_vec} by considering $\Psi^D = f$ and $\Psi^S = 0 \in \mathbb{R}^d$.
\end{proof}
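The scalar ($d=1$) instance of identity \eqref{Eq_identity2} can be checked numerically without forming the Koornwinder polynomials explicitly. In the following sketch (a minimal illustration, not taken from the paper's computations), we take $\tau=1$, use the Legendre polynomials shifted to $[-1,0]$ as a non-orthogonal polynomial basis of the $D$-component (so that $P_n(0)=1$ and $\frac{1}{\tau}\int_{-\tau}^0 P_n^2\,\mathrm{d}\theta = 1/(2n+1)$), and compute the squared $\mathcal{H}$-norm of the orthogonal projection of $\Psi=(0,1)$ onto the span of the first $N$ pairs $(P_n, P_n(0))$ directly from the Gram matrix. By Parseval's identity, and since orthogonal projections do not depend on the choice of basis of the subspace, this equals the $N$-term partial sum of the left-hand side of \eqref{Eq_identity2} (with $\alpha=1$) and should increase to $1$:

```python
import numpy as np

def identity2_partial_sum(N):
    """Squared H-norm of the orthogonal projection of Psi = (0, 1) onto the span
    of the first N pairs (P_n, P_n(0)), where P_n is the Legendre polynomial
    shifted to [-tau, 0].  By Parseval, this equals the N-term partial sum
    sum_{n<N} K_n(0)^2 / ||K_n||^2 of the scalar (d = 1) identity, whatever
    orthogonalization (here: Koornwinder-type) of the pairs is used."""
    n = np.arange(N)
    # Gram matrix for <f, g> = (1/tau) * int_{-tau}^0 f g dtheta + f(0) g(0):
    # the L2 part is diagonal (Legendre orthogonality, norms tau/(2n+1), rescaled
    # by 1/tau), and the point mass adds a rank-one term since P_n(0) = 1.
    G = np.diag(1.0 / (2.0 * n + 1.0)) + np.ones((N, N))
    c = np.ones(N)                    # c_i = <Psi, (P_i, P_i(0))> = P_i(0) = 1
    return c @ np.linalg.solve(G, c)  # ||Pi_N Psi||^2 = c^T G^{-1} c

for N in (1, 5, 20):
    print(N, identity2_partial_sum(N))  # increases monotonically toward 1
```

A short Sherman--Morrison computation predicts these partial sums equal $N^2/(N^2+1)$, so the point-mass component of $\Psi=(0,\alpha)$ is recovered with an $O(1/N^2)$ error, consistent with \eqref{Eq_identity2}.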
\section{Galerkin approximation: Rigorous results}\label{Sec_Galerkin_approx}
In this section, we establish the convergence of the Galerkin scheme based on the
rescaled and vectorized Koornwinder polynomials of Section~\ref{Sec_Vectorization}. These convergence results apply, as we shall see, to a broad class of nonlinear systems of DDEs.
As mentioned in the Introduction and in Remark \ref{Rmk_problems_to_overcome}-(iii), the advantage of the Koornwinder basis lies in the fact that its basis functions are orthogonal and each belongs to the domain of the linear operator associated with a given DDE. In particular, by construction, no basis element exhibits a discontinuity at the right endpoint; see Section \ref{sect:basis}. Thanks to these properties of the basis functions, convergence results for the associated Galerkin systems can be derived in a straightforward fashion (see Corollary \ref{Cor_DDE_global_Lip}) and under useful criteria on the nonlinear terms (see Corollaries \ref{Cor_DDE_local_Lip_Case1} and \ref{Cor_DDE_local_Lip_Case2}), compared with Galerkin schemes built from other bases; see, e.g., \cite{Kappel86,Kappel_al87, Vyasarayani12,Wahi_al05} and references therein.
In the following, we first present in Section \ref{Subsect_ODE_Galerkin} a general convergence result for Galerkin approximations of abstract nonlinear ODEs in Hilbert spaces by relying essentially on the theory of semigroups and the Trotter-Kato theorem \cite[Thm.~4.5, p.~88]{Pazy83}. The result is then applied to the DDE context in Section \ref{Subsect_DDE_Galerkin}.
General examples are provided in Section \ref{Sec_examples}.
\subsection{Galerkin approximations of nonlinear ODEs in Hilbert spaces} \label{Subsect_ODE_Galerkin}
We first present a general convergence result for Galerkin approximations of abstract nonlinear differential equations in a Hilbert space $X$, endowed with a norm $\|\cdot\|_X$. The mathematical setting is somewhat classical but we recall it below for the reader's convenience and later use.
In that respect, we assume throughout this section that the linear operator $\mathcal{L}$ is the infinitesimal generator of a
$C_0$-semigroup of bounded linear operators $T(t)$ on $X$. Recall that in that case the domain $D(\mathcal{L})$ of $\mathcal{L}$ is dense in $X$ and $\mathcal{L}$ is a closed operator; see \cite[Cor.~2.5, p.~5]{Pazy83}.
Under these assumptions, recall that there exist $M\geq 1$ and $\omega \geq 0$ \cite[Thm.~2.2, p.~4]{Pazy83} such that
\begin{equation}\label{Eq_control_T_t}
\|T(t)\| \le M e^{\omega t}, \qquad t \ge 0,
\end{equation}
where $\|\cdot \|$ denotes the operator norm subordinated to $\|\cdot\|_X$.
We are concerned with finite-dimensional approximations of the following initial-value problem:
\begin{equation} \begin{aligned} \label{ODE}
\frac{\mathrm{d} u}{\mathrm{d} t} &= \mathcal{L} u + \mathcal{G}(u), \\
u(0) &= u_0,
\end{aligned} \end{equation}
where $u_0 \in X$.
A {\it mild solution} of \eqref{ODE} over $[0,T]$ emanating from $u_0\in X$ is any function $u\in C([0,T],X)$ such that
\begin{equation}\label{Eq_mild}
u(t)=T(t)u_0 + \int_0^t T(t-s) \mathcal{G}(u(s)) \mathrm{d} s.
\end{equation}
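To make the fixed-point nature of \eqref{Eq_mild} concrete, here is a minimal numerical sketch; the scalar choices below are purely illustrative and not taken from the paper. Take $X=\mathbb{R}$, $\mathcal{L}u=-u$ (so that $T(t)=e^{-t}$) and $\mathcal{G}(u)=-u$, for which \eqref{ODE} reduces to $u'=-2u$ with exact solution $u(t)=u_0 e^{-2t}$. Iterating the integral equation \eqref{Eq_mild} on a grid (Picard iteration, with the convolution computed by the trapezoidal rule) converges to this solution:

```python
import numpy as np

# Picard iteration for the mild formulation u(t) = T(t)u0 + int_0^t T(t-s) G(u(s)) ds,
# with the illustrative scalar choices T(t) = exp(-t) (i.e. L u = -u) and G(u) = -u,
# so that the exact solution of u' = -2u is u(t) = u0 * exp(-2t).
t = np.linspace(0.0, 1.0, 501)
h = t[1] - t[0]
u0 = 1.0
G = lambda u: -u

u = np.full_like(t, u0)                    # initial guess: the constant function u0
for _ in range(40):                        # Picard (fixed-point) iterations
    w = np.exp(t) * G(u)                   # T(t-s) G(u(s)) = e^{-t} * (e^{s} G(u(s)))
    integral = np.concatenate(([0.0], np.cumsum((w[1:] + w[:-1]) * h / 2.0)))
    u = np.exp(-t) * u0 + np.exp(-t) * integral

err = np.max(np.abs(u - u0 * np.exp(-2.0 * t)))
print(err)                                 # dominated by the trapezoidal quadrature error
```

The iteration contracts at the factorial rate $T^k/k!$ typical of Volterra equations; the residual error is set by the quadrature, not by the fixed-point iteration.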
Let $\{X_N: N \in \mathbb{N}\}$ be a sequence of subspaces of $X$ associated with {\it orthogonal projectors}
\begin{equation}
\Pi_N: X \rightarrow X_N,
\end{equation}
such that
\begin{equation}\label{Eq_identity_approx}
\|\Pi_N u - u\|_X\underset{N\rightarrow \infty}\longrightarrow 0, \qquad \text{ } \forall \: u \in X,
\end{equation}
and
\begin{equation}\label{Eq_XN_in_domain}
X_N\subset D(\mathcal{L}), \; \forall \, N.
\end{equation}
The corresponding Galerkin approximation of \eqref{ODE} associated with $X_N$ is then given by:
\begin{equation} \begin{aligned} \label{ODE_Galerkin}
\frac{\mathrm{d} u_N}{\mathrm{d} t} &= \mathcal{L}_N u_N + \Pi_N \mathcal{G}(u_N), \\
u_N(0) &= \Pi_N u_0, \; u_0\in X,
\end{aligned} \end{equation}
where
\begin{equation}\label{Def_LN}
\mathcal{L}_N := \Pi_N \mathcal{L} \Pi_N : X \rightarrow X_N.
\end{equation}
In particular, the domain
$D(\mathcal{L}_N)$ of $\mathcal{L}_N$ is $X$, because of \eqref{Eq_XN_in_domain}.
As we will see in Section \ref{Subsect_DDE_Galerkin}, the choice of the vectorized Koornwinder polynomials as basis functions will allow us
to define subspaces $X_N$ naturally associated with orthogonal projectors $\Pi_N$ that satisfy the above properties, in contrast with other polynomial bases used so far for (non-standard) Galerkin or Legendre-tau approximations of systems of DDEs; see e.g.~\cite{Ito_Teglas86,Kappel86}. See also Remark~\ref{Rmk_problems_to_overcome}-iii).
These nice properties will also allow us to rely on the following general convergence result regarding
{\it standard} Galerkin schemes in the case of nonlinear systems of DDEs; see Section \ref{Subsect_DDE_Galerkin} below.
\vspace{1ex}
\begin{thm} \label{ParisVI_thm}
Let $\mathcal{L}$ and $\{X_N\}_{N\geq 0}$ be as described above.
Assume furthermore the following set of assumptions:
\begin{itemize}
\item[{\bf (A1)}] For each $N\in \mathbb{N}$, the linear flow $e^{\mathcal{L}_N t}:X_N \rightarrow X_N$ extends to a $C_0$-semigroup $T_N(t)$ on $X$. Furthermore, the family $\{T_N(t)\}_{N\geq 0,\, t\geq0}$ satisfies the uniform bound
\begin{equation} \label{Eq_control_linearflow}
\quad \|T_N(t)\| \le M e^{\omega t}, \quad N\geq 0, \; \quad t \ge 0,
\end{equation}
where $\|T_N(t)\|=\sup\{\|T_N(t)x\|_X : x\in X, \; \|x\|_X\leq 1\}$.
\item[{\bf (A2)}]
The following convergence holds
\begin{equation} \label{Eq_L_Approx}
\lim_{N \rightarrow \infty} \|\mathcal{L}_N \phi - \mathcal{L} \phi \|_X = 0, \quad \text{ } \forall \: \phi \in D(\mathcal{L}).
\end{equation}
\item[{\bf (A3)}] $\mathcal{G}$ is globally Lipschitz.
\end{itemize}
Then, for any $u_0 \in X$, there exists a unique mild solution $u(\cdot\,; u_0)$ of \eqref{ODE}, and this solution can be approximated uniformly
on each bounded interval $[0, T]$ by the sequence $\{t\mapsto u_N(t; \Pi_N u_0)\}_{N\geq 0}$ of mild solutions of \eqref{ODE_Galerkin}, i.e.:
\begin{equation} \label{uniform_conv_est_ODE}
\lim_{N\rightarrow \infty} \sup_{t \in [0, T]} \|u_N(t; \Pi_N u_0) - u(t; u_0)\|_X = 0, \qquad \text{ } \forall \: T > 0.
\end{equation}
\end{thm}
\vspace{1ex}
\begin{proof}
Recall that the existence and uniqueness of solutions to Eq.~\eqref{Eq_mild} emanating from any initial datum $u_0 \in X$ can be proved by a fixed-point argument in $C([0,T],X)$ as in the proof of e.g.~\cite[Prop.~4.3.3]{Cazenave_al98}, by relaxing the semigroup-of-contractions requirement therein to the $C_0$-semigroup setting adopted here; see also
\cite[Thm.~6.1.1]{Lunardi04}.
Given $u_0\in X$, let $u$ thus denote the unique mild solution of Eq.~\eqref{ODE}.
On the other hand, the variation-of-constants formula applied to Eq.~\eqref{ODE_Galerkin} gives, for $0\leq t\leq T$,
\begin{equation} \begin{aligned}
u_N(t) = e^{\mathcal{L}_N t} \Pi_N u_0 + \int_0^t e^{\mathcal{L}_N (t -s )} \Pi_N \mathcal{G}(u_N(s)) \mathrm{d} s.
\end{aligned} \end{equation}
Then $v_N(t)=u(t) - u_N(t)$ satisfies
\begin{equation} \begin{aligned} \label{eq:residual}
v_N(t)&= T(t) u_0 - e^{\mathcal{L}_N t} \Pi_N u_0 + \int_0^t T(t-s) \mathcal{G}(u(s)) \mathrm{d} s -
\int_0^t e^{\mathcal{L}_N (t -s )} \Pi_N \mathcal{G}(u_N(s)) \mathrm{d} s \\
& = T(t) u_0 - e^{\mathcal{L}_N t} \Pi_N u_0 +
\int_0^t \big( T(t-s) - e^{\mathcal{L}_N (t -s )} \Pi_N \big) \mathcal{G}(u(s)) \mathrm{d} s \\
& \hspace{10em} + \int_0^t e^{\mathcal{L}_N (t -s )} \Pi_N \big( \mathcal{G}(u(s)) - \mathcal{G}(u_N(s)) \big ) \mathrm{d} s.
\end{aligned} \end{equation}
Let us introduce
\begin{equation} \begin{aligned}
r_N(s) & := \|u(s) - u_N(s) \|_X, \\
\epsilon_N(u_0) & := \sup_{t\in[0, T]} \|T(t) u_0 - e^{\mathcal{L}_N t} \Pi_N u_0\|_X, \\
d_N(s) & := \sup_{t\in[s, T]} \| \big( T(t-s) - e^{\mathcal{L}_N (t -s )} \Pi_N \big) \mathcal{G}(u(s)) \|_X.
\end{aligned} \end{equation}
We obtain then from \eqref{eq:residual} that
\begin{equation} \begin{aligned}
r_N(t) & \le \epsilon_N(u_0) + \int_0^t d_N(s) \mathrm{d} s + \int_0^t \|e^{\mathcal{L}_N (t -s )} \Pi_N \big( \mathcal{G}(u(s)) - \mathcal{G}(u_N(s)) \big )\|_X \mathrm{d} s \\
& \le \epsilon_N(u_0) + \int_0^t d_N(s) \mathrm{d} s + M \mathrm{Lip}(\mathcal{G}) \int_0^t e^{\omega (t -s )} r_N(s) \mathrm{d} s \\
& \le \epsilon_N(u_0) + \int_0^T d_N(s) \mathrm{d} s + M \mathrm{Lip}(\mathcal{G}) e^{\omega T} \int_0^t r_N(s) \mathrm{d} s,
\end{aligned} \end{equation}
where we have used the global Lipschitz condition on $\mathcal{G}$ and \eqref{Eq_control_linearflow} to derive the second inequality.
It follows then from Gronwall's inequality that
\begin{equation} \label{rN_est}
r_N(t) \le \Big( \epsilon_N(u_0) + \int_0^T d_N(s) \mathrm{d} s \Big) e^{M \mathrm{Lip}(\mathcal{G}) T e^{\omega T} }, \qquad \text{ } \forall \: t \in [0, T].
\end{equation}
We are thus left with the estimation of $\epsilon_N(u_0)$ and $\int_0^T d_N(s) \mathrm{d} s $ as $N \rightarrow \infty$.
The assumptions {\bf (A1)}--{\bf (A2)} allow us to use a version of the Trotter-Kato theorem \cite[Thm.~4.5, p.~88]{Pazy83}\footnote{Recall that because $\mathcal{L}$ is the generator of a $C_0$-semigroup $T(t)$ on $X$, it satisfies
$\|T(t)\| \leq Me^{\omega t}$, and as a consequence the resolvent set of $\mathcal{L}$ contains the interval $]\omega,\infty[$; see \cite[Thm.~5.3, p.~20]{Pazy83}. In particular, for any $f\in X$ and any $\lambda >\omega$, the equation $(\lambda I -\mathcal{L}) x=f$
possesses a unique solution $x \in D(\mathcal{L})$, which implies in particular that $(\lambda I -\mathcal{L}) D(\mathcal{L})$ is dense in $X$, as required by the version of the Trotter-Kato theorem used here. This explains why this density requirement, a consequence of our working assumptions, is omitted from the statement of Theorem \ref{ParisVI_thm}.}
which implies
together with \eqref{Eq_identity_approx} that
\begin{equation}
\lim_{N\rightarrow \infty} e^{\mathcal{L}_N t} \Pi_N \phi = T(t) \phi, \quad \text{ } \forall \: \phi \in X,
\end{equation}
uniformly for $t$ in bounded intervals.
It follows that
\begin{equation} \label{epsN_est}
\lim_{N\rightarrow \infty} \epsilon_N(u_0) = 0, \quad \text{ } \forall \: u_0 \in X,
\end{equation}
and that $d_N$ converges pointwise to zero on $[0,T]$, i.e.
\begin{equation} \label{dN_est1}
\lim_{N\rightarrow \infty} d_N(s) = 0, \quad \text{ } \forall \: s \in [0, T].
\end{equation}
On the other hand, from \eqref{Eq_control_T_t}, \eqref{Eq_control_linearflow}, and {\bf (A3)}, we get
\begin{equation} \begin{aligned}
\| \big( T(t-s) - e^{\mathcal{L}_N (t -s )} \Pi_N \big) &\mathcal{G}(u(s)) \|_X \le 2 M e^{\omega (t-s)} \|\mathcal{G}(u(s)) \|_X \\
& \le 2 M e^{\omega (t-s)} \big(\mathrm{Lip}(\mathcal{G}) \|u(s)\|_X + \|\mathcal{G}(0)\|_X\big),
\end{aligned} \end{equation}
which implies
\begin{equation} \label{dN_est2}
d_N(s) \le 2 M e^{\omega T} \big(\mathrm{Lip}(\mathcal{G}) \|u(s)\|_X+ \|\mathcal{G}(0)\|_X\big), \quad \text{ } \forall \: s \in [0, T].
\end{equation}
Since $u\in C([0,T],X)$, the map $s\mapsto \| u(s)\|_X$ is integrable on $[0,T]$, and Lebesgue's dominated convergence theorem then allows us to conclude from \eqref{dN_est1} and \eqref{dN_est2} that
\begin{equation} \label{dN_est3}
\lim_{N\rightarrow \infty} \int_{0} ^T d_N(s) \mathrm{d} s = 0.
\end{equation}
The desired uniform convergence estimate \eqref{uniform_conv_est_ODE} is then trivially obtained from \eqref{rN_est}.
\end{proof}
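The mechanism of the proof can be observed on a finite-dimensional toy model; all choices below are illustrative and not taken from the paper. Take $\mathbb{R}^M$ with $M$ large as a proxy for $X$, a diagonal generator $\mathcal{L}$ with spectrum $\{-1,\dots,-M\}$ (so the semigroup bound \eqref{Eq_control_T_t} holds with constants $1$ and $\omega=0$), a globally Lipschitz coupling nonlinearity $\mathcal{G}$, and $X_N$ the span of the first $N$ coordinates. The sup-in-time Galerkin errors of \eqref{uniform_conv_est_ODE} then decrease as $N$ grows:

```python
import numpy as np

# Toy illustration of the theorem: X ~ R^M with a diagonal, dissipative L
# (eigenvalues -1, ..., -M), a globally Lipschitz coupling nonlinearity G,
# and Galerkin subspaces X_N spanned by the first N coordinates.
M = 64
lam = -np.arange(1, M + 1, dtype=float)
G = lambda u: 0.1 * np.sin(np.roll(u, -1))      # Lipschitz constant 0.1
u0 = 1.0 / np.arange(1, M + 1, dtype=float)

def integrate(mask, T=1.0, steps=2000):
    """RK4 for u' = Pi_N (lam*u + G(Pi_N u)) starting from Pi_N u0 (mask = Pi_N)."""
    dt = T / steps
    u = mask * u0
    traj = [u.copy()]
    f = lambda v: mask * (lam * v + G(mask * v))
    for _ in range(steps):
        k1 = f(u); k2 = f(u + dt/2*k1); k3 = f(u + dt/2*k2); k4 = f(u + dt*k3)
        u = u + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        traj.append(u.copy())
    return np.array(traj)

ref = integrate(np.ones(M))                     # full system serves as the reference
errs = []
for N in (4, 8, 16):
    mask = (np.arange(M) < N).astype(float)     # orthogonal projector Pi_N
    errs.append(np.max(np.linalg.norm(integrate(mask) - ref, axis=1)))
print(errs)                                     # sup-in-time errors decrease with N
```

Here the initial error $\|(I-\Pi_N)u_0\|$ already dominates, which is consistent with the role of $\epsilon_N(u_0)$ in the estimate \eqref{rN_est}.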
\subsection{Galerkin approximations of nonlinear systems of DDEs}\label{Subsect_DDE_Galerkin}
In this section, given the Hilbert product space
\begin{equation*}
\mathcal{H}:=L^2([-\tau, 0); \mathbb{R}^d)\times \mathbb{R}^d, \;\; d\geq 1,
\end{equation*}
endowed with the inner product \eqref{H_inner}, we restrict our attention to the following abstract ODE:
\begin{equation} \begin{aligned} \label{Eq_abstract_ODE_DDE}
\frac{\mathrm{d} u}{\mathrm{d} t} & = \mathcal{A} u + \mathcal{F}(u), \\
\end{aligned} \end{equation}
where $\mathcal{F}$ is a nonlinear operator that will be specified later on, and where
\textemdash\, given a bounded linear operator $L_D$ from $H^1([-\tau, 0); \mathbb{R}^d)$ to $\mathbb{R}^d$ and a bounded linear
operator $L_S$ from $\mathbb{R}^d$ to $\mathbb{R}^d$ \textemdash\, the linear operator $\mathcal{A}$
is given by
\begin{equation} \begin{aligned} \label{Def_A2}
\lbrack \mathcal{A} \Psi \rbrack (\theta) & := \begin{cases}
{\displaystyle \frac{\mathrm{d}^+ \Psi^D}{\mathrm{d} \theta}}, & \theta \in[-\tau, 0), \vspace{0.4em}\\
{\displaystyle L_S\Psi^S + L_D\Psi^D}, & \theta = 0,
\end{cases}
\end{aligned} \end{equation}
for any $\Psi = (\Psi^D, \Psi^S)$ that lives in the domain, $D(\mathcal{A})$, defined as
\begin{equation} \label{D_of_A2}
D(\mathcal{A}): = \Big \{\Psi \in \mathcal{H} : \Psi^D \in H^1([-\tau, 0); \mathbb{R}^d), \lim_{\theta \rightarrow 0^-}\Psi^D(\theta) =\Psi^S
\Big \}.
\end{equation}
Such an abstract setting arises naturally in the reformulation of a broad class of nonlinear systems of DDEs
as an abstract ODE in $\mathcal{H}$; see e.g.~\cite{burns1983linear,curtain1995}. Examples of operators $L_D$ depending explicitly on the delay $\tau$ are given below; see \eqref{Def_LD}.
It is well known that, under these assumptions, the operator $\mathcal{A}$ generates a $C_0$-semigroup on $\mathcal{H}$ \cite[Thm.~2.3]{burns1983linear}; in particular, $D(\mathcal{A})$ is dense in $\mathcal{H}$ and $\mathcal{A}$ is a closed operator.
We turn now to the definition, in the present DDE context, of the subspaces $X_N$ and orthogonal projectors $\Pi_N$ of the previous section.
For each positive integer $N$, we define the $Nd$-dimensional subspace $\mathcal{H}_{N} \subset \mathcal{H}$ to be spanned by the first $Nd$ vectorized Koornwinder polynomials introduced in \eqref{Eq_superKoor}, namely
\begin{equation} \label{subspace_HNd}
\mathcal{H}_{N} = \mathrm{span} \Big \{ \mathbb{K}^\tau_{1}, \cdots, \mathbb{K}^\tau_{Nd} \Big\}.
\end{equation}
As noted in Section \ref{Sec_Vectorization}, these polynomials are orthogonal with respect to the inner product with a point mass defined in \eqref{H_inner}.
The subspace $\mathcal{H}_{N}$ thus comes naturally equipped with an orthogonal projector $\Pi_N$, as required in the previous section.
The approximation property \eqref{Eq_identity_approx} is satisfied due to the density arguments outlined in Appendix~\ref{sect:thm_Pn_proof}.
Recall finally that by construction $\mathbb{K}^\tau_{j} \in D(\mathcal{A})$ for any $j\in \mathbb{N}^*$, and therefore
\begin{equation}\label{Eq_inclusion}
\mathcal{H}_{N}\subset D(\mathcal{A}).
\end{equation}
The corresponding $Nd$-dimensional Galerkin approximation of Eq.~\eqref{Eq_abstract_ODE_DDE} then reads:
\begin{equation}\label{Eq_DDE_Galerkin}
\frac{\mathrm{d} u_N}{\mathrm{d} t} = \mathcal{A}_N u_N + \Pi_N \mathcal{F}(u_N),
\end{equation}
with
\begin{equation}\label{Eq_AN}
\mathcal{A}_N := \Pi_N \mathcal{A} \Pi_N,
\end{equation}
which is therefore well defined on $\mathcal{H}$ because of \eqref{Eq_inclusion}.
We are now in a position to check Conditions {\bf (A1)} and {\bf (A2)} of Theorem \ref{ParisVI_thm}.
To check Condition {\bf (A1)}, we will make use of the following extension of the linear flow $e^{\mathcal{A}_N t}$:
\begin{equation}\label{Eq_extension}
T_N(t) u=e^{\mathcal{A}_N t} \Pi_N u +(I-\Pi_N) u, \; u\in \mathcal{H}.
\end{equation}
Such an extension naturally defines a $C_0$-semigroup on $\mathcal{H}$.
The stability condition \eqref{Eq_control_linearflow} will, however, require the operator $L_D$ to be further specified; this is done below.
Condition {\bf (A2)}, on the other hand, can be checked in the general setting by an appropriate use of the properties
of the vectorized Koornwinder polynomials summarized in Lemma \ref{Super_Fundamental_lemma}. More precisely,
\begin{lem} \label{Lem_A2}
Let $\mathcal{H}_N$ be the subspace defined in \eqref{subspace_HNd}. Then for $\mathcal{A}$ defined in \eqref{Def_A2} and $\mathcal{A}_N$ defined in \eqref{Eq_AN} associated with the orthogonal projector $\Pi_N$ onto $\mathcal{H}_N$, we have
\begin{equation}
\lim_{N \rightarrow \infty} \|\mathcal{A}_{N} \Psi - \mathcal{A} \Psi \|_\mathcal{H} = 0, \qquad \text{ } \forall \: \Psi \in D(\mathcal{A}).
\end{equation}
\end{lem}
\begin{proof}
By construction $\mathbb{K}^\tau_{j} \in D(\mathcal{A})$ for each $j\in \mathbb{N}^*$. Since $\big\{ \frac{\mathbb{K}^\tau_{j}}{\|\mathbb{K}^\tau_{j}\|_\mathcal{H}} : j \in \mathbb{N}^*\big\}$ forms a Hilbert basis of $\mathcal{H}$, it suffices to show that
\begin{equation} \label{Goal_convergence}
\lim_{N \rightarrow \infty} \|\mathcal{A}_{N} \Psi - \mathcal{A} \Psi \|_\mathcal{H} = 0, \;\; \Psi \in \underset{k\geq 1}\bigcup \mathcal{H}_k.
\end{equation}
We recall from \eqref{Eq_norm_superKoor} that $\|\mathbb{K}^\tau_{j}\|_{ \mathcal{H}} = \|\mathcal{K}^\tau_{j_q}\|_{\mathcal{H}_1}$ for all $j \in \mathbb{N}^*$. It then follows that the orthogonal projector $\Pi_{N}$ associated with the subspace $\mathcal{H}_{N}$ takes the following explicit form:
\begin{equation} \begin{aligned} \label{DDE_projector}
\Pi_{N} \Psi & = \sum_{j = 1}^{Nd} \frac{ \big \langle \Psi, \mathbb{K}^\tau_{j} \big \rangle_{\mathcal{H}}}{\|\mathcal{K}^\tau_{j_q}\|_{\mathcal{H}_1}^2} \mathbb{K}^\tau_{j} \\
& = \sum_{j = 1}^{Nd} \Big(\frac{1}{\tau}\langle \Psi^D, \mathbf{K}_j^\tau \rangle_{L^2}+ \langle \Psi^S, \mathbf{K}_j^\tau(0) \rangle \Big) \frac{\mathbb{K}_j^\tau }{\|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2} \\
& = \begin{pmatrix}
p_N & q_N \\
p'_N & q'_N
\end{pmatrix}
\begin{pmatrix}
\Psi^D \\
\Psi^S
\end{pmatrix},
\end{aligned} \end{equation}
where the operators $p_N, p'_N, q_N, q'_N$ are defined as follows:
\begin{subequations}
\begin{eqnarray}
\hspace{-3em} p_N: L^2([-\tau, 0]; \mathbb{R}^d) \rightarrow L^2([-\tau, 0]; \mathbb{R}^d), & &
\Psi^D \mapsto \sum_{j = 1}^{Nd} \frac{\langle \Psi^D, \mathbf{K}_j^\tau \rangle_{L^2}}{\tau \|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2} \mathbf{K}_j^\tau; \label{Def_pN} \\
p'_N: L^2([-\tau, 0]; \mathbb{R}^d) \rightarrow \mathbb{R}^d, & &
\Psi^D \mapsto \sum_{j = 1}^{Nd} \frac{\langle \Psi^D, \mathbf{K}_j^\tau \rangle_{L^2}}{ \tau \|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2} \mathbf{K}_j^\tau(0); \label{Def_pN'} \\
q_N: \mathbb{R}^d \rightarrow L^2([-\tau, 0]; \mathbb{R}^d), & &
\Psi^S \mapsto \sum_{j = 1}^{Nd} \frac{ \langle \Psi^S, \mathbf{K}_j^\tau(0) \rangle}{\|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2} \mathbf{K}_j^\tau; \label{Def_qN}\\
q'_N: \mathbb{R}^d \rightarrow \mathbb{R}^d, & &
\Psi^S \mapsto \sum_{j = 1}^{Nd} \frac{ \langle \Psi^S, \mathbf{K}_j^\tau(0) \rangle}{\|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2}\mathbf{K}_j^\tau(0). \label{Def_qN'}
\end{eqnarray}
\end{subequations}
In the following, we fix an arbitrary $\Phi \in \mathcal{H}_k$ for some integer $k>0$ and choose $N$ such that $N \ge k$; then $\mathcal{H}_k \subset \mathcal{H}_N$, so that $\Pi_{N} \Phi = \Phi$, and we get for each such $N$
\begin{equation}\label{Eq_abstract_AN}
\mathcal{A}_{N} \Phi = \Pi_{N} \mathcal{A} \Pi_{N} \Phi
= \Pi_{N} \mathcal{A} \Phi = \begin{pmatrix}
p_N \frac{\mathrm{d}^+}{\mathrm{d} \theta}\Phi^D + q_N (L_S \Phi^S + L_D \Phi^D) \vspace{1em}\\
p'_N \frac{\mathrm{d}^+}{\mathrm{d} \theta}\Phi^D + q'_N (L_S \Phi^S + L_D \Phi^D)
\end{pmatrix}.
\end{equation}
We obtain then
\begin{equation} \label{eq:AN_residual}
(\mathcal{A} - \mathcal{A}_{N}) \Phi = \begin{pmatrix}
(I^D - p_N) \frac{\mathrm{d}^+}{\mathrm{d} \theta}\Phi^D - q_N \big(L_S \Phi^S + L_D \Phi^D\big) \vspace{1em}\\
- p'_N \frac{\mathrm{d}^+}{\mathrm{d} \theta}\Phi^D + (I^S - q'_N) \big(L_S \Phi^S + L_D \Phi^D \big)
\end{pmatrix}, \; \text{ if } N \ge k,
\end{equation}
where $I^D$ and $I^S$ denote the identity maps on $L^2([-\tau, 0]; \mathbb{R}^d)$ and $\mathbb{R}^d$, respectively.
We show below that each entry of the RHS of \eqref{eq:AN_residual} converges to zero. Let us begin with the term $(I^D - p_N) \frac{\mathrm{d}^+}{\mathrm{d} \theta}\Phi^D$. By comparing the definition of $p_N$ given by \eqref{Def_pN} with the decomposition \eqref{L2_decomp} of functions in $L^2([-\tau, 0]; \mathbb{R}^d)$, we see that for each $f \in L^2([-\tau,0]; \mathbb{R}^d)$, the term $p_N f$ is the partial sum of the first $Nd$ terms of the corresponding decomposition. It then follows that
\begin{equation} \label{Eq_pN_residual}
\lim_{N \rightarrow \infty} \|(I^D - p_N) f\|_{L^2} = 0, \quad \text{ } \forall \: f \in L^2([-\tau,0]; \mathbb{R}^d).
\end{equation}
Since $\Phi \in \mathcal{H}_k \subset D(\mathcal{A})$, we have $\frac{\mathrm{d}^+}{\mathrm{d} \theta}\Phi^D \in L^2([-\tau,0]; \mathbb{R}^d)$. We then obtain from \eqref{Eq_pN_residual} that
\begin{equation} \label{pN_est}
\lim_{N \rightarrow \infty}\Big\|(I^D - p_N) \Big(\frac{\mathrm{d}^+}{\mathrm{d} \theta}\Phi^D \Big) \Big\|_{L^2} = 0.
\end{equation}
\medskip
We turn now to the estimates for $q_N (L_S \Phi^S + L_D \Phi^D)$. By the definition of $q_N$ in \eqref{Def_qN}, we get
\begin{equation}
q_N \big(L_S \Phi^S + L_D \Phi^D \big) = \sum_{j = 1}^{Nd} \frac{ \langle L_S \Phi^S + L_D \Phi^D, \mathbf{K}_j^\tau(0)\rangle}{\|\mathcal{K}_{j_q}^\tau \|_{\mathcal{H}_1}^2} \mathbf{K}_j^\tau.
\end{equation}
It follows then from the identity \eqref{Eq_identity1} that
\begin{equation} \label{qN_est}
\lim_{N \rightarrow \infty}\Big\|q_N \big( L_S \Phi^S + L_D \Phi^D \big)\Big\|_{L^2} = 0.
\end{equation}
For the term $p'_N \frac{\mathrm{d}^+}{\mathrm{d} \theta}\Phi^D$, since $\frac{\mathrm{d}^+}{\mathrm{d} \theta}\Phi^D \in L^2([-\tau,0); \mathbb{R}^d)$, it follows from the definition of $p'_N$ given in \eqref{Def_pN'} and the identity \eqref{Eq_identity3} that
\begin{equation} \label{pN'_est}
\lim_{N \rightarrow \infty} \Big| p'_N \frac{\mathrm{d}^+}{\mathrm{d} \theta}\Phi^D\Big| = 0,
\end{equation}
where $\vert\cdot\vert$ denotes the Euclidean norm of $\mathbb{R}^d$.
By using the identity \eqref{Eq_identity2}, we also get
\begin{equation} \label{qN'_est}
\lim_{N \rightarrow \infty} \big| (I^S - q'_N) \big(L_S \Phi^S + L_D \Phi^D \big) \big| = 0.
\end{equation}
Now, by using the estimates \eqref{pN_est}, \eqref{qN_est}, \eqref{pN'_est}, and \eqref{qN'_est}, we get from \eqref{eq:AN_residual} that
\begin{equation}
\lim_{N \rightarrow \infty} \| (\mathcal{A} - \mathcal{A}_{N}) \Phi \|_\mathcal{H} = 0,
\end{equation}
and \eqref{Goal_convergence} follows.
\end{proof}
\vspace{1ex}
\begin{rem}\label{PDE_rem}
We explain here how the truncated linear operator $\mathcal{A}_N$ defined in \eqref{Eq_abstract_AN} is related to an interesting class of nonlocal linear PDEs. For the sake of clarity, we discuss the case $d=1$.
For convenience, let us write $v(\theta)=\Phi^D(\theta)$ and, recalling e.g.~Eq.~\eqref{lin_PDE} in the Introduction, replace $\mathrm{d}^+ v/\mathrm{d} \theta$ by $v_{\theta}$ in \eqref{Eq_abstract_AN}.
One then obtains that, when $v\in \mathcal{H}_N$,
\begin{equation}\label{Eq_abstract_AN2}
\begin{pmatrix}
p_N v_{\theta}\\
p'_N v_{\theta}
\end{pmatrix}=\begin{pmatrix}
v_{\theta}\\
v_{\theta}(0)
\end{pmatrix}-\sum_{n=0}^{N-1} \frac{v_{\theta}(0)}{\|\mathcal{K}^\tau_n\|_{\mathcal{H}}^2}\mathcal{K}^\tau_n.
\end{equation}
Next, we use the expressions of $\mathcal{A}_N$, $p_N$ and $q_N$ --- given, respectively, in \eqref{Eq_abstract_AN}, \eqref{Def_pN} and \eqref{Def_qN}
--- to note that the $D$-component
$v$ of any solution $u$ of
\begin{equation*}
\frac{\mathrm{d} u} {\mathrm{d} t} = \mathcal{A}_N u,
\end{equation*}
which emanates from initial data
taken\footnote{For such initial data, the solution stays in $\mathcal{H}_N$, by the definition of $\mathcal{A}_N$.} in $\mathcal{H}_N$, satisfies the following {\it nonlocal linear PDE}:
\begin{equation}\label{nonlocal_PDE}
\partial_t v =\partial_{\theta} v +b_N(\theta)\Big(L_S v(t,0) + L_D v- \partial_{\theta} v\big\vert_{\theta=0}\Big);
\end{equation}
here
\begin{equation}\label{bN-coeff}
\displaystyle b_N(\theta)=\sum_{n=0}^{N-1} \frac{K_n^\tau (\theta)}{\|\mathcal{K}_n^{\tau}\|_{\mathcal{H}}^2},
\end{equation}
and the rescaled Koornwinder polynomials $K_n^\tau$
are given by \eqref{eq:Pn_tilde}, since $d=1$.
We thus see that the Galerkin scheme used herein introduces a
nonlocal perturbation term with respect to the $D$-component of $\mathcal{A}$ given in
\eqref{Def_A2}. This perturbation term results from the difference, when applied to functions in $\mathcal{H}_N$, between the $S$-component of $\mathcal{A}$ and the derivative at $\theta = 0$.
From Lemma \ref{Lem_A2} above and Lemmas \ref{Lemma_stability_prep} and \ref{Lem_A1} below, one can infer by the Trotter-Kato theorem that the effects on the solutions of Eq.~\eqref{nonlocal_PDE} of this
nonlocal perturbation --- which vanishes as $N\rightarrow \infty$, due to \eqref{Eq_identity1_0} --- do not lead to a degenerate situation and that actually these solutions converge
over finite intervals to those of the local PDE, $\partial_t v=\partial_\theta v$. This nice convergence is due to the key properties of the Koornwinder polynomials, as summarized in Lemma \ref{Fundamental_lemma} for $d=1$, and in Lemma \ref{Super_Fundamental_lemma} for the multidimensional case;
see also Fig.~\ref{fig:Cancel_prop} for the nature of the approximation at $\theta = 0$.
To conclude this remark, we return now to the discussion in the Introduction concerning the approximation of discontinuities that arise, for instance, in the first-order derivative of a DDE's solution, cf.~\cite[Appendix A and references therein]{GZT08}.
In Table \ref{table}, we report the corresponding
differences over the interval $[0,2]$
between the analytic solution of Eq.~\eqref{lin_case} with $\tau=1$ and history $x(t)\equiv 1$,
on the one hand, and low-dimensional Galerkin approximations obtained by application of the formulas derived hereafter in Section~\ref{Sec_Galerkin_analytic}, on the other.
\begin{table}[h]
\caption{Errors in Galerkin approximation of Eq.~\eqref{lin_case} \label{table}}
\centering
\begin{tabular}{ccc}
\toprule\noalign{\smallskip}
$N$ & Max.~error in Galerkin solution & Max.~error in 1$^{\textrm{st}}$-order derivative\\
& & of Galerkin solution\\
\noalign{\smallskip}\hline\noalign{\smallskip}
4 & $6.9 \times 10^{-3}$ & $5.9\times 10^{-2}$\\
8 & $9.3\times 10^{-4}$ & $2.2\times 10^{-2}$\\
16 & $2.3\times 10^{-4}$ & $1.1\times 10^{-2}$\\
32 & $9.0\times 10^{-5}$ & $5.4\times 10^{-3}$\\
\noalign{\smallskip} \bottomrule
\end{tabular}
\end{table}
The second column of this table is consistent with the rigorous convergence result of Corollary \ref{Cor_DDE_global_Lip} proved below. The third column shows that, furthermore, the aforementioned discontinuities present in the derivative of the DDE's solution are well captured by the proposed methodology as well.
\qed
\end{rem}
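Eq.~\eqref{lin_case} is not restated in this section. As a stand-in for the discussion of derivative discontinuities, consider the classical linear DDE $\dot x(t) = -x(t-1)$ with constant history $x\equiv 1$; this is an assumed illustrative example, not necessarily the equation behind Table \ref{table}. The method of steps gives $x(t)=1-t$ on $[0,1]$ and $x(t)=1-t+(t-1)^2/2$ on $[1,2]$, so the right derivative jumps from $0$ to $-1$ at $t=0$: precisely the kind of kink that the third column of Table \ref{table} measures. A minimal sketch:

```python
from numpy.polynomial import Polynomial

# Method of steps for the stand-in linear DDE  x'(t) = -x(t - 1),  x(t) = 1 on [-1, 0].
# On each interval [k, k+1] the delayed term is known, so the DDE reduces to a
# quadrature; every segment is stored as a polynomial in the local variable s in [0, 1].
def solve_steps(n_intervals=2):
    segs = [Polynomial([1.0])]                  # history segment: x = 1 on [-1, 0]
    for _ in range(n_intervals):
        rhs = -segs[-1]                         # x'(t) = -x(t-1), same local variable s
        seg = rhs.integ()                       # antiderivative (integration constant 0)
        seg = seg + (segs[-1](1.0) - seg(0.0))  # continuity at the left endpoint
        segs.append(seg)
    return segs

segs = solve_steps()
x1, x2 = segs[1], segs[2]                       # x on [0, 1] and on [1, 2], local coords
print(x1(0.5), x2(1.0))                         # x(0.5) = 0.5 and x(2) = -0.5
print(segs[0].deriv()(1.0), x1.deriv()(0.0))    # derivative jumps from 0 to -1 at t = 0
```

Each segment is smooth while the derivative jump propagates (with increasing smoothness) to the integers $t=1,2,\dots$, which is why a spectral basis on the history interval must cope with these kinks.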
\medskip
In the following, we restrict the linear operator $\mathcal{A}$ defined in \eqref{Def_A2} by requiring that
$L_S: \mathbb{R}^d \rightarrow \mathbb{R}^d$ be a bounded linear operator and that $L_D$ be defined by
\begin{equation} \begin{aligned} \label{Def_LD}
L_D : H^1([-\tau,0); \mathbb{R}^d) & \rightarrow \mathbb{R}^d, \\
\Psi^D & \mapsto B \Psi^D(-\tau) + \int_{-\tau}^0 C(s) \Psi^D(s) \mathrm{d} s,
\end{aligned} \end{equation}
with $B: \mathbb{R}^d \rightarrow \mathbb{R}^d$ being a bounded linear operator\footnote{Note that $\Psi^D(-\tau)$ is well-defined for $\Psi^D\in H^1([-\tau,0); \mathbb{R}^d)$, since the latter Sobolev space is continuously embedded in the space of continuous functions $C([-\tau,0]; \mathbb{R}^d)$; see \cite[Thm.~8.8]{brezis_book}.}, and $C(\cdot) \in L^2([-\tau, 0); \mathbb{R}^{d\times d})$.
In the following preparatory lemma, Lemma \ref{Lemma_stability_prep}, we recall by means of basic estimates that there exists $\omega>0$ such that $\mathcal{A}-\omega I$ is dissipative in $\mathcal{H}$. This fact will be used to establish a stability condition of type \eqref{Eq_control_linearflow} (with $M=1$) for the semigroups $T(t)$ and $T_N(t)$ generated, respectively, by $\mathcal{A}$ and its finite-dimensional approximation $\mathcal{A}_N$; see Lemma \ref{Lem_A1}. The proofs of these lemmas are quite straightforward but are reproduced below for the sake of completeness.
\begin{lem} \label{Lemma_stability_prep}
Let $\mathcal{A}$ be defined as in \eqref{Def_A2}, with $L_D$ as specified in \eqref{Def_LD} and $L_S: \mathbb{R}^d \rightarrow \mathbb{R}^d$ a bounded linear operator. Then
\begin{equation}
\langle \mathcal{A} \Psi, \Psi \rangle_{\mathcal{H}} \le \omega \|\Psi\|_{\mathcal{H}}^2, \quad \forall\; \Psi \in D(\mathcal{A}),
\end{equation}
with\footnote{Throughout this article, we will denote indistinguishably by $|\cdot|$, either the Euclidean norm of a vector in $\mathbb{R}^d$, or its subordinated (operator) norm, in the case of a $d \times d$ matrix. It should be clear from the context which norm is used.}
\begin{equation} \label{omega}
\omega = \Big(1 + \frac{1}{2 \tau} + |L_S| + \frac{\tau}{2} |B|^2 + \frac{\tau}{4} \|C\|^2_{L^2} \Big).
\end{equation}
\end{lem}
\begin{proof}
Let $\Psi \in D(\mathcal{A})$, then by the definition of $ \mathcal{A}$ given in \eqref{Def_A2}, we have
\begin{equation} \label{Stab_est0}
\langle \mathcal{A} \Psi, \Psi \rangle_{\mathcal{H}} = \frac{1}{\tau} \int_{-\tau}^0 \Big\langle \frac{\mathrm{d}^+ \Psi^D}{\mathrm{d} \theta}(\theta), \Psi^D(\theta) \Big\rangle \mathrm{d} \theta + \langle L_S \Psi^S, \Psi^S \rangle + \langle L_D \Psi^D, \Psi^S \rangle.
\end{equation}
Note that
\begin{equation} \begin{aligned} \label{Stab_est1}
\frac{1}{\tau} \int_{-\tau}^0 \Big\langle \frac{\mathrm{d}^+ \Psi^D}{\mathrm{d} \theta}(\theta), \Psi^D(\theta) \Big\rangle \mathrm{d} \theta & = \frac{1}{2 \tau} \int_{-\tau}^{0}
\frac{\mathrm{d}}{\mathrm{d} \theta} |\Psi^D(\theta)|^2 \, \mathrm{d} \theta \\
& = \frac{1}{2 \tau} \Big(|\Psi^S|^2 - |\Psi^D(-\tau)|^2 \Big).
\end{aligned} \end{equation}
Using the definition of $L_D$ given in \eqref{Def_LD}, we obtain
\begin{equation} \begin{aligned}
\langle L_D \Psi^D, \Psi^S \rangle
& = \Big\langle B \Psi^D(-\tau) + \int_{-\tau}^0 C(\theta) \Psi^D(\theta) \mathrm{d} \theta, \Psi^S \Big\rangle \\
& \le |B| |\Psi^D(-\tau)| | \Psi^S| + \|C\|_{L^2} \|\Psi^D\|_{L^2}|\Psi^S|.
\end{aligned} \end{equation}
It follows then from Young's inequality that
\begin{equation} \begin{aligned} \label{Stab_est2}
\langle L_D \Psi^D, \Psi^S \rangle & \le \Big( \frac{1}{2 \tau} |\Psi^D(-\tau)|^2 + \frac{\tau}{2} |B|^2 | \Psi^S|^2 \Big) \\
& \quad + \Big( \frac{1}{\tau}\|\Psi^D\|_{L^2}^2
+ \frac{\tau}{4} \|C\|^2_{L^2} |\Psi^S|^2 \Big).
\end{aligned} \end{equation}
Note also that
\begin{equation} \label{Stab_est3}
\langle L_S \Psi^S, \Psi^S \rangle \le |L_S| |\Psi^S|^2.
\end{equation}
Now, by using \eqref{Stab_est1}, \eqref{Stab_est2}, and \eqref{Stab_est3} in \eqref{Stab_est0}, we get
\begin{equation} \begin{aligned}
\langle \mathcal{A} \Psi, \Psi \rangle_{\mathcal{H}} & \le \Big( \frac{1}{2 \tau} + |L_S| + \frac{\tau}{2}|B|^2 + \frac{\tau}{4} \|C\|^2_{L^2} \Big) |\Psi^S|^2 + \frac{1}{\tau}\|\Psi^D\|_{L^2}^2 \\
& \le \Big(1 + \frac{1}{2 \tau} + |L_S| + \frac{\tau}{2}|B|^2 + \frac{\tau}{4} \|C\|^2_{L^2} \Big) \Big(| \Psi^S|^2 + \frac{1}{\tau}\|\Psi^D\|_{L^2}^2 \Big) \\
& = \Big(1 + \frac{1}{2 \tau} + |L_S| + \frac{\tau}{2}|B|^2 + \frac{\tau}{4} \|C\|^2_{L^2} \Big) \|\Psi\|_{\mathcal{H}}^2,
\end{aligned} \end{equation}
leading thus to the desired estimate.
\end{proof}
\begin{lem} \label{Lem_A1}
Let $\mathcal{A}$ be defined as in \eqref{Def_A2}, with $L_D$ as specified in \eqref{Def_LD} and $L_S: \mathbb{R}^d \rightarrow \mathbb{R}^d$ a bounded linear operator.
Then, the linear semigroups $T(t)$ and $T_N(t)$ generated respectively by $\mathcal{A}$ and $\mathcal{A}_N$ defined in \eqref{Eq_AN}, satisfy
\begin{equation}\label{stable_estimates}
\|T(t)\| \le e^{\omega t} \quad \text{ and } \quad \|T_N(t)\| \le e^{\omega t}, \qquad t \ge 0,
\end{equation}
with $\omega$ given by \eqref{omega}.
\end{lem}
\begin{proof}
Since $T(t)$ is a $C_0$-semigroup with infinitesimal generator $\mathcal{A}$, we have that $T(t) u_0 \in D(\mathcal{A})$ for all $u_0 \in D(\mathcal{A})$, and that
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d} t} T(t)u_0 = \mathcal{A} T(t) u_0, \qquad \text{ } \forall \: u_0 \in D(\mathcal{A}), \; t \ge 0;
\end{equation}
cf.~\cite[Thm.~2.4 c) p.5]{Pazy83}.
We obtain thus
\begin{equation} \label{T_est1}
\frac{\mathrm{d} }{\mathrm{d} t }\| T(t) u_0 \|^2_{\mathcal{H}} = 2 \langle \mathcal{A} T(t) u_0, T(t) u_0 \rangle_{\mathcal{H}} \le 2 \omega \|T(t) u_0 \|^2_{\mathcal{H}}, \qquad \text{ } \forall \: u_0 \in D(\mathcal{A}),
\end{equation}
where we have used Lemma~\ref{Lemma_stability_prep} to obtain the last inequality above with $\omega$ given by \eqref{omega}.
It follows then from Gronwall's inequality that
\begin{equation} \label{T_est2}
\| T(t) u_0 \|^2_{\mathcal{H}} \le e^{2\omega t} \|u_0\|^2_{\mathcal{H}}, \qquad \text{ } \forall \: u_0 \in D(\mathcal{A}).
\end{equation}
Since $D(\mathcal{A})$ is dense in $\mathcal{H}$ and $T(t)$ are bounded operators on $\mathcal{H}$, the estimate \eqref{T_est2} still holds for general initial data in $\mathcal{H}$, leading in turn to
\begin{equation} \label{stability_A}
\| T(t)\| \le e^{\omega t}, \qquad t \ge 0.
\end{equation}
The estimate for $T_N$ is obtained as follows. First note that
by the definition of $T_N$ given by \eqref{Eq_extension}, we have
\begin{equation} \begin{aligned}
\|T_N(t) u_0\|^2_\mathcal{H} & = \big\langle e^{\mathcal{A}_N t} \Pi_N u_0 + (I - \Pi_N) u_0, e^{\mathcal{A}_N t} \Pi_N u_0 + (I - \Pi_N) u_0 \big\rangle_{\mathcal{H}} \\
& = \big\langle e^{\mathcal{A}_N t} \Pi_N u_0, e^{\mathcal{A}_N t} \Pi_N u_0 \big\rangle_{\mathcal{H}} + \big\langle (I - \Pi_N) u_0, (I - \Pi_N) u_0 \big\rangle_{\mathcal{H}} \\
& = \| e^{\mathcal{A}_N t} \Pi_N u_0 \|^2_{\mathcal{H}} + \| (I - \Pi_N) u_0\|^2_{\mathcal{H}}.
\end{aligned} \end{equation}
where the cross terms vanish since $e^{\mathcal{A}_N t} \Pi_N u_0$ belongs to $\mathcal{H}_N$ while $(I - \Pi_N) u_0$ is orthogonal to $\mathcal{H}_N$.
It follows that
\begin{equation} \begin{aligned} \label{TN_est1}
\frac{\mathrm{d} }{\mathrm{d} t }\| T_N(t) u_0 \|^2_{\mathcal{H}} &=
\frac{\mathrm{d} }{\mathrm{d} t }\| e^{\mathcal{A}_N t} \Pi_N u_0 \|^2_{\mathcal{H}} \\
& = 2 \langle \mathcal{A}_N e^{\mathcal{A}_N t} \Pi_N u_0, e^{\mathcal{A}_N t} \Pi_N u_0 \rangle_{\mathcal{H}}, \qquad \text{ } \forall \: u_0 \in \mathcal{H}.
\end{aligned} \end{equation}
Note also that
\begin{equation} \begin{aligned} \label{TN_est2}
\langle \mathcal{A}_N \Psi, \Psi \rangle_{\mathcal{H}} & = \langle \Pi_N \mathcal{A} \Pi_N \Psi, \Psi \rangle_{\mathcal{H}} \\
& = \langle \Pi_N \mathcal{A} \Pi_N \Psi, \Pi_N \Psi \rangle_{\mathcal{H}} + \langle \Pi_N \mathcal{A} \Pi_N \Psi, (I- \Pi_N) \Psi \rangle_{\mathcal{H}}\\
& = \langle \Pi_N \mathcal{A} \Pi_N \Psi, \Pi_N \Psi \rangle_{\mathcal{H}} \\
& = \langle \mathcal{A} \Pi_N \Psi, \Pi_N \Psi \rangle_{\mathcal{H}} \\
& \le \omega \|\Pi_N \Psi\|^2_{\mathcal{H}} \\
& \le \omega \|\Psi\|^2_{\mathcal{H}}.
\end{aligned} \end{equation}
We obtain then from \eqref{TN_est1} that
\begin{equation} \begin{aligned} \label{TN_est3}
\frac{\mathrm{d} }{\mathrm{d} t }\| T_N(t) u_0 \|^2_{\mathcal{H}} & \le 2 \omega \|e^{\mathcal{A}_N t} \Pi_N u_0\|^2_{\mathcal{H}} \\
& \le 2\omega \big( \|e^{\mathcal{A}_N t} \Pi_N u_0\|^2_{\mathcal{H}} + \| (I - \Pi_N) u_0\|^2_{\mathcal{H}} \big) \\
& = 2\omega \|e^{\mathcal{A}_N t} \Pi_N u_0 +(I - \Pi_N) u_0\|^2_{\mathcal{H}} \\
& = 2\omega \|T_N(t) u_0 \|^2_{\mathcal{H}}, \qquad \text{ } \forall \: u_0 \in \mathcal{H}.
\end{aligned} \end{equation}
The desired estimate for $\| T_N(t)\|$ can be derived now from \eqref{TN_est3} by using Gronwall's inequality.
\end{proof}
\begin{rem}
Note that the estimate for $T_N$ in \eqref{stable_estimates} shows in particular that solutions of \eqref{nonlocal_PDE} grow at most exponentially, with a rate independent of $N$, and thus stay uniformly bounded over finite time intervals. \qed
\end{rem}
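The growth bound \eqref{stable_estimates} can also be probed numerically. The following sketch (our own illustration, with hypothetical parameters $a$, $b$, $\tau$, a constant-one history, and a crude forward-Euler method of steps) checks that $|x(t)| \le \|u(t)\|_{\mathcal{H}} \le e^{\omega t} \|u_0\|_{\mathcal{H}}$ for the scalar linear DDE $x'(t) = a\,x(t) + b\,x(t-\tau)$, for which $C \equiv 0$ in \eqref{omega}:

```python
import math

# Hypothetical scalar linear DDE x'(t) = a*x(t) + b*x(t - tau), constant history 1.
a, b, tau = 0.5, -1.0, 1.0
dt, T = 1e-3, 3.0
n_hist = int(round(tau / dt))

# omega from \eqref{omega} with L_S = a, B = b, C = 0:
omega = 1.0 + 1.0 / (2.0 * tau) + abs(a) + (tau / 2.0) * b**2

# Forward-Euler method of steps.
xs = [1.0] * (n_hist + 1)                 # samples of x on [-tau, 0]
for _ in range(int(T / dt)):
    xs.append(xs[-1] + dt * (a * xs[-1] + b * xs[-1 - n_hist]))

# ||u_0||_H^2 = (1/tau)*tau + 1 = 2 for the constant-one history.
H0 = math.sqrt(2.0)
ok = all(abs(x) <= 1.01 * H0 * math.exp(omega * k * dt)   # 1% slack for Euler error
         for k, x in enumerate(xs[n_hist:]))
print(ok)
```

The bound is of course far from sharp here; it only certifies at-most-exponential growth at the $N$-independent rate $\omega$.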
With these preparatory lemmas, we are now in a position to obtain, as corollaries of Theorem~\ref{ParisVI_thm}, the convergence results for the Galerkin approximation \eqref{Eq_DDE_Galerkin} of $d$-dimensional nonlinear systems of DDEs of the form
\begin{equation}\label{Eq_nln_sys}
\frac{\mathrm{d} \mathbf{x}}{\mathrm{d} t}=L_S \mathbf{x}(t)+ B \mathbf{x}(t-\tau) + \int_{t-\tau}^t C(s-t) \mathbf{x}(s) \mathrm{d} s + \mathbf{F}\Big(\mathbf{x}(t),\int_{t-\tau}^t \mathbf{x}(s) \mathrm{d} s\Big),
\end{equation}
where $\mathbf{F}:\mathbb{R}^d\times \mathbb{R}^d \rightarrow \mathbb{R}^d$, and $L_S$, $B$, and $C$ are as given in \eqref{Def_LD}.
We first state the result for the case of a global
Lipschitz nonlinearity, keeping in mind that already for the case of scalar DDEs ($d=1$), chaotic dynamics can take place under such a simple nonlinear setting; see Section \ref{Sec_nearly-brownian} for a numerical illustration.
\vspace{1ex}
\begin{cor} \label{Cor_DDE_global_Lip}
Let $\mathcal{A}$ be defined as in \eqref{Def_A2}, with $L_D$ as specified in \eqref{Def_LD} and $L_S: \mathbb{R}^d \rightarrow \mathbb{R}^d$ a bounded linear operator.
Assume that the nonlinearity $\mathcal{F} \colon \mathcal{H} \rightarrow \mathcal{H}$ defined by
\begin{equation} \begin{aligned} \label{Def_F_sys}
[\mathcal{F} (\Psi) ](\theta) & := \begin{cases}
0, & \theta \in[-\tau, 0), \vspace{0.4em}\\
\mathbf{F} \Big(\Psi^S, \int_{-\tau}^0 \Psi^D(s) \mathrm{d} s\Big), & \theta = 0,
\end{cases} \quad \text{ } \forall \: \Psi = (\Psi^D, \Psi^S) \in \mathcal{H},
\end{aligned} \end{equation}
is globally Lipschitz.
Then, for each $u_0 \in \mathcal{H}$, the mild solution of \eqref{Eq_DDE_Galerkin} emanating from $\Pi_N u_0$ converges uniformly to the mild solution of \eqref{Eq_abstract_ODE_DDE} emanating from $u_0$ on each bounded interval $[0, T]$, i.e.:
\begin{equation} \label{uniform_conv_est}
\lim_{N\rightarrow \infty} \sup_{t \in [0, T]} \|u_N(t; \Pi_N u_0) - u(t; u_0)\|_{\mathcal{H}} = 0, \qquad \text{ } \forall \: T > 0, \; u_0 \in \mathcal{H}.
\end{equation}
\end{cor}
\vspace{1ex}
\begin{proof}
This corollary is a direct consequence of Lemma~\ref{Lem_A1} and Lemma~\ref{Lem_A2}, ensuring respectively, Conditions {\bf (A1)} and {\bf (A2)} of Theorem~\ref{ParisVI_thm}.
\end{proof}
\vspace{1ex}
In the next two corollaries, we relax the global Lipschitz condition assumed in Corollary~\ref{Cor_DDE_global_Lip} to a local Lipschitz condition in addition to either a sublinear growth for $\mathcal{F}$ (see Corollary~\ref{Cor_DDE_local_Lip_Case1}) or an energy inequality satisfied by $\mathcal{F}$; see Corollary~\ref{Cor_DDE_local_Lip_Case2}.
\begin{cor} \label{Cor_DDE_local_Lip_Case1}
Let $\mathcal{A}$ be defined in \eqref{Def_A2} with $L_D$ specified in \eqref{Def_LD} and $L_S: \mathbb{R}^d \rightarrow \mathbb{R}^d$ a bounded linear operator.
Assume that the nonlinearity $\mathcal{F}$ given by \eqref{Def_F_sys} is locally Lipschitz in the sense that for all $r > 0$ there exists $L(r)>0$ such that for any $\Psi_1$ and $\Psi_2$ in $\mathcal{H}$, we have
\begin{equation} \label{Local_Lip_cond}
\|\Psi_1\|_{\mathcal{H}}<r \mbox{ and } \|\Psi_2\|_{\mathcal{H}} < r \Longrightarrow \|\mathcal{F}(\Psi_1) - \mathcal{F}(\Psi_2)\|_{\mathcal{H}} \le L(r) \|\Psi_1 - \Psi_2\|_{\mathcal{H}}.
\end{equation}
Assume also that $\mathcal{F}$ satisfies the following sublinear growth:
\begin{equation} \label{Sublinear_onF}
\|\mathcal{F}(\Psi)\|_{\mathcal{H}} \le \gamma_1 \|\Psi\|_{\mathcal{H}} + \gamma_2, \qquad \text{ } \forall \: \Psi \in \mathcal{H},
\end{equation}
where $\gamma_1>0$ and $\gamma_2\geq 0$.
Then, for each $u_0 \in \mathcal{H}$, the mild solution $u_N(t; \Pi_N u_0)$ of \eqref{Eq_DDE_Galerkin} emanating from $\Pi_N u_0$ and, the mild solution $u(t; u_0)$ of \eqref{Eq_abstract_ODE_DDE} emanating from $u_0$, do not blow up in any finite time. Moreover, $u_N(t; \Pi_N u_0)$ converges uniformly to $u(t; u_0)$ on each bounded interval $[0, T]$, i.e.:
\begin{equation} \label{uniform_conv_est_Case1}
\lim_{N\rightarrow \infty} \sup_{t \in [0, T]} \|u_N(t; \Pi_N u_0) - u(t; u_0)\|_{\mathcal{H}} = 0, \qquad \text{ } \forall \: T > 0, \; u_0 \in \mathcal{H}.
\end{equation}
\end{cor}
\begin{proof}
Recall that the local Lipschitz condition \eqref{Local_Lip_cond} on $\mathcal{F}$ ensures the existence and uniqueness of a local mild solution $u(t; u_0)$ to \eqref{Eq_abstract_ODE_DDE} emanating from any $u_0 \in \mathcal{H}$; see e.g. \cite[Prop.~4.3.3]{Cazenave_al98}.\footnote{\cite[Prop.~4.3.3]{Cazenave_al98} is derived for the case of contraction semigroups. However, the proof can be easily adapted to the case of more general $C_0$-semigroups $T(t)$ for which $\| T(t) \| \le M e^{\omega t}$.}
By recalling that $\|T(t)\|_{\mathcal{H}} \le e^{\omega t}$ (see Lemma~\ref{Lem_A1}) and by using the sublinear growth assumption \eqref{Sublinear_onF} on $\mathcal{F}$, we obtain for mild solutions
\begin{equation} \begin{aligned} \label{Energy_est_for_u}
\|u(t)\|_{\mathcal{H}} & \le \| T(t) u_0\|_{\mathcal{H}} + \int_{0}^t \|T(t-s)\mathcal{F}(u(s))\|_{\mathcal{H}} \mathrm{d} s \\
& \le e^{\omega t} \|u_0\|_{\mathcal{H}} + \int_{0}^t e^{\omega(t-s)} \Big( \gamma_1 \|u(s)\|_{\mathcal{H}} + \gamma_2 \Big) \mathrm{d} s \\
& \le e^{\omega t} \|u_0\|_{\mathcal{H}} + \frac{\gamma_2 (e^{\omega t} -1)}{\omega} + \gamma_1 \int_{0}^t e^{\omega(t-s)} \|u(s)\|_{\mathcal{H}} \mathrm{d} s,
\end{aligned} \end{equation}
where the positive constant $\omega$ is given by \eqref{omega}.
Multiplying both sides of \eqref{Energy_est_for_u} by $e^{-\omega t}$ then leads to
\begin{equation} \begin{aligned}
e^{-\omega t}\|u(t)\|_{\mathcal{H}} & \le \|u_0\|_{\mathcal{H}} + \frac{\gamma_2 (1 - e^{-\omega t})}{\omega} + \gamma_1 \int_{0}^t e^{-\omega s} \|u(s)\|_{\mathcal{H}} \mathrm{d} s \\
& \le \|u_0\|_{\mathcal{H}} + \frac{\gamma_2}{\omega} + \gamma_1 \int_{0}^t e^{-\omega s} \|u(s)\|_{\mathcal{H}} \mathrm{d} s.
\end{aligned} \end{equation}
An application of Gronwall's inequality to $v(t):=e^{-\omega t}\|u(t)\|_{\mathcal{H}}$ then gives
\begin{equation} \label{Eq_bound_for_u}
\|u(t)\|_{\mathcal{H}} \le \Big( \|u_0\|_{\mathcal{H}} + \frac{\gamma_2}{\omega} \Big) e^{(\omega + \gamma_1) t},
\end{equation}
preventing thus the blow up of a mild solution in finite time.
Similarly, for mild solutions $u_N$ of \eqref{Eq_DDE_Galerkin}, we have
\begin{equation} \label{Eq_bound_for_uN}
\|u_N(t)\|_{\mathcal{H}} \le \Big( \|u_0\|_{\mathcal{H}} + \frac{\gamma_2}{\omega} \Big) e^{(\omega + \gamma_1) t},
\end{equation}
by noting that $\|\Pi_N\| \le 1$ for all $N \ge 1$ and that $\|T_N(t)\|_{\mathcal{H}} \le e^{\omega t}$ (see Lemma~\ref{Lem_A1}), and by using the sublinear growth assumption \eqref{Sublinear_onF} on $\mathcal{F}$; this prevents also the blow up in finite time of any mild solution $u_N$ of \eqref{Eq_DDE_Galerkin}.
Finally, \eqref{Eq_bound_for_u} and \eqref{Eq_bound_for_uN} lead to
\begin{equation} \begin{aligned}
& \|u(t; u_0)\|_{\mathcal{H}} \le C(T, \|u_0\|_{\mathcal{H}}), && \text{ } \forall \: t \in [0, T], \\
& \|u_N(t; \Pi_N u_0)\|_{\mathcal{H}} \le C(T, \|u_0\|_{\mathcal{H}}), && \text{ } \forall \: t \in [0, T] \text{ and } N \in \mathbb{N}^*,
\end{aligned} \end{equation}
where
\begin{equation*}
C(T, \|u_0\|_{\mathcal{H}}) := \Big( \|u_0\|_{\mathcal{H}} + \frac{\gamma_2}{\omega} \Big) e^{(\omega + \gamma_1) T}.
\end{equation*}
Now, we can follow the proof of Theorem \ref{ParisVI_thm} to obtain the desired convergence result \eqref{uniform_conv_est_Case1}, the only difference being that the global Lipschitz estimates used therein are replaced by the local Lipschitz condition \eqref{Local_Lip_cond} applied to $\Psi$ such that
\begin{equation*}
\| \Psi\|_{\mathcal{H}} \leq2 C(T, \|u_0\|_{\mathcal{H}}).
\end{equation*}
\end{proof}
\begin{cor} \label{Cor_DDE_local_Lip_Case2}
Let $\mathcal{A}$ be defined in \eqref{Def_A2} with $L_D$ specified in \eqref{Def_LD} and $L_S: \mathbb{R}^d \rightarrow \mathbb{R}^d$ a bounded linear operator.
Assume that the nonlinearity $\mathcal{F}$ given by \eqref{Def_F_sys} is locally Lipschitz in the sense of \eqref{Local_Lip_cond}. Assume also that the following energy inequality holds for $\mathcal{F}$
\begin{equation} \label{Energy_ineq_onF}
\langle \mathcal{F}(\Psi), \Psi \rangle_{\mathcal{H}} \le \gamma_1 \|\Psi\|^2_{\mathcal{H}} + \gamma_2, \qquad \text{ } \forall \: \Psi \in \mathcal{H},
\end{equation}
where $\gamma_1 \in \mathbb{R}$ and $\gamma_2 \geq 0$.
Then, for each $u_0 \in D(\mathcal{A})$, the strong solution $u_N(t; \Pi_N u_0)$ of \eqref{Eq_DDE_Galerkin} emanating from $\Pi_N u_0$, and the strong solution\footnote{By strong solutions of \eqref{Eq_abstract_ODE_DDE}, we mean a solution in $C([0,T],D(\mathcal{A}))\cap C^1([0,T],\mathcal{H})$ of \eqref{Eq_abstract_ODE_DDE}.} $u(t; u_0)$ of \eqref{Eq_abstract_ODE_DDE} emanating from $u_0$, do not blow up in any finite time. Moreover, $u_N(t; \Pi_N u_0)$ converges uniformly to $u(t; u_0)$ on each bounded interval $[0, T]$, i.e.:
\begin{equation} \label{uniform_conv_est_engest_onF}
\lim_{N\rightarrow \infty} \sup_{t \in [0, T]} \|u_N(t; \Pi_N u_0) - u(t; u_0)\|_{\mathcal{H}} = 0, \qquad \text{ } \forall \: T > 0, \; u_0 \in D(\mathcal{A}).
\end{equation}
Furthermore, any strong solution $v=u$ of \eqref{Eq_abstract_ODE_DDE} or $v=u_N$ of \eqref{Eq_DDE_Galerkin}, emanating respectively from $v(0)=u_0 \in D(\mathcal{A})$ or $v(0)=\Pi_N u_0$, has its $\mathcal{H}$-norm controlled as follows:
\begin{equation}
\|v(t)\|^2_{\mathcal{H}}\leq e^{\kappa t} \|v(0)\|^2_{\mathcal{H}} + \frac{2 \gamma_2}{\kappa} (e^{\kappa t}-1), \; t>0,
\end{equation}
where $\kappa=2 (\omega+ \gamma_1)$, with $\omega$ given in \eqref{omega}.
\end{cor}
\begin{proof}
Let $u_0\in D(\mathcal{A})$, and let $u$ be the mild solution of \eqref{Eq_abstract_ODE_DDE} emanating from $u_0$, such as ensured by the local Lipschitz condition on $\mathcal{F}$. Then by adapting the proof of e.g.~\cite[Prop.~4.3.9]{Cazenave_al98},\footnote{The regularity result \cite[Prop.~4.3.9]{Cazenave_al98} is stated for the case of contraction semigroups. However, the proof can be adapted to the case of $C_0$-semigroups for which $\|T(t)\| \le e^{\omega t}$ (i.e. with $M=1$) such as encountered here when $L_D$ is as specified in \eqref{Def_LD}.} we have that there exists a map $T:D(\mathcal{A})\rightarrow (0,\infty]$, for which $u\in C([0,T(u_0)),D(\mathcal{A}))\cap C^1([0,T(u_0)),\mathcal{H})$ and $u$ solves the initial-value problem
\begin{subequations} \label{DDE_IVP}
\begin{align}
\frac{\mathrm{d} u}{\mathrm{d} t} &= \mathcal{A} u + \mathcal{F}(u), \label{DDE_IVP_eq1}\\
u(0) &= u_0.
\end{align}
\end{subequations}
By taking the $\mathcal{H}$-inner product on both sides of \eqref{DDE_IVP_eq1} with the solution $u\in D(\mathcal{A})$, and using the energy inequality \eqref{Energy_ineq_onF} and the stability property $\langle \mathcal{A} \Psi, \Psi \rangle_{\mathcal{H}} \le \omega \| \Psi\|^2_{\mathcal{H}}$ from Lemma~\ref{Lemma_stability_prep}, we obtain
\begin{equation} \label{Energy_est_for_u_2}
\frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} t}\|u\|^2_{\mathcal{H}} = \langle \mathcal{A} u, u \rangle_{\mathcal{H}} + \langle \mathcal{F}(u), u \rangle_{\mathcal{H}} \le \omega \|u\|^2_{\mathcal{H}} + \gamma_1 \|u\|^2_{\mathcal{H}} + \gamma_2,
\end{equation}
where the positive constant $\omega$ is given by \eqref{omega}.
It follows then from Gronwall's inequality that
\begin{equation} \label{Eq_bounds_u_Case2}
\|u(t; u_0)\|^2_{\mathcal{H}} \leq e^{\kappa t} \|u_0\|^2_{\mathcal{H}} + \frac{2 \gamma_2}{\kappa} (e^{\kappa t}-1), \quad t \in [0, T(u_0)), \; u_0 \in D(\mathcal{A}),
\end{equation}
where $\kappa = 2(\omega + \gamma_1)$.
Similarly, by noting that
\begin{equation*}
\langle \Pi_N \mathcal{F}(\Psi), \Psi \rangle_{\mathcal{H}} = \langle \mathcal{F}(\Psi), \Psi \rangle_{\mathcal{H}}, \qquad \text{ } \forall \: \Psi \in \mathcal{H}_N,
\end{equation*}
we have
\begin{equation}
\|u_N(t; \Pi_N u_0)\|^2_{\mathcal{H}} \leq e^{\kappa t} \|u_0\|^2_{\mathcal{H}} + \frac{2 \gamma_2}{\kappa} (e^{\kappa t}-1), \quad t \in [0, T(u_0)), \; u_0 \in \mathcal{H}.
\end{equation}
We have thus shown that for any initial data $u_0 \in D(\mathcal{A})$, the strong solutions $u(t; u_0)$ and $u_N(t; \Pi_N u_0)$ do not blow up in any finite time. The convergence result \eqref{uniform_conv_est_engest_onF}
can then be deduced as in the proof of Corollary~\ref{Cor_DDE_local_Lip_Case1}.
\end{proof}
\vspace{1ex}
\begin{rem} \label{Rmk_forcing}
It is worth mentioning that the conclusions of Corollaries \ref{Cor_DDE_global_Lip}, \ref{Cor_DDE_local_Lip_Case1} and \ref{Cor_DDE_local_Lip_Case2} still hold when the underlying system of DDEs \eqref{Eq_nln_sys} is perturbed by a suitable time-dependent forcing, $\mathbf{g}(t)$. For instance, it suffices to assume that $\mathbf{g}(t) \in L^2_{\mathrm{loc}}([0,\infty); \mathbb{R}^d)$ to still get the convergence results. \qed
\end{rem}
\subsection{Examples}\label{Sec_examples}
In this section we provide some classes of nonlinear scalar DDEs of the form \eqref{Eq_DDE} that fit the assumptions of Corollary \ref{Cor_DDE_local_Lip_Case2}.
In that respect we restrict our attention to the case of $\mathcal{A}$ as defined in \eqref{Def_A} for $d=1$. We discuss below some classes of nonlinearities that verify the local Lipschitz condition \eqref{Local_Lip_cond} and the energy inequality \eqref{Energy_ineq_onF}.
Extensions to systems can easily be built out of these examples and are thus left to the reader.
\subsubsection{Delay equations with a global Lipschitz nonlinearity}\label{sec_ex1}
Let $\mathcal{F}$ be given such as in \eqref{Def_F}, and for which $F$ is assumed to be of the form
\begin{equation}\label{F_example0}
F \Big(\Psi^S,\int_{-\tau}^0 \Psi^D(\theta) \mathrm{d} \theta \Big)=g\Big(\int_{-\tau}^0 \Psi^D(\theta) \mathrm{d} \theta\Big) +h(\Psi^S),
\end{equation}
with $g$ and $h$ globally Lipschitz maps from $\mathbb{R}$ to $\mathbb{R}$, with Lipschitz constants $L_1$ and $L_2$, respectively. Then, we have
\begin{equation*}
\langle \mathcal{F}(\Psi), \Psi \rangle_{\mathcal{H}} =g \Big(\int_{-\tau}^0 \Psi^D(\theta) \mathrm{d} \theta \Big) \Psi^S +h(\Psi^S) \Psi^S,
\end{equation*}
which gives
\begin{equation*} \begin{aligned}
\langle \mathcal{F}(\Psi), \Psi \rangle_{\mathcal{H}} &\leq L_1 \Big|\int_{-\tau}^0 \Psi^D(\theta) \mathrm{d} \theta\Big|\Big|\Psi^S\Big|+ (|g(0)|+|h(0)|)|\Psi^S| + L_2 |\Psi^S|^2\\
& \leq \gamma_1 \| \Psi \|^2_{\mathcal{H}} +\gamma_2,
\end{aligned} \end{equation*}
with $\gamma_1>0$ and $\gamma_2\geq 0$, and thus $F$ satisfies the energy inequality \eqref{Energy_ineq_onF}. The local Lipschitz condition \eqref{Local_Lip_cond} for $\mathcal{F}$ is trivially satisfied under the assumptions on $F$.
Note that such nonlinear equations arise in many applications where a delayed monotone feedback mechanism is naturally involved in the description of the system's evolution; see \cite{GZT08,krisztin2008,Mallet_Sell96}. It is also interesting to mention that such seemingly simple scalar DDEs with global Lipschitz nonlinearity can also support chaotic dynamics as illustrated in Section \ref{Sec_nearly-brownian} below.
\subsubsection{Delay equations with locally Lipschitz nonlinearity}\label{sec_ex2}
We now relax the global Lipschitz requirement. In that respect, we consider
\begin{equation} \label{F_example}
F\Big(\Psi^S, \int_{-\tau}^0 \Psi^D(\theta) \mathrm{d} \theta \Big)=- g_1 \Big(\int_{-\tau}^0 \Psi^D(\theta) \mathrm{d} \theta\Big) g_2\big(\Psi^S \big),
\end{equation}
and assume that
\begin{itemize}
\item[(i)] $g_1: \mathbb{R} \rightarrow \mathbb{R}^+$ is locally Lipschitz;
\item[(ii)] $g_2: \mathbb{R} \rightarrow \mathbb{R}$ is locally Lipschitz and verifies the condition
\begin{equation}\label{Eq_pos_g2}
g_2(x)x \ge 0, \qquad x \in \mathbb{R}.
\end{equation}
\end{itemize}
These assumptions allow us to consider a broad class of nonlinear effects that are not necessarily bounded or polynomial.
We check below that the abstract nonlinear map $\mathcal{F}$ defined in \eqref{Def_F}, with $F$ given by \eqref{F_example} and satisfying (i) and (ii), also satisfies the conditions of Corollary \ref{Cor_DDE_local_Lip_Case2}. We first check the local Lipschitz condition.
Note first that
\begin{equation} \label{F_Lip_est0}
\|\mathcal{F}(\Psi_1) - \mathcal{F}(\Psi_2) \|_{\mathcal{H}} = \left |F \Big(\Psi_1^S, \int_{-\tau}^0 \Psi^D_1(\theta) \mathrm{d} \theta \Big) - F \Big(\Psi^S_2, \int_{-\tau}^0 \Psi^D_2(\theta) \mathrm{d} \theta \Big) \right|.
\end{equation}
Let us introduce the notations $\alpha_i := \Psi_i^S$ and $\beta_i := \int_{-\tau}^0 \Psi^D_i(\theta) \mathrm{d} \theta$, $i = 1, 2$. Let $\Psi_i$ be chosen such that for $i = 1, 2$, $\|\Psi_i\|_{\mathcal{H}} \le R$ for some $R>0$. It follows that
\begin{equation}
|\alpha_i| \le R, \quad |\beta_i| = \left| \int_{-\tau}^0 \Psi^D_i(\theta) \mathrm{d} \theta \right| \le \sqrt{\tau} \|\Psi^D_i\|_{L^2},
\end{equation}
and thus by definition of the $\mathcal{H}$-inner product \eqref{H_inner}
\begin{equation}
|\beta_i| \le \tau \|\Psi_i\|_{\mathcal{H}} \le \tau R,
\end{equation}
which leads to
\begin{equation} \begin{aligned} \label{F_Lip_est1}
& |F (\alpha_1, \beta_1) - F (\alpha_2, \beta_2) |\\
& = |g_1(\beta_1) g_2(\alpha_1) - g_1(\beta_2) g_2(\alpha_2)| \\
& \le \Big|g_1(\beta_1) \big (g_2(\alpha_1) - g_2(\alpha_2) \big) \Big| + \Big|\big( g_1(\beta_1) - g_1(\beta_2) \big) g_2(\alpha_2) \Big| \\
& \le \mathrm{L}_2(R) |g_1(\beta_1)| |\alpha_1 - \alpha_2| + \mathrm{L}_1(\tau R) |g_2(\alpha_2)| |\beta_1 - \beta_2|,
\end{aligned} \end{equation}
where $\mathrm{L}_1(r)$ (resp.~$\mathrm{L}_2(r)$) denotes the local Lipschitz constant associated with $g_1(x)$ (resp.~$g_2(x)$) for $|x|<r$.
On the other hand,
\begin{equation*} \begin{aligned}
|g_1(\beta_1)| & \le |g_1(\beta_1) - g_1(0)| + |g_1(0)| \\
& \le \mathrm{L}_1(\tau R) |\beta_1| + |g_1(0)| \\
& \le \tau R \mathrm{L}_1(\tau R) + |g_1(0)|,
\end{aligned} \end{equation*}
and
\begin{equation*} \begin{aligned}
|g_2(\alpha_2)| \le R \mathrm{L}_2(R) + |g_2(0)|.
\end{aligned} \end{equation*}
Note also that
\begin{equation*}
|\alpha_1 - \alpha_2| \le \| \Psi_1 - \Psi_2 \|_{\mathcal{H}},
\end{equation*}
and that
\begin{equation*}
|\beta_1 - \beta_2| \le \sqrt{\tau} \|\Psi^D_1 - \Psi^D_2\|_{L^2} \le \tau \|\Psi_1 - \Psi_2\|_{\mathcal{H}}.
\end{equation*}
We obtain then from \eqref{F_Lip_est1} that
\begin{equation} \label{F_Lip_est2}
|F (\alpha_1, \beta_1) - F (\alpha_2, \beta_2) | \le L(R) \|\Psi_1 - \Psi_2\|_{\mathcal{H}},
\end{equation}
where
\begin{equation*}
L(R):= \mathrm{L}_2(R) \Big(\tau R \mathrm{L}_1(\tau R) + |g_1(0)| \Big)
+ \tau \mathrm{L}_1(\tau R) \Big(R \mathrm{L}_2( R) + |g_2(0)| \Big),
\end{equation*}
which gives the local Lipschitz property of $\mathcal{F}$ as a map from $\mathcal{H}$ to $\mathcal{H}$.
The energy inequality \eqref{Energy_ineq_onF} is here readily satisfied since
\begin{equation}
\langle \mathcal{F}(\Psi), \Psi \rangle_{\mathcal{H}} = F \Big(\Psi^S, \int_{-\tau}^0 \Psi^D(\theta) \mathrm{d} \theta \Big) \Psi^S \leq 0,
\end{equation}
because of \eqref{Eq_pos_g2}.
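As a quick numerical sanity check of this sign condition (with hypothetical choices of $g_1$ and $g_2$, not taken from the text), one may take $g_1(x) = 1 + x^2$ and $g_2(x) = x^3$, which satisfy (i) and (ii); the energy term $\langle \mathcal{F}(\Psi), \Psi \rangle_{\mathcal{H}} = -g_1(\beta)\, g_2(\alpha)\, \alpha$, with $\alpha = \Psi^S$ and $\beta = \int_{-\tau}^0 \Psi^D$, is then nonpositive for every sampled state:

```python
import random

# Hypothetical instances satisfying (i)-(ii): g1(x) = 1 + x**2 >= 0 is locally
# Lipschitz, and g2(x) = x**3 is locally Lipschitz with g2(x)*x = x**4 >= 0.
g1 = lambda x: 1.0 + x * x
g2 = lambda x: x ** 3

random.seed(0)
vals = []
for _ in range(1000):
    alpha, beta = random.uniform(-5, 5), random.uniform(-5, 5)
    vals.append(-g1(beta) * g2(alpha) * alpha)    # = <F(Psi), Psi>_H

print(max(vals) <= 0.0)
```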
\needspace{1\baselineskip}
\begin{rem}\label{Rmk_ex}
\hspace*{2em} \vspace*{-0.4em}
\begin{itemize}
\item[(i)] Note that well-known delayed models from population dynamics are covered by Corollary \ref{Cor_DDE_local_Lip_Case2}, even though they do not satisfy \eqref{F_example}. For instance, delayed logistic equations of the form
\begin{equation}
\frac{\mathrm{d} x}{\mathrm{d} t}=r x(t)\Big(1-K^{-1}\int_{t-\tau}^t \omega(s) f(x(s))\mathrm{d} s\Big), \; r, K, \tau>0,
\end{equation}
with $\omega \in L^{\infty} (\mathbb{R},\mathbb{R}^+)$ and $f\in L^1(\mathbb{R},\mathbb{R}^+)$ satisfying the inequality
\begin{equation*}
|f(x)-f(y)| \leq \gamma |x-y|,
\end{equation*}
for some $\gamma > 0$ and for almost every $x,y \in \mathbb{R}$, are still covered by Corollary \ref{Cor_DDE_local_Lip_Case2}.
\item[(ii)] Note also that many other nonlinear effects could have been considered in \eqref{Eq_DDE} for which the convergence result of Corollary \ref{Cor_DDE_local_Lip_Case2} would hold. For instance we could have considered
\begin{equation}\label{other_nonl}
F\Big(\Psi^S, \int_{-\tau}^0 \Psi^D(\theta) \mathrm{d} \theta\Big)= \sum_{j=1}^{2p-1} b_j \Big(\int_{-\tau}^0 \Psi^D(\theta) \mathrm{d} \theta\Big) (\Psi^S)^j, p\geq 2,
\end{equation}
where each $b_j$ is a locally Lipschitz function in $L^{\infty}(\mathbb{R})$ and $b_{2p-1}(\cdot) \leq \beta < 0$ for some constant $\beta$.
Under these assumptions, the nonlinearity \eqref{other_nonl} satisfies the energy inequality \eqref{Energy_ineq_onF} with $\gamma_1 = 0$ and with $\gamma_2$ sufficiently large, depending on the $L^\infty$-norms of the $b_j$'s. This is because the coefficient of the term with the largest power, $(\Psi^S)^{2p-1}$, is strictly
negative by assumption on $b_{2p-1}$, and the terms of lower degree can be controlled by using Young's inequality. \qed
\end{itemize}
\end{rem}
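To illustrate the dissipation mechanism invoked in item (ii) above, here is a small numerical sanity check with hypothetical coefficient choices (our own, purely illustrative) $b_1(\beta)=\cos\beta$ and $b_3(\beta)=-1-(1+\beta^2)^{-1}$, so that $p=2$ and $b_3 \le -1 < 0$; the energy term $F(\alpha,\beta)\,\alpha$ then stays bounded above by $1/4$, consistent with \eqref{Energy_ineq_onF} for $\gamma_1=0$ and $\gamma_2$ large enough:

```python
import math, random

# Hypothetical coefficients for \eqref{other_nonl} with p = 2:
# b1 is bounded and locally Lipschitz; b3 <= -1 < 0 uniformly.
b1 = lambda beta: math.cos(beta)
b3 = lambda beta: -1.0 - 1.0 / (1.0 + beta * beta)

def energy_term(alpha, beta):
    # <F(Psi), Psi>_H reduces to F(alpha, beta) * alpha for this nonlinearity.
    return (b1(beta) * alpha + b3(beta) * alpha**3) * alpha

random.seed(1)
samples = [energy_term(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(10_000)]

# Since b1 <= 1 and b3 <= -1, the term is at most alpha^2 - alpha^4 <= 1/4.
print(max(samples) <= 0.25)
```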
\section{Galerkin approximation: Analytic formulas for scalar DDEs}\label{Sec_Galerkin_analytic}
This section is devoted to the derivation of explicit expressions of the Galerkin approximation \eqref{Eq_DDE_Galerkin} associated with
nonlinear DDEs. For simplicity, we focus on the case of a scalar DDE taking the form given by \eqref{Eq_DDE}. The more general case of nonlinear systems of DDEs is dealt with in Appendix \ref{Appendix_systems}; see also Appendix \ref{Subsect_var_C} for the case where the linear part of \eqref{Eq_DDE} involves a distributed-delay term as in \eqref{Eq_nln_sys}.
As a preparation for the forthcoming analytic derivations, we need to express the derivative of the Koornwinder polynomials in terms of the polynomials themselves. This is the content of the following proposition.
\begin{prop} \label{prop:dPn}
The Koornwinder polynomial $K_n$ of degree $n\in \mathbb{N}$ defined in \eqref{eq:Pn} satisfies the differential relation
\begin{equation} \label{eq:dPn}
\frac{\mathrm{d} K_n}{\mathrm{d} s}(s) = \sum_{k = 0}^{n-1} a_{n,k} K_k(s), \quad s \in (-1,1),
\end{equation}
where the coefficients $\boldsymbol{a}_n:=(a_{n,0}, \cdots, a_{n,n-1})^\mathrm{tr}$ satisfy the upper triangular system of linear equations
\begin{equation} \label{eq:algebraic}
\mathbf{T}\boldsymbol{a}_n = \boldsymbol{b}_n,
\end{equation}
with $\mathbf{T}:=(\mathbf{T}_{i,j})_{n\times n}$ and $\boldsymbol{b}_{n}:=(b_{n,0}, \cdots, b_{n,n-1})^\mathrm{tr}$ given by
\begin{equation} \begin{aligned} \label{eq:algebraic_def}
\mathbf{T}_{i,j} & = \begin{cases}
0, & \; \text{ if } j < i,\\
i^2 + 1, & \; \text{ if } j = i,\\
-(2i+1), & \; \text{ if } j > i,
\end{cases} \; \qquad \text{ where } \qquad 0 \le i, j \le n-1, \\
b_{n,i} & = \begin{cases}
\frac{1}{2}(2i+1)(n+i+1)(n-i), & \text{ if $n+i$ is even}, \vspace{1em}\\
(n^2 + n)(2i+1) - \frac{i}{2}(n+i)(n-i+1) & \\
\hspace{2em} -\frac{ i}{2}(i+1)(n-i-1)(n+i+2), & \text{ if $n+i$ is odd}.
\end{cases}
\end{aligned} \end{equation}
For the rescaled version $K^\tau_n$ defined by \eqref{eq:Pn_tilde}, it holds that
\begin{equation} \label{eq:dKn_tau}
\frac{\mathrm{d} K^\tau_n}{\mathrm{d} \theta}(\theta) = \frac{2}{\tau} \sum_{k = 0}^{n-1} a_{n,k} K^\tau_k(\theta), \qquad n \in \mathbb{N}, \quad \theta\in (-\tau,0).
\end{equation}
\end{prop}
\begin{proof}
See Appendix \ref{sect:coef_matrix_proof}.
\end{proof}
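Since $\mathbf{T}$ is upper triangular, the system \eqref{eq:algebraic} can be solved cheaply by back substitution. The following sketch (our own illustration, simply transcribing the entries stated in \eqref{eq:algebraic_def}) assembles $\mathbf{T}$ and $\boldsymbol{b}_n$ and solves for $\boldsymbol{a}_n$:

```python
import numpy as np

# Assemble and solve the upper triangular system T a_n = b_n of \eqref{eq:algebraic},
# transcribing the entries of T and b_n stated in \eqref{eq:algebraic_def}.
def derivative_coefficients(n):
    T = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        T[i, i] = i**2 + 1
        T[i, i + 1:] = -(2 * i + 1)
        if (n + i) % 2 == 0:
            b[i] = 0.5 * (2 * i + 1) * (n + i + 1) * (n - i)
        else:
            b[i] = ((n**2 + n) * (2 * i + 1)
                    - 0.5 * i * (n + i) * (n - i + 1)
                    - 0.5 * i * (i + 1) * (n - i - 1) * (n + i + 2))
    a = np.zeros(n)
    for i in range(n - 1, -1, -1):        # back substitution
        a[i] = (b[i] - T[i, i + 1:] @ a[i + 1:]) / T[i, i]
    return T, b, a
```

For $n=1$, for instance, the stated formulas give $\mathbf{T} = (1)$ and $\boldsymbol{b}_1 = (2)$, hence $a_{1,0} = 2$.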
Let us now rewrite the unknown $u_N$ in the Galerkin system \eqref{Eq_DDE_Galerkin} in terms of the first $N$ rescaled Koornwinder polynomials, i.e.:
\begin{equation} \label{x_t expand}
u_N(t) = \sum_{n=0}^{N-1} y_n(t) \mathcal{K}^\tau_n, \qquad t \ge 0,
\end{equation}
where
\begin{equation}
y_n(t) = \frac{\langle u_N(t), \mathcal{K}^\tau_n \rangle_{\mathcal{H}}}{\| \mathcal{K}^\tau_n \|^2_{\mathcal{H}}}.
\end{equation}
We then replace $u_N$ in Eq.~\eqref{Eq_DDE_Galerkin} by the expansion given in \eqref{x_t expand}, and take the $\mathcal{H}$-inner product on both sides with $\mathcal{K}^\tau_j$ for each $j \in \{0, \cdots, N-1\}$ to obtain:
\begin{equation} \label{GalerkinCalc_v1}
\|\mathcal{K}^\tau_j\|_{\mathcal{H}}^2 \frac{\mathrm{d} y_j}{\mathrm{d} t} = \sum_{n=0}^{N-1} y_n(t) \left \langle \mathcal{A}_N \mathcal{K}^\tau_n, \mathcal{K}^\tau_j \right \rangle_{\mathcal{H}} + \Bigg \langle \Pi_N \mathcal{F} \Bigg ( \sum_{n=0}^{N-1} y_n(t) \mathcal{K}^\tau_n \Bigg), \mathcal{K}^\tau_j \Bigg \rangle_{\mathcal{H}}.
\end{equation}
Recall that the linear operator $\mathcal{A}$ here is defined by \eqref{Def_A}. Then, for each $n \in \{0, \cdots, N-1\}$, it holds that
\begin{equation} \begin{aligned}
\mathcal{A}_N \mathcal{K}^\tau_n = \Pi_N \mathcal{A} \mathcal{K}^\tau_n & = \sum_{l = 0}^{N-1} \Big(\frac{1}{\tau} \Big \langle \frac{\mathrm{d}^+}{\mathrm{d} \theta} K_n^\tau, K_l^\tau \Big \rangle_{L^2} \\
& \qquad + \Big(a K^\tau_n(0) + b K_n^\tau(-\tau) + c \int_{-\tau}^0 K_n^\tau(\theta) \mathrm{d}
\theta \Big) \Big) \frac{\mathcal{K}_l^\tau }{\|\mathcal{K}_l^\tau \|_{\mathcal{H}}^2}.
\end{aligned} \end{equation}
We obtain then that
\begin{equation} \begin{aligned} \label{GalerkinCalc_part1_1}
\left \langle \mathcal{A}_N \mathcal{K}^\tau_n, \mathcal{K}^\tau_j \right \rangle_{\mathcal{H}} & = \frac{1}{\tau} \Big \langle \frac{\mathrm{d}^+}{\mathrm{d} \theta} K_n^\tau, K_j^\tau \Big \rangle_{L^2} \\
& \quad + \Big( a K^\tau_n(0) + b K^\tau_n(-\tau) + c \int_{-\tau}^0 K_n^\tau(\theta) \mathrm{d} \theta\Big) K^\tau_j(0).
\end{aligned} \end{equation}
It follows from the expression of $\frac{\mathrm{d} K^\tau_n}{\mathrm{d} \theta}$ in \eqref{eq:dKn_tau} that
\begin{equation} \begin{aligned} \label{GalerkinCalc_part1_2}
\frac{1}{\tau} \int_{-\tau}^0 \frac{\mathrm{d}^+ K^\tau_n}{\mathrm{d} \theta}(\theta) K^\tau_j(\theta)\mathrm{d} \theta & = \frac{2}{\tau}\sum_{k=0}^{n-1} a_{n,k} \left( \frac{1}{\tau} \int_{-\tau}^0 K^\tau_k(\theta) K^\tau_j(\theta)\mathrm{d} \theta \right) \\
& = \frac{2}{\tau}\sum_{k=0}^{n-1} a_{n,k} \left( \langle \mathcal{K}^\tau_k, \mathcal{K}^\tau_j \rangle_{\mathcal{H}} - K^\tau_k(0) K^\tau_j(0) \right ) \\
& = \frac{2}{\tau} \sum_{k=0}^{n-1} a_{n,k} \left( \delta_{j,k} \|\mathcal{K}^\tau_j\|^2_{\mathcal{H}} - 1 \right ),
\end{aligned} \end{equation}
where $\delta_{j,k}$ denotes the Kronecker delta, and the last equality above follows from the orthogonality of the Koornwinder polynomials as well as the normalization property $K^\tau_n(0) = 1$; cf.~\eqref{Eq_normalization}.
Note that $K^\tau_0 \equiv 1$, which follows from the definition of the Koornwinder polynomials $K_n$ given by \eqref{eq:Pn} and the fact that the first Legendre polynomial $L_0$ is identically $1$. We get then $\|\mathcal{K}^\tau_0\|^2_{\mathcal{H}} = 2$. Note also that
\begin{equation}
\langle \mathcal{K}^\tau_n, \mathcal{K}^\tau_0 \rangle_{\mathcal{H}} =
\frac{1}{\tau} \int_{-\tau}^0 K^\tau_n(\theta) \mathrm{d} \theta + 1 = \delta_{n,0} \|\mathcal{K}^\tau_0\|^2_{\mathcal{H}} = 2 \delta_{n,0},
\end{equation}
leading thus to
\begin{equation} \label{Eq_int_Kn}
\int_{-\tau}^0 K^\tau_n(\theta) \mathrm{d} \theta = \tau (2 \delta_{n,0} - 1), \quad n \in \mathbb{N}.
\end{equation}
By using \eqref{Eq_int_Kn}, the normalization property $K^\tau_j(0) = 1$ and the identity $K^\tau_j(-\tau) = K_j(-1)$ (valid for any $j \ge 0$), we obtain
\begin{equation} \label{GalerkinCalc_part1_3}
\Big( a K^\tau_n(0) + b K^\tau_n(-\tau) + c \int_{-\tau}^0 K_n^\tau(\theta) \mathrm{d} \theta \Big) K^\tau_j(0) = a + b K_n(-1) + c \tau (2 \delta_{n,0} - 1).
\end{equation}
Now, by using \eqref{GalerkinCalc_part1_2} and \eqref{GalerkinCalc_part1_3} in \eqref{GalerkinCalc_part1_1}, we obtain
\begin{equation} \begin{aligned} \label{GalerkinCalc_part1}
\sum_{n=0}^{N-1} y_n(t) \left \langle \mathcal{A}_N \mathcal{K}^\tau_n, \mathcal{K}^\tau_j \right \rangle_{\mathcal{H}} & = \sum_{n=0}^{N-1} \Bigl ( a + b K_n(-1) + c \tau (2 \delta_{n,0} - 1) \\
& \hspace{5em} + \frac{2}{\tau} \sum_{k=0}^{n-1} a_{n,k} \left( \delta_{j,k} \|\mathcal{K}_j\|^2_{\mathcal{H}} - 1 \right ) \Bigr) y_n(t).
\end{aligned} \end{equation}
For the nonlinear part, since $\langle \Pi_N \Phi, \mathcal{K}^\tau_n \rangle_{\mathcal{H}} = \langle \Phi, \mathcal{K}^\tau_n \rangle_{\mathcal{H}}$ for all $\Phi \in \mathcal{H}$ and all $n \in \{0, \cdots, N-1\}$, together with the definition of $\mathcal{F}$ given in \eqref{Def_F}, we obtain
\begin{equation} \label{GalerkinCalc_part2a}
\Bigl \langle \Pi_N \mathcal{F} \Bigl ( \sum_{n=0}^{N-1} y_n(t) \mathcal{K}^\tau_n \Bigr ), \mathcal{K}^\tau_j \Bigr \rangle_{\mathcal{H}} = F \Biggl( \sum_{n=0}^{N-1} y_n(t), \int_{-\tau}^0 \sum_{n=0}^{N-1} y_n(t) K^\tau_n(\theta) \mathrm{d} \theta \Biggr).
\end{equation}
From \eqref{Eq_int_Kn}, it also follows that
\begin{equation} \label{GalerkinCalc_part2b}
\int_{-\tau}^0 \sum_{n=0}^{N-1} y_n(t) K^\tau_n(\theta) \mathrm{d} \theta =
\tau y_0(t) - \tau \sum_{n=1}^{N-1} y_n(t).
\end{equation}
By using \eqref{GalerkinCalc_part2b} in \eqref{GalerkinCalc_part2a}, we get
\begin{equation} \label{GalerkinCalc_part2}
\Bigl \langle \Pi_N \mathcal{F} \Bigl ( \sum_{n=0}^{N-1} y_n(t) \mathcal{K}^\tau_n \Bigr ), \mathcal{K}^\tau_j \Bigr \rangle_{\mathcal{H}} = F \Biggl( \sum_{n=0}^{N-1} y_n(t),
\tau y_0(t) - \tau \sum_{n=1}^{N-1} y_n(t) \Biggr).
\end{equation}
Now, by using \eqref{GalerkinCalc_part1} and \eqref{GalerkinCalc_part2} and recalling that $\|\mathcal{K}^\tau_n\|_{\mathcal{H}} = \|\mathcal{K}_n\|_{\mathcal{E}}$, we obtain from Eq.~\eqref{GalerkinCalc_v1} the following explicit form of the $N$-dimensional Galerkin system \eqref{Eq_DDE_Galerkin}:
\begin{equation} \label{Galerkin_AnalForm}
\begin{aligned}
\frac{\mathrm{d} y_j}{\mathrm{d} t} & = \frac{1}{\|\mathcal{K}_j\|_{\mathcal{E}}^2 } \sum_{n=0}^{N-1} \Big( a + b K_n(-1) + c \tau (2 \delta_{n,0} - 1) \\
& \hspace{8em} + \frac{2}{\tau}\sum_{k=0}^{n-1} a_{n,k} \left( \delta_{j,k} \|\mathcal{K}_j\|^2_{\mathcal{E}} - 1 \right) \Big) y_n(t) \\
& \hspace{1em} + \frac{1}{\|\mathcal{K}_j\|_{\mathcal{E}}^2} F \left( \sum_{n=0}^{N-1} y_n(t), \tau y_0(t) - \tau \sum_{n=1}^{N-1} y_n(t) \right), \;\; 0\leq j\leq N-1.
\end{aligned}
\end{equation}
For later usage, we rewrite the above Galerkin system into the following compact form:
\begin{equation} \label{Galerkin_cptForm}
\boxed{\frac{\mathrm{d} \boldsymbol{y}}{\mathrm{d} t} = A \boldsymbol{y} + G (\boldsymbol{y}),}
\end{equation}
where $A \boldsymbol{y}$ denotes the linear part of Eq.~\eqref{Galerkin_AnalForm}, and $G(\boldsymbol{y})$ the nonlinear part. Namely, $A$ is the $N\times N$ matrix whose elements are given by
\begin{equation} \label{eq:A}
\boxed{
\begin{aligned}
(A)_{j,n} & = \frac{1}{\|\mathcal{K}_j\|_{\mathcal{E}}^2 }\Big(a + b K_n(-1) + c \tau (2 \delta_{n,0} - 1) \\
& \hspace{8em}+ \frac{2}{\tau}\sum_{k=0}^{n-1} a_{n,k} \left( \delta_{j,k} \|\mathcal{K}_j\|^2_{\mathcal{E}} - 1 \right ) \Big),
\end{aligned}
}
\end{equation}
where $j, n = 0, \cdots, N-1$, and the nonlinear vector field $G \colon \mathbb{R}^N \rightarrow \mathbb{R}^N$ is given componentwise
by
\begin{equation} \label{eq:G}
\boxed{G_j(\boldsymbol{y}) = \frac{1}{\|\mathcal{K}_{j}\|_{\mathcal{E}}^2} F \left( \sum_{n=0}^{N-1} y_n(t), \tau y_0(t) - \tau \sum_{n=1}^{N-1} y_n(t) \right), \; 0\leq j\leq N-1.}
\end{equation}
\medskip
\begin{rem} \label{Rmk_Galerkin_forcing}
When a time-dependent forcing $g(t)$ is added to the RHS of the DDE \eqref{Eq_DDE}, the only change in the corresponding Galerkin system is to add a term $\|\mathcal{K}_{j}\|_{\mathcal{E}}^{-2} g(t)$ to each $G_j$ in \eqref{eq:G}. Recall also from Remark~\ref{Rmk_forcing} that it is sufficient to require that $g\in L_{\mathrm{loc}}^2(\mathbb{R}; \mathbb{R})$ in order to ensure the convergence result of the Galerkin system over any finite interval. \qed
\end{rem}
\section{Approximation of chaotic dynamics: Numerical results} \label{Sect_Numerics}
In this section, we report on the performance of our Galerkin scheme in approximating quasi-periodic and
chaotic dynamics.
In the case of the latter, it is well known that any finite-time uniform convergence result --- such as the one given by \eqref{uniform_conv_est_engest_onF} and obtained in Corollary \ref{Cor_DDE_local_Lip_Case2} --- becomes less useful, due to the sensitive dependence on initial data.
In this case, when individual solutions diverge exponentially, it is natural to consider instead the approximation of the strange attractor or of the statistics of the dynamics. More generally, the approximation of meaningful invariant probability measures supported by the strange attractor is of primary interest in chaotic dynamics. Nonlinear DDEs, like those considered here, are known to support such probability measures once a global attractor is known to exist \cite{chekroun_glatt-holtz}.
If, in addition to the finite-time uniform convergence \eqref{uniform_conv_est_engest_onF},
a uniform dissipativity assumption is satisfied, then any sequence of invariant measures associated with the Galerkin approximation converges weakly to an invariant measure of the full system; see \cite[Thm.~2.2]{wang2009approximating}.
The uniform dissipativity assumption referred to in \cite{wang2009approximating} is satisfied, in the case of strongly dissipative systems, if one can establish --- uniformly in $N$ --- the existence of an absorbing ball for the Galerkin reduced systems in another separable Hilbert space $\mathcal{V}$ that is compactly embedded in $\mathcal{H}$.
We show in Sections~\ref{Sec_nearly-brownian} and \ref{Sect_bimodal}
below that, for two simple nonlinear scalar DDEs --- even in instances in which the
uniform dissipativity assumption of \cite[Thm.~2.2]{wang2009approximating} is not guaranteed --- our Galerkin scheme is still able to approximate significant statistical properties of the chaotic dynamics, or the strange attractor itself. Finally, we illustrate in Section~\ref{Sect_ENSO}, {\mg for a highly idealized ENSO model from climate dynamics,} that our approach also works in the case of periodically forced DDEs with multiple delays.
\subsection{``Nearly-Brownian'' chaotic dynamics.}\label{Sec_nearly-brownian}
We consider here a modified version of the DDE analyzed in \cite{sprott2007simple} with a sinusoidal nonlinearity.
The modification consists of replacing the discrete delay by distributed delays. As we will see, this modification does not affect the main dynamical properties identified in \cite{sprott2007simple} in the case of a discrete delay, once the proper parameter values are chosen.
More precisely, we consider the following DDE
\begin{equation}\label{Eq_sindel}
\dot{x}=a\sin\Big(\int_{t-\tau}^tx(s) \mathrm{d} s\Big),
\end{equation}
with $a=0.5$ and $\tau=5.5$. This example fits within the general class discussed in Section \ref{sec_ex1}, to which the rigorous convergence results described in Section \ref{Subsect_DDE_Galerkin} in an abstract setting, and in Section \ref{Sec_Galerkin_analytic} more concretely, do apply.
As pointed out in the introduction of this section, such finite-time uniform convergence results
are essential in general, but they are not the ones we are necessarily looking for
in approximating chaotic dynamics. We rely, therefore, on careful numerical simulations
to explore the performance of our Koornwinder-polynomial--based Galerkin systems to approximate the statistical features of the chaotic dynamics in these examples.
A sample trajectory of Eq.~\eqref{Eq_sindel} is shown in black in Fig.~\ref{Trajectories_sindel}. While the governing DDE is perfectly deterministic, the visual resemblance of this trajectory to a sample path of Brownian motion is obvious. A trajectory with the same constant initial history over the interval $[-\tau, 0)$, but obtained by solving a 10D-approximation by our Galerkin scheme of the DDE, is shown as the red curve in the figure. It is clear that individual trajectories do have the same overall behavior, which is nearly Brownian, but the pointwise approximation of the exact trajectory by the approximate one is not good.
To study instead the statistics of the solutions,
we have estimated --- for a collection of $n=10^4$ initial histories of constant value drawn uniformly in $[-1,1]$ --- the empirical probability
density function (PDF) of the corresponding $x(t)$-values at a given time $t$.
Figure \ref{Sine_model_statistics} reports on the numerical results at $t=2680$ when $x(t)$ is simulated from Eq.~\eqref{Eq_sindel} by forward Euler integration (black curve) with a time step of $\Delta t = \tau/2^{10}$, and when $x(t)$ is approximated by a 10D-Galerkin approximation (red curve) obtained from the analytic formulas \eqref{Galerkin_cptForm}--\eqref{eq:G} applied to Eq.~\eqref{Eq_sindel}.
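For concreteness, the forward Euler discretization of Eq.~\eqref{Eq_sindel} used for the direct simulations can be sketched as follows; the distributed delay is handled with a circular buffer holding the last $\tau$ time units of the solution, whose running sum approximates $\int_{t-\tau}^t x(s)\,\mathrm{d}s$ by the rectangle rule. The constant initial history value and the integration horizon below are illustrative assumptions.

```python
import numpy as np

def euler_sindel(a=0.5, tau=5.5, x0=0.1, t_end=100.0, n_sub=2**10):
    """Forward Euler for x'(t) = a*sin( int_{t-tau}^t x(s) ds ),
    with constant initial history x(s) = x0 on [-tau, 0]."""
    dt = tau / n_sub                       # time step, as in the text
    hist = np.full(n_sub, x0)              # samples over the last tau time units
    integral = hist.sum() * dt             # rectangle-rule quadrature of the history
    x = x0
    n_steps = int(round(t_end / dt))
    out = np.empty(n_steps + 1)
    out[0] = x
    head = 0                               # index of the oldest sample
    for k in range(n_steps):
        x = x + dt * a * np.sin(integral)  # explicit Euler step
        integral += (x - hist[head]) * dt  # slide the delay window by one step
        hist[head] = x
        head = (head + 1) % n_sub
        out[k + 1] = x
    return out
```

The circular buffer avoids recomputing the quadrature at each step, so the cost per step is $O(1)$ rather than $O(n_{\mathrm{sub}})$.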
The Galerkin system {\mg of ODEs} is integrated using a semi-implicit Euler method
that still uses $\Delta t = \tau/2^{10}$, but in which the linear part $A y$ is treated implicitly, {\mg while} the nonlinear term $G(y)$ is treated explicitly. The approximate solution $x_N(t)$, provided by an $N$-dimensional Galerkin system of the form \eqref{Galerkin_cptForm}, is obtained by using the expansion \eqref{x_t expand} into Koornwinder polynomials. More precisely,
$x_N(t)$ is obtained as the state part of $u_N$ given by \eqref{x_t expand} which, thanks to the normalization property $K_n^\tau(0) = 1$ given in~\eqref{Eq_normalization}, reduces to
\begin{equation}
x_N(t) = \sum_{j = 0}^{N-1} y_j(t),
\end{equation}
where $y:=(y_0, \cdots, y_{N-1})$ is the vector solution of \eqref{Galerkin_cptForm}.
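The semi-implicit time stepping just described amounts to solving $(I - \Delta t\, A)\, y_{k+1} = y_k + \Delta t\, G(y_k)$ at each step. A minimal sketch, with the matrix $A$ and the map $G$ of Eq.~\eqref{Galerkin_cptForm} assumed given:

```python
import numpy as np

def integrate_galerkin(A, G, y0, dt, n_steps):
    """Semi-implicit Euler for y' = A y + G(y):
    (I - dt*A) y_{k+1} = y_k + dt*G(y_k), i.e. the linear part is
    treated implicitly and the nonlinear part explicitly.
    Returns the trajectory of x_N(t) = sum_j y_j(t)."""
    N = len(y0)
    M = np.eye(N) - dt * A                 # constant matrix: formed once
    y = np.array(y0, dtype=float)
    xN = np.empty(n_steps + 1)
    xN[0] = y.sum()
    for k in range(n_steps):
        y = np.linalg.solve(M, y + dt * G(y))
        xN[k + 1] = y.sum()                # K_n(0) = 1 collapses the expansion
    return xN
```

In practice one would factor $M$ once (e.g. by LU) instead of calling a dense solve at every step; the sketch keeps the generic solve for brevity.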
Also shown in blue in panels (a, c) of Fig.~\ref{Sine_model_statistics}, is a Gaussian distribution with the same mean and standard deviation, both of which are estimated from the $x(t)$-values at $t=2680$ of the simulated DDE solutions. In both cases, the empirical distributions, as obtained from the simulated DDE solution or as obtained by using a
Galerkin approximation with $N=10$, closely follow a Gaussian law.
\begin{figure}[hbtp]
\centering
\includegraphics[height=0.4\textwidth,width=.9\textwidth]{Trajectories_sindel.pdf}
\caption{{\footnotesize {\mg Trajectory $x(t)$ simulated by} the DDE \eqref{Eq_sindel} (black curve), and its 10D-Galerkin approximation {\mg $x_N(t)$, with $N = 10$} (red curve), obtained by the method described in Section~\ref{Sec_Galerkin_analytic}.}} \label{Trajectories_sindel}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[height=0.7\textwidth,width=.9\textwidth]{sine_statistics_1E4runs.pdf}
\caption{{\footnotesize {\mg (a, c):} Probability density functions (PDFs); and (b, d): standard deviations $\sigma$, as estimated from the simulations of the DDE \eqref{Eq_sindel} (black curve {\mg and black circles, in the left panels and the right panels, respectively),} and its 10D-Galerkin approximation (red curve {\mg and red circles)} by the method described in Section~\ref{Sec_Galerkin_analytic}, respectively. In each of the panels (a) and (c), the results are compared with a Gaussian distribution estimated by standard analytic formulas (blue curves), and in each of the panels (b) and (d), the results are compared with the best linear regression fit (blue curve) of $\log(\sigma)$ versus $\log(t)$, yielding the slope and exponent reported therein.}} \label{Sine_model_statistics}
\end{figure}
For both $x(t)$, as simulated by numerically solving Eq.~\eqref{Eq_sindel}, and $x_N(t)$, as obtained by using a
Galerkin approximation with $N=10$, the standard deviations of the corresponding collection of trajectories
are shown versus time in panels (b) and (d) of Fig.~\ref{Sine_model_statistics}. The best linear regression fit of $\log(\sigma)$ versus $\log(t)$ gives $\sigma=0.195 t^{0.479}$, in the case of Eq.~\eqref{Eq_sindel}, and $\sigma=0.190 t^{0.480}$, in the case of the 10D-Galerkin approximation.
In both cases, the slopes of the fitted lines indicate that the {\mg deterministic dynamics of Eq.~\eqref{Eq_sindel}} mimics that of Brownian motion, for which the slope would be $0.5$, with a diffusion
coefficient $D=\sigma^2/t$. In particular, the solutions do not stay within any bounded subset; the dynamics thus violates any dissipation criterion, while still possessing an attractor with strange behavior (not shown).
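The power-law fits $\sigma = C\, t^{p}$ quoted above are ordinary least-squares regressions in log-log coordinates; a sketch, checked here on a synthetic series with exact Brownian scaling (the data below are illustrative, not the simulated ensemble):

```python
import numpy as np

def fit_power_law(t, sigma):
    """Fit sigma ~ C * t**p by linear regression of log(sigma) on log(t)."""
    slope, intercept = np.polyfit(np.log(t), np.log(sigma), 1)
    return np.exp(intercept), slope        # (prefactor C, exponent p)

# Synthetic check: sigma = 0.2 * sqrt(t) has exact Brownian scaling p = 1/2.
t = np.linspace(1.0, 100.0, 200)
C, p = fit_power_law(t, 0.2 * np.sqrt(t))
```

For Brownian motion one would recover $p = 1/2$ and $C = \sqrt{2D}$, with $D$ the diffusion coefficient.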
To the best of our knowledge, the 10D-Galerkin approximation computed here {\mg thus provides}
the first example of a chaotic system of ODEs whose solutions exhibit a statistical behavior close to that of Brownian motion.
\subsection{Bimodal chaotic dynamics with low-frequency variability.} \label{Sect_bimodal}
The model studied in this subsection,
\begin{equation}\label{Eq_DDE_chafee}
\dot{x}=a x(t-\tau) -b x(t-\tau)^3,
\end{equation}
is also based on \cite{sprott2007simple}; the parameter values used here are $a=0.5$, $b=20$ and $\tau=3.35$.
Remark~\ref{Rmk_Chafee} at the end of this subsection discusses similarities between the dynamics of the model above and important aspects of ENSO dynamics.
\begin{figure}[hbtp]
\centering
\includegraphics[height=0.5\textwidth,width=.9\textwidth]{Super_chafee_6D_6e6_half.pdf}
\caption{{\footnotesize The attractor associated with Eq.~\eqref{Eq_DDE_chafee} (left panel) and its approximation obtained from a 6D-Galerkin approximation (right panel).}} \label{Super_chafee}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[height=0.4\textwidth,width=.8\textwidth]{PSD_chafee.pdf}
\caption{{\footnotesize Power spectral density of $x(t)$ as simulated from Eq.~\eqref{Eq_DDE_chafee}, and from a 6D-Galerkin approximation derived from the analytic formulas described in Section \ref{Sec_Galerkin_analytic} and applied to Eq.~\eqref{Eq_DDE_chafee}.}} \label{PSD_chafee}
\end{figure}
In contrast to the DDE considered in the previous {\mg subsection,} Eq.~\eqref{Eq_DDE_chafee} does not fit directly within the general framework of Section \ref{Subsect_DDE_Galerkin}, for which rigorous convergence results are available.
The discrete lag present in the cubic term leads to complications for a rigorous analysis. Replacing this lag effect by a distributed one, as in Eq.~\eqref{Eq_sindel}, would place the DDE of Eq.~\eqref{Eq_DDE_chafee} into the class considered in Section~\ref{sec_ex2}, for which finite-time uniform convergence results are guaranteed.
But even then, we cannot be assured {\it a priori} of an effective approximation of statistical features of the dynamics, as discussed above.
The purpose of this subsection is to show that, even in such a borderline case with respect to
the theory presented in this article, statistical and even topological features can still be
remarkably well approximated by the Galerkin systems of Section \ref{Sec_Galerkin_analytic}, when appropriately modified to handle the case of discrete delay in the nonlinear terms.\footnote{This modification consists just of noting that, by replacing the nonlinear term with distributed delays in \eqref{Eq_DDE} by $F(x(t), x(t-\tau))$, the identity \eqref{GalerkinCalc_part2a} becomes
\begin{equation}
\Bigl \langle \Pi_N \mathcal{F} \Bigl ( \sum_{n=0}^{N-1} y_n(t) \mathcal{K}^\tau_n \Bigr ), \mathcal{K}^\tau_j \Bigr \rangle_{\mathcal{H}} = F \Biggl( \sum_{n=0}^{N-1} y_n(t), \sum_{n=0}^{N-1} y_n(t) K_n(-1) \Biggr);
\end{equation}
{\mg the latter, in turn,} leads to the corresponding change in the formula \eqref{eq:G} for the nonlinear vector field in Eq.~\eqref{Galerkin_cptForm}.
\label{footnote_discrete_delay}}
As shown in the left panel of Figure \ref{Super_chafee} in natural delayed coordinates, the strange attractor associated with Eq.~\eqref{Eq_DDE_chafee} (black) has a nearly symmetric topological shape and consists of two fairly high-density ``islands'' connected by a foliation of heteroclinic-like orbits. These properties of the attractor are very well
captured by a 6D-Galerkin approximation (right panel, red).
These two high-density islands, along with the lower-density areas of heteroclinic-like connections {\mg between the two,} give rise to a bimodal chaotic behavior and, in turn, to an interesting time variability. Figure~\ref{PSD_chafee}
plots the results of a standard numerical estimation of the spectral density --- also known in the engineering literature as the power spectrum --- associated with $x(t)$, as directly simulated using Eq.~\eqref{Eq_DDE_chafee} (black curve) and as approximated by a 6D-Galerkin
approximation (red curve). In both cases, the power spectra are estimated from the corresponding autocorrelation functions \cite{eckmann_ruelle,Ghil2002}.
The numerical results show that the spectrum of $x(t)$ contains two broadband peaks at low frequencies that stand out above an exponentially decaying background. As shown in Fig.~\ref{PSD_chafee}, these
two peaks, as well as the exponential background, are strikingly well approximated by a 6D-Galerkin approximation, in
both amplitude and frequency, as well as in the rate of decay for high frequencies.
The approximation by a truly low-dimensional, 6D-Galerkin model of these key features of the power spectrum of the solutions to the DDE~\eqref{Eq_DDE_chafee} has deep dynamical implications in terms of the Ruelle-Pollicott resonances and mixing properties of the dynamics on the attractor \cite{Chek_al14_RP}. These implications are beyond the scope of this article but {\mg we intend to discuss them} elsewhere.
\begin{rem} \label{Rmk_Chafee}
Equation~\eqref{Eq_DDE_chafee} can actually be seen as a highly idealized ENSO model with memory effects; see \cite{BH89,Cane_al90,Dijkstra05,GCStep15,Galanti_al00, GZT08, Sieber2014, Munnich_al91, TSCJ94,Zaliapin_al10,Zivkovic_al13} and references therein. Indeed, for the given parameter values, the solution of Eq.~\eqref{Eq_DDE_chafee} admits two metastable states, as can be seen from the two islands in the attractor shown in Figure~\ref{Super_chafee}. These two metastable states {\mg are analogous to the warm, El Ni\~no and the cold, La Ni\~na} states in ENSO dynamics. Moreover, the two broadband peaks at low frequencies in the
spectral density of the solution, as shown in Figure~\ref{PSD_chafee}, are also reminiscent of the quasi-quadrennial and the quasi-biennial modes in ENSO dynamics \cite{MG_AWR'00, Ghil2002, Jiang95}. The important role of such low-frequency variability in the understanding and prediction of climate dynamics on various time scales was emphasized in \cite{CKG11,HD_MG'05, Ghil2001, MG_AWR'00, MG_AWR'02} and references therein.
\qed
\end{rem}
\subsection{A highly idealized ENSO model with memory effects} \label{Sect_ENSO}
In this section, we consider the following {\mg periodically} forced DDE with two discrete delays \cite{GZT08}:
\begin{equation} \label{ENSO_model}
\dot{x}= -\alpha \tanh(\kappa x(t-\tau_1)) + \beta \tanh(\kappa x(t-\tau_2)) + \gamma \cos(2\pi t).
\end{equation}
This equation is a slightly modified version of the model used in \cite{TSCJ94} for the study of the interaction between the seasonal forcing and the intrinsic ENSO variability.\footnote{The nonlinearity used in \cite{TSCJ94} consists of a sigmoid made of two $\tanh$ segments joined continuously by a line segment, cf. \cite[Eq.~(9)]{Munnich_al91}; this sigmoid is simplified here to be just a hyperbolic tangent function.} We also refer to \cite{BH89,Cane_al90,Dijkstra05,GCStep15,Galanti_al00,GZT08, Munnich_al91,TSCJ94,Zaliapin_al10,Zivkovic_al13,GZ15} and references therein for other models with retarded arguments used in this context.
The purpose of this subsection is to
show that the Galerkin scheme developed in this article can also be easily adapted to deal with DDEs with multiple delays, as well as with non-autonomous, forced DDEs. For this purpose, Eq.~\eqref{ENSO_model} is placed in a quasi-periodic regime {\mg by choosing the parameter values $ \alpha = 2.1, \beta = 1.05, \gamma =3, \kappa = 10, \tau_1 = 0.95,$ and $\tau_2 = 5.13$.}
We outline now the necessary modifications to the nonlinear term $G(y)$ in the Galerkin system \eqref{Galerkin_cptForm} for the case of multiple discrete delays. As a direct generalization of the case with a single discrete delay, given in footnote~\ref{footnote_discrete_delay}, the nonlinearity $F$ in \eqref{Eq_DDE} takes the form $F(x(t), x(t-\tau_1), \cdots, x(t-\tau_p))$, where $0 < \tau_1 < \cdots < \tau_p=:\tau$. In this more general situation, the identity \eqref{GalerkinCalc_part2a} becomes
\begin{equation} \begin{aligned}
\hspace*{-1em}\Bigl \langle \Pi_N \mathcal{F} \Bigl ( \sum_{n=0}^{N-1} y_n(t) \mathcal{K}^\tau_n \Bigr ), \mathcal{K}^\tau_j \Bigr \rangle_{\mathcal{H}} & = F \Biggl( \sum_{n=0}^{N-1} y_n(t), \sum_{n=0}^{N-1} y_n(t) K_n(-\frac{\tau_1}{\tau}), \cdots, \\
& \qquad \sum_{n=0}^{N-1} y_n(t) K_n(-\frac{\tau_{p-1}}{\tau}), \sum_{n=0}^{N-1} y_n(t) K_n(-1) \Biggr),
\end{aligned} \end{equation}
which leads to the corresponding change in the formula \eqref{eq:G} for the nonlinear vector field in Eq.~\eqref{Galerkin_cptForm}. Note also that the forcing term is dealt with in Remark~\ref{Rmk_Galerkin_forcing}.
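A sketch of the resulting nonlinear vector field for multiple discrete delays; the array `Kn_at`, holding the precomputed values $K_n(-\tau_i/\tau)$ for each delay, and the squared norms `norm_sq` are assumed to be supplied from the earlier formulas:

```python
import numpy as np

def G_multi_delay(y, F, delays, tau, Kn_at, norm_sq):
    """Nonlinear part for F(x(t), x(t - tau_1), ..., x(t - tau_p)).
    Kn_at[i, n] = K_n(-tau_i / tau) for the i-th delay (the last delay is
    tau_p = tau, so Kn_at[-1, n] = K_n(-1)); the state argument x_N(t)
    uses the normalization K_n(0) = 1."""
    args = [y.sum()]                       # x_N(t)
    for i in range(len(delays)):
        args.append(Kn_at[i] @ y)          # x_N(t - tau_i)
    return F(*args) / norm_sq
```

A time-dependent forcing $g(t)$, when present, is simply added to each component after division by $\|\mathcal{K}_j\|_{\mathcal{E}}^{2}$, as in Remark~\ref{Rmk_Galerkin_forcing}.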
Again, we compare the DDE's attractor in the left panel of Fig.~\ref{Fig_ENSO_attractors} with the attractor obtained from an associated Galerkin system, in the right panel. Despite the complexity of the DDE's attractor, a $40$-dimensional Galerkin system can already provide a very accurate reconstruction of this attractor. The need for a higher dimensionality of the Galerkin approximation in this case arises from the presence of incommensurable frequencies in the periodically forced model, namely the seasonal cycle and the internal frequencies \cite{Dijkstra05, JNG'94, MG_AWR'00, MG_AWR'02, TSCJ94}.
\begin{figure}[hbtp]
\centering
\includegraphics[height=0.4\textwidth,width=.8\textwidth]{tanh_2delays_attractor.pdf}
\caption{{\footnotesize The attractor associated with Eq.~\eqref{ENSO_model} (left panel) and its approximation obtained from a 40D-Galerkin approximation (right panel).}} \label{Fig_ENSO_attractors}
\end{figure}
\section*{Acknowledgments}
This work has been partially supported by the Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) grant N00014-12-1-0911 (MDC, HL and MG) and by National Science Foundation (NSF) grant DMS-1049253 (SW).
% --- End of arXiv:1509.02945 (https://arxiv.org/abs/1509.02945), submitted 2015-09-11 ---
% Title: Low-Dimensional Galerkin Approximations of Nonlinear Delay Differential Equations
% Subjects: Chaotic Dynamics (nlin.CD); Classical Analysis and ODEs (math.CA); Dynamical Systems (math.DS)
% Abstract: This article revisits the approximation problem of systems of nonlinear delay differential equations (DDEs) by a set of ordinary differential equations (ODEs). We work in Hilbert spaces endowed with a natural inner product including a point mass, and introduce polynomials orthogonal with respect to such an inner product that live in the domain of the linear operator associated with the underlying DDE. These polynomials are then used to design a general Galerkin scheme for which we derive rigorous convergence results and show that it can be numerically implemented via simple analytic formulas. The scheme so obtained is applied to three nonlinear DDEs, two autonomous and one forced: (i) a simple DDE with distributed delays whose solutions recall Brownian motion; (ii) a DDE with a discrete delay that exhibits bimodal and chaotic dynamics; and (iii) a periodically forced DDE with two discrete delays arising in climate dynamics. In all three cases, the Galerkin scheme introduced in this article provides a good approximation by low-dimensional ODE systems of the DDE's strange attractor, as well as of the statistical features that characterize its nonlinear dynamics.
% --- arXiv:0710.0829 (https://arxiv.org/abs/0710.0829) ---
\title{Computing the Conditioning of the Components of a Linear Least Squares Solution}
\begin{abstract}
In this paper, we address the accuracy of the results for the overdetermined full rank linear least squares problem. We recall theoretical results obtained in Arioli, Baboulin and Gratton, SIMAX 29(2):413--433, 2007, on conditioning of the least squares solution and the components of the solution when the matrix perturbations are measured in Frobenius or spectral norms. Then we define computable estimates for these condition numbers and we interpret them in terms of statistical quantities. In particular, we show that, in the classical linear statistical model, the ratio of the variance of one component of the solution to the variance of the right-hand side is exactly the condition number of this solution component when perturbations on the right-hand side are considered. We also provide fragment codes using LAPACK routines to compute the variance-covariance matrix and the least squares conditioning and we give the corresponding computational cost. Finally, we present a small historical numerical example that was used by Laplace in \emph{Th\'eorie Analytique des Probabilit\'es}, 1820, for computing the mass of Jupiter, and experiments from the space industry with real physical data.
\end{abstract}
\section{Introduction}
We consider the linear least squares problem (LLSP)
$\min_{x \in \mathbb{R}^n}\|Ax-b\|_2$, where $b \in \mathbb{R}^m$
and $A \in \mathbb{R}^{m\times n}$ is a matrix of full column rank $n$.\\
Our concern comes from the following observation: in many parameter estimation
problems, there may be random errors in the observation vector $b$ due to
instrumental measurements as well as roundoff errors in the algorithms.
The matrix $A$ may be subject to errors in its computation
(approximation and/or roundoff errors).
In such cases, while the condition number of the matrix
$A$ provides some information
about the sensitivity of the LLSP to perturbations, a single global
conditioning quantity is often not relevant enough since we may have
significant disparity between the errors in the solution components.
We refer to the last section of the manuscript for
illustrative examples.\\
There are several results for analyzing
the accuracy of the LLSP by components.
For linear systems $Ax=b$ and for LLSP,
~\cite{CHA.IPS.95} defines so-called componentwise condition numbers that correspond
to amplification factors of the relative errors in solution components due to
perturbations in data $A$ or $b$ and explains how to estimate them.
For LLSP,~\cite{KEN.LAU.98} proposes to estimate componentwise condition
numbers by a statistical method. More recently,~\cite{ABG.07} developed
theoretical results on conditioning of linear functionals of
LLSP solutions.\\
The main objective of our paper is to provide computable quantities of these
theoretical values in order to assess the accuracy of an LLSP solution or some
of its components. To achieve this goal, traditional tools for the numerical
linear algebra practitioner are condition numbers or backward errors whereas
the statistician usually refers to variance or covariance. Our purpose here is
to show that these mathematical quantities coming either from numerical
analysis or statistics are closely related. In particular, we will show in
Equation~(\ref{eq:1}) that, in the classical linear statistical model, the
ratio of the variance of one component of the solution to the variance of the
right-hand side is exactly the condition number of this component
when perturbations on the right-hand side are considered. In that
sense, we attempt to clarify, similarly to~\cite{HIGH.STEW.87}, the analogy
between quantities handled by the linear algebra and the statistical approaches
in linear least squares. Then we define computable estimates for these
quantities and explain how they can be computed using the standard libraries
LAPACK or ScaLAPACK.
This paper is organized as follows.
In Section~\ref{sec:theoback}, we recall and exploit some results of practical interest
coming from~\cite{ABG.07}. We also define the condition
numbers of an LLSP solution or one component of it.
In Section~\ref{sec:link}, we recall some definitions and results related to
the linear statistical model for LLSP, and we interpret
the condition numbers in terms of statistical quantities.
In Section~\ref{sec:lapack} we provide practical formulas and
FORTRAN code fragments for computing
the variance-covariance matrix and LLSP condition numbers using LAPACK.
In Section~\ref{sec:numerics}, we propose two numerical examples
that show the relevance of the proposed quantities and their practical
computation. The first test case is a historical example from Laplace
and the second example is related to gravity field computations.
Finally some concluding remarks are given in Section~\ref{sec:concl}.\\
Throughout this paper we will use the following notations.
We use the Frobenius norm $\nfro{.}$
and the spectral norm $\neuc{.}$ on matrices
and the usual Euclidean norm $\neuc{.}$ on vectors.
$A^{\dagger}$ denotes the Moore-Penrose pseudo inverse of $A$,
the matrix $I$ is the identity matrix and $e_i$ is the $i$-th canonical vector.
\section{Theoretical background for linear least squares conditioning}\label{sec:theoback}
Following the notations in~\cite{ABG.07}, we consider the function
\begin{equation}
\label{funct}
\begin{array}{c c c c}
g\ : &
\mathbb{R} ^{m \times n} \times \mathbb{R} ^m & \longrightarrow & \mathbb{R} ^k
\\
& A,b & \longmapsto & g(A,b)=L^{T}x(A,b)=L^T(A^TA)^{-1}A^Tb,\\
\end{array}
\end{equation}
where $L$ is an $n \times k$ matrix, with $k \leq n$.
Since $A$ has full rank $n$, $g$ is continuously F-differentiable in a
neighbourhood of $(A,b)$ and we
denote by $g'$ its F-derivative.\\
Let $\alpha$ and $\beta$ be two positive real numbers.
In the present paper we consider
the Euclidean norm for the solution space $\mathbb{R}^k$.
For the data space $\mathbb{R}^{m\times n}\times \mathbb{R}^m$,
we use the product norms defined by
$$\norm{(A,b)}_{\rm{F~or~2}}=
\sqrt{\alpha^2\norm{A}_{\rm{F~or~2}}^2+\beta^2\neuc{b}^2},~\alpha,\beta>0.$$
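For instance, this product norm can be evaluated directly; a small sketch, in which the choice between the Frobenius and spectral norms on the matrix part is passed as an argument:

```python
import numpy as np

def product_norm(A, b, alpha, beta, ord="fro"):
    """||(A, b)|| = sqrt(alpha^2 * ||A||^2 + beta^2 * ||b||_2^2),
    with ord = 'fro' (Frobenius) or 2 (spectral) for the matrix norm."""
    nA = np.linalg.norm(A, ord)
    return np.sqrt(alpha**2 * nA**2 + beta**2 * np.dot(b, b))
```

Large `alpha` (resp. `beta`) makes perturbations of $A$ (resp. $b$) expensive in this norm, which is how the special cases of perturbing only $b$ or only $A$ are recovered below.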
Following~\cite{GEURTS}, the absolute condition number of $g$ at the point
$(A,b)$ using the product norm defined above is given by:
$$\kappa_{g,{\rm{F~or~2}}}(A,b)
=\max_{(\Delta A,\Delta b)}
\frac{\neuc{g'(A,b).(\Delta A,\Delta b)}}{\norm{(\Delta A,\Delta b)}_{\rm{F~or~2}}}.$$
The corresponding relative condition number of $g$ at $(A,b)$ is expressed by
$$\kappa_{g,{\rm{F~or~2}}}^{(rel)}(A,b)=
\frac{\kappa_{g,{\rm{F~or~2}}}(A,b)~\norm{(A,b)}_{\rm{F~or~2}}}{\neuc{g(A,b)}}.$$
To address the special cases where only $A$ (resp. $b$) is perturbed,
we also define the quantities
$\kappa_{g,{\rm{F~or~2}}}(A)
=\max_{\Delta A}
\frac{\neuc{\frac{\partial g}{\partial A}(A,b).\Delta A}}{\norm{\Delta A}_{\rm{F~or~2}}}$
(resp. $\kappa_{g,2}(b)
=\max_{\Delta b}
\frac{\neuc{\frac{\partial g}{\partial b}(A,b).\Delta b}}{\norm{\Delta b}_2}$).
\begin{Remark}
{\em
The product norm for the data space is very flexible;
the coefficients $\alpha$ and $\beta$ allow us to monitor
the perturbations on $A$ and $b$.
For instance, a large value of $\alpha$ (resp. $\beta$) corresponds to a
situation where mainly $b$ (resp. $A$) is perturbed.
In particular, we will address the special cases where
only $b$ (resp. $A$) is perturbed by choosing the $\alpha$ and $\beta$
parameters as
$\alpha=+\infty~{\rm and}~\beta=1$
(resp. $\alpha=1~{\rm and}~\beta=+\infty$) since we have
$$\lim_{\alpha \rightarrow +\infty}\kappa_{g,\rm{F~or~2}}(A,b)
=\frac{1}{\beta}\kappa_{g,2}(b)
~{\rm and}~
\lim_{\beta \rightarrow +\infty}\kappa_{g,\rm{F~or~2}}(A,b)
=\frac{1}{\alpha}\kappa_{g,\rm{F~or~2}}(A).
$$
This can be justified as follows:
\begin{eqnarray*}
\kappa_{g,\rm{F~or~2}}(A,b) & = &
\max_{(\Delta A,\Delta b)}
\frac{\neuc{\frac{\partial g}{\partial A}(A,b).\Delta A
+\frac{\partial g}{\partial b}(A,b).\Delta b}}
{\sqrt{\alpha^2\norm{\Delta A}_{\rm{F~or~2}}^2+\beta^2\neuc{\Delta b}^2}}\\
& = &
\max_{(\Delta A,\Delta b)}
\frac{\neuc{\frac{\partial g}{\partial A}(A,b).\frac{\Delta A}{\alpha}
+\frac{\partial g}{\partial b}(A,b).\frac{\Delta b}{\beta}}}
{\sqrt{\norm{\Delta A}_{\rm{F~or~2}}^2+\neuc{\Delta b}^2}}.\\
\end{eqnarray*}
The above expression is the operator norm of a linear map depending
continuously on $\alpha$, so we get
$$\lim_{\alpha \rightarrow +\infty}\kappa_{g,\rm{F~or~2}}(A,b)
=\max_{(\Delta A,\Delta b)}
\frac{\neuc{\frac{\partial g}{\partial b}(A,b).\frac{\Delta b}{\beta}}}
{\sqrt{\norm{\Delta A}_{\rm{F~or~2}}^2+\neuc{\Delta b}^2}}
=\max_{\Delta b}
\frac{\neuc{\frac{\partial g}{\partial b}(A,b).\frac{\Delta b}{\beta}}}
{\neuc{\Delta b}}
=\frac{1}{\beta}\kappa_{g,2}(b).
$$
The proof is the same for the case where $\beta=+\infty$.
}
\end{Remark}
The condition numbers related to $L^Tx(A,b)$ are referred to as {\bf partial condition
numbers} (PCN) of the LLSP with respect to the linear operator $L$
in~\cite{ABG.07}.\\
We are interested in computing the PCN for two special cases.
The first case is when $L$ is
the identity matrix (conditioning of the solution) and the second case is when $L$ is a canonical
vector $e_i$ (conditioning of a solution component). We can extract
from~\cite{ABG.07} two theorems that can lead to computable quantities in these
two special cases.\\
\begin{theo} \label{theobound}
In the general case ($L \in \mathbb{R}^{n\times k}$), the absolute
condition numbers of $g(A,b)=L^Tx(A,b)$ in the Frobenius and spectral norms can
be respectively bounded as follows
$$\frac{1}{\sqrt{3}}f(A,b) \leq \kappa_{g,F}(A,b) \leq f(A,b)$$
$$\frac{1}{\sqrt{3}}f(A,b) \leq \kappa_{g,2}(A,b) \leq \sqrt{2} f(A,b)$$
where
\begin{equation}\label{eq:equationforf(A,b)}
f(A,b)=
\left(\neuc{L^T(A^TA)^{-1}}^2 \frac{\neuc{r}^2}{\alpha^2}
+\neuc{L^T A^{\dagger}}^2 (\frac{\neuc{x}^2}{\alpha^2}
+\frac{1}{\beta^2})\right)^{\frac{1}{2}}.
\end{equation}
\end{theo}
\begin{theo}\label{corocond}
In the two particular cases:
\begin{enumerate}
\item $L$ is a vector ($L \in \mathbb{R}^n$), or
\item $L$ is the $n$-by-$n$ identity matrix ($L=I$)
\end{enumerate}
the absolute condition number of $g(A,b)=L^Tx(A,b)$ in the Frobenius norm is
given by the formula:
$$
\kappa_{g,F}(A,b)=\left(\neuc{L^T(A^TA)^{-1}}^2 \frac{\neuc{r}^2}{\alpha^2}
+\neuc{L^T A^{\dagger}}^2 (\frac{\neuc{x}^2}{\alpha^2}
+\frac{1}{\beta^2})\right)^{\frac{1}{2}}.
$$
\end{theo}
Theorem~\ref{corocond} provides the exact value for the condition number in the
Frobenius norm for our two cases of interest ($L=e_i$ and $L=I$). From
Theorem~\ref{theobound}, we observe that
\begin{equation}\label{eq:condfrob_or_condspec}
\frac{1}{\sqrt{3}} \kappa_{g,F}(A,b)
\leq
\kappa_{g,2}(A,b)
\leq
\sqrt{6}\kappa_{g,F}(A,b),
\end{equation}
which states that the partial condition number in the spectral norm is of the same
order of magnitude as the one in the Frobenius norm. In the remainder of this
paper, we focus on the partial condition number in the Frobenius norm only.\\
For the case $L=I$, the result of Theorem~\ref{corocond} is similar
to~\cite{GR.96} and~\cite[p. 92]{GEURTS}. The upper bound for
$\kappa_{g,2}(A,b)$ that can be derived from
Equation~(\ref{eq:condfrob_or_condspec}) is also the one obtained
in~\cite{GEURTS} when perturbations in $A$ are considered.\\
Let us denote by $\kappa_i(A,b)$ the condition number related to the component
$x_i$ in the Frobenius norm (i.e.\ $\kappa_i(A,b)=\kappa_{g,F}(A,b)$ where
$g(A,b)=e_i^Tx(A,b)=x_i(A,b)$). Then replacing $L$ by $e_i$ in
Theorem~\ref{corocond} provides an exact expression for computing
$\kappa_i(A,b)$:
\begin{equation}\label{eq:componentwise_formula}
\kappa_i(A,b)=\left(\neuc{e_i^T(A^TA)^{-1}}^2 \frac{\neuc{r}^2}{\alpha^2}
+\neuc{e_i^T A^{\dagger}}^2 (\frac{\neuc{x}^2}{\alpha^2}
+\frac{1}{\beta^2})\right)^{\frac{1}{2}}.
\end{equation}
$\kappa_i(A,b)$ will be referred to as {\bf the condition number of the
solution component $x_i$}.\\
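As an illustration, Equation~(\ref{eq:componentwise_formula}) can be evaluated in a few lines of NumPy. The sketch below (our own, on randomly generated data with $\alpha=\beta=1$) computes all the $\kappa_i(A,b)$ at once by taking row norms of $(A^TA)^{-1}$ and $A^{\dagger}$:

```python
# Sketch: componentwise condition numbers kappa_i(A,b) on random data.
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x                          # residual b - A x
Apinv = np.linalg.pinv(A)              # A^dagger, shape (n, m)
AtA_inv = np.linalg.inv(A.T @ A)       # (A^T A)^{-1}

alpha = beta = 1.0
# row i of AtA_inv is e_i^T (A^T A)^{-1}; row i of Apinv is e_i^T A^dagger
kappa = np.sqrt(
    np.linalg.norm(AtA_inv, axis=1) ** 2 * np.dot(r, r) / alpha**2
    + np.linalg.norm(Apinv, axis=1) ** 2
    * (np.dot(x, x) / alpha**2 + 1.0 / beta**2)
)
```

Each entry of `kappa` is the condition number of one solution component; with $\beta=1$ it is bounded below by $\neuc{e_i^TA^{\dagger}}$, in agreement with the formula.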
Let us denote by $\kappa_{LS}(A,b)$ the condition number related to the
solution $x$ in the Frobenius norm (i.e.\ $\kappa_{LS}(A,b)=\kappa_{g,F}(A,b)$ where
$g(A,b)=x(A,b)$). Then Theorem~\ref{corocond} provides an exact
expression for computing $\kappa_{LS}(A,b)$, namely
\begin{equation}\label{eq:normwise_formula}
\kappa_{LS}(A,b)=
\neuc{(A^TA)^{-1}}^{1/2}
\left(
\frac{
\neuc{(A^TA)^{-1}} \neuc{r}^2 + \neuc{x}^2
}
{
\alpha^2
}
+ \frac{1}{\beta^2}
\right)^{\frac{1}{2}},
\end{equation}
where we have used the fact that $\neuc{ (A^T A )^{-1}} = \neuc{ A^\dagger }^2$.\\
$\kappa_{LS}(A,b)$ will be referred to as {\bf the condition number of the
least squares solution}.\\
Note that~\cite{IRLLS.07} defines condition numbers for both $x$ and $r$
in order to derive error bounds for $x$ and $r$, but uses the infinity norm
to measure perturbations.\\
In this paper, we will also be interested in the special case where only
$b$ is perturbed ($\alpha=+\infty$ and $\beta=1$). In this case, we
will call $\kappa_i(b)$ the condition number of the solution component $x_i$,
and $\kappa_{LS}(b)$ the condition number of the least squares solution.
When we restrict the perturbations to be on $b$,
Equation~(\ref{eq:componentwise_formula}) simplifies to
\begin{equation}\label{eq:justb_componentwise}
\kappa_i(b)=\neuc{e_i^T A^{\dagger}},
\end{equation}
and Equation~(\ref{eq:normwise_formula}) simplifies to
\begin{equation}\label{eq:justb_normwise}
\kappa_{LS}(b)= \neuc{A^{\dagger}}.
\end{equation}
This latter formula is standard and is in accordance with
~\cite[p. 29]{BJORCK}.
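Since $\neuc{A^{\dagger}}$ is the reciprocal of the smallest singular value of $A$, Equation~(\ref{eq:justb_normwise}) is easy to check numerically; a minimal sketch of ours on random data:

```python
# Sketch: kappa_LS(b) = ||A^dagger||_2 = 1 / sigma_min(A).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4))

kappa_ls_b = np.linalg.norm(np.linalg.pinv(A), 2)     # spectral norm of A^dagger
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]    # smallest singular value
```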
\section{Condition numbers and statistical quantities}\label{sec:link}
\subsection{Background for the linear statistical model}\label{sec:cov}
We consider here the classical linear statistical model
$$
b = Ax +\epsilon,
~A \in \mathbb{R}^{m\times n},
~b \in \mathbb{R}^m,
{\rm rank}(A)=n,
$$
where $\epsilon$ is a vector of random errors having expected value
$E(\epsilon)=0$ and variance-covariance $V(\epsilon)=\sigma_b^2I$.
In statistical language, the matrix $A$ is referred to as the regression matrix
and the unknown vector $x$ is called the vector of regression coefficients.\\
Following the Gauss-Markov theorem~\cite{ZELEN},
the least squares estimate $\hat{x}$ is the linear unbiased estimator
of $x$ satisfying
$$\|A\hat{x}-b\|_2=\min_{x \in \mathbb{R}^n}\|Ax-b\|_2,$$
with minimum variance-covariance equal to
\begin{equation}\label{eq;variance-covariance}
C=\sigma_b^2 (A^TA)^{-1}.
\end{equation}
Moreover $\frac{1}{m-n}\neuc{b-A\hat{x}}^2$ is an unbiased estimate
of $\sigma_b^2$. This quantity is sometimes called the mean squared error (MSE).\\
The diagonal elements $c_{ii}$ of $C$ give the variance of each component
$\hat{x}_i$ of the solution. The off-diagonal elements $c_{ij},~i \neq j$
give the covariance between $\hat{x}_i$ and $\hat{x}_j$.\\
We define $\sigma_{\hat x_i}$ as the standard deviation of the solution component
$\hat x_i$ and we have
\begin{equation}\label{eq:654}
\sigma_{\hat x_i} = \sqrt{c_{ii}}.
\end{equation}
In the next section, we will prove that the condition numbers $\kappa_{i}(A,b)$ and
$\kappa_{LS}(A,b)$ can be related to the statistical
quantities $\sigma_{\hat x_i}$ and $\sigma_b$.
\subsection{Perturbation on $b$ only}
Using Equation~(\ref{eq;variance-covariance}), the variance $c_{ii}$ of the
solution component $\hat x_i$ can be expressed as
$$
c_{ii} = e_i^T C e_i = \sigma_b^2 e_i^T ( A^{T} A )^{-1} e_i.
$$
We note that $ ( A^TA ) ^{-1} = A^{\dagger}A^{\dagger T}$ so that
$$
c_{ii}
= \sigma_b^2 e_i^T ( A^{\dagger}A^{\dagger T} ) e_i
= \sigma_b^2 \neuc{e_i^T A^{\dagger}}^2.
$$
Using Equation~(\ref{eq:654}), we get
$$
\sigma_{\hat x_i} = \sqrt{c_{ii}} = \sigma_b \neuc{e_i^T A^{\dagger}}.
$$
Finally from Equation~(\ref{eq:justb_componentwise}), we get
\begin{equation}\label{eq:1}
\sigma_{\hat x_i} = \sigma_b \kappa_i(b).
\end{equation}
Equation~(\ref{eq:1}) shows that the condition number $\kappa_i(b)$
relates linearly the standard deviation $\sigma_b$ of the error $\epsilon$ to the
standard deviation $\sigma_{\hat x_i}$ of the solution component $\hat x_i$.\\
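This relation can be illustrated by a small Monte Carlo experiment (the simulation setup below is our own): over many noise realizations, the empirical standard deviation of each $\hat x_i$ should match $\sigma_b \kappa_i(b)$.

```python
# Monte Carlo sketch of sigma_{x_i} = sigma_b * kappa_i(b),
# with kappa_i(b) = ||e_i^T A^dagger||_2.
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 3
A = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
sigma_b = 0.1
N = 10000

eps = sigma_b * rng.standard_normal((N, m))      # N noise realizations
B = A @ x_true + eps                             # N right-hand sides
Xhat = np.linalg.lstsq(A, B.T, rcond=None)[0].T  # N x n least squares estimates

empirical_std = Xhat.std(axis=0)
predicted_std = sigma_b * np.linalg.norm(np.linalg.pinv(A), axis=1)
```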
Now if we consider the constant vector $\ell$ of size $n$, we have~(see \cite{ZELEN})
$$ {\rm variance}( \ell^T \hat x) = \ell^T C \ell. $$
Since $C$ is symmetric, we can write
$$
\max_{\| \ell \|_2=1} {\rm variance}(\ell^T \hat x) = \neuc{C}.
$$
Using the fact that $\| C \|_2 =\sigma_b^2 \neuc{ (A^TA)^{-1}}=\sigma_b^2 \neuc{ A^{\dagger} }^2 $, and
Equation~(\ref{eq:justb_normwise}), we get
$$
\max_{\| \ell \|_2=1} {\rm variance}(\ell^T \hat x) = \sigma_b^2 \kappa_{LS}(b)^2
$$
or, if we call $\sigma( \ell^T \hat x ) $ the standard deviation of $\ell^T \hat x$,
$$
\max_{\| \ell \|_2=1} \sigma(\ell^T \hat x) = \sigma_b \kappa_{LS}(b).
$$
Note that
$\sigma_b = \max_{\| \ell \|_2=1} \sigma(\ell^T \epsilon) $ since
$V(\epsilon)=\sigma_b^2I$.
\begin{Remark}
{\em
Matlab provides a routine LSCOV that computes the quantities $\sqrt{c_{ii}}$
in a vector STDX and the mean squared error MSE using the syntax [X,STDX,MSE] = LSCOV(A,B).\\
The condition numbers $\kappa_i(b)$
can then be computed by the Matlab expression
STDX/sqrt(MSE).
}
\end{Remark}
\subsection{Perturbation on $A$ and $b$}
We now express the condition numbers given
in Equation~(\ref{eq:componentwise_formula}) and in
Equation~(\ref{eq:normwise_formula}) in terms of statistical quantities.\\
Observing the following relations
$$
C_i = \sigma_b^2 (A^TA)^{-1} e_i \quad {\rm and} \quad
c_{ii} = \sigma_b^2 \neuc{e_i^T A^{\dagger}}^2,
$$
where $C_i$ is the $i$-th column of the variance-covariance matrix,
the condition number of $x_i$ given in Formula~(\ref{eq:componentwise_formula})
can be expressed as
$$
\kappa_i(A,b)=\frac{1}{\sigma_b}\left(
\frac{\neuc{C_i}^2}{\sigma_b^2}
\frac{\neuc{r}^2}{\alpha^2}
+c_{ii}(\frac{\neuc{x}^2}{\alpha^2}
+\frac{1}{\beta^2})\right)^{\frac{1}{2}}.
$$
The quantity $\sigma_b^2$ will often be estimated by $\frac{1}{m-n}\neuc{r}^2$,
in which case the expression simplifies to
\begin{equation} \label{form:cii2}
\kappa_i(A,b)=\frac{1}{\sigma_b}\left(
\frac{(m-n)\neuc{C_i}^2}{\alpha^2}
+ \frac{c_{ii}\neuc{x}^2}{\alpha^2}
+ \frac{c_{ii}}{\beta^2}\right)^{\frac{1}{2}}.
\end{equation}
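As a numerical sanity check (our own sketch on random data), the statistical expression of $\kappa_i(A,b)$ in terms of $C_i$ and $c_{ii}$ agrees with the direct formula of Equation~(\ref{eq:componentwise_formula}) when the matrix $C$ is built from the estimate $\sigma_b^2 = \frac{1}{m-n}\neuc{r}^2$:

```python
# Sketch: statistical expression of kappa_i(A,b) vs. the direct formula.
import numpy as np

rng = np.random.default_rng(3)
m, n = 12, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x
sigma2 = np.dot(r, r) / (m - n)            # estimate of sigma_b^2
C = sigma2 * np.linalg.inv(A.T @ A)        # variance-covariance matrix
alpha = beta = 1.0

# statistical expression, using the columns C_i and the diagonal c_ii of C
kappa_stat = np.sqrt(
    (np.linalg.norm(C, axis=0) ** 2 / sigma2) * (np.dot(r, r) / alpha**2)
    + np.diag(C) * (np.dot(x, x) / alpha**2 + 1.0 / beta**2)
) / np.sqrt(sigma2)

# direct formula from the Frobenius-norm condition number
kappa_direct = np.sqrt(
    np.linalg.norm(np.linalg.inv(A.T @ A), axis=1) ** 2 * np.dot(r, r) / alpha**2
    + np.linalg.norm(np.linalg.pinv(A), axis=1) ** 2
    * (np.dot(x, x) / alpha**2 + 1.0 / beta**2)
)
```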
From Equation~(\ref{eq:normwise_formula}), we obtain
$$
\kappa_{LS}(A,b)=
\frac{\neuc{C}^{1/2}}{\sigma_b}
\left(
\frac{
\neuc{C} \neuc{r}^2
}
{ \alpha^2 \sigma_b^2 }
+ \frac{\neuc{x}^2}{\alpha^2}
+ \frac{1}{\beta^2}
\right)^{\frac{1}{2}}.
$$
The quantity $\sigma_b^2$ will often be estimated by $\frac{1}{m-n}\neuc{r}^2$,
in which case the expression simplifies to
$$
\kappa_{LS}(A,b)=
\frac{\neuc{C}^{1/2}}{\sigma_b}
\left(
\frac{ (m-n) \neuc{C} } { \alpha^2 }
+ \frac{\neuc{x}^2}{\alpha^2}
+ \frac{1}{\beta^2}
\right)^{\frac{1}{2}}.
$$
\section{Computation with LAPACK}\label{sec:lapack}
Section~\ref{sec:theoback} provides us with formulas to
compute the condition numbers $\kappa_i$ and $\kappa_{LS}$. As explained in
Section~\ref{sec:link}, those quantities are intimately interrelated with the
entries of the variance-covariance matrix. The goal of this section is to
present practical methods and codes to compute those quantities
efficiently with LAPACK.
The assumption made is that the LLSP has already been
solved with either the normal equations method or a QR factorization approach. Therefore
the solution vector $\hat x$, the norm of the residual $\| \hat r \|_2$, and
the R-factor $R$ of the QR factorization of $A$ are readily available (we recall
that the Cholesky factor of the normal equations is the R-factor of the QR
factorization up to some signs). In the example codes, we use the
LAPACK routine DGELS,
which solves the LLSP using the QR factorization of $A$. Note that
a more accurate solution can be obtained using extra-precise
iterative refinement~\cite{IRLLS.07}.
\subsection{Variance-covariance computation}
We will use the fact that $\frac{1}{m-n}\neuc{b-A\hat{x}}^2$ is an unbiased estimate
of $\sigma_b^2$. We wish to compute
the following quantities related to the variance-covariance matrix $C$
\begin{itemize}
\item the $i$-th column $ C_i =\sigma_b^2 ( A^T A )^{-1} e_i$,
\item the $i$-th diagonal element $c_{ii} =\sigma_b^2 \| e_i ^T A^\dagger \|_2^2$,
\item the whole matrix $ C $.
\end{itemize}
We note that the quantities $ C_i $, $c_{ii}$, and $C$ are of interest
to statisticians; the NAG routine F04YAF~\cite{nag} is an example
of a tool that computes these three quantities.\\
For the first two quantities, we note that
$$
\neuc{e_i^T A^{\dagger}}^2=\neuc{R^{-T}e_i}^2
~{\rm and}~
\neuc{e_i^T(A^TA)^{-1}}=\neuc{R^{-1}(R^{-T}e_i)}.
$$
\subsubsection{Computation of the $i$-th column $ C_i $}
$C_i$ can be computed with two $n$--by--$n$ triangular solves
\begin{equation}\label{form:trsys}
R^Ty=e_i~{\rm and}~ Rz=y.
\end{equation}
The $i$-th column of $C$ can be computed by
the following code fragment.\\
\\
{\bf
Code 1:\\
CALL DGELS(~'N',~M,~N,~1,~A,~LDA,~B,~LDB,~WORK,~LWORK,~INFO~)\\
RESNORM = DNRM2(~(M-N),~B(N+1),~1)\\
SIGMA2 = RESNORM**2/DBLE(M-N)\\
E(1:N) = 0.D0\\
E(I) = 1.D0\\
CALL DTRSV(~'U',~'T',~'N',~N-I+1,~A(I,I),~LDA,~E(I),~1)\\
CALL DTRSV(~'U',~'N',~'N',~N,~A,~LDA,~E,~1)\\
CALL DSCAL(~N,~SIGMA2,~E,~1)\\
}
\\
This requires about $2n^2$ flops (in addition to the cost of solving
the linear least squares problem using DGELS).\\
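The two triangular solves of Equation~(\ref{form:trsys}) translate directly into higher-level environments as well; a SciPy sketch of ours on random data (in Code 1, the R-factor overwrites the upper triangle of A after DGELS):

```python
# Sketch of Code 1: the i-th covariance column C_i via two triangular solves.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(4)
m, n, i = 10, 4, 2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x
sigma2 = np.dot(r, r) / (m - n)        # estimate of sigma_b^2
R = np.linalg.qr(A, mode='r')          # R-factor of A

e = np.zeros(n); e[i] = 1.0
y = solve_triangular(R, e, trans='T')  # R^T y = e_i  (analogue of first DTRSV)
z = solve_triangular(R, y)             # R z = y      (analogue of second DTRSV)
C_i = sigma2 * z                       # i-th column of the covariance matrix
```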
$c_{ii}$ can be computed by one $n$--by--$n$ triangular solve and taking the
square of the norm of the solution, which involves about $(n-i+1)^2$ flops. It is
important to note that the larger $i$ is, the less expensive it is to obtain $c_{ii}$. In
particular, if $i=n$ then only one operation is needed: $c_{nn} = R_{nn}^{-2}$.
This suggests that a correct ordering of the variables can save some
computation.\\
\\
\subsubsection{Computation of the $i$-th diagonal element $ c_{ii} $}
From $c_{ii} = \sigma_b^2 \neuc{e_i^TR^{-1}}^2$, it follows that each $ c_{ii} $
corresponds to the squared 2-norm of the $i$-th row of $R^{-1}$, scaled by $\sigma_b^2$.
Then the diagonal elements of $C$ can be computed by the
following code fragment.\\
\\
{\bf
Code 2:\\
CALL DGELS(~'N',~M,~N,~1,~A,~LDA,~B,~LDB,~WORK,~LWORK,~INFO~)\\
RESNORM~=~DNRM2((M-N), B(N+1), 1)\\
SIGMA2~=~RESNORM**2/DBLE(M-N)\\
CALL DTRTRI(~'U',~'N',~N,~A,~LDA,~INFO)\\
DO~I=1,N \\
\hspace*{0.5cm} CDIAG(I)~=~DNRM2(~N-I+1,~A(I,I),~LDA)\\
\hspace*{0.5cm} CDIAG(I)~=~SIGMA2~*~CDIAG(I)**2\\
END DO\\
}
\\
This requires about $n^3/3$ flops (plus the cost of DGELS).\\
\\
\subsubsection{Computation of the whole matrix $C$}
In order to compute explicitly all the coefficients of the matrix $C$, one can
use the routine DPOTRI which computes the inverse of a matrix from its Cholesky
factorization. First the routine computes the inverse of $R$ using DTRTRI
and then performs the triangular matrix-matrix multiply $R^{-1}R^{-T}$ by
DLAUUM. This requires about $2n^3/3$ flops.
We can also compute the variance-covariance matrix without inverting $R$
using for instance the algorithm given in~\cite[p. 119]{BJORCK} but the
computational cost remains $2n^3/3$ (plus the cost of DGELS).\\
\\
We can obtain the upper triangular part of $C$
by the following code fragment.\\
\\
{\bf
Code 3:\\
CALL DGELS(~'N',~M,~N,~1,~A,~LDA,~B,~LDB,~WORK,~LWORK,~INFO~)\\
RESNORM~=~DNRM2((M-N), B(N+1), 1)\\
SIGMA2~=~RESNORM**2/DBLE(M-N)\\
CALL DPOTRI(~'U',~N,~A,~LDA,~INFO)\\
CALL DLASCL(~'U',~0,~0,~N,~N,~1.D0,~SIGMA2,~N,~N,~A,~LDA,~INFO)\\
}
\\
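A NumPy analogue of Code 3 (our sketch on random data): invert the R-factor as DTRTRI would, then form $R^{-1}R^{-T}$ and scale, as DLAUUM and DLASCL would.

```python
# Sketch of Code 3: the whole variance-covariance matrix from the R-factor.
import numpy as np

rng = np.random.default_rng(5)
m, n = 9, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x
sigma2 = np.dot(r, r) / (m - n)

R = np.linalg.qr(A, mode='r')
Rinv = np.linalg.inv(R)          # analogue of DTRTRI
C = sigma2 * (Rinv @ Rinv.T)     # analogue of DLAUUM plus the DLASCL scaling
```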
\subsection{Condition numbers computation}
For computing $\kappa_i(A,b)$, we need both
the $i$-th diagonal element and the norm of the $i$-th column
of the variance-covariance matrix, so we cannot directly use Code 1; instead we use the
following code fragment.\\
\\
{\bf
Code 4:\\
ALPHA2 = ALPHA**2\\
BETA2 = BETA**2\\
CALL DGELS(~'N',~M,~N,~1,~A,~LDA,~B,~LDB,~WORK,~LWORK,~INFO~)\\
XNORM = DNRM2(N,~B(1),~1)\\
RESNORM = DNRM2((M-N), B(N+1), 1)\\
E(1:N) = 0.D0\\
E(I) = 1.D0\\
CALL DTRSV(~'U',~'T',~'N',~N-I+1,~A(I,I),~LDA,~E(I),~1~)\\
ENORM = DNRM2(N, E, 1)\\
K = (ENORM**2)*(XNORM**2/ALPHA2+1.D0/BETA2)\\
CALL DTRSV(~'U',~'N',~'N',~N,~A,~LDA,~E,~1~)\\
ENORM = DNRM2(N, E, 1)\\
K = SQRT((ENORM*RESNORM)**2/ALPHA2 + K)\\
}
\\
For computing all the $\kappa_i(A,b)$,
we need the columns $C_i$ and the diagonal elements $c_{ii}$
appearing in Formula~(\ref{form:cii2}), which means computing
the whole variance-covariance matrix. This can be
performed by a slight modification of Code 3.\\
When only $b$ is perturbed, we only have to invert $R$ and we can use
a modification of Code 2 (see the numerical example in Section~\ref{sec:cnes}).\\
\\
For estimating $\kappa_{LS}(A,b)$, we need an estimate of
$\neuc{R^{-1}}$.
Computing $\neuc{R^{-1}}$ exactly requires the minimum singular value of the
matrix $A$ (or $R$).
One way is to compute the full SVD of $A$ (or $R$), which requires $\mathcal{O} (n^3)$ flops.
As an alternative,
$\neuc{R^{-1}}$ can be estimated for instance by considering
other matrix norms through the following inequalities
\begin{eqnarray*}
\frac{1}{\sqrt{n}}\nfro{R^{-1}} & \leq \neuc{R^{-1}} \leq & \nfro{R^{-1}}, \\
\frac{1}{\sqrt{n}}\|R^{-1}\|_{\infty} & \leq \neuc{R^{-1}} \leq & \sqrt{n} \|R^{-1}\|_{\infty},\\
\frac{1}{\sqrt{n}}\|R^{-1}\|_1 & \leq \neuc{R^{-1}} \leq & \sqrt{n} \|R^{-1}\|_1.\\
\end{eqnarray*}
$\|R^{-1}\|_1$ or $\|R^{-1}\|_{\infty}$ can be estimated using
Higham's modification~\cite[p. 293]{HIGHAM} of Hager's~\cite{HAGER}
method, as implemented in the LAPACK~\cite{LAPACK} routine DTRCON
(see Code 5). The cost is $\mathcal{O} ( n^2 )$.\\
\\
{\bf
Code 5:\\
CALL DTRCON(~'I',~'U',~'N',~N,~A,~LDA,~RCOND,~WORK,~IWORK,~INFO)\\
RNORM~=~DLANTR(~'I',~'U',~'N',~N,~N,~A,~LDA,~WORK)\\
RINVNORM~=~(1.D0/RNORM)/RCOND\\
}
\\
We can also evaluate $\neuc{R^{-1}}$ by considering
$\nfro{R^{-1}}$ since we have
\begin{eqnarray*}
\nfro{R^{-1}}^2 & = & \nfro{R^{-T}}^2\\
& = & {\rm tr}(R^{-1}R^{-T})\\
& = & \frac{1}{\sigma_b^2} {\rm tr}(C),\\
\end{eqnarray*}
where ${\rm tr}(C)$ denotes the trace of the matrix $C$, i.e.\
$\sum_{i=1}^{n} c_{ii}$.
Hence the condition number of the
least-squares solution can be approximated by
\begin{equation} \label{form:trace}
\kappa_{LS}(A,b) \simeq
\left( \frac{{\rm tr}(C)}{\sigma_b^2} \left(\frac{{\rm tr}(C)
\neuc{r}^2+\sigma_b^2 \neuc{x}^2}
{\sigma_b^2 \alpha^2}+\frac{1}{\beta^2}\right) \right)^{\frac{1}{2}}.
\end{equation}
Then we can estimate $\kappa_{LS}(A,b)$ by computing and summing
the diagonal elements of $C$ using Code 2.\\
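Both the identity $\nfro{R^{-1}}^2={\rm tr}(C)/\sigma_b^2$ and the bracketing of $\neuc{R^{-1}}$ by $\nfro{R^{-1}}$ are easy to verify numerically; a sketch of ours on random data:

```python
# Sketch: tr(C)/sigma_b^2 = ||R^{-1}||_F^2 and
# ||R^{-1}||_2 <= ||R^{-1}||_F <= sqrt(n) ||R^{-1}||_2.
import numpy as np

rng = np.random.default_rng(6)
m, n = 15, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x
sigma2 = np.dot(r, r) / (m - n)
C = sigma2 * np.linalg.inv(A.T @ A)

R = np.linalg.qr(A, mode='r')
Rinv = np.linalg.inv(R)
fro = np.linalg.norm(Rinv, 'fro')   # ||R^{-1}||_F = sqrt(tr(C)/sigma_b^2)
spec = np.linalg.norm(Rinv, 2)      # ||R^{-1}||_2 = kappa_LS(b)
```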
When only $b$ is perturbed ($\alpha = +\infty~{\rm and}~\beta=1$),
then we get
$$\kappa_{LS}(b) \simeq \frac{\sqrt{{\rm tr}(C)}}{\sigma_b}.$$
This result relates to~\cite[p. 167]{FAREBROTHER}, where
${\rm tr}(C)$ measures the squared effect of small changes in $b$ on the
LLSP solution $x$.\\
\\
We give in Table~\ref{TabCompar} the LAPACK routines used for
computing the condition numbers of an LLSP solution or
its components, together with the corresponding number of floating-point
operations. Since the LAPACK routines involved in the covariance and/or
LLSP condition number computations have their equivalents in the parallel
library ScaLAPACK~\cite{SCALAPACK}, this table also applies
when using ScaLAPACK.
This enables us to easily compute these quantities for larger LLSP.
\begin{table}[hbtp!]
\centering
\caption{Computation of least squares conditioning with (Sca)LAPACK}
\vspace{0.4cm}
\begin{tabular}{|c|c|c|c|}
\hline
condition number&linear algebra operation&LAPACK routines&flops count\\
&&&\\
\hline
$\kappa_i(A,b)$&$R^Ty=e_i~{\rm and}~ Rz=y$& 2 calls to (P)DTRSV&$2n^2$\\
&&&\\
\hline
all $\kappa_i(A,b),~i=1,n$&$RY=I~{\rm and~compute}~YY^T$&(P)DPOTRI&$2n^3/3$\\
&&&\\
\hline
all $\kappa_i(b),~i=1,n$&invert $R$&(P)DTRTRI&$n^3/3$\\
&&&\\
\hline
$\kappa_{LS}(A,b)$&estimate $\|R^{-1}\|_{1~{\rm or}~\infty}$&(P)DTRCON&${\cal O}(n^2)$\\
&compute $\nfro{R^{-1}}$&(P)DTRTRI&$n^3/3$\\
\hline
\end{tabular}
\label{TabCompar}
\end{table}
\begin{Remark}
{\em
The cost for computing all the $\kappa_i(A,b)$ or estimating
$\kappa_{LS}(A,b)$ is always ${\cal O}(n^3)$.
This is affordable when compared with the cost of solving the
least squares problem by Householder QR factorization ($2mn^2-2n^3/3$ flops)
or by the normal equations ($mn^2+n^3/3$ flops), because in general $m \gg n$.\\
}
\end{Remark}
\section{Numerical experiments}\label{sec:numerics}
\subsection{Laplace's computation of the mass of Jupiter and assessment of
the validity of its results}
In~\cite{LAPLACE}, Laplace computes the masses of Jupiter, Saturn and Uranus
and provides the variances associated with those variables in order to
assess the quality of the results. The data come from the French
astronomer Bouvard in the form of the normal equations given in
Equation~(\ref{eq:laplace}).
\begin{equation}\label{eq:laplace}
\begin{array}{rcl}
795938 z_0 - 12729398 z_1 + 6788.2 z_2 - 1959.0 z_3 + 696.13 z_4 +
2602 z_5 & = & 7212.600 \\
-12729398 z_0 + 424865729 z_1 - 153106.5 z_2 - 39749.1 z_3 - 5459 z_4 +
5722 z_5 & = & -738297.800 \\
6788.2 z_0 - 153106.5 z_1 + 71.8720 z_2 - 3.2252 z_3 + 1.2484 z_4 +
1.3371 z_5 & = & 237.782 \\
-1959.0 z_0 - 39749.1 z_1 - 3.2252 z_2 + 57.1911 z_3 + 3.6213 z_4 +
1.1128 z_5 & = & -40.335 \\
696.13 z_0 - 5459 z_1 + 1.2484 z_2 + 3.6213 z_3 + 21.543 z_4 +
46.310 z_5 & = & -343.455 \\
2602 z_0 + 5722 z_1 + 1.3371 z_2 + 1.1128 z_3 + 46.310 z_4 +
129 z_5 & = & -1002.900 \\
\end{array}
\end{equation}
For computing the mass of Jupiter, we know that Bouvard performed $ m =
129 $ observations and there are $n=6$ variables in the system. The
squared residual of the solution $\| b - A\hat x \|_2^2$ is also given by Bouvard
and is $31096$. Of the $6$ unknowns, Laplace only seeks one: the second
variable $z_1$. The mass of Jupiter in terms of the mass of the Sun is
given by $z_1$ and the formula:
$$ \textmd{mass of Jupiter} = \frac{1+z_1}{1067.09}.$$
It turns out that the first variable $z_0$
represents the mass of Uranus through the formula
$$ \textmd{mass of Uranus} = \frac{1+z_0}{19504}.$$
If we solve the system~(\ref{eq:laplace}), we obtain the solution vector\\
\begin{center}
\begin{minipage}{10cm}
Solution vector\\
0.08954 -0.00304 -11.53658 -0.51492 5.19460 -11.18638
\end{minipage}
\end{center}
From $z_1$, we can compute the ratio of the mass of the Sun to the mass of Jupiter
and we obtain $1070$.
This value is indeed accurate since the correct value according to NASA is
$1048$. From $z_0$, we can compute the ratio of the mass of the Sun to the mass of Uranus and we obtain
$17918$. This value is inaccurate since the correct value according to NASA is
$22992$.\\
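These computations are easy to reproduce; the sketch below (ours) solves the normal equations~(\ref{eq:laplace}) with the coefficients transcribed from above:

```python
# Sketch: solving Laplace's normal equations for z and the mass of Jupiter.
import numpy as np

D = np.array([
    [   795938.0, -12729398.0,    6788.2,  -1959.0,  696.13,  2602.0],
    [-12729398.0, 424865729.0, -153106.5, -39749.1, -5459.0,  5722.0],
    [     6788.2,   -153106.5,    71.8720,  -3.2252,  1.2484,  1.3371],
    [    -1959.0,    -39749.1,    -3.2252,  57.1911,  3.6213,  1.1128],
    [     696.13,     -5459.0,     1.2484,   3.6213, 21.543,  46.310],
    [     2602.0,      5722.0,     1.3371,   1.1128, 46.310, 129.0],
])
rhs = np.array([7212.600, -738297.800, 237.782, -40.335, -343.455, -1002.900])

z = np.linalg.solve(D, rhs)
mass_jupiter = (1.0 + z[1]) / 1067.09   # in units of the solar mass
```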
Laplace computed the variances of $z_0$ and $z_1$ to assess the fact that
$z_1$ was probably correct and $z_0$ probably inaccurate. To compute those
variances, Laplace first performed a Cholesky factorization from right to left of
the system~(\ref{eq:laplace}); then, since the variables were correctly ordered,
the number of operations involved in the computation of the variances of $z_0$ and $z_1$
was minimized. The variance-covariance matrix for Laplace's system is:
$$
\left(
\begin{array}{cccccc}
0.005245 & -0.000004 & -0.499200 & 0.137212 & 0.235241 & -0.186069 \\
\cdot & 0.000004 & 0.009873 & 0.003302 & 0.002779 & -0.001235 \\
\cdot & \cdot & 71.466023 & -5.441882 & -16.672689 & 14.922752 \\
\cdot & \cdot & \cdot & 10.860492 & 5.418506 & -4.896579 \\
\cdot & \cdot & \cdot & \cdot & 66.088476 & -28.467391 \\
\cdot & \cdot & \cdot & \cdot & \cdot & 15.874809 \\
\end{array}
\right)
$$
Our computation gives a variance for the mass of Jupiter of
$4.383233\cdot10^{-6}$. For reference, Laplace in 1820 computed
$4.383209\cdot10^{-6}$. (We deduce the variance from Laplace's value $5.0778624$;
to get what we now call the variance, one needs to compute the quantity
$\frac{1}{2\cdot 10^{5.0778624}}\cdot\frac{m}{m-n}$.)
From the variance-covariance matrix, one can assess that the computation
of the mass of Jupiter (second variable) is extremely reliable while the
computation of the mass of Uranus (first variable) is not.
For more details, we refer the reader to \cite{langoureview2007}.
\subsection{Gravity field computation}\label{sec:cnes}
A classical example of parameter estimation problem is the computation of the
Earth's gravity field coefficients. More specifically, we estimate the
parameters of the gravitational potential that can be expressed in spherical
coordinates $(r,\theta,\lambda)$ by~\cite{BALMINO}
\begin{equation} \label{potential}
V(r,\theta,\lambda)=\frac{GM}{R}\sum_{\ell=0}^{\ell_{max}}\left(\frac{R}{r}\right)^{\ell+1}
\sum_{m=0}^{\ell}\overline{P}_{\ell m}(\cos{\theta})\left[\overline{C}_{\ell m}
\cos{m\lambda}+\overline{S}_{\ell m}\sin{m\lambda}\right]
\end{equation}
where $G$ is the gravitational constant, $M$ is the Earth's mass,
$R$ is the Earth's reference radius,
the $\overline{P}_{\ell m}$ represent the fully normalized Legendre
functions of degree $\ell$ and
order $m$ and $\overline{C}_{\ell m}$,$\overline{S}_{\ell m}$
are the corresponding normalized harmonic coefficients.
The objective here is to compute the harmonic coefficients $\overline{C}_{\ell m}$ and
$\overline{S}_{\ell m}$ as accurately as possible.
The number of unknown parameters is $n=(\ell_{max}+1)^2.$
These coefficients are computed by solving a linear least squares
problem that may involve millions of observations and tens of thousands of variables.
More details about the physical problem and the resolution methods can be found in
\cite{PHD.MB}.
The data used in the following experiments were provided by
CNES\footnote{Centre National d'Etudes Spatiales, Toulouse, France}
and they correspond to 10 days of observations
using GRACE\footnote{Gravity Recovery and
Climate Experiment, NASA, launched March 2002} measurements
(about $166,000$ observations).
We compute the spherical harmonic coefficients
$\overline{C}_{\ell m}$ and $\overline{S}_{\ell m}$ up to a degree
$\ell_{max}=50$, except the coefficients
$\overline{C}_{11}, \overline{S}_{11}, \overline{C}_{00}, \overline{C}_{10}$,
which are a priori known.
Then we have $n=2,597$ unknowns in the corresponding least squares problem
(note that the GRACE satellite enables us to compute
a gravity field model up to degree 150).
The problem is solved using the normal equations method and we have the Cholesky
decomposition $A^TA=U^TU$.\\
We compute the relative condition numbers of each coefficient $x_i$
using the formula
$$\kappa^{(rel)}_i(b)=\neuc{e_i^TU^{-1}} \neuc{b}/|x_i|,$$
and the following code fragment, derived from Code 2, in which
the array $D$ contains the normal equations $A^TA$ and the vector $X$
contains the right-hand side $A^Tb$.\\
\\
{\bf CALL DPOSV(~'U',~N,~1,~D,~LDD,~X,~LDX,~INFO)\\
CALL DTRTRI(~'U',~'N',~N,~D,~LDD,~INFO)\\
DO~I=1,N \\
\hspace*{0.5cm} KAPPA(I)~=~DNRM2(~N-I+1,~D(I,I),~LDD)~*~BNORM/ABS(X(I)) \\
END DO}\\
\\
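The fragment above has a direct NumPy analogue (our sketch on random data). Note that the upper Cholesky factor $U$ of $A^TA$ coincides with the R-factor of $A$ up to signs, so the row norms of $U^{-1}$ equal the $\neuc{e_i^TA^{\dagger}}$:

```python
# Sketch: relative condition numbers kappa_i^{(rel)}(b) from the normal equations.
import numpy as np

rng = np.random.default_rng(7)
m, n = 30, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

D = A.T @ A                      # normal equations matrix
rhs = A.T @ b
U = np.linalg.cholesky(D).T      # upper Cholesky factor, D = U^T U
x = np.linalg.solve(D, rhs)      # analogue of DPOSV
Uinv = np.linalg.inv(U)          # analogue of DTRTRI

kappa_rel = np.linalg.norm(Uinv, axis=1) * np.linalg.norm(b) / np.abs(x)
```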
Figure~\ref{plotcond} represents the relative condition numbers of all the $n$
coefficients. We observe the disparity between the condition numbers
(between $10^2$ and $10^8$).
To give a physical interpretation, we first need to sort the coefficients
by degree and order as given in the development of $V(r,\theta,\lambda)$
in Expression~(\ref{potential}).\\
In Figure~\ref{plotcos}, we plot the coefficients $\overline{C}_{\ell m}$ as
a function of the degrees and orders (the curve with the $\overline{S}_{\ell m}$
is similar). We notice that for a given order, the condition number increases
with the degree and that, for a given degree, the variation of the sensitivity
with the order is less significant.\\
We can also study the effect of regularization on the conditioning.
Physicists generally use a Kaula~\cite{KAULA} regularization technique
that consists of adding to $A^TA$ a diagonal matrix
$D={\rm diag}(0,\cdots,0,\delta,\cdots,\delta)$,
where $\delta$ is a constant proportional to
$\frac{10^{-5}}{\ell_{max}^2}$ and
the nonzero terms in $D$ correspond to the variables
that need to be regularized. An example of the effect of Kaula regularization
is shown in Figure~\ref{zonaux} where we consider
the coefficients of order $0$ also called zonal coefficients.
We compute here the absolute condition numbers of these coefficients
using the formula $\kappa_i(b)=\neuc{e_i^TU^{-1}}$.
Note that the $\kappa_i(b)$ are much lower than 1. This is not surprising
because typically in our application $\neuc{b} \sim 10^5$ and $|x_i| \sim 10^{-12}$,
which would make the associated relative condition numbers greater than 1.
We observe that the regularization
is effective on coefficients of highest degree that are in general more sensitive
to perturbations.
\begin{figure}[!ht]
\begin{center}
{\epsfig{file=condpar.eps,width=0.7 \textwidth}} \\
\caption{\label{plotcond}
Amplitude of the relative condition numbers for the gravity field coefficients.
}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
{\epsfig{file=cosinus.eps,width=0.7 \textwidth}} \\
\caption{\label{plotcos}
Conditioning of spherical harmonic coefficients $\overline{C}_{\ell m}~(2 \leq \ell \leq 50~,~1 \leq m\leq 50)$.
}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
{\epsfig{file=zonaux.eps,width=0.7 \textwidth}} \\
\caption{\label{zonaux}
Effect of regularization on zonal coefficients $\overline{C}_{{\ell} 0}~(2 \leq {\ell} \leq 50)$
}
\end{center}
\end{figure}
\newpage
\section{Conclusion}\label{sec:concl}
To assess the accuracy of a linear least squares solution, the practitioner of
numerical linear algebra generally uses quantities like condition
numbers or backward errors, whereas the statistician is more interested in
covariance analysis.
In this paper we proposed quantities that speak to both communities
and that can assess the quality of the solution of a least squares problem
or of one of its components. We provided practical ways to compute these quantities
using (Sca)LAPACK and we tested these computations
on practical examples, including a real physical application in the area of space geodesy.
\bibliographystyle{siam}
% https://arxiv.org/abs/2009.04050
\title{Minimal universality criterion sets on the representations of quadratic forms}
\begin{abstract}
For a set $S$ of (positive definite and integral) quadratic forms with bounded rank, a quadratic form $f$ is called $S$-universal if it represents all quadratic forms in $S$. A subset $S_0$ of $S$ is called an $S$-universality criterion set if any $S_0$-universal quadratic form is $S$-universal. We say $S_0$ is minimal if there does not exist a proper subset of $S_0$ that is an $S$-universality criterion set. In this article, we study various properties of minimal universality criterion sets. In particular, we show that for `most' binary quadratic forms $f$, minimal $S$-universality criterion sets are unique in the case when $S$ is the set of all subforms of the binary form $f$.
\end{abstract}
\section{Introduction}
Let $S$ be a set of (positive definite integral) quadratic forms with bounded rank. A quadratic form $f$ is called {\it $S$-universal} if it represents all quadratic forms in the set $S$. A subset $S_0$ of $S$ is called {\it an $S$-universality criterion set} if any $S_0$-universal quadratic form is $S$-universal. Conway and Schneeberger's 15-theorem \cite{c} says that the set $\{1,2,3,5,6,7,10,14,15\}$ is an $S$-universality criterion set, where $S$ is the set of all positive integers (see also \cite{b}). Note that any positive integer $a$ corresponds to the unary quadratic form $ax^2$. For an arbitrary set $S$ of quadratic forms with bounded rank, the existence of a finite $S$-universality criterion set was proved by the third author and his collaborators in \cite{kko}.
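As a quick numerical illustration (not part of the original argument), the 15-theorem can be applied to the sum of four squares $I_4$: a brute-force search confirms that $I_4$ represents each of the nine criterion numbers, and the 15-theorem then recovers Lagrange's four-square theorem. The helper name `represents_diag` is ours, a naive sketch for diagonal forms only.

```python
from itertools import product
from math import isqrt

def represents_diag(coeffs, n):
    """Naive check: does the diagonal form sum(a_i * x_i^2) represent n?"""
    bound = isqrt(n)  # each |x_i| <= sqrt(n) since every a_i >= 1
    return any(sum(a * x * x for a, x in zip(coeffs, xs)) == n
               for xs in product(range(bound + 1), repeat=len(coeffs)))

criterion = [1, 2, 3, 5, 6, 7, 10, 14, 15]
# I_4 = <1,1,1,1> represents every criterion number, hence is universal
assert all(represents_diag([1, 1, 1, 1], n) for n in criterion)
```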
An $S$-universality criterion set $S_0$ is called {\it minimal} if no proper subset of $S_0$ is an $S$-universality criterion set. In \cite{kko}, the authors proposed the following two questions on minimal universality criterion sets. Let $\Gamma(S)$ be the set of all $S$-universality criterion sets and let $\gamma(S)$ denote the minimal cardinality of a member of $\Gamma(S)$.
\begin{itemize}
\item [(i)] For which $S$ is there a unique minimal $S_0 \in \Gamma(S)$?
\item [(ii)] Is $\vert S_0\vert=\gamma(S)$ for every minimal $S_0 \in \Gamma(S)$? If not, when?
\end{itemize}
For the question (i), when $S$ is the set of all quadratic forms of rank $k$, the uniqueness of the minimal $S$-universality criterion set was proved by Bhargava \cite{b} for the case when $k=1$, and by Kominers
\cite{kom1}, \cite{kom2} for the cases when $k=2$, $k=8$, respectively (see also \cite{k}, \cite{kko0}, and \cite{o1}).
For the question (ii), Elkies, Kane, and Kominers \cite{ekk} answered in the negative for some special set $S$ of quadratic forms. In fact, they considered the set $S_f$ of all subforms of the quadratic form $f(x,y,z)= x^2+y^2+2z^2$. Clearly, the set $\{f\}$ itself is a minimal $S_f$-universality criterion set. They proved that any quadratic form that represents both subforms $x^2+y^2+8z^2$ and $2x^2+2y^2+2z^2$ of $f$ also represents $f$ itself. Therefore the set $\{x^2+y^2+8z^2, 2x^2+2y^2+2z^2\}$ is also a minimal $S_f$-universality criterion set.
In this article, we prove that the answer to question (i) is affirmative if $S$ is any set of positive integers, that is, any set of unary quadratic forms. We also prove that minimal $\Phi_n$-universality criterion sets are not unique when $\Phi_n$ is the set of all quadratic forms of rank $n$ for any $n\ge 9$. To analyze the example given by Elkies, Kane, and Kominers more closely, we introduce the notion of a {\it recoverable} quadratic form. A quadratic form $f$ is called recoverable if minimal $S_f$-universality criterion sets are not unique in the case when $S_f$ is the set of all subforms of $f$. The third author and his collaborators proved in \cite{jko} that no unary quadratic form is recoverable. In this article, we show that `most' binary quadratic forms are not recoverable, although there are in fact infinitely many recoverable binary quadratic forms up to isometry.
The subsequent discussion will be conducted in the better adapted geometric language of quadratic spaces and lattices. Throughout this article, we always assume that every ${\mathbb Z}$-lattice $L={\mathbb Z} x_1+{\mathbb Z} x_2+\dots+{\mathbb Z} x_k$ is {\it positive definite and integral}, that is, the corresponding symmetric matrix
$$
M_L=(B(x_i,x_j)) \in M_{k\times k}({\mathbb Z})
$$
is positive definite and the scale $\mathfrak s(L)$ of the ${\mathbb Z}$-lattice $L$ is ${\mathbb Z}$. The corresponding quadratic map $Q: L \to {\mathbb Z}$ will be defined by $Q(x)=B(x,x)$ for any $x\in L$. For any positive integer $a$, the ${\mathbb Z}$-lattice obtained from $L$ by scaling $a$ will be denoted by $L^a$. Hence we have $M_{L^a}=a\cdot M_L$.
The discriminant $dL$ of the ${\mathbb Z}$-lattice $L$ will be defined by $dL=\det(M_L)$. We say that $L$ is {\it diagonal} if $B(x_i,x_j)=0$ for any $i$ and $j$ with $i\ne j$. If $L$ is diagonal, then we simply write
$$
L=\langle Q(x_1),\dots,Q(x_k)\rangle.
$$
We say $L$ is even if $Q(x)$ is even for any $x \in L$.
If an integer $n$ is represented by $L$ over ${\mathbb Z}_p$ for every prime $p$, including the infinite prime, then we say that $n$ is represented by the genus of $L$, and we write $n {\ \rightarrow\ } \text{gen}(L)$. Note that $n$ is represented by the genus of $L$ if and only if $n$ is represented by some ${\mathbb Z}$-lattice in the genus of $L$.
When $n$ is represented by the ${\mathbb Z}$-lattice $L$ itself, then we write $n{\ \rightarrow\ } L$. We define
$$
Q(L)=\{ n \in {\mathbb Z} : n {\ \rightarrow\ } L\}.
$$
For any positive integer $n$, the cubic ${\mathbb Z}$-lattice $I_n={\mathbb Z} e_1+{\mathbb Z} e_2+\dots+{\mathbb Z} e_n$ is the ${\mathbb Z}$-lattice satisfying $B(e_i,e_j)=\delta_{ij}$.
Any unexplained notation and terminology can be found in \cite{ki} or \cite{om}.
\section{Uniqueness of the minimal universality criterion set}
In general, minimal $S$-universality criterion sets are not unique for an arbitrary set $S$ of quadratic forms with bounded rank.
In this section, we prove that the minimal $S$-universality criterion set is unique if $S$ is any subset of positive integers.
Let $\mathbb N$ be the set of positive integers. For a positive integer $k$ and a nonnegative integer $\alpha$, we define the arithmetic progression
$$
A_{k,\alpha}=\{kn+\alpha : n\in \mathbb N \cup \{0\}\}.
$$
If a ${\mathbb Z}$-lattice $L$ represents all elements in $A_{k,\alpha}$, we simply write $A_{k,\alpha} {\ \rightarrow\ } L$.
\begin{prop} \label{keyp}
Let $S=\{ s_0,s_1,s_2,\dots\}$ be a subset of ${\mathbb N}$ such that $s_i < s_{i+1}$ for any nonnegative integer $i$, and let $k$ be a positive integer. If there is a ${\mathbb Z}$-lattice $\ell$ such that
$$
s_0,s_1,\dots,s_{k-1} \in Q(\ell) \quad \text{and} \quad s_k \not \in Q(\ell),
$$
then there is a ${\mathbb Z}$-lattice $L$ such that $Q(L)\cap S=S-\{s_k\}$.
\end{prop}
\begin{proof}
First, we define
$$
\mathfrak C=\{ 0 \le u \le s_{k+1}-1 : A_{s_{k+1},u} \cap \{s_{k+1},s_{k+2},\dots\} \ne \emptyset\}=\{ c_1,c_2,\dots,c_v\},
$$
and for each $c \in \mathfrak C$, let $s(c)=\min(A_{s_{k+1},c} \cap \{s_{k+1},s_{k+2},\dots\})$.
We define
$$
L=\ell \perp s_{k+1} I_4 \perp \langle s(c_1),s(c_2),\cdots,s(c_v) \rangle.
$$
Since $s_{k+1} >s_k$ and $s(c_j) > s_k$ for any $j=1,2,\dots,v$, we see that $s_k$ is not represented by $L$. Furthermore, for any integer
$a \in \{s_{k+1},s_{k+2},\dots\}$, there is a nonnegative integer $M$ and an integer $j$ with $1\le j \le v$ such that $a=s_{k+1}M+s(c_j)$. Since $M$ is a sum of four squares by Lagrange's theorem, $s_{k+1}M$ is represented by $s_{k+1}I_4$, and hence the integer $a$ is represented by $L$. The proposition follows directly from this.
\end{proof}
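The construction in the proof can be checked numerically in a small case (our own sanity check, not part of the argument). Take $S={\mathbb N}$ and $\ell=\langle 1,1\rangle$, which represents $s_0=1$ and $s_1=2$ but not $s_2=3$; then $s_3=4$, $\mathfrak C=\{0,1,2,3\}$ with $s(c)=4,5,6,7$, and the lattice of the proof is the diagonal lattice $\langle 1,1\rangle \perp 4I_4 \perp \langle 4,5,6,7\rangle$. A set-based sweep (the helper name `represented_upto` is ours) confirms that, up to $200$, it represents every positive integer except $3$.

```python
from math import isqrt

def represented_upto(diag, limit):
    """Set of integers in [0, limit] represented by the diagonal form sum(a_i * x_i^2)."""
    reachable = {0}
    for a in diag:
        squares = [a * x * x for x in range(isqrt(limit // a) + 1)]
        reachable = {r + s for r in reachable for s in squares if r + s <= limit}
    return reachable

# ell = <1,1>, then s_{k+1} I_4 = 4 I_4, then <s(c)> for the residues c = 0,1,2,3 mod 4
L = [1, 1] + [4, 4, 4, 4] + [4, 5, 6, 7]
rep = represented_upto(L, 200)
assert [n for n in range(1, 201) if n not in rep] == [3]
```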
\begin{thm}
For any set $S=\{s_0,s_1,s_2,\dots\}$ of positive integers, the minimal $S$-universality criterion set is unique.
\end{thm}
\begin{proof}
Without loss of generality, we may assume that $s_i < s_{i+1}$ for any nonnegative integer $i$. A positive integer $s_i \in S$ is called a truant of $S$ if there is a ${\mathbb Z}$-lattice $L$ such that $L$ represents all integers in the set $\{s_0,s_1,\dots,s_{i-1}\}$, whereas $L$ does not represent $s_i$. Clearly, $s_0$ is a truant of $S$. Let $T(S)$ be the set of truants of $S$. Then, by Proposition \ref{keyp}, any $S$-universality criterion set must contain $T(S)$. Hence it suffices to show that $T(S)$ itself is an $S$-universality criterion set. Let $L$ be a ${\mathbb Z}$-lattice that represents all integers in $T(S)$. Suppose that $L$ is not $S$-universal, and let $m$ be the smallest index such that $s_m$ is not represented by $L$. Then, clearly, $s_m$ is a truant of $S$, and hence $s_m \in T(S)$. This is a contradiction. Therefore, $T(S)$ is the unique minimal $S$-universality criterion set.
\end{proof}
\begin{rmk} {\rm As pointed out in \cite{h}, Bhargava also proved the above result. However, to the authors' knowledge, no proof of this has appeared in the literature.}
\end{rmk}
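The notion of a truant can be illustrated numerically, at least over diagonal lattices: the helper `truant` (a name of ours, a naive sketch) returns the smallest integer not represented by a given diagonal form. The values below match the first steps of the escalation process behind the 15-theorem.

```python
from itertools import product
from math import isqrt

def truant(coeffs, limit=50):
    """Smallest n in [1, limit] not represented by the diagonal form, else None."""
    for n in range(1, limit + 1):
        b = isqrt(n)
        if not any(sum(a * x * x for a, x in zip(coeffs, xs)) == n
                   for xs in product(range(b + 1), repeat=len(coeffs))):
            return n
    return None

assert truant([1]) == 2            # <1> represents 1 but misses 2
assert truant([1, 1]) == 3         # <1,1> represents 1, 2 but misses 3
assert truant([1, 1, 1]) == 7      # three squares miss 7
assert truant([1, 1, 2]) == 14     # the form x^2+y^2+2z^2 of [ekk] misses 14
```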
Let $L$ be a ${\mathbb Z}$-lattice of rank $m$. For any positive integer $j$ less than or equal to $m$, the $j$-th successive minimum of $L$ will be denoted by $\mu_j(L)$ (for the definition of the successive minima, see Chapter 12 of \cite{ca}). It is well known that there is a constant $\gamma_m$, called the Hermite constant, such that $$
dL\le \mu_1(L)\mu_2(L)\cdots \mu_m(L) \le \gamma_m^mdL
$$
(for the proof, see Proposition 2.3 of \cite{ea}). We define $\min(L)=\min\{ Q(x)\mid x \in L-\{0\}\}$. Note that $\min(L)=\mu_1(L)$.
\begin{thm}
For any positive integer $k$, there is a subset $S$ of positive integers such that the cardinality of its minimal universality criterion set is exactly $k$.
\end{thm}
\begin{proof}
Let $L= {\mathbb Z} x_{1} + {\mathbb Z} x_{2} + \cdots + {\mathbb Z} x_{k}$ be a ${\mathbb Z}$-lattice such that $Q(x_{i}) = k+i$ for any $i$ with $1 \le i \le k$.
If $m$ is the rank of $L$, then we have $m \le k$ and $\mu_{m}(L) \le 2k$.
It follows from $\mu_{1}(L) \le \mu_{2}(L) \le \dots \le \mu_{m}(L)$ that
$$
dL \le \mu_{1}(L) \mu_{2}(L) \cdots \mu_{m}(L) \le (2k)^{k}.
$$
Hence there are only finitely many candidates for $L$ up to isometry since the discriminant and the rank of $L$ are bounded.
Let $\{ L_{1}, L_{2}, \dots, L_{t} \}$ be the set of all possible candidates for $L$. We define
$$
S = \bigcap_{i=1}^{t} Q(L_{i}).
$$
Then from the definition of $S$, it is obvious that $\{k+1, k+2, \dots, 2k\}$ is an $S$-universality criterion set.
Put $M_{1} = \langle k+2 \rangle$.
Since $k+1$ is not represented by $M_{1}$, there is a ${\mathbb Z}$-lattice $N_{1}$ such that $Q(N_{1}) \cap S = S-\{k+1\}$ by Proposition \ref{keyp}.
For each $i=2,3, \dots, k$, put
$$
M_{i} = \langle k+1, \dots, k+i-1 \rangle.
$$
Then, one may easily show that $k+j {\ \rightarrow\ } M_{i}$ for any $j = 1, \dots, i-1$, whereas $k+i$ is not represented by $M_{i}$.
Then, by Proposition \ref{keyp} again, there is a ${\mathbb Z}$-lattice $N_{i}$ such that $Q(N_{i}) \cap S = S-\{k+i\}$.
This implies that $\{k+1, k+2, \dots, 2k\}$ is the minimal $S$-universality criterion set.
\end{proof}
Let $\Phi_n$ be the set of all ${\mathbb Z}$-lattices of rank $n$. As explained in the introduction, it is known that the minimal $\Phi_n$-universality criterion set is unique if $n=1,2$, or $8$. For all the other positive integers $n$, the explicit minimal $\Phi_n$-universality criterion set is not known yet.
\begin{prop}
For any integer $n$ with $n \ge 9$, there are infinitely many minimal $\Phi_n$-universality criterion sets.
\end{prop}
\begin{proof} Let $\Phi_n^0=\{L_1,L_2,\dots,L_s\}$ be a minimal $\Phi_n$-universality criterion set. Assume that $L_i=I_{k_i} \perp \ell_i$, where $\min(\ell_i)\ge 2$.
If $n_0=\max\{k_i\}<n$, then $I_{n_0} \perp \ell_1\perp\dots\perp\ell_s$ represents all ${\mathbb Z}$-lattices in $\Phi_n^0$, but it does not represent $I_n$. This is a contradiction. Therefore $n_0=n$, that is, $I_n \in \Phi_n^0$. Similarly, one may easily show that there is an integer $j$ such that $L_j$ represents $D_m[1]$ for some integer $m \equiv 0 \Mod 4$ with $n-4\le m<n$. Note that $L_j=D_m[1]\perp M$ for some ${\mathbb Z}$-lattice $M$ with rank less than or equal to $4$. Without loss of generality, assume that $L_1=I_n$ and $L_2=D_m[1]\perp M$. Note that any ${\mathbb Z}$-lattice that represents both $L_1$ and $L_2$ should represent $I_n\perp D_m[1]$. Furthermore, since $I_n$ is $4$-universal, $L_j$ cannot represent $D_m[1]$ for any $j\ge 3$.
Now we show that for any ${\mathbb Z}$-lattice $N$ with rank $n-m$,
$$
\Phi_n^0(N)=\{I_n, D_m[1]\perp N,L_3,\dots, L_s\}
$$
is also a minimal $\Phi_n$-universality criterion set. Assume that a ${\mathbb Z}$-lattice $\mathcal L$ represents all ${\mathbb Z}$-lattices in $\Phi_n^0(N)$.
Since $I_n\perp D_m[1]$ is represented by $\mathcal L$, $L_2=D_m[1]\perp M$ is also represented by $\mathcal L$. Therefore, $\mathcal L$ is $n$-universal by the assumption that $\Phi_n^0$ is a $\Phi_n$-universality criterion set. By a similar argument, one may easily show that $\Phi_n^0(N)$ is, in fact, minimal.
\end{proof}
\begin{rmk}{\rm
We conjecture that
$$
\left\lbrace I_4, A_4, A_2\perp A_2, A_2\perp \begin{pmatrix} 2&1\\1&3\end{pmatrix}\right\rbrace
$$
is the unique minimal $\Phi_{4}$-universality criterion set. Here, $A_m={\mathbb Z}(e_1-e_2)+{\mathbb Z}(e_2-e_3)+\dots+{\mathbb Z}(e_m-e_{m+1})$ is a root ${\mathbb Z}$-lattice, where $\{e_i\}$ is the standard orthonormal basis of the cubic ${\mathbb Z}$-lattice $I_{m+1}$.}
\end{rmk}
\section{Recoverable ${\mathbb Z}$-lattices}
In this section, we introduce the notion of {\it recoverable} $\mathbb{Z}$-lattices, give some properties of those $\mathbb{Z}$-lattices, and present some necessary conditions and some sufficient conditions for a ${\mathbb Z}$-lattice to be recoverable.
In \cite{ekk}, Elkies and his collaborators gave an example of a set $S$ of ternary ${\mathbb Z}$-lattices such that the sizes of minimal $S$-universality criterion sets vary. To explain their example more precisely, let $T$ be the set of all ternary sublattices of $\langle 1,1,2\rangle$. Then, clearly, $T_0=\{\langle1,1,2\rangle\}$ is a minimal $T$-universality criterion set.
Furthermore, they proved that
$$
T_1=\{ \langle 1,1,8\rangle, \langle 2,2,2\rangle\}
$$
is also a minimal $T$-universality criterion set. The point is that any ${\mathbb Z}$-lattice that represents both $\langle 1,1,8\rangle$ and $\langle 2,2,2\rangle$, which are sublattices of $\langle1,1,2\rangle$, also represents $\langle 1,1,2\rangle$ itself.
From this point of view, the following definition seems to be quite natural:
\begin{defn}
Let $\ell$ be a ${\mathbb Z}$-lattice and let $S_0=\{\ell_1,\ell_2,\dots,\ell_t\}$ be a finite set of proper ${\mathbb Z}$-sublattices of $\ell$. We say $\ell$ is {\it recoverable by $S_0$} if any $S_0$-universal ${\mathbb Z}$-lattice represents $\ell$ itself.
\end{defn}
Note that the ternary ${\mathbb Z}$-lattice $\langle 1,1,2 \rangle$ is recoverable by $T_{1}$.
We simply say $\ell$ is {\it recoverable} if there is a finite set of proper sublattices satisfying the above property. Note that if $\ell$ is recoverable, then there is a minimal $S$-universality criterion set whose cardinality is greater than $1$, where $S$ is the set of all sublattices of $\ell$.
\begin{lem} \label{not-recover}
A ${\mathbb Z}$-lattice $\ell$ is not recoverable if and only if there is a ${\mathbb Z}$-lattice that represents all proper sublattices of $\ell$, but not $\ell$ itself.
\end{lem}
\begin{proof}
Suppose that $\ell$ is not recoverable. Let $S$ be the set of all proper sublattices of $\ell$ and let $S_{0}=\{\ell_1,\ell_2,\dots,\ell_t\}$ be a minimal $S$-universality criterion set. Since we are assuming that $\ell$ is not recoverable, there is a ${\mathbb Z}$-lattice $L$ that represents all ${\mathbb Z}$-lattices in $S_0$, whereas $L$ does not represent $\ell$ itself. Now, since the set $S_0$ is an $S$-universality criterion set, $L$ represents all proper sublattices of $\ell$, but not $\ell$ itself. The converse is trivial.
\end{proof}
\begin{lem} \label{scaling}
Let $\ell$ be a ${\mathbb Z}$-lattice and let $a$ be a positive integer. If $\ell^a$ is recoverable, then so is $\ell$.
\end{lem}
\begin{proof}
Assume that $\ell^a$ is recoverable by $\{\ell^a_1,\ell^a_2,\dots,\ell^a_t\}$, where $\ell_i$ is a proper sublattice of $\ell$ for any $i=1,2,\dots,t$. Let $M$ be any ${\mathbb Z}$-lattice that represents $\ell_i$ for any $i$. Then $\ell_i^a {\ \rightarrow\ } M^a$ for any $i$, and hence $\ell^a {\ \rightarrow\ } M^a$. Therefore $\ell {\ \rightarrow\ } M$, and $\ell$ is recoverable by $\{\ell_1,\ell_2,\dots,\ell_t\}$.
\end{proof}
\begin{rmk}{\rm
No unary ${\mathbb Z}$-lattice is recoverable. Indeed, let $\ell=\langle 1 \rangle$. Note that $\langle 2,2,5 \rangle$ represents all squares of integers except for $1$ (see \cite{jko}). Hence $\langle 2,2,5 \rangle$ represents all proper sublattices of $\ell$, but not $\ell$ itself. Therefore $\ell$ is not recoverable by Lemma \ref{not-recover}. Moreover, since every unary ${\mathbb Z}$-lattice can be obtained from $\ell$ by a suitable scaling, no unary ${\mathbb Z}$-lattice is recoverable by Lemma \ref{scaling}.}
\end{rmk}
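The fact quoted from \cite{jko}, that $\langle 2,2,5\rangle$ represents every square except $1$, can at least be spot-checked by brute force for small squares (a numerical sanity check of ours, not a proof):

```python
from itertools import product
from math import isqrt

def represents(coeffs, n):
    """Naive search: does the diagonal form sum(a_i * x_i^2) represent n?"""
    b = isqrt(n)
    return any(sum(a * x * x for a, x in zip(coeffs, xs)) == n
               for xs in product(range(b + 1), repeat=len(coeffs)))

# 2x^2 + 2y^2 + 5z^2 misses 1 but hits every other square m^2, checked for m <= 30
assert not represents([2, 2, 5], 1)
assert all(represents([2, 2, 5], m * m) for m in range(2, 31))
```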
\begin{rmk} {\rm
Note that the converse of the above lemma does not hold in general. Let $\ell=\langle1,4\rangle$ be the binary ${\mathbb Z}$-lattice. Let $L$ be any ${\mathbb Z}$-lattice representing both $\ell_1=\langle1,16\rangle$ and $\ell_2=\langle4,4\rangle$. Since $L$ represents $\ell_1$, there is a vector $e_1 \in L$ and a ${\mathbb Z}$-sublattice $L_1$ of $L$ such that $L={\mathbb Z} e_1+L_1$, where $Q(e_1)=1$ and $B(e_1,L_1)=0$. Furthermore, since $L$ represents $\ell_2 = \langle4,4\rangle$, there are nonnegative integers $a,b$ and vectors $x,y \in L_1$ such that
$$
Q(ae_1+x)=a^{2}+Q(x)=Q(be_1+y)=4 \quad \text{and} \quad B(ae_1+x,be_1+y)=0.
$$
If $a=2$, then $x=0$ and $b=0$. Hence $\langle4\rangle {\ \rightarrow\ } L_1$.
If $a=1$, then
$$
b=0 \quad \text{and}\quad Q(y)=4 \quad\text{or}\quad
b=1, \ Q(x)=Q(y)=3, \ \text{and}\ B(x,y)=-1.
$$
For the latter case, we have $Q(x+y)=4$. Finally, if $a=0$, then $Q(x)=4$. Therefore $L_1$ represents $4$ in any case, which implies that $L$ represents $\ell$. Hence
$\ell$ is recoverable by $\{\ell_1,\ell_2\}$.
Now, we show that $\ell^2=\langle2,8\rangle$ is not recoverable. To show this, let $S$ be the set of all binary ${\mathbb Z}$-lattices with minimum greater than or equal to $9$, and let $S_0=\{\mathfrak m_1,\dots,\mathfrak m_t\}$ be a finite minimal $S$-universality criterion set. Then $\mathfrak m_1\perp \cdots\perp \mathfrak m_t$ represents all binary ${\mathbb Z}$-lattices with minimum greater than or equal to $9$. Now, we define
$$
L=K \perp \mathfrak m_1\perp \cdots\perp \mathfrak m_t,
\quad
\text{where}
\quad
K=\begin{pmatrix} 2&1&1&0\\1&8&0&0\\1&0&8&4\\0&0&4&10\end{pmatrix}.
$$
Clearly, $\ell^2=\langle2,8\rangle$ is not represented by $L$. We show that any proper sublattice of $\ell^2$ is represented by $L$.
Let $\ell_{3}$ be any proper sublattice of $\ell^2$. If $\min(\ell_3) \ge 9$, then $\ell_3$ is represented by $\mathfrak m_1\perp\cdots\perp \mathfrak m_t$. Hence we may assume that $\min(\ell_3)=2$ or $8$. For the former case, we have $\ell_3 \simeq \langle2,8m^2\rangle$ for some integer $m\ge2$. Since $\langle 8m^2\rangle {\ \rightarrow\ } \mathfrak m_1\perp \cdots\perp \mathfrak m_t$, $L$ represents $\ell_3$. For the latter case, one may easily check that $\ell_3$ is isometric to one of the binary ${\mathbb Z}$-lattices
$$
\langle 8,2m^2\rangle \quad \text{and} \quad \begin{pmatrix} 8&4\\4&2+8n^2\end{pmatrix},
$$
where $m\ge2$ and $n\ge1$. Note that $K$ represents the binary ${\mathbb Z}$-lattice $\ell_3$ for $m=2$ or $n=1$. If $m\ge3$ or $n\ge2$, then one may easily show that
$$
\langle 8,2m^2\rangle {\ \rightarrow\ } \langle 8,8, 2m^2-8\rangle {\ \rightarrow\ } L \ \ \text{or} \ \ \begin{pmatrix} 8&4\\4&2+8n^2\end{pmatrix} {\ \rightarrow\ } \begin{pmatrix}8&4\\4&10\end{pmatrix} \perp \langle 8n^2-8\rangle {\ \rightarrow\ } L.
$$
Therefore $L$ represents all proper sublattices of $\ell^2$, but not $\ell^2$ itself. Consequently, $\ell^2$ is not recoverable.}
\end{rmk}
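The key properties of the quaternary lattice $K$ in the remark can be confirmed numerically. The sketch below (our own check) searches a small coordinate box, which is enough to exhibit the required vectors; making the non-existence claims rigorous would need a proper bound on the box derived from the smallest eigenvalue of $K$.

```python
import numpy as np
from itertools import product

K = np.array([[2, 1, 1, 0],
              [1, 8, 0, 0],
              [1, 0, 8, 4],
              [0, 0, 4, 10]])
assert np.all(np.linalg.eigvalsh(K) > 0)       # K is positive definite

B = 4  # heuristic search box; enough for vectors of norm <= 8
vecs = [np.array(v) for v in product(range(-B, B + 1), repeat=4)]
norm2 = [v for v in vecs if v @ K @ v == 2]
norm8 = [v for v in vecs if v @ K @ v == 8]

# K represents <8,8> (the case m = 2): an orthogonal pair of norm-8 vectors
assert any(u @ K @ v == 0 for u in norm8 for v in norm8 if not np.array_equal(u, v))
# ...but no norm-2 vector is orthogonal to a norm-8 vector, so <2,8> is not represented
assert all(u @ K @ v != 0 for u in norm2 for v in norm8)
```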
One may easily check that no additively indecomposable ${\mathbb Z}$-lattice is recoverable.
We further prove that no indecomposable ${\mathbb Z}$-lattice of rank less than $4$ is recoverable.
\begin{prop}\label{bin}
Any indecomposable binary ${\mathbb Z}$-lattice is not recoverable.
\end{prop}
\begin{proof}
Let $\ell$ be an indecomposable binary ${\mathbb Z}$-lattice. Let $\{x,y\}$ be a Minkowski-reduced (ordered) basis for $\ell$, that is, $0 \le 2\vert B(x,y)\vert \le Q(x) \le Q(y)$. Let $S$ be the set of all proper sublattices of $\ell$, and let $S_{0}=\{\ell_{1},\ell_{2},\dots,\ell_{t}\}$ be a minimal $S$-universality criterion set.
If we define $L=\ell_1\perp\dots\perp \ell_t$, then $L$ represents all proper sublattices of $\ell$. Hence it suffices to show that $L$ does not represent $\ell$ itself. To do this, let $\ell_i={\mathbb Z} x_i+{\mathbb Z} y_i$, where $\{x_i,y_i\}$ is a Minkowski reduced basis for $\ell_i$ for any $i=1,2,\dots,t$. Suppose on the contrary that there is a representation $\phi:\ell \rightarrow L$. Since $\ell_{i}$ is a sublattice of $\ell$ for any $i=1,2,\dots,t$, we may assume, without loss of generality, that $\phi(x)=x_{1}$. Suppose that $\phi(y) = \alpha x_{1} + \beta y_{1} + z$, where $\alpha,\beta \in {\mathbb Z}$ and $z \in ({\mathbb Z} x_{2}+{\mathbb Z} y_{2}) \perp \cdots \perp ({\mathbb Z} x_{t}+{\mathbb Z} y_{t})$.
Since $\ell$ is indecomposable, $\beta$ cannot be zero.
Then, we have
$$
d\ell=d(\phi(\ell)) = d({\mathbb Z} x_{1}+{\mathbb Z} (\alpha x_{1} + \beta y_{1} + z)) \ge d({\mathbb Z} x_{1}+{\mathbb Z} (\alpha x_{1} + \beta y_{1})) \ge d\ell_{1} > d\ell,
$$
which is a contradiction. Therefore $L$ does not represent $\ell$ and hence, by Lemma \ref{not-recover}, the binary ${\mathbb Z}$-lattice $\ell$ is not recoverable.
\end{proof}
\begin{prop}
Any indecomposable ternary ${\mathbb Z}$-lattice is not recoverable.
\end{prop}
\begin{proof}
Suppose that there is an indecomposable ternary ${\mathbb Z}$-lattice, say $L$, that is recoverable. Then there are proper sublattices $L_{1}, L_{2}, \dots, L_{t}$ of $L$ such that $L$ is represented by $L_{1} \perp L_{2} \perp \cdots \perp L_{t}$.
Without loss of generality, we may assume that all $L_{i}$'s are of rank 3. Let
$$
\phi : L {\ \rightarrow\ } L_{1} \perp L_{2} \perp \cdots \perp L_{t}
$$
be a representation. Let $\{ u,v,w \}$ be a Minkowski reduced (ordered) basis for $L$. Without loss of generality, we may assume that $\phi(u)=x_{1} \in L_1$. Clearly, there exists a Minkowski reduced basis for $L_{1}$, say $\{ x_{1}, x_{2}, x_{3} \}$, containing $x_{1}$.
Assume that
$$
\phi(v) = a_{1}x_{1}+x+y,
$$
where $a_1\in {\mathbb Z}$, $x \in {\mathbb Z} x_{2}+ {\mathbb Z} x_{3}$ and $y \in L_{2} \perp \cdots \perp L_{t}$.
First, assume that $x=0$.
Since
$$
2|a_{1}| Q(x_{1}) =2|B(x_{1}, a_{1}x_{1}+y)|=2\vert B(u,v)\vert\le Q(u)=Q(x_{1}),
$$
we have $a_{1}=0$.
Put
$$
\phi(w) = b_{1}x_{1}+b_{2}x_{2}+b_{3}x_{3}+z,
$$
where $b_i\in {\mathbb Z}$ for any $i=1,2,3$ and $z \in L_{2} \perp \cdots \perp L_{t}$.
Then, we have
$$
\phi(L)= {\mathbb Z} x_1 +{\mathbb Z} y +{\mathbb Z} (b_{1}x_{1}+b_{2}x_{2}+b_{3}x_{3}+z).
$$
If $b_{3} \ne 0$, then
$$
\begin{array}{lll}
\mu_{3}(L) = Q(b_{1}x_{1}+b_{2}x_{2}+b_{3}x_{3}+z)\!\!\!&= Q(b_{1}x_{1}+b_{2}x_{2}+b_{3}x_{3})+Q(z)\\[0.2cm]
\!\!\!&\ge \mu_{3}(L_{1}) +Q(z) \ge \mu_{3}(L)+Q(z),
\end{array}
$$
which implies that $z=0$.
Hence, $\phi(L)$ is decomposable, which is a contradiction.
Therefore, we have $b_{3}=0$.
Observe that
$$
L_{1} = {\mathbb Z} x_{1} + {\mathbb Z} x_{2} + {\mathbb Z} x_{3} \subseteq L={\mathbb Z} u +{\mathbb Z} v+{\mathbb Z} w \simeq \phi(L).
$$
Then, $b_{1}x_{1} + b_{2}x_{2} = \alpha u + \beta v + \gamma w$ for some integers $\alpha, \beta$ and $\gamma$.
If $\gamma \ne 0$, then
$$
\mu_{3}(L) = Q(b_{1}x_{1}+b_{2}x_{2})+Q(z) \ge Q(w)+Q(z) = \mu_{3}(L)+Q(z).
$$
This implies that $z=0$, which is a contradiction.
Hence, $b_{1}x_{1} + b_{2}x_{2} = \alpha u + \beta v$.
Similarly, we have $x_{1} = \alpha_{1} u + \beta_{1} v$ for some integers $\alpha_1$ and $\beta_1$.
Since $Q(x_{1})=Q(u)$ and $B(u,v)=0$, we have $x_{1} = \pm u$ and $b_{1}x_{1} + b_{2}x_{2} = \beta v$ or $x_{1} = \pm v$ and $b_{1}x_{1} + b_{2}x_{2} = \alpha u$.
In any case, $\phi(L)$ is decomposable, which is a contradiction.
Finally, assume that $x \ne 0$.
Since
$$
\begin{array}{lll}
\mu_{2}(L) = Q(v)=Q(a_{1}x_{1}+x+y) \!\!\!&= Q(a_{1}x_{1}+x)+Q(y)\\[0.2cm]
\!\!\!& \ge \mu_{2}(L_{1}) +Q(y) \ge \mu_{2}(L) +Q(y),
\end{array}
$$
we have $y=0$.
Put
$$
\phi(v) = a_{1}x_{1}+a_{2}x_{2}+a_{3}x_{3} \quad \mbox{and}\quad \phi(w) = b_{1}x_{1}+b_{2}x_{2}+b_{3}x_{3}+z,
$$
where $a_i,b_i\in {\mathbb Z}$ for any $i=1,2,3$ and $z \in L_{2} \perp \cdots \perp L_{t}$.
If $b_{3} \ne 0$, then
$$
\mu_{3}(L) = Q(b_{1}x_{1}+b_{2}x_{2}+b_{3}x_{3})+Q(z) \ge \mu_{3}(L_{1})+Q(z) \ge \mu_{3}(L)+Q(z).
$$
This implies that $z=0$. Then $\phi(L) \subseteq L_{1}$, which is a contradiction.
Hence, $b_{3}=0$.
Suppose that $a_{3} \ne 0$.
Since
$$
\mu_{2}(L)=Q(a_{1}x_{1}+a_{2}x_{2}+a_{3}x_{3}) \ge \mu_{3}(L_{1}) \ge \mu_{3}(L),$$
we have $\mu_{2}(L)= \mu_{3} (L) = \mu_{2}(L_{1}) = \mu_{3}(L_{1})$. Then, we have
$$
\mu_{2}(L) = \mu_{3} (L) = Q(b_{1}x_{1}+b_{2}x_{2}+z) \ge \mu_{2}(L_{1})+Q(z) = \mu_{2}(L)+Q(z).
$$
This implies that $z=0$, which is a contradiction.
Therefore $a_{3} = 0$. Since $a_{2} \ne 0$, we have
$$
\mu_{2}(L) = Q(a_{1}x_{1}+a_{2}x_{2}) \ge \mu_{2}(L_{1}) =Q(x_{2}) \ge \mu_{2}(L).
$$
This means that $Q(a_{1}x_{1}+a_{2}x_{2})=Q(x_{2})$.
Let
$$
{\mathbb Z} x_{1}+{\mathbb Z} x_{2} = \begin{pmatrix} s & r \\ r & t \end{pmatrix}.
$$
Then, we have
$$
t=a_{1}^{2}s+2a_{1}a_{2}r+a_{2}^{2}t=s \left( a_{1} + \frac{ra_{2}}{s} \right)^{2} +a_{2}^{2} \left( t-\frac{r^{2}}{s} \right) \ge a_{2}^{2} \left( t-\frac{s}{4} \right).
$$
If $|a_{2}| \ge 2$, then $t \ge 4t-s>t$, which is a contradiction.
Hence, we have $a_{2} = \pm 1$. Then $\phi(L) = {\mathbb Z} x_{1}+{\mathbb Z} x_{2} +{\mathbb Z} z$ is decomposable, which is a contradiction. This completes the proof.
\end{proof}
\section{Recoverable binary $\mathbb{Z}$-lattices}
In this section, we focus on recoverable binary ${\mathbb Z}$-lattices.
We find some necessary conditions and some sufficient conditions for binary ${\mathbb Z}$-lattices to be recoverable.
Let $n$ be a positive integer and let $S$ be the set of all binary ${\mathbb Z}$-lattices with minimum greater than or equal to $n$. Then there is a finite minimal $S$-universality criterion set $S_{n}=\{m_{1},\dots,m_{t}\}$ by \cite{kko}. If we define $M=m_{1}\perp \cdots\perp m_{t}$, then $M$ represents all binary ${\mathbb Z}$-lattices with minimum greater than or equal to $n$.
In this section, $\MM{n}$ stands for a ${\mathbb Z}$-lattice that represents all binary ${\mathbb Z}$-lattices with minimum greater than or equal to $n$ and satisfies $\min(\MM{n})=n$.
By the above argument, such a ${\mathbb Z}$-lattice always exists.
\begin{prop}\label{2leab}
Let $a$ and $b$ be positive integers such that $2 \le a < b$ and $a$ does not divide $b$. Then the diagonal binary ${\mathbb Z}$-lattice $\ell=\langle a, b \rangle $ is not recoverable.
\end{prop}
\begin{proof} It suffices to show that there is a ${\mathbb Z}$-lattice that represents all proper sublattices of $\ell$ but does not represent $\ell$ itself.
Let $h$ be the positive integer such that $h^{2}a < b < (h+1)^{2}a$; such an $h$ exists since $a$ does not divide $b$. For $h \ge 2$, we define a ${\mathbb Z}$-lattice
$$
K(h) =\perp_{\substack{2\le i\le h \\ 1 \le j \le [\frac{i}{2}]}}
\begin{pmatrix}i^{2}a&ija\\ija&j^{2}a+b\end{pmatrix}.
$$
Now, we define
$$
L(h) =
\begin{cases}
\left( {\mathbb Z} x + {\mathbb Z} y \right) \perp \MM{b+1} & \text{if} \ h=1,\\
\left( {\mathbb Z} x + {\mathbb Z} y \right) \perp K(h) \perp \MM{b+1} & \text{otherwise},
\end{cases}
$$
where ${\mathbb Z} x + {\mathbb Z} y = \begin{pmatrix} a&1\\1&b \end{pmatrix}$.
We claim that $L(h)$ represents all proper sublattices of $\ell$, whereas it does not represent $\ell$ itself.
First, we will prove that $L(h)$ represents all proper sublattices of $\ell$.
Let $\ell'$ be a proper sublattice of $\ell$. If $\min(\ell') >b$, then $\MM{b+1}$ represents $\ell'$ and so does $L(h)$. If $\min(\ell') =b$, then $\ell' \simeq \langle b, \alpha^{2}a \rangle$ for some integer $\alpha$ with $\alpha^{2}a > b$.
Since $\alpha^{2}a$ is represented by $\MM{b+1}$, $L(h)$ represents $\ell'$. Now, assume that $a<\min(\ell')<b$.
Since every value of $\ell$ lying strictly between $a$ and $b$ is of the form $i^{2}a$ with $i\ge 2$, we have $4a \le \min(\ell')<b$, and hence $h \ge 2$. Furthermore, there are integers $i,j$, and $\beta$ with $2 \le i \le h$, $0 \le j \le \left[\frac{i}{2}\right]$ and $\beta \ge 1$ such that
$$
\ell' \simeq \begin{pmatrix} i^{2}a & ija \\ ija & j^{2}a+\beta^{2}b \end{pmatrix}.
$$
If $\beta=1$, then clearly $\ell' {\ \rightarrow\ } L(h)$.
Assume that $\beta \ge 2$.
Since $(\beta^{2}-1)b>b$, we have
$$
\ell' \simeq \begin{pmatrix} i^{2}a & ija \\ ija & j^{2}a+\beta^{2}b \end{pmatrix}
{\ \rightarrow\ } \begin{pmatrix} i^{2}a & ija \\ ija & j^{2}a+b \end{pmatrix} \perp \MM{b+1},
$$
which implies that $L(h)$ represents $\ell'$. Finally, if $\min(\ell') = a$, then $\ell' \simeq \langle a, \beta^{2}b \rangle$ for some integer $\beta \ge 2$. Since $\beta^{2}b$ is represented by $\MM{b+1}$, $L(h)$ represents $\ell'$.
Next, we will show that $L(h)$ does not represent $\ell$. Clearly, $L(1)$ does not represent $\ell$. Assume $h \ge 2$. For any $i,j$ with $2 \le i \le h$ and $ 1 \le j \le \left[ \frac{i}{2} \right]$, let
$$
K_{ij}={\mathbb Z} z + {\mathbb Z} w =\begin{pmatrix} i^{2}a&ija\\ija&j^{2}a+b \end{pmatrix}.
$$
Since
$$
Q(sz+tw) = (si+tj)^{2}a + t^{2}b >a,
$$
for any integers $s$ and $t$, the binary ${\mathbb Z}$-lattice $K_{ij}$ does not represent $a$. Suppose that $Q(sz+tw)=b$ for some integers $s$ and $t$. Since $a$ does not divide $b$, we have $t^{2}=1$ and $si+tj=0$. Furthermore, since $j = |si| \le \left[ \frac{i}{2} \right]$,
we have $s=j=0$. This is a contradiction. Hence $K_{ij}$ does not represent $b$ for any possible integers $i$ and $j$.
Therefore we have
\begin{equation}\label{qvalue}
Q(K(h)) \subseteq (\left\{ ma+nb \mid m,n \in {\mathbb N} \cup\{0\} \right\} \setminus \left\{ a, b \right\}).
\end{equation}
Suppose that $L(h)$ represents $\ell$.
Let $u \in L(h)$ be a vector with $Q(u) = b$. Then
$$
u=\alpha x + \beta y + z + w,
$$
for some integers $\alpha, \beta$ and some vectors $z \in K(h)$ and $w \in \MM{b+1}$.
Since
$$
Q(u) = Q(\alpha x + \beta y) + Q(z) + Q(w)=b \quad \text{and}\quad Q(w) > b,
$$
we have $w=0$. By \eqref{qvalue}, we have $Q(z)=0$ or $Q(z) = \delta a$ for some integer $\delta \ge 2$.
If $|\beta| \ge 2$, then $Q(\alpha x + \beta y) \ge \beta^{2}(b-1) >b $.
If $\beta=0$, then $Q(\alpha x)$ is a multiple of $a$, and so is $Q(u)$.
Hence, we have $|\beta|=1$. This implies that
$$
u=
\begin{cases}
\pm (x-y) \ \text{or} \ \pm y & \text{if} \ a=2,\\
\pm y& \text{if} \ a \ge 3.
\end{cases}
$$
Let $v \in L(h)$ be a vector with $Q(v) = a$. Then $v=x$ or $v=-x$. Finally, we have $B(u, v) \ne 0$, which is a contradiction. This completes the proof.
\end{proof}
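The crucial computation $Q(sz+tw)=(si+tj)^{2}a+t^{2}b$ for the blocks $K_{ij}$ can be sanity-checked numerically. A sketch of ours with the toy values $a=2$, $b=9$ (so $h=2$ and the only block is $K_{21}$):

```python
from itertools import product

a, b, i, j = 2, 9, 2, 1                                    # toy instance with h = 2
G = [[i * i * a, i * j * a], [i * j * a, j * j * a + b]]   # Gram matrix of K_21

def Q(s, t):
    return G[0][0] * s * s + 2 * G[0][1] * s * t + G[1][1] * t * t

vals = {Q(s, t) for s, t in product(range(-20, 21), repeat=2)}
vals = {v for v in vals if 0 < v <= 100}

assert a not in vals and b not in vals        # K_21 represents neither a nor b
# every value equals (si + tj)^2 * a + t^2 * b, hence has the form m*a + n*b
assert all(any(v == m * a + n * b
               for m in range(v // a + 1) for n in range(v // b + 1))
           for v in vals)
```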
\begin{prop}\label{odd}
For any positive odd integer $m$, the diagonal binary ${\mathbb Z}$-lattice $\ell=\langle 1, m \rangle$ is not recoverable.
\end{prop}
\begin{proof}
For a positive odd integer $m$, let $\ell=\langle 1, m \rangle$ be the diagonal ${\mathbb Z}$-lattice.
Since $\langle 1 \rangle \perp \MM{2}$ represents all proper sublattices of $\langle 1,1 \rangle$, but not $\langle 1,1 \rangle$ itself, the binary ${\mathbb Z}$-lattice $\langle 1, 1 \rangle$ is not recoverable.
Hence we may assume that $m \ge 3$.
Let $N$ be any quinary ${\mathbb Z}$-lattice that represents all binary even ${\mathbb Z}$-lattices.
Note that such a ${\mathbb Z}$-lattice exists; for example, the root lattice $D_{5}$ is one such quinary ${\mathbb Z}$-lattice (see \cite{jko2}). Define a ${\mathbb Z}$-lattice
$$
L=\langle 1 \rangle \perp N \perp \MM{m+1}.
$$
It is obvious that $\langle 1, m \rangle$ is not represented by $L$.
Let $\ell_1$ be any proper ${\mathbb Z}$-sublattice of $\ell$.
First, suppose that $\min(\ell_1)=1$. Then $\ell_1 \simeq \langle 1, m\beta^{2} \rangle$ for some integer $\beta \ge 2$.
Since $\langle m\beta^{2} \rangle {\ \rightarrow\ } \MM{m+1}$, we have $\ell {\ \rightarrow\ } L$.
Now, suppose that $\min(\ell_1)>1$. Then clearly, $\min(\ell_1) \ge 3$.
Choose a Minkowski-reduced basis for $\ell_1$ so that, for some integers $a,b$, and $c$ with $0 \le 2b \le a \le c$, we have
$$
\ell_1 \simeq \begin{pmatrix} a&b\\ b&c \end{pmatrix}.
$$
Note that we are assuming that $a\ge 3$. If $a \equiv c \equiv 0 \pmod{2}$, then $\ell_1{\ \rightarrow\ } N$ and so $\ell_1 {\ \rightarrow\ } L$. If $a \equiv c \equiv 1 \pmod{2}$, then we define a ${\mathbb Z}$-lattice
$$
\ell_1' = \begin{pmatrix} a-1&b-1\\ b-1&c-1 \end{pmatrix}.
$$
Since $d\ell_1' \ge \frac{3}{4}ac-c =\frac{3c}{4}(a-\frac43) > 0$, the even ${\mathbb Z}$-lattice $\ell_1'$ is positive definite.
Hence $\ell_1' {\ \rightarrow\ } N$ and therefore $\ell_1 {\ \rightarrow\ } \langle 1 \rangle \perp N {\ \rightarrow\ } L$. If $a \equiv 1 \pmod{2}$ and $c\equiv 0 \pmod{2}$, then we define a ${\mathbb Z}$-lattice
$$
\ell_1'' = \begin{pmatrix} a-1&b\\ b&c \end{pmatrix}.
$$
Since $d\ell_1'' = \left( \frac{ac}{4}-b^{2} \right) +\frac{c}{4} \left( 3a-4 \right) >0$, the even ${\mathbb Z}$-lattice $\ell_1''$ is positive definite.
Hence $\ell_1'' {\ \rightarrow\ } N$ and therefore $\ell_1 {\ \rightarrow\ } \langle 1\rangle \perp \ell_1'' {\ \rightarrow\ } \langle 1\rangle \perp N {\ \rightarrow\ } L$. Since the case when $a \equiv 0 \pmod{2}$ and $c\equiv 1 \pmod{2}$ is quite similar, its proof is left to the reader.
\end{proof}
\begin{prop} \label{2m4}
For any positive integer $m$ with $m \equiv 2 \pmod{4}$, the diagonal binary ${\mathbb Z}$-lattice $\ell=\langle 1, m \rangle$ is not recoverable.
\end{prop}
\begin{proof}
For a positive integer $m \equiv 2 \pmod{4}$, let $\ell=\langle 1,m \rangle$ be the diagonal ${\mathbb Z}$-lattice. Since $\langle 1,1 \rangle \perp \MM{3}$ represents all proper sublattices of $\langle 1,2 \rangle$, but it does not represent $\langle 1,2 \rangle$ itself, the binary ${\mathbb Z}$-lattice $\langle 1,2 \rangle$ is not recoverable. If we define
$$
L=\langle 1 \rangle \perp \begin{pmatrix} 4 & 0 & 2 \\ 0 & 5 & 1 \\ 2 & 1 & 7 \end{pmatrix} \perp \MM{7},
$$
then one may check that $L$ represents all proper sublattices of $\langle 1, 6 \rangle$, but it does not represent $\langle 1, 6 \rangle$ itself.
From now on, we assume that $m \ge 10$.
Define
$$
L_m' = {\mathbb Z} x + {\mathbb Z} y + {\mathbb Z} z + {\mathbb Z} t = \langle 1,3,5,m-1 \rangle.
$$
Let $N$ be an even 2-universal quinary ${\mathbb Z}$-lattice and let $\mathcal{N}$ be the ${\mathbb Z}$-lattice obtained from $N$ by scaling the quadratic space $\mathbb{Q} \otimes N$ by $2$. Hence $\mathcal N$ represents all binary ${\mathbb Z}$-lattices whose norm is contained in $4{\mathbb Z}$.
Now, we define
$$
L_m = L_m' \perp \mathcal{N} \perp \MM{m+1}.
$$
We will show that any proper sublattice of $\ell=\langle 1, m \rangle$ is represented by $L_m$,
whereas $\ell$ itself is not represented by $L_m$. Suppose, on the contrary, that $\langle 1, m \rangle$ is represented by $L_m$.
Then, one may easily check that
$$
\langle m \rangle \longrightarrow \langle 3,5 \rangle \perp \mathcal{N}.
$$
Hence we have $m \equiv 3a^2+5b^2 \pmod{4}$ for some integers $a$ and $b$, which is a contradiction to the fact that $m \equiv 2 \pmod{4}$.
Let $\ell= {\mathbb Z} u +{\mathbb Z} v = \langle 1, m \rangle$ and let $\ell_1$ be any proper sublattice of $\ell$. If $\min(\ell_1)=1$, then $\ell_1\simeq \langle1,m\beta^2 \rangle$ for some integer $\beta\ge2$. Clearly, $\ell_1 {\ \rightarrow\ } L_m$. Assume that $1< \min(\ell_1)< m$. Then there are integers $a,b$, and $c$ such that $\ell_1= {\mathbb Z} (au) + {\mathbb Z} (bu+ cv)$. If $|c| \ge 2$, then we have $\ell_1 \subseteq {\mathbb Z} u + {\mathbb Z} (cv) = \langle 1, c^{2}m \rangle {\ \rightarrow\ } L_m$.
Hence we may assume that $\ell_1 = {\mathbb Z} (au)+{\mathbb Z} (bu+v)$, where the integers $a$ and $b$ satisfy $a \ge 2$ and $0 \le b < a$. Note that
$$
\ell_1 = \begin{pmatrix} a^{2} & ab \\ ab & b^{2}+m \end{pmatrix}.
$$
First, assume that $a \equiv b \equiv 0 \pmod{2}$.
Since
$$
\begin{pmatrix} a^{2} & ab \\ ab & b^{2}+m-6 \end{pmatrix} {\ \rightarrow\ } \mathcal{N},
$$
we have $\ell_1{\ \rightarrow\ } \langle 1,5 \rangle \perp \mathcal{N} {\ \rightarrow\ } L_m$. Now, assume that $a \equiv 0 \pmod{2}$ and $b\equiv 1 \pmod{2}$.
Since
$$
\begin{pmatrix} a^{2} & ab \\ ab & b^{2}+m-3 \end{pmatrix} {\ \rightarrow\ } \mathcal{N},
$$
we have $\ell_1 {\ \rightarrow\ } \langle 3 \rangle \perp \mathcal{N} {\ \rightarrow\ } L_m$. Assume that $a \equiv b \equiv 1 \Mod{2}$. Let $w\in\mathcal{N}$ be a vector with $Q(w)=m-2$. Then, we have
$$
{\mathbb Z} (x+y+z)+{\mathbb Z}(y+w)=\begin{pmatrix} 9&3\\ 3&1+m \end{pmatrix} {\ \rightarrow\ } L_m.
$$
Hence we may assume that $a \ge 5$. Consider the following ${\mathbb Z}$-lattice
$$
\ell_1' = \begin{pmatrix} a^{2}-9 & ab-3 \\ ab-3 & b^{2}+m-3 \end{pmatrix}.
$$
Since $m > a^2$, we have $d (\ell_1')>0$. Hence $\ell_1' {\ \rightarrow\ } \mathcal{N}$.
Therefore there are vectors $w_{1}, w_{2} \in \mathcal{N}$ such that
$$
\ell_1' \simeq {\mathbb Z} w_{1}+{\mathbb Z} w_{2} \subseteq \mathcal{N}.
$$
Then, we have
$$
{\mathbb Z} (x+y+z+w_{1}) + {\mathbb Z} (y+w_{2}) = \begin{pmatrix} a^{2} & ab \\ ab & b^{2}+m \end{pmatrix}.
$$
This implies that $\ell_1$ is represented by $L_m$.
Finally, assume that $a \equiv 1 \pmod{2}$ and $b\equiv 0 \pmod{2}$.
If $a=3$, then $b=0$ or $2$. If $b=0$, then
$$
\ell_1 = \langle 9,m \rangle \longrightarrow \langle 1,5,m-1 \rangle \perp \mathcal{N} {\ \rightarrow\ } L_m.
$$
If $b=2$, then we have
$$
\ell_1 = \begin{pmatrix} 9 & 6 \\ 6 & m+4 \end{pmatrix} \simeq \begin{pmatrix} 9 & 3 \\ 3 & 1+m \end{pmatrix} {\ \rightarrow\ } L_m.
$$
Now, assume that $a \ge 5$. Consider the following ${\mathbb Z}$-lattice
$$
\ell_1'' = \begin{pmatrix} a^{2}-9 & ab-4 \\ ab-4 & b^{2}+m-6 \end{pmatrix}.
$$
Since $d (\ell_1'')>0$, we have $\ell_1'' {\ \rightarrow\ } \mathcal{N}$.
Hence there are vectors $w'_{1}, w'_{2} \in \mathcal{N}$ such that
$$
\ell_1'' \simeq {\mathbb Z} w'_{1}+{\mathbb Z} w'_{2} \subseteq \mathcal{N}.
$$
Then, we have
$$
{\mathbb Z} (x+y+z+w'_{1}) + {\mathbb Z} (-x+z+w'_{2}) = \begin{pmatrix} a^{2} & ab \\ ab & b^{2}+m \end{pmatrix}.
$$
Therefore, we have $\ell_1 {\ \rightarrow\ } L_m$.
If $\min(\ell_1)=m$, then $\ell_1 \simeq \langle \alpha^2,m \rangle$ for some integer $\alpha$ with $\alpha^2>m$. Hence, we have $\ell_1 {\ \rightarrow\ } \langle 1,m-1\rangle\perp \MM{m+1} {\ \rightarrow\ } L_m$. Finally, if $\min(\ell_1)>m$, then we have $\ell_1 {\ \rightarrow\ } \MM{m+1} {\ \rightarrow\ } L_m$. This completes the proof.
\end{proof}
\section{Recoverable numbers}
From Propositions \ref{bin}, \ref{2leab}, \ref{odd}, and \ref{2m4}, one may conclude that if a binary ${\mathbb Z}$-lattice $\ell$ is recoverable, then $\ell = \langle a, 4ma \rangle$ for some positive integers $a$ and $m$.
In this section, we focus on the case when $a=1$. A positive integer $m$ is called \textit{recoverable} if the diagonal binary ${\mathbb Z}$-lattice $\langle 1,4m \rangle$ is recoverable. We prove that any square of an integer is a recoverable number. We also prove that there are infinitely many non-square recoverable numbers.
\begin{prop}
Any square of an integer is recoverable, that is, the diagonal binary ${\mathbb Z}$-lattice $\ell = \langle 1, 4m^{2} \rangle$ is recoverable for any integer $m$.
\end{prop}
\begin{proof}
Let $\ell= \langle 1, 4m^{2} \rangle$ be the diagonal binary ${\mathbb Z}$-lattice.
Let $S$ be the set of all proper sublattices of $\ell$. By Lemma \ref{not-recover}, it suffices to show that any $S$-universal ${\mathbb Z}$-lattice represents $\ell$ itself. Let $L$ be an $S$-universal ${\mathbb Z}$-lattice. Since $\langle 1, 16m^{2} \rangle {\ \rightarrow\ } L$, we have $L={\mathbb Z} e_{1} \perp L' =\langle 1 \rangle \perp L'$ for some ${\mathbb Z}$-lattice $L'$.
Since $\langle 4, 4m^{2} \rangle {\ \rightarrow\ } L$, one of the following holds:
\begin{enumerate}
\item there is a vector $y \in L'$ such that ${\mathbb Z} (2e_{1}) + {\mathbb Z} y = \langle 4, 4m^{2} \rangle$;
\item there are vectors $x,y \in L'$ and an integer $a$ such that
$$
{\mathbb Z} (e_{1}+x) + {\mathbb Z} (ae_{1}+y) = \langle 4, 4m^{2} \rangle;
$$
\item there are vectors $x,y \in L'$ and an integer $a$ such that
$$
{\mathbb Z} x + {\mathbb Z} (ae_{1}+y) = \langle 4, 4m^{2} \rangle.
$$
\end{enumerate}
If (1) holds, then $Q(y)=4m^{2}$. Hence $L$ represents $\ell$. If (2) holds, then
$$
{\mathbb Z} x+{\mathbb Z} y = \begin{pmatrix} 3&-a\\ -a & 4m^{2}-a^{2}\end{pmatrix}.
$$
Hence we have $Q(ax+y)=a^{2}Q(x)+2aB(x,y)+Q(y)=3a^{2}-2a^{2}+(4m^{2}-a^{2})=4m^{2}$. Therefore $L$ represents $\ell$. Finally, if (3) holds, then $Q(x)=4$, so $Q(mx)=4m^{2}$. Therefore $L$ represents $\ell$. This completes the proof.
\end{proof}
Let $\mathscr{L}$ be the set of all isometry classes of binary ${\mathbb Z}$-lattices and let $\mathscr{L}_{13}$ be the set of all isometry classes of binary ${\mathbb Z}$-lattices whose second successive minimum is greater than or equal to $13$.
We define a map $\phi_{9} : \mathscr{L}_{13} {\ \rightarrow\ } \mathscr{L}$ by
$$
\phi_{9} \left( \begin{pmatrix} a & b \\ b & c \end{pmatrix} \right) = \begin{pmatrix} a & b \\ b & c-9 \end{pmatrix},
$$
where $\begin{pmatrix} a & b \\ b & c \end{pmatrix}$ is a Minkowski-reduced form in the class so that $0 \le 2b \le a \le c$.
Since
$$
d(\phi_{9}(K))=ac-b^2-9a=\left(\frac {ac}4-b^2\right)+\frac{3a}4(c-12)>0,
$$
the above map $\phi_9$ is well-defined.
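The positivity computation above can also be checked by brute force. The following sketch (ours, not part of the paper; a finite search, not a proof) confirms that $d(\phi_{9}(K)) = ac - b^{2} - 9a$ stays positive on a sample box of Minkowski-reduced triples with $c \ge 13$:

```python
# Brute-force check that d(phi_9(K)) = a*c - b^2 - 9a > 0 for all
# Minkowski-reduced binary forms (0 <= 2b <= a <= c) with c >= 13,
# over a finite search box.  Illustration only, not a proof.
def min_shifted_discriminant(bound=60):
    worst = None
    for a in range(1, bound):
        for b in range(a // 2 + 1):          # enforces 2b <= a
            for c in range(max(a, 13), bound):
                d = a * c - b * b - 9 * a
                worst = d if worst is None else min(worst, d)
    return worst

assert min_shifted_discriminant() > 0
print("d(phi_9(K)) > 0 on the whole sample box")
```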
\begin{lem}\label{phi9}
Let $L$ be a ${\mathbb Z}$-lattice and let $K$ be a binary ${\mathbb Z}$-lattice in $\mathscr{L}_{13}$.
If $\phi_{9}^{k}(K)$ is represented by $L$ for some nonnegative integer $k$,
then
$$
K \longrightarrow L \perp 9I_{5}.
$$
Here, $9I_{5}$ is the quinary ${\mathbb Z}$-lattice obtained from the cubic lattice $I_{5}$ by scaling the quadratic space $\mathbb{Q} \otimes I_{5}$ by $9$.
\end{lem}
\begin{proof}
Let $L$ be a ${\mathbb Z}$-lattice and let $K$ be a binary ${\mathbb Z}$-lattice in $\mathscr{L}_{13}$. Let $\begin{pmatrix} a & b \\ b & c \end{pmatrix}$ be the Minkowski-reduced form in the isometry class of $K$.
Note that $9I_{5}$ represents all binary ${\mathbb Z}$-lattices whose scale is contained in $9{\mathbb Z}$. We use an induction on $k$. Suppose that $\phi_{9}(K)$ is represented by $L$.
Then, it is obvious that
$$
K= \begin{pmatrix} a & b \\ b & c \end{pmatrix} = \begin{pmatrix} a & b \\ b & c-9 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 9 \end{pmatrix} {\ \rightarrow\ } L \perp 9I_{5}.
$$
Suppose that the assertion is true for $k$. Assume that $\phi_{9}^{k+1}(K) {\ \rightarrow\ } L$.
Let $K'=\phi_{9}(K)$.
Then $\phi_{9}^{k}(K') = \phi_{9}^{k+1}(K) {\ \rightarrow\ } L$.
It follows from the induction hypothesis that $K' {\ \rightarrow\ } L \perp 9I_{5}$.
This implies that
$$
K' =\begin{pmatrix} a & b \\ b & c-9 \end{pmatrix} = \begin{pmatrix} \alpha_{1} & \beta_{1} \\ \beta_{1} & \gamma_{1} \end{pmatrix} + \begin{pmatrix} \alpha_{2} & \beta_{2} \\ \beta_{2} & \gamma_{2} \end{pmatrix},
$$
where $\begin{pmatrix} \alpha_{1} & \beta_{1} \\ \beta_{1} & \gamma_{1} \end{pmatrix} {\ \rightarrow\ } L$ and $\begin{pmatrix} \alpha_{2} & \beta_{2} \\ \beta_{2} & \gamma_{2} \end{pmatrix} {\ \rightarrow\ } 9I_{5}$.
Since
$
\begin{pmatrix} \alpha_{2} & \beta_{2} \\ \beta_{2} & \gamma_{2}+9 \end{pmatrix} {\ \rightarrow\ } 9I_{5},
$
we have
$$
K = \begin{pmatrix} a & b \\ b & c \end{pmatrix}
=\begin{pmatrix} \alpha_{1} & \beta_{1} \\ \beta_{1} & \gamma_{1} \end{pmatrix} + \begin{pmatrix} \alpha_{2} & \beta_{2} \\ \beta_{2} & \gamma_{2}+9 \end{pmatrix}
{\ \rightarrow\ } L \perp 9I_{5}.
$$
This completes the proof.
\end{proof}
\begin{lem}\label{ord3m}
Any proper sublattice of $\langle 1,1 \rangle$ is represented by both
$$
L_{1}= \langle 1,2,3 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix} \perp 9I_{5} \ \mbox{ and }\ L_{2}=\langle 1,2,6 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix} \perp 9I_{5}.
$$
\end{lem}
\begin{proof}
Since the two proofs are quite similar, we only provide the proof of the first case.
Let $\ell$ be any proper sublattice of $\langle 1,1 \rangle$.
One may directly check that the quinary ${\mathbb Z}$-lattice $\langle 1,2,3 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix}$ represents all binary ${\mathbb Z}$-lattices whose second successive minimum is less than or equal to $12$ except for the following $15$ binary ${\mathbb Z}$-lattices:
\begin{equation}\label{15lattices}
\begin{array}{ll}
&
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 6 \end{pmatrix},
\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix},
\begin{pmatrix} 2 & 1 \\ 1 & 4 \end{pmatrix},\\[0.3cm]
&
\begin{pmatrix} 4 & 0 \\ 0 & 6 \end{pmatrix}, \begin{pmatrix} 4 & 1 \\ 1 & 4 \end{pmatrix}, \begin{pmatrix} 4 & 1 \\ 1 & 13 \end{pmatrix},\begin{pmatrix} 4 & 2 \\ 2 & 7 \end{pmatrix}, \begin{pmatrix} 6 & 0 \\ 0 & 7 \end{pmatrix}, \\[0.3cm]
&
\begin{pmatrix} 6 & 0 \\ 0 & 10 \end{pmatrix}, \begin{pmatrix} 6 & 3 \\ 3 & 7 \end{pmatrix}, \begin{pmatrix} 6 & 3 \\ 3 & 10 \end{pmatrix}, \begin{pmatrix} 7 & 1 \\ 1 & 10 \end{pmatrix}, \begin{pmatrix} 10 & 2 \\ 2 & 10 \end{pmatrix}.
\end{array}
\end{equation}
Note that the above $15$ binary ${\mathbb Z}$-lattices are not proper sublattices of $\langle 1,1 \rangle$.
Hence we may assume that $\mu_2(\ell)\ge 13$. Since $\mu_{2}(\phi_{9}(\ell)) \le \max\{\mu_{1}(\ell), \mu_{2}(\ell)-9 \}$, there exists a positive integer $k$ such that $\phi_{9}^{k-1}(\ell) \in \mathscr{L}_{13}$ and $\mu_{2}(\phi_{9}^{k}(\ell)) \le 12$. When $k=1$, we set $\phi_9^0(\ell)=\ell$. If
$$
\phi_{9}^{k}(\ell) {\ \rightarrow\ } \langle 1,2,3 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix},
$$
then, by Lemma \ref{phi9}, we have
$$
\ell {\ \rightarrow\ } \langle 1,2,3 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix} \perp 9I_{5}.
$$
Hence, we may assume that $\phi_{9}^{k}(\ell)$ is isometric to one of 15 binary ${\mathbb Z}$-lattices listed in \eqref{15lattices}.
Since $d\ell$ is a square of an integer and $d(\phi_{9}(\ell)) = d\ell -9\mu_{1}(\ell)$, we see that $\text{ord}_{3}(d(\phi_{9}^{k}(\ell)))$ cannot be one. Hence $\phi_{9}^{k}(\ell)$ is isometric to one of the following ${\mathbb Z}$-lattices:
$$
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} 2 & 1 \\ 1 & 4 \end{pmatrix}.
$$
Since $\phi_{9}^{k-1}(\ell) \in \mathscr{L}_{13}$ and
$$
\begin{array}{ll}
&
\phi_{9}^{-1}\left(\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right)= \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 10 \end{pmatrix}, \begin{pmatrix} 2 & 1 \\ 1 & 10 \end{pmatrix}, \begin{pmatrix} 5 & 2 \\ 2 & 10 \end{pmatrix}, \begin{pmatrix} 10 & 3 \\ 3 & 10 \end{pmatrix}\right\},\\[0.3cm]
&
\phi_{9}^{-1}\left(\begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix}\right)= \left\{ \begin{pmatrix} 2 & 1 \\ 1 & 12 \end{pmatrix}, \begin{pmatrix} 3 & 1 \\ 1 & 11 \end{pmatrix}, \begin{pmatrix} 7 & 3 \\ 3 & 11 \end{pmatrix}, \begin{pmatrix} 10 & 5 \\ 5 & 12 \end{pmatrix} \right\},\\[0.3cm]
&
\phi_{9}^{-1}\left(\begin{pmatrix} 2 & 1 \\ 1 & 4 \end{pmatrix}\right)= \left\{ \begin{pmatrix} 2 & 1 \\ 1 & 13 \end{pmatrix}, \begin{pmatrix} 4 & 1 \\ 1 & 11 \end{pmatrix}, \begin{pmatrix} 8 & 3 \\ 3 & 11 \end{pmatrix}\right\},
\end{array}
$$
we have
$$
\phi_{9}^{k}(\ell)\simeq \begin{pmatrix} 2 & 1 \\ 1 & 4 \end{pmatrix}\quad \text{and}\quad
\phi_{9}^{k-1}(\ell)\simeq \begin{pmatrix} 2 & 1 \\ 1 & 13 \end{pmatrix}.
$$
One may check that
$$
\phi_{9}^{k-1}(\ell)\simeq\begin{pmatrix} 2 & 1 \\ 1 & 13 \end{pmatrix} {\ \rightarrow\ } \langle 1,2,3 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix}.
$$
Hence, by Lemma \ref{phi9}, we have
$$
\ell \longrightarrow \langle 1,2,3 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix} \perp 9I_{5}.
$$
This completes the proof.
\end{proof}
\begin{prop}
If $m$ is a positive integer with $\text{ord}_{3}(m)=1$, then $m$ is not a recoverable number.
\end{prop}
\begin{proof}
Let $m$ be a positive integer with $\text{ord}_{3}(m)=1$. Then, we write $m=3m'$ with $m'\equiv 1$ or $2 \Mod 3$. We define
$$
L_m =
\begin{cases}
\langle 1,2,6,4m-1 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix} \perp 9I_{5}\perp \MM{4m+1} & \text{if $m'\equiv 1 \Mod 3$},\\[0.5cm]
\langle 1,2,3,4m-1 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix} \perp 9I_{5}\perp \MM{4m+1} & \text{if $m'\equiv 2 \Mod 3$}.
\end{cases}
$$
Clearly, $L_m$ does not represent $\langle 1, 4m \rangle$. As in the proof of Proposition \ref{2m4}, it is enough to show that $L_m$ represents every proper sublattice of $\langle 1,4m \rangle$, which is of the form $\begin{pmatrix} a^{2} & ab \\ ab & b^{2}+4m \end{pmatrix}$ for some integers $a$ and $b$ with $a \ge 2$ and $0\le 2b \le a$. By Lemma \ref{ord3m}, we have
$$
\begin{pmatrix} a^{2} & ab \\ ab & b^{2}+1 \end{pmatrix}
{\ \rightarrow\ } \langle 1,2,3 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix} \perp 9I_{5} \
\text{and} \ \langle 1,2,6 \rangle \perp \begin{pmatrix} 2 & 1 \\ 1 & 5 \end{pmatrix} \perp 9I_{5},
$$
which implies that
$$
\begin{pmatrix} a^{2} & ab \\ ab & b^{2}+4m \end{pmatrix} {\ \rightarrow\ } L_m.
$$
This completes the proof.
\end{proof}
\begin{prop}\label{12lattices}
Any positive integer $m$ is a recoverable number if $4m$ is represented by all of the following binary ${\mathbb Z}$-lattices
$$
\begin{array}{lll}
&\begin{pmatrix} 2 & 1 \\ 1 & 4 \end{pmatrix},
\begin{pmatrix} 3 & 1 \\ 1 & 4 \end{pmatrix},
\begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix},
\begin{pmatrix} 4 & 0 \\ 0 & 5 \end{pmatrix},
\begin{pmatrix} 4 & 1 \\ 1 & 6 \end{pmatrix},
\begin{pmatrix} 4 & 1 \\ 1 & 7 \end{pmatrix},\\[0.3cm]
&\begin{pmatrix} 4 & 0 \\ 0 & 8 \end{pmatrix},
\begin{pmatrix} 4 & 1 \\ 1 & 8 \end{pmatrix},
\begin{pmatrix} 4 & 2 \\ 2 & 8 \end{pmatrix},
\begin{pmatrix} 4 & 0 \\ 0 & 9 \end{pmatrix},
\begin{pmatrix} 4 & 1 \\ 1 & 9 \end{pmatrix}, \quad \text{and}\quad
\begin{pmatrix} 4 & 2 \\ 2 & 9 \end{pmatrix}.
\end{array}
$$
\end{prop}
\begin{proof}
Let $L$ be any ${\mathbb Z}$-lattice that represents all proper sublattices of $\langle 1,4m \rangle$.
Since $\langle 1, 16m \rangle {\ \rightarrow\ } L$, we have $L={\mathbb Z} e_1 \perp L'= \langle 1 \rangle \perp L'$ for some ${\mathbb Z}$-lattice $L'$. To prove the proposition, it suffices to show that $\langle 1,4m\rangle$ is represented by $L$, that is, $4m$ is represented by $L'$.
Since $\langle 4, 4m \rangle {\ \rightarrow\ } L$, one of the following holds:
\begin{enumerate}
\item there is a vector $y \in L'$ such that ${\mathbb Z} (2e_{1}) + {\mathbb Z} y = \langle 4, 4m \rangle$;
\item there are vectors $x,y \in L'$ and an integer $a$ such that
$$
{\mathbb Z} (e_{1}+x) + {\mathbb Z} (ae_{1}+y) = \langle 4, 4m \rangle;
$$
\item there are vectors $x,y \in L'$ and an integer $a$ such that
$$
{\mathbb Z} x + {\mathbb Z} (ae_{1}+y) = \langle 4, 4m \rangle.
$$
\end{enumerate}
If (1) or (2) holds, then $4m$ is represented by $L'$. Therefore we may assume that (3) holds. Hence $4$ is represented by $L'$.
Now, note that $\langle 9,4m \rangle$ is also represented by $L$. Similarly to the above, one may easily show that $4m$ is represented by $L'$ or
one of the binary ${\mathbb Z}$-lattices $\begin{pmatrix} 8 & -a \\ -a & 4m-a^2 \end{pmatrix}$ and $\langle 9, 4m-a^2\rangle$ is represented by $L'$. Hence $8$ or $9$ is represented by $L'$.
Suppose that $L'$ represents 4 and 8. Then $L'$ represents at least one of the following binary ${\mathbb Z}$-lattices:
$$
\begin{pmatrix} 4 & 0 \\ 0 & 8 \end{pmatrix},
\begin{pmatrix} 4 & 1 \\ 1 & 8 \end{pmatrix},
\begin{pmatrix} 4 & 2 \\ 2 & 8 \end{pmatrix},
\begin{pmatrix} 4 & 3 \\ 3 & 8 \end{pmatrix},
\begin{pmatrix} 4 & 4 \\ 4 & 8 \end{pmatrix}, \quad \text{and}\quad
\begin{pmatrix} 4 & 5 \\ 5 & 8 \end{pmatrix}.
$$
Here, we have
$$
\begin{pmatrix} 4 & 3 \\ 3 & 8 \end{pmatrix} \simeq \begin{pmatrix} 4 & 1 \\ 1 & 6 \end{pmatrix},
\begin{pmatrix} 4 & 4 \\ 4 & 8 \end{pmatrix} \simeq \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix},\quad \text{and}\quad
\begin{pmatrix} 4 & 5 \\ 5 & 8 \end{pmatrix} \simeq \begin{pmatrix} 2 & 1 \\ 1 & 4 \end{pmatrix}.
$$
Finally, suppose that $L'$ represents 4 and 9.
Then $L'$ represents at least one of the following binary ${\mathbb Z}$-lattices:
$$
\begin{pmatrix} 4 & 0 \\ 0 & 9 \end{pmatrix},
\begin{pmatrix} 4 & 1 \\ 1 & 9 \end{pmatrix},
\begin{pmatrix} 4 & 2 \\ 2 & 9 \end{pmatrix},
\begin{pmatrix} 4 & 3 \\ 3 & 9 \end{pmatrix},
\begin{pmatrix} 4 & 4 \\ 4 & 9 \end{pmatrix}, \quad\text{and}\quad
\begin{pmatrix} 4 & 5 \\ 5 & 9 \end{pmatrix}.
$$
Here, we have
$$
\begin{pmatrix} 4 & 3 \\ 3 & 9 \end{pmatrix} \simeq \begin{pmatrix} 4 & 1 \\ 1 & 7 \end{pmatrix},
\begin{pmatrix} 4 & 4 \\ 4 & 9 \end{pmatrix} \simeq \begin{pmatrix} 4 & 0 \\ 0 & 5 \end{pmatrix}, \quad\text{and}\quad
\begin{pmatrix} 4 & 5 \\ 5 & 9 \end{pmatrix} \simeq \begin{pmatrix} 3 & 1 \\ 1 & 4 \end{pmatrix}.
$$
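The six isometries displayed in this proof can be verified mechanically: two Gram matrices $G$ and $H$ are isometric iff $U^{T}GU = H$ for some $U \in \mathrm{GL}_2({\mathbb Z})$. The following brute-force search (our own sketch; the small entry bound is a heuristic that happens to suffice for these six pairs) finds such a $U$ in each case:

```python
from itertools import product

def isometric(G, H, bound=3):
    """Search for U = [[p, q], [r, s]] in GL_2(Z) with |entries| <= bound
    such that U^T G U = H (equivalence of binary quadratic forms)."""
    g11, g12, g22 = G[0][0], G[0][1], G[1][1]
    for p, q, r, s in product(range(-bound, bound + 1), repeat=4):
        if p * s - q * r not in (1, -1):     # U must be unimodular
            continue
        a = p * p * g11 + 2 * p * r * g12 + r * r * g22
        b = p * q * g11 + (p * s + q * r) * g12 + r * s * g22
        c = q * q * g11 + 2 * q * s * g12 + s * s * g22
        if (a, b, c) == (H[0][0], H[0][1], H[1][1]):
            return True
    return False

pairs = [(((4, 3), (3, 8)), ((4, 1), (1, 6))),
         (((4, 4), (4, 8)), ((4, 0), (0, 4))),
         (((4, 5), (5, 8)), ((2, 1), (1, 4))),
         (((4, 3), (3, 9)), ((4, 1), (1, 7))),
         (((4, 4), (4, 9)), ((4, 0), (0, 5))),
         (((4, 5), (5, 9)), ((3, 1), (1, 4)))]
assert all(isometric(G, H) for G, H in pairs)
```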
Therefore, if $4m$ is represented by all of the above $12$ binary ${\mathbb Z}$-lattices, then $4m$ is represented by $L'$. This completes the proof.
\end{proof}
\begin{cor}
Let $m \equiv 1 \pmod 8$ be a prime. If $m$ is a quadratic residue modulo $q$ for any prime $q\in\{3,5,7,11,23,31\}$, then $m$ is a recoverable number. In particular, any prime $m \equiv 5569 \pmod {3\cdot 5\cdot 7\cdot 11\cdot 23\cdot 31}$ is a recoverable number. Therefore there are infinitely many non-square recoverable numbers.
\end{cor}
\begin{proof}
Note that $4m$ is represented by all of $12$ binary ${\mathbb Z}$-lattices in Proposition \ref{12lattices} if and only if $m$ is represented by all of the following binary ${\mathbb Z}$-lattices
\begin{equation}\label{9lattices}
\begin{array}{lll}
&\hspace{0.7cm}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix},
\begin{pmatrix} 1 & \frac12 \\ \frac12 & 2 \end{pmatrix},
\begin{pmatrix} 1 & \frac12 \\ \frac12 & 3 \end{pmatrix},\\[0.3cm]
&
\begin{pmatrix} 1 & 0 \\ 0 & 5 \end{pmatrix},
\begin{pmatrix} 1 & \frac12 \\ \frac12 & 7 \end{pmatrix},
\begin{pmatrix} 1 & 0 \\ 0 & 8 \end{pmatrix},
\begin{pmatrix} 1 & \frac12 \\ \frac12 & 9 \end{pmatrix}, \quad\text{and}\quad
\begin{pmatrix} 1 & 0 \\ 0 & 9 \end{pmatrix},
\end{array}
\end{equation}
and furthermore, $m$ is represented by both
\begin{equation}\label{2genera}
\text{gen}\left(
\begin{pmatrix} 1 & \frac12 \\ \frac12 & 6 \end{pmatrix}\right) \quad \text{and} \quad \text{gen}\left(\begin{pmatrix} 1 & \frac12 \\ \frac12 & 8 \end{pmatrix}\right).
\end{equation}
As a sample, assume that $4m$ is represented by $\begin{pmatrix} 4 & 1 \\ 1 & 6 \end{pmatrix}$. Then $2m$ is represented by $\begin{pmatrix} 2 & \frac12 \\ \frac12 & 3 \end{pmatrix}$. Assume that $2x^2+xy+3y^2=2m$ for some integers $x$ and $y$. Then either $x+y$ or $y$ is even. If $x+y=2z$ for some integer $z$, then $2x^2-5xz+6z^2=m$. Hence $m$ is represented by $\begin{pmatrix} 2 & \frac12 \\ \frac12 & 3 \end{pmatrix}$. If $y=2z$, then $x^2+xz+6z^2=m$. Hence $m$ is represented by $\begin{pmatrix} 1 & \frac12 \\ \frac12 & 6 \end{pmatrix}$.
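The two substitutions used in this sample computation are polynomial identities, which can be sanity-checked numerically (our own check, over an arbitrary grid of integers):

```python
# If 2x^2 + xy + 3y^2 = 2m, the text substitutes x + y = 2z (so y = 2z - x)
# to get m = 2x^2 - 5xz + 6z^2, and y = 2z to get m = x^2 + xz + 6z^2.
for x in range(-25, 26):
    for z in range(-25, 26):
        y = 2 * z - x                        # case: x + y even
        assert 2*x*x + x*y + 3*y*y == 2 * (2*x*x - 5*x*z + 6*z*z)
        y = 2 * z                            # case: y even
        assert 2*x*x + x*y + 3*y*y == 2 * (x*x + x*z + 6*z*z)
print("both substitution identities hold on the grid")
```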
Note that
$$
\text{gen}\left(\begin{pmatrix} 1 & \frac12 \\ \frac12 & 6 \end{pmatrix}\right)\big/\sim=\left\{ \begin{pmatrix} 1 & \frac12 \\ \frac12 & 6 \end{pmatrix}, \begin{pmatrix} 2 & \frac12 \\ \frac12 & 3 \end{pmatrix} \right\}.
$$
Conversely, assume that $m$ is represented by the genus of $\begin{pmatrix} 1 & \frac12 \\ \frac12 & 6 \end{pmatrix}$. If $m$ is represented by $\begin{pmatrix} 1 & \frac12 \\ \frac12 & 6 \end{pmatrix}$, then we have
$$
\langle 4m \rangle {\ \rightarrow\ } \begin{pmatrix} 4 &2 \\2 & 24 \end{pmatrix} {\ \rightarrow\ } \begin{pmatrix} 4 &1 \\1 & 6 \end{pmatrix}.
$$
If $m$ is represented by $\begin{pmatrix} 2 & \frac12 \\ \frac12 & 3 \end{pmatrix}$, then we have
$$
\langle 4m \rangle {\ \rightarrow\ } \begin{pmatrix} 8 & 2 \\ 2 & 12 \end{pmatrix} \simeq \begin{pmatrix} 8 & 6 \\ 6 & 16 \end{pmatrix} {\ \rightarrow\ } \begin{pmatrix} 8 & 3 \\ 3 & 4 \end{pmatrix} \simeq \begin{pmatrix} 4 &1 \\1 & 6 \end{pmatrix}.
$$
Hence $4m$ is represented by \!$\begin{pmatrix} 4 & 1 \\ 1 & 6 \end{pmatrix}$ if and only if $m$ is represented by \!$\text{gen}\left(\!\begin{pmatrix} 1 & \frac12 \\ \frac12 & 6 \end{pmatrix}\!\right)$.
Note that the $9$ binary ${\mathbb Z}$-lattices in \eqref{9lattices} each have class number $1$.
Therefore, if $m\equiv 1 \pmod 8$ is prime and $m$ is a quadratic residue modulo $q$ for every prime $q\in \{3,5,7,11,23,31\}$, then one may easily check that $m$ is represented by the $9$ binary ${\mathbb Z}$-lattices in \eqref{9lattices} and by both genera in \eqref{2genera}.
This implies that $m$ is a recoverable number by Proposition \ref{12lattices}. This completes the proof.
\end{proof}
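The hypotheses for the residue class of $5569$ in the corollary can be verified directly with Euler's criterion. The following check (ours, not from the paper) confirms that $5569$ is a prime congruent to $1$ modulo $8$ and a quadratic residue modulo each of the listed primes:

```python
def is_prime(n):
    """Trial division; adequate for small n such as 5569."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_quadratic_residue(m, q):
    # Euler's criterion: m (coprime to q) is a QR mod an odd prime q
    # iff m^((q-1)/2) = 1 (mod q).
    return pow(m % q, (q - 1) // 2, q) == 1

m = 5569
assert is_prime(m) and m % 8 == 1
assert all(is_quadratic_residue(m, q) for q in (3, 5, 7, 11, 23, 31))
print(m, "satisfies the hypotheses of the corollary")
```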
\begin{rmk}{\rm We checked that no non-square integer less than or equal to $35$ is a recoverable number. }
\end{rmk}
| {
"timestamp": "2020-09-10T02:06:33",
"yymm": "2009",
"arxiv_id": "2009.04050",
"language": "en",
"url": "https://arxiv.org/abs/2009.04050",
"abstract": "For a set $S$ of (positive definite and integral) quadratic forms with bounded rank, a quadratic form $f$ is called $S$-universal if it represents all quadratic forms in $S$. A subset $S_0$ of $S$ is called an $S$-universality criterion set if any $S_0$-universal quadratic form is $S$-universal. We say $S_0$ is minimal if there does not exist a proper subset of $S_0$ that is an $S$-universality criterion set. In this article, we study various properties of minimal universality criterion sets. In particular, we show that for `most' binary quadratic forms $f$, minimal $S$-universality criterion sets are unique in the case when $S$ is the set of all subforms of the binary form $f$.",
"subjects": "Number Theory (math.NT)",
"title": "Minimal universality criterion sets on the representations of quadratic forms",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620070720565,
"lm_q2_score": 0.8080672204860316,
"lm_q1q2_score": 0.8016527886045105
} |
https://arxiv.org/abs/2101.06818 | Approximating monomials using Chebyshev polynomials | This paper considers the approximation of a monomial $x^n$ over the interval $[-1,1]$ by a lower-degree polynomial. This polynomial approximation can be easily computed analytically and is obtained by truncating the analytical Chebyshev series expansion of $x^n$. The error in the polynomial approximation in the supremum norm has an exact expression with an interesting probabilistic interpretation. We use this interpretation along with concentration inequalities to develop a useful upper bound for the error. | \section{Motivation and Introduction}\label{sec:intro}
We are interested in approximating the monomial $x^n$ by a polynomial of degree $0 \leq k < n$ over the interval $[-1,1]$. The monomials $1,x,x^2,\dots$ form a basis for $C[-1,1]$, so it seems unlikely that we can represent a monomial in terms of lower-degree polynomials. In Figure~\ref{fig:monomialbasis}, we plot a few functions from the monomial basis over $[0,1]$; the basis functions look increasingly alike as we take higher and higher powers, i.e., they appear to ``lose independence.'' Numerical analysts often avoid the monomial basis in polynomial interpolation since it results in ill-conditioned Vandermonde matrices, leading to poor numerical performance in finite precision arithmetic. This loss of independence suggests that it is reasonable to approximate the monomial $x^n$ as a linear combination of lower-order monomials, i.e., by a lower-order polynomial approximation. The natural question to ask, therefore, is: how small can $k$ be so that a well-chosen polynomial of degree $k$ accurately approximates $x^n$?
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.4]{monomial_basis}
\caption{Visualization of a few monomials in the interval $[0,1]$.}
\label{fig:monomialbasis}
\end{figure}
The surprising answer to this question is that we can approximate the monomial $x^n$ over $[-1,1]$ by a polynomial of small degree, which we will make precise. Let $\|f\|_\infty = \max_{x \in [-1,1]} | f(x)|$ denote the supremum norm on $C[-1,1]$ and let $\pi_k^*(\cdot)$ be the best polynomial approximation to $x^n$ in this norm; that is
\[ E_{n,k} := \min_{\pi \in \mathcal{P}_k} \|x^n- \pi(x) \|_\infty= \|x^n - \pi_k^*(x)\|_\infty, \]
where $\mathcal{P}_k$ is the vector space of polynomials with real coefficients of degree at most $k$. The minimizer $\pi_k^*(\cdot)$ exists and is unique~\cite[Chapter 10]{trefethen2013approximation}, but does not have a closed-form expression. Newman and Rivlin~\cite[Theorem 2]{newman1976approximation} showed that\footnote{We briefly mention that the notation in our manuscript differs from~\cite{newman1976approximation} in that we reverse the roles of $n$ and $k$.}
\begin{equation}\label{eqn:newmanrivlin} \frac{p_{n,k}}{4e} \leq \|x^n - \pi_k^*(x)\|_\infty \leq p_{n,k}, \end{equation}
where the term $p_{n,k}$ is given by the formula \[ p_{n,k} = \frac{1}{2^{n-1}} \sum_{j = \lfloor (n+k)/2 \rfloor +1}^n \binom{n}{j}. \]
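Since $p_{n,k}$ is an explicit finite sum of binomial coefficients, it is easy to evaluate. A minimal sketch (ours; `math.comb` requires Python 3.8+), which also checks the decay bound $p_{n,k} \leq 2\exp(-k^2/(2n))$ derived in Section~\ref{sec:prob}:

```python
from math import comb, exp

def p(n, k):
    """p_{n,k} = 2^{1-n} * sum_{j=floor((n+k)/2)+1}^{n} binom(n, j)."""
    return sum(comb(n, j) for j in range((n + k) // 2 + 1, n + 1)) / 2 ** (n - 1)

# The quantity decays rapidly in k and respects the sub-Gaussian bound:
n = 75
for k in (5, 15, 25):
    assert 0 < p(n, k) <= 2 * exp(-k * k / (2 * n))
assert p(n, 5) > p(n, 15) > p(n, 25)
```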
Since $p_{n,k}$ involves the sum of binomial coefficients, it has a probabilistic interpretation which we explore in Section~\ref{sec:prob}.
To see why a small $k$ is sufficient, consider the upper bound $p_{n,k}$. In Section~\ref{sec:prob} we use the probabilistic interpretation to obtain the following bound $p_{n,k} \leq 2\exp\left(-{k^2}/{2n}\right)$. Suppose we are given a user-defined tolerance $\epsilon > 0$. To ensure
\[ \|x^n - \pi_k^*(x)\|_\infty \leq \epsilon,\]
we need to choose $k \geq \sqrt{2n\log(2/\epsilon)}$. The accuracy of the polynomial approximation is visualized in Figure~\ref{fig:approx}, where in the left panel we plot the monomial $x^{n}$ for $n=75$ and the best polynomial approximation $\pi_{k}^*$ for $k=5,15,25$. The polynomial $\pi_k^*$ is computed using the Remez algorithm, implemented in chebfun~\cite{Driscoll2014}. We see that for $k=25$, the polynomial approximation looks very accurate. In the right panel, we display $p_{n,k}$, which is an upper bound on the error of the best polynomial approximation, as well as the upper bound for $p_{n,k}$. We see that $p_{n,k}$ and its upper bound both decay sharply with increasing $k$. Numerical evidence in~\cite{nakatsukasa2018rational} further confirms this analysis; the authors show that the error $E_{n,k}$ behaves approximately like $\frac12 \text{erfc}(k/\sqrt{n})$, where erfc is the complementary error function.
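The degree estimate above is straightforward to apply; a small helper (our own sketch, using the bound $p_{n,k} \leq 2\exp(-k^2/(2n))$):

```python
import math

def degree_needed(n, eps):
    """Smallest integer k with k >= sqrt(2 n log(2/eps)), which guarantees
    2*exp(-k^2/(2n)) <= eps and hence an eps-accurate approximation of x^n."""
    return math.ceil(math.sqrt(2 * n * math.log(2 / eps)))

# x^75 can be replaced by a polynomial of modest degree:
k = degree_needed(75, 1e-3)
assert 2 * math.exp(-k * k / (2 * 75)) <= 1e-3
print("degree needed for n=75, eps=1e-3:", k)   # 34
```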
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{monomial_approx.eps}
\caption{(left) Approximation of the monomial $x^{n}$ for $n=75$ by $\pi_k^*$, (right) $p_{n,k}$ and its upper bound $2\exp(-k^2/2n)$. The visualization is restricted to the interval $[0,1]$.}
\label{fig:approx}
\end{figure}
Polynomial and rational approximations to the monomial have received considerable attention in the approximation theory community, and surveys of various results can be found in~\cite{reddy1987approximations,nakatsukasa2018rational}. Polynomial approximations to high-order monomials have many applications in numerical analysis. This key insight was exploited by Cornelius Lanczos~\cite{lanczos1988applied} in his ``$\tau$-method'' for the numerical solution of differential equations. For a stimulating discussion on this topic, please see~\cite{nakatsukasa2018rational}. In numerical linear algebra, this has been exploited to efficiently compute matrix powers and the Schatten $p$-norm of a matrix~\cite{avron2011randomized,dudley2020monte}.
In this short note, we show how to construct a polynomial approximation $x^n \approx \phi_k(x)$ using a truncated Chebyshev polynomial expansion. The error in the truncated representation equals the sum of the discarded coefficients and is precisely $p_{n,k}$. The polynomial $\phi_k$ and the resulting error can both be computed analytically, which makes the approximation of great practical use. We briefly review Chebyshev polynomials in Section~\ref{sec:cheby} and state and prove the main result in Section~\ref{sec:main}. In Section~\ref{sec:prob}, we explore probabilistic interpretations of $p_{n,k}$ and obtain bounds for partial sums of binomial coefficients.
\section{Chebyshev polynomials}\label{sec:cheby}
The Chebyshev polynomials of the first kind $T_n(x)$ for $n=0,1,2,\dots$ can be represented as
\[T_n(x) = \cos(n\arccos{x}) \qquad x \in [-1,1]. \]
Starting with $T_0(x) = 1$, $T_1(x) = x$, the Chebyshev polynomials satisfy a recurrence relationship of the form $T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x)$ for $n\geq 1$. The Chebyshev polynomials are orthogonal with respect to the weighted inner product \[\langle u,v \rangle = \int_{-1}^1 w(x) u(x) v(x) dx \]
where the weight function takes the form $w(x) = (1-x^2)^{-1/2}$. Any function $f \in C[-1,1]$ that is Lipschitz continuous can be represented in terms of a Chebyshev polynomial expansion of the form
\[ f(x) = \frac12 c_0 + \sum_{j=1}^\infty c_jT_j(x) \qquad x \in [-1,1], \]
where the coefficients $c_j$ are obtained as $c_j = \frac{2}{{\pi}}\langle f(x), T_j(x)\rangle$ and the series is uniformly convergent. The monomial $x^n$ is rather special since it has the following exact representation in terms of the Chebyshev polynomials~\cite[Section 4]{cody1970survey} \begin{equation}\label{eqn:xncheby} x^n = \sum_{j=0}^{n}{}^{'} c_jT_j(x),\end{equation}
where ${}^{'}$ means the summand corresponding to $j=0$ is halved (if it appears) and the coefficients $c_j$ for $j=0,\dots,n$ are
\begin{equation}\label{eqn:cj} c_j = \left\{ \begin{array}{ll} 2^{1-n}\binom{n}{(n-j)/2} & n-j \text{ even} \\ 0 & \text{otherwise}.\end{array} \right.\end{equation}
Equation~\eqref{eqn:xncheby} takes a more familiar form when we consider the trigonometric perspective of Chebyshev polynomials. For example, the well-known trigonometric identity $\cos(3\theta) = 4\cos^3 \theta - 3\cos \theta$ can be rearranged as \[ \cos^3 \theta = \frac{3}{4}\cos\theta + \frac{1}{4} \cos(3\theta) = \frac{1}{2^2} \left( \binom{3}{1} \cos\theta + \binom{3}{0} \cos(3\theta)\right). \]
With $x=\cos\theta$, we get $x^3 = 2^{-2} \binom{3}{1} T_1(x) + 2^{-2} \binom{3}{0} T_3(x)$. For completeness, we provide a derivation of~\eqref{eqn:xncheby} in Appendix~\ref{app:der}. It is important to note here that the series in~\eqref{eqn:xncheby} is finite, but can be truncated to obtain an accurate approximation; see Section~\ref{sec:main}.
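The exactness of~\eqref{eqn:xncheby} with the coefficients~\eqref{eqn:cj} is easy to confirm numerically. The sketch below (assuming NumPy; the helper \texttt{cheb\_coeffs} is our own) builds the coefficient vector, checks the worked example for $x^3$, and reconstructs $x^n$ on a grid.

```python
import numpy as np
from math import comb

def cheb_coeffs(n):
    """Coefficients a_j with x^n = sum_j a_j T_j(x); the j = 0 term of the
    primed sum in the text is halved."""
    a = np.zeros(n + 1)
    for j in range(n % 2, n + 1, 2):   # c_j is nonzero only when n - j is even
        a[j] = comb(n, (n - j) // 2) / 2 ** (n - 1)
    a[0] /= 2
    return a

# the worked example: x^3 = (3/4) T_1(x) + (1/4) T_3(x)
assert np.allclose(cheb_coeffs(3), [0.0, 0.75, 0.0, 0.25])

# the expansion is exact (up to rounding) for every n, not merely approximate
x = np.linspace(-1, 1, 201)
for n in (4, 9, 20):
    recon = np.polynomial.chebyshev.chebval(x, cheb_coeffs(n))
    assert np.max(np.abs(recon - x**n)) < 1e-12
```

Here \texttt{chebval} evaluates $\sum_j a_j T_j(x)$ by Clenshaw recursion, so its coefficient convention matches the (unprimed) expansion directly.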
Chebyshev polynomials have many applications in approximation theory and numerical analysis~\cite{trefethen2013approximation} but we limit ourselves to two such examples here. First, if the function is differentiable $r$ times or analytic, the Chebyshev coefficients exhibit decay (algebraic or geometric respectively). Therefore, the Chebyshev series can be truncated to obtain an polynomial approximation of the function and the accuracy of the approximation depends on the rate of decay of the coefficients. Another application of Chebyshev polynomials is in the theory and practice of polynomial interpolation. The polynomial ${q}_{n-1}^*(x) = x^{n} - 2^{1-n}T_{n}(x)$ solves the minimax problem
\begin{equation}\label{eqn:minmax} \min_{q \in \mathcal{P}_{n-1}} \|x^{n} - q(x)\|_\infty = 2^{1-n}. \end{equation}
Based on the minimax characterization, to interpolate a function over $[-1,1]$ by a polynomial of degree $n-1$, the function to be interpolated should be evaluated at the roots of the Chebyshev polynomial $T_{n}$ given by the points $x_j = \cos\left(\frac{2j+1}{2n }\pi\right)$ for $j=0,\dots,n-1$.
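As a concrete check of the last statement, the following short sketch (assuming NumPy) verifies that the points $x_j$ are precisely the roots of $T_n$:

```python
import numpy as np

n = 12
j = np.arange(n)
nodes = np.cos((2 * j + 1) * np.pi / (2 * n))   # the interpolation points x_j
# a unit coefficient on index n selects T_n in chebval
Tn_at_nodes = np.polynomial.chebyshev.chebval(nodes, [0.0] * n + [1.0])
assert np.max(np.abs(Tn_at_nodes)) < 1e-12
```

Indeed $T_n(x_j) = \cos\bigl(n \cdot \tfrac{2j+1}{2n}\pi\bigr) = \cos\bigl(\tfrac{2j+1}{2}\pi\bigr) = 0$.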
\section{Main result}\label{sec:main}
We construct the polynomial approximation $ x^n \approx \phi_k(x)$ by truncating the Chebyshev polynomial expansion in~\eqref{eqn:xncheby} beyond the term $j=k$. That is \[\phi_k(x) := \sum_{j=0}^k{}^{'}c_jT_j(x).\]
Our main result is the following theorem, which quantifies the error in the polynomial approximation. The proof of this theorem is based on the expression in~\eqref{eqn:xncheby}. We believe this result is new.
\begin{theorem}\label{thm:main}
The error in the polynomial approximation $\phi_k(x)$ satisfies
\[ \|x^n - \phi_k(x)\|_\infty = p_{n,k}. \]
\end{theorem}
\begin{proof}
From~\eqref{eqn:xncheby}, $x^{n} - \phi_k(x) = \sum_{j=k+1}^nc_jT_j(x)$. Using the triangle inequality, we find that $\|x^n-\phi_k(x)\|_\infty \leq \sum_{j=k+1}^n c_j$ since the coefficients are nonnegative and the Chebyshev polynomials are bounded as $|T_j(x)| \leq 1$. Substituting the coefficients $c_j$ from~\eqref{eqn:cj}, we get
\begin{equation}
\label{eqn:inter}
\|x^n-\phi_k(x)\|_\infty \leq \frac{1}{2^{n-1}}\sum_{\substack{j=k+1 \\ n-j \text{ even}}}^n \binom{n }{(n-j)/2}.
\end{equation}
Using the properties of the binomial coefficients, the summation simplifies as
\[
\sum_{\substack{j=k+1 \\ n-j \text{ even}}}^n \binom{n }{(n-j)/2} = \sum_{\substack{j=k+1 \\ n+j \text{ even}}}^n \binom{n }{(n+j)/2} =\sum_{j=\lfloor (n+k)/2\rfloor+1}^n \binom{n }{j}. \]
Plug this identity into~\eqref{eqn:inter} to get $\|x^n - \phi_k(x)\|_\infty \leq p_{n,k}$. The bound is clearly achieved at $x= 1$, where all the Chebyshev polynomials take the value $1$.
\end{proof}
This theorem shows that the polynomial approximation $\phi_k$ is nearly optimal, and the error due to this approximation is $p_{n,k}$. In the special case $k=n-1$, it is in fact optimal: it is easy to see that $x^n - \phi_{n-1}(x) = 2^{1-n}T_n(x)$, so $\phi_{n-1}$ coincides with the best polynomial approximation $q^*_{n-1}$ in~\eqref{eqn:minmax}. For $k < n-1$, from~\eqref{eqn:newmanrivlin} and Theorem~\ref{thm:main},
\[ \frac{ \|x^n-\phi_k(x)\|_\infty}{4e} \leq \|x^n-\pi_k^*(x)\|_\infty \leq \|x^n-\phi_k(x)\|_\infty, \]
so that the error in the Chebyshev polynomial approximation is suboptimal by at most the factor $4e \approx 10.87$. Therefore, by using $\phi_k$ we lose only one significant digit of accuracy compared to $\pi^*_k$.
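Theorem~\ref{thm:main} is straightforward to verify numerically. The sketch below (assuming NumPy; parameter choices are ours) builds $\phi_k$ by truncating the Chebyshev coefficients of $x^n$ and confirms that the measured sup-norm error on a fine grid matches $p_{n,k}$.

```python
import numpy as np
from math import comb

n, k = 75, 25

# Chebyshev coefficients of x^n, with the j = 0 term of the primed sum halved
a = np.zeros(n + 1)
for j in range(n % 2, n + 1, 2):
    a[j] = comb(n, (n - j) // 2) / 2 ** (n - 1)
a[0] /= 2

phi = a.copy()
phi[k + 1:] = 0.0                    # phi_k: drop all terms beyond j = k

x = np.linspace(-1, 1, 4001)         # grid includes x = 1, where the max occurs
err = np.max(np.abs(np.polynomial.chebyshev.chebval(x, phi) - x**n))

# Theorem: the sup-norm error equals p_{n,k}, attained at x = 1
p_nk = sum(comb(n, j) for j in range((n + k) // 2 + 1, n + 1)) / 2 ** (n - 1)
assert abs(err - p_nk) < 1e-12
```

The agreement is exact up to floating-point rounding, because the error function $\sum_{j>k} c_j T_j(x)$ attains the value $p_{n,k}$ at the grid point $x=1$.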
\section{A probabilistic digression}\label{sec:prob}
In Section~\ref{sec:intro}, we saw that the error in the monomial approximation depends on $p_{n,k}$. Since $p_{n,k}$ depends on the sum of binomial coefficients, it has a probabilistic interpretation. Newman and Rivlin~\cite{newman1976approximation} observed that if a fair coin is tossed $n$ times, $p_{n,k}$ is the probability that the magnitude of the difference between the number of heads and the number of tails exceeds $k$. They used this insight along with the de Moivre-Laplace theorem~\cite[Section 1.3]{vershynin2018high} (which is a special case of the Central Limit Theorem) to obtain the approximation $p_{n,k} \approx 2 \text{erfc}(k/\sqrt{n})$.
To convert this into a rigorous inequality for $p_{n,k}$ we use a different tool from probability, namely, concentration inequalities. These inequalities quantify how much a random variable deviates from its mean. We start with the following alternative interpretation for $p_{n,k}$: it is twice the probability that more than $\lfloor (n+k)/2 \rfloor$ coin tosses result in heads (or equivalently tails). We associate each coin toss with an independent Bernoulli random variable $X_i$ with parameter $p=1/2$ since the coin is fair. The random variable $X = \sum_{i=1}^nX_i$ has the Binomial distribution with parameters $n$ and $p$. Then,
\[ p_{n,k} = 2\mathbb{P}\left( \lfloor (n+k)/2 \rfloor +1 \leq X \leq n\right) \leq 2\mathbb{P}\left(X \geq (n+k)/2 \right). \]
Since $X$ has the Binomial distribution, we can once again use the de Moivre-Laplace theorem, to say that as $n\rightarrow \infty$,
\[ \frac{X - np}{\sqrt{np(1-p)}} \longrightarrow \mathcal{N}(0,1), \qquad \text{in distribution}.\]
Roughly speaking, this theorem says that $X$ behaves as a normal random variable with mean $n/2$ and variance $n/4$. Since the tails of normal distributions decay exponentially, we expect that $X$ lies in the range $\frac{n}{2} \pm 1.96 \sqrt{\frac{n}{4}}$ with nearly $95\%$ probability; alternatively, the probability that it is outside this range is very small. To make this more precise, we apply Hoeffding's concentration inequality~\cite[Theorem 2.2.6]{vershynin2018high}, to obtain
\[ \mathbb{P}\left(X \geq (n+k)/2 \right) = \mathbb{P}\left(X - \mathbb{E}[X] \geq k/2 \right) \leq \exp\left(-\frac{k^2}{2n}\right). \]
This gives our desired bound $p_{n,k} \leq 2\exp(-{k^2}/{2n})$.
We can use a similar technique to prove the following result which may be of independent interest. If $0 \leq k \leq n/2$, then
\[ \sum_{j=0}^k \binom{n}{j} \leq 2^n \exp\left( - \frac{(n-2k)^2}{2n}\right).\]
Other concentration inequalities such as Chernoff and Bernstein (see~\cite[Chapter 2]{vershynin2018high}) also give equally interesting bounds. We invite the reader to explore such results.
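The binomial partial-sum bound above can be checked exhaustively for moderate $n$; the following sketch (helper names are ours) does so using exact integer binomial coefficients.

```python
from math import comb, exp

def lhs(n, k):
    # exact partial sum of binomial coefficients
    return sum(comb(n, j) for j in range(k + 1))

def rhs(n, k):
    # the Hoeffding-type bound from the text
    return 2**n * exp(-((n - 2 * k) ** 2) / (2 * n))

for n in (10, 50, 200):
    for k in range(n // 2 + 1):
        assert lhs(n, k) <= rhs(n, k)
```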
\section{Acknowledgements}
The author would like to thank Alen Alexanderian, Ethan Dudley, Ivy Huang, Ilse Ipsen, and Nathan Reading for comments and feedback. The work was supported by the National Science Foundation through the grants DMS-1745654 and DMS-1845406.
\section{Declaration of Interest}
The author has no relevant financial or non-financial competing interests to report.
https://arxiv.org/abs/1812.11519 | Two-scale methods for convex envelopes | We develop two-scale methods for computing the convex envelope of a continuous function over a convex domain in any dimension. This hinges on a fully nonlinear obstacle formulation [A. M. Oberman, "The convex envelope is the solution of a nonlinear obstacle problem", Proc. Amer. Math. Soc. 135(6):1689--1694, 2007]. We prove convergence and error estimates in the max norm. The proof utilizes a discrete comparison principle, a discrete barrier argument to deal with Dirichlet boundary values, and the property of flatness in one direction within the non-contact set. Our error analysis extends to a modified version of the finite difference wide stencil method of [A. M. Oberman, "Computing the convex envelope using a nonlinear partial differential equation", Math. Models Meth. Appl. Sci., 18(05):759--780, 2008]. | \section{Introduction}
Given an open set $\Omega \subset \mathbb{R}^d$ and a continuous function
$f: \overline{\Omega} \rightarrow \mathbb{R}$, its convex envelope in $\Omega$ is defined
as
\begin{equation}\label{E:def-CE}
u(x) = \sup\left\{ l(x): l \leq f \text{ in } \overline{\Omega}, \; l \text{ is affine} \right\},
\end{equation}
which in fact is the largest convex function majorized by $f$ in $\overline{\Omega}$.
This function $u$ can also be viewed as the viscosity solution of the following fully nonlinear, degenerate elliptic PDE introduced by Oberman \cite{Ob1}
\begin{equation}\label{E:pde-int-CE}
T[u;f](x) := \min\left\{ f(x) - u(x), \lambda_1[D^2u](x)\right\} = 0,
\end{equation}
where $\lambda_1[D^2u]$ denotes the smallest eigenvalue of the Hessian $D^2u$.
This is the complementarity form of the fully nonlinear obstacle problem at hand.
\begin{figure}[!htb]
\vspace{-10pt}
\includegraphics[width=0.8\textwidth]{figures/pde_CE.jpg}
\vspace{-20pt}
\caption{\small Illustration of the equation \eqref{E:pde-int-CE}. In the non-contact set $\{u < f\}$, the function $u$ must be flat in one direction, i.e. $\lambda_1[D^2u] = 0$.}
\label{F:pde_CE}
\end{figure}
\Cref{F:pde_CE} illustrates the PDE
formulation \eqref{E:pde-int-CE}. Roughly speaking,
in the contact set
\[
\mathcal{C}(f) := \left\{ x \in \overline{\Omega}: u(x) = f(x) \right\},
\]
we have the equality $u = f$ and the inequality $\lambda_1[D^2u] \geq 0$
given by the convexity of $u$. Outside the contact set, we have
$u < f$ and that $u$ is flat in at least
one direction which implies $\lambda_1[D^2u] = 0$.
In this paper, we consider the case $\Omega$ bounded and
strictly convex, which guarantees the Dirichlet boundary condition
$u = f$ on $\partial\Omega$ is attained. Therefore the convex envelope $u$ of $f$ is the viscosity solution of the following problem:
\begin{equation} \label{E:pde-CE}
\left\{
\begin{aligned}
T[u;f](x) = \min\left\{ f(x) - u(x), \lambda_1[D^2u](x)\right\} = 0 & \quad {\rm in} \;\; \Omega,
\\ u =f & \quad {\rm on} \;\; \partial\Omega.
\end{aligned}
\right.
\end{equation}
The regularity study of convex envelopes dates back to \cite{TrudUrbas1984, CaNiSp}, thus before the PDE formulation \eqref{E:pde-CE} of \cite{Ob1}.
However, the problem considered in \cite{TrudUrbas1984, CaNiSp} is a Dirichlet problem for the degenerate Monge-Amp\`{e}re equation, $\det(D^2u)=0$, which corresponds to the convex envelope of function $f$ given on the boundary $\partial\Omega$ as a Dirichlet condition.
For the convex envelope $u$ in \eqref{E:def-CE}, De Philippis and Figalli \cite{DeFi} obtained recently the
optimal regularity $u \in C^{1,1}(\overline{\Omega})$ under the assumption that $\Omega$ is a uniformly convex domain of class $C^{3,1}$ and $f \in C^{3,1}(\overline{\Omega})$.
There are a handful of papers regarding the numerical
approximation of convex envelopes. Oberman \cite{Ob2} proposed a
wide stencil method to approximate \eqref{E:pde-int-CE}. Dolzmann \cite{Dolzmann} developed a method to compute rank-one convex envelopes, a related notion of critical importance in materials science. Dolzmann and Walkington \cite{DolzWalk} proved an $O(h^{1/3})$ rate of convergence. Finally, Bartels \cite{Bartels} improved the error estimate of \cite{DolzWalk} to $O(h)$ upon increasing the number of directions and function evaluations within elements,
thus at the expense of extra computational cost.
In this paper, we construct and study
a two-scale method for \eqref{E:pde-CE}, which is
somewhat related to the wide stencil method of \cite{Ob2}. Two-scale methods
are developed in \cite{NoNtZh1}, whereas suboptimal pointwise error estimates are derived in
\cite{NoNtZh2} and optimal ones in \cite{LiNo}. We prove existence, uniqueness, and uniform convergence, as well
as pointwise error estimates under realistic regularity assumptions on $u$. Our proof hinges on a discrete comparison principle and discrete barrier functions, and is thus classical. However, we exploit that $u$ is flat in at least one direction outside the contact set $\mathcal{C}(f)$ \cite{CaNiSp, ObSi}, a crucial
property that plays an essential role in dealing with low regularity of $u$. Our techniques extend to a modified wide stencil method obtained from that in \cite{Ob2} upon adding a two-scale structure.
The remainder of this paper is organized as follows. In section 2,
we introduce the two-scale method for convex envelope problem \eqref{E:pde-CE} and
prove several properties of it. In section 3, we prove our main
error estimate in the $L^{\infty}$ norm after
reviewing geometric properties of $u$ and studying
the consistency error. We next extend our analysis to a modified wide stencil method
in section 4. We conclude in section 5 with numerical experiments which illustrate the performance
of the two-scale methods and compare with theory.
\section{Two-Scale Method} \label{S:TwoSc}
In this section, we extend the two-scale method developed in
\cite{NoNtZh1} to solve \eqref{E:pde-CE}, and prove several important
properties including convergence.
\subsection{Definition of the Two-Scale Method} \label{S:IntroTwoSc}
Let $\{\mathcal{T}_h\}$ be a sequence of meshes made of closed simplices $T$. Let $\mathcal{T}_h$
be shape-regular and quasi-uniform
with mesh size $h$ and shape-regular constant $\sigma$, i.e.
\begin{equation}\label{E:shape-regularity}
\max_{h} \ \max_{T \in \mathcal{T}_h} \ \frac{h_T}{\rho_T} \le \sigma,
\end{equation}
where $h_T$ denotes the diameter of $T$ and
$\rho_T$ the diameter of the largest ball inscribed in $T$.
Let $\Omega_h$ be the interior of the union of elements $T\in\mathcal{T}_h$,
$\mathcal{N}_h$ be the nodes of $\mathcal{T}_h$,
$\mathcal{N}_h^b := \{x_i \in \mathcal{N}_h: x_i \in \partial \Omega\}$
be the boundary nodes and
$\mathcal{N}_h^0 := \mathcal{N}_h \setminus \mathcal{N}_h^b$ be the interior nodes;
since we require that
$\mathcal{N}_h^b \subset \partial \Omega$ we deduce that $\Omega_h\subset\Omega$ is also convex. Let
$\mathbb{V}_h$ be the space of continuous piecewise linear functions over $\mathcal{T}_h$.
Before introducing the two-scale method we need additional notation.
Let $\mathbb{S}$ be the unit sphere in $\mathbb{R}^d$.
We consider a finite discretization $\mathbb S_{\theta} \subset \mathbb{S}$ of $\mathbb{S}$ governed by the
parameter $\theta$: given any
$v \in \mathbb{S}$, there exists $v^\theta \in \mathbb S_{\theta}$ such that
\begin{equation*}
|v - v^{\theta} | \leq \theta.
\end{equation*}
Let the meshsize $h$ be the fine scale and $\delta \geq h$ (to be
chosen later) be the coarse scale.
For every $x_i\in\mathcal{N}_h^0$, let
\begin{equation}\label{E:deltai}
\delta_i := \min\big\{\delta,\textrm{dist}(x_i,\partial\Omega_h)\big\},
\end{equation}
and observe that $\delta_i\ge C(\sigma) h$ and the open ball
$B(x_i,\delta_i)$ centered at $x_i$ with radius $\delta_i$ is
contained in $\Omega_h$.
For any function $w \in C(\overline{\Omega}_h)$, in particular for $w \in \mathbb{V}_h$,
let the centered second difference operator be
\begin{equation} \label{E:2Sc2Dif}
\sdd w(x_i;v) := \frac{ w(x_i+ \delta_i v) -2 w(x_i) + w(x_i- \delta_i v) }{ \delta_i^2}
\end{equation}
and note that it is well defined for all $x_i\in\mathcal{N}_h^0$ and $v \in \mathbb{S}$.
Since
\begin{equation}\label{E:lam1}
\lambda_1[D^2 w](x) = \min_{v \in \mathbb{S}} \partial_{vv}^2 w(x),
\end{equation}
we consider the following approximation of $\lambda_1[D^2 w]$ at $x=x_i \in \mathcal{N}_h^0$
\begin{equation*}
\lambda_1[D^2 w](x_i) \approx \min_{v \in \mathbb S_{\theta}} \sdd w(x_i;v).
\end{equation*}
If $\varepsilon := (h,\delta,\theta)$ encodes the discretization parameters, our two-scale operator $T_{\varepsilon}$
for the convex envelope problem \eqref{E:pde-int-CE} is finally given by
\begin{equation}\label{E:disc-oper}
T_{\varepsilon}[w_h;f](x_i) = \min\left\{f(x_i) - w_h(x_i), \;
\min_{v \in \mathbb S_{\theta}} \sdd w_h(x_i;v) \right\}
\quad\forall \, x_i \in \mathcal{N}_h^0
\end{equation}
for any $w_h\in\mathbb{V}_h$. The corresponding two-scale method reads:
seek $u_{\varepsilon} \in \mathbb{V}_h$
\begin{equation} \label{E:2ScOp}
T_{\varepsilon}[u_{\varepsilon};f](x_i) = 0 \quad \; \forall x_i \in \mathcal{N}_h^0,
\end{equation}
and $u_{\varepsilon}(x_i) = f(x_i)$ for all $x_i \in \mathcal{N}_h^b$. We say that $w_h \in \mathbb{V}_h$ is
a discrete subsolution (supersolution) of \eqref{E:2ScOp} if
\[
T_{\varepsilon}[w_h; f](x_i) \ge 0 \ (\le 0) \quad \forall x_i \in \mathcal{N}_h^0 \ ; \quad
w_h(x_i) \le (\ge) f(x_i) \quad \forall x_i \in \mathcal{N}_h^b .
\]
Therefore, a discrete solution of \eqref{E:2ScOp} is both a discrete sub and
supersolution.
Although this discrete solution $u_{\varepsilon}$ fails to be convex in general,
it is still discretely convex, which is a notion of approximate
convexity introduced in \cite{NoNtZh1}. We say that $w_h \in \mathbb{V}_h$
is \textit{discretely convex} \cite{NoNtZh1} if
\begin{equation*}
\sdd w_h(x_i;v) \geq 0 \qquad \forall x_i \in \mathcal{N}_h^0, \quad
\forall v \in \mathbb S_{\theta}.
\end{equation*}
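To fix ideas, the following Python sketch evaluates the discrete operator in $d=2$. It is purely illustrative, not the finite element implementation: it applies the centered second difference to a smooth function $w$ directly, with a fixed $\delta$ in place of the node-dependent $\delta_i$, and the helper names are ours.

```python
import numpy as np

def directions(theta):
    # finite discretization S_theta of the unit circle (d = 2), spacing <= theta
    m = int(np.ceil(2 * np.pi / theta))
    phi = 2 * np.pi * np.arange(m) / m
    return np.stack([np.cos(phi), np.sin(phi)], axis=1)

def second_diff(w, x, v, delta):
    # centered second difference [w(x + dv) - 2 w(x) + w(x - dv)] / d^2
    return (w(x + delta * v) - 2.0 * w(x) + w(x - delta * v)) / delta**2

def T_eps(w, f, x, delta, theta):
    # min{ f(x) - w(x), min over v in S_theta of the second differences }
    lam1 = min(second_diff(w, x, v, delta) for v in directions(theta))
    return min(f(x) - w(x), lam1)

# a convex quadratic is discretely convex: every second difference equals 2,
# so the operator reduces to the obstacle term f - w = 1 here
w = lambda x: np.dot(x, x)
f = lambda x: np.dot(x, x) + 1.0
val = T_eps(w, f, np.array([0.2, -0.1]), delta=0.1, theta=0.1)
assert abs(val - 1.0) < 1e-9
```

For a quadratic, the second difference reproduces $\partial^2_{vv} w$ exactly, mirroring the computation used for the barrier $q$ in the proof of the discrete comparison principle.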
\subsection{Discrete Comparison Principle}\label{S:DCP}
One important feature of the definition \eqref{E:disc-oper} of the discrete operator $T_{\varepsilon}$ is its monotonicity. This is similar to the two-scale method for Monge-Amp\`{e}re equation in \cite[Lemma 2.3]{NoNtZh1}.
\begin{Lemma}[monotonicity] \label{L:Monotonicity}
Let $x_i \in \mathcal{N}_h^0$ be an interior node and $u_h,w_h \in \mathbb{V}_h$. If
$u_h(x_i) \geq w_h(x_i)$ and
\begin{equation*}
\sdd u_h(x_i;v) \leq \sdd w_h(x_i;v)
\end{equation*}
for any $v \in \mathbb S_{\theta}$, then
\begin{equation*}
T_{\varepsilon}[u_h;f](x_i) \leq T_{\varepsilon}[w_h;f](x_i).
\end{equation*}
In particular, if $u_h - w_h$
attains a non-negative maximum at $x_i$, then
\begin{equation*}
T_{\varepsilon}[u_h;f](x_i) \leq T_{\varepsilon}[w_h;f](x_i).
\end{equation*}
\end{Lemma}
\begin{proof}
If $\sdd u_h(x_i;v) \leq \sdd w_h(x_i;v)$ for any $v \in \mathbb S_{\theta}$, then
\begin{equation*}
\min_{v \in \mathbb S_{\theta}} \sdd u_h(x_i;v) \le \min_{v \in \mathbb S_{\theta}} \sdd w_h(x_i;v).
\end{equation*}
Recalling the definition \eqref{E:disc-oper} of $T_{\varepsilon}$ and combining
with the fact $u_h(x_i) \geq w_h(x_i)$, this implies
\begin{equation*}
T_{\varepsilon}[u_h;f](x_i) \le T_{\varepsilon}[w_h;f](x_i).
\end{equation*}
On the other hand, if $u_h - w_h$
attains a non-negative maximum at $x_i \in \mathcal{N}_h^0$, then we
have $u_h(x_i) \geq w_h(x_i)$ and
\begin{equation*}
u_h(x_i) - w_h(x_i) \geq u_h(z) - w_h(z) \quad \forall z \in \overline{\Omega}_h.
\end{equation*}
By definition \eqref{E:2Sc2Dif} of operator $\sdd$, we obtain
\begin{equation*}
\sdd u_h(x_i;v) \leq \sdd w_h(x_i;v) \quad \forall v \in \mathbb S_{\theta},
\end{equation*}
and thus use the previous result to conclude the proof.
\end{proof}
Monotonicity leads to the following discrete comparison principle.
\begin{Lemma}[discrete comparison principle] \label{L:DCP}
Let $u_h,w_h \in \mathbb{V}_h$ with $u_h(x_i) \leq w_h(x_i)$
for all $x_i \in \mathcal{N}_h^b$ and
\begin{equation}\label{E:comparison}
T_{\varepsilon}[u_h;f](x_i) \geq T_{\varepsilon}[w_h;f](x_i) \quad \forall x_i \in \mathcal{N}_h^0.
\end{equation}
Then, $u_h \leq w_h$ in $\Omega_h$.
\end{Lemma}
\begin{proof}
The proof splits into two steps.
\textbf{Step 1.} We first consider the case with strict inequality
\begin{equation}\label{E:strict}
T_{\varepsilon}[u_h;f](x_i) > T_{\varepsilon}[w_h;f](x_i) \quad \forall x_i \in \mathcal{N}_h^0.
\end{equation}
We assume by contradiction that $u_h > w_h$ somewhere in $\Omega_h$. Since
$u_h \leq w_h$ at the boundary nodes, the maximum of $u_h - w_h$ is positive and attained at an interior node $x_k \in \mathcal{N}_h^0$.
Then, by \Cref{L:Monotonicity} (monotonicity) we obtain the contradiction
\begin{equation} \label{E:Contr}
T_{\varepsilon}[u_h;f](x_k) \leq T_{\varepsilon}[w_h;f](x_k).
\end{equation}
\textbf{Step 2.}
Now we deal with \eqref{E:comparison} without the strict inequality. We introduce the auxiliary
strictly convex function $q(x) = \frac{1}{2}|x-x_0|^2- \frac{1}{2}R^2$, which satisfies
$q \leq 0$ on $\overline{\Omega}$, and in particular
$q \leq 0$ on $\partial\Omega_h$ provided
$R = \textrm{diam} (\Omega)$ and $x_0 \in \Omega$.
Its Lagrange interpolant $q_h=\mathcal I_h q$ is discretely convex and satisfies
\begin{equation*}
\sdd q_h(x_i;v) \geq \sdd q(x_i;v) = \partial^2_{vv} q(x_i) = 1
\quad\forall x_i \in \mathcal{N}_h^0,
\quad\forall v \in \mathbb S_{\theta},
\end{equation*}
because $q$ is quadratic. For arbitrary $\alpha>0$,
consider the function
$u_{\alpha} = u_h + \alpha q_h - \alpha$, which satisfies
$u_{\alpha} < u_h \leq w_h$ on $\partial\Omega_h$ and
\begin{equation*}
\begin{aligned}
T_{\varepsilon}[u_{\alpha};f](x_i)
& = \min\left\{f(x_i) - u_{\alpha}(x_i), \;
\min_{v \in \mathbb S_{\theta}} \sdd u_{\alpha}(x_i;v) \right\} \\
& \geq \min\left\{f(x_i) - (u_h(x_i) - \alpha), \;
\min_{v \in \mathbb S_{\theta}} \left(\sdd u_h(x_i;v) + \alpha \right) \right\} \\
&= T_{\varepsilon}[u_h;f](x_i) + \alpha > T_{\varepsilon}[w_h;f](x_i) \quad \forall x_i \in \mathcal{N}_h^0.
\end{aligned}
\end{equation*}
Applying Step 1 we deduce
\begin{equation*}
u_h + \alpha q_h - \alpha \leq w_h \quad \forall \alpha>0.
\end{equation*}
Finally, let $\alpha\to0$ to obtain the asserted inequality.
\end{proof}
\subsection{Existence, Uniqueness and Stability}\label{S:Exist-Uniq}
We now prove several properties of our discrete system \eqref{E:2ScOp}
which are useful for the proof of convergence.
\begin{Lemma} [existence, uniqueness and stability]\label{L:Exist-Uniq-Stab}
There exists a unique $u_{\varepsilon}\in\mathbb{V}_h$ that solves the discrete
equation \eqref{E:2ScOp}. The solution $u_{\varepsilon}$ is stable in
the sense that $\|u_{\varepsilon}\|_{L^{\infty}(\Omega_h)} \leq \|f\|_{L^{\infty}(\Omega)}$
regardless of the parameters $\varepsilon=(h,\delta,\theta)$ of the method.
\end{Lemma}
\begin{proof}
Since uniqueness is a trivial consequence of Lemma \ref{L:DCP}
(discrete comparison principle), we
just have to prove existence and stability.
\textbf{Step 1 - Stability:}
We first show that $u_h^- = \mathcal I_h u$ is a discrete subsolution
and $u_h^+ = \mathcal I_h f$ is a discrete supersolution, where $u$ is the
exact convex envelope and $\mathcal I_h$ again stands for the Lagrange interpolation operator.
Since $u$ is the exact convex envelope,
for any $x_i \in \mathcal{N}_h^0$, we have $u_h^-(x_i) \leq f(x_i)$ and
$\sdd u_h^-(x_i;v) \geq 0$ because $u$ is convex. By definition \eqref{E:disc-oper}
of $T_{\varepsilon}$, this gives us $T_{\varepsilon}[u_h^-;f](x_i) \geq 0$ for all $x_i \in \mathcal{N}_h^0$.
It is also clear that we have
$T_{\varepsilon}[u_h^+;f](x_i) \leq f(x_i) - u_h^+(x_i) = 0$ for all $x_i \in \mathcal{N}_h^0$.
Therefore combining with the fact that
$u_h^+(x_i) = u_h^-(x_i) = f(x_i)$ for $x_i \in \mathcal{N}_h^b$, we
see that $u_h^-$ and $u_h^+$ are discrete subsolution and supersolution
respectively. By Lemma \ref{L:DCP} (discrete comparison principle),
this implies
\begin{equation}\label{E:disc-sol-bounds}
u_h^- \leq u_{\varepsilon} \leq u_h^+,
\end{equation}
and we thus obtain the stability of $u_{\varepsilon}$ because both
$\|u_h^- \|_{L^{\infty}(\Omega_h)}$ and $\|u_h^+ \|_{L^{\infty}(\Omega_h)}$
are bounded by $\|f\|_{L^{\infty}(\Omega)}$.
\textbf{Step 2 - Discrete Perron Method:}
It remains to prove the existence of $u_{\varepsilon}$. We proceed as in
\cite{NoNtZh1, NoZh2} and use the discrete Perron's method to construct a monotone increasing sequence of functions
$\left\{ u_h^k \right\}_{k=0}^\infty$.
The initial iterate $u_h^0$ is chosen to be $u_h^-$, and
thus satisfies the boundary condition $u_h^0(x_i) = f(x_i)$ for
all $x_i \in \mathcal{N}_h^b$ and
\begin{equation}\label{E:discrete-sub}
T_{\varepsilon}[u_h^0;f](x_i) \geq 0 \quad\forall x_i\in\mathcal{N}_h^0.
\end{equation}
We construct $\left\{ u_h^k \right\}$
by induction. Suppose that we have already built
$u_h^k\in\mathbb{V}_h$ satisfying both the boundary condition and
\eqref{E:discrete-sub}.
To construct $u_h^{k+1}\in\mathbb{V}_h$ such that $u_h^{k+1}\geq u_h^k$
and also satisfies both the boundary condition and \eqref{E:discrete-sub},
we consider all interior nodes in order and construct auxiliary
functions $u_h^{k,i-1}\in\mathbb{V}_h$ using the first $i-1$ nodes and starting
from $u_h^{k,0}:=u_h^k$ as follows. At $x_i\in\mathcal{N}_h^0$ we check whether or not
$T_{\varepsilon}[u_h^{k,i-1};f](x_i) > 0$. If so, we
increase the value of $u_h^{k,i-1}(x_i)$ and denote the resulting
function by $u_h^{k,i}$, until
\begin{equation*}
T_{\varepsilon}[u_h^{k,i};f](x_i) = 0.
\end{equation*}
This is possible because $T_{\varepsilon}[u_h^{k,i};f](x_i)$
is strictly decreasing with respect to $u_h^{k,i}(x_i)$.
Expression \eqref{E:disc-oper} also shows that
this process does not decrease $T_{\varepsilon}[u_h^{k,i};f](x_j)$
for any $x_j \ne x_i$, whence
\begin{equation*}
T_{\varepsilon}[u_h^{k,i};f](x_j) \geq T_{\varepsilon}[u_h^{k,i-1};f](x_j) \geq 0
\quad\forall x_j\ne x_i.
\end{equation*}
We repeat this process with the remaining nodes $x_j$ for $i<j\leq N$
where $N$ is the number of all interior points,
and set $u_h^{k+1} := u_h^{k,N}$ to be the last intermediate
function. By construction, we clearly obtain
\begin{equation*}
T_{\varepsilon}[u_h^{k+1};f](x_i) \geq 0,
\quad
u_h^{k+1}(x_i) \geq u_h^k(x_i)
\quad\forall x_i\in\mathcal{N}_h^0,
\end{equation*}
and $u_h^{k+1}(x_i) = f(x_i)$ for all $x_i \in \mathcal{N}_h^b$.
\textbf{Step 3 - Convergence of $u_h^k$:}
By construction we have $u_h^k \geq u_h^0 = u_h^-$ and by \Cref{L:DCP} (discrete comparison principle),
$u_h^k \leq u_h^+$ and thus $u_h^k(x_i)$ is uniformly bounded.
Since the sequence $\{u_h^k\}_k$ is monotone, it must converge
to a limit
\begin{equation*}
u_{\varepsilon}(x_i) = \lim_{k\to\infty} u_h^k(x_i) = \lim_{k\to\infty} u_h^{k,i}(x_i)
\quad\forall x_i\in\mathcal{N}_h.
\end{equation*}
Due to continuity of $T_{\varepsilon}[w_h;f]$ with respect to $w_h(x_j)$, we have
$T_{\varepsilon}[u_{\varepsilon};f](x_i) = \lim_{k\to\infty}T_{\varepsilon}[u_h^{k,i};f](x_i) = 0 $
for any $x_i \in \mathcal{N}_h^0$.
This implies that the limit $u_{\varepsilon}$ is the solution of discrete equation \eqref{E:2ScOp}
and finishes the proof.
\end{proof}
Another way to prove existence and uniqueness is to
take advantage of existing results for the Bellman equation and
Howard's algorithm, as we discuss in section \ref{S:Exp}. \looseness=-1
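For intuition, the Perron sweeps of the existence proof take a particularly simple form in $d=1$ with a single scale $\delta=h$ (so that $\mathbb S_{\theta}$ is trivial): raising $u_i$ to the largest value compatible with $T_{\varepsilon}\ge 0$ amounts to $u_i \mapsto \min\{f_i, (u_{i-1}+u_{i+1})/2\}$. The Python sketch below is purely illustrative (uniform mesh, a double-well obstacle of our choosing, assuming NumPy) and is not the multidimensional two-scale method analyzed here.

```python
import numpy as np

N = 81
x = np.linspace(-1.0, 1.0, N)
f = (x**2 - 0.25) ** 2           # obstacle: a double well with minima at +-1/2

u = np.full(N, f.min())          # a discrete subsolution to start from
u[0], u[-1] = f[0], f[-1]        # Dirichlet boundary condition u = f

for sweep in range(100000):
    change = 0.0
    for i in range(1, N - 1):
        # largest u_i with f_i - u_i >= 0 and nonnegative second difference
        new = min(f[i], 0.5 * (u[i - 1] + u[i + 1]))
        change = max(change, abs(new - u[i]))
        u[i] = new
    if change < 1e-13:
        break

assert np.all(u <= f + 1e-12)                 # u stays below the obstacle
d2 = u[:-2] - 2.0 * u[1:-1] + u[2:]
assert d2.min() >= -1e-10                     # discrete convexity
assert abs(u[N // 2]) < 1e-8  # envelope of the double well vanishes at x = 0
```

At the fixed point each interior node satisfies the complementarity condition: either $u_i = f_i$ (contact) or the second difference vanishes, the one-dimensional analogue of flatness in one direction.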
We define for $x \in \overline{\Omega}$
\begin{equation}\label{E:def-uo-uu}
\overline{u}(x) := \limsup_{\varepsilon,\frac{h}{\delta} \to 0, \;y \to x} u_{\varepsilon}(y),
\quad
\underline{u}(x) := \liminf_{\varepsilon,\frac{h}{\delta} \to 0, \;y \to x} u_{\varepsilon}(y),
\end{equation}
where the limits are taken for $y \in \Omega_h$.
From equation \eqref{E:disc-sol-bounds} and the continuity of
both $u$ and $f$, we immediately obtain the following lemma
characterizing the behavior of $\overline{u}$ and $\underline{u}$
on the boundary $\partial \Omega$.
\begin{Lemma}[boundary behavior] \label{L:disc-sol-boundary}
Let $\Omega$ be a strictly convex bounded domain, let $u_{\varepsilon}$
be the discrete solution of \eqref{E:2ScOp}, and let
$\overline{u}(x)$ and $\underline{u}(x)$ be
defined in \eqref{E:def-uo-uu}. Then we have
$\overline{u}(x) = \underline{u}(x) = f(x)$
for all $x \in \partial \Omega$.
\end{Lemma}
\begin{proof}
Since $\Omega$ is strictly convex, the Dirichlet boundary condition $u=f$ on $\partial\Omega$ is attained as a direct consequence of \cite[Corollary 17.1.5]{Rockafellar2015convex}, or can be proved in the same way as \cite[Theorem 1.5.2]{Gutierrez}. Next use
\begin{equation*}
\mathcal I_h u(x) = u_h^-(x) \leq u_{\varepsilon}(x) \leq u_h^+(x) = \mathcal I_h f(x) \quad x \in \Omega_h
\end{equation*}
with equality on $\partial\Omega$ to deduce the assertion.
\end{proof}
\subsection{Consistency} \label{S:Consistency}
We now quantify the consistency error of our discrete
operator $T_{\varepsilon}[\mathcal I_h u;f]$ for a smooth function $u$, which is enough for the proof of convergence.
In \Cref{S:RoC} we will carry out
a more delicate analysis of the consistency error which
enables us to prove error estimates for solutions with
weaker but realistic regularity. In the meantime, we stress
that the convex envelope $u$ is
generically never better than of class $C^{1,1}(\overline{\Omega})$ \cite{DeFi}.
Given a node $x_i \in \mathcal{N}_h^0$ we denote
\begin{equation}\label{E:Bi}
B_i := \cup \{T: T\in\mathcal{T}_h, \, \textrm{dist}(x_i,T) \leq \delta_i \},
\end{equation}
where $\delta_i$ is defined in \eqref{E:deltai}.
We also denote by $\Omega_{h,s}$ the following $s$-interior region of $\Omega_h$, for any parameter $s > 0$:
\begin{equation*}
\Omega_{h,s} = \left\{ x \in \Omega_h \; : \;
\textrm{dist}(x,\partial \Omega_h ) \geq s \right\}.
\end{equation*}
Hereafter, we use the symbols $C(d,\sigma)$, $C(d)$ and $C$ to
denote constants that depend only on
the dimension $d$ and the shape-regularity constant $\sigma$, but
are independent of the two scales
$h$ and $\delta$, the parameter $\theta$ and the function $u$.
\Cref{L:Consistency-smooth} below establishes a consistency error estimate for the
two-scale method similar to Lemmas 4.1 and 4.2 of \cite{NoNtZh1}. The
proof follows along the lines of \cite{NoNtZh1}.
\begin{Lemma} [consistency for smooth functions] \label{L:Consistency-smooth}
Let $u\in C^{2+k,\alpha}(B_i)$ for $k=0,1$ and $\alpha \in (0,1]$,
$\mathcal I_h u$ be its Lagrange interpolant,
and $B_i$ be defined in \eqref{E:Bi}.
The following estimates are then valid:
\begin{enumerate}[(i)]
\item
For all $x_i \in \mathcal{N}_h^0$ and all $v \in \mathbb{S}$, we have
\begin{equation}\label{E:sddIhuB}
\left|\sdd \mathcal I_h u (x_i;v) \right| \leq C(d,\sigma) \; |u|_{W^2_{\infty}(B_i) },
\end{equation}
\item
For all $x_i \in \mathcal{N}_h^0 \cap \Omega_{h,\delta}$ and all $v \in \mathbb{S}$,
we have
\begin{equation}\label{E:sdd-error}
\left|\sdd \mathcal I_h u(x_i;v) - \sd{u}{v}(x_i) \right| \leq C(d,\sigma)
\left( |u|_{C^{2+k,\alpha}(B_i)} \delta^{k+\alpha}+ |u|_{W^2_{\infty}(B_i)}\frac{h^2}{\delta^2} \right),
\end{equation}
\item
For all $x_i \in \mathcal{N}_h^0 \cap \Omega_{h,\delta}$ and all $v \in \mathbb{S}$, we have
\begin{equation}\label{E:Op-error-smooth}
\small
\bigg| T_{\varepsilon}[\mathcal I_h u;f](x_i) - T[u;f](x_i) \bigg| \leq
C(d,\sigma) \left[ |u|_{C^{2+k,\alpha}(B_i)} \delta^{k+\alpha} +
|u|_{W^2_{\infty}(B_i)} \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right].
\end{equation}
\end{enumerate}
\end{Lemma}
\begin{proof}
For the proof of \eqref{E:sddIhuB} and \eqref{E:sdd-error},
the readers may refer to \cite[Lemma 4.1]{NoNtZh1}.
Here we only prove \eqref{E:Op-error-smooth}.
Recalling the definitions of $T$ in \eqref{E:pde-int-CE}
and $T_{\varepsilon}$ in \eqref{E:disc-oper} we only need to prove
{\small\begin{equation*}
\left| \lambda_1[D^2u](x_i) - \min_{v \in \mathbb S_{\theta}} \sdd \mathcal I_h u(x_i;v) \right|
\leq C(d,\sigma) \left[ |u|_{C^{2+k,\alpha}(B_i)} \delta^{k+\alpha} +
|u|_{W^2_{\infty}(B_i)} \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right].
\end{equation*}}
To this end, first let $v_{\theta}$ be the direction such that
\begin{equation*}
\sdd \mathcal I_h u (x_i;v_{\theta}) = \min_{v \in \mathbb S_{\theta}} \sdd \mathcal I_h u (x_i;v).
\end{equation*}
We use \eqref{E:lam1} and \eqref{E:sdd-error} to get
\begin{equation*}
\begin{aligned}
\lambda_1[D^2u](x_i) - \min_{v \in \mathbb S_{\theta}} \sdd \mathcal I_h u(x_i;v)
\leq & \; \sd{u}{v_{\theta}}(x_i) - \sdd \mathcal I_h u(x_i;v_{\theta}) \\
\leq & \; C(d,\sigma)
\left( |u|_{C^{2+k,\alpha}(B_i)} \delta^{k+\alpha}+ |u|_{W^2_{\infty}(B_i)} \frac{h^2}{\delta^2} \right),
\end{aligned}
\end{equation*}
which proves one inequality of \eqref{E:Op-error-smooth}.
To show the reverse inequality we let $v$ be the direction that realizes the minimum in \eqref{E:lam1},
which means
\begin{equation*}
\partial_{vv}^2 u(x_i) = \lambda_1[D^2 u](x_i),
\end{equation*}
and we also know that $v$ is the eigenvector of $D^2 u(x_i)$ corresponding to
the smallest eigenvalue $\lambda_1$.
By definition of $\mathbb S_{\theta}$, there exists $v_{\theta} \in \mathbb S_{\theta}$ such that $|v - v_{\theta}| \leq \theta$,
and we can thus write
\begin{equation*}
\min_{v \in \mathbb S_{\theta}} \sdd \mathcal I_h u(x_i;v) - \lambda_1[D^2u](x_i)
\leq \sdd \mathcal I_h u(x_i;v_{\theta}) - \partial_{vv}^2 u(x_i)
= I_1 + I_2,
\end{equation*}
where
\begin{equation*}
I_1 = \sdd \mathcal I_h u(x_i;v_{\theta}) - \partial_{v_{\theta}\vt}^2 u(x_i),
\qquad
I_2 = \partial_{v_{\theta}\vt}^2 u(x_i) - \partial_{vv}^2 u(x_i).
\end{equation*}
It is clear that $I_1$ can be bounded by \eqref{E:sdd-error}.
For $I_2$, write $v_{\theta} = v + w$, then
\begin{equation*}
\begin{aligned}
\partial_{v_{\theta}\vt}^2 u(x_i) &= v_{\theta}^T D^2u(x_i) v_{\theta}
= \partial_{vv}^2 u(x_i) + 2 w^T D^2u(x_i) v + w^T D^2u(x_i) w \\
&= \partial_{vv}^2 u(x_i) + 2 \lambda_1 v \cdot w + w^T D^2u(x_i) w.
\end{aligned}
\end{equation*}
Since
\begin{equation*}
1= |v_{\theta} |^2 = |v|^2 + 2 v \cdot w + |w|^2,
\end{equation*}
and $|v| = 1$, we observe that
\begin{equation*}
| v \cdot w | = \frac{1}{2} |w|^2 \leq \frac{1}{2}\theta^2,
\end{equation*}
whence we obtain
\begin{equation*}
I_2 \leq C |u|_{W^2_{\infty}(B_i)} \theta^2.
\end{equation*}
Combining the bounds for both $I_1$ and $I_2$ we have
{\small
\begin{equation*}
\min_{v \in \mathbb S_{\theta}} \sdd \mathcal I_h u(x_i;v) - \lambda_1[D^2u](x_i)
\leq C(d,\sigma) \left[ |u|_{C^{2+k,\alpha}(B_i)} \delta^{k+\alpha} +
|u|_{W^2_{\infty}(B_i)} \left( \frac{h^2}{\delta^2} + \theta^2 \right) \right] .
\end{equation*}}
This finishes the proof of \eqref{E:Op-error-smooth}.
\end{proof}
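To fix ideas, the $O(\theta^2)$ contribution of the directional discretization can be observed numerically. The following sketch (an illustration under our own naming conventions, not the implementation used for the experiments in this paper) evaluates the minimum of centered second differences over a $\theta$-net of unit directions in $d=2$ for a quadratic $u$; for a quadratic the centered second difference is exact, so the remaining consistency error is purely directional.

```python
import math

def sdd(u, x, v, delta):
    """Centered second difference of u at x in direction v with step delta."""
    up = u((x[0] + delta * v[0], x[1] + delta * v[1]))
    um = u((x[0] - delta * v[0], x[1] - delta * v[1]))
    return (up + um - 2.0 * u(x)) / delta**2

def min_sdd(u, x, delta, theta):
    """Minimum of sdd over a theta-net of unit directions (d = 2).

    Since sdd(x; v) = sdd(x; -v), a net of the half circle suffices."""
    n = int(math.ceil(math.pi / theta))
    return min(sdd(u, x, (math.cos(k * theta), math.sin(k * theta)), delta)
               for k in range(n))

# Quadratic with Hessian eigenvalues 1 and 3 in a basis rotated by phi0,
# so that the eigenvector of lambda_1 = 1 does not belong to the net.
phi0 = 0.03
def u(x):
    c, s = math.cos(phi0), math.sin(phi0)
    y1 = c * x[0] + s * x[1]    # coordinates in the eigenbasis
    y2 = -s * x[0] + c * x[1]
    return 0.5 * (y1**2 + 3.0 * y2**2)

theta, delta = 0.1, 0.1
approx = min_sdd(u, (0.0, 0.0), delta, theta)   # approximates lambda_1 = 1
```

The computed minimum overestimates $\lambda_1[D^2u] = 1$ by an amount bounded by a multiple of $\theta^2$, consistent with the $\theta^2$ term in the estimate above.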
\subsection{Convergence}\label{S:Convergence}
We are now ready to prove the convergence result.
\begin{Theorem}[convergence] \label{T:Convergence}
If $\Omega$ is a bounded and strictly convex domain and $f \in C(\overline{\Omega})$,
then the discrete solution $u_{\varepsilon}$ of \eqref{E:2ScOp} converges uniformly
to the convex envelope $u$ of $f$ as $\varepsilon = (h, \delta, \theta) \to 0$ and
$\frac{h}{\delta} \to 0$.
\end{Theorem}
\begin{proof}
Our approximation scheme \eqref{E:2ScOp} satisfies monotonicity (\Cref{L:DCP}),
stability (\Cref{L:Exist-Uniq-Stab}), and consistency
(\Cref{L:Consistency-smooth}). Moreover, the PDE \eqref{E:pde-CE} for the convex envelope problem
admits a comparison principle \cite[Proposition 2.7]{ObRu} for
Dirichlet boundary conditions in the classical sense.
Similarly to \cite[Section 4]{JeSm}, \cite[Theorem 17]{FeJe} and \cite[Section 5]{NoNtZh1},
in order to use the convergence theorem of Barles and Souganidis \cite{BaSoug},
we still need the additional fact that $\overline{u}(x) = \underline{u}(x) = f(x)$
on $\partial\Omega$. Since this is proved in \Cref{L:disc-sol-boundary} (boundary behavior), \cite{BaSoug} yields uniform convergence
of the discrete solution $u_{\varepsilon}$ to the viscosity solution $u$ of \eqref{E:pde-CE}.
\end{proof}
\section{Rates of Convergence} \label{S:RoC}
In this section, we prove convergence rates for solutions
of class $C^{k,\alpha}(\overline{\Omega})$ for $k=0,1$ and $0<\alpha \leq 1$.
Since in general we can only expect
$u \in C^{1,1}(\overline{\Omega})$, even for smooth $f$ and $\Omega$, the consistency
error estimate of \Cref{S:Consistency} does not apply.
The challenge is thus to estimate the consistency error
for solutions with less regularity.
We first prove a key geometric lemma about convex envelopes, which enables us to
estimate the consistency error for $u \in C^{k,\alpha}(\overline{\Omega})$. On the basis of this result, we
next prove the convergence rate using \Cref{L:DCP} (discrete comparison principle).
\subsection{Flatness}\label{S:Flatness}
The heuristic behind the governing PDE \eqref{E:pde-int-CE} is that the convex envelope $u$ must be flat in at least one direction within the non-contact set, i.e. $\lambda_1[D^2 u](x) = 0$ for all $x \notin \mathcal{C}(f)$.
The question whether there is a line segment containing $x$, on which $u$ is flat, is studied in \cite[Section 3]{ObSi} for the Dirichlet convex envelope problem in which $f$ is only defined on $\partial\Omega$. For $f \in C(\overline{\Omega})$ defined in the entire $\Omega$, and corresponding definition \eqref{E:def-CE} of convex envelope $u$, we have a similar property.
\begin{Lemma}[flatness in one direction]\label{L:Flatness}
Let $f \in C(\overline{\Omega})$ and $x \in \Omega$ be such that $\textrm{dist}(x,\mathcal{C}(f)) \geq d \delta$.
Then for any slope $p \in \partial u(x)$,
there exists a direction $v \in \mathbb{S}$ such that
\begin{equation*}
x_{\pm} = x \pm \delta v,
\quad
u(x_{\pm}) = u(x) \pm \delta (p \cdot v),
\quad
\sdd u(x;v) = 0.
\end{equation*}
Moreover, $p$ belongs also to the subdifferential sets
$\partial u(x_{\pm})$.
\end{Lemma}
This lemma says that if $x$ is at distance at least $d \delta$ from the contact set $\mathcal{C}(f)$,
then there exists a line segment centered at $x$ with length at least $2\delta$ on which the convex envelope $u$ is flat. Flatness means that
the second difference of $u$ in this direction vanishes, which plays
an important role in deriving the consistency error for $x$
far away from $\mathcal{C}(f)$. To prove \Cref{L:Flatness},
we need the following definition and subsequent result:
given $x \in \Omega \setminus \mathcal{C}(f)$ and $p \in \partial u(x)$, let
\begin{equation*}
\mathcal{C}(f;x,p) := \left\{ y \in \overline{\Omega}: f(y) = u(x) + p \cdot (y-x) \right\},
\end{equation*}
and note that $\mathcal{C}(f;x,p) \subset \mathcal{C}(f)$: since $p \in \partial u(x)$, we have
$f(y) \geq u(y) \geq u(x) + p \cdot (y-x)$, so $y \in \mathcal{C}(f;x,p)$ forces $u(y)=f(y)$. The following auxiliary result
is exactly the same as \cite[Lemma 3.3]{DeFi}
and similar to \cite[Lemma 2]{CaNiSp} and \cite[Theorem 3.2]{ObSi}.
We still give a proof here for completeness.
\begin{Lemma}[structure of non-contact set]\label{L:noncontact-set}
Let $f \in C(\overline{\Omega})$ and $x \in \Omega \setminus \mathcal{C}(f)$.
Then for any slope $p \in \partial u(x)$, there exist points
$x_1,\ldots,x_k \in \mathcal{C}(f)$ with $2 \leq k \leq d+1$ such that
\begin{equation*}
x \in \textrm{conv} \;(x_1,\ldots,x_k),
\end{equation*}
and $u$ is affine in the convex hull $\textrm{conv} \;(x_1,\ldots,x_k)$ of $(x_i)_{i=1}^k$. Moreover, $p$ is also in the subdifferential set
$\partial u(y)$ for any $y \in \textrm{conv} \;(x_1,\ldots,x_k)$.
\end{Lemma}
\begin{proof}
For any $p \in \partial u(x)$, define
$P(y) := u(x) + p \cdot (y-x)$ and observe that
\begin{equation*}
\mathcal{C} := \mathcal{C}(f;x,p) = \left\{ y \in \overline{\Omega}: f(y) = P(y) \right\}.
\end{equation*}
We claim that $x \in \textrm{conv}(\mathcal{C})$. We argue by contradiction:
suppose $x \notin \textrm{conv}(\mathcal{C})$, and use the hyperplane separation theorem
to find an affine function $L$ such that
$L(x) > 0$ and $L(y) < 0$ for every $y \in \mathcal{C}$. By the definition
of $\mathcal{C}$ and the fact that $P \leq u \leq f$, it is clear that
$f - P$ is strictly positive in the compact set
$\overline{\Omega} \cap \{ L \geq 0\}$: in fact, if $f(y) \le P(y)$ then $f(y) = P(y) = u(y)$
and $y \in \mathcal{C}$, whence $L(y) < 0$. It follows that
for some small $\alpha > 0$, we have
\begin{equation*}
\wt{L}(y) := P(y) + \alpha L(y) \leq f(y) \quad \forall y \in \overline{\Omega},
\end{equation*}
but $\wt{L}(x) > P(x) = u(x)$. This contradicts the definition
of convex envelope $u$ and thus proves the claim $x \in \textrm{conv}(\mathcal{C})$.
Now we use Carath\'{e}odory's theorem to obtain the existence of
$x_1,\ldots,x_k \in \mathcal{C}$ with $k \leq d+1$ such that
$x \in \textrm{conv}(x_1,\ldots,x_k)$.
To prove that $p \in \partial u(y)$ for any $y \in \textrm{conv}(x_1,\ldots,x_k)$,
we define
\begin{equation*}
\mathcal{K} := \left\{ y \in \overline{\Omega}:
u(y) = u(x) + p \cdot (y - x)
\right\} = \left\{ y \in \overline{\Omega}: u(y) = P(y)
\right\},
\end{equation*}
whence $u$ is affine in $\mathcal{K}$. We claim that $\mathcal{K}$ is convex.
Let $y_1,y_2 \in \mathcal{K}, \lambda \in (0,1)$
and $z = \lambda y_1 + (1- \lambda) y_2$. Since $u$ is convex,
we have
\begin{equation*}
u(z) \leq \lambda u(y_1) + (1- \lambda) u(y_2)
= \lambda P(y_1) + (1- \lambda) P(y_2) = P(z).
\end{equation*}
On the other hand, since $p \in \partial u(x)$,
the supporting plane $P$ must be below $u$,
and in particular
\begin{equation*}
u(z) \geq P(z).
\end{equation*}
Therefore $u(z) = P(z)$, and thus $z \in \mathcal{K}$, which implies
the convexity of $\mathcal{K}$. Since $P \leq u \leq f$,
we have $\{x_1, \ldots, x_k \} \subset \mathcal{C} \subset \mathcal{K}$
and $\textrm{conv} \;(x_1, \ldots, x_k) \subset \mathcal{K}$.
It is clear that for any $y \in \mathcal{K}$, we have
$u(y) = u(x) + p \cdot (y - x)$ and
\begin{equation*}
P(z) = u(x) + p \cdot (z - x)
= u(y) + p \cdot (z - y) \leq u(z) \quad \forall z \in \overline{\Omega}.
\end{equation*}
By definition of $\partial u(y)$ this implies $p \in \partial u(y)$
for any $y \in \textrm{conv}(x_1,\ldots,x_k)$. In addition, $u$ is
affine in $\textrm{conv} \;(x_1, \ldots, x_k)$.
\end{proof}
\begin{proof}[Proof of \Cref{L:Flatness}]
For any $p \in \partial u(x)$, by \Cref{L:noncontact-set} (structure of non-contact set), there exist $k$ ($2\le k \le d+1$) points $x_i \in \mathcal{C}(f;x,p)$ such that
\begin{equation*}
x = \sum_{i=1}^{k} \lambda_i x_i, \quad
\lambda_i \geq 0, \quad
\sum_{i=1}^{k} \lambda_i = 1,
\end{equation*}
and $p$ belongs to the subdifferential set $\partial u(y)$
for any $y \in \textrm{conv}(x_1,\ldots,x_k)$. If $j$ is such that
$\lambda_j = \max_{1 \leq i \leq k} \lambda_i$, then we have
\begin{equation*}
\lambda_j \geq \frac{1}{k} \sum_{i=1}^{k} \lambda_i
= \frac{1}{k} \geq \frac{1}{d+1}.
\end{equation*}
Now let $x_0
= \sum_{i \neq j} \frac{\lambda_i}{1-\lambda_j} x_i \in
\textrm{conv} \;(x_1, \ldots, x_k)$ to get
\begin{equation*}
x = \sum_{i=1}^{k} \lambda_i x_i =
\lambda_j x_j + \sum_{i \neq j} \lambda_i x_i =
\lambda_j x_j + (1-\lambda_j) x_0.
\end{equation*}
Since both $x_0,x_j \in \textrm{conv} \;(x_1,\ldots,x_k)$, the segment
$\overline{x_0x_j}$ is also in $\textrm{conv} \;(x_1,\ldots,x_k)$.
Since $\textrm{dist}(x,\mathcal{C}(f)) \geq d \delta$ and $x_j \in \mathcal{C}(f)$,
we have $|x_j - x| \geq d \delta$, and
\begin{equation*}
|x_0 - x| = \frac{\lambda_j}{1 - \lambda_j} |x_j - x|
\geq \frac{1/(d+1)}{1-1/(d+1)} \; d \delta = \delta.
\end{equation*}
Therefore, if $v = \frac{x_j - x}{|x_j - x|}$ and
$x_{\pm} = x \pm \delta v$, clearly $x_{\pm}$
lie in the segment $\overline{x_0x_j}$, and thus
also inside $\textrm{conv}(x_1,\ldots,x_k)$.
Finally, \Cref{L:noncontact-set} (structure of non-contact set) shows $p \in \partial u(x_{\pm})$ and
$u(x_{\pm}) = u(x) \pm \delta (p \cdot v)$,
which immediately leads to $\sdd u(x;v) = 0$.
\end{proof}
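The flatness property can be visualized in one dimension, where the convex envelope coincides with the lower convex hull of the graph of $f$. The following sketch (a self-contained illustration with our own naming, not part of the method) computes the envelope of $f(x) = (|x|-1)^2$ on $[-2,2]$ and checks that the second difference of $u$ vanishes at $x=0$, a point at distance $1$ from the contact set $\{|x| \ge 1\}$.

```python
def lower_hull(pts):
    """Lower convex hull (monotone chain) of points sorted by x-coordinate."""
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment hull[-2] -- p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def envelope(xs, fs):
    """Convex envelope of sampled data in 1d, as a piecewise-linear callable."""
    hull = lower_hull(list(zip(xs, fs)))
    def u(x):
        for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
            if x1 <= x <= x2:
                return y1 + (y2 - y1) * (x - x1) / (x2 - x1)
        raise ValueError("x outside the sampled interval")
    return u

n = 400
xs = [-2.0 + 4.0 * i / n for i in range(n + 1)]
u = envelope(xs, [(abs(x) - 1.0)**2 for x in xs])

# the envelope is flat on [-1, 1], so the second difference at x = 0 vanishes
delta = 0.5
sdd0 = (u(delta) + u(-delta) - 2.0 * u(0.0)) / delta**2
```

In agreement with the lemma, $u \equiv 0$ on $[-1,1]$ (so $\sdd u(0;v)=0$ for $\delta \le 1$), while $u$ agrees with $f$ on the contact set, e.g. $u(1.5)=f(1.5)=0.25$.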
\subsection{Consistency for Solutions with H\"older Regularity}\label{S:Consistency-lowreg}
In this section, we take advantage of results in \Cref{S:Flatness} to
derive a consistency error for solutions with realistic H\"older
regularity $u \in C^{k,\alpha}(\overline{\Omega})$ for $k=0,1$
and $0<\alpha \leq 1$, which improves upon the consistency error
estimates in \Cref{S:Consistency}.
The Lagrange interpolant $\mathcal I_h u \in \mathbb{V}_h$ of $u$ satisfies for all interior nodes $x_i \in \mathcal{N}_h^0$
\begin{equation*}
\mathcal I_h u(x_i) = u(x_i) \leq f(x_i), \quad
\sdd \mathcal I_h u(x_i;v) \geq \sdd u(x_i;v) \geq 0 \quad \forall v \in \mathbb{S}
\end{equation*}
because of the convexity of $u$. In view of definition \eqref{E:disc-oper}
of $T_{\varepsilon}$, this in turn implies $T_{\varepsilon}[\mathcal I_h u;f](x_i) \geq 0$ for all $x_i \in \mathcal{N}_h^0$.
The following proposition yields upper bounds for $T_{\varepsilon}[\mathcal I_h u;f](x_i)$ depending on the location of
$x_i$ relative to $\mathcal{C}(f)$ and $\partial\Omega$.
\begin{Proposition}[consistency for $u$ with H\"older regularity]\label{Prop:Consistency-lowreg}
Let $\Omega$ be a bounded strictly convex domain,
$u \in C^{k,\alpha}(\overline{\Omega})$ for $k=0,1$ and $0<\alpha \leq 1$
be the exact solution of the convex envelope problem \eqref{E:pde-CE}.
In addition, let $B_i$ be defined in \eqref{E:Bi} and set
\begin{equation}\label{E:wtBi}
\wt{B_i} := \{x \in \overline{\Omega}: |x - x_i| \leq d\delta \}.
\end{equation}
For $x_i \in \mathcal{N}_h^0$,
the following estimates are then valid:
\begin{enumerate}[(i)]
\item If $\textrm{dist}(x_i,\mathcal{C}(f)) \geq d \delta$, we have
\begin{equation}\label{E:error-noncontact}
\min_{v_{\theta} \in \mathbb S_{\theta}} \sdd \mathcal I_h u(x_i;v_{\theta}) \leq C(d,\sigma)
\frac{(\delta \theta)^{k+\alpha} + h^{k+\alpha}}{\delta^2}
|u|_{C^{k,\alpha}(B_i)}.
\end{equation}
\item If $\textrm{dist}(x_i,\mathcal{C}(f)) < d \delta, \; \textrm{dist}(x_i,\partial\Omega) \geq d\delta$,
and $f \in C^{k,\alpha}(\overline{\Omega})$, then for $k = 0$ we have
\begin{equation}\label{E:error-contact-0}
f(x_i) - u(x_i) \leq C(d,\sigma) \delta^{\alpha}
\left( |u|_{C^{0,\alpha}(\wt{B_i})} + |f|_{C^{0,\alpha}(\wt{B_i})} \right),
\end{equation}
whereas for $k = 1$ we have
\begin{equation}\label{E:error-contact-1}
f(x_i) - u(x_i) \leq C(d,\sigma) \delta^{1+\alpha}
|f|_{C^{1,\alpha}(\wt{B_i})}.
\end{equation}
\item If $0 < \textrm{dist}(x_i,\partial\Omega) < d\delta$, then for all $v \in \mathbb{S}$,
we have
\begin{equation}\label{E:error-boundary}
\sdd \mathcal I_h u(x_i;v) \leq C(d,\sigma) \delta_i^{k+\alpha-2} |u|_{C^{k,\alpha}(B_i)},
\end{equation}
and \eqref{E:error-contact-0} also holds provided $k=0$.
\end{enumerate}
\end{Proposition}
\begin{proof}
Since $\Omega$ is strictly convex, we have $\partial\Omega \subset \mathcal{C}(f)$. This implies
that $x_i \in \mathcal{N}_h^0$ must fall within one of the following three mutually exclusive cases.
\textbf{Case 1: } $\textrm{dist}(x_i,\mathcal{C}(f)) \geq d \delta$. By \Cref{L:Flatness} (flatness in one direction),
for any $p \in \partial u(x_i)$, there exists
$v \in \mathbb{S}$ such that
\begin{equation*}
x_{\pm} = x_i \pm \delta v,
\quad
u(x_{\pm}) = u(x_i) \pm \delta (p \cdot v),
\quad
\sdd u(x_i;v) = 0.
\end{equation*}
By the definition of $\mathbb S_{\theta}$, there exists $v_{\theta} \in \mathbb S_{\theta}$ such that $|v - v_{\theta}| \leq \theta$.
We claim that
\begin{equation*}\sdd \mathcal I_h u(x_i;v_{\theta}) \leq C(d,\sigma)
\frac{(\delta \theta)^{k+\alpha} + h^{k+\alpha}}{\delta^2}
|u|_{C^{k,\alpha}(B_i)},
\end{equation*}
which implies \eqref{E:error-noncontact}. Using
$\; \textrm{dist}(x_i,\mathcal{C}(f)) \geq d \delta$, we have $\delta_i = \delta$
in definition \eqref{E:2Sc2Dif}.
Let $x^{\theta}_{\pm} = x_i \pm \delta v_{\theta}$, then
$x^{\theta}_{\pm} \in B_i$ and $|x^{\theta}_{\pm} - x_{\pm} | \leq \delta \theta$.
Since the interpolation error satisfies
\begin{equation}\label{E:interp-error}
|u - \mathcal I_h u|_{L^{\infty}(B_i)} \leq C(d,\sigma) h^{k+\alpha}|u|_{C^{k,\alpha}(B_i)},
\end{equation}
we infer that
\begin{equation}\label{E:sdd-u-uh}
\left| \sdd \mathcal I_h u(x_i;v_{\theta}) - \sdd u(x_i;v_{\theta}) \right| \leq
C(d,\sigma) \frac{h^{k+\alpha}}{\delta^2} |u|_{C^{k,\alpha}(B_i)},
\end{equation}
whence it remains to prove
\begin{equation*}
\sdd u(x_i;v_{\theta}) \leq C(d,\sigma) \frac{(\delta \theta)^{k+\alpha}}{\delta^2}
|u|_{C^{k,\alpha}(B_i)}.
\end{equation*}
For $k = 0$,
by definition of $|u|_{C^{k,\alpha}(B_i)}$ seminorm, we see that
\begin{align*}
|u(x_{\pm}) - u(x^{\theta}_{\pm})| \leq |x^{\theta}_{\pm} - x_{\pm} |^{\alpha} \; |u|_{C^{k,\alpha}(B_i)}
\leq (\delta \theta)^{\alpha} \; |u|_{C^{k,\alpha}(B_i)}.
\end{align*}
Using this inequality, along with $\sdd u(x_i;v) = 0$, yields the desired bound
\begin{equation*}
\begin{aligned}
\sdd u(x_i;v_{\theta}) & \leq \sdd u(x_i;v) +
\frac{|u(x_+) - u(x^{\theta}_+)| + |u(x_-) - u(x^{\theta}_-)|}{\delta^2} \\
& \leq \frac{2(\delta \theta)^{k+\alpha}}{\delta^2}
|u|_{C^{k,\alpha}(B_i)}.
\end{aligned}
\end{equation*}
For $k=1$, we know $p = \nabla u(x_i) = \nabla u(x_{\pm})$.
If $w = v_{\theta} - v$, we then have
\begin{equation*}
\begin{aligned}
u(x^{\theta}_{\pm}) &= u(x_{\pm}) \pm \int_0^1 \delta \;
\nabla u\left(x_{\pm} \pm t\delta w \right) \cdot w \; dt \\
&= u(x_{\pm}) \pm \delta \nabla u(x_{\pm}) \cdot w
\pm \int_0^1 \delta \;
\left[ \nabla u\left(x_{\pm} \pm t\delta w \right) - \nabla u(x_{\pm}) \right]
\cdot w \; dt,
\end{aligned}
\end{equation*}
whence
\begin{equation}\label{E:C1a-error}
\begin{aligned}
u(x^{\theta}_{\pm}) &\leq u(x_{\pm}) \pm \delta \nabla u(x_{\pm}) \cdot w
+ \int_0^1 \delta \;
|t \delta w|^{\alpha}|u|_{C^{k,\alpha}(B_i)} \; |w| \; dt \\
&\leq u(x_{\pm}) \pm \delta p \cdot w
+ C (\delta \theta)^{1+\alpha} |u|_{C^{k,\alpha}(B_i)} .
\end{aligned}
\end{equation}
Therefore plugging the above inequalities into the expression
of $\sdd u(x_i;v_{\theta})$ we obtain
\begin{equation*}
\begin{aligned}
\sdd u(x_i;v_{\theta}) &\leq \sdd u(x_i;v) +
\frac{1}{\delta^2} \left( \delta p \cdot w - \delta p \cdot w
+ 2 C (\delta \theta)^{1+\alpha} |u|_{C^{k,\alpha}(B_i)} \right) \\
&\leq C \frac{(\delta \theta)^{1+\alpha}}{\delta^2} |u|_{C^{k,\alpha}(B_i)},
\end{aligned}
\end{equation*}
which finishes the proof of our claim.
\textbf{Case 2: } $\textrm{dist}(x_i,\mathcal{C}(f)) < d \delta$ and $\; \textrm{dist}(x_i,\partial\Omega) \geq d\delta$.
By the assumptions, there exists $y \in \mathcal{C}(f) \setminus \partial\Omega$ such that
$|x_i - y| < d \delta$. We claim that if $k = 0$,
\begin{equation*}
f(x_i) - \mathcal I_h u(x_i) \leq C(d,\sigma) \delta^{\alpha}
\left( |u|_{C^{0,\alpha}(\wt{B_i})} + |f|_{C^{0,\alpha}(\wt{B_i})} \right),
\end{equation*}
which is \eqref{E:error-contact-0}. This claim is a
consequence of $\mathcal I_h u(x_i) = u(x_i), u(y) = f(y)$ and
\begin{equation*}
\begin{aligned}
&\left| u(x_i) - u(y) \right| \leq |x_i - y|^{\alpha} |u|_{C^{0,\alpha}(\wt{B_i})}
\leq d^{\alpha} \delta^{\alpha} |u|_{C^{0,\alpha}(\wt{B_i})}, \\
&\left| f(x_i) - f(y) \right| \leq |x_i - y|^{\alpha} |f|_{C^{0,\alpha}(\wt{B_i})}
\leq d^{\alpha} \delta^{\alpha} |f|_{C^{0,\alpha}(\wt{B_i})}.
\end{aligned}
\end{equation*}
If $k = 1$, we claim that
\begin{equation*}
f(x_i) - \mathcal I_h u(x_i) \leq C(d,\sigma) \delta^{1+\alpha} |f|_{C^{1,\alpha}(\wt{B_i})},
\end{equation*}
which is \eqref{E:error-contact-1}. To prove this claim, we let $p = \nabla u(y)$,
then consider the supporting hyperplane
$P(x) := u(y) + (x - y) \cdot p$. Since $f$
is differentiable, $f(y) = P(y)$ and
$f(x) \geq u(x) \geq P(x)$, we know $p = \nabla f(y)$.
Proceeding similarly to \eqref{E:C1a-error}, we end up with
\begin{equation*}
\left| f(x_i) - P(x_i) \right| =
\left| f(x_i) - f(y) - (x_i - y) \cdot p \right|
\leq C(d,\sigma) \delta^{1+\alpha}
|f|_{C^{1,\alpha}(\wt{B_i})}.
\end{equation*}
Therefore our claim holds because
\begin{equation*}
f(x_i) - \mathcal I_h u(x_i) = f(x_i) - u(x_i)
\leq f(x_i) - P(x_i)
\leq C(d,\sigma) \delta^{1+\alpha}
|f|_{C^{1,\alpha}(\wt{B_i})}.
\end{equation*}
\textbf{Case 3:} $0 < \textrm{dist}(x_i,\partial\Omega) < d\delta$.
We point out that, unlike the first two cases, the upper bound given in \eqref{E:error-boundary}
does not converge to zero as $\delta_i \to 0$. However, this result is still useful in
our proof of error estimates.
We claim that for all $v \in \mathbb{S}$,
\begin{equation*}
\sdd \mathcal I_h u(x_i;v) \leq C(d,\sigma) \delta_i^{k+\alpha-2} |u|_{C^{k,\alpha}(B_i)},
\end{equation*}
which is \eqref{E:error-boundary}. Using \eqref{E:interp-error} and the fact
$ \delta_i/h \geq C(d,\sigma)$ due to the shape-regularity assumption on the mesh $\mathcal{T}_h$, we have
\begin{equation*}
\begin{aligned}
\left| \sdd u(x_i;v) - \sdd \mathcal I_h u(x_i;v) \right|
&\leq C(d,\sigma) \frac{h^{k+\alpha}}{\delta_i^2} |u|_{C^{k,\alpha}(B_i)} \\
&\leq C(d,\sigma) \delta_i^{k+\alpha - 2} |u|_{C^{k,\alpha}(B_i)}.
\end{aligned}
\end{equation*}
Consequently, it suffices to prove
\begin{equation*}
\sdd u(x_i;v) \leq C(d,\sigma) \delta_i^{k+\alpha-2} |u|_{C^{k,\alpha}(B_i)}.
\end{equation*}
If $k = 0$, this is obtained from
\begin{equation*}
\left| u(x_i \pm \delta_i v) - u(x_i) \right|
\leq \delta_i^{\alpha} |u|_{C^{0,\alpha}(B_i)}.
\end{equation*}
If $k = 1$, we let $p = \nabla u(x_i)$ and $P(x) = u(x_i) + (x-x_i) \cdot p$ and obtain, similarly to \eqref{E:C1a-error},
\begin{equation*}
\left| (u-P)(x_i \pm \delta_i v) \right|
\leq C \delta_i^{1+\alpha} |u|_{C^{1,\alpha}(B_i)}.
\end{equation*}
Therefore since $\sdd P(x_i;v) = 0$, our claim is a consequence of
\begin{equation*}
\sdd u(x_i;v) \leq
\sdd P(x_i;v) + \frac{C \delta_i^{1+\alpha} |u|_{C^{1,\alpha}(B_i)}}{\delta_i^2}
= C \delta_i^{\alpha-1} |u|_{C^{1,\alpha}(B_i)}.
\end{equation*}
This concludes the proof.
\end{proof}
\begin{comment}
\begin{remark}
When $x_i \in \mathcal{N}_h^0$ falls into the third case in our proof,
we could also give an upper bound of $T_{\varepsilon}[\mathcal I_h u;f](x_i)$ through estimating
$f(x_i) - \mathcal I_h u(x_i) = f(x_i) - u(x_i)$.
To illustrate this idea, we need to further assume $f \in C^{k,\alpha}(\overline{\Omega})$.
Since $\textrm{dist}(x_i,\partial\Omega) < d\delta$, there exists $y \in \partial\Omega$ such that
$|x_i - y| < d\delta$, and thus we have $u(y) = f(y)$. If $k + \alpha \geq 1$,
then by our assumptions both $u$ and $f$ are Lipschitz, so
\begin{equation*}
\begin{aligned}
&\left| u(x_i) - u(y) \right| \leq |x_i - y| \; |u|_{C^{0,1}(B_i)} < d\delta |u|_{C^{0,1}(B_i)}, \\
&\left| f(x_i) - f(y) \right| \leq |x_i - y| \; |f|_{C^{0,1}(B_i)} < d\delta |f|_{C^{0,1}(B_i)},
\end{aligned}
\end{equation*}
which combining with $u(y) = f(y)$ implies that
\begin{equation*}
\left| \mathcal I_h u(x_i) - f(x_i) \right| = \left| u(x_i) - f(x_i) \right| \leq
d\delta \left(|u|_{C^{0,1}(B_i)} + |f|_{C^{0,1}(B_i)} \right).
\end{equation*}
If $k+ \alpha < 1$, similarly we have
\begin{equation*}
\left| \mathcal I_h u(x_i) - f(x_i) \right| = \left| u(x_i) - f(x_i) \right| \leq
(d\delta)^{\alpha} \left(|u|_{C^{0,\alpha}(B_i)} + |f|_{C^{0,\alpha}(B_i)} \right).
\end{equation*}
One drawback of this estimate is when $k = 1$, the
estimate is $O(\delta)$ instead of $O(\delta^{k+\alpha})$.
This is due to the fact that when $y \in \partial\Omega$, even
though $u(y) = f(y)$, we do not have $\nabla u(y) = \nabla f(y)$,
which must hold if $y$ is in the interior of $\Omega$.
\end{remark}
\end{comment}
\subsection{Discrete Barrier Functions}\label{S:DBarrier}
In \Cref{Prop:Consistency-lowreg} (consistency for $u$ with H\"older regularity) we estimate the consistency error
for the convex envelope $u \in C^{k,\alpha}(\overline{\Omega})$ for $k=0,1$
and $0<\alpha \leq 1$. In order to take advantage of this result
for error analysis, we now introduce two discrete barrier
functions. The first one is used to handle those $x_i \in \mathcal{N}_h^0$
far from the contact set $\mathcal{C}(f)$, which satisfy the condition
in \Cref{Prop:Consistency-lowreg}(i). The second discrete barrier
function is used to handle those $x_i \in \mathcal{N}_h^0$ close to the boundary of $\Omega$,
which satisfy the condition in \Cref{Prop:Consistency-lowreg}(iii).
First we collect properties of the discrete barrier function $q_h$ introduced
in the proof of \Cref{L:DCP} (discrete comparison principle);
see also \cite[Lemma 4.1]{LiNo}.
\begin{Lemma}[discrete barrier $q_h$]\label{L:Barrier_qh}
Let $x_0 \in \Omega$ and $R = \textrm{diam} (\Omega)$.
The interpolant $q_h = \mathcal I_h q \in \mathbb{V}_h$ of the function $q(x) = \frac{1}{2}|x-x_0|^2 - \frac{1}{2}R^2$ satisfies
\begin{subequations}
\begin{align}
\label{E:Barrier_qh-1}
\sdd q_h(x_i;v_j) \geq 1 \quad &\forall \; x_i \in \mathcal{N}_h^0, \; v_j \in \mathbb{S}, \\
\label{E:Barrier_qh-2}
-C \leq \; q_h(x) \; \leq 0 \quad &\forall \; x \in \Omega_h,
\end{align}
\end{subequations}
where the constant $C$ depends only on $\Omega$.
\end{Lemma}
Now we construct our second discrete barrier function $p_h(x)$.
For $k=0,1$ and $0<\alpha \leq 1$, $p_h$ is designed to satisfy
\begin{equation*}
\max_{v_{\theta} \in \mathbb S_{\theta}} \sdd p_h(x_i;v_{\theta}) \geq \delta_i^{k+\alpha-2},
\quad \forall \; x_i \in \mathcal{N}_h^0 \setminus \Omega_{h,d\delta}.
\end{equation*}
We consider a convex function $\eta: [0,\infty) \rightarrow (-\infty,0]$ satisfying
\begin{equation}\label{E:eta-prop-1}
\eta''(t) = 2^{4-k-\alpha} \ t^{k+\alpha-2} \quad t \in (0,2d\delta);
\quad \eta(0) = 0; \quad
\eta'(t) = 0 \quad t \geq 2d\delta.
\end{equation}
Simple calculations reveal that for $k + \alpha \neq 1$,
\begin{equation*}
\eta(t) = \left\{
\begin{array}{ll}
\frac{2^{4-k-\alpha}}{k+\alpha-1}
\left( \frac{1}{k+\alpha}t^{k+\alpha} -
(2d\delta)^{k+\alpha-1}t \right) \quad & \quad 0 \leq t \leq 2d\delta\\
-\frac{16}{k+\alpha} (d\delta)^{k+\alpha}
\quad & \quad t > 2d\delta,
\end{array}
\right.
\end{equation*}
and for $k + \alpha = 1$,
\begin{equation*}
\eta(t) = \left\{
\begin{array}{ll}
8t \left( \ln{t} - \ln(2d\delta) - 1 \right)
\quad & \quad 0 \leq t \leq 2d\delta \\
-16d\delta \quad & \quad t > 2d\delta.
\end{array}
\right.
\end{equation*}
It can be seen immediately that $\eta$ is monotonically non-increasing, and satisfies
\begin{equation}\label{E:eta-prop-2}
-C \delta^{k+\alpha} \leq \eta(t) \leq 0 \qquad \forall t \geq 0.
\end{equation}
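The closed-form expressions above can be checked against \eqref{E:eta-prop-1} and \eqref{E:eta-prop-2}. The following sketch (a numerical sanity check with arbitrarily chosen parameter values, not part of the analysis) verifies, in the case $k+\alpha \neq 1$, that $\eta(0)=0$, that $\eta$ is nonincreasing with $-\frac{16}{k+\alpha}(d\delta)^{k+\alpha} \leq \eta \leq 0$, and that a second difference quotient of $\eta$ reproduces $\eta''(t) = 2^{4-k-\alpha}\, t^{k+\alpha-2}$ on $(0,2d\delta)$.

```python
def eta(t, k, alpha, d, delta):
    """Closed-form eta from the text, case k + alpha != 1."""
    s = k + alpha
    r = 2.0 * d * delta
    if t <= r:
        return 2.0**(4.0 - s) / (s - 1.0) * (t**s / s - r**(s - 1.0) * t)
    return -16.0 / s * (d * delta)**s   # constant value for t > 2*d*delta

k, alpha, d, delta = 0, 0.5, 2, 0.1

ts = [0.01 * j for j in range(100)]      # covers [0, 2*d*delta] and beyond
vals = [eta(t, k, alpha, d, delta) for t in ts]

# second difference quotient at an interior point of (0, 2*d*delta)
t, h = 0.2, 1e-4
num = (eta(t + h, k, alpha, d, delta) - 2.0 * eta(t, k, alpha, d, delta)
       + eta(t - h, k, alpha, d, delta)) / h**2
exact = 2.0**(4 - k - alpha) * t**(k + alpha - 2)
```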
Then we define the barrier function $p_h$ as
\begin{equation}\label{E:def_p}
p(x) := \eta(\textrm{dist}(x, \partial \Omega_h)) \quad x \in \Omega_h,
\end{equation}
and denote by $p_h = \mathcal I_h p \in \mathbb{V}_h$ its Lagrange interpolant.
The following lemma is similar to \cite[Section 6.2]{NoZh1} and
\cite[Lemma 4.2]{LiNo}.
\begin{Lemma}[discrete barrier $p_h$]\label{L:Barrier_p}
If $\Omega$ is strictly convex
and $\theta \leq 1$, then
the discrete barrier function $p_h$
defined in \eqref{E:def_p} satisfies
\begin{subequations}
\begin{gather}\label{E:Barrier_ph-1}
\max_{v_{\theta} \in \mathbb S_{\theta}} \sdd p_h(x_i;v_{\theta}) \geq \; C\delta_i^{k+\alpha-2}
\quad \forall \; x_i \in \mathcal{N}_h^0 \setminus \Omega_{h,d\delta}, \\
\label{E:Barrier_ph-2}
\sdd p_h(x_i;v) \geq \; 0 \quad \forall \;
x_i \in \mathcal{N}_h^0, \; v \in \mathbb{S} , \\
\label{E:Barrier_ph-3}
-C\delta^{k+\alpha} \leq \; p_h(x) \; \leq 0 \quad \forall \;
x \in \Omega_h.
\end{gather}
\end{subequations}
Moreover, for $x_i \in \mathcal{N}_h^0 \setminus \Omega_{h,d\delta}$, we can choose $v_{\theta} \in \mathbb S_{\theta}$ depending only on $x_i$ and $\mathbb S_{\theta}$
such that $\sdd p_h(x_i;v_{\theta}) \geq \; C\delta_i^{k+\alpha-2}$.
\end{Lemma}
\begin{proof}
We proceed as in \cite[Lemma 4.2]{LiNo}.
We first study the function $p$ defined on the convex domain $\Omega_h \subset \Omega$; the properties of $p_h$ will be simple consequences of those of $p$. Define $d(x) := \textrm{dist}(x,\partial\Omega_h)$ for any $x \in \Omega_h$.
Given any $x_0 \in \Omega_h$, let $y \in \partial \Omega_h$ be a
(closest) point so that
\begin{equation*}
|y - x_0| = d(x_0).
\end{equation*}
Since $\Omega_h$ is convex,
there exists a supporting hyperplane $P$ of $\Omega_h$ touching $\Omega_h$ at $y$ and
perpendicular to $\nu := \frac{x_0 - y}{|x_0 - y|}$.
Consider any two points $x_+, x_- \in\Omega_h$
so that $x_0 = (x_+ + x_-)/2$. Then there exists a
vector $v$ such that $x_{\pm} = x_0 \pm v$ and,
without loss of generality, $\langle v, \nu \rangle \geq 0$; hence
\begin{equation}\label{E:proof-ph}
d(x_{\pm}) \leq \textrm{dist}(x_{\pm},P) = d(x_0) \pm \langle v, \nu \rangle.
\end{equation}
We now show that $p(x)$ is convex.
We exploit that $\eta$ is a nonincreasing convex function,
and $d(x_0) - \langle v, \nu \rangle \geq 0$, to write
\begin{equation*}
\begin{aligned}
p(x_+) + p(x_-) \geq \;
\eta \left( d(x_0) + \langle v, \nu \rangle \right)
+ \eta \left( d(x_0) - \langle v, \nu \rangle \right)
\geq \; 2 \eta \left(d(x_0) \right)
= \; 2 p(x_0).
\end{aligned}
\end{equation*}
Since this holds for any $x_{\pm}, x_0$ satisfying $x_0 = (x_+ + x_-)/2$,
we deduce that $p(x)$ is convex in $\Omega_h$. This immediately implies
\eqref{E:Barrier_ph-2}:
\[
\sdd p_h(x_i;v) \geq \; \sdd p(x_i;v) \ge \; 0 \quad \forall \;
x_i \in \mathcal{N}_h^0, \; v \in \mathbb{S}.
\]
We next prove \eqref{E:Barrier_ph-1}.
If $x_i \in \mathcal{N}_h^0 \setminus \Omega_{h,d\delta}$, then
$\delta_i \leq d(x_i) \leq d\delta_i \le d\delta$ and
$d(x_i) \pm \delta_i \in [0,2d(x_i)] \subset [0,2d\delta]$, where $\delta_i \leq \delta$ is defined in \eqref{E:deltai}.
It follows from the definition \eqref{E:def_p} of $p$, inequality \eqref{E:proof-ph} and the monotonicity of $\eta$ that
\begin{equation*}
\begin{aligned}
\sdd p_h(x_i;v) \geq & \sdd p(x_i; v) =
\frac{p(x_i+ \delta_i v)+p(x_i- \delta_i v)-2p(x_i)}{\delta_i^2} \\
\geq & \frac{\eta \left(d(x_i) + \delta_i \langle v,\nu \rangle \right)
+ \eta \left(d(x_i) - \delta_i \langle v,\nu \rangle \right)
-2\eta \left(d(x_i)\right)}{\delta_i^2},
\end{aligned}
\end{equation*}
for all $v \in \mathbb{S}$. Using the fact that for $t \in [0,2d(x_i)]$,
\begin{equation*}
\eta''(t) \geq 2^{4-k-\alpha} \left(2d(x_i)\right)^{k+\alpha-2}
= 4d(x_i)^{k+\alpha-2} \geq 4 (d\delta_i)^{k+\alpha-2},
\end{equation*}
Taylor expansion gives
\begin{equation*}
\begin{aligned}
\sdd p_h(x_i;v)
\ge & \frac{\eta''(\xi) \left(\delta_i \langle v,\nu \rangle \right)^2}{\delta_i^2}
\ge \frac{4 (d\delta_i)^{k+\alpha-2} \; \delta_i^2 \langle v,\nu \rangle^2}{\delta_i^2}
= 4 \langle v,\nu \rangle^2 (d\delta_i)^{k+\alpha-2},
\end{aligned}
\end{equation*}
where $\xi \in (0, 2d(x_i))$.
By definition of $\mathbb S_{\theta}$, there exists $v_{\theta} \in \mathbb S_{\theta}$ such that
$|v_{\theta} - \nu| \leq \theta \leq 1$, whence
\begin{equation*}
\langle v_{\theta}, \nu \rangle = \frac{|v_{\theta}|^2+|\nu|^2 - |v_{\theta} - \nu|^2}{2}
\geq \frac{1}{2},
\end{equation*}
which yields $\sdd p_h(x_i;v_{\theta}) \geq 4 \langle v_{\theta},\nu \rangle^2 (d\delta_i)^{k+\alpha-2} \geq C\delta_i^{k+\alpha-2} $.
This proves \eqref{E:Barrier_ph-1}, whereas \eqref{E:Barrier_ph-3} is a direct consequence of \eqref{E:eta-prop-2}.
\end{proof}
\begin{remark}[boundary resolution]
Notice that we only assume $\theta \leq 1$ here. Our two-scale method can actually be generalized in such a way that each $x_i \in \mathcal{N}_h^0$ has a different choice of $\mathbb S_{\theta}(x_i)$.
In fact, in the derivation
of the error estimate below, for those $x_i$ with
$\textrm{dist}(x_i,\partial\Omega) < d\delta$ we only require $\mathbb S_{\theta}(x_i)$ to satisfy
the discretization requirements with $\theta \leq 1$. In practice this means that nodes
near the boundary $\partial\Omega$ need fewer directions than nodes in the interior region.
\end{remark}
\subsection{Error Estimates for Solutions with H\"older Regularity}\label{S:RatesHolder}
In this subsection we deal with solutions
$u$ of \eqref{E:pde-CE} of class $C^{k,\alpha}(\overline{\Omega})$ for $k=0,1$ and $0<\alpha \leq 1$,
and derive convergence rates in the $L^{\infty}$ norm.
Our main analytic tool is \Cref{L:DCP} (discrete comparison principle),
along with the results of Sections \ref{S:Consistency-lowreg}
and \ref{S:DBarrier}.
\begin{Theorem}[error estimate]\label{T:error-estimate}
Let $\Omega$ be strictly convex.
Let $u$ be the viscosity solution of \eqref{E:pde-CE} and $u_{\varepsilon}$ be the discrete
solution of \eqref{E:2ScOp}. If $u \in C^{k,\alpha}(\overline{\Omega})$ for $k=0,1$
and $0<\alpha \leq 1$, and $\theta \leq 1$,
there exists a constant $C = C(\Omega,d,\sigma)$ such that
{\small
\begin{equation}\label{E:error-estimate}
\Vert \mathcal I_h u - u_{\varepsilon} \Vert_{L^{\infty}(\Omega_h)}
\leq C \left[
|u|_{C^{k,\alpha}(\overline{\Omega})} \frac{(\delta \theta)^{k+\alpha} + h^{k+\alpha}+ \delta^{2+k+\alpha}}{\delta^2}
+ |f|_{C^{k,\alpha}(\overline{\Omega})} \delta^{k+\alpha} \right].
\end{equation}}
\end{Theorem}
\begin{proof}
We find lower and upper bounds of
$u_{\varepsilon}$ in terms of $\mathcal I_h u$. For the lower bound,
we recall that $u_h^- = \mathcal I_h u$ is a discrete subsolution
of \eqref{E:2ScOp} and satisfies $u_h^- \leq u_{\varepsilon}$ from
\eqref{E:disc-sol-bounds} in the proof of \Cref{L:Exist-Uniq-Stab}
(existence, uniqueness and stability), thereby yielding a lower bound of $u_{\varepsilon}$.
For the upper bound, we construct a discrete supersolution $u_h^+ \in \mathbb{V}_h$ such that
\begin{equation*}
\left\{
\begin{aligned}
T_{\varepsilon}[u_h^+;f](x_i) &\leq 0 \quad \forall x_i \in \mathcal{N}_h^0 \\
u_h^+(x_i) &\geq f(x_i) \quad \forall x_i \in \mathcal{N}_h^b,
\end{aligned}
\right.
\end{equation*}
upon suitably modifying $\mathcal I_h u$.
We let $u_h^+ \in \mathbb{V}_h$ be of the form
\begin{equation*}
u_h^+ = \mathcal I_h u - K_1 q_h + K_2 - K_3 p_h,
\end{equation*}
where $q_h, p_h \leq 0$ in $\Omega_h$ according to \eqref{E:Barrier_qh-2}
and \eqref{E:Barrier_ph-3}, and the positive constants
$K_1, K_2, K_3$ are to be chosen properly. Since
\begin{equation*}
u_h^+(x_i) \geq \mathcal I_h u(x_i) = f(x_i) \quad\forall \; x_i \in \mathcal{N}_h^b,
\end{equation*}
to guarantee that $u_h^+$ is a discrete supersolution,
it remains to show $T_{\varepsilon}[u_h^+;f](x_i) \leq 0$
for all $x_i \in \mathcal{N}_h^0$. We divide the subsequent discussion into three cases
based on the position of $x_i$ relative to $\mathcal{C}(f)$ and $\partial\Omega$,
exactly as in \Cref{Prop:Consistency-lowreg}.
If $\textrm{dist}(x_i,\mathcal{C}(f)) \geq d \delta$, using the estimate
\eqref{E:error-noncontact} of \Cref{Prop:Consistency-lowreg} (consistency for $u$ with H\"older regularity) and the properties \eqref{E:Barrier_qh-1} of $q_h$ and \eqref{E:Barrier_ph-2} of $p_h$, we have
\begin{equation*}
\begin{aligned}
\min_{v \in \mathbb S_{\theta}} \sdd u_h^+(x_i;v)
&\leq \min_{v \in \mathbb S_{\theta}} \sdd [\mathcal I_h u - K_1 q_h](x_i;v)
\leq \min_{v \in \mathbb S_{\theta}} \sdd \mathcal I_h u(x_i;v) - K_1 \\
&\leq C(d,\sigma) \frac{(\delta \theta)^{k+\alpha} + h^{k+\alpha}}{\delta^2}
|u|_{C^{k,\alpha}(B_i)} - K_1 \leq 0,
\end{aligned}
\end{equation*}
provided that $K_1 = C(d,\sigma) \frac{(\delta \theta)^{k+\alpha}
+ h^{k+\alpha}}{\delta^2} |u|_{C^{k,\alpha}(\overline{\Omega})}$.
Consequently,
\begin{equation*}
T_{\varepsilon}[u_h^+;f](x_i) \leq \min_{v \in \mathbb S_{\theta}} \sdd u_h^+(x_i;v) \leq 0.
\end{equation*}
If $\; \textrm{dist}(x_i,\mathcal{C}(f)) < d \delta, \; \textrm{dist}(x_i,\partial\Omega) \geq d\delta$,
from \eqref{E:error-contact-0} and \eqref{E:error-contact-1}
in \Cref{Prop:Consistency-lowreg}, we have
\begin{equation*}
\begin{aligned}
f(x_i) - u_h^+(x_i) &\leq f(x_i) - \mathcal I_h u(x_i) - K_2 \\
&\leq C(d,\sigma) \delta^{k+\alpha}
\left( |u|_{C^{k,\alpha}(\wt{B_i})} + |f|_{C^{k,\alpha}(\wt{B_i})} \right) - K_2 \leq 0,
\end{aligned}
\end{equation*}
with $K_2 = C(d,\sigma) \delta^{k+\alpha} \left( |u|_{C^{k,\alpha}(\overline{\Omega})}
+ |f|_{C^{k,\alpha}(\overline{\Omega})} \right)$. This implies
$T_{\varepsilon}[u_h^+;f](x_i) \leq f(x_i) - u_h^+(x_i) \leq 0$.
If $\; \textrm{dist}(x_i,\partial\Omega) < d\delta$, we have $x_i \in \mathcal{N}_h^0 \setminus \Omega_{h,d\delta}$. Choosing $K_3 = C(d,\sigma)|u|_{C^{k,\alpha}(\overline{\Omega})}$ and invoking \eqref{E:error-boundary} in \Cref{Prop:Consistency-lowreg} and
the property \eqref{E:Barrier_ph-1} of $p_h$, we have
\begin{equation*}
\begin{aligned}
\small
\min_{v \in \mathbb S_{\theta}} \; \sdd u_h^+(x_i;v) &\leq
\min_{v \in \mathbb S_{\theta}} \; \sdd [\mathcal I_h u - K_3 p_h](x_i;v) \\
&\leq C(d,\sigma) \delta_i^{k+\alpha-2} |u|_{C^{k,\alpha}(B_i)}
- K_3 \max_{v \in \mathbb S_{\theta}} \sdd p_h(x_i;v) \\
& \leq C(d,\sigma) \delta_i^{k+\alpha-2} |u|_{C^{k,\alpha}(B_i)}
- C(d,\sigma)|u|_{C^{k,\alpha}(\overline{\Omega})} \; \delta_i^{k+\alpha-2} \le 0.
\end{aligned}
\end{equation*}
Therefore $T_{\varepsilon}[u_h^+;f](x_i) \leq \min_{v \in \mathbb S_{\theta}} \sdd u_h^+(x_i;v) \leq 0$.
The three cases show that $u_h^+$ is a discrete supersolution, and thus
by \Cref{L:DCP} (discrete comparison principle),
\begin{equation*}
\begin{aligned}
u_{\varepsilon} \leq \; & \mathcal I_h u - K_1 q_h + K_2 - K_3 p_h \\
= \; & \mathcal I_h u + C(d,\sigma, \Omega) \frac{(\delta \theta)^{k+\alpha}
+ h^{k+\alpha}}{\delta^2} |u|_{C^{k,\alpha}(\overline{\Omega})} \\
&+ C(d,\sigma) \delta^{k+\alpha} \left( |u|_{C^{k,\alpha}(\overline{\Omega})} + |f|_{C^{k,\alpha}(\overline{\Omega})} \right)
+ C(d,\sigma)|u|_{C^{k,\alpha}(\overline{\Omega})} \delta^{k+\alpha}.
\end{aligned}
\end{equation*}
This, in conjunction with the lower bound of $u_{\varepsilon}$, completes
the proof.
\end{proof}
\begin{Corollary}[convergence rate]\label{C:convergence-rate}
Let $\Omega$ be strictly convex.
Let $u$ be the viscosity solution of \eqref{E:pde-CE} and $u_{\varepsilon}$ be the discrete
solution of \eqref{E:2ScOp}. If $u \in C^{k,\alpha}(\overline{\Omega})$ for $k=0,1$
and $0<\alpha \leq 1$, and $\theta \leq 1$, we have
\begin{equation}\label{E:convergence-rate}
\|u-u_{\varepsilon}\|_{L^\infty(\Omega_h)} \le C(\Omega,d,\sigma)
\Big( |u|_{C^{k,\alpha}(\overline{\Omega})} + |f|_{C^{k,\alpha}(\overline{\Omega})} \Big) \; h^{\frac{(k+\alpha)^2}{2+k+\alpha}},
\end{equation}
provided that, with
$R_\alpha(u) :=
|u|_{C^{k,\alpha}(\overline{\Omega})}^{\frac{1}{2+k+\alpha}}
\Big(|u|_{C^{k,\alpha}(\overline{\Omega})} + |f|_{C^{k,\alpha}(\overline{\Omega})} \Big)^{-\frac{1}{2+k+\alpha}}$,
the parameters are chosen as
\begin{equation*}
\delta = R_\alpha(u) h^{\frac{k+\alpha}{2+k+\alpha}},
\quad
\theta = R_\alpha(u)^{-1} h^{\frac{2}{2+k+\alpha}}.
\end{equation*}
\end{Corollary}
\begin{proof}
Since the pointwise interpolation error satisfies \cite{BrennerScott}
\begin{equation*}
\|u - \mathcal I_h u\|_{L^\infty(\Omega_h)} \le C h^{k+\alpha} |u|_{C^{k,\alpha}(\overline{\Omega})}
\le C \frac{h^{k+\alpha}}{\delta^2} |u|_{C^{k,\alpha}(\overline{\Omega})},
\end{equation*}
and $h \le \delta$, we end up with the error estimate
\begin{equation*}
\|u - u_{\varepsilon}\|_{L^\infty(\Omega_h)} \le C \left[ |u|_{C^{k,\alpha}(\overline{\Omega})}
\frac{h^{k+\alpha} + (\delta \theta)^{k+\alpha}}{\delta^2}
+ \Big(|u|_{C^{k,\alpha}(\overline{\Omega})} + |f|_{C^{k,\alpha}(\overline{\Omega})}\Big) \delta^{k+\alpha} \right].
\end{equation*}
In order to balance all contributions, we
first choose $\theta=\frac{h}{\delta}$ and next equate the two terms
on the right-hand side to obtain the asserted relations between
$\delta,\theta$ and $h$. This completes the proof.
\end{proof}
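The exponent bookkeeping behind this balancing can be reproduced in exact rational arithmetic. The following Python sketch is our own illustration (the function name is ours): it encodes the choice $\theta = h/\delta$ and the equation $h^{s}/\delta^{2} = \delta^{s}$ with $s = k+\alpha$.

```python
from fractions import Fraction

def balanced_exponents(k, alpha):
    """Balance h^s/delta^2 against delta^s for s = k+alpha, with theta = h/delta.

    Returns exponents (e_delta, e_theta, e_rate) such that
    delta ~ h^{e_delta}, theta ~ h^{e_theta}, and the L^inf error ~ h^{e_rate}.
    """
    s = Fraction(k) + Fraction(alpha)
    e_delta = s / (2 + s)      # from h^s / delta^2 = delta^s
    e_theta = 1 - e_delta      # theta = h / delta
    e_rate = s * e_delta       # error ~ delta^s = h^{s^2/(2+s)}
    return e_delta, e_theta, e_rate

# full regularity C^{1,1} (k = alpha = 1): delta ~ h^{1/2}, theta ~ h^{1/2}, rate h
print(balanced_exponents(1, 1))
# Lipschitz regularity C^{0,1} (k = 0, alpha = 1): delta ~ h^{1/3}, theta ~ h^{2/3}, rate h^{1/3}
print(balanced_exponents(0, 1))
```

Both cases reproduce the exponents discussed in \Cref{R:two-scenarios}.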
\begin{remark}[two important scenarios]\label{R:two-scenarios}
We want to point out two important scenarios based on the regularity of $u$ for \Cref{C:convergence-rate} (convergence rate).
\begin{enumerate}[$\bullet$]
\item Full regularity $u \in C^{1,1}(\overline{\Omega})$,
i.e. $k = \alpha = 1$. The optimal choice of parameters $\delta \sim O(h^{1/2}), \theta \sim O(h^{1/2})$
in \Cref{C:convergence-rate} yields either a linear decay rate $O(h)$ or a quadratic rate $O(\delta^2)$ in terms of the fine scale $h$ or the coarse scale $\delta$.
\item Lipschitz regularity $u \in C^{0,1}(\overline{\Omega})$, i.e. $k = 0, \ \alpha = 1$.
Choosing optimal parameters $\delta \sim O(h^{1/3}), \theta \sim O(h^{2/3})$
in \Cref{C:convergence-rate} gives us either a rate $O(h^{1/3})$ in terms of the
fine scale $h$ or a linear rate $O(\delta)$ in terms of the coarse scale $\delta$.
\end{enumerate}
We point out that, since
$|u|_{C^{0,1}(\overline{\Omega})} \lesssim |f|_{C^{1,1}(\overline{\Omega})}$ and $|u|_{C^{1,1}(\overline{\Omega})} \lesssim |f|_{C^{3,1}(\overline{\Omega})}$ under suitable assumptions on $\Omega$ \cite{DeFi},
the right-hand side of \eqref{E:convergence-rate} can be bounded in terms of norms of $f$ alone.
Our error estimates are thus realistic in terms of regularity.
\end{remark}
\begin{remark}[fine scale vs regularity]
It is instructive to realize that the coarse scale $\delta$ gets
finer with increasing regularity $k+\alpha$ of $u$, whereas the angular
scale $\theta$ gets coarser. This behavior is opposite to
the error estimates in \cite[Remark 5.4]{LiNo}.
\end{remark}
\begin{remark}[alternate proof]\label{R:alternate-proof-k0}
When $k=0$, the proof of \Cref{T:error-estimate} (error estimate) simplifies somewhat. Specifically, we can
construct a discrete supersolution $u_h^+ \in \mathbb{V}_h$ of
the form
\begin{equation*}
u_h^+ = \mathcal I_h u - K_1 q_h + K_2
\end{equation*}
provided that
\begin{equation*}
K_1 = C(d,\sigma) \frac{(\delta \theta)^{\alpha}
+ h^{\alpha}}{\delta^2} |u|_{C^{0,\alpha}(\overline{\Omega})}, \quad
K_2 = C(d,\sigma) \delta^{\alpha}
\left( |u|_{C^{0,\alpha}(\overline{\Omega})} + |f|_{C^{0,\alpha}(\overline{\Omega})} \right).
\end{equation*}
This is due to the fact that if $0 < \textrm{dist}(x_i,\partial\Omega) < d\delta$,
then invoking \eqref{E:error-contact-0} with our choice of $K_2$
implies $T_{\varepsilon}[u_h^+;f](x_i) \leq 0$.
\end{remark}
\subsection{Non-attainment of Dirichlet condition}\label{S:non-attainment}
Although we mainly focus on the case of a strictly convex domain $\Omega$, it is also possible to modify and extend our two-scale method to compute the convex envelope over {\it convex polytopes} $\Omega$, that is, domains with piecewise linear boundary. For simplicity we explain the ideas only in ${\mathbb{R}}^2$; higher dimensions $d>2$ can be handled in a similar manner.
We need additional notation. A convex polytope $\Omega$ is described by the set $\mathcal{N}^v$ of its boundary vertices; thus $\Omega = \textrm{conv} (\mathcal{N}^v)$. We then let $\mathcal{N}^e := \partial\Omega \setminus \mathcal{N}^v$ be the union of the open boundary edges of $\Omega$, namely $\partial\Omega$ with the vertices excluded. While $u = f$ no longer holds on all of $\partial\Omega$ if $\Omega$ is not strictly convex, it can be shown using \cite[Corollary 17.1.5]{Rockafellar2015convex} that $u = f$ at the vertices in $\mathcal{N}^v$ and that, on each edge contained in $\mathcal{N}^e$, the function $u$ is the convex envelope of $f$ restricted to that edge. One can thus show that $u$ is the viscosity solution of the following fully nonlinear obstacle problem:
\begin{equation}\label{E:pde-CE-polytope}
\left\{
\begin{array}{ll}
T[u;f](x) = 0 \quad \; & \forall x \in \Omega, \\
\min\left\{f(x) - u(x), e^T(x) D^2u(x) e(x) \right\} = 0\quad \; & \forall x \in \mathcal{N}^e, \\
u(x) = f(x) \quad \; & \forall x \in \mathcal{N}^v,
\end{array}
\right.
\end{equation}
where $e(x)$ is a unit vector parallel to the edge of $\Omega$ containing $x \in \mathcal{N}^e$; note that \eqref{E:pde-CE-polytope} is a modification of \eqref{E:pde-CE} on $\partial\Omega$. To discretize this system, let $\mathcal{N}_h^v := \mathcal{N}^v \subset \mathcal{N}_h^b$ and $\mathcal{N}_h^e := \mathcal{N}_h^b \cap \mathcal{N}^e$, then our discrete problem is to find $u_{\varepsilon} \in \mathbb{V}_h$ satisfying
\begin{equation}\label{E:2ScOp-Ex4}
\left\{
\begin{array}{ll}
T_{\varepsilon}[u_{\varepsilon};f](x_i) = 0 \quad \; & \forall x_i \in \mathcal{N}_h^0,\\
\min\left\{f(x_i) - u_{\varepsilon}(x_i), \sdd u_{\varepsilon}(x_i, e(x_i)) \right\} = 0\quad \; & \forall x_i \in \mathcal{N}_h^e, \\
u_{\varepsilon}(x_i) = f(x_i) \quad \; & \forall x_i \in \mathcal{N}_h^v,
\end{array}
\right.
\end{equation}
where the step size of $\sdd u_{\varepsilon}(x_i, e(x_i))$ is defined as the largest number $\delta_i$ in
$(0,\delta]$ such that $x_i \pm \delta_i e(x_i)$ both lie in $\overline{\Omega}$. The convergence of $u_{\varepsilon}$ can be derived as in \Cref{S:TwoSc}. We now prove an error estimate.
\begin{Proposition}[convergence rate for polytopes]\label{L:conv-rate-polytopes}
Let $\Omega$ be a convex polytope and $u \in C^{k,\alpha}(\overline{\Omega})$ with $k=0,1, \ 0<\alpha \leq 1$, and $\theta \leq 1$. Let $u_{\varepsilon}\in\mathbb{V}_h$ be the discrete solution of \eqref{E:2ScOp-Ex4}. If the discretization parameters $\varepsilon = (h,\delta,\theta)$ obey relations similar to those in \Cref{C:convergence-rate} (convergence rate), then
\[
\| u - u_{\varepsilon} \|_{L^\infty(\Omega)} \le C(u,\Omega,d,\sigma) \, h^{\frac{(k+\alpha)^2}{2+k+\alpha}}.
\]
\end{Proposition}
\begin{proof}
We first notice that $\Omega_h=\Omega$ and that \Cref{L:DCP} (discrete comparison principle) implies the following stability result: if $u_h, w_h \in \mathbb{V}_h$ satisfy
$T_{\varepsilon}[u_h;f](x_i) = T_{\varepsilon}[w_h;f](x_i)$ for all $x_i \in \mathcal{N}_h^0$, then
\begin{equation}\label{E:stability}
\max_{x_i \in \mathcal{N}_h} \left| u_h(x_i) - w_h(x_i) \right|
\leq \max_{x_i \in \mathcal{N}_h^b} \left| u_h(x_i) - w_h(x_i) \right|.
\end{equation}
We consider an auxiliary discrete problem: seek $\wt{u}_{\varepsilon} \in \mathbb{V}_h$ that solves
\begin{equation*}
\left\{
\begin{array}{ll}
T_{\varepsilon}[\wt{u}_{\varepsilon};f](x_i) = 0 \quad \; & \forall x_i \in \mathcal{N}_h^0,\\
\wt{u}_{\varepsilon}(x_i) = u(x_i) \quad \; & \forall x_i \in \mathcal{N}_h^b.
\end{array}
\right.
\end{equation*}
We observe that \Cref{C:convergence-rate}
still holds for $\wt{u}_{\varepsilon}$, without the strict convexity assumption on $\Omega$, because the Dirichlet boundary condition is attained. Therefore, choosing $\delta$ and $\theta$ as in \Cref{C:convergence-rate}, we obtain
\begin{equation*}
\Vert u - \wt{u}_{\varepsilon} \Vert_{L^{\infty}(\Omega_h)}
\leq C(u,\Omega,d,\sigma) \, h^{\frac{(k+\alpha)^2}{2+k+\alpha}}.
\end{equation*}
It remains to estimate $\Vert \wt{u}_{\varepsilon} - u_{\varepsilon} \Vert_{L^{\infty}(\Omega_h)}$, for which
we resort to \eqref{E:stability} because both $\wt{u}_{\varepsilon},u_{\varepsilon}\in\mathbb{V}_h$.
Since the boundary subsystem
\begin{equation*}
\left\{
\begin{array}{ll}
\min\left\{f(x_i) - u_{\varepsilon}(x_i), \sdd u_{\varepsilon}(x_i, e(x_i)) \right\} = 0\quad \; & \forall x_i \in \mathcal{N}_h^e, \\
u_{\varepsilon}(x_i) = f(x_i) \quad \; & \forall x_i \in \mathcal{N}_h^v,
\end{array}
\right.
\end{equation*}
can be viewed as a collection of one-dimensional two-scale discretizations of the convex envelope problem, \Cref{C:convergence-rate} again implies
\begin{equation*}
\max_{x_i \in \mathcal{N}_h^b} \left| \wt{u}_{\varepsilon}(x_i) - u_{\varepsilon}(x_i) \right| =
\max_{x_i \in \mathcal{N}_h^b} \left| u(x_i) - u_{\varepsilon}(x_i) \right|
\leq C(u,\Omega,d,\sigma) \, h^{\frac{(k+\alpha)^2}{2+k+\alpha}}.
\end{equation*}
This concludes the proof.
\end{proof}
It is worth pointing out that a two-scale structure is not strictly needed on the boundary in 2D, since the boundary problem reduces to one-dimensional problems on the edges of the polytope. However, this procedure extends to dimensions $d>2$, in which case the boundary subproblems
have dimension higher than one and do require a two-scale structure.
\section{Modified Wide Stencil Method}\label{S:modified-wide-stencil}
Our numerical analysis of the previous sections
can also be applied to derive error estimates for a modified
wide stencil method, obtained by adding a two-scale structure to the method of \cite{Ob2}.
Since key ideas and techniques are identical to those for the two-scale method,
we present them without proofs.
First let us briefly introduce the wide stencil method in a form convenient for our analysis; we refer the reader to \cite{Ob2} and \cite{ObRu} for more details.
For a strictly convex domain $\Omega \subset \mathbb{R}^d$, with a slight abuse
of notation, let $\mathcal{N}_h^0 := \Omega \cap h\mathbb{Z}^d$ be a Cartesian grid
in $\Omega$, and let $\mathbb{V}_h$ be the space consisting of all maps
$u_h: \mathcal{N}_h^0 \cup \partial\Omega \rightarrow \mathbb{R}$. Let a coarse scale
$\delta \ge \sqrt{d}\,h$ be used to define the set of discrete directions
\[
D_{\varepsilon} := \left\{ x \in h\mathbb{Z}^d: \textrm{dist}\big(x, \partial B(0,\delta) \big) \le \frac{\sqrt{d}}{2}h \right\},
\]
where $\varepsilon := (h,\delta)$ and $B(0,\delta)$ is the ball centered
at the origin with radius $\delta$. It is worth pointing out that $D_{\varepsilon}$ is just a few layers of grid points, and thus its cardinality satisfies $\#D_{\varepsilon} \lesssim \left(\frac{\delta}{h}\right)^{d-1}$.
The following lemma is similar to \cite[Lemma 4.4]{DolzWalk} and characterizes the consistency error
due to using $D_{\varepsilon}$ instead of $\partial B(0,\delta)$.
\begin{Lemma}[properties of $D_{\varepsilon}$]\label{L:consist-Dve}
For any $v \in \partial B(0,\delta)$, there exists $v_{\varepsilon} \in D_{\varepsilon}$ such that the angle between the vectors $v$ and $v_{\varepsilon}$
is bounded by $\frac{\sqrt{d}\pi h}{4\delta}$. Moreover, $\frac{\delta}{2}\le |v| \le \frac{3\delta}{2}$ for all $v\in D_\varepsilon$.
\end{Lemma}
\begin{proof}
Choose a Cartesian grid point $v_{\varepsilon} \in h\mathbb{Z}^d$ closest to $v$; it satisfies $|v - v_{\varepsilon}| \le \frac{\sqrt{d}h}{2}$, whence $v_{\varepsilon} \in D_{\varepsilon}$. The angle $\theta$ between $v$ and $v_{\varepsilon}$ is dictated by \looseness=-1
\[
\sin\theta \le \frac{|v - v_{\varepsilon}|}{\delta} \le \frac{\sqrt{d}}{2}\frac{h}{\delta}.
\]
This implies $\theta \leq \frac{\pi}{2}\sin\theta \le \frac{\sqrt{d}\pi h}{4\delta}$.
Moreover, by definition of $D_\varepsilon$ we see that
$\frac{\delta}{2} \le \delta - \frac{\sqrt{d}}{2}h \le |v| \le
\delta + \frac{\sqrt{d}}{2}h \le \frac{3\delta}{2}$ for all $v\in D_\varepsilon$.
\end{proof}
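To illustrate \Cref{L:consist-Dve}, the set $D_{\varepsilon}$ and its two properties can be checked directly on a small two-dimensional example. The following Python sketch is our own illustration (the helper name is ours): it enumerates the points of $h\mathbb{Z}^2$ within distance $\frac{\sqrt{d}}{2}h$ of $\partial B(0,\delta)$.

```python
import itertools
import math

def build_D_eps(h, delta, d=2):
    """Grid points v in h*Z^d with dist(v, boundary of B(0, delta)) <= sqrt(d)*h/2."""
    tol = math.sqrt(d) * h / 2.0
    m = int(math.ceil((delta + tol) / h))
    D = []
    for idx in itertools.product(range(-m, m + 1), repeat=d):
        v = tuple(h * j for j in idx)
        # for a single point v, dist(v, dB(0, delta)) = | |v| - delta |
        if abs(math.hypot(*v) - delta) <= tol:
            D.append(v)
    return D

# small 2d example: a few layers of grid points around the circle of radius delta
h, delta = 0.05, 0.4
D = build_D_eps(h, delta)
```

One can then verify numerically that every $v \in D_{\varepsilon}$ satisfies $\frac{\delta}{2}\le |v| \le \frac{3\delta}{2}$ and that every direction on $\partial B(0,\delta)$ is within angle $\frac{\sqrt{d}\pi h}{4\delta}$ of some element of $D_{\varepsilon}$, as the lemma asserts.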
For any function $w \in \mathbb{V}_h$ and any vector $v \in D_{\varepsilon}$,
let the centered second difference operator at any $x_i \in \mathcal{N}_h^0$ in the direction $v$ be
\begin{equation*}
\nabla^2_\varepsilon w(x_i;v) := \frac{2}{\left(\rho_{+} + \rho_{-}\right)|v|^2}
\left( \frac{w(x_i + \rho_{+}v) - w(x_i)}{\rho_{+}} + \frac{w(x_i - \rho_{-}v) - w(x_i)}{\rho_{-}} \right),
\end{equation*}
where $\rho_{\pm}$ are the largest numbers in $(0, 1]$ such that $x_i \pm \rho_{\pm} v \in \overline{\Omega}$. Notice that this is well-defined for any $w \in \mathbb{V}_h$ because $x_i \pm \rho_{\pm} v$ are either in $\mathcal{N}_h^0$ or on the boundary $\partial\Omega$. Since for any
$v \in D_{\varepsilon}$ we have $\frac{\delta}{2} \le |v| \le \frac{3\delta}{2}$, the parameter
$\delta$ plays a role similar to the coarse scale $\delta$
for second differences in our two-scale method.
The cardinalities
$\#D_{\varepsilon} \approx (\delta/h)^{d-1}$ and $\#\mathbb S_{\theta} \approx \theta^{-(d-1)}$ are consistent
provided $\theta \approx h/\delta$.
We define the discrete operator for the modified wide stencil method to be
\begin{equation*}
T_{\varepsilon}[w;f](x_i) := \min\left\{f(x_i) - w(x_i),
\min_{v \in D_{\varepsilon}} \nabla^2_\varepsilon w(x_i;v) \right\} \quad\forall \, x_i \in \mathcal{N}_h^0
\end{equation*}
for any $w \in \mathbb{V}_h$. Finally, the discrete problem reads: find $u_{\varepsilon} \in \mathbb{V}_h$
such that
\begin{equation}\label{E:2ScOp-WD}
T_{\varepsilon}[u_{\varepsilon};f](x_i) = 0 \quad\forall \, x_i \in \mathcal{N}_h^0,
\end{equation}
and $u_{\varepsilon}(x) = f(x)$ for any $x \in \partial\Omega$.
It is now easy to check that \Cref{L:DCP} (discrete comparison principle) and
\Cref{Prop:Consistency-lowreg} (consistency for $u$ with H\"older regularity) are
valid verbatim in the present context, except that instead of \eqref{E:error-noncontact}
we now have
\[
\min_{v \in D_\varepsilon} \nabla^2_\varepsilon w(x_i;v) \le C(d,\sigma) \frac{h^{k+\alpha}}{\delta^2}
|u|_{C^{k,\alpha}(B_i)}.
\]
In fact, the modified wide stencil method can be viewed as a version of the two-scale method without interpolation error and with $\theta \approx h/\delta$.
The following error estimate mimics that in \Cref{S:RatesHolder}. It is
a consequence of the discrete comparison principle and consistency for the
wide stencil method together with
the discrete barrier functions of \Cref{S:DBarrier}. We omit its proof.
\begin{Theorem}[error estimate for the wide stencil method]\label{T:error-estimate-WD}
Let $\Omega$ be strictly convex.
Let $u$ be the viscosity solution of \eqref{E:pde-CE} and $u_{\varepsilon}$ be the discrete
solution of \eqref{E:2ScOp-WD}. If $u \in C^{k,\alpha}(\overline{\Omega})$ for $k=0,1$
and $0<\alpha \leq 1$, then the following error estimate holds
\begin{equation*}\label{E:error-estimate-WD}
\left| u(x_i) - u_{\varepsilon}(x_i) \right|
\leq C \left( |u|_{C^{k,\alpha}(\overline{\Omega})} \frac{h^{k+\alpha}+ \delta^{2+k+\alpha}}{\delta^2}
+ |f|_{C^{k,\alpha}(\overline{\Omega})} \delta^{k+\alpha} \right) \quad \forall x_i \in \mathcal{N}_h^0,
\end{equation*}
with $C = C(\Omega,d,\sigma)$. If $\delta :=
|u|_{C^{k,\alpha}(\overline{\Omega})}^{\frac{1}{2+k+\alpha}}
\Big(|u|_{C^{k,\alpha}(\overline{\Omega})} + |f|_{C^{k,\alpha}(\overline{\Omega})} \Big)^{-\frac{1}{2+k+\alpha}} h^{\frac{k+\alpha}{2+k+\alpha}}$,
we thus obtain the convergence rate
\begin{equation*}\label{E:convergence-rate-WD}
\left| u(x_i) - u_{\varepsilon}(x_i) \right| \leq C(\Omega,d,\sigma)
\Big( |u|_{C^{k,\alpha}(\overline{\Omega})} + |f|_{C^{k,\alpha}(\overline{\Omega})} \Big) \; h^{\frac{(k+\alpha)^2}{2+k+\alpha}} \quad \forall x_i \in \mathcal{N}_h^0.
\end{equation*}
\end{Theorem}
We point out that \Cref{R:two-scenarios} (two important scenarios) applies in this context. In particular, the convergence rate is of order $O(h)$ provided $\delta = O(h^{1/2})$ for functions $u \in C^{1,1}(\overline{\Omega})$.
\section{Numerical Experiments}\label{S:Exp}
To solve the discrete system \eqref{E:2ScOp}, we use
Howard's algorithm, which converges superlinearly.
We implemented the two-scale method in MATLAB, using
some of the routines provided by the software FELICITY \cite{Walker1, Walker2}.
\subsection{Howard's Algorithm}\label{S:Howard-Algorithm}
For convenience, let us order the nodes in
$\mathcal{N}_h = \{ x_1, \ldots, x_N\}$ with $x_i \in \mathcal{N}_h^0$
for $1 \leq i \leq N_0$ and $x_i \in \mathcal{N}_h^b$ for
$N_0+1 \leq i \leq N$; thus $N, N_0$ and $N_b := N-N_0$
are the cardinalities of $\mathcal{N}_h, \mathcal{N}_h^0$ and $\mathcal{N}_h^b$, respectively.
In addition, let
$\bm{u} := (u_h(x_i))_{i=1}^N \in \mathbb{R}^N$ stand for the vector of
nodal values of a generic $u_h \in \mathbb{V}_h$, and
$\mathbb S_{\theta} = \left\{v_1,\ldots,v_{S} \right\}$, where $S$ is
the cardinality of $\mathbb S_{\theta}$. In view of the expression \eqref{E:disc-oper} for the discrete operator $T_{\varepsilon}$, the discrete system \eqref{E:2ScOp} reads
\begin{equation}\label{E:Howard-disc-system}
\sup_{\bm{\alpha} \in \mathcal{A}}
\left( B^{\bm{\alpha}} \bm{u} - F^{\bm{\alpha}} \right) = \bm{0},
\end{equation}
where $\mathcal{A} = \left\{(\alpha_1,\ldots,\alpha_{N_0}): \alpha_i \in \{0,1,\ldots,S\} \right\}$,
the matrix $B^{\bm{\alpha}}\in\mathbb{R}^{N\times N}$ satisfies
\begin{equation*}
\left( B^{\bm{\alpha}} \bm{u} \right)_i = \left\{
\begin{array}{ll}
u_h(x_i) \quad & i \geq N_0+1, \\
u_h(x_i) \quad & 1 \leq i \leq N_0,\;\alpha_i = 0, \\
-\sdd u_h(x_i;v_{\alpha_i}) \quad & 1 \leq i \leq N_0,\;1 \le \alpha_i \le S,
\end{array}
\right.
\end{equation*}
and $F^{\bm{\alpha}}$ is given by
\begin{equation*}
\left( F^{\bm{\alpha}} \right)_i = \left\{
\begin{array}{ll}
f(x_i) \quad & i \geq N_0+1, \\
f(x_i) \quad & 1 \leq i \leq N_0,\;\alpha_i = 0, \\
0 \quad & 1 \leq i \leq N_0,\; 1 \le \alpha_i \le S.
\end{array}
\right.
\end{equation*}
We solve \eqref{E:Howard-disc-system} via Howard's algorithm \cite{BoMaZi}, which is a semi-smooth Newton method \cite{BoMaZi,HIK:2002,SmearsSuli2014,Ulbrich:2011}, also known as policy iteration in the financial literature \cite{PutermanBrumelle1979}:
\begin{algorithm}
\caption{(Howard's Algorithm)
\label{alg:Howard}}
\begin{algorithmic}[1]
\State Select an arbitrary initial $\bm{\alpha}_0 \in \mathcal{A}$, and let $n=0$.
\While{}
\State Let $\bm{u}_{n}$ be the solution of the linear equations
$B^{\bm{\alpha}_n} \bm{u}_{n} - F^{\bm{\alpha}_n} = \bm{0}$.
\State Let $\bm{\alpha}_{n+1} = \textrm{arg\,max}_{\bm{\alpha} \in \mathcal{A}} \left( B^{\bm{\alpha}} \bm{u}_{n} - F^{\bm{\alpha}} \right)$.
\State If $\bm{\alpha}_{n+1} = \bm{\alpha}_n$, stop; else $n = n+1$.
\EndWhile
\end{algorithmic}
\end{algorithm}
\noindent
Hereafter, the vector equality in \eqref{E:Howard-disc-system} and inequalities $\ge$ later are understood componentwise.
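As a concrete illustration of Algorithm \ref{alg:Howard}, the following self-contained Python sketch (our own toy example, not the MATLAB/FELICITY implementation used for the experiments below; all function names are ours) applies policy iteration to a one-dimensional analogue of \eqref{E:Howard-disc-system}: at each interior node the policy selects either the row $u_i - f_i$ or the second-difference row, the resulting tridiagonal system is solved exactly, and the policy is updated by the argmax of the two residuals.

```python
import math

def solve_tridiag(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def howard_1d_envelope(f, N=99, maxit=100):
    """Policy iteration for min{f - u, u''} = 0 on (0,1) with u = f at the endpoints."""
    h = 1.0 / (N + 1)
    x = [i * h for i in range(N + 2)]
    fv = [f(xi) for xi in x]
    alpha = [1] * N              # initial policy: enforce the second-difference row
    U = None
    for _ in range(maxit):
        # assemble the tridiagonal system B^alpha u = F^alpha for the current policy
        a, b, c, d = [0.0] * N, [0.0] * N, [0.0] * N, [0.0] * N
        for i in range(N):
            if alpha[i] == 0:    # row u_i = f_i
                b[i], d[i] = 1.0, fv[i + 1]
            else:                # row 2u_i - u_{i-1} - u_{i+1} = 0
                b[i] = 2.0
                if i > 0:
                    a[i] = -1.0
                else:
                    d[i] += fv[0]
                if i < N - 1:
                    c[i] = -1.0
                else:
                    d[i] += fv[N + 1]
        u = solve_tridiag(a, b, c, d)
        U = [fv[0]] + u + [fv[N + 1]]
        # policy update: argmax of the two residuals u_i - f_i and -(second difference)
        new_alpha = [
            0 if U[i + 1] - fv[i + 1] >= -(U[i] - 2 * U[i + 1] + U[i + 2]) / h**2
            else 1
            for i in range(N)
        ]
        if new_alpha == alpha:   # stable policy: Howard's algorithm has converged
            break
        alpha = new_alpha
    return x, U

x, U = howard_1d_envelope(lambda t: math.cos(2 * math.pi * t))
```

In our runs on this toy problem the iterates decrease monotonically and the policy stabilizes after a handful of sweeps, in agreement with \cite[Theorem 2.1]{BoMaZi}.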
It is immediate from the above
that, for any $\bm{\alpha} \in \mathcal{A}$,
we have $\left(B^{\bm{\alpha}}\right)_{ii} > 0$
and $\left(B^{\bm{\alpha}}\right)_{ij} \leq 0$ for $i \neq j$.
In fact, we prove that $B^{\bm{\alpha}}$ is an M-matrix.
\begin{Lemma}[M-matrix property]\label{L:M-Matrix}
For any $\bm{\alpha} \in \mathcal{A}$, $B^{\bm{\alpha}}$ is an M-matrix.
\end{Lemma}
\begin{proof}
We only need to prove $B^{\bm{\alpha}} \bm{u} \geq \bm{0}$
implies $\bm{u} \geq \bm{0}$.
Given two vectors $\bm{u},\bm{w} \in \mathbb{R}^{N}$ so that $B^{\bm{\alpha}} \bm{u} \ge B^{\bm{\alpha}} \bm{w}$ for all $\bm{\alpha} \in \mathcal{A}$, we deduce $u_h \geq w_h$ for the corresponding functions $u_h,w_h\in\mathbb{V}_h$ in view of \Cref{L:DCP} (discrete comparison principle). This immediately implies $\bm{u}\ge\bm{w}$, and, upon taking $\bm{w}=\bm{0}$, that $\bm{u}\ge\bm{0}$ as desired.
\end{proof}
Invoking the fact that $B^{\bm{\alpha}}$ is an M-matrix
and applying \cite[Theorem 2.1]{BoMaZi}, we deduce that the $n$-th iterate
$\bm{u}_n$ of Howard's algorithm converges monotonically
and superlinearly to $u_{\varepsilon}$ as $n \to \infty$. The latter follows from the semi-smooth
Newton structure of Algorithm \ref{alg:Howard}. The former is a consequence of its
step 4 because
\[
B^{\bm{\alpha}_{n+1}} \bm{u}_{n} - F^{\bm{\alpha}_{n+1}} \ge
B^{\bm{\alpha}_{n}} \bm{u}_{n} - F^{\bm{\alpha}_{n}} = \bm{0} = B^{\bm{\alpha}_{n+1}} \bm{u}_{n+1} - F^{\bm{\alpha}_{n+1}} ,
\]
whence $\bm{u}_{n+1} \le \bm{u}_n$. Moreover, \cite[Theorem 2.1]{BoMaZi}
automatically gives existence and uniqueness of solutions
of our discrete system \eqref{E:2ScOp},
which we also proved in \Cref{L:Exist-Uniq-Stab} (existence, uniqueness and stability).
In practice, when
$\Vert \sup_{\bm{\alpha} \in \mathcal{A}} \left( B^{\bm{\alpha}} \bm{u}_n - F^{\bm{\alpha}} \right) \Vert_2$ is sufficiently small we can stop Algorithm \ref{alg:Howard}; we thus use the criterion
\begin{equation*}
\Vert T_{\varepsilon}[u_n;f] \Vert_{L^2(\Omega)} \leq 10^{-10} \Vert T_{\varepsilon}[f;f] \Vert_{L^2(\Omega)}
\end{equation*}
in all numerical experiments below.
\subsection{Accuracy}\label{S:Accuracy}
We now present several examples to examine the performance of
the two-scale method \eqref{E:2ScOp} for the convex envelope problem.
We choose $\delta = C_{\delta} h^{\alpha}$ and
$\theta = C_{\theta} h^{\beta}$ for different
$C_{\delta}, \alpha, C_{\theta}, \beta > 0$ in our experiments, and compare the computational rates with our theoretical rate of \Cref{C:convergence-rate} (convergence rate).
\begin{example}[full regularity $u \in C^{1,1}(\overline{\Omega})$] \label{Ex:ex1}
Let $\Omega = \{x\in \mathbb{R}^2: |x|<1 \}$ be the unit disk and
$f(x) = \cos(2 \pi |x|)$. Then the convex envelope $u$
is given by
\begin{equation*}
u(x) = \left\{
\begin{array}{ll}
-1 \;, & \text{if} \quad |x| \leq 0.5 \\
\cos \left(2 \pi |x| \right) \;,
& \text{if} \quad 0.5 < |x| \leq \alpha_{*} \\
\cos \left(2 \pi \alpha_* \right) -
2\pi \sin \left(2 \pi \alpha_* \right) \left( |x| - \alpha_* \right) \;,
& \text{if} \quad \alpha_{*} < |x| \leq 1,
\end{array}
\right.
\end{equation*}
where the constant $\alpha_{*} \approx 0.6290$ satisfies the equation
\begin{equation*}
\cos \left(2 \pi \alpha_* \right) -
2\pi \sin \left(2 \pi \alpha_* \right) \left( 1 - \alpha_* \right) = 1.
\end{equation*}
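The value $\alpha_* \approx 0.6290$ can be recovered numerically. The following Python sketch (our own check; the residual name is ours) solves the tangency equation above by bisection:

```python
import math

def g(a):
    """Residual of the tangency condition defining alpha_*."""
    return (math.cos(2 * math.pi * a)
            - 2 * math.pi * math.sin(2 * math.pi * a) * (1 - a) - 1)

# g changes sign on [0.6, 0.65]; bisect to locate the root
lo, hi = 0.6, 0.65
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
alpha_star = 0.5 * (lo + hi)   # about 0.6290, matching the value quoted above
```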
The contact set $\mathcal{C}(f)$ consists of two disjoint sets $\{\frac12\le|x|\le\alpha_*\}$ and
$\partial\Omega$.
In this example we have $f$ smooth and $u \in C^{1,1}(\overline{\Omega})$ (full regularity).
Upon choosing $\delta = 0.5 h^{1/2}$ and $\theta\approx 0.25 h^{1/2}$ we obtain computationally a linear convergence rate with respect to $h$, thus consistent
with \Cref{C:convergence-rate} (convergence rate), and report it
in \Cref{Table:Ex1} and Figure \ref{F:Ex1}. Plots of $u_{\varepsilon}$ and $f$ are shown in \Cref{F:Ex1-function-plot} and slices of these functions on $\{(x,0): x \ge 0\}$ are depicted in \Cref{F:Ex1} (left). In \Cref{F:Ex1} (right), we also display the $L^{\infty}$ error vs meshsize $h$ for several choices $\delta = O(h^{\alpha})$ with different values of $\alpha$ together with $\theta \approx 0.25h^{1/2}$. The convergence rate for $\delta = O(h^{2/3})$ is better than the one predicted in \Cref{C:convergence-rate}, but other rates are consistent with our theory. We choose $\theta$ to be small enough to make the error induced by $\theta$ small relative to those of $\delta$ and $h$. In fact, we can see from \Cref{F:Ex1} (right) that the effect of changing from $\theta \approx 0.25 h^{1/2}$ to $\theta \approx h^{1/2}$ is relatively small, and thus conclude that $\theta$ is not a sensitive parameter.
\begin{table}[h!]
\begin{center}
\begin{tabular}[t]{ | l | l | c | c |}
\hline
Degrees of freedom & Number of directions & $L^{\infty}$-error & Iteration steps \\
\hline\hline
$N = 1557$, $h=2^{-4}$ & \qquad\quad $S = 26$ & $3.769 \times 10^{-2}$ & 6 \\
\hline
$N = 6317$, $h=2^{-5}$ & \qquad\quad $S = 36$ & $1.887 \times 10^{-2}$ & 10 \\
\hline
$N = 25469$, $h=2^{-6}$ & \qquad\quad $S = 51$ & $9.617 \times 10^{-3}$ & 11 \\
\hline
$N = 102445$, $h=2^{-7}$ & \qquad\quad $S = 72$ & $4.801 \times 10^{-3}$ & 11 \\
\hline
$N = 410793$, $h=2^{-8} $ & \qquad\quad $S = 101$ & $2.400 \times 10^{-3}$ & 11 \\
\hline
\end{tabular}
\end{center}
\vskip0.2cm
\caption{\small \Cref{Ex:ex1}: $\delta = 0.5h^{1/2},\theta \approx 0.25 h^{1/2}$.
The convergence rate is about linear
(see \Cref{F:Ex1}), thus consistent with \Cref{C:convergence-rate}.
The number of search directions $S$ scales like $S\approx\theta^{-1}\approx h^{-1/2}$,
whereas the number of Howard iterations remains roughly constant.}
\label{Table:Ex1}
\end{table}
\begin{figure}[!htb]
\includegraphics[width=0.48\linewidth]{figures/pic1_f-eps-converted-to.pdf}
\includegraphics[width=0.48\linewidth]{figures/pic1_uve-eps-converted-to.pdf}
\caption{\small \Cref{Ex:ex1},
left: plot of $f$; right: plot of $u_{\varepsilon}$ for $h = 2^{-6}$.
}
\label{F:Ex1-function-plot}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=0.48\linewidth]{figures/pic1_slice-eps-converted-to.pdf}
\includegraphics[width=0.48\linewidth]{figures/pic1_rate-eps-converted-to.pdf}
\caption{\small
\Cref{Ex:ex1}. Left: slice of numerical solution $u_{\varepsilon}$ on $\{(x,0): x \ge 0\}$ with $h = 2^{-6}, \delta = 0.25h^{1/2}, \theta \approx 0.25h^{1/2}$. Right: experimental rates of convergence
upon choosing $\theta \approx 0.25h^{1/2}$ and $\delta = O(h^{\alpha})$ with $\alpha = 1/3, 1/2, 2/3, 1$. A least squares regression is performed over $h = 2^{-k}$ with $k=6,7,8$ for each choice of $\delta$, including the case $\delta = O(h)$. The orders are about $0.67, 0.99, 1.30, 0.07$. We also plot the errors for $\theta \approx h^{1/2}, \delta =h^{2/3}$, which are very close to those for $\theta \approx 0.25h^{1/2}, \delta =h^{2/3}$.}
\label{F:Ex1}
\end{figure}
\end{example}
\begin{example}[Lipschitz regularity $u \in C^{0,1}(\overline{\Omega})$] \label{Ex:ex2}
Let $\Omega = \{x\in\mathbb{R}^2: |x|<1 \}$ and
\begin{equation*}
f(x) = \left\{
\begin{array}{ll}
1-4|x|, & 0 \leq |x| < 1/4 \\
4|x|-1, & 1/4 \leq |x| < 1/2 \\
2-2|x|, & 1/2 \leq |x| < 3/4 \\
2|x|-1, & 3/4 \leq |x| \leq 1,
\end{array}
\right. \qquad
u(x) = \left\{
\begin{array}{ll}
0, & 0 \leq |x| < 1/4 \\
|x|-1/4, & 1/4 \leq |x| < 3/4 \\
2|x|-1, & 3/4 \leq |x| \leq 1.
\end{array}
\right.
\end{equation*}
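A quick numerical sanity check of these formulas (our own script, with $r=|x|$ as the radial variable): $u\le f$ on $[0,1]$, with equality exactly at $r=1/4$ and on $[3/4,1]$.

```python
def f(r):
    # Piecewise radial profile of f, for r = |x| in [0, 1]
    if r < 0.25:
        return 1 - 4 * r
    if r < 0.5:
        return 4 * r - 1
    if r < 0.75:
        return 2 - 2 * r
    return 2 * r - 1

def u(r):
    # Claimed convex envelope along the radius
    if r < 0.25:
        return 0.0
    if r < 0.75:
        return r - 0.25
    return 2 * r - 1
```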
This example deals with $f, u \in C^{0,1}(\overline{\Omega})$, i.e.
both $f$ and $u$ are Lipschitz. The contact set $\mathcal{C}(f)$ consists of two disjoint components
$\{x\in \mathbb{R}^2: |x| \ge 3/4 \}$ and $\{x\in \mathbb{R}^2: |x|=1/4 \}$. See \Cref{F:Ex2} (left), which displays
slices on $\{(x,0): 0\le x\le 1\}$ of $f$, $u$, and the numerical solution
$u_{\varepsilon}$ with $h = 2^{-6}, \delta = 0.25 h^{1/2}, \theta \approx 0.25h^{1/2}$.
We point out that the pointwise error is very small in the regions
$\{x\in \mathbb{R}^2: |x| \ge 3/4 \}$ and $\{x\in \mathbb{R}^2: |x| \le 1/4 \}$; in the latter $u$ is affine and thus the interpolation error vanishes. On the other hand, in the region $\{x\in \mathbb{R}^2: 1/4 < |x| < 3/4 \}$, where
$u$ is only linear in the radial direction, we observe larger error
for $u_{\varepsilon}$.
Experimental convergence rates for different choices of $\delta = O(h^{\alpha})$ are plotted in
\Cref{F:Ex2} (right): we see that these rates are better than those predicted in \Cref{C:convergence-rate} (convergence rate).
This theoretical rate can be improved by exploiting the fact that
both functions $f$ and $u$ are non-smooth only at $\{0\}$ and across the curves
$\{|x|=1/4\}$ and $\{|x|=3/4\}$. In fact, for those $x_i \in \mathcal{N}_h^0$ satisfying $\big| |x_i|-1/4 \big| \leq \delta$ or $\big| |x_i|-3/4 \big| \leq \delta$, according to \Cref{Prop:Consistency-lowreg} (consistency for $u$ with H\"older regularity), we have
\begin{equation*}
T_{\varepsilon}[\mathcal I_h u;f](x_i) \leq f(x_i) - u(x_i) \leq C(u) \delta,
\end{equation*}
whereas for the rest of $x_i \in \mathcal{N}_h^0$ the consistency error can be estimated
exactly as for $f, u \in C^{1,1}(\overline{\Omega})$. Therefore carrying out the
same analysis as in \Cref{T:error-estimate} (error estimate), we end up with the
error estimate
\begin{equation*}
\|u-u_{\varepsilon}\|_{L^\infty(\Omega_h)} \le C(u)
\left( \delta + \frac{(\delta \theta)^2 + h^2}{\delta^2}\right).
\end{equation*}
This yields a rate $O(h^{2/3})$ provided $\delta = O(h^{2/3})$, which doubles the exponent in the rate from \Cref{C:convergence-rate} but is still worse than the experimental rates in \Cref{F:Ex2} (right).
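The choice $\delta = O(h^{2/3})$ comes from balancing terms: since $\theta \approx h^{1/2}$, we have $(\delta\theta)^2/\delta^2 = \theta^2 \approx h$, whence the preceding estimate reduces to
\begin{equation*}
\|u-u_{\varepsilon}\|_{L^\infty(\Omega_h)} \le C(u)
\left( \delta + h + \frac{h^2}{\delta^2}\right),
\end{equation*}
and equating the first and last terms, $\delta = h^2/\delta^2$, gives $\delta = h^{2/3}$ and the overall rate $O(h^{2/3})$.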
\begin{figure}[!htb]
\includegraphics[width=0.48\linewidth]{figures/pic2_slice-eps-converted-to.pdf}
\includegraphics[width=0.48\linewidth]{figures/pic2_rate-eps-converted-to.pdf}
\caption{\small \Cref{Ex:ex2}. Left: slices of $f,u$ and numerical solution $u_{\varepsilon}$ on $\{(x,0): x \ge 0\}$ with $h = 2^{-6}, \delta = 0.25h^{1/2}, \theta \approx 0.25h^{1/2}$. Right: experimental rates of convergence upon choosing $\theta = O(h^{1/2})$ and $\delta = O(h^{\alpha})$ with $\alpha = 1/3, 1/2, 2/3, 1$. The orders are about $0.78, 0.96, 1.06, 0.74$.}
\label{F:Ex2}
\end{figure}
\end{example}
\begin{example}[Lipschitz $u \in C^{0,1}{(\overline{\Omega})}$ and nonstrictly convex $\Omega$]\label{Ex:ex3} Let $\Omega = (-1,1)^2$ and $f,u$ be as in \cite[Example 6.3]{Ob2} with $\alpha = \beta = 1$, i.e.
\[
f(x,y) = xy \;,\qquad u(x,y) = |x+y| - 1 .
\]
We point out that the Dirichlet boundary condition $u = f$ is attained on $\partial\Omega$ although the
domain $\Omega$ is not strictly convex, whence \Cref{T:error-estimate} (error estimates) still applies.
In this example, $f$ is smooth but $u$ is only Lipschitz because $\Omega$ is neither uniformly convex nor smooth: $u$ exhibits a kink across the diagonal $\{(x,y): x+y=0\}$ and is piecewise linear otherwise. Moreover, $u<f$ in $\Omega$, whence the contact set $\mathcal{C}(f)$ reduces to
$\partial\Omega$.
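A short numerical confirmation (our own script) of these claims: the inequality $u\le f$ follows from $(1\pm x)(1\pm y)\ge 0$, i.e. $xy+1\ge|x+y|$, with equality attained only on $\partial\Omega$.

```python
def max_violation(n=41):
    # Largest value of u - f over an n-by-n grid on [-1, 1]^2;
    # nonpositive iff u <= f holds at all sample points
    f = lambda x, y: x * y
    u = lambda x, y: abs(x + y) - 1
    xs = [-1 + 2 * i / (n - 1) for i in range(n)]
    return max(u(x, y) - f(x, y) for x in xs for y in xs)
```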
\Cref{F:Ex3} (left) displays slices on $\{(x,y): x \ge 0, \ y = x\}$ of $f,u$ and the numerical solution $u_{\varepsilon}$ with $h = 2^{-6}, \delta = h^{1/2}, \theta \approx 0.25h^{1/2}$. One can observe a clear mismatch between $u_{\varepsilon}$ and $u$ near the singular set $\{(x,y): x+y=0 \}$. Compared with \Cref{Ex:ex1} (full regularity $u\in C^{1,1}(\overline\Omega)$), the lack of regularity of $u$ here entails larger consistency error and $L^{\infty}$ error between $u_{\varepsilon}$ and $u$. Experimental convergence rates for different choices of $\delta = O(h^{\alpha})$ are depicted in \Cref{F:Ex3} (right); we see that the best convergence rate $O(h^{0.58})$ is found when $\delta = O(h^{1/3})$,
which is again better than the $O(h^{1/3})$ rate predicted in \Cref{C:convergence-rate} (convergence rate).
\begin{figure}[!htb]
\includegraphics[width=0.495\linewidth]{figures/pic3_slice-eps-converted-to.pdf}
\includegraphics[width=0.495\linewidth]{figures/pic3_rate-eps-converted-to.pdf}
\caption{\small \Cref{Ex:ex3}. Left: slice of numerical solution $u_{\varepsilon}$ on $\{(x,y): x \ge 0, \ y = x\}$ with $h = 2^{-6}, \delta = h^{1/2}, \theta \approx 0.25h^{1/2}$. Right: experimental rates of convergence upon choosing $\theta = O(h^{1/2})$ and $\delta = O(h^{\alpha})$ with $\alpha = 1/3, 1/2, 2/3, 1$. The orders are about $0.58, 0.45, 0.41, 0.03$.}
\label{F:Ex3}
\end{figure}
\end{example}
\begin{example}[non-attainment of Dirichlet condition] \label{Ex:ex4}
Let $\Omega = (-1,1)^2$ and $f(x,y) = \cos(\pi x)\cos(\pi y)$, whose restriction to $\partial\Omega$ is not convex. According to our definition \eqref{E:def-CE},
the convex envelope is given by
\begin{equation*}
u(x,y) =
\begin{cases}
-1 & \quad |x|+|y| \leq 1 \\
-\cos \big(\pi (|x|+|y| - 1) \big)
& \quad 1 < |x|+|y| \leq 1 + \beta_{*} \\
-\cos \left(\pi \beta_* \right) +
\pi \sin \left(\pi \beta_* \right) \big( |x|+|y| - 1 - \beta_* \big)
& \quad 1+\beta_{*} < |x|+|y|,
\end{cases}
\end{equation*}
where the constant $\beta_{*} \approx 0.2580$ satisfies the equation
\[
-\cos(\pi \beta_*) + \pi \sin(\pi \beta_*)(1 - \beta_*) = 1.
\]
This assertion requires a brief explanation. First
of all note that by symmetry it suffices to examine the first quadrant $0\le x,y \le 1$.
On the edges $\{y=1\}$ and $\{x=1\}$ the function $u$ is convex by construction and by the definition of $\beta_*$; see \Cref{F:Ex4} (left). Since $u$ is constant along the lines $x+y=\beta$ and convex along perpendicular lines, we infer that $u$ is convex. It remains to show that $u\le f$ and that $u$ is no smaller than the convex envelope. To this end, we take convex combinations of the boundary values $u(\beta-1,1)$ and $u(1,\beta-1)$ along the line $x+y=\beta$ with $1\le\beta\le2$ and show
that they are $\le f(x,y)$. For $\beta=1$ we realize that $u(x,y)=-1\le f(x,y)$ on $x+y=1$ and, by symmetry, for all $x+y\le1$. For $\beta>1$ a tedious calculation gives $u(x,y)=u(\beta-1,1) \le f(\beta-1,1) \le f(x,y)$ along $x+y=\beta$, as desired. We finally point out that the contact set $\mathcal{C}(f)$ consists of four boundary segments of length $2\beta_*$
centered at $(0,\pm 1), (\pm 1,0)$ and the four vertices $(\pm 1,\pm 1)$
of $\Omega$; see \Cref{F:Ex4} (left). \looseness=-1
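As in \Cref{Ex:ex1}, the constant $\beta_*$ is easy to recover numerically, and one can then verify $u\le f$ on a grid in the first quadrant (this check is ours and is independent of the two-scale solver).

```python
import math

def beta_star():
    """Recover beta_* by bisection from -cos(pi*b) + pi*sin(pi*b)*(1-b) = 1."""
    def g(b):  # residual; g(0) = -2 < 0 < g(0.5), monotone on (0, 0.5)
        return (-math.cos(math.pi * b)
                + math.pi * math.sin(math.pi * b) * (1 - b) - 1)
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def u(x, y, b):
    # Claimed convex envelope, a function of s = |x| + |y| only
    s = abs(x) + abs(y)
    if s <= 1:
        return -1.0
    if s <= 1 + b:
        return -math.cos(math.pi * (s - 1))
    return -math.cos(math.pi * b) + math.pi * math.sin(math.pi * b) * (s - 1 - b)
```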
\begin{figure}[!htb]
\includegraphics[width=0.48\linewidth]{figures/pic4_slice-eps-converted-to.pdf}
\includegraphics[width=0.48\linewidth]{figures/pic4_rate-eps-converted-to.pdf}
\caption{\small \Cref{Ex:ex4}. Left: slices $f,u$ and
$u_{\varepsilon}$ on the set $\{(x,1): x \ge 0 \}$ with $h = 2^{-6}, \delta = 2h^{1/2}, \theta \approx 0.5h^{1/2}$. Note that $u_{\varepsilon}$ is indistinguishable from $u$ on this part of $\partial\Omega$. Right: experimental rates of convergence upon choosing $\theta = O(h^{1/2})$ and $\delta = O(h^{\alpha})$ with $\alpha = 1/3, 1/2, 2/3, 1$; the orders of convergence are about $1.30, 1.04, 0.91, 0.20$.}
\label{F:Ex4}
\end{figure}
We implemented the modified two-scale method \eqref{E:2ScOp-Ex4}, which first solves boundary subproblems on each edge of $\partial\Omega$ to find the trace of the discrete convex envelope $u_{\varepsilon}$ and next determines $u_{\varepsilon}$ within $\Omega$. \Cref{F:Ex4} (left) shows $f,u$
and $u_{\varepsilon}$ on the boundary set $\{(x,1): 0\le x \le 1\}$; we point out that $u(x,1)=f(x,1)$ for $|x|\le \beta_*$. \Cref{F:Ex4} (right) displays the $L^{\infty}$ error for several choices of $h$ and $\delta$: we see that the experimental convergence rate is about $O(h)$ for $\delta=O(h^{1/2})$, in agreement with theory, but the rates for $O(h^{\alpha})$ with $\alpha = 1/3, 2/3$ seem to be better than those predicted in \Cref{C:convergence-rate} (convergence rate).
\end{example}
\subsection{Computational performance}
Thanks to the search tools provided by FELICITY \cite{Walker1,Walker2}, the process of locating the triangle of the mesh containing points $x_i \pm \delta_i v_j$ and computing the barycentric coordinates only takes a small percentage of the total computing time; this is consistent with the two-scale method for the Monge-Amp\`{e}re equation in \cite{NoNtZh1}. In \Cref{Ex:ex1} for $h = 2^{-6}, \delta = 0.25h^{1/2}, \theta \approx 2h^{1/2}$, this process is 6.7\% ($<$ 4 sec) of the total computation time (56.2 sec).
The most time-consuming part of the experiment is constructing and solving the linear systems, i.e. the third line in \Cref{alg:Howard}; this takes 53.2\% of the total time. We do not attempt to exploit the sparsity pattern of the matrix $B^{\bm{\alpha}}$ and simply resort to the MATLAB backslash command for solving linear systems; we leave this important issue open. All of our computations are performed on an Intel Xeon E5-2630 v2 CPU (2.6 GHz) with 16 GB RAM, using MATLAB R2016b.
\subsection{Comparison with other existing methods}
In this subsection, we briefly compare our two-scale method with two other methods for the computation of convex envelopes: the wide stencil method in \cite{Ob2} and the modified version of Dolzmann's method in \cite{Bartels}. Both the wide stencil method and our two-scale method are derived from the PDE formulation \eqref{E:pde-CE}, and have a discrete operator with similar structure. As explained in \Cref{S:modified-wide-stencil}, the wide stencil method can be viewed as a two-scale method with no interpolation error but with the constraint
$\theta \approx h/\delta$. Our two-scale method suffers from the interpolation error but allows some freedom in the choice of parameters and works well on unstructured grids, which provide geometric flexibility to fit the boundary $\partial\Omega$.
The modified version of Dolzmann's method in \cite{Bartels}, built for the computation of rank-one convex envelopes of functions defined on $\mathbb{R}^{n \times m}$, can be applied to compute the convex envelope by simply letting $m = 1$. When applied to compute convex envelopes, the technique of \cite{Bartels} hinges on the following algorithm: if $f^{(0)} = f$, and $f^{(k)}$ for $k \ge 1$ is iteratively defined as
\begin{equation}\label{E:iter-CE}
\begin{aligned}
f^{(k)}(x) = \inf\{ & \lambda f^{(k-1)}(x_1) + (1 - \lambda) f^{(k-1)}(x_2): \\
& \lambda \in [0,1], x_1, x_2 \in \mathbb{R}^d, \lambda x_1 + (1-\lambda) x_2 = x\},
\end{aligned}
\end{equation}
then the convex envelope $u = f^{(d)}$ by Carath\'{e}odory's theorem. Consequently, at the continuous level this process terminates in at most $d$ iterations. The method in \cite{Bartels} is a discrete version of this iteration on a structured grid $h\mathbb{Z}^d$ with interpolation on the finer grid $h^2 \mathbb{Z}^d$, namely $x\in h\mathbb{Z}^d$ but $x_1,x_2\in h^2\mathbb{Z}^d$ in \eqref{E:iter-CE}. This is thus a two-scale method, with coarse scale $h$, but conceptually different from ours because it does not solve a PDE but rather an algebraic iteration. Moreover, it assumes $u = f$ in a layer $\{x \in \Omega: \textrm{dist}(x,\partial\Omega) \le Ch \}$ near the boundary $\partial\Omega$ to deal with nodes in this region.
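To make the iteration \eqref{E:iter-CE} concrete, here is a minimal one-dimensional sketch of its discrete version (our own illustration on a single grid, ignoring the two grid scales of \cite{Bartels}); for $d=1$ a single pass already returns the convex envelope.

```python
def envelope_pass(xs, fs):
    """One pass of the iteration f^{(k)} from f^{(k-1)} on a 1d grid:
    at each grid point x_i, minimize over chords between grid points
    x_j <= x_i <= x_k (the discrete convex combinations)."""
    n = len(xs)
    out = list(fs)
    for i in range(n):
        for j in range(i + 1):
            for k in range(i, n):
                if j == k:
                    continue
                lam = (xs[k] - xs[i]) / (xs[k] - xs[j])
                out[i] = min(out[i], lam * fs[j] + (1 - lam) * fs[k])
    return out
```

For instance, the convex envelope of the double well $(x^2-1)^2$ on $[-1,1]$ is identically zero, and one pass recovers it.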
Regarding convergence rates, both the method in \cite{Bartels} and our two-scale method exhibit provable linear rates with respect to the coarse scale for solutions $u\in C^{0,1}(\overline\Omega)$ according to \Cref{R:two-scenarios} (two important scenarios); moreover, \Cref{R:two-scenarios} also shows that our method is quadratic in the coarse scale $\delta$ and linear in the fine scale $h$ for $u \in C^{1,1}(\overline\Omega)$.
Performing $d$ iterations of the discrete version of \eqref{E:iter-CE} is enough for linear convergence, whereas the number of iterations of Howard's method cannot be quantified a priori. However, practice reveals that $10$ iterations of Howard's method suffice for convergence, which is consistent with its superlinear structure. Our iterations are simpler than those in \cite{Bartels} because they require far fewer interpolation points. Finally, our two-scale method is designed to work on unstructured meshes and to handle the Dirichlet boundary condition in a natural fashion; the boundary layer effect is handled via discrete barrier functions.
\section*{Acknowledgement}
We are grateful to Dimitrios Ntogkas for allowing us to modify his codes on the two-scale method for the Monge-Amp\`{e}re equation to solve the convex envelope problems.
\bibliographystyle{amsplain}
% https://arxiv.org/abs/1805.04427
\title{Multi-crossing Braids}
\begin{abstract}
Traditionally, knot theorists have considered projections of knots where there are two strands meeting at every crossing. A multi-crossing is a crossing where more than two strands meet at a single point, such that each strand bisects the crossing. In this paper we generalize ideas in traditional braid theory to multi-crossing braids. Our main result is an extension of Alexander's Theorem. We prove that every link can be put into an $n$-crossing braid form for any even $n$, and that every link with two or more components can be put into an $n$-crossing braid form for any $n$. We find relationships between the $n$-crossing braid indices, or the number of strings necessary to represent a link in an $n$-crossing braid.
\end{abstract}
\section{Introduction}
\label{sec:intro}
In traditional knot theory, knots are drawn in a projection where there are two strands passing over each other at every crossing. An \emph{$n$-crossing} is a crossing where $n$ strands meet at one point, with each strand bisecting the crossing. We call this crossing a \emph{multi-crossing} if $n>2$, and we call the traditional type ($n=2$) a \emph{double crossing}. The strands are labeled with the levels $1, \dots, n$ from the top.
In~\cite{triple crossing}, Adams proved that every link has an $n$-crossing projection for all $n \ge 3$. This fact allows us to generalize notions in traditional knot theory to their multi-crossing versions. For example, the \emph{crossing number} $c(L)$, the minimum number of crossings in any double crossing projection of the link $L$, generalizes to the \emph{multi-crossing number} $c_n(L)$, which is the minimum number of crossings in any $n$-crossing projection of the link $L$. This gives us an infinite spectrum of crossing numbers that can be explored.
In this paper we similarly generalize ideas from braid theory to multi-crossing braids. Alexander's Theorem states that every link can be put into braid form \cite{alexander}. This theorem has since been proved in several ways \cite{yamada} \cite{braid index bound}. Is it true that every link can be put into an $n$-crossing braid, that is, a braid in which every crossing is an $n$-crossing? If so, we can generalize the notion of the \emph{braid index} $\beta(L)$, the minimum number of strings needed to represent the link $L$ as the closure of a double crossing braid. Can we define $n$-crossing braid indices $\beta_n(L)$, and what are their properties?
In Section~\ref{sec:evenAlexander} we prove a version of Alexander's Theorem~\cite{alexander} for even multi-crossing braids. Specifically, we prove that every link can be represented as a closed $n$-crossing braid, for all even $n$. In Section~\ref{sec:tripleAlexander} we consider an equivalent of Alexander's Theorem for triple crossing braids. We prove that every link with at least two components can be represented as a closed triple-crossing braid. In Section~\ref{sec:oddAlexander} we extend this result to all odd $n$. Finally, in Section~\ref{sec:braidIndices} we find relationships between the $n$-crossing braid indices.
This paper is a part of a senior thesis completed at Williams College. I would like to thank my advisor Professor Colin Adams for his guidance throughout this year. Thanks to Daniel Vitek for suggesting this problem. He had proved Theorem~\ref{thm:evenAlexander} for $n=6,10,14,18,22,26$ using Lemma~\ref{lem:levelPosition} and computation by Mathematica. The idea to convert the problem into looking at the corresponding permutations is due to him, as is the proof of Lemma~\ref{lem:levelPosition}.
\section{Even Multi-crossing Braids}
\label{sec:evenAlexander}
In this section, we prove a result similar to Alexander's Theorem for $n$-crossing braids, where $n$ is even. Specifically, we prove the following.
\begin{theorem}
\label{thm:evenAlexander}
Every link can be represented as a closed $n$-crossing braid, for all even $n$.
\end{theorem}
We prove this theorem by starting with a link in double crossing braid form, and finding an isotopy to make it a sequence of $n$-crossings.
\subsection{Level position}
Given a collection of $m$ strings, label them $1, \dots, m$ from the left. We can then think of each crossing that occurs in these $m$ strings as a permutation of the strings. Specifically, we can define a homomorphism $\phi:B_m \rightarrow S_m$ by $\phi(\sigma_i) = \phi(\sigma_i^{-1}) = (i,i+1)$, extended multiplicatively. A double crossing corresponds to a transposition $(i, i+1)$. An $n$-crossing corresponds to a permutation of the form $\pi_j = (j,j+n-1)(j+1,j+n-2)\cdots$; we call any permutation of this form a \emph{crossing permutation}. Note that there are $m-n+1$ possible $n$-crossing permutations, corresponding to each $j$ with $1 \le j \le m-n+1$.
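For illustration (the helper below is ours, not from the paper), $\pi_j$ simply reverses the block of $n$ consecutive labels starting at $j$; in particular, each crossing permutation is an involution.

```python
def crossing_perm(j, n, m):
    # The n-crossing permutation pi_j on {1, ..., m}: it reverses the
    # block j, j+1, ..., j+n-1 and fixes every other label
    p = {x: x for x in range(1, m + 1)}
    for i in range(n):
        p[j + i] = j + n - 1 - i
    return p
```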
First we show a lemma that allows us to ignore the levels of each crossing and focus on the images under $\phi$. We say a sequence of crossings is \emph{disjoint} if each string is switched by at most one crossing in the sequence. In this section we only need this lemma for $s=2$, but we state a general version which can be used in later sections for other values of $s$.
\begin{lemma}
\label{lem:levelPosition}
Let $\alpha$ be a sequence of disjoint $s$-crossings. Suppose that a product of $n$-crossing permutations in $S_m$ produces $\phi(\alpha)$. Then there exists a sequence of $n$-crossings over an $m$-string braid which produces $\alpha$.
\end{lemma}
\begin{proof}
Consider the sequence of $n$-crossing permutations that produces $\phi(\alpha)$. We want to show that we can choose the levels of the corresponding $n$-crossings appropriately so that the result is equivalent to $\alpha$.
We do this by placing the $m$ strands on different heights. Choose an $s$-crossing in $\alpha$. We place the $s$ strands of this crossing in the $s$ highest levels, according to their levels in the $s$-crossing. We continue this process by choosing a new $s$-crossing in $\alpha$, and placing the $s$ strands of this crossing in the next $s$ highest levels. Once we have exhausted $s$-crossings in $\alpha$, the remaining strands can be placed in any order. The heights are well defined since the crossings are disjoint.
For each permutation corresponding to an $n$-crossing, we want to assign levels to the strands to make it into an $n$-crossing. We can simply choose the levels of the strands in the order of the heights assigned above. This will mean that each strand always stays on the same level, and that each $n$-crossing can be untangled easily (Fig.~\ref{fig:diagonalBox}). We call this a \emph{level position} of the braid.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2]{diagonalBox.png}
\caption{The left picture shows how we might use 6-crossings to obtain a double crossing which switches the first two strings, when the second string is the overstrand. The remaining strands have been ordered from left to right. The right picture is a view from above.}
\label{fig:diagonalBox}
\end{figure}
Then, once we have achieved $\phi(\alpha)$ we can pull the strings taut, so that the only crossings that are left are the $s$-crossing in $\alpha$ that we were looking for (Fig.~\ref{fig:diagonalBoxAfterPull}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2]{diagonalBoxAfterPull.png}
\caption{What happens after the strings are pulled taut. All the 6-crossings disappear and one double crossing remains.}
\label{fig:diagonalBoxAfterPull}
\end{figure}
\end{proof}
By this lemma, it suffices to show that we can use $n$-crossing permutations to obtain double crossing permutations.
\subsection{Conjugation of permutations}
Observe that for permutations $\pi, \sigma \in S_m$, where
\[
\pi = (i_1, i_2, \dots, i_r) \cdots (i_s, i_{s+1}, \dots, i_t)
\]
in cycle notation, the conjugate of $\pi$ by $\sigma$ has a nice form:
\[
\sigma \pi \sigma^{-1} = (\sigma(i_1), \sigma(i_2), \dots, \sigma(i_r))
\cdots (\sigma(i_s), \sigma(i_{s+1}), \dots, \sigma(i_t)).
\]
We consider the case when $\sigma$ is a crossing permutation. Then it is its own inverse, so $\sigma \pi \sigma^{-1} = \sigma \pi \sigma$. Thus, if we multiply both sides of $\pi$ by $\sigma$, we are essentially switching elements that appear in $\pi$ according to $\sigma$. Note that taking the conjugate does not change the cycle type, which is to say it keeps the number of cycles and the length of each cycle constant. Also note that taking conjugates by the crossing permutation $\pi_i$ can only affect elements between $i$ and $i+n-1$, and must leave all other elements fixed. Finally, observe that we can reverse this process since $\sigma$ is its own inverse; if we can obtain $\pi$ by taking conjugates of $\pi'$, then we can obtain $\pi'$ by taking conjugates of $\pi$ in reverse order.
\subsection{Obtaining permutations of the same cycle type}
Using this idea, we can start with $\pi_1$, and take conjugates by some $\pi_j$ to obtain different permutations with the same cycle type. In fact, we can show that for sufficiently large $m$, we can obtain any permutation with the same cycle type. Note that we can always make sure that we have enough strands (i.e. that $m$ is large enough) by taking stabilizations.
First we present several lemmas which will be helpful in proving this result. In the proofs of these lemmas we repeatedly take conjugates by crossing permutations. Recall that taking conjugates by the crossing permutation $\pi_i$ can only affect elements between $i$ and $i+n-1$, and must leave all other elements fixed. We can keep track of which entries of a permutation are affected by conjugation, by checking which entries lie within or outside the range from $i$ to $i+n-1$.
The first lemma shows how to obtain a permutation which sends 1 to some $N$.
\begin{lemma}
\label{lem:moveMaxPerm}
Let $n$ be even. Then, for any $N$ with $1 \le N \le \frac{3n}{2}$, there exists a sequence of $n$-crossing permutations over $S_{3n/2}$ whose product sends 1 to $N$.
\end{lemma}
\begin{proof}
First observe that over $S_{3n/2}$, we have the $n$-crossing permutations $\pi_1, \dots, \pi_{n/2+1}$.
We consider permutations which send $r$ to some $r+j$. We call this incrementing $r$ by $j$.
For $1 \le r \le \frac{n}{2}+1$, we can increment $r$ by $n-1$ with $\pi_r$. For $r=1,2$ and $1 \le s \le \frac{n}{2}-1$, we can increment $r$ by $2s$ with $\pi_{r+s} \pi_r$. (First $\pi_r$ sends $r$ to $r+n-1$. Then observe that $r+n-1$ is $s$ away from $r+s+n-1$, which is the highest entry in $\pi_{r+s}$, and $r+2s$ is $s$ away from $r+s$, which is the lowest entry in $\pi_{r+s}$.) Also, if $r=1$, we can increment it by 1 with $\pi_{r+1} \pi_{r+n/2} \pi_r$.
Hence we can obtain a permutation that sends 1 to 2, or to $n$, or to $2s+1$ for $1 \le s \le \frac{n}{2}-1$. We can also obtain a permutation that sends 1 to $2s+2$, for $1 \le s < \frac{n}{2}-1$, by first sending it to 2 and then incrementing by $2s$. Thus, for any $N \le n$, we can obtain a sequence of $n$-crossing permutations over $S_{3n/2}$ that sends 1 to $N$.
If $n < N \le \frac{3n}{2}$, then we can first take a permutation that sends 1 to some element $N-n+1$, where $1 < N-n+1 \le \frac{n}{2}+1$. Then we can compose it with $\pi_{N-n+1}$, which will send 1 to $N$.
\end{proof}
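As a sanity check of this lemma (our own computation, not part of the paper), the set of possible images of 1 under products of crossing permutations is exactly the orbit of 1 under the subgroup they generate, which can be computed directly:

```python
def orbit_of_one(n):
    # Orbit of the label 1 under the n-crossing permutations
    # pi_1, ..., pi_{m-n+1} acting on m = 3n/2 labels (n even)
    m = 3 * n // 2
    gens = []
    for j in range(1, m - n + 2):
        g = {x: x for x in range(1, m + 1)}
        for i in range(n):
            g[j + i] = j + n - 1 - i
        gens.append(g)
    orbit, frontier = {1}, [1]
    while frontier:
        x = frontier.pop()
        for g in gens:
            if g[x] not in orbit:
                orbit.add(g[x])
                frontier.append(g[x])
    return orbit
```

For small even $n$ the orbit is all of $\{1,\dots,\frac{3n}{2}\}$, as the lemma asserts.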
The next lemma describes how to increment the largest entry in a permutation.
\begin{lemma}
\label{lem:moveMax}
Let $n$ be even. Then, given a permutation of the form
\[
\tau = (1, 2, \dots, l_1)(l_1+1, \dots, l_2) \cdots (l_{t-1}+1, \dots, l_t-1, l_t),
\]
we can conjugate it by the $n$-crossing permutations over $S_{l_t+3n-3}$ to obtain any permutation of the form
\[
(1, 2, \dots, l_1) (l_1+1, \dots, l_2) \cdots (l_{t-1}+1, \dots, l_t-1, N),
\]
where $l_t \le N \le l_t+3n-3$.
\end{lemma}
\begin{proof}
First observe that over $S_{l_t + 3n-3}$, we have the $n$-crossing permutations $\pi_1, \dots, \pi_{l_t + 2n-2}$.
By applying Lemma~\ref{lem:moveMaxPerm} to the strings from $l_t$ to $l_t + \frac{3n}{2}-1$, we can see that for any $N$ with $l_t \le N \le l_t + \frac{3n}{2}-1$, there is a sequence of $n$-crossing permutations over $S_{l_t+3n/2-1}$ which sends $l_t$ to $N$, and leaves $1,\dots,l_t-1$ fixed. Then we can conjugate $\tau$ by this sequence of $n$-crossing permutations to obtain the permutation $(1, \dots, l_1)\cdots (l_{t-1}+1, \dots, l_t-1, N)$.
If $l_t+\frac{3n}{2} \le N \le l_t+3n-3$, then we first obtain a permutation $(1, \dots, l_1)\cdots (l_{t-1}+1, \dots, l_t-1, N')$ where $N'$ is an element with $l_t \le N' \le l_t + \frac{3n}{2}-1$ that is a multiple of $n-1$ away from $N$. Then we can conjugate by $\pi_{N'}$, $\pi_{N'+n-1}$, and so on, until we reach $N$ after conjugating by $\pi_{N-n+1}$. Note that in doing so, we only need the crossing permutations $\pi_1, \dots, \pi_{l_t + 2n-2}$. Thus, the crossing permutations over $S_{l_t+3n-3}$ are sufficient to produce any permutation of the form $(1, \dots, l_1)\cdots (l_{t-1}+1, \dots, l_t-1, N)$, where $N \le l_t+3n-3$.
\end{proof}
The next lemma describes how to increment the remaining entries of a permutation, without changing the order of the entries.
\begin{lemma}
\label{lem:moveAll}
Let $n$ be even. Then, given $\tau$ as in the statement of Lemma~\ref{lem:moveMax}, we can conjugate it by the $n$-crossing permutations over $S_{l_t+3n-3}$ to obtain any permutation of the form
\[
(a_1, \dots, a_{l_1}) (a_{l_1+1}, \dots, a_{l_2}) \cdots (a_{l_{t-1}+1}, \dots, a_{l_t})
\]
where $1 \le a_1 < a_2 < \cdots < a_{l_t} \le l_t+3n-3$.
\end{lemma}
\begin{proof}
We prove this by iterating over each entry in $\tau$, from largest to smallest. The base case (largest entry in $\tau$) can be done by Lemma~\ref{lem:moveMax}.
Suppose we have moved the $k$ largest entries to their appropriate positions, where $1 \le k \le l_t$. Let $N = a_{l_t-k+1}$, which is to say that we have a permutation of the form
\[
\tau' = (1, \dots, l_1) \cdots (l_{s-1}+1, \dots, l_t-k, N, a_{l_t-k+2}, \dots, a_{l_s}) \cdots (a_{l_{t-1}+1}, \dots, a_{l_t}).
\]
Let $N' = a_{l_t - k}$, the new desired value for the $(k+1)$st largest entry. Note that $N > N'$ by the hypothesis. We want to move $l_t-k$ to $N'$.
First consider the case when $N \le l_t - k + \frac{3n}{2} - 1$. Note that in this case $N' < l_t - k + \frac{3n}{2} - 1$. Observe that since we have only moved the $k$ largest entries, one of the $k$ indices between $l_t - k + \frac{3n}{2} - 1$ and $l_t + \frac{3n}{2} - 2$ must be ``empty'', which is to say it does not appear in $\tau'$, or that it is a fixed point in $\tau'$. Let $M$ be such an element.
We apply Lemma~\ref{lem:moveMaxPerm} to the $\frac{3n}{2}$ strings starting at $l_t - k + \frac{3n}{2} - 1$. Note that we need $l_t + 3n - 3$ strings for this to be possible for all $k$ with $1 \le k \le l_t$. Then, there exists a sequence $\{\pi_{b_i}\}$ of $n$-crossing permutations over $S_{l_t - k + 3n - 2}$ that sends $l_t - k + \frac{3n}{2} - 1$ to $M$, and leaves $1, \dots, l_t - k + \frac{3n}{2} - 2$ fixed. Note that this sequence in reverse order sends $M$ to $l_t-k+\frac{3n}{2} - 1$. Hence if we conjugate $\tau'$ by $\{\pi_{b_i}\}$ in reverse order, then the element $l_t-k+\frac{3n}{2} - 1$ will not appear in the resulting permutation. Thus we have some permutation of the form
\[
(1, \dots, l_1) \cdots (l_{s-1}+1, \dots, l_t-k, c_{l_t-k+1}, c_{l_t-k+2}, \dots, c_{l_s}) \cdots (c_{l_{t-1}+1}, \dots, c_{l_t}),
\]
where all $c_i$ are strictly greater than $l_t - k + \frac{3n}{2} - 1$. By Lemma~\ref{lem:moveMaxPerm}, there is a sequence of $n$-crossing permutations $\{\pi_{b_i}\}$ over $S_{l_t-k+3n/2-1}$ which sends $l_t-k$ to $N'$, and leaves $1,\dots,l_t-k-1$ fixed. (Note that $\{\pi_{b_i}\}$ also leaves entries greater than $l_t-k+\frac{3n}{2}-1$ fixed.) Then we can take conjugates by $\{\pi_{b_i}\}$ to obtain the permutation
\[
(1, \dots, l_1) \cdots (l_{s-1}+1, \dots, l_t-k-1, N', c_{l_t-k+1}, c_{l_t-k+2}, \dots, c_{l_s}) \cdots (c_{l_{t-1}+1}, \dots, c_{l_t}).
\]
Finally, we can take conjugates by $\{\pi_{b_i}\}$ in the forward order to move the other entries back and obtain
\[
(1, \dots, l_1) \cdots (l_{s-1}+1, \dots, l_t-k-1, N', N, a_{l_t-k+2}, \dots, a_{l_s}) \cdots (a_{l_{t-1}+1}, \dots, a_{l_t}).
\]
Note that since $N' < l_t - k + \frac{3n}{2} - 1$, the conjugations by $\{\pi_{b_i}\}$ do not affect $N'$.
Next, consider the case when $N' \le l_t - k + \frac{3n}{2} - 1 < N$. In this case, by Lemma~\ref{lem:moveMaxPerm} we can move $l_t-k$ to $N'$ without affecting any other elements.
Finally, consider the case when $N' > l_t - k + \frac{3n}{2} - 1$. First we use Lemma~\ref{lem:moveMaxPerm} to move $l_t-k$ to some $M'$, where $M'$ is an element with $l_t - k \le M' \le l_t + \frac{3n}{2}-1$ that is a multiple of $n-1$ away from $N'$. Then we can conjugate by $\pi_{M'}$, $\pi_{M'+n-1}$, and so on, until we reach $N'$.
\end{proof}
Finally we prove the desired result. We define the \emph{size} of a permutation to be the number of distinct entries that appear in the permutation when written in cycle notation, where we drop any 1-cycles.
\begin{lemma}
\label{lem:sameCycleType}
Let $n$ be even. Then, given any $\tau \in S_{N+3n-3}$ with size at most $N$, we can conjugate by the $n$-crossing permutations over $S_{N+3n-3}$ to obtain any permutation in $S_{N+3n-3}$ of the same cycle type as $\tau$.
\end{lemma}
\begin{proof}
Write
\[
\tau = (d_1, \dots, d_{l_1})(d_{l_1+1}, \dots, d_{l_2}) \cdots (d_{l_{t-1}+1}, \dots, d_{l_t})
\]
in cycle notation. Note that $l_t \le N$ since the size of $\tau$ is at most $N$. Then consider the permutation
\[
\tau' = (1, \dots, l_1)(l_1+1, \dots, l_2) \cdots (l_{t-1}+1, \dots, l_t).
\]
Note $\tau'$ has the same cycle type as $\tau$. It suffices to show that we can conjugate $\tau'$ to get any permutation of the same cycle type, since we can reverse this process to obtain $\tau'$ from $\tau$.
Since $l_t \le N$, we know we can use the $n$-crossing permutations over $S_{l_t + 3n - 3}$. Thus we can apply Lemma~\ref{lem:moveAll} to obtain any permutation of the form
\[
(a_1, \dots, a_{l_1})(a_{l_1+1}, \dots, a_{l_2}) \cdots (a_{l_{t-1}+1}, \dots, a_{l_t}),
\]
where $a_1 < \cdots < a_{l_t}$. Hence it suffices to show that we can switch two entries in the permutation. We show how to switch two entries in $\tau'$, after which all entries can be moved to the appropriate positions.
Suppose we want to switch $r$ and $r+1$ for $r < l_t$; in other words, suppose we want to obtain the permutation
\[
(1, \dots, l_1) \cdots (l_{s-1}+1, \dots, r-1, r+1, r, r+2, \dots, l_s) \cdots (l_{t-1}+1, \dots, l_t).
\]
Note that $r$ and $r+1$ may appear in different cycles in $\tau'$, but the same argument applies. First we can conjugate $\tau'$ by $\pi_r$ to obtain the permutation
\[
(1, \dots, l_1) \cdots (l_{s-1}+1, \dots, r-1, r+n-1, r+n-2, e_{r+2}, \dots, e_{l_s}) \cdots (e_{l_{t-1}+1}, \dots, e_{l_t}),
\]
where the $e_i$ are strictly less than $r+n-2$ for $i \ge r+2$. Then we can conjugate by $\pi_{r+n-2}$ to obtain the permutation
\[
(1, \dots, l_1) \cdots (l_{s-1}+1, \dots, r-1, r+2n-4, r+2n-3, e_{r+2}, \dots, e_{l_s}) \cdots (e_{l_{t-1}+1}, \dots, e_{l_t}).
\]
We can then conjugate by $\pi_{r+\frac{3n}{2}-3}$ to obtain the permutation
\[
(1, \dots, l_1) \cdots (l_{s-1}+1, \dots, r-1, r+2n-3, r+2n-4, e_{r+2}, \dots, e_{l_s}) \cdots (e_{l_{t-1}+1}, \dots, e_{l_t}).
\]
Observe that we have switched the $r$th entry with the $r+1$st entry. Finally, we can conjugate by $\pi_{r+n-2}$ and then by $\pi_r$ to move all entries back and obtain the desired permutation.
\end{proof}
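As a sanity check (not part of the proof), the swapping sequence above is concrete enough to verify numerically. The following Python sketch, whose helper names (\texttt{pi}, \texttt{conj}, \texttt{cycle}) are ours rather than from the text, checks the case $n = 4$, $r = 2$: conjugating the cycle $(1,2,3,4)$ by $\pi_r$, $\pi_{r+n-2}$, $\pi_{r+3n/2-3}$, $\pi_{r+n-2}$, $\pi_r$ in order swaps the entries $2$ and $3$.

```python
def pi(j, n, m):
    """The n-crossing permutation pi_j over S_m, as a dict on {1, ..., m}:
    (j, j+n-1)(j+1, j+n-2)...; for odd n the central strand is fixed."""
    p = {i: i for i in range(1, m + 1)}
    for t in range(n // 2):
        p[j + t], p[j + n - 1 - t] = j + n - 1 - t, j + t
    return p

def compose(f, g):
    """Composition (f g)(x) = f(g(x))."""
    return {x: f[g[x]] for x in g}

def conj(p, q):
    """Conjugate p by the involution q, i.e. q p q; this relabels the
    entries of p according to q."""
    return compose(q, compose(p, q))

def cycle(entries, m):
    """The single cycle (e_1, ..., e_k) as a permutation over {1, ..., m}."""
    p = {i: i for i in range(1, m + 1)}
    for a, b in zip(entries, entries[1:] + entries[:1]):
        p[a] = b
    return p

n, m, r = 4, 8, 2
tau = cycle([1, 2, 3, 4], m)
for j in (r, r + n - 2, r + 3 * n // 2 - 3, r + n - 2, r):
    tau = conj(tau, pi(j, n, m))
assert tau == cycle([1, 3, 2, 4], m)  # entries r = 2 and r + 1 = 3 swapped
```

The intermediate permutations match the displayed forms: after $\pi_2$ the cycle reads $(1,5,4,3)$, after $\pi_4$ it reads $(1,6,7,3)$, and $\pi_5$ swaps $6$ and $7$.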
While this suffices to prove Theorem~\ref{thm:evenAlexander}, we wish to reduce the number $d+3n-3$ as much as possible. As we will see in Section~\ref{sec:braidIndices}, we conjecture that it can be reduced to $n+2$ for $d \le n$.
\subsection{Proof of Theorem~\ref{thm:evenAlexander}}
Before proving Theorem~\ref{thm:evenAlexander}, we present one final lemma.
\begin{lemma}
\label{lem:evenPair}
Let $n$ be even, and let $m \ge \frac{5n}{2}-1$. Then we can obtain the permutation $(1,n)(2n-1,2n)$ as a product of the $n$-crossing permutations over $S_m$.
\end{lemma}
\begin{proof}
First note that over $S_{5n/2-1}$ we have the crossing permutations $\pi_1, \dots, \pi_{3n/2}$.
Take the crossing permutation $\pi_1 = (1,n)(2,n-1)\cdots(\frac{n}{2}, \frac{n}{2}+1)$. We can conjugate by $\pi_n$ to obtain the permutation $(1,2n-1)(2,n-1)\cdots(\frac{n}{2}, \frac{n}{2}+1)$. Then conjugate by $\pi_{3n/2}$ to get $(1,2n)(2,n-1)\cdots(\frac{n}{2}, \frac{n}{2}+1)$. Thus we have moved the largest entry of the permutation to $2n$.
We then perform a similar sequence to move the smallest entry to $2n-1$. This is done by conjugating by $\pi_1$ and then by $\pi_n$. We are now left with the permutation $(2n-1,2n)(2,n-1)\cdots(\frac{n}{2},\frac{n}{2}+1)$. Note that the first conjugation by $\pi_1$ also moves the other entries around, but the pairs stay the same; for example $\pi_1$ switches the entries 2 and $n-1$, but this keeps the cycle $(2,n-1)$ constant.
Finally we can multiply this permutation by $\pi_1$. The transpositions in the middle would cancel, and we are left with $(1,n)(2n-1,2n)$.
\end{proof}
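The conjugation sequence in the proof of Lemma~\ref{lem:evenPair} can also be verified by direct computation. Below is a Python sketch for the case $n = 6$, $m = 5n/2 - 1 = 14$; the helpers \texttt{pi}, \texttt{compose}, and \texttt{conj} are ours, not from the text.

```python
def pi(j, n, m):
    """The n-crossing permutation pi_j over S_m, as a dict on {1, ..., m}."""
    p = {i: i for i in range(1, m + 1)}
    for t in range(n // 2):
        p[j + t], p[j + n - 1 - t] = j + n - 1 - t, j + t
    return p

def compose(f, g):
    """Composition (f g)(x) = f(g(x))."""
    return {x: f[g[x]] for x in g}

def conj(p, q):
    """Conjugate p by the involution q: relabel the entries of p by q."""
    return compose(q, compose(p, q))

n = 6
m = 5 * n // 2 - 1                 # = 14 strands
p = pi(1, n, m)                    # (1,6)(2,5)(3,4)
p = conj(p, pi(n, n, m))           # largest entry: n -> 2n-1
p = conj(p, pi(3 * n // 2, n, m))  # 2n-1 -> 2n
p = conj(p, pi(1, n, m))           # smallest entry: 1 -> n
p = conj(p, pi(n, n, m))           # n -> 2n-1
p = compose(p, pi(1, n, m))        # middle transpositions cancel
target = {i: i for i in range(1, m + 1)}
target[1], target[n] = n, 1
target[2 * n - 1], target[2 * n] = 2 * n, 2 * n - 1
assert p == target                 # p = (1,n)(2n-1,2n)
```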
We are now ready to prove Theorem~\ref{thm:evenAlexander}.
\begin{proof}[Proof of Theorem~\ref{thm:evenAlexander}]
Let $L$ be a link. Put it in a double crossing braid form; call this braid $\alpha$.
First consider the case when $n = 4k + 2$. Start with $\alpha$, and take stabilizations until we have at least $3n+1$ strands; call this braid $\alpha'$.
By Lemma~\ref{lem:evenPair}, we can obtain the permutation $(1,n)(2n-1,2n)$. This permutation has size 4, so by Lemma~\ref{lem:sameCycleType}, we can conjugate it to obtain any permutation whose cycle type is two transpositions.
We start with $\pi_1$, which has an odd number of transpositions. We can cancel pairs of transpositions if we multiply by the pairs, which we can obtain by Lemma~\ref{lem:sameCycleType}. Thus we can cancel all but one of the transpositions. By Lemma~\ref{lem:sameCycleType}, we can rearrange the entries of this transposition to obtain any transposition of the form $(i,i+1)$ for any $i$ with $1 \le i < 3n+1$. Hence, by Lemma~\ref{lem:levelPosition}, we can produce any double crossing that appears in $\alpha'$ as a product of $n$-crossings. Thus, we have produced an $n$-crossing braid that is equivalent to $L$.
Now consider the case when $n=4k$. Observe that the permutations obtained from the $n$-crossings are all even. Therefore these permutations cannot generate the transpositions, which are odd. However, we can start with a double crossing braid with an even number of crossings, which can be generated by disjoint pairs of double crossings. Note that if two consecutive crossings involve a common strand, we can obtain one of them with a pair of crossings by creating an extra dummy crossing elsewhere, and obtain the other with a pair which undoes the dummy crossing.
Start with $\alpha$, and take stabilizations until we have at least $3n+1$ strands. If this braid has an odd number of crossings, take an extra stabilization so we have an even number of crossings. From the top of the braid, put the crossings in disjoint pairs, adding a pair of dummy crossings as necessary. Call this braid $\alpha'$.
As before, by Lemma~\ref{lem:evenPair} and Lemma~\ref{lem:sameCycleType} we can obtain any permutation whose cycle type is two transpositions. We start with $\pi_1$, which has an even number of transpositions. We can cancel pairs of transpositions as before, so we are left with a pair of transpositions. Then, by Lemma~\ref{lem:sameCycleType} we can rearrange the entries to get any pair of transpositions of the form $(a, a+1)(b,b+1)$, where the four entries are all distinct. Hence, by Lemma~\ref{lem:levelPosition}, we can produce any disjoint pair of double crossings that appears in $\alpha'$ as a product of $n$-crossings. Thus, we have produced an $n$-crossing braid that is equivalent to $L$.
\end{proof}
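The parity dichotomy used in the two cases above (an $n$-crossing permutation is a product of $n/2$ transpositions, hence even exactly when $n \equiv 0 \pmod 4$) can be checked directly. A small Python sketch; the helper \texttt{pi}, which builds $\pi_j$ as a dictionary, is ours, not from the text.

```python
def pi(j, n, m):
    """The n-crossing permutation pi_j over S_m, as a dict on {1, ..., m}."""
    p = {i: i for i in range(1, m + 1)}
    for t in range(n // 2):
        p[j + t], p[j + n - 1 - t] = j + n - 1 - t, j + t
    return p

def sign(p):
    """Sign of a permutation: (-1) to the number of transpositions,
    computed from the cycle lengths."""
    seen, transpositions = set(), 0
    for start in p:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = p[x]
            length += 1
        transpositions += length - 1
    return -1 if transpositions % 2 else 1

assert sign(pi(1, 8, 12)) == 1    # n = 4k: even, cannot yield a single transposition
assert sign(pi(1, 6, 10)) == -1   # n = 4k+2: odd
```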
\section{Triple-crossing Braids}
\label{sec:tripleAlexander}
Once we know that every link can be put into an $n$-crossing braid for even $n$, a natural question to ask is: is this also true when $n$ is odd?
First, consider an open braid with $m$ strings. If we label the strings $1,\dots,m$ from left to right, we can see that any odd multi-crossing only mixes strings with the same parity. The closure of the braid must therefore have at least two components, which means that we cannot put knots in an odd multi-crossing braid form. However, we claim that we can put any link with two or more components into any odd multi-crossing braid form. In this section we prove this for $n=3$.
\begin{theorem}
\label{thm:tripleAlexander}
Every link with two or more components can be represented as a closed triple crossing braid.
\end{theorem}
This will be extended to other odd $n$ in the next section.
\subsection{Setup}
Consider an open braid with $m \ge 3$ strings. As before, we can label the strings $1,\dots,m$ from left to right, at the top of the whole braid. We call this label the \emph{index} of a string. At a given section of the braid, we can also label the strings $1, \dots, m$ from left to right at the top of this section. We call this the \emph{position} of the string in the section. Note that for a given string, the index stays the same from the top of the braid to the bottom, but the position changes every time it is involved in a crossing.
First we show a lemma which takes a double crossing braid and isotopes it into a form that is easier to work with.
\begin{lemma}
\label{lem:evenOdd}
Let $\alpha$ be a closed double crossing braid with at least 2 components. Then we can find an isotopy of $\alpha$ such that in the resulting braid, a set of components always enter and leave the braid in an odd position, while the remaining components always enter and leave the braid in an even position.
\end{lemma}
\begin{proof}
First, look at the string with index 1, and designate it as an odd component. Then we check whether the string with index 2 belongs to another component, which we designate as an even component. If it is the same component, then we can find some string from another component that has a larger index, and make it have index 2 as follows. If the string from another component has index $i$, then we can conjugate the open braid by $\sigma_{i-1}\sigma_{i-2}\cdots\sigma_2$, so that this string now has index 2.
Then we continue this process, checking at every step that a string with an odd (similarly even) index $i$ is an odd (similarly even) component or a new component, and if not, finding an appropriate component and making it index $i$. Then at the top of the braid, the strings alternate between the odd and the even components. If we do not have enough strands for the odd components or the even components, then we can perform stabilizations on a component to obtain new strings.
\end{proof}
Note that once we go through this process and make sure that the top of the braid alternates between odd and even components, we know that the bottom of the braid follows the same pattern. This is because if a string leaves the braid in an odd position $i$, it must be the same component as the string that enters the braid at the $i$th index. This means that this string must be an odd component. Similarly a string that leaves the braid in an even position must be an even component. This also means that if a string enters the braid at an odd (similarly even) index, then it must leave the braid at an odd (similarly even) index. In other words, the parity of the position of the string at the bottom of the braid must be the same as the parity of its index.
\subsection{Putting the braid in level position}
Recall from Section~\ref{sec:evenAlexander} that we can put the braid in level position; we assign heights to each string, and if we put in crossings in a way that keeps each string on its level, we can pull the strings taut and then we are left with a set of crossings which represents the corresponding permutation of the strings. We can assign these heights according to the index of each string, so that the index and the level coincide.
We describe a process to isotope this braid so that, starting at the top of the braid and moving down, we are left with a sequence of triple crossings and then a double crossing braid in level position. (Note that when we refer to the ``top'' or the ``bottom'' of the braid, we always mean the beginning and the end of the braid word, rather than the heights that are assigned to a braid in level position, which we refer to in terms of its level.)
First we define a notion that becomes key to this argument. A \emph{clasp} is two strings that are hooked together as below (Fig.~\ref{fig:clasp}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{clasp.png}
\caption{Two strings hooked together is called a clasp.}
\label{fig:clasp}
\end{figure}
Note that in a braid, a clasp appears as $\sigma_i \sigma_i$.
\begin{lemma}
\label{lem:tripleLevel}
Let $\alpha$ be a double crossing braid with 3 or more strings. We can find an isotopy of this braid such that we are left with a sequence of triple crossings followed by a double crossing braid in level position.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:evenOdd}, we can find an isotopy of $\alpha$ such that the parity of the position of each string is preserved from the top of the braid to the bottom.
In the course of this argument, we can ``ignore'' any sequence of triple crossings at the top of the braid, as long as we do not try to take conjugations or stabilizations. This is because triple crossings always preserve the parity of the position of each string, which is the property that is central to this argument. If we can turn some double crossing braid into a triple crossing braid, we can clearly do the same to a sequence of triple crossings followed by this double crossing braid.
Start with a double crossing braid $\alpha$ such that the parity of the position of each component remains the same. Assign the indices $1, \dots, m$ to the $m$ strings. We can then find a new (different) double crossing braid $\alpha'$, with the same projection as $\alpha$ but different crossings, that is in level position with respect to this leveling. We find an isotopy of $\alpha$ such that we are left with a sequence of triple crossings followed by $\alpha'$.
We start from the top of the braid $\alpha$, and at every step we find a crossing that is different from the braid $\alpha'$ in level position, and change this crossing so that we are closer to being in level position. Note that in changing this crossing, we may introduce triple crossings at the top of the braid to preserve the original braid type, but we are not concerned with this.
This will be done recursively. Suppose we want to change a double crossing at the top of the braid. Then we can introduce two trivial crossings under this crossing. Then the two crossings at the top can be turned into triple crossings as below. Thus we obtain a double crossing braid with this top crossing flipped (Fig.~\ref{fig:tripleTop}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{tripleTop.png}
\caption{Changing a crossing at the top of the braid. Introduce two trivial crossings below, and use an extra strand to turn the two top crossings into triple crossings. We are now left with two triple crossings followed by a double crossing, which is now different from the one we started with.}
\label{fig:tripleTop}
\end{figure}
Now suppose we want to change a crossing $\sigma$ in the middle of a braid, assuming that all crossings above it have been changed so that the portion above it is in level position. Let $A$ be the portion of the braid above this crossing, not including the crossing itself. Then we can add trivial crossings and obtain $A\sigma = A\sigma \sigma A^{-1} A \sigma^{-1}$ (Fig.~\ref{fig:tripleSequence}). It suffices to turn $A \sigma \sigma A^{-1}$ into triple crossings, for this would mean we have changed $A \sigma$ into a sequence of triple crossings followed by $A \sigma^{-1}$. Note that since $A$ is in level position, we know that $A^{-1}$ must also be in level position with the same leveling of strings.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.25]{tripleSequence.png}
\caption{Starting with $A \sigma$, we can add $A^{-1} A \sigma^{-1}$ below it, since it is equal to the identity. If we can turn $A \sigma \sigma A^{-1}$ into triple crossings, then we are left with $A \sigma^{-1}$ as desired.}
\label{fig:tripleSequence}
\end{figure}
Let $\sigma$ be a crossing that switches the strings with indices $a,b$, where $a < b$. Note that the \emph{positions} of these strings must be adjacent on this portion of the braid. Recall that $A$ and $A^{-1}$ are in level position, so these strings are in the levels $a$, $b$ respectively in both $A$ and $A^{-1}$. The crossings $\sigma \sigma$ form a clasp that violates the level position, and may get in the way of strings that are between $a$ and $b$. For clarification, call the first crossing of the clasp $\sigma_r$, and the second one $\sigma_s$.
Let $i$, $i+1$ be the positions of the two strings at the top of the clasp. Note that because $\sigma_r$ is a crossing that violates the level position, $a$ must be the understrand of this crossing. Then $a$ is the overstrand of $\sigma_s$.
We consider two possible cases: $a$ could be in the $i$th position at the top of the clasp, or it could be in the $i+1$st position. If $a$ is in the $i+1$st position, this means that $b$ is in the $i$th position. Now, pull the parts of $a$ and $b$ in $A$ and $A^{-1}$ taut. Then there must be a crossing between $a$ and $b$ in both $A$ and $A^{-1}$. Let $\sigma_p$ be the one in $A$, and let $\sigma_q$ be the one in $A^{-1}$. Observe that $a$ is the overstrand of both of these crossings, since $a$ is in a higher level than $b$. Then $\sigma_q$ cancels with $\sigma_s$, since $a$ is the overstrand in both. We can then move $\sigma_p$ down so that $\sigma_p$ and $\sigma_r$ form a clasp between the $a$ and $b$ strings; however $a$ is now in the $i$th position at the top of the clasp, and $b$ is in the $i+1$st (Fig.~\ref{fig:tripleSigma}). Note that the portions above and below this clasp are still in level position. Hence we can assume that $a$ is in the $i$th position at the top of the clasp, and $b$ is in the $i+1$st position.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.25]{tripleSigma.png}
\caption{If $a$ is the $i+1$st position at the top of the clasp, then there must be a crossing $\sigma_p$ between $a$ and $b$ in $A$, and a corresponding crossing $\sigma_q$ in $A^{-1}$. Then $\sigma_q$ cancels with $\sigma_s$, and a clasp is formed by $\sigma_p$ and $\sigma_r$. The portions above and below the clasp are still in level position. Thus we can assume that $a$ is in the $i$th position at the top of the clasp.}
\label{fig:tripleSigma}
\end{figure}
The strings $a$ and $b$ look like the following figure (Fig.~\ref{fig:tripleDiagonalMess}). Both strings stay on their levels until they reach the clasp, where they wrap around each other and move back to their original levels.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2]{tripleDiagonalMess.png}
\caption{The left picture depicts the strings $a$ and $b$. Note that the other strings are not drawn, but they would all stay on the same level. The right picture is a view from the side, where this time the other strings are also drawn. The two strings move toward the level of the other string, wrap around each other, and then go back to their original levels.}
\label{fig:tripleDiagonalMess}
\end{figure}
Now pull all the strings taut. All strings on levels higher than $a$ or lower than $b$ (note lower numbers are on higher levels here) are not affected, so they will go straight down from the top of the braid to the bottom. The strings $a$ and $b$ create a diagonal plane between the $a$th and the $b$th levels. The strings that are in between the levels $a$ and $b$ will be either above or below this plane, depending on their positions at the clasp. If a string's position was greater than $i+1$, then it will be above this diagonal plane; if its position was less than $i$, then it will be below the plane. Thus we have the following picture (Fig.~\ref{fig:tripleDiagonalTaut}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2]{tripleDiagonalTaut.png}
\caption{The left picture shows what happens to $a$ and $b$ once the strings are pulled taut. The right picture shows the view from the side, where we see which strings lie above/below the diagonal plane created by $a$ and $b$.}
\label{fig:tripleDiagonalTaut}
\end{figure}
Therefore when all of these strings are pulled taut we have the following picture (Fig.~\ref{fig:tripleHook}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{tripleHook.png}
\caption{Once the strings are pulled taut, we are left with one clasp between the strings $a$ and $b$, and all other strings are either above or below this clasp.}
\label{fig:tripleHook}
\end{figure}
The strings that are in between $a$ and $b$ will go over both strands of the clasp, or under both strands of the clasp. This means that all of these $b-a-1$ strings can be moved to one side of the clasp. But any pair of such strings can be turned into two triple crossings as below (Fig.~\ref{fig:tripleResolve}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{tripleResolve.png}
\caption{We can move the strings between $a$ and $b$ to one side, and resolve pairs of them into triple crossings.}
\label{fig:tripleResolve}
\end{figure}
Therefore we only need to consider the case when one string and the clasp are left, or the case when only the clasp is left. If we have one string left we can simply let it run through the middle of the clasp (Fig.~\ref{fig:tripleResolveLast}). Note that we can always do this because it can be under or over both strings of the clasp, but it is never over one and under the other. If only the clasp is left we can take a strand from another string, and pull it under both crossings of the clasp. We can always do this because we have assumed that we have at least 3 strings.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{TripleResolveLast.png}
\caption{If one string is left, we let it run through the middle of the clasp. The left figure depicts the case when this string goes under both strands of the clasp. If only the clasp is left, then we can take an extra string and pull it under both crossings of the clasp.}
\label{fig:tripleResolveLast}
\end{figure}
Therefore we can recursively change every crossing to turn the braid into a sequence of triple crossings followed by a double crossing braid in level position.
\end{proof}
\subsection{Obtaining the triple crossing braid}
All we have to do now is to turn this double crossing braid in level position into a triple crossing braid.
\begin{proof}[Proof of Theorem~\ref{thm:tripleAlexander}]
Let $L$ be a link with at least 2 components. Put it in double crossing braid form. If it is a 2-braid, take a stabilization so that it has at least 3 strings. By Lemma~\ref{lem:tripleLevel}, we can find an isotopy of this braid so that it becomes a sequence of triple crossings followed by a double crossing braid in level position. It remains to isotope this double crossing braid into a triple crossing braid.
Recall that each string enters and leaves the braid in positions with the same parity. We can pull the strings taut so that the braid is just a permutation of the even components, and a permutation of the odd components. We can then shift the strings slightly so that the permutation of the odd components occurs in the top half of the braid, and the permutation of the even components occurs in the bottom half of the braid (Fig.~\ref{fig:tripleLevel}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{tripleLevel.png}
\caption{Once the strings are pulled taut, we are left with a permutation of the odd strings and a permutation of the even strings. Since the braid is in level position, we can separate the two permutations as above.}
\label{fig:tripleLevel}
\end{figure}
Now, any permutation of the odd strands is generated by transpositions of two adjacent odd strands. Note that a triple crossing switches two adjacent odd strands. We can therefore obtain any permutation of the odd strands as a sequence of triple crossings, and similarly for any permutation of the even strands. Thus we have a triple crossing braid form for $L$.
\end{proof}
\section{Odd Multi-crossing Braids}
\label{sec:oddAlexander}
We have already seen that given a link $L$ with two or more components, it can be put into a triple-crossing braid form. In this section, we extend this result to any $n$-crossings, where $n$ is odd. Our goal is to prove the following theorem:
\begin{theorem}
\label{thm:oddAlexander}
Every link with two or more components can be represented as a closed $n$-crossing braid, for all odd $n$.
\end{theorem}
\subsection{Setup}
We start by putting $L$ in a triple crossing braid form by Theorem~\ref{thm:tripleAlexander}, and finding an equivalent $n$-crossing braid. In other words, we want to show that we can produce the triple crossings from a sequence of $n$-crossings.
As in the case for $n$ even, by Lemma~\ref{lem:levelPosition} it suffices to show that we can obtain each triple crossing permutation as a combination of the $n$-crossing permutations. The triple crossing permutations are all transpositions of the form $(i, i+2)$, and when $n$ is odd, the $n$-crossing permutations are $\pi_j = (j,j+n-1)(j+1,j+n-2)\cdots(j+\frac{n-3}{2},j+\frac{n+1}{2})$. Observe that there are two types of triple crossing permutations: those that switch even numbered strings, and those that switch odd numbered strings. We call them \emph{even-string transpositions} and \emph{odd-string transpositions} respectively.
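The parity claim above can be read off the formula for $\pi_j$: each transposition $(j+t,\, j+n-1-t)$ moves a strand by $n-1-2t$, which is even when $n$ is odd, so every odd multi-crossing sends each strand to a strand of the same parity. A quick Python check for $n = 5$ (the helper \texttt{pi} is ours, not from the text):

```python
def pi(j, n, m):
    """The n-crossing permutation pi_j over S_m, as a dict on {1, ..., m};
    for odd n the central strand j + (n-1)/2 is fixed."""
    p = {i: i for i in range(1, m + 1)}
    for t in range(n // 2):
        p[j + t], p[j + n - 1 - t] = j + n - 1 - t, j + t
    return p

n, m = 5, 12
for j in range(1, m - n + 2):
    p = pi(j, n, m)
    # every strand is sent to a strand of the same parity
    assert all((x - p[x]) % 2 == 0 for x in p)
```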
\subsection{Obtaining pairs of transpositions with same parity}
First we show that given a sufficiently large number of strings, we can use the $n$-crossing permutations to obtain any pair of even-string transpositions, or any pair of odd-string transpositions. Note that these pairs of transpositions must be ``disjoint'' in that the two transpositions permute 4 distinct numbers, for otherwise we would have a 3-cycle or the identity function. While we refrain from using the word ``disjoint'' in this context, it is worth noting that this is different from the meaning of disjoint crossings defined for Lemma~\ref{lem:levelPosition}. For example, a triple crossing permuting the first three strands would have the corresponding transposition $(1,3)$, and a triple crossing permuting the second to fourth strands would have the corresponding transposition $(2,4)$. These two crossings are not disjoint, but the two corresponding permutations are considered to be ``disjoint'' since they involve four distinct numbers.
We can always make sure that we have enough strands as follows. Take the second string from the right, and perform a Type II Reidemeister move over the rightmost string. Then we can stabilize the portion that has now become the rightmost string. We have now added three double crossings, which can be put together into a triple-crossing (Fig.~\ref{fig:3-stabilization}). We call this move a \emph{3-stabilization}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2]{3-stabilization.png}
\caption{A 3-stabilization.}
\label{fig:3-stabilization}
\end{figure}
We start by obtaining one pair of even-string transpositions, and one pair of odd string transpositions.
\begin{lemma}
\label{lem:sameParityPairs}
Let $n$ be odd, and let $m \ge \frac{5n-1}{2}$. Then we can obtain the permutations $(3,n)(2n-1,2n+1)$ and $(2,n+1)(2n,2n+2)$, as a product of the $n$-crossing permutations over $S_m$.
\end{lemma}
\begin{proof}
First note that over $S_{(5n-1)/2}$ we have the crossing permutations $\pi_1, \dots, \pi_{(3n+1)/2}$.
Recall that multiplying both sides of $\pi$ by $\sigma$, which is equivalent to conjugating by $\sigma$ since $\sigma^{-1} = \sigma$, switches the entries of $\pi$ according to $\sigma$. Recall also that this does not change the cycle type, which is to say it keeps the number of cycles and the length of each cycle constant. We start with some crossing permutation $\pi_i$ and take conjugates by some $\pi_j$ to obtain different permutations with the same cycle type.
We start by obtaining one pair of odd-string transpositions. Take the crossing permutation $\pi_1 = (1,n)(2,n-1)\cdots(\frac{n-1}{2},\frac{n+3}{2})$. Note that in an odd multi-crossing, the central string is fixed; in this case $\pi_1$ keeps the $\frac{n+1}{2}$th string in the same position. We can conjugate by $\pi_n$ to obtain the permutation $(1,2n-1)(2,n-1)\cdots(\frac{n-1}{2},\frac{n+3}{2})$. Then conjugate by $\pi_{\frac{3n+1}{2}}$ to get $(1,2n+1)(2,n-1)\cdots(\frac{n-1}{2},\frac{n+3}{2})$. Thus we have moved the largest entry of the permutation to $2n+1$.
We then perform a similar sequence to move the smallest entry to $2n-1$. This is done by conjugating by $\pi_1$ and then by $\pi_n$. We are now left with the permutation $(2n-1,2n+1)(2,n-1)\cdots(\frac{n-1}{2},\frac{n+3}{2})$. Note that the first conjugation by $\pi_1$ also moves the other entries around, but the pairs stay the same; for example $\pi_1$ switches the entries 2 and $n-1$, but this keeps the cycle $(2,n-1)$ constant. We can then multiply this permutation by $\pi_1$. The transpositions in the middle would cancel, and we are left with $(1,n)(2n-1,2n+1)$.
Now, we can similarly obtain the permutation $(2,n+1)(2n,2n+2)$, a pair of even-string transpositions, by starting with $\pi_2$.
Finally, we can conjugate $(1,n)(2n-1,2n+1)$ by $\pi_{(n-1)/2}$ to obtain $(1,n-2)(2n-1,2n+1)$. We can conjugate the result by $\pi_1$ to obtain $(3,n)(2n-1,2n+1)$.
\end{proof}
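As with the even case, the conjugation sequence in Lemma~\ref{lem:sameParityPairs} can be verified numerically. The following Python sketch checks the case $n = 5$, $m = (5n-1)/2 = 12$; the helpers \texttt{pi}, \texttt{compose}, and \texttt{conj} are ours, not from the text.

```python
def pi(j, n, m):
    """The n-crossing permutation pi_j over S_m, as a dict on {1, ..., m}."""
    p = {i: i for i in range(1, m + 1)}
    for t in range(n // 2):
        p[j + t], p[j + n - 1 - t] = j + n - 1 - t, j + t
    return p

def compose(f, g):
    """Composition (f g)(x) = f(g(x))."""
    return {x: f[g[x]] for x in g}

def conj(p, q):
    """Conjugate p by the involution q: relabel the entries of p by q."""
    return compose(q, compose(p, q))

n = 5
m = (5 * n - 1) // 2                     # = 12 strands
p = pi(1, n, m)                          # (1,5)(2,4)
p = conj(p, pi(n, n, m))                 # largest entry: n -> 2n-1
p = conj(p, pi((3 * n + 1) // 2, n, m))  # 2n-1 -> 2n+1
p = conj(p, pi(1, n, m))                 # smallest entry: 1 -> n
p = conj(p, pi(n, n, m))                 # n -> 2n-1
p = compose(p, pi(1, n, m))              # middle transpositions cancel
# p is now (1,n)(2n-1,2n+1); shift the first transposition to (3,n)
p = conj(p, pi((n - 1) // 2, n, m))      # (1,n-2)(2n-1,2n+1)
p = conj(p, pi(1, n, m))                 # (3,n)(2n-1,2n+1)
target = {i: i for i in range(1, m + 1)}
target[3], target[n] = n, 3
target[2 * n - 1], target[2 * n + 1] = 2 * n + 1, 2 * n - 1
assert p == target
```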
Now we show that we can obtain any pair of even-string transpositions, and any pair of odd string transpositions. We have two cases: $n=4k+3$ and $n=4k+1$.
\begin{lemma}
\label{lem:sameParity4k+3}
Let $n = 4k+3$, and let $m \ge 3n+5$. Then we can obtain any pair of even-string transpositions in $S_m$, or any pair of odd-string transpositions in $S_m$, as a product of the $n$-crossing permutations over $S_m$.
\end{lemma}
\begin{proof}
First observe that $3n+5 = 3(4k+3)+5 = 12k+14 = 2(6k+7)$, so we have at least $6k+7$ odd and even strings respectively.
By Lemma~\ref{lem:sameParityPairs}, we can obtain the permutations $(3,n)(2n-1,2n+1)$ and $(2,n+1)(2n,2n+2)$ as a product of the $n$-crossing permutations over $S_m$. We want to move the entries of this permutation around so that we can get any pair of odd-string transpositions, or any pair of even-string transpositions.
If we focus on the odd strings and ignore the even strings, we can see that the $(4k+3)$-crossing permutations starting on odd indices function as $(2k+2)$-crossing permutations on the odd strings. Since $2k+2$ is even, we know by Lemma~\ref{lem:sameCycleType} that if we have $3(2k+2)+1 = 6k+7$ odd strings, then we can obtain any odd-string permutation of the same cycle type by taking conjugations. Therefore we can obtain any pair of odd-string transpositions. Similarly we can obtain any pair of even-string transpositions.
\end{proof}
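The observation in the proof above, that an $n$-crossing permutation starting on an odd string acts on the odd strings as a shorter multi-crossing permutation, can be checked computationally. The sketch below is ours, assuming (as in the formula for $\pi_1$ used later in the paper) that the $n$-crossing permutation $\pi_i$ is the reversal $(i,i+n-1)(i+1,i+n-2)\cdots$ on strings $i,\dots,i+n-1$.

```python
# Check: for n = 4k+3, an n-crossing permutation starting on an odd string
# acts on the odd strings as a (2k+2)-crossing permutation (after relabeling).
def pi(i, n, m):
    """The n-crossing permutation pi_i = (i, i+n-1)(i+1, i+n-2)... on strings 1..m."""
    p = {s: s for s in range(1, m + 1)}
    for t in range(n // 2):
        a, b = i + t, i + n - 1 - t
        p[a], p[b] = b, a
    return p

def odd_restriction_check(k):
    n, m = 4 * k + 3, 3 * (4 * k + 3) + 5
    odd = list(range(1, m + 1, 2))
    label = {s: j + 1 for j, s in enumerate(odd)}  # relabel odd strings 1, 2, ...
    for i in range(1, m - n + 2, 2):               # odd starting strings
        p = pi(i, n, m)
        induced = {label[s]: label[p[s]] for s in odd}
        if induced != pi(label[i], 2 * k + 2, len(odd)):
            return False
    return True
```

For $k=1,2$ (i.e., $n=7,11$) the induced action on the odd strings agrees with the $(2k+2)$-crossing pattern for every odd starting string.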
\begin{lemma}
\label{lem:sameParity4k+1}
Let $n = 4k+1$, and let $m \ge 3n-2$. Then we can obtain any pair of even-string transpositions in $S_m$, or any pair of odd-string transpositions in $S_m$, as a product of the $n$-crossing permutations over $S_m$.
\end{lemma}
\begin{proof}
First observe that $3n+1 = 3(4k+1)+1 = 12k+4$, so we may assume we have at least $12k+4$ strings. This gives $12k+2 = 2(6k+1)$ strings that are not the 1st or the $m$th string. Hence we have $6k+1$ odd strings that are not on either end of the braid, and $6k+1$ even strings that are not on either end of the braid.
As in the proof of Lemma~\ref{lem:sameParity4k+3}, we start with some permutation obtained through Lemma~\ref{lem:sameParityPairs}, and move the entries of these permutations around.
Observe that if we consider an $n$-crossing permutation $\pi_i$ which starts on the $i$th strand, the $(4k+1)$-crossing permutations function as $2k$-crossing permutations on the strings whose parity differs from that of $i$. Note, however, that none of the $(4k+1)$-crossings can function as a $2k$-crossing that acts on the 1st string or the $m$th string.
First suppose we want to obtain the permutation $(a,b)(c,d)$, where the entries are all odd, and none of them are equal to 1 or $m$. We know by Lemma~\ref{lem:sameParityPairs} that we can obtain the permutation $(3,n)(2n-1,2n+1)$. Since $2k$ is even, we know by Lemma~\ref{lem:sameCycleType} that if we have $3(2k)+1 = 6k+1$ odd strings excluding the first and the last strings, then we can take conjugations and obtain $(a,b)(c,d)$.
Now, suppose we want to obtain $(1,b)(c,d)$, where $b,c,d$ are all odd, and none of them are equal to $m$. Then, using Lemma~\ref{lem:sameParityPairs} and Lemma~\ref{lem:sameCycleType} we can first obtain some $(n,b')(c',d')$ where none of the entries are equal to 1 or $m$. Then we conjugate by $\pi_1$ to obtain $(1,b'')(c'',d'')$. Finally, using the same argument as in the proof of Lemma~\ref{lem:sameCycleType}, but only on the odd strings excluding the first and the last string, we can rearrange the remaining entries to obtain $(1,b)(c,d)$.
By symmetry, we can similarly obtain any permutation with an entry equal to $m$ and none equal to 1; we first obtain $(m-n+1,b')(c',d')$, conjugate by $\pi_{m-n+1}$, and then rearrange the remaining entries. If an entry is equal to 1 and another is equal to $m$, then we can first obtain some permutations where the corresponding entries are equal to $n$ and $m-n+1$ respectively, then conjugate by $\pi_1$ and $\pi_{m-n+1}$, and rearrange the remaining two entries.
We can apply the same argument to obtain any pair of even-string transpositions, by starting with $(2,n+1)(2n,2n+2)$. We can use the same argument for the case when one of the entries is equal to $m$.
\end{proof}
Now we present a couple of lemmas for when $n=8k+5$ which will be useful in the proof of Theorem~\ref{thm:oddAlexander}.
\begin{lemma}
\label{lem:triple8k+5}
Let $n=8k+5$. Then we can obtain any odd-string triple crossing permutation in $S_{(3n-1)/2}$, by conjugating $(1,3)$ with the $n$-crossing permutations over $S_{(3n-1)/2}$.
\end{lemma}
\begin{proof}
First observe that over $S_{(3n-1)/2}$, we have the $n$-crossing permutations $\pi_1, \dots, \pi_{(n+1)/2}$.
Suppose we want to obtain the transposition $(2i-1,2i+1)$. First consider the case when $2i-1 \le n$. We conjugate $(1,3)$ by $\pi_1$ to obtain $(n-2,n)$. We then conjugate by $\pi_{(n+1)/2}$ to obtain $(n+2,n)$. We then conjugate by $\pi_{i+1}$ to obtain $(2i-1,2i+1)$. (To see this, observe that $n$ is $i$ away from $i+n$, the highest entry in $\pi_{i+1}$, and that $2i+1$ is $i$ away from $i+1$, the lowest entry in $\pi_{i+1}$.)
Now consider the case when $n < 2i-1 < \frac{3n-2}{2}$. Then we can move $(1,3)$ to some $(j,j+2)$ such that $j \le n$ and $j$ is $n-1$ away from $2i-1$. We can then conjugate $(j,j+2)$ by $\pi_{j+2}$ and then by $\pi_j$ to obtain $(2i-1,2i+1)$.
\end{proof}
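The conjugation sequence in this proof can be verified mechanically. The sketch below is ours, again taking $\pi_i$ to be the reversal $(i,i+n-1)(i+1,i+n-2)\cdots$; it checks the chain $(1,3) \to (n-2,n) \to (n,n+2) \to (2i-1,2i+1)$ for all valid targets.

```python
# Check the conjugations above: (1,3) -> (n-2,n) -> (n,n+2) -> (2i-1, 2i+1).
def pi(i, n, m):
    """pi_i = (i, i+n-1)(i+1, i+n-2)... on strings 1..m."""
    p = {s: s for s in range(1, m + 1)}
    for t in range(n // 2):
        a, b = i + t, i + n - 1 - t
        p[a], p[b] = b, a
    return p

def conj(pair, p):
    """Conjugate the transposition `pair` by the involution p."""
    return tuple(sorted((p[pair[0]], p[pair[1]])))

def triple_check(k):
    n = 8 * k + 5
    m = (3 * n - 1) // 2
    for i in range(2, (n - 1) // 2 + 1):       # targets (2i-1, 2i+1) with 2i+1 <= n
        t = conj((1, 3), pi(1, n, m))
        if t != (n - 2, n):
            return False
        t = conj(t, pi((n + 1) // 2, n, m))
        if t != (n, n + 2):
            return False
        t = conj(t, pi(i + 1, n, m))
        if t != (2 * i - 1, 2 * i + 1):
            return False
    return True
```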
\begin{lemma}
\label{lem:pair8k+5}
Let $n=8k+5$, and let $m \ge 4n-4$. Then we can obtain any pair of an even-string transposition and an odd-string transposition over $S_m$, corresponding to a disjoint pair of triple crossings, as a product of the $n$-crossing permutations over $S_m$.
\end{lemma}
\begin{proof}
Observe that when $n=8k+5$, an $n$-crossing permutation consists of $2k+1$ even-string transpositions and $2k+1$ odd-string transpositions. We now show that we can obtain any pair consisting of one odd-string transposition and one even-string transposition whose corresponding crossings are disjoint.
Let $(l_a, l_a+2)(l_b, l_b+2)$ be the permutation we want to obtain, with $l_a+2 < l_b$. First suppose $l_a$ is odd. We can then start with $\pi_{(n+1)/2}$, and cancel all but one of the even-string transpositions by multiplying it with pairs of even-string transpositions as before. We can similarly cancel all but one of the odd-string transpositions. Thus we are left with a single even-string transposition and a single odd-string transposition. We may choose the pairs to cancel from the outside so that we are left with the permutation $(n-2,n+2)(n-1,n+1)$.
We first ``separate'' the two transpositions. We can conjugate $(n-2,n+2)(n-1,n+1)$ by $\pi_{n-1}$ to obtain $(n-2,2n-5)(2n-2,2n-4)$. When $n=5$ this is equivalent to $(n-2,n)(n+3,n+1)$.
When $n>5$, we conjugate $(n-2,2n-5)(2n-2,2n-4)$ by $\pi_{2n-4}$ to obtain $(n-2,2n-5)(3n-7,3n-5)$. Then we conjugate by $\pi_{(3n-7)/2}$ to obtain $(n-2,2n-3)(3n-7,3n-5)$. Then we conjugate by $\pi_{n-1}$ again to obtain $(n-2,n)(3n-7,3n-5)$. Then we conjugate by $\pi_{2n-3}$ to obtain $(n-2,n)(2n,2n-2)$. Finally, we conjugate by $\pi_{n+1}$ to obtain $(n-2,n)(n+1,n+3)$.
In both cases, we have the permutation $(n-2,n)(n+1,n+3)$. Now we can conjugate by $\pi_1$, and then by $\pi_4$ to obtain $(1,3)(4,6)$. Consider the $4n-8$ strings starting with the 4th string. Then, as in the proof of Lemma~\ref{lem:sameParity4k+1}, we can move the entries of $(4,6)$ to any even numbers between 4 and $4n-5$. Thus we can obtain the permutation $(1,3)(l_b,l_b+2)$. Now we want to move $(1,3)$ to $(l_a, l_a+2)$.
First consider the case when $l_b \le \frac{3n-1}{2}$.
We conjugate $(1,3)(l_b,l_b+2)$ by $\pi_{l_b}$ to obtain $(1,3)(l_b+n-1,l_b+n-3)$. We then conjugate by $\pi_{l_b+n-3}$ to obtain $(1,3)(l_b+2n-6, l_b+2n-4)$. Now by Lemma~\ref{lem:triple8k+5} we can conjugate by $n$-crossing permutations over $S_{(3n-1)/2}$ to move $(1,3)$ to $(l_a,l_a+2)$. Since $l_b+2n-6 > \frac{3n-1}{2}$ the other permutation is not affected, and we are left with $(l_a,l_a+2)(l_b+2n-6, l_b+2n-4)$. We can then conjugate back by $\pi_{l_b+n-3}$ and then by $\pi_{l_b}$ to obtain $(l_a, l_a+2)(l_b, l_b+2)$.
Next, consider the case when $l_a + 2 \le \frac{3n-1}{2} < l_b$. Then by Lemma~\ref{lem:triple8k+5} we can move $(1,3)$ to $(l_a,l_a+2)$ without affecting any other elements.
Finally consider the case when $l_a + 2 > \frac{3n-1}{2}$. Then by Lemma~\ref{lem:triple8k+5} we can move $(1,3)$ to some $(j,j+2)$, where $j < \frac{3n-1}{2}$ and $j$ is a multiple of $n-1$ away from $l_a$. Then we can conjugate by $\pi_{j+2}$ and then by $\pi_j$ to obtain $(j+n-1,j+n+1)$. We can continue this until we reach $(l_a,l_a+2)$.
If $l_a$ is even, we start with $\pi_{(n+3)/2}$, and similarly take conjugations to obtain the permutation $(2,4)(5,7)$. We can obtain $(2,4)(l_b,l_b+2)$ using the $4n-8$ strings starting with the 5th string. Moving $(2,4)$ to $(l_a,l_a+2)$ can be done in the same way as above.
\end{proof}
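The six-step "separation" sequence in this proof (for $n>5$) is easy to get wrong by hand, so a computational check is worthwhile. The sketch below is ours, with $\pi_i$ the reversal on strings $i,\dots,i+n-1$; it applies the stated conjugations to $(n-2,n+2)(n-1,n+1)$ and confirms the result $(n-2,n)(n+1,n+3)$.

```python
# Check the "separation" sequence: starting from (n-2,n+2)(n-1,n+1),
# the stated conjugations yield (n-2,n)(n+1,n+3) for n = 8k+5 > 5.
def pi(i, n, m):
    p = {s: s for s in range(1, m + 1)}
    for t in range(n // 2):
        a, b = i + t, i + n - 1 - t
        p[a], p[b] = b, a
    return p

def separation_check(k):
    n = 8 * k + 5                  # k >= 1, so n > 5
    m = 4 * n - 4
    cur = {(n - 2, n + 2), (n - 1, n + 1)}
    # conjugate by pi_{n-1}, pi_{2n-4}, pi_{(3n-7)/2}, pi_{n-1}, pi_{2n-3}, pi_{n+1}
    for j in [n - 1, 2 * n - 4, (3 * n - 7) // 2, n - 1, 2 * n - 3, n + 1]:
        p = pi(j, n, m)
        cur = {tuple(sorted((p[a], p[b]))) for (a, b) in cur}
    return cur == {(n - 2, n), (n + 1, n + 3)}
```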
\subsection{Proof of Theorem~\ref{thm:oddAlexander}}
Now we use these pairs of transpositions to produce the desired braid.
\begin{proof}[Proof of Theorem~\ref{thm:oddAlexander}]
If $n$ is even, the result follows directly from Theorem~\ref{thm:evenAlexander}.
If $n$ is odd, by Theorem~\ref{thm:tripleAlexander} we can put the link $L$ in a triple-crossing braid form. We consider three cases for when $n$ is odd: $n=4k+3$, $n=8k+5$, and $n=8k+1$.
\noindent
\textbf{Case 1: $n = 4k+3$.}
Take 3-stabilizations until we have at least $3n+5$ strings. It suffices to show that we can obtain any triple-crossing permutation, for then by Lemma~\ref{lem:levelPosition} we can produce any triple-crossing.
Observe that when $n=4k+3$, an $n$-crossing permutation consists of $k$ even-string transpositions and $k+1$ odd-string transpositions, or $k$ odd-string transpositions and $k+1$ even-string transpositions.
Recall that by Lemma~\ref{lem:sameParity4k+3}, we can obtain pairs of odd-string transpositions, and pairs of even-string transpositions, as products of the $n$-crossing permutations. We can obtain one odd-string transposition as follows: start with $\pi_1 = (1,n)(2,n-1)\cdots(\frac{n-1}{2},\frac{n+3}{2})$, which has an odd number of odd-string transpositions and an even number of even-string transpositions. Then we can cancel the even number of even-string transpositions, and all but one of the odd-string transpositions, by multiplying it with pairs of odd-string transpositions and pairs of even-string transpositions. We are left with a single odd-string transposition.
As before, we can focus on the odd strings and see that we can obtain any permutation of the same cycle type by Lemma~\ref{lem:sameCycleType}, since the $(4k+3)$-crossing permutations starting on odd indices function as $(2k+2)$-crossing permutations on the odd strings. Therefore we can obtain any odd-string transposition. We can similarly obtain any even-string transposition. This shows that we can produce any triple-crossing as a sequence of $n$-crossings, so $L$ can be put into an $n$-crossing braid.
\noindent
\textbf{Case 2: $n=8k+5$.} First observe that an $(8k+5)$-crossing permutation is an even permutation (here we mean even in the usual sense for symmetric groups, namely that it can be written as a product of an even number of transpositions). Therefore we want to start with a triple-crossing braid with an even number of crossings, and then turn pairs of triple-crossings into sets of $n$-crossings.
Take 3-stabilizations of $L$ until we have at least $3n-2$ strings. If this braid has an odd number of triple crossings at this stage, take one more 3-stabilization so that we have an even number of triple crossings.
We claim we can put these triple-crossings in pairs such that each pair is disjoint, in the sense that they involve 6 distinct strings in total. This is because if two consecutive crossings involve a common strand, then we can put one of them in a pair with a dummy crossing far away, and put the other in another pair which undoes the dummy crossing (Fig.~\ref{fig:disjointPair}).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2]{disjointPair.png}
\caption{If two consecutive crossings share a common strand, we can put them in two disjoint pairs along with two dummy crossings.}
\label{fig:disjointPair}
\end{figure}
Now it suffices to show that we can obtain any permutation corresponding to a disjoint pair of triple-crossings. By Lemma~\ref{lem:sameParity4k+1}, we can obtain pairs of transpositions of the same parity. By Lemma~\ref{lem:pair8k+5}, we can obtain any pair of an even-string transposition and an odd-string transposition, corresponding to a disjoint pair of triple crossings. Thus by Lemma~\ref{lem:levelPosition} we can produce any pair of disjoint triple-crossings using $n$-crossings. Hence $L$ can be put into an $n$-crossing braid.
\noindent
\textbf{Case 3: $n=8k+1$.} First observe that an $(8k+1)$-crossing permutation consists of $2k$ even-string transpositions and $2k$ odd-string transpositions. Therefore we want to start with a triple-crossing braid with an even number of even-string crossings, and an even number of odd-string crossings.
Take 3-stabilizations of $L$ until we have at least $3n-2$ strings. If the number of even-string crossings and odd-string crossings are both even at this stage, then it is in the desired form. If both numbers are odd, then we can take two 3-stabilizations, which will increase both numbers by one.
The remaining case is when the number of one set of crossings is odd, and the number of the other set of crossings is even. In this case, we consider two possibilities. First suppose the number of strings is even. Then note that a 3-stabilization on the second string will increase the number of even-string crossings, and a 3-stabilization on the second last string will increase the number of odd-string crossings (Fig.~\ref{fig:3-stabilization2}). Hence if the parity is off by one, we can always find the appropriate 3-stabilization.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{3-stabilization2.png}
\caption{A 3-stabilization on the second string will increase the number of even-string crossings, and a 3-stabilization on the second last string will increase the number of odd-string crossings.}
\label{fig:3-stabilization2}
\end{figure}
Now suppose the number of strings is odd. Then both 3-stabilizations increase the number of even-string crossings. But once we take this stabilization, the number of even-string crossings and the number of odd-string crossings will have the same parity. We can therefore put it in the desired form with at most two more 3-stabilizations.
Hence we can put $L$ into a triple-crossing braid with an even number of even-string crossings, and an even number of odd-string crossings. As before we can put these crossings into pairs that are disjoint. Note that here we must also put them in pairs such that they have the same parity. We can then produce each pair of crossings since we can obtain pairs of transpositions with the same parity by Lemma~\ref{lem:sameParity4k+1}. Therefore we can produce each pair of triple-crossings with the same parity using the $n$-crossings, so $L$ can be put into an $n$-crossing braid form.
\end{proof}
\section{Bounds on Braid Indices}
\label{sec:braidIndices}
In this section we find relationships between the multi-crossing braid indices. Many arguments in this section have been simplified; details can be found in the original thesis.
Let $\beta_n(L)$ be the minimum number of strings necessary to represent the link $L$ as an $n$-crossing braid. We define $\beta_n(L) = \infty$ if $L$ cannot be represented as an $n$-crossing braid. Note this happens if and only if $L$ is a knot and $n$ is odd. A simple observation tells us the following.
\begin{theorem}
Let $L$ be a link. For any $n \ge 2$, we have
\[
\beta_2(L) \le \beta_n(L).
\]
\end{theorem}
\begin{proof}
Observe that any multi-crossing braid can be turned into a double crossing braid with the same number of strings, by breaking up each multi-crossing into double crossings.
\end{proof}
\subsection{Even multi-crossing braids}
For even $n$ we have the following results.
\begin{theorem}
\label{thm:braidEven}
Let $L$ be a link that is not an unlink. Let $n \le 202$, and $m \ge n+2$.
\begin{itemize}
\setlength\itemsep{0.3em}
\item[(i)]
If $n=4k+2$, then we have $\beta_n(L) \le \beta_m(L)$.
\item[(ii)]
If $n=4k$, then we have $\beta_n(L) \le \beta_m(L) + 1$.
Moreover, if $m=4k$ or $m=4k+1$, then we have $\beta_n(L) \le \beta_m(L)$.
\end{itemize}
\end{theorem}
\begin{proof}
\text{}
\begin{itemize}
\setlength\itemsep{0.3em}
\item[(i)]
We can show through computations in Mathematica that for all $n\le 202$ with $n=4k+2$, the $n$-crossing permutations over $S_{n+2}$ generate $S_{n+2}$. (See the original thesis for the code.) By Lemma~\ref{lem:levelPosition}, this means that we can obtain every double crossing as a product of $n$-crossings if we have $n+2$ strings.
Let $m \ge n+2$. Since an $m$-crossing requires $m$ strings, and $L$ is not an unlink, we have $\beta_m(L) \ge n+2$. We can decompose an $m$-crossing braid into a double crossing braid with the same number of strings. Then each double crossing can be turned into a product of $n$-crossings, so we obtain an $n$-crossing braid with $\beta_m(L)$ strings, with no need for an extra stabilization. Hence $\beta_n(L) \le \beta_m(L)$ for all such $m$.
\item[(ii)]
We can again show with Mathematica that for all such $n \le 200$, the $n$-crossing permutations over $S_{n+2}$ generate $A_{n+2}$, the alternating group on $n+2$ elements. The first inequality follows as in (i), except in this case we may have to take a stabilization to ensure that the number of double crossings is even.
Observe that if $m = 4k$ or $m=4k+1$, then an $m$-crossing breaks down into an even number of double crossings. Thus the double crossing braid must have an even number of crossings. Therefore $\beta_n(L) \le \beta_m(L)$ for all such $m \ge n$.
\end{itemize}
\end{proof}
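The Mathematica computation cited in this proof can be replicated for the smallest cases in any language. The sketch below is our own (not the original code): it computes the breadth-first closure of the subgroup generated by the $n$-crossing permutations $\pi_1,\pi_2,\pi_3$ over $S_{n+2}$, with $\pi_i$ the reversal on strings $i,\dots,i+n-1$.

```python
# Order of the subgroup generated by the n-crossing permutations over S_{n+2},
# computed by breadth-first closure (0-indexed permutation tuples).
def pi_tuple(i, n, m):
    p = list(range(m))
    for t in range(n // 2):
        a, b = i - 1 + t, i - 1 + n - 1 - t
        p[a], p[b] = p[b], p[a]
    return tuple(p)

def generated_order(n):
    m = n + 2
    gens = [pi_tuple(i, n, m) for i in (1, 2, 3)]   # all n-crossings fitting in m strings
    identity = tuple(range(m))
    seen, frontier = {identity}, [identity]
    while frontier:
        new = []
        for g in frontier:
            for h in gens:
                gh = tuple(g[h[s]] for s in range(m))  # compose: apply h, then g
                if gh not in seen:
                    seen.add(gh)
                    new.append(gh)
        frontier = new
    return len(seen)
```

For $n=4$ this returns $360 = |A_6|$, and for $n=6$ it returns $40320 = |S_8|$, matching cases (ii) and (i) of the theorem.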
Hence we have the following relationships:
\begin{align*}
\beta_2&(L) \le \beta_6(L) \le \beta_{10}(L) \le \cdots \le \beta_{202}(L) \le \beta_{206}(L) \\
&\mathbin{\rotatebox[origin=c]{270}{$\le$}} \hspace{25pt} \mathbin{\rotatebox[origin=c]{270}{$\le$}} \hspace{32pt} \mathbin{\rotatebox[origin=c]{270}{$\le$}} \hspace{55pt} \mathbin{\rotatebox[origin=c]{270}{$\le$}} \\
\beta_4&(L) \le \beta_8(L) \le \beta_{12}(L) \le \cdots \le \beta_{204}(L).
\end{align*}
And the following:
\[
\beta_4(L) \le \beta_6(L) + 1, \hspace{5pt} \beta_8(L) \le \beta_{10}(L) + 1, \hspace{5pt} \dots, \hspace{5pt} \beta_{200}(L) \le \beta_{202}(L) + 1.
\]
Note that the inequalities need not end at 202; this is simply an arbitrary choice of how far to carry the Mathematica computation. It would, however, be interesting to ask whether there is a general argument that could extend the inequalities to all even $n$.
\begin{example}
The inequality $\beta_{4k} \le \beta_{4k+2} + 1$ is strict for any link $L$ with the property that $\beta_2(L) \ge 4k+4$, and the number of crossings when it realizes the double crossing braid index is odd. We know this because Markov moves (\cite{markov}) preserve the parity of $b+c$, where $b$ is the number of strings and $c$ is the number of crossings. A conjugation does not alter either $b$ or $c$, and a stabilization increases both $b$ and $c$ by 1. This means that given such a link, it cannot be represented as any double crossing braid with $\beta_2(L)$ strings and an even number of double crossings. In order to turn it into a $4k$-crossing braid we must therefore take a stabilization to make the number of crossings even, so $\beta_{4k}(L) = \beta_2(L) + 1$. We can of course realize $L$ as a $(4k+2)$-crossing braid with $\beta_2(L)$ strings, so $\beta_{4k+2}(L) = \beta_2(L) = \beta_{4k}(L) - 1$. An artificial example of such a link is a split link consisting of the trefoil knot and an unlink with $4k+2$ components.
\end{example}
We also have inequalities that hold for infinitely many even $n$.
\begin{theorem}
\label{thm:braidEvenAll}
Let $L$ be a link that is not an unlink.
\begin{itemize}
\setlength\itemsep{0.3em}
\item[(i)]
Let $n=4k+2$. Then for any $m \ge 3n+1$, we have $\beta_n(L) \le \beta_m(L)$.
\item[(ii)]
Let $n=4k$. Then for any $m \ge 3n+1$, we have $\beta_n(L) \le \beta_m(L) + 1$.
Moreover, if $m=4k$ or $m=4k+1$, then we have $\beta_n(L) \le \beta_m(L)$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) The proof of Theorem~\ref{thm:evenAlexander} required at least $3n+1$ strings. This means that for $m \ge 3n+1$, we can turn an $m$-crossing braid into an $n$-crossing braid with the same number of strings, so $\beta_n(L) \le \beta_m(L)$. (ii) can be proved similarly.
\end{proof}
\begin{corollary}
Let $L$ be a link that is not an unlink. Then for all even $n$,
\[
\beta_n(L) \le \beta_{3n+1}(L).
\]
\end{corollary}
\begin{proof}
After noting that when $n=4k$, $3n+1$ can be written as $8k'+1$ or $8k'+5$ for some $k'$, the result follows directly from Theorem~\ref{thm:braidEvenAll}.
\end{proof}
\subsection{Odd multi-crossing braids}
For links with two or more components, we can consider braid indices for odd $n$. First consider the case when $n=3$.
\begin{theorem}
\label{thm:braidn=3}
Let $L$ be a link with two or more components that is not an unlink. Then
\[
\beta_3(L) =
\begin{cases}
3 &\mbox{ if } \beta_2(L) = 2; \\
\beta_2(L) &\mbox{ otherwise.}
\end{cases}
\]
\end{theorem}
\begin{proof}
The proof of Theorem~\ref{thm:tripleAlexander} required that the double crossing braid have at least 3 strings, and gave us a triple crossing braid with the same number of strings.
\end{proof}
By this we can see that $\beta_3(L) \le \beta_n(L)$ for any odd $n$. However, this can also be seen by noting that any $n$-crossing can be decomposed into triple crossings; any $n$-crossing is a permutation of even strings and a permutation of odd strings, each of which can be realized as a product of triple crossings.
\begin{theorem}
\label{thm:braidOdd}
Let $L$ be a link with two or more components that is not an unlink. Let $n \le 205$, and $m \ge n+3$.
\begin{itemize}
\setlength\itemsep{0.3em}
\item[(i)]
If $n=4k+3$, then we have $\beta_n(L) \le \beta_m(L)$.
\item[(ii)]
If $n=8k+5$, then we have $\beta_n(L) \le \beta_m(L) + 1$.
Moreover, if $m=4k+1$, then we have $\beta_n(L) \le \beta_m(L)$.
\item[(iii)]
If $n=8k+1$, then we have $\beta_n(L) \le \beta_m(L) + 3$.
Moreover, if $m=8k+1$, then we have $\beta_n(L) \le \beta_m(L)$.
\end{itemize}
\end{theorem}
\clearpage
\begin{proof}
\text{}
\begin{itemize}
\setlength\itemsep{0.3em}
\item[(i)]
We can show through computations in Mathematica that for all $n\le 203$ with $n=4k+3$, the $n$-crossing permutations over $S_{n+3} = S_{4k+6}$ generate $S_{2k+3} \times S_{2k+3}$. Each $S_{2k+3}$ corresponds to permutations of odd strings and permutations of even strings. By Lemma~\ref{lem:levelPosition}, this means that we can obtain every triple crossing as a product of $n$-crossings if we have $n+3$ or more strings. The rest of the proof follows as in the proof of Theorem~\ref{thm:braidEven} (i), but by decomposing the $m$-crossing braid into triple crossings.
\item[(ii)]
We can again show with Mathematica that for all such $n \le 205$, the $n$-crossing permutations over $S_{n+3} = S_{8k+8}$ generate an index-2 subgroup of $S_{4k+4} \times S_{4k+4}$, namely its even permutations. Since we know that the $n$-crossing permutations must be even, this shows that they generate all pairs of triple-crossing permutations. Thus, if we have $n+3$ or more strings, we can obtain every pair of triple crossings as a product of $n$-crossings. The rest of the proof follows as in the proof of Theorem~\ref{thm:braidEven} (ii), but by decomposing the $m$-crossing braid into triple crossings.
\item[(iii)]
We can again show with Mathematica that for all such $n \le 201$, the $n$-crossing permutations over $S_{n+3} = S_{8k+4}$ generate $A_{4k+2} \times A_{4k+2}$. This shows that the $n$-crossing permutations generate all pairs of even-string transpositions, and all pairs of odd-string transpositions. Thus, if we have $n+3$ or more strings, we can obtain every pair of odd-string triple crossings and every pair of even-string triple crossings as a product of $n$-crossings. The rest of the proof follows as in (ii).
\end{itemize}
\end{proof}
As for the case with $n$ even, we suspect that the inequalities can be extended for all $n$.
The following inequalities hold for infinitely many $n$. This is done as in the proof of Theorem~\ref{thm:braidEvenAll}, by checking the number of strings that were necessary for the proof of Theorem~\ref{thm:oddAlexander}.
\begin{theorem}
\label{thm:braidOddAll}
Let $L$ be a link with two or more components that is not an unlink.
\begin{itemize}
\setlength\itemsep{0.3em}
\item[(i)]
Let $n=4k+3$. Then for any $m \ge 3n+5$, we have $\beta_n(L) \le \beta_m(L)$.
\item[(ii)]
Let $n=8k+5$. Then for any $m \ge 3n-2$, we have $\beta_n(L) \le \beta_m(L) + 1$.
Moreover, if $m=4k+1$, then we have $\beta_n(L) \le \beta_m(L)$.
\item[(iii)]
Let $n=8k+1$. Then for any $m \ge 3n-2$, we have $\beta_n(L) \le \beta_m(L) + 3$.
Moreover, if $m=8k+1$, then we have $\beta_n(L) \le \beta_m(L)$.
\end{itemize}
\end{theorem}
Finally we present an inequality that holds for all $n \ge 8$, even or odd.
\begin{corollary}
Let $L$ be a link that is not an unlink. Then for all $n \ge 8$,
\[
\beta_n(L) \le \beta_{4n-3}(L).
\]
\end{corollary}
\begin{proof}
Observe that when $n \ge 8$, we have $4n-3 \ge 3n+5$, so the cases $n=4k+2$ and $n=4k+3$ follow directly from Theorem~\ref{thm:braidEvenAll} (i) and Theorem~\ref{thm:braidOddAll} (i). For the remaining cases, observe the following about $4n-3$. When $n=4k$, we have $4n-3 = 16k-3 = 8(2k-1) + 5$. When $n=8k+5$, we have $4n-3 = 32k+20-3 = 8(4k+2) + 1$. Finally, when $n=8k+1$, we have $4n-3 = 32k+4-3 = 8(4k) + 1$. The result then follows directly from Theorem~\ref{thm:braidEvenAll} and Theorem~\ref{thm:braidOddAll}.
\end{proof}
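The residue computations in this case analysis can be checked mechanically; the trivial script below (ours, included only as a sanity check) verifies them for a range of $n$.

```python
# Check: for every n >= 8, 4n-3 is large enough and lies in a congruence
# class covered by Theorems braidEvenAll and braidOddAll.
def residue_check(limit=400):
    for n in range(8, limit):
        m = 4 * n - 3
        if m < 3 * n + 5:                  # holds exactly when n >= 8
            return False
        if n % 4 == 0 and m % 8 != 5:      # 4n-3 = 8(2k-1)+5
            return False
        if n % 8 == 5 and m % 8 != 1:      # 4n-3 = 8(4k+2)+1
            return False
        if n % 8 == 1 and m % 8 != 1:      # 4n-3 = 8(4k)+1
            return False
    return True
```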
\FloatBarrier
% arXiv:1805.04427 "Multi-crossing Braids" (math.GT), 2018-05-14.
% Source: https://arxiv.org/abs/2003.06457
\title{Interactions between Hlawka Type-1 and Type-2 Quantities}
\begin{abstract}
The classical Hlawka inequality possesses deep connections with zonotopes and zonoids in convex geometry, and has been related to Minkowski space. We introduce Hlawka Type-1 and Type-2 quantities, and establish a Hlawka-type relation between them, which connects a vast number of strikingly different variants of the Hlawka inequalities, such as Serre's reverse Hlawka inequality in the future cone of Minkowski space, the Hlawka inequality for subadditive functions on abelian groups by Ressel, and the integral analogs by Takahasi et al. Besides, we announce several enhanced results, such as the Hlawka inequality for the power of a measure function. In particular, we give a complete study of the Hlawka inequality for quadratic forms, which relates to a work of Serre.
\end{abstract}
\section{Introduction}
Hlawka's inequality, which states that for any $x,y,z$ in an inner product space
\begin{equation}\label{eq:Hlawka-classical}
\|x\|+\|y\|+\|z\|+\|x+y+z\|\ge \|x+y\|+\|y+z\|+\|z+x\|,
\end{equation}
was first proved by Hlawka and originally appeared in 1942 in a paper of Hornich \cite{Hornich42}.
It has been the subject of a long series of investigations and extensions, such as the Hlawka inequality in integral form \cite{TTW00,TTW09} and on abelian groups \cite{R15}. Readers can find an excellent summary of related works in \cite{Wl}, and the beautiful relations to discrete and convex geometry, such as zonotopes and zonoids, by Witsenhausen \cite{W78,W73,SW1983}.
Recently, Serre considered the pseudo-norm on the future cone of Minkowski space \cite{Serre15}, where he presented a reverse Hlawka-type inequality.
In light of these beautiful works, the classical Hlawka inequality has deep connections with zonotopes and zonoids in convex geometry, and relates to the geometry of the timelike cone of Minkowski space.
Note that the proof of \eqref{eq:Hlawka-classical} depends on the identity
\begin{equation}\label{eq:Hlawka-classical-2form}
\|x\|^2+\|y\|^2+\|z\|^2+\|x+y+z\|^2= \|x+y\|^2+\|y+z\|^2+\|z+x\|^2,
\end{equation}
which is a quadratic form identity. To some extent, the one-homogeneous inequality \eqref{eq:Hlawka-classical} essentially relates to the two-homogeneous equality \eqref{eq:Hlawka-classical-2form}.
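For completeness, the identity \eqref{eq:Hlawka-classical-2form} follows at once by expanding both sides in inner products:

```latex
\begin{align*}
\|x+y+z\|^2 &= \|x\|^2+\|y\|^2+\|z\|^2
   +2\langle x,y\rangle+2\langle y,z\rangle+2\langle z,x\rangle,\\
\|x+y\|^2+\|y+z\|^2+\|z+x\|^2 &= 2\bigl(\|x\|^2+\|y\|^2+\|z\|^2\bigr)
   +2\langle x,y\rangle+2\langle y,z\rangle+2\langle z,x\rangle.
\end{align*}
```

Adding $\|x\|^2+\|y\|^2+\|z\|^2$ to both sides of the first line gives exactly the two sides of \eqref{eq:Hlawka-classical-2form}.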
In this work, we introduce Hlawka one-form and Hlawka two-form, and establish a Hlawka-type relation which encodes the signatures of them (see Theorem \ref{thm:operator-main}).
This helps us to give the Hlawka inequality for a class of functions on semigroups. By this result, we can connect a vast number of Hlawka inequalities in the literature, even though they come from various perspectives and are very different from each other. Furthermore, we announce several exciting results, such as the Hlawka inequality for the power of measure function. Particularly, we investigate the Hlawka inequality on quadratic form thoroughly.
For a glimpse of these results, we highlight some notable points here:
\begin{itemize}
\item For a reversed version of the Hlawka inequality, Serre gave a proof in the future cone of Minkowski space \cite{Serre15}. In that paper, he shows the following:
if $q$ is a quadratic form on $\mathbb{R}^n$ with signature $(1, n-1)$, then the length $l=\sqrt{q}$
satisfies
\begin{equation}\label{eq:Serre}
l(x)+l(y)+l(z)+l(x+y+z)\leq l(x+y)+l(y+z)+l(z+x)
\end{equation}
for all vectors $x,y,z$ in the future cone with respect to $q$. In the present paper, we give a simple proof of \eqref{eq:Serre}, and a systematic study of the Hlawka inequality for quadratic forms (see Section \ref{sec:quadratic}).
\item Ressel \cite{R15} proved a generalization of the Hlawka inequality for subadditive functions on abelian groups. In this work, we extend his result to the setting of sub/super-additive functions on semigroups (see Section \ref{sec:semigroup}). For convenience, Ressel's result is provided in Example \ref{cor:R15} as an application.
\item Takahasi et al.\ \cite{TTW00,TTW09} studied integral analogs of the Hlawka inequality. In Section \ref{sec:integral}, we generalize this integral inequality to the form of a positive linear operator; their main theorem is rewritten in Example \ref{cor:TTW00}.
\end{itemize}
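Serre's reverse inequality \eqref{eq:Serre} is easy to probe numerically. The sketch below is ours, with the signature-$(1,3)$ form $q(x)=x_0^2-x_1^2-x_2^2-x_3^2$ and the future cone taken as the vectors with $x_0 > |(x_1,x_2,x_3)|$; these concrete choices are assumptions for illustration only.

```python
# Numeric probe of the reverse Hlawka inequality in the future cone of
# Minkowski space, with q(x) = x0^2 - x1^2 - x2^2 - x3^2 and l = sqrt(q).
import math
import random

def q(x):
    return x[0] ** 2 - sum(t * t for t in x[1:])

def l(x):
    return math.sqrt(q(x))

def future_vector(rng):
    """A vector with x0 > |(x1, x2, x3)|, i.e. in the open future cone."""
    s = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    return [math.sqrt(sum(t * t for t in s)) + rng.uniform(0.1, 2.0)] + s

def add(*vs):
    return [sum(c) for c in zip(*vs)]

def reverse_hlawka_check(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x, y, z = (future_vector(rng) for _ in range(3))
        lhs = l(x) + l(y) + l(z) + l(add(x, y, z))
        rhs = l(add(x, y)) + l(add(y, z)) + l(add(z, x))
        if lhs > rhs + 1e-9:          # reverse inequality: lhs <= rhs
            return False
    return True
```

Since the future cone is convex, all the sums above stay in the cone, so $q>0$ and the square roots are well defined.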
This paper provides a theorem combining the different developments above in a unified form (see Theorem \ref{thm:operator-main}), which also produces several further results.
\section{ The Hlawka-type relation of Hlawka one-form and two-form}\label{sec:Hlawka-relation}
\noindent \textbf{Basic setting}: Given nonempty sets $\Omega$ and $X$,
the ring $\mathbb{R}^\Omega$ of all real valued functions on $\Omega$ is a real linear space equipped with a product operator `$\cdot$'. Take
a linear subspace $\mathcal{S}\subset \mathbb{R}^\Omega$ equipped with a linear and signature-preserving function $T: \mathcal{S} \to \mathbb{R}$ (i.e., for $\zeta\in \mathcal{S}$ satisfying $\forall\omega\in\Omega,\,\zeta(\omega)\geq0$, there holds $T(\zeta)\geq0$).
We use $1$ to denote the constant function in $\mathcal{S}$ satisfying $1(\omega)=1$, $\forall\omega\in\Omega$.
\begin{thm}\label{thm:operator-main}
Given $a,b\in \mathbb{R}$ with $a\ne0$, $a+b >0$ and $\eta, \xi:\Omega\to X$, for $f: X\rightarrow \mathbb{R}$ satisfying $(f\circ\xi)(\omega)-(f\circ\eta)(\omega) \leq b$, $\forall\omega\in\Omega$, where `$\circ$' represents the composition operator, we have the following:
\par\noindent (I) If $(f\circ\xi)(\omega)+(f\circ\eta)(\omega) \leq a$, $\forall\omega\in\Omega$, then $C_2\ge 0$ implies $C_1\ge 0$, and $C_1\le 0$ implies $C_2\le 0$;
\par\noindent (II) If $(f\circ\xi)(\omega)+(f\circ\eta)(\omega) \geq a$, $\forall\omega\in\Omega$, then $C_1\ge 0$ implies $C_2\ge 0$, and $C_2\le 0$ implies $C_1\le 0$.
\noindent Here
$$C_1=(T(1)-c)b+T(f\circ\eta)- T(f\circ\xi)$$
is the {\sl Hlawka one-form},
and
$$C_2=(T(1)-c)b^2+T(f^2\circ\eta) - T(f^2\circ\xi)$$
is the corresponding {\sl Hlawka two-form}, in which $c:=\frac{2}{a}T(f\circ\eta)$.
\end{thm}
In summary, Theorem \ref{thm:operator-main} says that under suitable `summation control' and `difference control', the signatures of the Hlawka one-form $C_1$ and the Hlawka two-form $C_2$ essentially depend on each other.
\begin{proof}
First, the relation among the quantities in Theorem \ref{thm:operator-main} can be shown in the following diagram:
\begin{center}
\begin{spacing}{0.5}
$$
\xymatrix{
\Omega\ar[rr]^{\xi,\eta}\ar@/_1pc/[rrrr]_{ \scriptstyle\begin{array}{c}\scriptstyle f\circ \xi,f\circ\eta \\{ \scriptstyle \in}\end{array} }& & X\ar[rr]^f & &\mathbb{R}\\ & & {\mathcal{S} \ar[rru]_{T} } & &
}
$$
\end{spacing}
\end{center}
We note the following identities:
\begin{align*}
&\;\;\;\; T\left((a-f\circ\eta-f\circ\xi)(f\circ\eta+b-f\circ\xi)\right)
\\&= aT(f\circ\eta)-T(f^2\circ\eta)+T(f^2\circ\xi)+T\left((f\circ\eta) \cdot (f\circ\xi)\right)-T((f\circ\xi )\cdot (f\circ\eta))
\\&\;\;\;\; -aT(f\circ\xi)-bT(f\circ\eta)
-bT(f\circ\xi)+abT(1)
\\&= aT(f\circ\eta)+\left(T(1)-c\right)b^2+ T(f\circ\eta)b + (T(f^2\circ\xi) -T(f^2\circ\eta) -(T(1)-c)b^2)
\\&\;\;\;\; -T(f\circ\xi)(a+b)+(T(1)a-2T(f\circ\eta))b +T((f\circ\eta) \cdot (f\circ\xi))-T((f\circ\xi )\cdot (f\circ\eta))
\\&= T(f\circ\eta)(a+b)+\left(T(1)-c\right)b^2 + (T(f^2\circ\xi) -T(f^2\circ\eta) -(T(1)-c)b^2)
\\&\;\;\;\; -T(f\circ\xi) (a+b)+(T(1)-c)ab
\\&= \left((T(1)-c) b+T(f\circ\eta) -T(f\circ\xi)\right)(a+b)
+ (T(f^2\circ\xi) -T(f^2\circ\eta) -(T(1)-c)b^2),
\end{align*}
where the notation $f^2\circ\eta:=(f\circ\eta)\cdot(f\circ\eta)$ is used. Therefore, we obtain
\begin{align}
&\;\;\;\; T\left((a-f\circ\eta-f\circ\xi)(f\circ\eta+b-f\circ\xi)\right) \notag
\\&= \left((T(1)-c) b+T(f\circ\eta) -T(f\circ\xi)\right)(a+b) - ((T(1)-c)b^2+T(f^2\circ\eta) -T(f^2\circ\xi)). \label{eq:T}
\end{align}
(I). For any $ \omega\in \Omega$, $f\circ\eta(\omega)+b\geq f\circ\xi(\omega)$ and $a\geq f\circ\eta(\omega)+f\circ\xi(\omega)$.
By the assumption, $\forall \omega\in\Omega$,
$$(a-f\circ\eta(\omega)-f\circ\xi(\omega))(f\circ\eta(\omega)+b-f\circ\xi(\omega))\geq 0.$$
Since $T$ is signature-preserving, it follows that
$$T\left((a-f\circ\eta-f\circ\xi)(f\circ\eta+b-f\circ\xi)\right)\geq0.$$
Accordingly, Eq.~\eqref{eq:T} gives
$$\left((T(1)-c)b+T(f\circ\eta) -T(f\circ\xi)\right)(a+b)\geq (T(1)-c)b^2+T(f^2\circ\eta) -T(f^2\circ\xi),$$
that is, $C_1(a+b)\ge C_2$. Since $a+b>0$, both implications in (I) follow.
(II). Since only the assumption is reversed to $a\leq f\circ\eta(\omega)+f\circ\xi(\omega)$, the same argument gives
$$\left((T(1)-c)b+T(f\circ\eta) -T(f\circ\xi)\right)(a+b)\leq (T(1)-c)b^2+T(f^2\circ\eta) -T(f^2\circ\xi),$$
that is, $C_1(a+b)\le C_2$, from which both implications in (II) follow immediately.
\end{proof}
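As a sanity check (not part of the formal argument), the identity \eqref{eq:T} can be verified numerically. The following Python sketch uses the concrete choices $\Omega=\{1,2,3\}$ and $T(g)=g(1)+g(2)+g(3)$, with sample values chosen so that both controls of case (I) hold:

```python
# Numerical sanity check of the identity (eq:T), with T the summation
# functional T(g) = g(1) + g(2) + g(3) over Omega = {1,2,3}.
fe = [1.0, 2.0, 0.5]   # sample values of f(eta(w)) for w in Omega
fx = [2.5, 1.5, 2.0]   # sample values of f(xi(w)) for w in Omega
a, b = 7.0, 3.0        # summation-control and difference-control constants
T1 = 3.0               # T(1) = 1 + 1 + 1

c = (2.0 / a) * sum(fe)                                # c = (2/a) T(f o eta)
C1 = (T1 - c) * b + sum(fe) - sum(fx)                  # Hlawka one-form
C2 = (T1 - c) * b**2 + sum(e * e for e in fe) - sum(x * x for x in fx)

# left-hand side of (eq:T): T((a - f o eta - f o xi)(f o eta + b - f o xi))
lhs = sum((a - e - x) * (e + b - x) for e, x in zip(fe, fx))
rhs = C1 * (a + b) - C2

assert abs(lhs - rhs) < 1e-12                          # the identity (eq:T)
assert all(x - e <= b for e, x in zip(fe, fx))         # difference control
assert all(x + e <= a for e, x in zip(fe, fx))         # summation control (I)
assert lhs >= 0 and C1 >= 0
print(C1, C2)
```

Here both controls of case (I) hold and $C_1,C_2\ge 0$, consistent with the theorem.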
\begin{remark}
From the proof of Theorem \ref{thm:operator-main}, it is clear that the conditions can be weakened as follows:
given $a\in \mathbb{R}\setminus\{0\}$ and $b\in \mathbb{R}$ with $ a+b >0 $, $\eta, \xi:\Omega\to X$ and $f: X\rightarrow \mathbb{R}$, let $\tilde{\Omega}=\{\omega\in\Omega\,|\,b+f\circ\eta(\omega)-f\circ\xi(\omega)\neq 0\}$.
If $a \geq f\circ\eta(\omega)+f\circ\xi(\omega),\; b \geq f\circ\xi(\omega)-f\circ\eta(\omega) $, $\forall\omega\in
\tilde{\Omega}$, then
$C_2\ge 0$ $\Rightarrow$ $C_1\ge 0$, and $C_1\le 0$ $\Rightarrow$ $C_2\le 0$.
If $a \leq f\circ\eta(\omega)+f\circ\xi(\omega),\; b \geq f\circ\xi(\omega)-f\circ\eta(\omega)$, $\forall\omega\in
\tilde{\Omega}$, then $C_1\ge 0$ $\Rightarrow$ $C_2\ge 0$, and $C_2\le 0$ $\Rightarrow$ $C_1\le 0$.
\end{remark}
\begin{remark}
To some extent, the two key controls on $f\circ\eta+f\circ\xi$ and $f\circ\xi-f\circ\eta$ via the constants $a$ and $b$ are indeed a `summation control' and a `difference control'. The identity used in the proof of Theorem \ref{thm:operator-main} is inspired by the product-to-sum formulas.
\end{remark}
\vspace{0.5cm}
\section{Applications to variant Hlawka inequalities}
\subsection{Applications to quadratic form}\label{sec:quadratic}
Given a nondegenerate quadratic form $q$, i.e., $q(x)=x^\top Qx$, where $x^\top$ is the transpose of $x$ and $Q$ is a nonsingular symmetric $n\times n$ matrix.
Henceforth a pair $(k,n-k)$ is said to be the signature of $Q$, if $Q$ has $k$ positive eigenvalues and $(n-k)$ negative eigenvalues. Consider $l=\sqrt{q}$,
then we have the following:
\begin{pro}\label{quadratic}
(P1) If $Q$ is of the signature $(1,n-1)$, then $l$ satisfies the reversed
Hlawka inequality in the closure of the future cone;
(P2) If $Q$ is of the signature $(n,0)$, then $l$ satisfies the Hlawka inequality in $\mathbb{R}^n$.
\end{pro}
\begin{proof}
We will apply Theorem \ref{thm:operator-main} to this setting, where the symbols appearing in Theorem \ref{thm:operator-main} can be concretely chosen (see Table~\ref{tab:pro1}).
Firstly, according to the definition of $q$, there is
\begin{equation}\label{quadratic1}
q(x+y+z)+q(x)+q(y)+q(z)=q(x+y)+q(x+z)+q(y+z).
\end{equation}
(P1) Since $Q$ is of the signature $(1,n-1)$, we may assume without loss of generality that $Q=\mathrm{diag}(1,-1,\cdots,-1)$ and let $X=\{x=(x_1,\cdots,x_n)|q(x)>0,x_1> 0\}$, i.e., the future cone in Minkowski space.
\begin{table}
\centering
\caption{\small The concrete quantities of Theorem \ref{thm:operator-main} used in the proof of Proposition \ref{quadratic} (1). For Proposition \ref{quadratic} (2), one only needs to change $X$ to $\mathbb{R}^n$.}
\begin{tabular}{|l|l|}
\hline
Terminologies in Theorem \ref{thm:operator-main} & Concrete choices in Proposition \ref{quadratic} (1) for fixed $x,y,z\in X$ \\
\hline
$\Omega=$ & $\{1,2,3\}$\\
\hline
$X=$ & $\{x=(x_1,\cdots,x_n)|q(x)>0,x_1> 0\}$\\
\hline
$\mathcal{S}=$ & $\mathbb{R}^{\{1,2,3\}}$\\
\hline
$T=$&$\sum_{\omega\in\{1,2,3\}}$, i.e., $T(g)=g(1)+g(2)+g(3)$, $\forall g\in\mathcal{S}$\\
\hline
$\eta=$&$x,y,z$ for $\omega=1,2,3$ respectively\\
\hline
$\xi=$&$\sum_{\omega=1}^3\eta(\omega)-\eta$, i.e., $y+z, z+x, x+y$ for $\omega=1,2,3$ respectively\\
\hline
$f=$&$\sqrt{q}$\\
\hline
$a=$&$\sqrt{q(x)}+\sqrt{q(y)}+\sqrt{q(z)}$\\
\hline
$b=$&$\sqrt{q(x+y+z)}$\\
\hline
\end{tabular}
\label{tab:pro1}
\end{table}
Indeed, $X$ involves no subtraction `$-$' and, as we now check, it is closed under addition.
Because $q(x)=x^2_1-x^2_2-\cdots-x^2_n>0, \ \ q(y)=y^2_1-y^2_2-\cdots-y^2_n>0$, i.e.,
$$
x^2_1>x^2_2+\cdots+x^2_n, \ \ y^2_1>y^2_2+\cdots+y^2_n,
$$
by the Cauchy--Schwarz inequality, the following inequality holds:
$$
x^2_1y^2_1>(x^2_2+\cdots+x^2_n)(y^2_2+\cdots+y^2_n)\geq (x_2y_2+\cdots+x_ny_n)^2.
$$
Since $x,y\in X$, we have $x_1y_1> 0$, so $x_1y_1>x_2y_2+\cdots+x_ny_n$, i.e., $x^\top Qy=y^\top Qx=x_1y_1-x_2y_2-\cdots-x_ny_n>0$.
Hence $q(x+y)=q(x)+q(y)+x^\top Qy+y^\top Qx> 0$, which implies $x+y\in X$.
According to the Acz\'el inequality (i.e., a reversed version of the Cauchy--Schwarz inequality), for any $x,y \in X$, there is
$$(x_1y_1-\sum_{j\geq2}x_jy_j)^2\geq (x^2_1-\sum_{j\geq2}x^2_j)(y^2_1-\sum_{j\geq2}y^2_j),$$
i.e.,
\begin{equation}\label{quadratic2}
(x^\top Qy)^2\geq x^\top Qx\cdot y^\top Qy.
\end{equation}
By further elementary computation, \eqref{quadratic2} is equivalent to
\begin{equation}\label{quadratic3}
\sqrt{q(x+y)}\geq\sqrt{q(x)}+\sqrt{q(y)}
\end{equation}
whenever $x,y\in X$. By \eqref{quadratic3}, for any $x,y,z\in X$, we have
$$\sqrt{q(x)}+\sqrt{q(y+z)}\geq a,\ \ \sqrt{q(y+z)}-\sqrt{q(x)}\leq b.$$
By the parameters shown in Table~\ref{tab:pro1}, we further have $c=2$ in Theorem \ref{thm:operator-main}, $C_2=q(x+y+z)+q(x)+q(y)+q(z)-q(x+y)-q(x+z)-q(y+z)=0$ (by Eq.~\eqref{quadratic1}) and
$$C_1=l(x)+l(y)+l(z)+l(x+y+z)- l(x+y)-l(y+z)-l(z+x).$$
Since $C_2=0$, Theorem \ref{thm:operator-main} (II) yields $C_1\le0$; thus
$$
l(x)+l(y)+l(z)+l(x+y+z)\leq l(x+y)+l(y+z)+l(z+x),
$$
whenever $x,y,z\in X$. Passing to the limit, one finds that the reversed
Hlawka inequality also holds on the boundary of the future cone.
\vspace{0.2cm}
(P2) If $Q$ is of the signature $(n,0)$, we may assume without loss of generality that $Q=\mathrm{diag}(1,1,\cdots,1)$ and let $X=\mathbb{R}^n$. In this case, the inner product $\langle x, y\rangle:=x^\top Qy$ satisfies the Cauchy--Schwarz inequality, i.e., $(x^\top Qy)^2\leq x^\top Qx\cdot y^\top Qy$.
By elementary computation, we obtain $\sqrt{q(x+y)}\leq\sqrt{q(x)}+\sqrt{q(y)}$. From this, we have $$\sqrt{q(x)}+\sqrt{q(y+z)}\leq a,\ \ \sqrt{q(y+z)}-\sqrt{q(x)}\leq b.$$
In case $a+b> 0$, similar to (P1), according to Theorem \ref{thm:operator-main} (I) and Eq.~\eqref{quadratic1}, we have $l(x)+l(y)+l(z)+l(x+y+z)\geq l(x+y)+l(y+z)+l(z+x)$.
In the case of $a=0$ or $a+b=0$, i.e., $x=y=z=0$, it is obvious that $l(x)+l(y)+l(z)+l(x+y+z)= l(x+y)+l(y+z)+l(z+x)=0$.
Consequently, $l$ satisfies the Hlawka inequality.
\end{proof}
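Both conclusions of Proposition \ref{quadratic} can be probed numerically. The following Python sketch (an illustration only, not part of the proof) samples random points of the future cone for the signature $(1,n-1)$ and random points of $\mathbb{R}^n$ for the signature $(n,0)$:

```python
import math
import random

random.seed(0)
n = 4

def q_mink(v):   # Q = diag(1, -1, ..., -1), signature (1, n-1)
    return v[0]**2 - sum(t**2 for t in v[1:])

def q_eucl(v):   # Q = identity, signature (n, 0)
    return sum(t**2 for t in v)

def add(u, v):
    return [s + t for s, t in zip(u, v)]

def hlawka_gap(q, x, y, z):
    # positive gap: the Hlawka inequality holds; negative: the reversed one
    l = lambda v: math.sqrt(q(v))
    return (l(x) + l(y) + l(z) + l(add(add(x, y), z))
            - l(add(x, y)) - l(add(y, z)) - l(add(z, x)))

def future_cone_point():
    v = [random.uniform(-1, 1) for _ in range(n)]
    v[0] = 2.0 + sum(abs(t) for t in v[1:])   # forces q_mink(v) > 0, v[0] > 0
    return v

for _ in range(200):
    x, y, z = (future_cone_point() for _ in range(3))
    assert hlawka_gap(q_mink, x, y, z) <= 1e-9    # reversed Hlawka, (P1)
    x, y, z = ([random.uniform(-1, 1) for _ in range(n)] for _ in range(3))
    assert hlawka_gap(q_eucl, x, y, z) >= -1e-9   # Hlawka, (P2)
print("both checks passed")
```

The sampling of cone points (`future_cone_point`) is one convenient choice, not canonical: setting $x_1 = 2+\sum_{j\ge2}|x_j|$ guarantees membership in the future cone.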
Proposition \ref{quadratic} contains Hlawka-type inequalities in both the Euclidean and the Minkowski settings. Moreover, by using Theorem \ref{thm:operator-main}, we indeed provide an alternative and simpler proof of the reverse Hlawka inequality in Minkowski space (Theorem 1.1 in \cite{Serre15}).
However, there is no similar conclusion for the remaining signatures $(k,n-k)$ with $2\le k\le n-1$, as the following example shows.
\begin{example}
If $Q$ is of the signature $(k,n-k)$ for $2\le k\le n-1$, we may assume without loss of generality that $Q=\textrm{diag}(\mathop{\underbrace{1,\cdots,1}}\limits_k,\mathop{\underbrace{-1,\cdots,-1}}\limits_{n-k})$.
By choosing a suitable cone $X\subset \{x=(x_1,\cdots,x_n)\in\mathbb{R}^n|\,q(x)>0\}$, one can see that both the Hlawka inequality and the reversed Hlawka inequality fail for $l$.
Indeed, take $0<\epsilon \ll 1$, let
$$
\vec v_1=\vec v_2=(\mathop{\underbrace{1,1,\epsilon,\cdots,\epsilon}}\limits_k,
\mathop{\underbrace{1,\epsilon,\cdots,\epsilon}}\limits_{n-k})
,\ \
\vec v_3=(1,1,\epsilon,\cdots,\epsilon)
$$
and
$$
\vec v_4=(2,1,\epsilon,\cdots,\epsilon),\; \vec v_5=(1,2,\epsilon,\cdots,\epsilon).
$$
Consider $X=\{t_1\vec v_1+t_2\vec v_2+t_3\vec v_3+t_4\vec v_4+t_5\vec v_5|t_i>0, 1\leq i \leq 5\}$. It is clear that $X\subset\{x|\,q(x)>0\}$.
By computation, $l(\vec v_1)+l(\vec v_2)+l(\vec v_3)+l(\vec v_1+\vec v_2+\vec v_3)< l(\vec v_1+\vec v_2)+l(\vec v_2+\vec v_3)+l(\vec v_3+\vec v_1)$.
On the other hand, $l(\vec v_5)+l(\vec v_4)+l(\vec v_3)+l(\vec v_3+\vec v_4+\vec v_5)> l(\vec v_4+\vec v_5)+l(\vec v_3+\vec v_5)+l(\vec v_3+\vec v_4)$.
So in $X$ both the Hlawka inequality and the reversed Hlawka inequality fail for $l$.
\end{example}
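The two failures can be confirmed numerically. The sketch below (illustrative only) instantiates the example with $n=4$, $k=2$ and $\epsilon=0.01$:

```python
import math

eps = 0.01

def q(v):   # signature (2,2): Q = diag(1, 1, -1, -1)
    return v[0]**2 + v[1]**2 - v[2]**2 - v[3]**2

def l(v):
    return math.sqrt(q(v))

def add(*vs):
    return [sum(c) for c in zip(*vs)]

def gap(x, y, z):
    # positive gap: the Hlawka inequality holds; negative: the reversed one
    return (l(x) + l(y) + l(z) + l(add(x, y, z))
            - l(add(x, y)) - l(add(y, z)) - l(add(z, x)))

v1 = v2 = [1, 1, 1, eps]
v3 = [1, 1, eps, eps]
v4 = [2, 1, eps, eps]
v5 = [1, 2, eps, eps]

assert gap(v1, v2, v3) < 0   # the Hlawka inequality fails here
assert gap(v3, v4, v5) > 0   # the reversed Hlawka inequality fails here
print(gap(v1, v2, v3), gap(v3, v4, v5))
```

The first gap is negative and the second positive, so neither inequality can hold throughout the cone $X$.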
\hspace{0.5cm}
\subsection{Applications to sub/super-additive functions on semigroups}\label{sec:semigroup}
Let $X$ in Theorem \ref{thm:operator-main} be an abelian semigroup $(G,+)$, and let $F:G\rightarrow \mathbb{R}$ be a non-negative real-valued function.
We will consider the Hlawka inequality in the form
\begin{equation}\label{eq:2^k}
F(x+y)^{1/2^k}+F(y+z)^{1/2^k}+F(z+x)^{1/2^k}\le F(x)^{1/2^k}+F(y)^{1/2^k}+F(z)^{1/2^k}+F(x+y+z)^{1/2^k},
\end{equation}
$\forall x,y,z\in G$, where $k$ is an integer.
\begin{pro}\label{pro:2^k}
Let $G$ be an abelian semigroup, and let $x\mapsto F(x)$ be a non-negative real-valued function on $G$.
If $F$ is {\sl strongly subadditive} (i.e., $F(x)+F(y)\ge F(x+y)$ and $F(x)+F(x+y)\ge F(y)$, $\forall x,y\in G$), and \eqref{eq:2^k} holds for some $k_0\ge -1$, then \eqref{eq:2^k} holds for all $k\ge k_0$.
If $F$ is superadditive (i.e., $F(x)+F(y)\le F(x+y)$, $\forall x,y\in G$), and \eqref{eq:2^k} holds for some $k_0\le 0$, then \eqref{eq:2^k} holds for all $k\le k_0$.
\end{pro}
\begin{proof}
Given $a,b>0$, the function $t\mapsto(a^t+b^t)^{1/t}$ is decreasing on $(0,\infty)$.
\begin{table}
\centering
\caption{\small The concrete quantities of Theorem \ref{thm:operator-main} used in the proof of Proposition \ref{pro:2^k}.}
\begin{tabular}{|l|l|}
\hline
Terminologies in Theorem \ref{thm:operator-main} & Concrete choices in Proposition \ref{pro:2^k} for fixed $x,y,z\in X$ \\
\hline
$\Omega=$ & $\{1,2,3\}$\\
\hline
$X=$ & abelian semigroup $G$\\
\hline
$\mathcal{S}=$ & $\mathbb{R}^{\{1,2,3\}}$\\
\hline
$T=$& $\sum_{\omega\in\{1,2,3\}}$, i.e., $T(g)=g(1)+g(2)+g(3)$, $\forall g\in\mathcal{S}$\\
\hline
$\eta=$& $x,y,z$ for $\omega=1,2,3$ respectively\\
\hline
$\xi=$&$\sum_{\omega=1}^3\eta(\omega)-\eta$, i.e., $y+z, z+x, x+y$ for $\omega=1,2,3$ respectively\\
\hline
$f=$& $F^{\frac 1{2^{k}}}$\\
\hline
$a=$&$F(x)^{\frac 1{2^{k}}}+F(y)^{\frac 1{2^{k}}}+F(z)^{\frac 1{2^{k}}}$\\
\hline
$b=$&$F(x+y+z)^{\frac 1{2^{k}}}$\\
\hline
\end{tabular}
\label{tab:pro2}
\end{table}
Case (1): $F$ is non-negative and strongly subadditive.
For any $0<\alpha\le 1$,
$$
F(x+y)^\alpha\leq (F(x)+F(y))^\alpha\leq F(x)^\alpha+F(y)^\alpha.
$$
Suppose \eqref{eq:2^k} holds for some $k_0\ge -1$. Then for any $k> k_0$, and any $x,y,z\in G$,
$$
F(x+y)^{\frac 1{2^{k}}}+F(z)^{\frac 1{2^{k}}}\leq F(x)^{\frac 1{2^{k}}}+F(y)^{\frac 1{2^{k}}}+F(z)^{\frac 1{2^{k}}}
$$
and
$$
F(x+y)^{\frac 1{2^{k}}}-F(z)^{\frac 1{2^{k}}}\leq F(x+y+z)^{\frac 1{2^{k}}}.
$$
Here, let $a$ and $b$ in Theorem \ref{thm:operator-main} be $F(x)^{\frac 1{2^{k}}}+F(y)^{\frac 1{2^{k}}}+F(z)^{\frac 1{2^{k}}}$ and $F(x+y+z)^{\frac 1{2^{k}}}$,
respectively. The detailed parameters are shown in Table~\ref{tab:pro2}.
If $a\neq 0$ and $a+b>0$, then Theorem \ref{thm:operator-main} (I) shows that \eqref{eq:2^k} at level $k-1$ implies \eqref{eq:2^k} at level $k$, and induction on $k$ finishes the proof.
If $a=0$, then $F(x)=F(y)=F(z)=F(x+y)=F(x+z)=F(y+z)=F(x+y+z)=0$ and \eqref{eq:2^k} is obvious.
\vspace{0.3cm}
\noindent Case (2): $F$ is non-negative and superadditive, i.e., $F(x)+F(y)\leq F(x+y)$.
Note that for any $\alpha \geq 1$,
$$
F(x+y)^\alpha\geq(F(x)+F(y))^\alpha\geq F(x)^\alpha+F(y)^\alpha.
$$
Consequently, for any $k\leq0$ and any $x,y,z\in G$,
$$
F(x+y)^{\frac 1{2^{k}}}+F(z)^{\frac 1{2^{k}}}\geq F(x)^{\frac 1{2^{k}}}+F(y)^{\frac 1{2^{k}}}+F(z)^{\frac 1{2^{k}}}
$$
and
$$
F(x+y+z)^{\frac 1{2^{k}}}\geq F(x+y)^{\frac 1{2^{k}}}+F(z)^{\frac 1{2^{k}}}\geq F(x+y)^{\frac 1{2^{k}}}-F(z)^{\frac 1{2^{k}}}.
$$
If $a\neq 0$ and $a+b>0$, the result follows from Theorem \ref{thm:operator-main} (II).
If $a+b=0$, then $F(x)=F(y)=F(z)=F(x+y)=F(x+z)=F(y+z)=F(x+y+z)=0$, the result is obvious.
If $a=0$, then $F(x)=F(y)=F(z)=0$. According to the condition, we have
$$ F(x+y)^{1/2^{k}}+F(y+z)^{1/2^{k}}+F(z+x)^{1/2^{k}}\le F(x+y+z)^{1/2^{k}}$$
for some $k\leq 0$.
Taking the square of the above inequality and using $(u+v+w)^2\ge u^2+v^2+w^2$ for non-negative $u,v,w$, we get
$$
F(x+y)^{1/2^{k-1}}+F(y+z)^{1/2^{k-1}}+F(z+x)^{1/2^{k-1}}\le F(x+y+z)^{1/2^{k-1}}.
$$
This completes the proof.
\end{proof}
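As an illustration of Proposition \ref{pro:2^k} (not needed for the proof), one may take $G=\mathbb{R}^3$ under addition and $F=\|\cdot\|_2$, which is non-negative and strongly subadditive; since \eqref{eq:2^k} holds at $k_0=0$ (the classical Hlawka inequality in Euclidean space), it should hold for every $k\ge 0$. A Python check:

```python
import math
import random

random.seed(1)

def F(v):   # Euclidean norm on G = R^3: non-negative, strongly subadditive
    return math.sqrt(sum(t * t for t in v))

def add(*vs):
    return [sum(c) for c in zip(*vs)]

def holds_2k(x, y, z, k):
    p = 1.0 / 2**k   # the exponent 1/2^k in (eq:2^k)
    lhs = F(add(x, y))**p + F(add(y, z))**p + F(add(z, x))**p
    rhs = F(x)**p + F(y)**p + F(z)**p + F(add(x, y, z))**p
    return lhs <= rhs + 1e-9

for _ in range(200):
    x, y, z = ([random.uniform(-1, 1) for _ in range(3)] for _ in range(3))
    # (eq:2^k) at k = 0 is classical Hlawka; the proposition then gives k >= 0
    assert all(holds_2k(x, y, z, k) for k in range(4))
print("ok")
```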
We now present an interesting example, even though the result may seem elementary.
\begin{example}
Taking $G=L^p$ and $F=\|\cdot\|_p$, together with Corollary 2.1 in \cite{W73} and Proposition \ref{pro:2^k}, we have
$$\|a+b\|_p^{\frac 1{2^k}}+\|b+c\|_p^{\frac 1{2^k}}+\|c+a\|_p^{\frac 1{2^k}}\le \|a\|_p^{\frac 1{2^k}}+\|b\|_p^{\frac 1{2^k}}+\|c\|_p^{\frac 1{2^k}}+\|a+b+c\|_p^{\frac 1{2^k}}$$
for any $a,b,c\in L^p$, and $k\in \mathbb{N}$, where $1\le p\le 2$.
Replacing $a,b,c$ respectively by $a^{2^k}$, $b^{2^k}$, $c^{2^k}$, one gets
$$\|(a^{2^k}+b^{2^k})^{\frac 1{2^k}}\|_{2^kp}+\|(b^{2^k}+c^{2^k})^{\frac 1{2^k}}\|_{2^kp}+\|(c^{2^k}+a^{2^k})^{\frac 1{2^k}}\|_{2^kp} \le \|a\|_{2^kp}+\|b\|_{2^kp}+\|c\|_{2^kp}+\|(a^{2^k}+b^{2^k}+c^{2^k})^{\frac 1{2^k}}\|_{2^kp}.$$
For convenience, we define an operation $\Diamond_k$ by $a\Diamond_k b=(a^{2^k}+b^{2^k})^{\frac 1{2^k}}$ for $1\le k<+\infty$, $a\Diamond_0 b:= a+b$ and $a\Diamond_\infty b:=|a|\vee |b|:=\max\{|a|,|b|\}$. Then using this notation, we obtain
$$\|a \Diamond_k b\|_{2^kp}+\|b\Diamond_k c\|_{2^kp}+\|c\Diamond_k a\|_{2^kp} \le \|a\|_{2^kp}+\|b\|_{2^kp}+\|c\|_{2^kp}+\|a\Diamond_k b\Diamond_k c\|_{2^kp}$$
for any $p\in[1,2]$ and any $k\in \mathbb{N}\cup\{+\infty\}$. Thus
$$\|a\Diamond_k b\|_p+\|b\Diamond_k c\|_p+\|c\Diamond_k a\|_p \le \|a\|_p+\|b\|_p+\|c\|_p+\|a\Diamond_k b\Diamond_k c\|_p$$
holds for $p\in[2^k,2^{k+1}]$, and
$$
\|\max\{|a|,|b|\}\|_\infty+\|\max\{|b|,|c|\}\|_\infty+\|\max\{|c|,|a|\}\|_\infty\le \|a\|_\infty+\|b\|_\infty+\|c\|_\infty+\|\max\{|a|,|b|,|c|\}\|_\infty
$$
by taking $k\to+\infty$.
\end{example}
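The $\Diamond_k$ inequality above can be tested numerically for finite sequences (a sketch, assuming non-negative entries, in keeping with the substitution $a\mapsto a^{2^k}$):

```python
import random

random.seed(2)

def lp_norm(v, p):
    return sum(abs(t)**p for t in v)**(1.0 / p)

def diamond(u, v, k):
    # a Diamond_k b = (a^{2^k} + b^{2^k})^{1/2^k}, applied pointwise
    e = 2**k
    return [(s**e + t**e)**(1.0 / e) for s, t in zip(u, v)]

for _ in range(200):
    a, b, c = ([random.uniform(0, 1) for _ in range(5)] for _ in range(3))
    for k in (1, 2):
        p = random.uniform(2**k, 2**(k + 1))   # p in [2^k, 2^{k+1}]
        lhs = (lp_norm(diamond(a, b, k), p) + lp_norm(diamond(b, c, k), p)
               + lp_norm(diamond(c, a, k), p))
        rhs = (lp_norm(a, p) + lp_norm(b, p) + lp_norm(c, p)
               + lp_norm(diamond(diamond(a, b, k), c, k), p))
        assert lhs <= rhs + 1e-9
print("ok")
```

Here $L^p$ is realized as the sequence space on five points with counting measure; the exponent $p$ is sampled from $[2^k,2^{k+1}]$ as in the example.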
A direct application of Proposition \ref{pro:2^k} is the following Hlawka inequality on abelian groups.
\begin{example}[Theorem 2 in \cite{R15}]\label{cor:R15}
Let $G$ be an abelian group, $x\mapsto |x|$ a non-negative, symmetric and subadditive function on $G$ (i.e., $|-x|=|x|$ and $|x|+|y|\ge|x+y|$, $\forall x,y\in G$), and let $S:[0,\infty)\to [0,\infty)$ be concave. If
$$S^2(|x+y|)+S^2(|y+z|)+S^2(|z+x|)\le S^2(|x|)+S^2(|y|)+S^2(|z|)+S^2(|x+y+z|)$$
holds for all $x,y,z\in G$, then the same inequality holds with $S^2$ replaced by $S$.
In fact, the function $F(\cdot):=S(|\cdot|)$ must be non-negative and strongly subadditive, so Proposition \ref{pro:2^k} is applicable here.
\end{example}
The following measure-type Hlawka inequality is non-trivial, and it cannot be deduced from Theorem 2 in \cite{R15} (i.e., Example \ref{cor:R15} above), because a measure space equipped with such a set operation is not a group in general. But it follows straightforwardly from Proposition \ref{pro:2^k}, since a measure space with either set operation is a semigroup.
\begin{example}
Let $G$ be a measure space and $F=\mu$ be the measure. For the case of $k=0$, note that $\mu(A)+\mu(B)+\mu(C)-\mu(A\cup B)-\mu(B\cup C)-\mu(C\cup A)+\mu(A\cup B\cup C)=\mu(A\cap B\cap C)\ge 0$ and for symmetric difference $\triangle$,
$\mu(A)+\mu(B)+\mu(C)-\mu(A\triangle B)-\mu(B\triangle C)-\mu(C\triangle A)+\mu(A\triangle B\triangle C)=4\mu(A\cap B\cap C)\ge0$.
According to Proposition \ref{pro:2^k}, we have for any $k\ge 0$,
$$
\mu(A)^{1/2^k}+\mu(B)^{1/2^k}+\mu(C)^{1/2^k}+\mu(A\cup B\cup C)^{1/2^k}\ge \mu(A\cup B)^{1/2^k}+\mu(B\cup C)^{1/2^k}+\mu(C\cup A)^{1/2^k}
$$
and
$$
\mu(A)^{1/2^k}+\mu(B)^{1/2^k}+\mu(C)^{1/2^k}+\mu(A\triangle B\triangle C)^{1/2^k}\ge \mu(A\triangle B)^{1/2^k}+\mu(B\triangle C)^{1/2^k}+\mu(C\triangle A)^{1/2^k}
$$
because $
\mu(A)+\mu(B)\ge \mu(A\cup B)\ge \mu(A\triangle B)$ and $\mu(A)+\mu(A\cup B)\ge \mu(A)+\mu(A\triangle B)\ge \mu(B)$.
\end{example}
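For a concrete check of this example (illustrative only), one can take the counting measure on the subsets of a small finite set and enumerate all triples:

```python
import itertools

def mu(s):   # counting measure on subsets of a finite universe
    return len(s)

def hlawka_ok(A, B, C, k):
    p = 1.0 / 2**k
    union_gap = (mu(A)**p + mu(B)**p + mu(C)**p + mu(A | B | C)**p
                 - mu(A | B)**p - mu(B | C)**p - mu(C | A)**p)
    symdiff_gap = (mu(A)**p + mu(B)**p + mu(C)**p + mu(A ^ B ^ C)**p
                   - mu(A ^ B)**p - mu(B ^ C)**p - mu(C ^ A)**p)
    return union_gap >= -1e-9 and symdiff_gap >= -1e-9

universe = range(4)
subsets = [frozenset(s) for r in range(5)
           for s in itertools.combinations(universe, r)]
assert all(hlawka_ok(A, B, C, k)
           for A in subsets for B in subsets for C in subsets
           for k in (0, 1, 2))
print("ok")
```

The case $k=0$ reduces to the two identities above, whose right-hand sides are non-negative multiples of $\mu(A\cap B\cap C)$.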
\vspace{0.5cm}
\subsection{Applications to integral form}\label{sec:integral}
Next, we turn our attention to the following setting.
Let $\Omega$ be a nonempty set, let $G$ be an abelian group, and let $x\mapsto |x|$ be a non-negative, symmetric and subadditive function on $G$ (i.e., $|-x|=|x|$ and $|x|+|y|\ge|x+y|$, $\forall x,y\in G$). The function spaces $G^\Omega$ and $\mathbb{R}^\Omega$ are also abelian groups under the natural operation `$+$'. Take an abelian subgroup ${\mathcal F}\subset G^\Omega$ and a linear subspace $\widehat{{\mathcal F}}\subset \mathbb{R}^\Omega$ equipped with $T:\widehat{{\mathcal F}}\to\mathbb{R}$ satisfying the \textbf{basic setting} at the beginning of Section \ref{sec:Hlawka-relation}. Moreover,\footnote{Here, for $f\in {\mathcal F}$, $|f|$ is a function mapping $\Omega$ to $[0,\infty)$.} we assume $|f|\in \widehat{{\mathcal F}}$ for all $f\in {\mathcal F}$, and $1\in \widehat{{\mathcal F}}$.
\begin{table}
\centering
\caption{\small The concrete quantities of Theorem \ref{thm:operator-main} used in the proof of Proposition \ref{thm:groupmain}.}
\begin{tabular}{|l|l|}
\hline
Terminologies in Theorem \ref{thm:operator-main} & Concrete choices in Proposition \ref{thm:groupmain} \\
\hline
$X=$ & abelian group $G$\\
\hline
$\mathcal{S}=$ & $\widehat{{\mathcal F}}$\\
\hline
$\eta=$& $\hat{g}$ \\
\hline
$\xi=$& $\mathcal{T}g-\hat{g}$\\
\hline
$f=$& $S|\cdot|$\\
\hline
$a=$& $A$\\
\hline
$b=$& $S|{\mathcal T} g|$\\
\hline
\end{tabular}
\label{tab:pro3}
\end{table}
Applying Theorem \ref{thm:operator-main} to the above restricted situation, we have:
\begin{pro}\label{thm:groupmain}
Given $A\in \mathbb{R}\setminus\{0\}$, an operator ${\mathcal T}: {\mathcal F}\to G$, two maps $g,\hat{g}\in {\mathcal F}$, and a concave function $S:[0,+\infty)\to [0,+\infty)$ such that $A>0$ or $S(|{\mathcal T} g|)>0$, and such that $A\ge S|\hat{g}(x)|+S|{\mathcal T} g-\hat{g}(x)|$ whenever $x$ satisfies $S|\hat{g}(x)|+S|{\mathcal T} g|\ne S|\hat{g}(x)-{\mathcal T} g|$, then
\begin{equation}\label{eq:HlawkaS2}
\left(T(1)-C\right)S^2|{\mathcal T} g|+T(S^2|\hat{g}|)\ge T(S^2|\hat{g}-{\mathcal T} g|)
\end{equation}
implies
\begin{equation}\label{eq:Hlawka}
\left(T(1)-C\right)S|{\mathcal T} g|+T(S|\hat{g}|)\ge T(S|\hat{g}-{\mathcal T} g|),
\end{equation}
where $C=2T(S|\hat{g}|)/A$.
\end{pro}
\begin{proof}
Taking $\xi=\mathcal{T}g-\hat{g}$, $\eta=\hat{g}$ and $f(\cdot)=S|\cdot|$ in Theorem \ref{thm:operator-main} (see Table~\ref{tab:pro3} for details), the proof follows immediately.
\end{proof}
The main theorems of \cite{TTW00,R15} can be seen as direct conclusions of Proposition \ref{thm:groupmain}.
\begin{proof}[A proof of Example \ref{cor:R15} (i.e., main theorem in \cite{R15}) via Proposition \ref{thm:groupmain}]
Take $\Omega=\{1,2,3\}$, and for given $x,y,z\in G$, let $\hat{g}=g$ be defined as $g(1)=x$, $g(2)=y$ and $g(3)=z$.
Let $T(S|g|)=S|g(1)|+S|g(2)|+S|g(3)|$ and ${\mathcal T} g=g(1)+g(2)+g(3)$ in Proposition \ref{thm:groupmain}. Then $T(1)=3$, ${\mathcal T} g-g(i)=g(j)+g(k)$, where $\{i,j,k\}=\{1,2,3\}$. Hence, the result is easy to check.
\end{proof}
\vspace{0.3cm}
Given an inner product space $(H,\langle\cdot,\cdot\rangle)$, suppose ${\mathcal F}\subset H^\Omega$ and $\widehat{{\mathcal F}}\subset \mathbb{R}^\Omega$ are linear spaces equipped with linear operators ${\mathcal T}: {\mathcal F}\to H$ and $T:\widehat{{\mathcal F}}\to\mathbb{R}$, then we have the following:
\begin{cor}\label{pro:inner}
Let ${\mathcal T},T$ be as above, and let $f\in {\mathcal F}$ be a map with $T(|f|)>0$ such that $T\langle f,a\rangle=\langle {\mathcal T} f,a\rangle$ for every $a\in H$, where $|\cdot|$ is the norm induced by the inner product. If $T(|f|)\ge |f(x)|+|{\mathcal T} f-f(x)|$ whenever $x$ satisfies $-f(x)\ne \alpha {\mathcal T} f$ for any $\alpha\ge 0$, then the following holds:
$$\left(T(1)-2\right)|{\mathcal T} f|+T(|f|)\ge T(|f-{\mathcal T} f|).$$
\end{cor}
\begin{proof}
By the standard identity for inner products, we have
\begin{align*}
T\left(|f-{\mathcal T} f|^2\right)&=T\left(|f|^2+|{\mathcal T} f|^2-2\langle f,{\mathcal T} f\rangle\right)
\\ ~&=T(|f|^2)+|{\mathcal T} f|^2T(1)-2\langle {\mathcal T} f,{\mathcal T} f\rangle
\\ ~&=T(|f|^2)+|{\mathcal T} f|^2(T(1)-2).
\end{align*}
Let $S$ in Proposition \ref{thm:groupmain} be the identity map; the remaining conditions of Proposition \ref{thm:groupmain} are then easy to verify. This completes the proof.
\end{proof}
It is clear that $T$ and ${\mathcal T}$ uniquely determine each other, by the Riesz representation theorem.
\begin{cor}\label{pro:inner2integral}
Let $(\Omega,\mu)$ be a finite measure space and let $(H,\|\cdot\|)$ be an inner product space. Suppose $f,g:\Omega\to H$ are two nonzero integrable functions satisfying
$$\frac{\int_\Omega fd\mu}{\int_\Omega \|f\|d\mu}=\frac{\int_\Omega gd\mu}{\int_\Omega \|g\|d\mu}.$$
Assume that for $x$ with $-g(x)\ne\alpha \int_\Omega fd\mu$ for any $\alpha\ge0$, there is $$\int_\Omega \|f\|d\mu\ge \|g(x)\|+\left\|g(x)-\int_\Omega fd\mu\right\|.$$
Then we have the following Hlawka-type inequality
$$\left(\mu(\Omega)-C\right)\left\| \int_\Omega fd\mu\right\|+\int_\Omega \|g\|d\mu\ge \int_\Omega\left\|g -\int_\Omega fd\mu\right\|d\mu,$$
where $C=2\int_\Omega \|g\|d\mu/\int_\Omega \|f\|d\mu$.
\end{cor}
\begin{proof}
Take ${\mathcal T} f= \int_\Omega f(\omega)d\mu$ and $T (\|f\|) = \int_\Omega\|f(t)\|d\mu(t)$. Proposition \ref{thm:groupmain} now applies and finishes the proof.
\end{proof}
\begin{cor}\label{cor:t=lambda}
Suppose for $x$ with $- f(x)\ne\alpha \int_\Omega fd\mu$ for any $\alpha\ge0$, there is $$\int_\Omega \|f\|d\mu\ge t\| f(x)\|+\left\| t f(x)-\int_\Omega fd\mu\right\|$$
for some $t\ge 0$.
Then we have the following Hlawka-type inequality
$$\left(\mu(\Omega)-2t\right)\left\| \int_\Omega fd\mu\right\|+t\int_\Omega \|f\|d\mu\ge \int_\Omega\left\| t f -\int_\Omega fd\mu\right\|d\mu.$$
\end{cor}
\begin{cor}\label{bar}
Suppose $\bar f$ is a rearrangement of $f$ with the same distribution, and that for every $x$ with $-\bar f(x)\ne\alpha \int_\Omega fd\mu$ for any $\alpha\ge0$, there is $$\int_\Omega \|f\|d\mu\ge \|\bar f(x)\|+\left\|\bar f(x)-\int_\Omega fd\mu\right\|.$$
Then we have the following Hlawka-type inequality
$$\left(\mu(\Omega)-2\right)\left\| \int_\Omega fd\mu\right\|+\int_\Omega \|\bar f\|d\mu\ge \int_\Omega\left\| \bar f -\int_\Omega fd\mu\right\|d\mu.$$
\end{cor}
\begin{proof}
Clearly, the properties of rearrangement mean that $ \int_\Omega fd\mu=\int_\Omega \bar fd\mu$ and $\int_\Omega \| f\|d\mu=\int_\Omega \|\bar f\|d\mu$. Hence, Corollary \ref{pro:inner2integral} is applicable here.
\end{proof}
Theorem 1 in \cite{TTW00} can be viewed as a corollary of Corollary \ref{bar}. In fact, taking $\bar f=f$ in Corollary \ref{bar}, it is easy to verify the following.
\begin{example}[Theorem 1 in \cite{TTW00}]\label{cor:TTW00}
Let $H$ be a Hilbert space, $(\Omega,\mu)$ a finite measure space and let $f$ be a Bochner integrable $H$-valued function on $(\Omega,\mu)$. Suppose that
$$\int_\Omega\|f(t)\|d\mu(t)\ge \left\|f(\omega)-\int_\Omega f(t)d\mu(t)\right\|+\|f(\omega)\|\;\;(a.e., \omega\in\Omega_f),$$ where $\Omega_f=\{\omega\in\Omega:-f(\omega)\ne \alpha \int_\Omega f(t)d\mu(t)\text{ for any }\alpha\ge 0\}$.
Then
$$(\mu(\Omega)-2)\left\|\int_\Omega f(\omega)d\mu\right\|+\int_\Omega\|f(\omega)\|d\mu\ge \int_\Omega\left\|f(\omega)-\int_\Omega fd\mu\right\|d\mu.$$
\end{example}
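Example \ref{cor:TTW00} can be probed on a discrete measure space. The sketch below (an illustration with randomly generated data, which checks the hypothesis at every atom before asserting the conclusion) takes $\Omega=\{1,\dots,4\}$ with random weights as the measure:

```python
import math
import random

random.seed(3)

def norm(v):
    return math.sqrt(sum(t * t for t in v))

def sub(u, v):
    return [s - t for s, t in zip(u, v)]

checked = 0
for _ in range(500):
    n, d = 4, 3
    w = [random.uniform(0.5, 1.5) for _ in range(n)]   # weights mu({i})
    xs = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n)]
    m = [sum(w[i] * xs[i][j] for i in range(n)) for j in range(d)]  # integral of f
    total = sum(w[i] * norm(xs[i]) for i in range(n))    # integral of ||f||
    # hypothesis of the theorem, checked at every atom of Omega
    if all(total >= norm(x) + norm(sub(x, m)) - 1e-12 for x in xs):
        lhs = (sum(w) - 2) * norm(m) + total
        rhs = sum(w[i] * norm(sub(xs[i], m)) for i in range(n))
        assert lhs >= rhs - 1e-9
        checked += 1
assert checked > 0
print("verified on", checked, "samples")
```

Checking the hypothesis on all atoms is slightly stronger than required (the theorem only needs it on $\Omega_f$), so every sample that passes the filter is a valid test case.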
The next remark contains some interesting examples as corollaries of Proposition \ref{thm:groupmain}.
\begin{remark}Given an inner product space $H$, we have:
\begin{itemize}
\item For any $\lambda\in[0,1]$, $x,y,z\in H$,
$$(1-\lambda)(\|x\|+\|y\|+\|z\|)+(1+2\lambda)\|x+y+z\|\ge \|\lambda x+y+z\|+\|x+ \lambda y+z\|+\|x+y+\lambda z\|.$$
{\sl It is deduced by taking $\Omega=\{1,2,3\}$ and $t=(1-\lambda)$ in Corollary \ref{cor:t=lambda}, which is rather different from Corollary 2 in \cite{TTW00}.}
\item Let $\mu_i,\lambda\ge 0$ be such that $\sum_{j=1}^n\mu_j\|x_j\|\ge \lambda\|x_i\|+\|\lambda x_i-\sum_{j=1}^n\mu_j x_j\|$ for any $1\leq i \leq n$. Then
$$\left(\sum_{i=1}^n\mu_i-2\lambda\right)\left\|\sum_{i=1}^n\mu_ix_i\right\|+
\lambda\sum_{i=1}^n\mu_i\|x_i\|\ge \sum_{i=1}^n\mu_i\left\|\lambda x_i-\sum_{j=1}^n\mu_jx_j\right\|.$$
{\sl It is deduced by taking $\Omega=\{1,\cdots,n\}$ and $\mu(i)=\mu_i$ in Corollary \ref{cor:t=lambda}, which is an improved version of Corollary 2 in \cite{TTW00} and Proposition 11 in \cite{TTW09}.}
\end{itemize}
\end{remark}
\vspace{1cm}
{\bf Acknowledgements.} This research is supported by a grant from the China Postdoctoral Science Foundation (No. 2019M660829). The author thanks her husband for interesting discussions.